AttributeError: 'MatrixFactorizationModel' object has no attribute 'save'
Question: I'm trying to run the example on Apache Spark's MLlib
[website](http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html). Below is my code:
import sys
import os

os.environ['SPARK_HOME'] = "/usr/local/Cellar/apache-spark/1.2.1"
sys.path.append("/usr/local/Cellar/apache-spark/1.2.1/libexec/python")
sys.path.append("/usr/local/Cellar/apache-spark/1.2.1/libexec/python/build")

try:
    from pyspark import SparkContext, SparkConf
    from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
    print ("Apache-Spark v1.2.1 >>> All modules found and imported successfully.")
except ImportError as e:
    print ("Couldn't import Spark Modules", e)
    sys.exit(1)

# SETTING CONFIGURATION PARAMETERS
config = (SparkConf()
          .setMaster("local")
          .setAppName("Music Recommender")
          .set("spark.executor.memory", "16G")
          .set("spark.driver.memory", "16G")
          .set("spark.executor.cores", "8"))
sc = SparkContext(conf=config)

# Load and parse the data
data = sc.textFile("data/1aa")
ratings = data.map(lambda l: l.split('\t')).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2])))

# Build the recommendation model using Alternating Least Squares
rank = 10
numIterations = 10
model = ALS.train(ratings, rank, numIterations)

# Evaluate the model on training data
testdata = ratings.map(lambda p: (p[0], p[1]))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
print("Mean Squared Error = " + str(MSE))

# Save and load model
model.save(sc, "/Users/kunal/Developer/MusicRecommender")
sameModel = MatrixFactorizationModel.load(sc, "/Users/kunal/Developer/MusicRecommender/data")
The code runs until it prints the MSE. The last step is to save the model
to a directory, but I am getting the error `'MatrixFactorizationModel' object has
no attribute 'save'`. I've pasted the last few rows of the log below:
15/10/06 21:00:16 INFO DAGScheduler: Stage 200 (mean at /Users/kunal/Developer/MusicRecommender/collabfiltering.py:41) finished in 12.875 s
15/10/06 21:00:16 INFO DAGScheduler: Job 8 finished: mean at /Users/kunal/Developer/MusicRecommender/collabfiltering.py:41, took 53.290203 s
Mean Squared Error = 405.148403002
Traceback (most recent call last):
File "/Users/kunal/Developer/MusicRecommender/collabfiltering.py", line 47, in <module>
model.save(sc, path)
AttributeError: 'MatrixFactorizationModel' object has no attribute 'save'
Process finished with exit code 1
I have reinstalled and made sure I have the latest version of Spark, but that
did not help. I am running this on a 10MB file only, which is a tiny split
of the larger file.
Operating System: OSX 10.11.1 Beta (15B22c)
Answer: This happens because you are using Spark 1.2.1, while the `MatrixFactorizationModel.save`
method was only introduced in Spark 1.3.0. Moreover, the documentation you are
following covers the current version (1.5.1).
Spark documentation URLs look like this:
http://spark.apache.org/docs/SPARK_VERSION/some_topic.html
So in your case you should use:
http://spark.apache.org/docs/1.2.1/mllib-collaborative-filtering.html
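If you want to confirm at runtime which Spark version the script is actually
running against, a quick hedged check (reusing the `sc` object created in the
question's script) is:

print(sc.version)  # reports e.g. 1.2.1; model save()/load() need 1.3.0+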
|
python writing to different file
Question: I'm a huge Python noob. I'm trying to write a simple script that will split a line
in a file wherever it sees a "?".
Line in input file (inputlog.log):
`http://website.com/somejunk.jpg?uniqid&=123&an=1234=123`
Line in output file (outputlog.log): `http://website.com/somejunk.jpg
uniqid&=123&an=1234=123`
The goal here is to end up with a file that has 2 columns.
Here's my code; it kinda works, except it won't write to the 2nd file:
> "TypeError: expected a character buffer object"
import re

a = raw_input("what file do you want to open? ")
b = raw_input("what is the file you want to save to? ")

with open(a, 'r') as f1:
    with open(b, 'w') as f2:
        data = f1.readlines()
        print "This is the line: ", data  # for testing
        for line in data:
            words = re.split("[?](.*)$", line)
            print "Here is the Split: ", words  # for testing
            f2.write(words)

f1.close()
f2.close()
Answer: Your problem is that `words` is a list; you cannot write that to your file.
You need to convert it back to a string, and when converting it back, pay
attention that you create the spacing/split you want between the strings.
You should do something like this.
words = ' '.join(words)
Pay close attention to the space inside the single quotes. That indicates it
will put a space between your strings.
Finally, you then make your call to:
f2.write(words)
Upon making that change, I tested your code and it successfully split and
wrote them to the file per your specification.
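Putting the pieces together, a minimal corrected sketch might look like this
(the `maxsplit=1` and the re-added newline are my assumptions about the desired
two-column output, not part of the original answer):

import re

a = raw_input("what file do you want to open? ")
b = raw_input("what is the file you want to save to? ")

with open(a, 'r') as f1, open(b, 'w') as f2:
    for line in f1:
        # split on the first '?' only, keeping both halves
        words = re.split(r"[?]", line.rstrip("\n"), maxsplit=1)
        f2.write(' '.join(words) + "\n")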
|
reading HDF4 file with python - more than one dataset with same name
Question: I have an HDF4 file I need to read with Python. For this I use `pyhdf`. In most
cases I am quite happy to use the `SD` class to open the file:
import pyhdf.SD as SD
hdf = SD.SD(hdfFile)
and then continue with
v1 = hdf.select('Data set 1')
v2 = hdf.select('Data set 2')
However, I have several groups in the HDF file, and some variables appear in
more than one group with the same name:
In `Group 1` I have `Data set 3` and in `Group 2` I have another `Data set 3`, so my
`select` command will presumably only select one of them (without me knowing
which one?).
Is there a simple way of selecting (reading) `Data set 3` from `Group 1` and
then from `Group 2`?
I have looked at the `V` and `VS` modules. I found an example script that will
loop through all groups and subgroups etc. and find all variables (data sets).
But I have no idea how to connect those variables to their parent group, so
that I know which group each one belongs to.
Answer: I think that pyhdf might not be the best choice for this particular task. Have
you looked at [PyNIO](https://www.pyngl.ucar.edu/Nio.shtml)?
From the [HDF section of their
documentation](https://www.pyngl.ucar.edu/NioFormats.shtml#HDF):
> PyNIO has a read-only ability to understand HDF Vgroups. When a variable
> that is part of a Vgroup is encountered, PyNIO appends a double underscore
> and the group number to the end of the variable name. This ensures that the
> variable will have a unique name, relative to variables that belong to other
> Vgroups. It also provides two additional attributes to the variable:
> hdf_group, whose value is the HDF string name of the group, and
> hdf_group_id, whose value is the same as the group number appended to the
> end of the variable name.
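A minimal sketch of what that looks like in practice (the file name and the
suffixed variable names are illustrative assumptions, not output from your
file):

import Nio

f = Nio.open_file("myfile.hdf")
# duplicated Vgroup members come back with unique suffixed names,
# e.g. 'Data_set_3__1' and 'Data_set_3__2'
print(f.variables.keys())
v = f.variables['Data_set_3__1']
print(v.hdf_group, v.hdf_group_id)  # which Vgroup it came from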
|
Get the full substring matching the regex pattern
Question: I am not good with regular expressions and was looking at some online
resources for what I would like to do. So basically, I have a regular
expression in Python as follows:
import re
pattern = re.compile(r'(?=(ATG(?:...)*?)(?=TAG|TGA|TAA))')
This is supposed to find all sub strings which begin with ATG and end in TAG
or TGA or TAA. I use it as:
str = "ATGCCCTAG"
print pattern.findall(str)
However, this returns `ATGCCC` and drops the trailing `TAG`; I would like
it to keep the trailing `TAG`. How can I change it to give me the full
substring?
Answer: > To find all sub strings which begin with ATG and end in TAG or TGA or TAA
You will need:
ATG(?:...)*?(?:TAG|TGA|TAA)
This regex also makes sure there are 0 or more 3-symbol (excl. newline)
sequences in-between `ATG` and the last `TAG`, `TGA` or `TAA`.
See [regex demo](https://regex101.com/r/xA8fU7/2)
[Python demo](https://ideone.com/TLsbiF):
import re
p = re.compile(r'ATG(?:...)*?(?:TAG|TGA|TAA)')
test_str = "FFG FFG ATGCCCTAG"
print (p.findall(test_str))
This will work if you need to find _non-overlapping_ substrings. To find
overlapping ones, the technique is to encapsulate that into a capturing group
and place in a non-anchored positive look-ahead:
r'(?=(ATG(?:...)*?(?:TAG|TGA|TAA)))'
   |  |                          | |
   |  +----- Capture group ------+ |
   +------ Positive look-ahead ----+
See [regex demo](https://regex101.com/r/nQ7iH0/1)
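To see the difference, here is a small hedged demo (the input string is made
up for illustration). With the look-ahead version, `findall` returns every
overlapping match via the capture group:

import re

p = re.compile(r'(?=(ATG(?:...)*?(?:TAG|TGA|TAA)))')
# two ATG start points share the same region before the stop codons
print(p.findall("ATGATGCCCTAGTAG"))
# expected: ['ATGATGCCCTAG', 'ATGCCCTAG']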
|
No api proxy found for service "taskqueue"
Question: When I try to run the `python manage.py changepassword` command I get this
error:
AssertionError: No api proxy found for service "taskqueue"
Here's what I have in my `PYTHONPATH`:
$ echo $PYTHONPATH
lib/:/usr/local/google_appengine
And my `DJANGO_SETTINGS_MODULE` points to the settings file that I use for
GAE:
$ echo $DJANGO_SETTINGS_MODULE
settings.dev
There's some package for `taskqueue` in appengine api folder:
/usr/local/google_appengine/google/appengine/api/taskqueue$ ls
__init__.py __init__.pyc taskqueue.py taskqueue.pyc taskqueue_service_pb.py taskqueue_service_pb.pyc taskqueue_stub.py taskqueue_stub.pyc
What could I miss here?
Answer: I assume `manage.py` is executing SDK methods without starting a local
`dev_appserver`. `dev_appserver.py` sets up stubs to emulate the services
available once your application is deployed. When you are executing code
locally and outside of the running app server, you will need to initialize
those stubs yourself.
The app engine docs have a section on testing that tells you [how to
initialize those
stubs](https://cloud.google.com/appengine/docs/python/tools/localunittesting?hl=en#Python_Writing_task_queue_tests).
It isn't the exact solution to your issue, but it can point you to the stubs
you need to set up.
import unittest

from google.appengine.api import taskqueue
from google.appengine.ext import deferred
from google.appengine.ext import testbed

class TaskQueueTestCase(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # root_path must be set to the location of queue.yaml.
        # Otherwise, only the 'default' queue will be available.
        self.testbed.init_taskqueue_stub(root_path='tests/resources')
        self.taskqueue_stub = self.testbed.get_stub(
            testbed.TASKQUEUE_SERVICE_NAME)

    def tearDown(self):
        self.testbed.deactivate()

    def testTaskAddedToQueue(self):
        taskqueue.Task(name='my_task', url='/url/of/my/task/').add()
        tasks = self.taskqueue_stub.get_filtered_tasks()
        assert len(tasks) == 1
        assert tasks[0].name == 'my_task'
|
Send payment using Blockchain.info API - Runtime Error: ERROR: For input string: ".001"
Question: I am trying to send a payment using Blockchain.info's payment API, using a
Python library found on GitHub here:
<https://github.com/gowness/pyblockchain/blob/master/pyblockchain.py>
When running the code below I am getting the following error: `RuntimeError:
ERROR: For input string: ".001"`. Does anyone know what is going on here? I am
running Python 2.7. Once I have the initial sending of one transaction
working, I would like to look at sending multiple transactions.
from __future__ import print_function
from itertools import islice, imap
import csv, requests, json, math
from collections import defaultdict
import requests
import urllib
import json
from os.path import expanduser
import configparser

class Wallet:
    guid = 'g'
    isAccount = 0
    isKey = 0
    password1 = 'x'
    password2 = 'z'
    url = ''

    def __init__(self, guid = 'g', password1 = 'x', password2 = 'z'):
        if guid.count('-') > 0:
            self.isAccount = 1
            if password1 == '':  # wallet guid's contain -
                raise ValueError('No password with guid.')
        else:
            self.isKey = 1
        self.guid = guid
        self.url = 'https://blockchain.info/merchant/' + guid + '/'
        self.password1 = password1
        self.password2 = password2

    def Call(self, method, data = {}):
        if self.password1 != '':
            data['password'] = self.password1
        if self.password2 != '':
            data['second_password'] = self.password2
        response = requests.post(self.url + method, params=data)
        json = response.json()
        if 'error' in json:
            raise RuntimeError('ERROR: ' + json['error'])
        return json

    def SendPayment(self, toaddr='TA', amount='0.001', fromaddr = 'FA', shared = 0, fee = 0.0001, note = 'test'):
        data = {}
        data['address'] = toaddr
        data['amount'] = amount
        data['fee'] = fee
        if fromaddr != '':
            data['from'] = fromaddr
        if shared == 1:
            data['shared'] = 'true'
        if note != '':
            data['note'] = note
        response = self.Call('payment', data)
        return

    def SendManyPayment(self, txs = {}, fromaddr = 'FA', shared = 0, fee = 0.0001, note = 'test'):
        responses = {}
        for tx in txs:
            self.SendPayment(tx[0], tx[1], fromaddr, shared, fee, note)
        return

print(Wallet().SendPayment())
Answer: I fixed it: the payment amount needed to be in satoshi, not BTC. I should have
looked at the API docs a bit more first ;)
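For reference, a hedged sketch of the conversion (the 1e8 satoshi-per-BTC
factor is standard; the call below reuses the `Wallet` class from the
question):

# convert a BTC amount to integer satoshi before passing it to the API
amount_btc = 0.001
amount_satoshi = int(round(amount_btc * 1e8))  # 100000
Wallet().SendPayment(amount=str(amount_satoshi))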
|
change order of columns in csv (python)
Question: I made a script which reads a given input file (**csv**), manipulates the
data somehow and writes an output file (**csv**).
In my case, my given input file looks like this:
| sku | article_name |
| 1 | MyArticle |
For my output file, I need to re-arrange these columns (there are plenty more,
but I think I might be able to solve it when someone shows me the way).
My output file should look like this:
| article_name | another_column | sku |
| MyArticle | | 1 |
Note that here is a new column that isn't in the source csv file, but it has
to be printed anyway (the order is important as well).
This is what I have so far:
#!/usr/bin/env python
# -*- coding: latin_1 -*-
import csv
import argparse
import sys

header_mappings = {'attr_artikel_bezeichnung1': 'ARTICLE LABEL',
                   'sku': 'ARTICLE NUMBER',
                   'Article label locale': 'Article label locale',
                   'attr_purchaseprice': 'EK-Preis',
                   'attr_salesPrice': 'EuroNettoPreis',
                   'attr_salesunit': 'Einheit',
                   'attr_salesvatcode': 'MwSt.-Satz',
                   'attr_suppliercode': 'Lieferantennummer',
                   'attr_suppliersitemcode': 'Artikelnummer Lieferant',
                   'attr_isbatchitem': 'SNWarenausgang'}

row_mapping = {'Einheit': {'pc': 'St.'},
               'MwSt.-Satz': {'3': '19'}}

def remap_header(header):
    for h_map in header_mappings:
        if h_map in header:
            yield header_mappings.get(h_map), header.get(h_map)

def map_header(header):
    for elem in header:
        yield elem, header.index(elem)

def read_csv(filename):
    with open(filename, 'rb') as incsv:
        csv_reader = csv.reader(incsv, delimiter=';')
        for r in csv_reader:
            yield r

def add_header(header, fields=()):
    for f in fields:
        header.append(f)
    return header

def duplicate(csv_row, header_name, fields):
    csv_row[new_csv_header.index(fields)] = csv_row[new_csv_header.index(header_name)]
    return csv_row

def do_new_row(csv_row):
    for header_name in new_csv_header:
        for r_map in row_mapping:
            row_content = csv_row[mapped_header.get(r_map)]
            if row_content in row_mapping.get(r_map):
                csv_row[mapped_header.get(r_map)] = row_mapping.get(r_map).get(row_content)
        try:
            yield csv_row[mapped_header.get(header_name)]
        except TypeError:
            continue

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--infile', metavar='CSV')
    parser.add_argument('-o', '--outfile', metavar='CSV')
    args = parser.parse_args()
    arguments = vars(args)

    if len(sys.argv[1:]) == 0:
        parser.print_usage()
        sys.exit(0)

    # print arguments
    # parse_csv(**arguments)

    csv_reader_iter = read_csv(arguments.get('infile'))

    # new csv header
    new_csv_header = list()
    csv_header = next(csv_reader_iter)
    for h in csv_header:
        if h in header_mappings:
            new_csv_header.append(header_mappings.get(h))
    # print new_csv_header

    new_csv_header = add_header(new_csv_header, ('Article label locale', 'Nummer'))
    mapped_header = dict(remap_header(dict(map_header(csv_header))))
    # print mapped_header

    with open(arguments.get('outfile'), 'wb') as outcsv:
        csv_writer = csv.writer(outcsv, delimiter=';')
        csv_writer.writerow(new_csv_header)
        for row in csv_reader_iter:
            row = list(do_new_row(row))
            delta = len(new_csv_header) - len(row)
            if delta > 0:
                row = row + (delta * [''])
            # duplicate(row, 'SNWarenausgang', 'SNWareneingang')
            csv_writer.writerow(row)
    print "Done."

"""
print new_csv_header
for row in csv_reader_iter:
    row = list(do_new_row(row))
    delta = len(new_csv_header) - len(row)
    if delta > 0:
        row = row + (delta * [''])
    duplicate(row, 'Herstellernummer', 'Nummer')
    duplicate(row, 'SNWarenausgang', 'SNWareneingang')
    print row
"""
Right now, even though it says "ARTICLE LABEL" first, the sku is printed
first. My guess: this is due to the order of the csv file, since sku is the
first field there... right?
Thank you in advance
Answer: If you use the `DictWriter` from the `csv` lib you can specify the order of
the columns. Use `DictReader` to read in rows from your file as dicts. Then
you just explicitly specify the order of the keys when you create your
`DictWriter`.
<https://docs.python.org/2/library/csv.html#csv.DictReader>
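A hedged sketch of that approach (the file names, the `;` delimiter and the
column list are assumptions based on the question):

import csv

fieldnames = ['article_name', 'another_column', 'sku']

with open('in.csv', 'rb') as infile, open('out.csv', 'wb') as outfile:
    reader = csv.DictReader(infile, delimiter=';')
    # restval fills columns missing from the input (e.g. another_column);
    # extrasaction='ignore' drops input columns not listed above
    writer = csv.DictWriter(outfile, fieldnames=fieldnames, delimiter=';',
                            restval='', extrasaction='ignore')
    writer.writeheader()
    for row in reader:
        writer.writerow(row)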
|
getting sublime text 3 to use anaconda python
Question: So I've installed anaconda to a directory I have privileges for but I can't
get sublime text 3 to recognise that the shell is now using anaconda python:
>which python
/local/home/USER/Apps/anaconda/bin/python
when I build with sublime launched from the same shell:
import astropy
print astropy.__file__
it gives a different directory: /soft/python-SL7/lib/python2.7/site-packages/astropy/__init__.pyc
My .tcshrc file reads:
setenv PATH /local/home/USER/Apps/anaconda/bin:${PATH}
alias subl /local/home/USER/Apps/sublime_text_3/sublime_text
My .bashrc (not that it should be using it) reads:
export PATH="/local/home/sread/Apps/anaconda/bin:$PATH"
Any ideas?
Answer: The easiest way is to create a new [build
system](http://docs.sublimetext.info/en/latest/file_processing/build_systems.html)
that points to your Anaconda installation. Create a new file in Sublime with
JSON syntax and the following contents:
{
"cmd": ["/local/home/USER/Apps/anaconda/bin/python", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
Save the file in your `Packages/User` directory (should be
`~/.config/sublime-text-3/Packages/User`) as `Anaconda.sublime-build`. Finally,
select **`Tools → Build System → Anaconda`**, and when you hit `Ctrl+B` in a
Python file it should now run using Anaconda.
If you want to set up
[`SublimeREPL`](https://packagecontrol.io/packages/SublimeREPL) to use
Anaconda with IPython in Sublime, you can follow the instructions
[here](http://stackoverflow.com/a/20861527/1426065) to set up the proper menu
option (altering the path to suit your environment, of course), and my gist
[here](https://gist.github.com/MattDMo/6cb1dfbe8a124e1ca5af) for setting up
SublimeREPL for IPython 4 and Jupyter.
|
Update or 'Refresh' a Text Field in Pymel / Python
Question: I'm currently writing a simple script inside Maya to fetch camera info and
present it in a GUI. The script prints the camera data of the selected camera
no problem; however, I can't seem to get it to update the text fields with the
data when the button is hit. I'm sure it's a simple callback, but I can't work
out how to do it.
Here's the code:
from pymel.core import *
import pymel.core as pm

camFl = 0
camAv = 0

win = window(title="Camera Information", w=300, h=100)
layout = columnLayout()
txtFl = text("Field Of View:"), textField(ed=0, tx=camFl)
pm.separator(height=10, style='double')
txtAv = text("F-Stop:"), textField(ed=0, tx=camAv)
pm.separator(height=10, style='double')
btn = button(label="Fetch Data", parent=layout)

def fetchAttr(*args):
    camSel = ls(sl=True)
    camAttr = camSel[0]
    cam = general.PyNode(camAttr)
    camFl = cam.fl.get()
    camAv = cam.fs.get()
    print "Camera Focal Length: " + str(camFl)
    print "Camera F-Stop: " + str(camAv)

btn.setCommand(fetchAttr)
win.show()
Thanks!
Answer: A couple of things:
1) You're assigning `txtAv` and `txtFl` to _both_ a textField and a text
object because of the comma on those lines. So you can't set properties; you
have two objects in one variable instead of just PyMEL handles.
2) You're relying on the user to select the shapes, so the code will go south
if they select the camera node in the outliner.
Otherwise the basis is sound. Here's a working version:
from pymel.core import *
import pymel.core as pm

win = window(title="Camera Information", w=300, h=100)
layout = columnLayout()
text("Field of View")
txtFl = textField()
pm.separator(height=10, style='double')
text("F-Stop")
txtAv = textField()
pm.separator(height=10, style='double')
btn = button(label="Fetch Data", parent=layout)

def fetchAttr(*args):
    camSel = listRelatives(ls(sl=True), ad=True)
    camSel = ls(camSel, type='camera')
    camAttr = camSel[0]
    cam = general.PyNode(camAttr)
    txtAv.setText(cam.fs.get())
    txtFl.setText(cam.fl.get())

btn.setCommand(fetchAttr)
win.show()
|
integrate 3 body system in python
Question: This is more of a mathematical question.
We have a 3-body system with known initial parameters like positions, masses
and velocities. This makes a system like
[](http://i.stack.imgur.com/VyZjB.gif)
with i and j = 1, 2, 3.
So the question is how to deal with this, to feed this system to **scipy
odeint**?
**UPDATE**
I wrote code
from scipy import linspace
from scipy.integrate import odeint

def rf(i, j, r):
    x1, y1, z1 = r[0]
    x2, y2, z2 = r[1]
    x3, y3, z3 = r[2]
    r12 = ((x1-x2)**2+(y1-y2)**2+(z1-z2)**2)**2
    r13 = ((x1-x3)**2+(y1-y3)**2+(z1-z3)**2)**2
    r23 = ((x2-x3)**2+(y2-y3)**2+(z2-z3)**2)**2
    if i == 1:
        if j == 2:
            r = r12
        elif j == 3:
            r = r13
    if i == 2:
        if j == 1:
            r = r12
        elif j == 3:
            r = r23
    if i == 3:
        if j == 1:
            r = r13
        elif j == 2:
            r = r23
    return r

def dotf(r, t0):
    x1, y1, z1 = r[0]
    x2, y2, z2 = r[1]
    x3, y3, z3 = r[2]
    x = [x1, x2, x3]
    y = [y1, y2, y3]
    z = [z1, z2, z3]
    k2 = 6.67e-11
    m = [2e6, 2.2e7, 0.1e3]
    for i in range(1, 3, 1):
        xs, ys, zs = 0, 0, 0
        for j in range(1, 3, 1):
            if i != j:
                r = rf(i, j, r)
                xs = xs + m[j]*(x[i]-x[j])/r
                ys = ys + m[j]*(y[i]-y[j])/r
                zs = zs + m[j]*(z[i]-z[j])/r
        x[i] = -1 * k2 * xs
        y[i] = -1 * k2 * ys
        z[i] = -1 * k2 * zs
    return [[x1,y1,z1], [x2,y2,z2], [x3,y3,z3]]

t = linspace(0, 50, 1e4)
r1 = [1, 2, 1]
r2 = [0.5, 0.1, 3]
r3 = [0.6, 1, 1.5]
r = [r1, r2, r3]
u = odeint(dotf, r, t)
and get output error:
Traceback (most recent call last):
File "3b.py", line 73, in <module>
u = odeint(dotf, r, t)
File "/usr/local/lib/python2.7/dist-packages/scipy/integrate/odepack.py", line 148, in odeint
ixpr, mxstep, mxhnil, mxordn, mxords)
ValueError: object too deep for desired array
Answer: I've corrected two obvious bugs in your code; it runs, but I'm not sure it's
correct. Code at the end.
The first bug is that `odeint` wants to deal in vectors for state and state
derivative. By vector I mean rank-1 arrays; you were submitting a matrix
(rank-2 array) as your initial condition. I've changed the `r` assignment,
possibly confusingly, as `r = r1 + r2 + r3`; the `+` operator on lists is
concatenation, though on numpy arrays it is element-wise addition.
This change meant I had to change the assignment to `x1`, etc., in `dotf` and
`rf`. I also changed the return from `dotf` to be a vector.
The second bug is that you used `r` to mean two different things: first, the
system state vector, and second, the return from `rf`. I changed the second one
to be `rr`.
The solution appears to be unstable; I don't know if this is plausible. I
imagine there are other bugs, but you at least have something that runs now.
My suggestion is starting with something simpler that you know the solution
to, e.g., a stable first-order linear system, a stable underdamped second-
order system, or Lorenz if you want something chaotic.
See <http://docs.scipy.org/doc/> for Scipy documentation; e.g., I used
<http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.integrate.odeint.html#scipy.integrate.odeint>
for `odeint` for Scipy 0.13.0 (which is what comes with my Ubuntu 14.04
system).
from scipy import linspace
from scipy.integrate import odeint

def rf(i, j, r):
    x1, y1, z1 = r[:3]
    x2, y2, z2 = r[3:6]
    x3, y3, z3 = r[6:]
    r12 = ((x1-x2)**2+(y1-y2)**2+(z1-z2)**2)**2
    r13 = ((x1-x3)**2+(y1-y3)**2+(z1-z3)**2)**2
    r23 = ((x2-x3)**2+(y2-y3)**2+(z2-z3)**2)**2
    if i == 1:
        if j == 2:
            r = r12
        elif j == 3:
            r = r13
    if i == 2:
        if j == 1:
            r = r12
        elif j == 3:
            r = r23
    if i == 3:
        if j == 1:
            r = r13
        elif j == 2:
            r = r23
    return r

def dotf(r, t0):
    x1, y1, z1 = r[:3]
    x2, y2, z2 = r[3:6]
    x3, y3, z3 = r[6:]
    x = [x1, x2, x3]
    y = [y1, y2, y3]
    z = [z1, z2, z3]
    k2 = 6.67e-11
    m = [2e6, 2.2e7, 0.1e3]
    for i in range(1, 3, 1):
        xs, ys, zs = 0, 0, 0
        for j in range(1, 3, 1):
            if i != j:
                rr = rf(i, j, r)
                xs = xs + m[j]*(x[i]-x[j])/rr
                ys = ys + m[j]*(y[i]-y[j])/rr
                zs = zs + m[j]*(z[i]-z[j])/rr
        x[i] = -1 * k2 * xs
        y[i] = -1 * k2 * ys
        z[i] = -1 * k2 * zs
    return [x1,y1,z1, x2,y2,z2, x3,y3,z3]

t = linspace(0, 50, 1e4)
r1 = [1, 2, 1]
r2 = [0.5, 0.1, 3]
r3 = [0.6, 1, 1.5]
r = r1 + r2 + r3
u = odeint(dotf, r, t)

# uncomment to plot
# from matplotlib import pyplot
#
# pyplot.plot(t, u)
# pyplot.show()
|
How do I connect with Python to a RESTful API using keys instead of basic authentication username and password?
Question: I am new to programming, and was asked to take over a project where I need to
change the current Python code we use to connect to a Ver 1 RESTful API. The
company has switched to Ver 2 of the API, which now requires IDs and keys for
authentication instead of the basic username and password. The old code that
worked for the Ver 1 API looks like this:
import requests
import simplejson as json
import pprintpp as pprint
#API_Ver1 Auth
USER = 'username'
PASS = 'password'
url = 'https://somecompany.com/api/v1/groups'
s = requests.Session()
s.auth = (USER, PASS)
r = json.loads(s.get(url).text)
groups = r["data"]
I can connect to the Ver 2 API via a terminal using a cURL string like this:

curl -v -X GET -H "X-ABC-API-ID:x-x-x-x-x" -H "X-ABC-API-KEY:nnnnnnnnnnnnnnnnnnnnnnn" -H "X-DE-API-ID:x" -H "X-DE-API-KEY:nnnnnnnnnnnnnnnnnnnnnnnn" "https://www.somecompany.com/api/v2/groups/"
I have searched, but have been unsuccessful in finding a way to get the IDs
and Keys from the cURL string to allow access to the Ver 2 API using Python.
Thanks for your consideration in helping a noob get through this code change!
Answer: You can add HTTP headers to a request:
headers = {
'X-ABC-API-ID': 'x-x-x-x-x',
'X-ABC-API-KEY': 'nnnnnnnnnnnnnnnnnnnnnnn',
'X-DE-API-ID': 'x',
'X-DE-API-KEY': 'nnnnnnnnnnnnnnnnnnnnnnnn'
}
r = requests.get('https://www.somecompany.com/api/v2/groups/', headers=headers)
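A hedged sketch of how that might look when reusing the session-based pattern
from the v1 code (the `"data"` envelope in the response is an assumption
carried over from v1, not something confirmed for v2):

import requests

s = requests.Session()
s.headers.update(headers)  # the headers dict from above
r = s.get('https://www.somecompany.com/api/v2/groups/')
groups = r.json()["data"]  # assuming v2 keeps the same "data" envelope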
|
Python pandas map dict keys to values
Question: I have a csv for input, whose row values I'd like to join into a new field.
This new field is a constructed url, which will then be processed by the
requests.post() method.
I am constructing my url correctly, but my issue is with the data object that
should be passed to requests. How can I have the correct values passed to
their proper keys when my dictionary is unordered? If I need to use an ordered
dict, how can I properly set it up with my current format?
Here is what I have:
import pandas as pd
import numpy as np
import requests
test_df = pd.read_csv('frame1.csv')
headers = {'content-type': 'application/x-www-form-urlencoded'}
test_df['FIRST_NAME'] = test_df['FIRST_NAME'].astype(str)
test_df['LAST_NAME'] = test_df['LAST_NAME'].astype(str)
test_df['ADDRESS_1'] = test_df['ADDRESS_1'].astype(str)
test_df['CITY'] = test_df['CITY'].astype(str)
test_df['req'] = 'site-url.com?' + '&FIRST_NAME=' + test_df['FIRST_NAME'] + '&LAST_NAME=' + \
                 test_df['LAST_NAME'] + '&ADDRESS_1=' + test_df['ADDRESS_1'] + '&CITY=' + test_df['CITY']

arr = test_df.values
d = {'FIRST_NAME': test_df['FIRST_NAME'], 'LAST_NAME': test_df['LAST_NAME'],
     'ADDRESS_1': test_df['ADDRESS_1'], 'CITY': test_df['CITY']}
test_df = pd.DataFrame(arr[0:, 0:], columns=d, dtype=np.str)

data = test_df.to_dict()
data = {k: v for k, v in data.items()}

test_df['raw_result'] = test_df['req'].apply(lambda x: requests.post(x, headers=headers,
                                                                     data=data).content)
test_df.to_csv('frame1_result.csv')
I tried to map values to keys with a dict comprehension, but the assignment of
a key like `FIRST_NAME` could end up mapping to values from an arbitrary field
like `test_df['CITY']`.
Answer: Not sure if I understand the problem correctly. However, you can give an
argument to the `to_dict` function, e.g.

data = test_df.to_dict(orient='records')

which will give you output as follows: `[{'FIRST_NAME': ..., 'LAST_NAME':
...}, {'FIRST_NAME': ..., 'LAST_NAME': ...}]` (a list with the same length as
`test_df`). This might be one possibility to easily map it to the correct row.
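A hedged sketch of using that per-row mapping with the question's code (the
zip over `test_df['req']` is my assumption about how the urls pair with rows):

records = test_df.to_dict(orient='records')
# one dict per row keeps each key aligned with that row's value
test_df['raw_result'] = [
    requests.post(url, headers=headers, data=rec).content
    for url, rec in zip(test_df['req'], records)
]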
|
No module named items in scrapy
Question: When I run my spider in scrapy it shows "no module named items".
In the items file I have defined only two items, and I need to make a csv for
those items. In the spider file I am importing that file, and the import error
shown in the console is below.
Here is the code of the items file:
Here is code of items file :
import scrapy

class OddsItem(scrapy.Item):
    Title = scrapy.Field()
    Date = scrapy.Field()
Here is the code of the spider:

import scrapy
import time
from odds.items import OddsItem
from selenium import webdriver

class OddsSpider(scrapy.Spider):
    name = "odds"
    # ...... other code ....
Error in console :
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 142, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 209, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 115, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 296, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py", line 30, in from_settings
return cls(settings)
File "/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py", line 21, in __init__
for module in walk_modules(name):
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/yohack/scraping_scrapy/odds/odds/odds/spiders/odds.py", line 3, in <module>
from odds.items import OddsItem
ImportError: No module named items
Answer: Scrapy generates the following directory structure by default
odds/
|
|---scrapy.cfg # deploy configuration file
|
`---odds/ # project's Python module, you'll import your code from here
|
|---__init__.py
|
|---items.py # project items file
|
|---pipelines.py # project pipelines file
|
|---settings.py # project settings file
|
`---spiders/ # a directory where you'll later put your spiders
|
|---__init__.py
|
`---odds.py
from odds.items import OddsItem

looks for items.py in the `odds` package directory (the one containing
`__init__.py` and the spiders directory) by default. Check that you have this
structure right. Also make sure you have an `__init__.py` file in that folder;
it tells Python to look for submodules in that directory.
|
Calling and creating a function in python
Question: I'm fairly new to Python (still in the first few chapters of an introductory
book) and I was wondering: once I have defined a function and saved it in
script mode, and I want to use it in another script, do I have to define it in
the new program?
To clarify, suppose I create a function `multiplybyfive(x)` which takes an
input x and returns 5x. Now suppose I create a program which uses
`multiplybyfive(x)` in it. Do I have to define it again in this new program, or
can I simply call it somehow?
Answer: You can import the Python file which contains the function multiplybyfive(x)
into your new file.
For example, if you write multiplybyfive(x) in a file mathfunction.py, you can
use it in a new program like this:
from mathfunction import multiplybyfive
x = 1
y = multiplybyfive(x)
print y
|
asyncio: Wait for event from other thread
Question: I'm designing an application in Python which should access a machine to
perform some (lengthy) tasks. The asyncio module seems to be a good choice for
everything that is network-related, but now I need to access the serial port
for one specific component. I've implemented kind of an abstraction layer for
the actual serial port stuff, but can't figure out how to sensibly integrate
this with asyncio.
Following setup: I have a thread running a loop, which regularly talks to the
machine and decodes the responses. Using a method `enqueue_query()`, I can put
a query string into a queue, which will then be sent to the machine by the
other thread and cause a response. By passing in a `threading.Event` (or
anything with a `set()` method), the caller can perform a blocking wait for
the response. This can then look something like this:
f = threading.Event()
ch.enqueue_query('2 getnlimit', f)
f.wait()
print(ch.get_query_responses())
My goal is now to put those lines into a coroutine and have asyncio handle
this waiting, so that the application can do something else in the meantime.
How could I do this? It would probably work by wrapping the `f.wait()` into an
Executor, but this seems to be a bit stupid, as this would create a new thread
only to wait for another thread to do something.
Thanks! Best regards, Philipp
Answer: > By passing in a `threading.Event` (or anything with a `set()` method), the
> caller can perform a blocking wait for the response.
Given the above behavior of your query function, all you need is a thread-safe
version of `asyncio.Event`. It's just 3 lines of code:
import asyncio

class Event_ts(asyncio.Event):
    # TODO: clear() method
    def set(self):
        # FIXME: The _loop attribute is not documented as public api!
        self._loop.call_soon_threadsafe(super().set)
A test for functionality:
def threaded(event):
    import time
    while True:
        event.set()
        time.sleep(1)

async def main():
    import threading
    e = Event_ts()
    threading.Thread(target=threaded, args=(e,)).start()
    while True:
        await e.wait()
        e.clear()
        print('whatever')

asyncio.ensure_future(main())
asyncio.get_event_loop().run_forever()
|
How to use a vector of struct/class in python
Question: I can't seem to find a way to implement a vector of structs in Python.

struct Struct {
    int a, b;
};

This is, say, a struct, and you could use it in a vector by declaring it:

Struct b[100];
for (i = 0; i <= 99; i++) {
    b[i].a = i + 2;
    b[i].b = i - 2;
}

So how do you do this in Python, since the variables declare themselves?
Answer: `namedtuple` is good for making simple data containers:
from collections import namedtuple
Vector = namedtuple('Vector', 'a b')
v = Vector(a=1, b=2)
vv = [Vector(i-2, i+2) for i in range(100)]
|
Python - re-ordering columns in a csv
Question: I have a bunch of csv files with the same columns but in different order. We
are trying to upload them with SQL*Plus, but we need the columns in a fixed
arrangement.
Example
required order: A B C D E F
csv file: A C D E B (sometimes a column is not in the csv because it is not
available)
Is this achievable with Python? We are using Access + macros to do it... but it
is too time consuming.
PS. Sorry if anyone gets upset by my English skills.
Answer: You can use the [csv module](https://docs.python.org/2/library/csv.html) to
read, reorder, and then and write your file.
**Sample File:**
$ cat file.csv
A,B,C,D,E
a1,b1,c1,d1,e1
a2,b2,c2,d2,e2
**Code**
import csv

with open('file.csv', 'r') as infile, open('reordered.csv', 'a') as outfile:
    # output dict needs a list for new column ordering
    fieldnames = ['A', 'C', 'D', 'E', 'B']
    writer = csv.DictWriter(outfile, fieldnames=fieldnames)
    # reorder the header first
    writer.writeheader()
    for row in csv.DictReader(infile):
        # writes the reordered rows to the new file
        writer.writerow(row)
**output**
$ cat reordered.csv
A,C,D,E,B
a1,c1,d1,e1,b1
a2,c2,d2,e2,b2
|
Jenkins Python API, create_node(), error: 400
Question: I'm setting up my Jenkins environment and, as part of this process, I need to
create a slave node.
Below is the script that is crashing:

import jenkins
server = jenkins.Jenkins('http://localhost:9090')
server.create_node('slave')
The output follows:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/jenkins/__init__.py", line 1156, in create_node
    self._build_url(CREATE_NODE, params), b''))
  File "/usr/local/lib/python2.7/dist-packages/jenkins/__init__.py", line 341, in jenkins_open
    response = urlopen(req, timeout=self.timeout).read()
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
Answer: Use the jenkinsapi module instead:
from jenkinsapi.jenkins import Jenkins
server = Jenkins('http://localhost:9090')
server.create_node('slave')
|
Capture the run time from running a unittest TestCase into a Python variable
Question: I have several Python files, each containing a Python function and a TestCase,
similar to the one below. I want to have a separate python script which will
execute each file and capture the output. As it happens, all of the tests will
be passing and the output will be similar to below "Ran x tests in x seconds"
but this output is stderr, NOT stdout.
I would like to be able to append each output to a list.
From a bash shell i can simply use:
$ python combine.py &> result.txt
which would save the output to a file (not ideal, but at least something I can
then retrieve), so I've tried using:

os.system('python combine.py &> result.txt')

...but that just prints it to the terminal.
There may be a simpler way, such as calling a TestCase method directly rather
than executing the whole file, which may actually be a better solution, but I
have not been able to figure out a way to do that either.
The ultimate goal is to execute the unittests in each of several files and
return a list of the results, such as ['Ran 3 tests in 0.3 seconds', 'Ran 2
tests in 0.6 seconds', etc.]
# combine.py
def combine(*args):
    result = []
    args = list(args)
    while len(args):
        for arg in args:
            try:
                result.append(arg.pop(0))
            except IndexError:
                a = args.index(arg)
                args.pop(a)
    return result

from unittest import TestCase, main

class CombineTests(TestCase):
    def runTest(self):
        result = ['a', 1, 'one', 'b', 2, 'two', 'c', 3, 4]
        self.assertEquals(combine(['a','b','c'], [1,2,3,4], ['one','two']), result)

if __name__ == '__main__': main()
* * *
## output (stderr):
Ran 5 tests in 0.002s
OK
Answer: The `unittest` module's `main` method has two options I think you'll find
helpful:

main(module='test_module', exit=False)
The returned value of this execution is a `TestProgram` class object. You can
run the below in an interactive interpreter.
import unittest
import datetime

def fnc(a, b):
    return a + b

class MyTest(unittest.TestCase):
    def test_one(self):
        self.assertEqual(fnc(1, 1), 2)

start = datetime.datetime.now()
tmp = unittest.main(module='__main__', exit=False)
end = datetime.datetime.now()
duration = end - start
print(duration)
There may be a way to extract the runtime from the object saved here into
`tmp`, but I'm not sure what it would be. You can loop through your modules,
swapping their value in for `'__main__'` in the call to `unittest.main()`, and
capture the runtime duration.
|
Slinky in VPython
Question: The goal of the VPython code below is to model a slinky using 14 balls to
represent the broken-down components of the slinky. But I am facing some
problems with my code. For instance,
R[i] = ball[i].pos - ball[i+1].pos
raises
> 'int' object has no attribute 'pos'
What is the meaning of the above error?
This is my program:
from __future__ import print_function, division
from visual import *

ceiling = box(pos=(0,0,0), size=(0.15, 0.015, 0.15), color=color.green)

n = 14          # number of "coils" (ball objects)
m = 0.015       # mass of each ball; the mass of the slinky is therefore m*n
L = 0.1         # length of the entire slinky
d = L/n         # a starting distance that is equal between each
k = 200         # spring constant of the bonds between the coils
t = 0
deltat = 0.002
g = -9.8
a = vector(0,g,0)

ball = [28]
F = []
R = []

# make 28 balls, each a distance "d" away from each other
for i in range(n):
    ball = ball + [sphere(pos=vector(0,-i*d,0), radius=0.005, color=color.white)]
    # ball[i].p = vector(0,0,0)

for i in range(n-1):
    F.append(vector(0,0,0))
    R.append(vector(0,0,0))

# calculate the force on each ball and update its position as it falls
while t < 5:
    rate(200)
    for i in range(n-1):
        R[i] = ball[i].pos - ball[i+1].pos
        F[i] = 200*(mag(R[i]) - d)*norm(R[i])
        # r[i] = ball[i].pos - ball[i+1].pos
        # F[i] = 200*((r[i].mag - d)*((r[i])/(r[i].mag)))
    for i in range(n-1):
        ball[i+1].p = ball[i+1].p + (F[i] - F[i+1] + (0.015*a))*deltat
        ball[i+1].pos = ball[i+1].pos + ((ball[i+1].p)/0.015)*deltat
    t = deltat + t
Answer: It looks like you've assigned an `int` value to `ball[i]`. With that in
mind, the error message is pretty clear: an integer object doesn't have an
attribute `.pos` (which belongs to the `sphere` class).
If you expect ball to be a list of sphere objects, you need to do something
differently. Currently you've initialized `ball = [28]`, which is a _list_
with a single element, integer value 28.
So, where `i` = 0, `ball[i]` returns `28`, which obviously doesn't have any
such attribute of a sphere.
This might do it:
ball = []  ### initialize ball as an empty list
F = []
R = []

# make 28 balls, each a distance "d" away from each other
"""
You said you want only 14 balls, but the comment above says 28...
"""
for i in range(n):
    ball.append(sphere(pos=vector(0,-i*d,0), radius=0.005, color=color.white))
|
Error downloading HTTPS webpage
Question: I am downloading some data from an https webpage
(<https://www.spar.si/sl_SI/zaposlitev/prosta-delovna-mesta-.html>), and I get
the error below due to HTTPS. When I change the webpage manually to HTTP, it
downloads fine. I was looking for similar examples to fix this but did not
find any. Do you have some idea what to do?
Traceback (most recent call last):
File "down.py", line 34, in <module>
soup = BeautifulSoup(urllib.urlopen(url).read(), "html.parser")
File "g:\python\Lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "g:\python\Lib\urllib.py", line 213, in open
return getattr(self, name)(url)
File "g:\python\Lib\urllib.py", line 443, in open_https
h.endheaders(data)
File "g:\python\Lib\httplib.py", line 1049, in endheaders
self._send_output(message_body)
File "g:\python\Lib\httplib.py", line 893, in _send_output
self.send(msg)
File "g:\python\Lib\httplib.py", line 855, in send
self.connect()
File "g:\python\Lib\httplib.py", line 1274, in connect
server_hostname=server_hostname)
File "g:\python\Lib\ssl.py", line 352, in wrap_socket
_context=self)
File "g:\python\Lib\ssl.py", line 579, in __init__
self.do_handshake()
File "g:\python\Lib\ssl.py", line 808, in do_handshake
self._sslobj.do_handshake()
IOError: [Errno socket error] [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
This is my programme:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# encoding=UTF-8
#
# DOWNLOADER
# To grab the text content of webpages and save it to TinyDB database.

import re, time, urllib, tinydb
from bs4 import BeautifulSoup

start_time = time.time()

# Open file with urls.
with open("G:/myVE/vacancies/urls2.csv") as f:
    lines = f.readlines()

# Open file to write HTML to.
with open("G:/myVE/downloader/urls2_html.txt", 'wb') as g:
    # We parse the content of the url file to get just urls without the first line and without the text.
    for line in lines[1:len(lines)]:
        # Read the url from the file.
        #url = line.split(",")[0]
        url = line
        print "test"
        # Read the HTML of the url.
        soup = BeautifulSoup(urllib.urlopen(url).read(), "html.parser")
        print url
        # Mark of new HTML in HTML file.
        g.write("\n\nNEW HTML\n\n")
        # Write new HTML to file.
        g.write(str(soup))

print "Html saved to html.txt"
print "--- %s seconds ---" % round((time.time() - start_time), 2)

"""
#We read HTML of the employment webpage that we intend to parse.
soup = BeautifulSoup(urllib.urlopen('http://www.simplybusiness.co.uk/about-us/careers/jobs/').read(), "html.parser")
#We write HTML to a file.
with open("E:/analitika/SURS/tutorial/tutorial/html.txt", 'wb') as f:
    f.write(str(soup))
print "Html saved to html.txt"
print "--- %s seconds ---" % round((time.time() - start_time),2)
"""
Thank you!
Answer: You should use the `requests` library; see
<http://docs.python-requests.org/en/latest/user/advanced/#ssl-cert-verification> as a reference.
**Updated to add**
Now with your url here is an example with the `requests` library.
import requests
url = "https://www.spar.si/sl_SI/zaposlitev/prosta-delovna-mesta-.html"
r = requests.get(url, verify=True)
print(r.text)
Here is an example with `beautifulsoup` and Python 3.3 which also seems to
work.
import urllib
from bs4 import BeautifulSoup
url = "https://www.spar.si/sl_SI/zaposlitev/prosta-delovna-mesta-.html"
soup = BeautifulSoup(urllib.request.urlopen(url).read(), "html.parser")
print(soup)
|
What is the purpose of using python `super()` inside `__init__`, other than for inheritance?
Question: I found this simple program in a YouTube tutorial which uses PySide modules
with Python. Basically, what it does is connect a `QLineEdit` to a
`QTextBrowser`. As you can see below, the entire program is handled by a single
class. I have a basic idea of the super() function, which is used in multiple
inheritance. But here, I don't understand what the `super(Form,
self).__init__(parent)` statement does. I tried running the same program after
commenting out that statement, which produced the error message below.
Traceback (most recent call last):
File "/home/dazz/Projects/PycharmProjects/FirstTutorial/a2_gui.py", line 35, in <module>
form = Form()
File "/home/dazz/Projects/PycharmProjects/FirstTutorial/a2_gui.py", line 17, in __init__
self.setLayout(layout)
RuntimeError: '__init__' method of object's base class (Form) not called.
Program code:
import sys
from PySide.QtCore import *
from PySide.QtGui import *

class Form(QDialog):
    def __init__(self, parent=None):
        super(Form, self).__init__(parent)
        self.browser = QTextBrowser()
        self.lineEdit = QLineEdit()
        layout = QVBoxLayout()
        layout.addWidget(self.browser)
        layout.addWidget(self.lineEdit)
        self.setLayout(layout)
        self.lineEdit.returnPressed.connect(self.update_ui)
        self.setWindowTitle('Calculate')

    def update_ui(self):
        try:
            text = self.lineEdit.text()
            self.browser.append('%s = %s' % (text, eval(text)))
            self.lineEdit.selectAll()
        except:
            self.browser.append('%s is invalid!' % text)

app = QApplication(sys.argv)
form = Form()
form.show()
app.exec_()
Here, what is the use of `super()`? I found a
[question](http://stackoverflow.com/questions/15131282/about-the-super-function)
which may be related to this, but it's not clear to me.
Answer: `super()` is used in inheritance, multiple or otherwise, to call methods that
may have been overridden in the current class.
Here `super()` is used to call the original `__init__` method defined for
`QDialog` or a parent class of that. Not calling the original will probably
have consequences, as that original method does work you don't want to have to
replicate in your own `__init__` method.
`super()` makes multiple inheritance easier and more flexible to deal with,
but it is not _just_ for multiple inheritance.
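To see why skipping the base class `__init__` bites, here is a minimal
standalone illustration (made-up class names, unrelated to Qt):

class Base(object):
    def __init__(self):
        self.ready = True  # setup work done by the parent initializer

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()  # comment this out and c.ready disappears

c = Child()
print(c.ready)  # True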
|
django "__init__takes at least 2 arguments" for dynamic drop down list function
Question: Regarding the Django dynamic drop-down list, I made some trials.
**If there is no dynamic function, it works without problem:**
from django import forms
from django.forms import ModelForm
from .models import input
from result.models import result
from django.contrib.auth.models import User, Group
import Queue

class inputform(forms.ModelForm):
    regionlist = forms.ModelChoiceField(queryset=result.objects.values('Region').distinct())

    class Meta:
        model = input
        fields = ('company', 'Region')
**If I add the dynamic function like below, there is an error of "__init__ takes
at least 2 arguments (2 given)":**
...before is the same as above....

class inputform(forms.ModelForm):
    region = forms.ModelChoiceField(label=u'Region')

    def __init__(self, *args, **kwargs):
        super(inputform, self).__init__(*args, **kwargs)
        self.fields['region'].choices = ((x.que, x.disr) for x in result.objects.values('Region').distinct())

......below is the same as the above one...
**Traceback**
Traceback:
File "C:\Python27\lib\site-packages\django-1.8.3 py2.7.egg\django\core\handlers\base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Python27\lib\site-packages\django-1.8.3-py2.7.egg\django\views\decorators\csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "C:\Users\user\XXX\inputform\views.py" in input
13. form = inputform() #??????
File "C:\Users\user\Desktop\XXX\inputform\forms.py" in __init__
22. self.fields['region'].choices=((x.que,x.disr) for x in result.objects.values('Region').distinct())
File "C:\Python27\lib\site-packages\django-1.8.3-py2.7.egg\django\forms\fields.py" in _set_choices
851. value = list(value)
File "C:\Users\user\Desktop\XXX\inputform\forms.py" in <genexpr>
22. self.fields['region'].choices=((x.que,x.disr) for x in result.objects.values('Region').distinct())
Exception Type: AttributeError at /input
Exception Value: 'dict' object has no attribute 'que'
**Second Trial**
region = forms.ModelChoiceField(queryset=None, label=u'region')

def __init__(self, *args, **kwargs):
    super(InputForm, self).__init__(*args, **kwargs)
    iquery = Result.objects.values_list('region', flat=True).distinct()
    iquery_choices = [('', '')] + [(region, region) for region in iquery]
I think now it is a dict, but it still reports the same error. Please help;
thanks in advance.
Answer: You must pass a queryset to your
[ModelChoiceField](https://docs.djangoproject.com/en/1.8/ref/forms/fields/#fields-which-handle-relationships).
From the docs:
> For more complex uses, you can specify `queryset=None` when declaring the
> form field and then populate the queryset in the form’s `__init__()` method:
class inputform(forms.ModelForm):
    region = forms.ModelChoiceField(queryset=None, label=u'Region')

    def __init__(self, *args, **kwargs):
        super(inputform, self).__init__(*args, **kwargs)
        self.fields['region'].choices = ((x['Region'].que, x['Region'].disr) for x in dupont.objects.values('Region').distinct())
|
Python urllib.request.urlopen() returning error 403
Question: I'm trying to download the HTML of a page (<http://www.guangxindai.com> in
this case) but I'm getting back an error 403. Here is my code:
import urllib.request
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
f = opener.open("http://www.guangxindai.com")
f.read()
but I get an error response:
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
f = opener.open("http://www.guangxindai.com")
File "C:\Python33\lib\urllib\request.py", line 475, in open
response = meth(req, response)
File "C:\Python33\lib\urllib\request.py", line 587, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python33\lib\urllib\request.py", line 513, in error
return self._call_chain(*args)
File "C:\Python33\lib\urllib\request.py", line 447, in _call_chain
result = func(*args)
File "C:\Python33\lib\urllib\request.py", line 595, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
I have tried different request headers, but still cannot get a correct
response. I can view the page through a browser, so this seems strange to me.
I guess the site uses some method to block web spiders. Does anyone know what
is happening? How can I get the HTML of the page correctly?
Answer: If your aim is to read the html of the page you can use the following code. It
worked for me on Python 2.7
import urllib
f = urllib.urlopen("http://www.guangxindai.com")
f.read()
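As an alternative sketch, the `requests` library lets you send a browser-like
User-Agent in one call (whether this particular site accepts it is untested,
so treat it as an experiment):

import requests

headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get("http://www.guangxindai.com", headers=headers)
print(r.status_code)
html = r.text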
|
Python file copying deletes original file
Question: I've got the program below, which runs via cron and backs up Asterisk call
recordings.
It works fine for the most part; however, if a call is in progress at the time,
then the act of trying to copy the recording seems to kill it, i.e. it
disappears from both the source and the destination.
Is there any way to prevent this? For example, could I test whether a file is
in use somehow before trying to copy it?
Thanks
from datetime import datetime
from glob import iglob
from os.path import basename, dirname, isdir
from os import makedirs
from sys import argv
from shutil import copyfile

def copy_asterisk_files_tree(src, fullpath=None):
    DEST = datetime.now().strftime('/mnt/shardik/asteriskcalls/' + src)
    if fullpath is None:
        fullpath = src
    if not isdir(DEST):
        makedirs(DEST)
    for path in iglob(src + '/*'):
        if isdir(path):
            copy_asterisk_files_tree(path, fullpath)
        else:
            subdir = '%s/%s' % (
                DEST, dirname(path)[len(fullpath) + 1:]
            )
            if not isdir(subdir):
                makedirs(subdir)
            copyfile(path, '%s/%s' % (
                subdir, basename(path).replace(':', '-')
            ))

if __name__ == '__main__':
    if len(argv) != 2:
        print 'You must specify the source path as the first argument!'
        exit(1)
    copy_asterisk_files_tree(argv[1])
Answer: What you need to do is use a lock. Take a look at the docs:
<https://docs.python.org/2/library/fcntl.html#fcntl.flock>

fcntl.flock(fd, op)
    Perform the lock operation op on file descriptor fd (file objects
    providing a fileno() method are accepted as well). See the Unix manual
    flock(2) for details. (On some systems, this function is emulated
    using fcntl().)
This has also been answered on SO in previous questions, such as this one:
[Locking a file in Python](http://stackoverflow.com/questions/489861/locking-a-file-in-python),
which uses `filelock` (<https://pypi.python.org/pypi/filelock/>). Filelock is
platform independent.
You could also write to temporary file(s) and merge them, but I'd much prefer
the locking approach.
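For the "test if a file is in use" idea, here is a hedged sketch using a
non-blocking `flock`; note this only helps if the process writing the
recordings actually holds a lock on in-progress files, which Asterisk may not
do:

import fcntl

def is_locked(path):
    # try to grab (and immediately release) an exclusive, non-blocking lock
    with open(path) as fh:
        try:
            fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
            fcntl.flock(fh, fcntl.LOCK_UN)
            return False
        except IOError:
            return True

You could then skip any path for which is_locked(path) is True inside
copy_asterisk_files_tree and pick it up on the next cron run.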
|
Python Gooey Run Directory
Question: I'm using the latest [Gooey](https://github.com/chriskiehl/Gooey) library on
Python 2.7 on Windows as a GUI for a simple `argparse` script, but for some
reason the script keeps giving me `[Errno2] No File Exists`.
I think it is because there is a space in the path of the Anaconda
installation (i.e. `C:\Users\FirstName LastName\Etc.`), but I'm stumped.
I have tried `str.replace` on all the `\` with `\\`, but I keep getting the
same error message. Any ideas of what to do?
Code:
from __future__ import print_function
import pandas as pd
import numpy as np
import glob
import sys
import os
import json
from argparse import ArgumentParser
from gooey import Gooey, GooeyParser

@Gooey(program_name="CPT Lookup")
def parse_args():
    stored_args = {}
    parser = GooeyParser(description='CPT Lookup')
    # Eventually make into checkboxes
    parser.add_argument('year',
                        action='store',
                        default=stored_args.get('year'),
                        widget='FileChooser',
                        help="CSV file with extracted year")
    parser.add_argument('CPT',
                        action='store',
                        default=stored_args.get('CPT'),
                        widget='TextField',
                        help='CPT Code')
    args = parser.parse_args()
    return args

def loadCSV(year):
    # DO I DO SOMETHING LIKE YEAR.REPLACE('\','\\')?
    df = pd.read_csv(year)
    return df

if __name__ == '__main__':
    conf = parse_args()
    print("Opening CSV file")
    sales_df = loadCSV(conf.year)
Answer: This was an issue with the Gooey library itself (I'm the author). It wasn't
quoting incoming file paths correctly.
If you pull down the latest release from PyPi (`pip install -U gooey`), your
example script should run without issue.
|
How to split one event into many with reactive extensions?
Question: How do you take a single event in a reactive extensions stream and split it
into multiple events in the same stream?
I have a sequence that retrieves json data which is an array at the top level.
At the point where the json data is decoded, I would like to then take each
element in that array and continue passing those elements along the stream.
Here's an example with an imaginary function that I wish existed (but with a
shorter name!). It's in Python, but I think it's straightforward enough that
it should be legible to other Rx programmers.
# Smallest possible example
from rx import Observable
import requests
stream = Observable.just('https://api.github.com/users')
stream.map(requests.get) \
.map(lambda raw: raw.json()) \
.SPLIT_ARRAY_INTO_SEPARATE_EVENTS() \
.subscribe(print)
Put in other words, I want to make a transition like so:
From:
# --[a,b,c]--[d,e]--|->
To:
# --a-b-c-----d-e---|->
Answer: You can use the `SelectMany` operator:

stream.SelectMany(arr => arr)

This will "flatten" your stream of events, just as the C# LINQ `SelectMany`
operator can be used to flatten a sequence of sequences.
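In RxPY the same operator is exposed as `flat_map` (alias `select_many`); a
sketch of the pipeline above, assuming each decoded payload is a plain list
that RxPY will iterate:

stream.map(requests.get) \
    .map(lambda raw: raw.json()) \
    .flat_map(lambda arr: arr) \
    .subscribe(print)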
|
Python selenium and fuzzy matching
Question: I'm using `Selenium` to populate some drop-down menus. These drop-down menus
are fairly dynamic.
What I have, though, are values that could be in the dropdown, for example:
<select>
<option>Red, wooly, jumper, large, UK</option>
<option>Blue, wooly, jumper, small, USA</option>
<option>Red, wooly, scarf, small, UK</option>
</select>
Ideally, what I'm looking to do is select the option that most closely matches
the following string:
'Red, wooly, small, UK'
This would select the 3rd item from the dropdown.
Could this be done with some kind of matcher? If so, how would I select the
correct element from the dropdown?
Thanks
Answer: Have you tried using a regular expression? A Python regex can match the third
option, or you could even use Python's builtin `.find()` method. Since you're using
Selenium you can find all the option elements, iterate over each element,
check the text of each element, and compare it to your string.
For example
elem = browser.find_elements_by_tag_name("option")
for ele in elem:
        if ele.get_attribute("innerHTML").find('Red') > -1 and ele.get_attribute("innerHTML").find('wooly') > -1 and ele.get_attribute("innerHTML").find('small') > -1 and ele.get_attribute("innerHTML").find('UK') > -1:
#TODO
However that gets kind of long so I would use a regex, for example:
import re
elem = browser.find_elements_by_tag_name("option")
for ele in elem:
m = re.search(r'(Red,.+wooly,.+small,.+UK)', ele.get_attribute("innerHTML"))
if m:
print m.group(1)
If `.get_attribute("innerHTML")` doesn't get the inner text, try the `.text` property.
|
vcfutils for parsing multiple vcf files
Question: I have multiple VCF files, and what I need to achieve is to extract some rows
from each VCF file based on defined filters. In order to achieve that I
started off with using
import vcf
import vcf.utils
which seems straightforward and neat, but I am running into issues. It would
be really great if someone could take a look and guide me a little toward the
desired output.
The VCF file looks like this: it has lines starting with # and, later, the
information we need (a few header lines and the needed rows are as follows):
##fileformat=VCFv4.1
##source=V2
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total depth of quality bases">
##INFO=<ID=SOMATIC,Number=0,Type=Flag,Description="Indicates if record is a somatic mutation">
##INFO=<ID=SS,Number=1,Type=String,Description="Somatic status of variant (0=Reference,1=Germline,2=Somatic,3=LOH, or 5=Unknown)">
chr10 197523 . G A . PASS DP=26;SS=1;SSC=2;GPV=5.4595E-6;SPV=6.1327E-1 GT:GQ:DP:RD:AD:FREQ:DP4 0/1:.:17:8:9:52.94%:5,3,4,5 0/1:.:9:4:5:55.56%:2,2,2,3
chr10 198411 . T G . PASS DP=37;SS=1;SSC=5;GPV=1.2704E-5;SPV=2.7151E-1 GT:GQ:DP:RD:AD:FREQ:DP4 0/1:.:19:13:6:31.58%:8,5,1,5 0/1:.:18:9:8:47.06%:3,6,5,3
and so I used the following piece of Python code to get the information I need.
This piece of code throws an error message:
reader_BM_CR_ID = vcf.Reader(filename="sample/sam/sample.vcf", compressed=False)
writer_CR = vcf.Writer(open('same/sam/sample_filtered.vcf', 'w'), reader_BM_CR_ID)
for variants in vcf.utils(reader_BM_CR_ID):
for call in variants.samples:
if call.sample == 'T':
if call.data.FREQ >='20%':
if call.data.FREQ >'0%':
if call.data.FREQ !='100%':
if call.data.DP >=20:
writer.write_record(id_var)
The error message,
TypeError Traceback (most recent call last)
<ipython-input-471-526e4c3bbab1> in <module>()
----> 1 for variants in vcf.utils(reader_BM_CR_ID):
2
3 for call in variants.samples:
4 if call.sample == 'T':
5 if call.data.FREQ >='20%':
TypeError: 'module' object is not callable
Any help is really appreciated..!!
Answer: You are trying to call the `vcf.utils` module as if it were a function.
Python reports exactly that:
TypeError: 'module' object is not callable
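The callable you were probably after is `vcf.utils.walk_together`, which iterates
several readers in parallel. With a single reader you can just iterate over it
directly; a sketch under that assumption, reusing the paths from your snippet:

    reader = vcf.Reader(filename="sample/sam/sample.vcf", compressed=False)
    writer = vcf.Writer(open('sample/sam/sample_filtered.vcf', 'w'), reader)

    for record in reader:  # one record per data row of the VCF
        for call in record.samples:
            if call.sample == 'T':
                # apply your FREQ/DP filters on call.data here, then:
                writer.write_record(record)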
|
Dynamic Time Warping recursive implementation
Question: Good evening ladies and gents,
I want to implement a Dynamic Time Warping (DTW) algorithm in Python.
For testing purposes I set up a small random distance matrix (e.g., generated
by the Manhattan metric) and then call my DTW algorithm with it.
import numpy as np
from dynamic_time_warping import compute_dtw
x=np.zeros((3,4))
x[0,2]=1.0
x[0,3]=2.0
x[1,2]=1.0
x[1,3]=2.0
x[2,0]=1.0
x[2,1]=1.0
x[2,3]=1.0
compute_dtw(x)
My DTW algorithm looks as follows:
def compute_dtw(W):
if W.shape[0]==1 or W.shape[1]==1:
C=float("inf")
if W.shape[0]==1 and W.shape[1]==1:
C=0.0
else:
C=W[len(W),len(W)]+min(compute_dtw(W[0:-1, 0:-1]),
compute_dtw(W[0:-1]), compute_dtw(W[:, 0:-1]))
return C
I want the algorithm to take the m*n value of x and add it to the next minimum
value which I tried to achieve by calling the function again with a smaller
matrix. `(compute_dtw(W[0:-1, 0:-1]), compute_dtw(W[0:-1]), compute_dtw(W[:,
0:-1]))`
This gives me the following error after going once through the script:
> C=W[len(W),len(W)]+min(compute_dtw(W[0:-1, 0:-1]), compute_dtw(W[0:-1]),
> compute_dtw(W[:, 0:-1])) IndexError: index 3 is out of bounds for axis 0
> with size 3
Apparently, I am accessing an element of the array that doesn't exist, but I
can't figure out where it is breaking.
Thanks for your suggestions and help!
//updated code:
def compute_dtw(W):
if W.shape[0]==1 and W.shape[1]==1:
C=0.0
elif W.shape[0]==1 or W.shape[1]==1:
C=float("inf")
else:
C=W[W.shape[0]-1,W.shape[1]-1]+min(compute_dtw(W[0:-1, 0:-1]), compute_dtw(W[0:-1]), compute_dtw(W[:, 0:-1]))
return C
Answer: Python indexing starts at zero. On your first pass you are asking for element
[3,3] (which does not exist), hence the out-of-bounds error.
I'm not all that familiar with dynamic time warping, but I think you should be
using the `.shape` of the specific axis as opposed to `len()`, which is
just the length of the first dimension of your array. Even then, you'll have
to adjust your recursion to iterate over the bounds of each successive array.
Lastly, the `return` statement should be at the same level as the `if` blocks.
Currently, `compute_dtw` won't return anything in the first two cases, only
when the shape is greater than 1 on both axes.
|
python str.replace() with borders
Question: I'm trying to use the function `str.replace(",", ";", count)`, but when I
fill in count (let's say 1) it only changes the first `","`, whereas I want to
change a certain `","` (working with boundaries). Does anyone have an idea how
to do this?
Answer: You could `rsplit` and join:
s = "foo,foobar,foo"
print(";".join(s.rsplit(",",1)))
Or reverse the string, replace and reverse again:
    print(s[::-1].replace(",",";",1)[::-1])
splitting actually seems a little faster:
    In [7]: timeit s[::-1].replace(",",";",1)[::-1]
1000000 loops, best of 3: 521 ns per loop
In [8]: timeit ";".join(s.rsplit(",",1))
1000000 loops, best of 3: 416 ns per loop
If you want to change the ith occurrence:
def change_ith(st, ith, sep, rep):
return "".join([s + rep if i == ith else s + sep
for i, s in enumerate(st.split(sep, ith), 1)]).rstrip(sep)
Output:
In [15]: s = "foo,foo,bar,foo,foo"
In [16]: change_ith(s, 1, ",",";")
Out[16]: 'foo;foo,bar,foo,foo'
In [17]: change_ith(s, 2, ",",";")
Out[17]: 'foo,foo;bar,foo,foo'
In [18]: change_ith(s, 3, ",",";")
Out[18]: 'foo,foo,bar;foo,foo'
In [19]: change_ith(s, 4, ",",";")
Out[19]: 'foo,foo,bar,foo;foo'
There are cases where join could give incorrect output, e.g. if you had a
string ending in the sep and a few other edge cases; to get a more robust
function we would need to use a regex, passing a lambda as the repl arg and
using `itertools.count` to count how many matches we got:
import re
from itertools import count
def change_ith(st, ith, sep, rep):
return re.sub(sep, lambda m, c=count(1): rep if next(c) == ith else m.group(0), st)
Or applying the same logic to join:
from itertools import count
def change_ith(st, ith, sep, rep):
cn = count(1)
return "".join([rep if ch == sep and next(cn) == ith else ch
for ch in st])
|
for-in-if in jinja2 throws exception
Question: I am new to both Python and Jinja2. I'd like to read a value in a dictionary
of a list. I think there is an answer for such an operation
[here](http://stackoverflow.com/a/8653568/124050). Unfortunately this does not
seem to work in Jinja2. I get this:
> jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'item'.
From what I know Jinja2 does not comprehend full Python, which I think is at
the heart of the problem here. Can anyone please confirm?
Answer: Example using Flask:
main.py
from flask import Flask, render_template
app = Flask(__name__)
@app.route('/')
def hello_world():
dicts = [
{ "name": "Tom", "age": 10 },
{ "name": "Mark", "age": 5 },
{ "name": "Pam", "age": 7 },
{ "name": "Dick", "age": 12 }
]
return render_template("test.html", dicts = dicts)
if __name__ == '__main__':
app.run(debug = True)
In folder templates
test.html
<html>
<body>
<table>
<tr>
<th>Name</th>
<th>Age</th>
</tr>
{% for dic in dicts %}
{%if dic['name'] == 'Pam'%}
<tr><td><b>{{dic['name']}}</b></td><td><b>{{dic['age']}}</b></td></tr>
{%else%}
<tr><td>{{dic['name']}}</td><td>{{dic['age']}}</td></tr>
{%endif%}
{% endfor %}
</table>
</body>
</html>
Output:
[](http://i.stack.imgur.com/8ke74.png)
|
Delete entries from json that are missing properties
Question: I have a json file that contains about 100,000 lines in the following format:
{
"00-0000045": {
"birthdate": "5/18/1975",
"college": "Michigan State",
"first_name": "Flozell",
"full_name": "Flozell Adams",
"gsis_id": "00-0000045",
"gsis_name": "F.Adams",
"height": 79,
"last_name": "Adams",
"profile_id": 2499355,
"profile_url": "http://www.nfl.com/player/flozelladams/2499355/profile",
"weight": 338,
"years_pro": 13
},
"00-0000108": {
"birthdate": "12/9/1974",
"college": "Louisville",
"first_name": "David",
"full_name": "David Akers",
"gsis_id": "00-0000108",
"gsis_name": "D.Akers",
"height": 70,
"last_name": "Akers",
"number": 2,
"profile_id": 2499370,
"profile_url": "http://www.nfl.com/player/davidakers/2499370/profile",
"weight": 200,
"years_pro": 16
}
}
I am trying to delete all the items that do not have a `gsis_name` property.
So far I have this python code, but it does not delete any values (note: I do
not want to overwrite the original file)
import json
with open("players.json") as json_file:
json_data = json.load(json_file)
for x in json_data:
if 'gsis_name' not in x:
del x
print json_data
Answer: You're deleting `x`, but `del x` only removes the local name binding; it
doesn't remove anything from the dictionary it was drawn from. (Note also that
iterating over a dict yields its keys, so `'gsis_name' not in x` tests the key
string, not the record.)
In Python, if you want to filter some items out of a collection your best bet
is to copy the items you do want into a new collection.
clean_data = {k: v for k, v in json_data.items() if 'gsis_name' in v}
and then write `clean_data` to a file with `json.dump`.
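Putting it together (a sketch; the output filename is just an example):

    import json

    with open("players.json") as json_file:
        json_data = json.load(json_file)

    # keep only the records that have a 'gsis_name' property
    clean_data = {k: v for k, v in json_data.items() if 'gsis_name' in v}

    with open("players_clean.json", "w") as out_file:
        json.dump(clean_data, out_file, indent=4)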
|
How to hide output of subprocess in Python without preventing other subprocesses from running
Question: I run Python 3.3.5 on Windows, and a very basic version of something I'm
working on looks like this:
import os
import subprocess
import time
path = r"C:\Users\Luke\Desktop\Tor_Browser\Browser\firefox.exe {0}"
url = "http://google.com"
views = 5
opened = 0
for i in range(views):
subprocess.Popen(path.format(url))
time.sleep(15)
opened = opened + 1
print ("Times opened:", opened)
os.system("taskkill /f /im firefox.exe")
What this code is supposed to do is run my firefox/tor browser with the google
url 5 times for 15 seconds each and then close it. It does just this, however,
I can't seem to stop the terminal from displaying the text: "SUCCESS: The
process "firefox.exe" with PID xxxx has been terminated."
I've tried changing the line
os.system("taskkill /f /im firefox.exe")
to
FNULL = open(os.devnull, 'w')
subprocess.Popen("taskkill /f /im firefox.exe", stdout=FNULL)
but when I do the program only opens the browser once, closes it and then
ceases to open it again but still displays the "Times opened" text.
Anyone have any ideas as to how I can stop the terminal from displaying this
text?
Thanks
-Luke
Answer: Try this:
import os
import subprocess
import time
path = r"C:\Users\Luke\Desktop\Tor_Browser\Browser\firefox.exe {0}"
url = "http://google.com"
views = 5
opened = 0
for i in range(views):
    proc = subprocess.Popen(path.format(url))  # keep a handle on the process so we can terminate it later
time.sleep(15)
opened = opened + 1
print ("Times opened:", opened)
proc.terminate() # send process term signal
|
Logging to console with Pyramid pshell
Question: In the process of developing a Pyramid web application, I've found it very
useful to use the command-line `pshell` to load the application and interact
with various code. However, log statements are not echoed on the console, and
I'm not sure why.
For instance, lets say in `common.utilities.general` I have a function:
import logging
log = logging.getLogger(__name__)
def my_util():
log.debug("Executing utility.")
return "Utility was executed."
Then in my command line:
(pyenv)rook:swap nateford$ pshell src/local.ini
2015-10-08 14:44:01,081 INFO [common.orm.pymongo_core][MainThread] PyMongo Connection to replica set successful: localhost:27017
2015-10-08 14:44:01,082 INFO [common.orm.pymongo_core][MainThread] Connected to Mongo Database = turnhere
Python 3.4.3 (default, Mar 10 2015, 14:53:35)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help" for more information.
Environment:
app The WSGI application.
registry Active Pyramid registry.
request Active request object.
root Root of the default resource tree.
root_factory Default root factory used to create `root`.
>>> from common.utilities.general import my_util
>>> my_util()
'Utility was executed.'
>>>
As you can see, there is no log to the console. I would expect:
>>> from common.utilities.general import my_util
>>> my_util()
[some date/server info][DEBUG]: Executing utility.
'Utility was executed.'
>>>
Here is the (relevant) contents of my `local.ini` file:
<Various elided application settings>
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/narr/logging.html
###
[loggers]
keys = root, common, webapp, services, sqlalchemy
[handlers]
keys = console, applog
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console, applog
[logger_common]
level = DEBUG
handlers =
qualname = common
[logger_services]
level = DEBUG
handlers =
qualname = common.services
[logger_webapp]
level = DEBUG
handlers =
qualname = webapp
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic
[handler_applog]
class = FileHandler
args = (r'%(here)s/log/app.log','a')
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
Answer: Your root logger's level is set to `INFO`, which is a higher level than
`DEBUG`, the level you log your messages with. Changing the root logger's
level to `DEBUG` should help.
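In your `local.ini` that means changing only the root logger section (a sketch):

    [logger_root]
    level = DEBUG
    handlers = console, applog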
|
Python Sine function error
Question: I have worked for a while in MATLAB but now also want to learn some Python,
and after a few days I ran into some trouble...
I have two similar functions: one matplotlib example and one of my own, based
on some examples (probably also the matplotlib one).
The trouble is that one function works and the other doesn't.
This one is working:
import matplotlib.pylab
from pylab import *
t = arange(0.0, 2.0, 0.01)
s = sin(2*pi*t)
plot(t, s)
xlabel('time (s)')
ylabel('voltage (mV)')
title('About as simple as it gets, folks')
grid(True)
savefig("test.png")
show()
This one
import math
import numpy
#import scipy
import matplotlib.pylab as plt
f0 = 50
f1 = 500
t1 = 2
t = numpy.arange(0,t1,1/44100)#[numpy.newaxis];
#print(t.getshape())
sine = math.sin(2*math.pi*f0*t)
plt.plot(t, sine)
plt.xlabel('Angle [rad]')
plt.ylabel('sin(t)')
plt.axis('tight')
plt.show()
gives the following error
> File "C:/Users/location/test_sweep.py", line 19, in sine =
> math.sin(2*math.pi*f0*t)
>
> TypeError: only length-1 arrays can be converted to Python scalars
Where does this error come from? Why do I get it with the second formulation
but not with the first?
A side question: when I uncomment the `print(t.getshape())` I also get the
following error.
> File "C:/Users/location/test_sweep.py", line 17, in print(t.getshape())
>
> AttributeError: 'numpy.ndarray' object has no attribute 'getshape'
The first error makes it look as if there are multiple arrays, but the second
says the object doesn't have that attribute. Am I misreading the error messages?
Answer: When using `numpy` arrays, you shouldn't use `math` functions. Try using
`numpy` functions:

    sine = numpy.sin(2*numpy.pi*f0*t)

As for the `getshape()` issue, as the error message says, there is no attribute
with that name. Try:
print(t.shape)
|
How do I convert python time.gmtime() to MongoDB Date
Question: I need to convert the time returned by gmtime() to a readable string. By
mistake, I populated huge data in Mongo and stored the date as a string. After the
discussion in my [previous
question](http://stackoverflow.com/questions/33015162/how-do-i-fetch-all-
mongo-data-which-is-created-greater-than-a-particular-
date/33015994?noredirect=1#comment53886215_33015994) I am just doing a regex
match to get the relevant data from Mongo. The problem now is to convert the
gmtime output, which is, say:
time.struct_time(tm_year=2015, tm_mon=10, tm_mday=9, tm_hour=9, tm_min=49, tm_sec=31, tm_wday=4, tm_yday=282, tm_isdst=0)
to this:
Tue 27 Jan 2015
This is the string format that is stored in mongo
{
"startTime" : "Sun 25 Jan 2015 07:14:26 GMT",
"endTime" : "",
"jobStatus" : "JOBCANCELLED",
"uiState" : "HISTORY",
"priority" : "SILVER"
}
Dumb way to make Mongo query work:
db.getCollection('jobsCollection').find({"startTime":{$regex: "Tue 27 Jan 2015.*"}})
I need to convert the gmtime output to the regex pattern shown in the Mongo query.
Is there any way I can achieve this?
Answer:
import time
print time.gmtime(123456)
print time.strftime("%A %d-%b-%Y", time.gmtime(time.time()))
>>>Friday 09-Oct-2015
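To match the exact format stored in Mongo ("Tue 27 Jan 2015"), use the abbreviated
weekday and month names with spaces (a sketch; the pymongo-style query dict mirrors
the shell query shown above):

    import time
    prefix = time.strftime("%a %d %b %Y", time.gmtime())  # e.g. "Fri 09 Oct 2015"
    query = {"startTime": {"$regex": "^" + prefix}}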
|
python get and convert to date from string
Question: I want to convert the following string to a date in Python. How do I go
about this?
>>> d= "1997-01-29 00:00:00+00:00"
>>> import datetime
>>> datetime.datetime.strptime(d, '%Y-%m-%d').date()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/_strptime.py", line 328, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 00:00:00+00:00
Answer: The error is pretty explicit in stating that you have an uncovered portion of
your string that wasn't expected based on what you are trying to convert.
You have provided this:
d = "1997-01-29 00:00:00+00:00"
You are trying to convert based on this pattern:
'%Y-%m-%d'
The error is stating:
ValueError: unconverted data remains: 00:00:00+00:00
So, you should simply strip off that time portion based on how you are
converting. I simply changed your string to this and it worked:
>>> d= "1997-01-29"
>>> datetime.datetime.strptime(d, '%Y-%m-%d').date()
datetime.date(1997, 1, 29)
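Alternatively, keep the string as-is and parse the date and time while slicing off
the UTC offset (a sketch; `python-dateutil`'s `parser.parse` would also handle the
offset directly):

    >>> d = "1997-01-29 00:00:00+00:00"
    >>> datetime.datetime.strptime(d[:19], '%Y-%m-%d %H:%M:%S').date()
    datetime.date(1997, 1, 29)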
|
Python customized sort
Question: I have a list of dictionary objects (read from csv) that I want to sort. E.g.
    l_data.sort(key=lambda x: (x['fname'], x['lname']))
Now I want to make the code more flexible, and want to sort on the keys based
on the input, something like:
def sort_data(l_data, keys):
l_data.sort(key=lambda x: (x[d] for d in keys))
... ...
And I get the following error:
TypeError: unorderable types: generator() < generator()
Answer: `(x[d] for d in keys)` is a generator expression (produces `generator`
object). If you want to make a tuple, wrap that generator expression with
`tuple(...)`:
def sort_data(l_data, keys):
l_data.sort(key=lambda x: tuple(x[d] for d in keys))
# code
Though your code can be greatly simplified using
[`operator.itemgetter`](https://docs.python.org/3/library/operator.html#operator.itemgetter):
from operator import itemgetter
def sort_data(l_data, keys):
l_data.sort(key=itemgetter(*keys))
# code
|
Create multiple columns in Pandas Dataframe from one function
Question: I'm a python newbie, so I hope my two questions are clear and complete. I
posted the actual code and a test data set in csv format below.
I've been able to construct the following code (mostly with the help from the
StackOverflow contributors) to calculate the Implied Volatility of an option
contract using Newton-Raphson method. The process calculates Vega when
determining the Implied Volatility. Although I'm able to create a new
DataFrame column for Implied Volatility using the Pandas DataFrame apply
method, I'm unable to create a second column for Vega. Is there a way create
two separate DataFrame columns when the function to returns IV & Vega
together?
I tried:
* `return iv, vega` from function
* `df[['myIV', 'Vega']] = df.apply(newtonRap, axis=1)`
* Got `ValueError: Shape of passed values is (56, 2), indices imply (56, 13)`
Also tried:
* `return iv, vega` from function
* `df['myIV'], df['Vega'] = df.apply(newtonRap, axis=1)`
* Got `ValueError: Shape of passed values is (56, 2), indices imply (56, 13)`
Additionally, the calculation process is slow. I imported numba and
implemented the @jit(nogil=True) decorator, but I only see a performance
improvement of 25%. The test data set in the performance test has almost
900,000 records. The run time is 2 hours and 9 minutes without numba, or with
numba but without nogil=True. The run time when using numba and
@jit(nogil=True) is 1 hour and 32 minutes. Can I do better?
    from datetime import datetime
    from math import sqrt, pi, log, exp, isnan
    from scipy.stats import norm
    from numba import jit
    import pandas as pd  # used below but missing from the original imports
    from pandas.tseries.holiday import USFederalHolidayCalendar  # used for cal below
# dff = Daily Fed Funds (Posted rate is usually one day behind)
dff = pd.read_csv('https://research.stlouisfed.org/fred2/data/DFF.csv', parse_dates=[0], index_col='DATE')
rf = float('%.4f' % (dff['VALUE'][-1:][0] / 100))
# rf = .0015 # Get Fed Funds Rate https://research.stlouisfed.org/fred2/data/DFF.csv
tradingMinutesDay = 450 # 7.5 hours per day * 60 minutes per hour
tradingMinutesAnnum = 113400 # trading minutes per day * 252 trading days per year
cal = USFederalHolidayCalendar() # Load US Federal holiday calendar
@jit(nogil=True) # nogil=True arg improves performance by 25%
def newtonRap(row):
"""Estimate Implied Volatility (IV) using Newton-Raphson method
:param row (dataframe): Options contract params for function
TimeStamp (datetime): Close date
Expiry (datetime): Option contract expiration date
Strike (float): Option strike
OptType (object): 'C' for call; 'P' for put
RootPrice (float): Underlying close price
Bid (float): Option contact closing bid
Ask (float): Option contact closing ask
:return:
float: Estimated implied volatility
"""
if row['Bid'] == 0.0 or row['Ask'] == 0.0 or row['RootPrice'] == 0.0 or row['Strike'] == 0.0 or \
row['TimeStamp'] == row['Expiry']:
iv, vega = 0.0, 0.0 # Set iv and vega to zero if option contract is invalid or expired
else:
# dte (Days to expiration) uses pandas bdate_range method to determine the number of business days to expiration
# minus USFederalHolidays minus constant of 1 for the TimeStamp date
dte = float(len(pd.bdate_range(row['TimeStamp'], row['Expiry'])) -
len(cal.holidays(row['TimeStamp'], row['Expiry']).to_pydatetime()) - 1)
mark = (row['Bid'] + row['Ask']) / 2
cp = 1 if row['OptType'] == 'C' else -1
S = row['RootPrice']
K = row['Strike']
# T = the number of trading minutes to expiration divided by the number of trading minutes in year
T = (dte * tradingMinutesDay) / tradingMinutesAnnum
# TODO get dividend value
d = 0.00
iv = sqrt(2 * pi / T) * mark / S # Closed form estimate of IV Brenner and Subrahmanyam (1988)
vega = 0.0
for i in range(1, 100):
d1 = (log(S / K) + T * (rf - d + iv ** 2 / 2)) / (iv * sqrt(T))
d2 = d1 - iv * sqrt(T)
vega = S * norm.pdf(d1) * sqrt(T)
model = cp * S * norm.cdf(cp * d1) - cp * K * exp(-rf * T) * norm.cdf(cp * d2)
iv -= (model - mark) / vega
if abs(model - mark) < 1.0e-9:
break
if isnan(iv) or isnan(vega):
iv, vega = 0.0, 0.0
# TODO Return vega with iv if add'l pandas column possible
# return iv, vega
return iv
if __name__ == "__main__":
# test function from baseline data
get_csv = True
if get_csv:
csvHeaderList = ['TimeStamp', 'OpraSymbol', 'RootSymbol', 'Expiry', 'Strike', 'OptType', 'RootPrice', 'Last',
'Bid', 'Ask', 'Volume', 'OpenInt', 'IV']
fileName = 'C:/tmp/test-20150930-56records.csv'
df = pd.read_csv(fileName, parse_dates=[0, 3], names=csvHeaderList)
else:
pass
start = datetime.now()
# TODO Create add'l pandas dataframe column, if possible, for vega
# df[['myIV', 'Vega']] = df.apply(newtonRap, axis=1)
# df['myIV'], df['Vega'] = df.apply(newtonRap, axis=1)
df['myIV'] = df.apply(newtonRap, axis=1)
end = datetime.now()
print end - start
Test Data: C:/tmp/test-20150930-56records.csv
    2015-09-30 16:00:00,AAPL151016C00109000,AAPL,2015-10-16 16:00:00,109,C,109.95,3.46,3.6,3.7,1565,1290,0.3497
    2015-09-30 16:00:00,AAPL151016P00109000,AAPL,2015-10-16 16:00:00,109,P,109.95,2.4,2.34,2.42,3790,3087,0.3146
    2015-09-30 16:00:00,AAPL151016C00110000,AAPL,2015-10-16 16:00:00,110,C,109.95,3,2.86,3,10217,28850,0.3288
    2015-09-30 16:00:00,AAPL151016P00110000,AAPL,2015-10-16 16:00:00,110,P,109.95,2.81,2.74,2.8,12113,44427,0.3029
    2015-09-30 16:00:00,AAPL151016C00111000,AAPL,2015-10-16 16:00:00,111,C,109.95,2.35,2.44,2.45,6674,2318,0.3187
    2015-09-30 16:00:00,AAPL151016P00111000,AAPL,2015-10-16 16:00:00,111,P,109.95,3.2,3.1,3.25,2031,3773,0.2926
    2015-09-30 16:00:00,AAPL151120C00110000,AAPL,2015-11-20 16:00:00,110,C,109.95,5.9,5.7,5.95,5330,17112,0.3635
    2015-09-30 16:00:00,AAPL151120P00110000,AAPL,2015-11-20 16:00:00,110,P,109.95,6.15,6.1,6.3,3724,15704,0.3842
Answer: If I understand you right, what you should be doing is returning a Series from
your function. Something like:
return pandas.Series({"IV": iv, "Vega": vega})
If you want to put the result into new columns of the same input DataFrame,
then just do:
df[["IV", "Vega"]] = df.apply(newtonRap, axis=1)
|
PIL installation on MAC
Question: I am trying to install PIL on my Mac. I am using the following command:
sudo pip install pil --allow-external pil --allow-unverified pil
But it gives the following error
Command "/Users/akira/anaconda/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-bCxMjA/pil/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-YwCGxg-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-bCxMjA/pil
Please note that I have conda installed on my system. I am not able to install
PIL by any means. Please help me with this. I have browsed many links, but no
solution works for me.
Answer: Why are you using `pip` if you have `conda` installed? I installed it simply
with the command:
conda install pil
Or if you want the latest version (since PIL is now called pillow):
conda install pillow
|
Checking Facebook username availability - Python
Question: I am currently creating an application which includes checking username
availability for social networks, one being Facebook. Using Python, how would I
do this? I have already done it for Twitter:
url = "https://twitter.com/users/username_available?username=" + username
response = requests.get(url)
data = json.loads(response.text)
return data.get("reason")
Answer: You can use Facebook's Graph API. I've tried one very simple solution:
basically I'm just checking the response for a predefined string.
import requests
name = input()
url = "https://graph.facebook.com/" + name
response = requests.get(url)
if (response.text.find("Some of the aliases you requested do not exist") == -1):
print("Exist!")
else:
print("Don't exist!")
|
Regex for absolute url
Question: I have been searching quite a while for a regex compatible with Python's `re`
module for finding all URLs in an HTML document, and I cannot find one, except
one that was only able to check whether a URL is valid or invalid (with the
`match` method). I want to do simply:
import requests
html_response = requests.get('http://example.com').text
urls = url_pattern.findall(html_response)
I suppose the needed regex (if it exists) would be complex enough to take into
account a bunch of special cases of URLs, so it cannot be a simple
one-liner.
Answer: Use **BeautifulSoup** instead. It's simple to use and lets you parse HTML
pages properly.
See this answer [How to extract URLs from a HTML page in
Python](http://stackoverflow.com/questions/15517483/how-to-extract-urls-from-
a-html-page-in-python)
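A minimal sketch of that approach (assuming `beautifulsoup4` is installed):

    import requests
    from bs4 import BeautifulSoup

    html_response = requests.get('http://example.com').text
    soup = BeautifulSoup(html_response, 'html.parser')
    # collect the href of every anchor tag that has one
    urls = [a['href'] for a in soup.find_all('a', href=True)]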
|
How to Properly Use Arithmetic Operators Inside Comprehensions?
Question: I am dealing with a simple csv file that contains three columns and four rows
of numeric data. The csv data file looks like the following:
Col1,Col2,Col3
1,2,3
2,2,3
3,2,3
4,2,3
I am having a hard time figuring out how to make my Python program subtract the
average value of the first column "Col1" from each value in the same column.
For illustration, the output should give the following values for 'Col1':
1 - 2.5 = -1.5
2 - 2.5 = -0.5
3 - 2.5 = 0.5
4 - 2.5 = 1.5
Here is my attempt, which gives me (TypeError: unsupported operand type(s) for
-: 'str' and 'float') at the last print statement, which contains the
comprehension.
import csv
# Opening the csv file
file1 = csv.DictReader(open('columns.csv'))
file2 = csv.DictReader(open('columns.csv'))
# Do some calculations
NumOfSamples = open('columns.csv').read().count('\n')
SumData = sum(float(row['Col1']) for row in file1)
Aver = SumData/(NumOfSamples - 1) # compute the average of the data in 'Col1'
# Subtracting the average from each value in 'Col1'
data = []
for row in file2:
data.append(row['Col1'])
# Print the results
print Aver
print [e-Aver for e in data] # trying to use comprehension to subtract the average from each value in the list 'data'
I do not know how to solve this problem! Any idea how to make the
comprehension work as it is supposed to?
Answer: The issue in your code is that in the case of the `data` list (`file2`), you are
reading strings from the file and storing those strings in the `data` list.
Hence, when you later try `[e-Aver for e in data]`, it errors
out because you are trying to subtract a float from a string.
You should convert to `float` or `int` before storing into the `data` list.
Example -
data = []
for row in file2:
data.append(float(row['Col1']))
|
Trying to install MongoDB with Django
Question: I am trying to configure a MongoDB database for Django. I tried a lot of
tutorials but still I am not able to configure it.
I followed following tutorials commands.
<http://2buntu.com/articles/1451/installing-django-and-mongodb-in-your-
virtualenv/>
<http://django-mongodb-engine.readthedocs.org/en/latest/topics/setup.html>
<http://docs.mongodb.org/manual/tutorial/write-a-tumblelog-application-with-
django-mongodb-engine/>
I added the following code to my `settings.py` file.
    DATABASES = {
        'default': {
            'ENGINE': 'django_mongodb_engine',
            'NAME': 'product',
        }
    }
I tried with different versions too, but still I could not run this command:
python manage.py runserver
I got following error.
$ python manage.py runserver
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
This is my current installed project packages.
$ pip list
django-mongodb-engine (0.6.0)
djangotoolbox (1.8.0)
pip (7.1.2)
pymongo (3.0.3)
setuptools (18.2)
wheel (0.24.0)
I would appreciate any expert help with this; I could not find any up-to-date
article on it.
I want to do this project using `django-1.8`, `python 3.x` and `MongoDB`. I
code on a `linux-ubuntu(14.04)` machine; my system has both Python 2.x and 3.x
versions.
========================= After installing Django 1.8 ===================
$ python manage.py runserver
/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/djangotoolbox/db/utils.py:1: RemovedInDjango19Warning: The django.db.backends.util module has been renamed. Use django.db.backends.utils instead.
from django.db.backends.util import format_number
/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/djangotoolbox/db/utils.py:1: RemovedInDjango19Warning: The django.db.backends.util module has been renamed. Use django.db.backends.utils instead.
from django.db.backends.util import format_number
Performing system checks...
System check identified no issues (0 silenced).
Unhandled exception in thread started by <function wrapper at 0x7fcf8b16ce60>
Traceback (most recent call last):
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 223, in wrapper
fn(*args, **kwargs)
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 112, in inner_run
self.check_migrations()
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 164, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 19, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 47, in __init__
self.build_graph()
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 180, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations
self.ensure_schema()
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 52, in ensure_schema
with self.connection.schema_editor() as editor:
File "/home/umayanga/Desktop/mongoProject/myproject/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 502, in schema_editor
'The SchemaEditorClass attribute of this database wrapper is still None')
NotImplementedError: The SchemaEditorClass attribute of this database wrapper is still None
now pip list.
Django (1.8)
django-mongodb-engine (0.6.0)
djangotoolbox (1.8.0)
pip (7.1.2)
pymongo (3.0.3)
setuptools (18.2)
wheel (0.24.0)
Answer: After a big effort, I managed to configure Django. Some warnings appeared
while installing, and I think my installation order may be incorrect in places.
I post this because it may help others, and if I did something wrong I would
welcome expert advice.
Installed packages:
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/test2$ pip list
Django (1.6.11)
django-dbindexer (1.6.1)
django-mongodb-engine (0.6.0)
djangotoolbox (1.8.0)
pip (7.1.2)
pymongo (3.0.3)
setuptools (18.2)
wheel (0.24.0)
Ubuntu terminal code
umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ ls -l
total 0
umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install virtualenv
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /usr/local/lib/python2.7/dist-packages
Cleaning up...
umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ virtualenv myprojec
New python executable in myprojec/bin/python
Installing setuptools, pip, wheel...done.
umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ source myprojec/bin/activate
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install https://github.com/django-nonrel/django/tarball/nonrel-1.6
Collecting https://github.com/django-nonrel/django/tarball/nonrel-1.6
/home/umayanga/Desktop/mongoProject/myprojec/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/home/umayanga/Desktop/mongoProject/myprojec/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading https://github.com/django-nonrel/django/tarball/nonrel-1.6
| 6.7MB 1.9MB/s
Building wheels for collected packages: Django
Running setup.py bdist_wheel for Django
Stored in directory: /home/umayanga/.cache/pip/wheels/89/cd/89/64475e53eef52b22b711705322a36352f2f979fdcef0e39e8a
Successfully built Django
Installing collected packages: Django
Successfully installed Django-1.6.11
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ ls -l
total 4
drwxrwxr-x 6 umayanga umayanga 4096 Oct 10 15:26 myprojec
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ cd myprojec/
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ ls -l
total 20
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:26 bin
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:25 include
drwxrwxr-x 3 umayanga umayanga 4096 Oct 10 15:25 lib
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:25 local
-rw-rw-r-- 1 umayanga umayanga 60 Oct 10 15:26 pip-selfcheck.json
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ (myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ cd ..
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install pymongo
Collecting pymongo
/home/umayanga/Desktop/mongoProject/myprojec/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Installing collected packages: pymongo
Successfully installed pymongo-3.0.3
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ sudo pip install git+https://github.com/django-nonrel/djangotoolbox
Downloading/unpacking git+https://github.com/django-nonrel/djangotoolbox
Cloning https://github.com/django-nonrel/djangotoolbox to /tmp/pip-Lloitv-build
Running setup.py (path:/tmp/pip-Lloitv-build/setup.py) egg_info for package from git+https://github.com/django-nonrel/djangotoolbox
Requirement already satisfied (use --upgrade to upgrade): djangotoolbox==1.8.0 from git+https://github.com/django-nonrel/djangotoolbox in /usr/local/lib/python2.7/dist-packages
Cleaning up...
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install django-dbindexer
Collecting django-dbindexer
/home/umayanga/Desktop/mongoProject/myprojec/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading django-dbindexer-1.6.1.tar.gz
Building wheels for collected packages: django-dbindexer
Running setup.py bdist_wheel for django-dbindexer
Stored in directory: /home/umayanga/.cache/pip/wheels/09/2f/ea/01d26e4ffc98cd2ed54b92f31a82aecccb8e7b5c9e3b28a8ca
Successfully built django-dbindexer
Installing collected packages: django-dbindexer
Successfully installed django-dbindexer-1.6.1
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install git+https://github.com/django-nonrel/djangotoolbox
Collecting git+https://github.com/django-nonrel/djangotoolbox
Cloning https://github.com/django-nonrel/djangotoolbox to /tmp/pip-2AUZTq-build
Installing collected packages: djangotoolbox
Running setup.py install for djangotoolbox
Successfully installed djangotoolbox-1.8.0
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ pip install git+https://github.com/django-nonrel/mongodb-engine
Collecting git+https://github.com/django-nonrel/mongodb-engine
Cloning https://github.com/django-nonrel/mongodb-engine to /tmp/pip-63Fwrm-build
Requirement already satisfied (use --upgrade to upgrade): pymongo>=2.8 in ./myprojec/lib/python2.7/site-packages (from django-mongodb-engine==0.6.0)
Requirement already satisfied (use --upgrade to upgrade): djangotoolbox>=1.6.0 in ./myprojec/lib/python2.7/site-packages (from django-mongodb-engine==0.6.0)
Installing collected packages: django-mongodb-engine
Running setup.py install for django-mongodb-engine
Successfully installed django-mongodb-engine-0.6.0
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject$ cd myprojec
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ django-admin.py startproject myproject
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ ls -l
total 24
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:26 bin
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:25 include
drwxrwxr-x 3 umayanga umayanga 4096 Oct 10 15:25 lib
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:25 local
drwxrwxr-x 3 umayanga umayanga 4096 Oct 10 15:36 myproject
-rw-rw-r-- 1 umayanga umayanga 60 Oct 10 15:26 pip-selfcheck.json
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ (myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec$ cd myproject/
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec/myproject$
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec/myproject$ ls -l
total 8
-rwxrwxr-x 1 umayanga umayanga 252 Oct 10 15:36 manage.py
drwxrwxr-x 2 umayanga umayanga 4096 Oct 10 15:36 myproject
(myprojec)umayanga@umayanga-HP-630-Notebook-PC:~/Desktop/mongoProject/myprojec/myproject$ python manage.py runserver
Validating models...
0 errors found
October 10, 2015 - 10:06:57
Django version 1.6.11, using settings 'myproject.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
[10/Oct/2015 10:07:03] "GET / HTTP/1.1" 200 1757
[10/Oct/2015 10:08:42] "GET / HTTP/1.1" 200 1757
[10/Oct/2015 10:08:48] "GET /admin HTTP/1.1" 301 0
[10/Oct/2015 10:08:48] "GET /admin/ HTTP/1.1" 200 1865
|
anaconda python: could not find or load the Qt platform plugin "xcb"
Question: On my OS (Linux Mint Debian Edition 2), besides the system
python (_/usr/bin/python_) installed by **apt**, I also installed
**anaconda**. But I've encountered a problem running the following code with
the **anaconda** python:
# test.py
import matplotlib.pyplot as plt
import numpy as np
x = np.array([0, 1])
plt.scatter(x, x)
plt.show()
The error is
> This application failed to start because it could not find or load the Qt
> platform plugin "xcb".
>
> Reinstalling the application may fix this problem.
>
> Aborted
But if I try with the system python, i.e., `/usr/bin/python test.py`, it works
correctly.
Then I tried both IPythons, the system's and anaconda's; the result is the same
as before: the anaconda ipython kernel died.
And I tried add the ipython magic `%matplotlib inline` into the code, the
anaconda ipython works correctly now. But if I replace the `%matplotlib
inline` with `%pylab`, the anaconda ipython died again.
Note: I use the python 2.7. System ipython's version is 2.3, anaconda
ipython's version is 3.2.
Answer: I had the same problem with Linux Mint 17, 64-bit. It was solved after 4 hours
of searching the net! You need to run these commands in a terminal from the
folder /anaconda2/bin:
sudo ./conda remove qt
sudo ./conda remove pyqt
sudo ./conda install qt
sudo ./conda install pyqt
Hope it helps!
|
opencv error : error while displaying rectangle
Question: I am not able to rectify the error. I am following the official OpenCV Python
tutorial. I am passing a video here and doing meanshift.
source: <https://opencv-python-
tutroals.readthedocs.org/en/latest/py_tutorials/py_video/py_meanshift/py_meanshift.html#meanshift>
Below is my code:
import numpy as np
import cv2
cap = cv2.VideoCapture("slow.mp4")
# take first frame of the video
ret,frame = cap.read()
# setup initial location of window
r,h,c,w = 250,90,400,125 # simply hardcoded the values
track_window = (c,r,w,h)
# set up the ROI for tracking
roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60.,32.)), np.array((180.,255.,255.)))
roi_hist = cv2.calcHist([hsv_roi],[0],mask,[180],[0,180])
cv2.normalize(roi_hist,roi_hist,0,255,cv2.NORM_MINMAX)
# Setup the termination criteria, either 10 iteration or move by atleast 1 pt
term_crit = ( cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1 )
while(1):
ret ,frame = cap.read()
if ret == True:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
dst = cv2.calcBackProject([hsv],[0],roi_hist,[0,180],1)
# apply meanshift to get the new location
ret, track_window = cv2.meanShift(dst, track_window, term_crit)
# Draw it on image
x,y,w,h = track_window
img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
cv2.imshow('img2',img2)
k = cv2.waitKey(60) & 0xff
if k == 27:
break
else:
cv2.imwrite(chr(k)+".jpg",img2)
else:
break
cv2.destroyAllWindows()
cap.release()
cv2.imshow('img2',img2)
This line raises the error; execution stops here. I have also done some
debugging.
The error is the following:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /build/buildd/opencv-2.4.8+dfsg1/modules/highgui/src/window.cpp, line 269
Traceback (most recent call last):
File "programs/test14.py", line 36, in <module>
cv2.imshow('img2',img2)
cv2.error: /build/buildd/opencv-2.4.8+dfsg1/modules/highgui/src/window.cpp:269: error: (-215) size.width>0 && size.height>0 in function imshow
Answer: In OpenCV 2.4.x the
[`rectangle`](http://docs.opencv.org/modules/core/doc/drawing_functions.html#cv2.rectangle)
function returns `None`: it modifies the image in-place. You could use OpenCV 3
or slightly modify your code:
cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
cv2.imshow('img2',frame)
instead of
img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
cv2.imshow('img2',img2)
|
Issue with scraping data with foreign characters
Question: I have written a Python script to scrape data from a Chinese site.
According to its head the charset is "gb2312", and I have also checked with the
"chardet.detect()" Python library that this is correct, but I still get wrong
characters. Here is my code:
import csv
import requests
import cssselect
from lxml import html
url = "http://www.example.com/"
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36'}
mainPage = requests.get(url, headers = header)
tree = html.fromstring(mainPage.content)
with open('jd(test).csv', 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile, delimiter=',')
main_info = ["URL", "CAT-Name Original", "CAT-Name Converted"]
csvwriter.writerow(main_info)
for link in tree.cssselect("div#allsort div div.m div.mc dl dd em a"):
urls = [link.get('href')]
text = link.text
print (urls)
convertedText = text.encode('gb2312')
print (text)
print (convertedText)
row_info = [urls, text, convertedText]
csvwriter.writerow(row_info)
OUTPUT:
['http://example.com/']
戏曲综艺
b'\xcf\xb7\xc7\xfa\xd7\xdb\xd2\xd5'
Answer: What you ask cannot work; encoding is for character sets only, you want
**translation**.
You can get it using [py-translate](http://pythonhosted.org/py-
translate/devs/api.html#translator-co-routine), which is an interface to
Google but apparently free to use.
Python 3.4.3 (default, Mar 27 2015, 02:30:53) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from translate import translator
>>> translator('zh-Tw', 'en', '戏曲综艺')
[[['Opera Arts', '戏曲综艺', 0]], 'zh-TW']
>>>
I am not wholly familiar with the tool, so you'd better check out the
license.
> what do i need to do to get it translated without using any third party?
By definition, if you do not want to use _any third party_ , your only option
is to _learn Chinese_. In some outfits that might be a workable option. Case
in point: a firm I worked for needed some Chinese translations. Then more. And
more. First they hired a translator. Then a Chinese intern, and organized a
Chinese course for two developers. Depending on the amount of data, it could
be economically convenient.
But I think you want some kind of _free (as in beer) tool_. The problem being
that most tools are either quite amateurish and will at best spew
[Engrish](https://en.wikipedia.org/wiki/Engrish), or are **not free** in some
form or other; they might be trials, or "personal use". The fact is that
people developing these tools need to put in _a lot of work_ , and they
understandably seek some sort of return on their investment. You should at
least ask yourself, _what should I give back?_ , and, _what am I giving back
(knowingly or not)?_. Unfortunately - until a post-scarcity singularity - this
applies to _everything_.
You can try e.g. the [Baidu service](http://translate.baidu.com/) that has no
"reasonable or personal use" limit that I can readily see, but has his ideas
on your privacy ("Baidu hereby reminds users that the content you input into
Baidu Translate _will not be considered as your private personal
information_."). As long as you do not translate anything...
_controversial_... there should be no problems.
|
Python script switching between matplotlib inline and QT backend
Question: I would like to be able to switch between displaying matplotlib graphs inline
or using the Qt backend in a script file, based on a variable called inlinemode,
as in the code fragment below:
import matplotlib.pylab as plt
inlinemode = False
if inlinemode:
print "plot will be inline..."
else:
print "plot will be qt..."
plt.switch_backend('qt4agg')
plt.figure()
plt.plot(range(10))
My IPython console starts in inline mode by default, and executing the code
fragment above still produces an inline graph instead of a Qt window. If it
were possible to issue the IPython magic %matplotlib inline or %matplotlib qt
in the if block, that would accomplish the task. However, from what I have
been able to gather, it is not possible to invoke these magics from a script
file.
Any suggestions would be greatly appreciated!
(FYI: I am running from Spyder in Anaconda Python 2.7 on Windows 10)
Answer: This is possible using the following block, which will allow the script to be
run in IPython using `%run` or directly through standard python.
try:
import IPython
shell = IPython.get_ipython()
shell.enable_matplotlib(gui='inline')
except:
pass
This is what the magic `%matplotlib inline` is actually doing under the hood.
You can change the keyword argument to be `gui='qt'` to select a different
backend.
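Applied to the original fragment, the whole script becomes (a sketch; outside
IPython the `try` block silently does nothing and the normal backend selection
applies):

    import matplotlib.pylab as plt

    inlinemode = False

    try:
        import IPython
        shell = IPython.get_ipython()
        shell.enable_matplotlib(gui='inline' if inlinemode else 'qt')
    except Exception:
        pass

    plt.figure()
    plt.plot(range(10))
    plt.show()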
|
TypeError: 'list' does not support the buffer interface
Question: I have a program that asks the user 10 questions and then saves the answers to
a file. All of that works perfectly; however, I am then trying to read a set of
files that have been saved, which raises the aforementioned TypeError.
The code that is causing the problem is this:
def readfile():
classtoload = input("Which class would you like to load: ")
gclass = "Class_" + classtoload
script_dir = os.path.dirname(os.path.abspath(__file__))
globclass = os.path.join(script_dir, gclass)
list_of_files = glob.glob(globclass + '\*.pkl')
files = []
files.append(list_of_files)
print(files)
for s in files:
load = os.path.dirname(os.path.abspath(s))
pickle.load(load)
The full error is this:
Traceback (most recent call last):
File "C:\Users\Liam\Documents\LIWA HW\Python programs\maths question.py", line 102, in <module>
ts()
File "C:\Users\Liam\Documents\LIWA HW\Python programs\maths question.py", line 10, in ts
readfile()
File "C:\Users\Liam\Documents\LIWA HW\Python programs\maths question.py", line 96, in readfile
load = os.path.dirname(os.path.abspath(s))
File "C:\Users\Liam\Documents\LIWA HW\python\lib\ntpath.py", line 547, in abspath
path = _getfullpathname(path)
TypeError: 'list' does not support the buffer interface
My full code is this:
import random, re, pickle, os, glob
def ts():
tors = ""
while tors not in ["T","S"]:
tors = input("are you a teacher or student: ").upper()
if tors == "S":
name_enter()
else:
readfile()
def name_enter():
global forename, surname
forename, surname = "", ""
while forename == "" or len(forename) > 25 or not re.match(r'^[A-Za-z0-9-]*$', forename):
forename = input("Please enter your forename: ")
while surname == "" or len(surname) > 30 or not re.match(r'^[A-Za-z0-9-]*$', surname):
surname = input("Please enter your surname: ")
enter_class()
def enter_class():
global class_choice
class_choice = None
while class_choice not in ["1","3","2"]:
class_choice = input("Please enter you class (1, 2, 3): ")
print("\nClass entered was " + class_choice)
mathsquestion()
def mathsquestion():
global qa, score
qa, score = 0, 0
for qa in range(0,10):
qa = qa + 1
print("The question you are currently on is: ", qa)
n1, n2, userans = random.randrange(12), random.randrange(12), ""
opu = random.choice(["-","+","x"])
if opu == "+":
while userans == "" or not re.match(r'^[0-9,-]*$', userans):
userans = input("Please solve this: %d" % (n1) + " + %d" % (n2) + " = ")
prod = n1 + n2
elif opu == "-":
while userans == "" or not re.match(r'^[0-9,-]*$', userans):
userans = input("Please solve this: %d" % (n1) + " - %d" % (n2) + " = ")
prod = n1 - n2
else:
while userans == "" or not re.match(r'^[0-9,-]*$', userans):
userans = input("Please solve this: %d" % (n1) + " x %d" % (n2) + " = ")
prod = n1 * n2
userans = int(userans)
prod = int(prod)
if prod == userans:
score = score + 1
print("Well done, you have got the question correct. Your score is now: %d" % (score))
else:
print("Unfortunatly that is incorrect. The answer you entered was %d" % (userans) + " and the answer is actually %d" % (prod))
print("Your final score is: %d" % (score))
savefile()
def savefile():
file = forename + "_" + surname + ".pkl"
script_dir = os.path.dirname(os.path.abspath(__file__))
dest_dir = os.path.join(script_dir,'Class_' + class_choice)
scoresave = {"%d" % score}
try:
os.makedirs(dest_dir)
except OSError:
pass
path = os.path.join(dest_dir, file)
with open(path, 'ab') as stream:
pickle.dump(scoresave, stream)
lists = []
infile = open(path, 'rb')
while True:
try:
lists.append(pickle.load(infile))
except EOFError:
break
obj=lists[0]
while len(lists) > 3:
lists.pop(0)
print(lists)
infile.close()
def readfile():
classtoload = input("Which class would you like to load: ")
gclass = "Class_" + classtoload
script_dir = os.path.dirname(os.path.abspath(__file__))
globclass = os.path.join(script_dir, gclass)
list_of_files = glob.glob(globclass + '\*.pkl')
files = []
files.append(list_of_files)
print(files)
for s in files:
load = os.path.dirname(os.path.abspath(s))
pickle.load(load)
Answer: `files` is a list of lists, because you _append_ the list of names to it:
list_of_files = glob.glob(globclass + '\*.pkl')
files = []
files.append(list_of_files)
You now have a list containing one element, another list. So when iterating
over `files`, you get that one list again:
for s in files:
load = os.path.dirname(os.path.abspath(s))
which fails because `s` should be a string.
Use `list.extend()` instead:
files.extend(list_of_files)
or better still, just use `list_of_files` directly:
for s in list_of_files:
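Note that the loop body will then fail for a different reason: `pickle.load`
expects an open file object, not a path or directory name. A sketch of the
corrected loop:

    for s in list_of_files:
        with open(s, 'rb') as stream:
            data = pickle.load(stream)  # one pickled object per file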
|
Python - String Splitting
Question: Writing `'1000011'.split('1')` gives
['', '0000', '', '']
What I want is:
['1', '0000', '11']
How do I achieve this?
Answer: The `str.split(sep)` method does not add the `sep` delimiter to the output
list.
You want to _group_ string characters e.g. using
[`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby):
In: import itertools
In: [''.join(g) for _, g in itertools.groupby('1000011')]
Out: ['1', '0000', '11']
We didn't specify the `key` argument, so the default `key` function just
returns each element unchanged; `g` is then the run of equal consecutive
characters.
Python 3.5 HookManager SystemError: PyEval_EvalFrameEx
Question: I'm new here and I hope I'm not making any mistakes!
I'm trying to make this simple code work. I tested it on Python 3.4 32-bit and
it worked, but I need to use it on Python 3.5.0 64-bit, where I get this error
that I don't know how to fix.
import pythoncom, pyHook
def OnKeyboardEvent(event):
key=chr(event.Ascii)
print(key)
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
pythoncom.PumpMessages()
I get the pressed key printed on the screen and then I get this error:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python 3.5\lib\site-packages\pyHook\HookManager.py", line 348, in KeyboardSwitch
event = KeyboardEvent(msg, vk_code, scan_code, ascii, flags, time, hwnd, win_name)
File "C:\Python 3.5\lib\site-packages\pyHook\HookManager.py", line 208, in __init__
HookEvent.__init__(self, msg, time, hwnd, window_name)
SystemError: PyEval_EvalFrameEx returned a result with an error set
TypeError: an integer is required (got type NoneType)
I don't really know what to do!
Answer: Such a simple question without any answer? Your callback function needs to
return an integer value: pyHook uses the return value to decide whether the
event is passed on to other handlers (truthy) or swallowed (falsy).
It should look like this.
    def OnKeyboardEvent(event):
        key=chr(event.Ascii)
        print(key)
        return True  # truthy passes the keystroke on; returning 0 would block it
I know it's been a while, but I hope it helps someone else!
Hope my English is understandable since it's not my first language!
|
Delete last column from file with Python
Question: I am basically trying to do the same thing as this guy but with Python: [How
can i delete last column from my text
file](http://stackoverflow.com/questions/26940261/how-can-i-delete-last-
column-from-my-text-file)
How can I remove my last column?
I tried to load the text using `numpy.loadtxt` first, and then just ignore the
final array but it wouldn't load at all because the last column contains
strings, instead of floats.
Answer: The `numpy.loadtxt` function has a parameter `usecols`. From [the
documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html):
numpy.loadtxt(
fname,
dtype=<type 'float'>,
comments='#',
delimiter=None,
converters=None,
skiprows=0,
usecols=None,
unpack=False,
ndmin=0
)
Load data from a text file.
...
usecols : sequence, optional Which columns to read, with 0 being
the first. For example, usecols = (1,4,5) will extract the
2nd, 5th and 6th columns. The default, None, results in all
columns being read.
Of course this presumes you know in advance how many columns are in the file.
For example, given the following file `test.txt`:
100 test1
200 test2
300 test3
Loading with `numpy.loadtxt("test.txt")` produces this error.
$ python -c "import numpy as np;np.loadtxt('test.txt')"
Traceback (most recent call last):
...
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: could not convert string to float: test1
However using the `usecols` parameter works fine:
$ python -c "import numpy as np;print np.loadtxt('test.txt', usecols=(0,))"
[ 100. 200. 300.]
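If the number of columns is not known in advance, a small sketch that peeks at
the first line to count them before dropping the last one:

    import numpy as np

    # Count the whitespace-separated columns on the first line,
    # then load everything except the last column.
    with open('test.txt') as f:
        ncols = len(f.readline().split())
    data = np.loadtxt('test.txt', usecols=range(ncols - 1))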
|
About xlwings, the API for Excel VBA to use Python
Question: When I follow the instruction of xlwings up to:
Sub MyMacro()
RunPython ("import mymodule; mymodule.rand_numbers()")
End Sub
It gives me error like: _can't find mymodule_. My understanding is this should
be an object from xlwings.
Why can't Excel find it and how should I correct this error?
Answer: Set the `PYTHONPATH` setting to the folder where `mymodule` is. Per default it
expects it to be in the same directory as your Excel file, see the
[docs](http://docs.xlwings.org/vba.html#settings).
|
Automating a wxPython main menu setup?
Question: I am trying to find a way to condense and automate the construction of a main
menu (underneath the title bar, with _file_ , _edit_ , _help_ , etc.) in
wxPython.
Writing out each and every menu item is direct, but I notice I repeat myself a
lot, between Appending, sorting IDs, etc. Followed by other unique pits like
if I want to add an icon to a specific menu, or if I have submenus and they
may have submenus, etc. Without one consistent way to itemize everything,
simply by adding information to maybe a list or dictionary, or a combo of the
two, my wx.Frame object will get very dense.
I can't see a clean an organized way of doing that, short of a 3-dimensional
array. And even then, I don't know how to organize that 3D array uniformly so
every item is ready to go.
Here is what I have so far (pardon any indentation errors; it works fine on
me):
class frameMain(wx.Frame):
"""The main application frame."""
def __init__(self,
parent=None,
id=-1,
title='TITLE',
pos=wx.DefaultPosition,
size=wx.Size(550, 400),
style=wx.DEFAULT_FRAME_STYLE):
"""Initialize the Main frame structure."""
wx.Frame.__init__(self, parent, id, title, pos, size, style)
self.Center()
self.CreateStatusBar()
self.buildMainMenu()
def buildMainMenu(self):
"""Creates the main menu at the top of the screen."""
MainMenu = wx.MenuBar()
# Establish menu item IDs.
menuID_File = ['exit']
menuID_Help = ['about']
menuID_ALL = [menuID_File,
menuID_Help]
# Make a dictionary of the menu item IDs.
self.menuID = {}
for eachmenu in menuID_ALL:
for eachitem in eachmenu:
self.menuID[eachitem] = wx.NewId()
# Create the menus.
MM_File = wx.Menu()
FILE = {}
MM_File.AppendSeparator()
FILE['exit'] = MM_File.Append(self.menuID['exit'],
'Exit',
'Exit application.')
self.Bind(wx.EVT_MENU, self.onExit, FILE['exit'])
MainMenu.Append(MM_File, 'File')
MM_Help = wx.Menu()
HELP = {}
MM_Help.AppendSeparator()
HELP['about'] = MM_Help.Append(self.menuID['about'],
'About',
'About the application.')
self.Bind(wx.EVT_MENU, self.onAbout, HELP['about'])
MainMenu.Append(MM_Help, 'Help')
# Install the Main Menu.
self.SetMenuBar(MainMenu)
I tried using the list-to-dictionary thing to make it so I don't need a
specific index number when referring to an ID, just write in a keyword and it
gets the ID. I write it once, and it's applied across the rest of the
function.
Notice how I have to make a whole new variable and repeat itself, like
MM_File, MM_Edit, MM_Help, and each time I do I put in similar information to
append and bind. And keep in mind, some of the menus may need Separators, or
have menus in menus, or I may want to use a sprite next to any of these menu
items, so I'm trying to figure how to organize my arrays to do that.
What is the appropriate way to organize this into a concise system so it
doesn't bloat this class?
Answer: There are several approaches you can take with this. You can put the menu
generation code into a helper function if you like. Something like this should
work:
def menu_helper(self, menu, menu_id, name, help, handler, sep=True):
menu_obj = wx.Menu()
if sep:
menu_obj.AppendSeparator()
menu_item = menu_obj.Append(menu_id, name, help)
self.Bind(wx.EVT_MENU, handler, menu_item)
self.MainMenu.Append(menu_obj, menu)
Here's a complete example:
import wx
class frameMain(wx.Frame):
"""The main application frame."""
def __init__(self,
parent=None,
id=-1,
title='TITLE',
pos=wx.DefaultPosition,
size=wx.Size(550, 400),
style=wx.DEFAULT_FRAME_STYLE):
"""Initialize the Main frame structure."""
wx.Frame.__init__(self, parent, id, title, pos, size, style)
self.Center()
self.CreateStatusBar()
self.buildMainMenu()
def buildMainMenu(self):
"""Creates the main menu at the top of the screen."""
self.MainMenu = wx.MenuBar()
# Establish menu item IDs.
menuID_File = 'exit'
menuID_Help = 'about'
menuID_ALL = [menuID_File,
menuID_Help]
# Make a dictionary of the menu item IDs.
self.menuID = {item: wx.NewId() for item in menuID_ALL}
# Create the menus.
self.menu_helper('File', self.menuID['exit'], 'Exit',
'Exit application', self.onExit)
self.menu_helper('Help', self.menuID['about'], 'About',
'About the application.', self.onAbout)
# Install the Main Menu.
self.SetMenuBar(self.MainMenu)
def menu_helper(self, menu, menu_id, name, help, handler, sep=True):
"""
"""
menu_obj = wx.Menu()
if sep:
menu_obj.AppendSeparator()
menu_item = menu_obj.Append(menu_id, name, help)
self.Bind(wx.EVT_MENU, handler, menu_item)
self.MainMenu.Append(menu_obj, menu)
#----------------------------------------------------------------------
def onExit(self, event):
pass
def onAbout(self, event):
pass
if __name__ == '__main__':
app = wx.App(False)
frame = frameMain()
frame.Show()
app.MainLoop()
Or you could create a class that handles all the menu creation. You could also
create a config file that has all this information in it that you read to
create your menu. Another alternative would be to use XRC, although I
personally find that a bit limiting.
|
Join 2 trees using Python (dendroPy or other libraries)
Question: I am working on phylogenies by using Python libraries (Bio.Phylo and
DendroPy).
I have to import 2 trees in Newick format (this is obviously not the difficult
part) and join them together, more precisely I have to add one tree at one
tip/leaf of another.
I have tried with `add_child` and `new_child` methods from _DendroPy_ , but
without success.
How would I solve this issue?
Answer: Without resorting to anything fancier than an editor, you could find your "tip"
in tree1 and insert the string that is tree2 at that point (Newick trees being
nested sets and all).
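A minimal sketch of that string splicing, with hypothetical tree strings and a
leaf name assumed to be unique in tree1:

    # Graft tree2 onto the leaf "tipA" of tree1 by plain string replacement.
    tree1 = "((tipA,tipB),tipC);"
    tree2 = "(x,(y,z))"  # the subtree to insert, without its trailing ";"

    merged = tree1.replace("tipA", tree2)
    print(merged)  # (((x,(y,z)),tipB),tipC);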
|
python merge panda dataframes keep dates
Question: I'd like to merge two dataframes together but add in a column based on some
logic. A simplified example of my dataframes are below:
DF_1:
domain ttl nameserver file_date
fakedomain.com 86400 ns1.fakedomain.com 8/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 8/8/2008
DF_2:
domain ttl nameserver file_date
fakedomain.com 86400 ns1.fakedomain.com 9/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 9/8/2008
What I want to do is merge both of these dataframes into a single data frame
that looks like this:
DF_2:
domain ttl nameserver first seen last seen
fakedomain.com 86400 ns1.fakedomain.com 8/8/2008 9/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 8/8/2008 9/8/2008
I can't find a way to merge them and keep the dates. I also want to make sure
the dates are in the correct fields. It's important to note that I'm creating
the dates from regex pulled from the file names. I will also be running this
script continuously over time, so the first seen date will only change when
something else changes, e.g. the domain changes its name server.
The only way I can think of is to merge them with renamed date columns, then
loop over the entire dataframe sorting the dates appropriately but this seems
inefficient.
Answer: I am not sure which process produces these frames or whether it is a
continuous stream of new data, but to just comment on the substance of the
question, you could do like so:
import pandas as pd
from StringIO import StringIO
s1="""
domain ttl nameserver file_date
fakedomain.com 86400 ns1.fakedomain.com 8/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 8/8/2008
"""
s2="""
domain ttl nameserver file_date
fakedomain.com 86400 ns1.fakedomain.com 9/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 9/8/2008
"""
Then,
df1 = pd.DataFrame.from_csv(StringIO(s1), sep='\s+',parse_dates=['file_date'] )
df2 = pd.DataFrame.from_csv(StringIO(s2), sep='\s+',parse_dates=['file_date'] )
And assuming that you want the unique combination of `domain`, `ttl`, and
`nameserver`, along with first and last observation for that combination,
concatenate the frames and grab min and max dates.
result = pd.concat([df1,df2]).reset_index().groupby(['domain','ttl','nameserver']).file_date.agg({'first_seen':'min','last_seen':'max'})
And finally:
result
first_seen last_seen
domain ttl nameserver
fakedomain.com 86400 ns1.fakedomain.com 8/8/2008 9/8/2008
fakedomainz.com 86400 ns1.fakedomainz.com 8/8/2008 9/8/2008
|
Error using jsbeautifier in python with unicode text
Question: I use the following code to beautify a js file (with jsbeautifier module)
using python (3.4)
import jsbeautifier
def write_file(output, fn):
file = open(fn, "w")
file.write(output)
file.close()
def beautify_file():
res = jsbeautifier.beautify_file("myfile.js")
write_file(res, "myfile-exp.js")
print("beautify_file done")
def main():
beautify_file()
print("done")
pass
if __name__ == '__main__':
main()
The file contains the following contents:
function MyFunc(){
return {Language:"Мова",Theme:"ТÑма"};
}
When I run the python code, I get the following error:
'charmap' codec can't decode byte 0x90 in position 43: character maps to <undefined>
Can someone guide me as to how to handle unicode/utf-8 charsets with the
beautifier?
Thanks
Answer: It's hard to tell without a full stack trace, but it looks like jsbeautifier
isn't fully Unicode aware.
Try one of the following:
1. Decode js file to Unicode:
with open("myfile.js", "r", encoding="UTF-8") as myfile:
input_string = myfile.read()
res = jsbeautifier.beautify(input_string)
or, if that fails
2. Open file as binary:
with open("myfile.js", "rb") as myfile:
input_string = myfile.read()
res = jsbeautifier.beautify(input_string)
In addition, you may run into issues when writing. You really need to set the
encoding on the output file:
file = open(fn, "w", encoding="utf-8")
|
`document.lastModified` in Python
Question: In python, by using an HTML parser, is it possible to get the
`document.lastModified` property of a web page. I'm trying to retrieve the
date at which the webpage/document was last modified by the owner.
Answer: A somewhat related question "[I am downloading a file using Python urllib2.
How do I check how large the file size
is?](http://stackoverflow.com/questions/1636637/i-am-downloading-a-file-using-
python-urllib2-how-do-i-check-how-large-the-file)", suggests that the
following (untested) code should work:
    import urllib2
    req = urllib2.urlopen("http://example.com/file.zip")
    last_modified = req.info().getheader('Last-Modified')
You might want to add a default value as the second parameter to
`getheader()`, in case it isn't set.
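If you then want a timestamp rather than the raw header string, a sketch using
the standard library's date parser (the header, when present, looks like
`Wed, 21 Oct 2015 07:28:00 GMT`):

    import time
    import email.utils

    parsed = email.utils.parsedate(last_modified)  # 9-tuple, or None if missing/unparseable
    if parsed is not None:
        print time.strftime('%Y-%m-%d %H:%M:%S', parsed)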
|
Cython, numpy speed-up
Question: I am trying to write an algorithm that calculates the mean value of certain
neighboring elements of a 2D array.
I would like to see if it is possible to speed it up using Cython, but it is
the first time I use it myself.
**Python version** :
import numpy as np
def clamp(val, minval, maxval):
return max(minval, min(val, maxval))
def filter(arr, r):
M = arr.shape[0]
N = arr.shape[1]
new_arr = np.zeros([M, N], dtype=np.int)
for x in range(M):
for y in range(N):
# Corner elements
p1 = clamp(x-r, 0, M)
p2 = clamp(y-r, 0, N)
p3 = clamp(y+r, 0, N-1)
p4 = clamp(x+r, 0, M-1)
nbr_elements = (p3-p2-1)*2+(p4-p1-1)*2+4
tmp = 0
# End points
tmp += arr[p1, p2]
tmp += arr[p1, p3]
tmp += arr[p4, p2]
tmp += arr[p4, p3]
# The rest
tmp += sum(arr[p1+1:p4, p2])
tmp += sum(arr[p1+1:p4, p3])
tmp += sum(arr[p1, p2+1:p3])
tmp += sum(arr[p4, p2+1:p3])
new_arr[x, y] = tmp/nbr_elements
return new_arr
and my attempt of a Cython implementation. I found out that max/min/sum was
faster if you re-implemented them, rather than using the python version
**Cython version** :
from __future__ import division
import numpy as np
cimport numpy as np
DTYPE = np.int
ctypedef np.int_t DTYPE_t
cdef inline int int_max(int a, int b): return a if a >= b else b
cdef inline int int_min(int a, int b): return a if a <= b else b
def clamp(int val, int minval, int maxval):
return int_max(minval, int_min(val, maxval))
def cython_sum(np.ndarray[DTYPE_t, ndim=1] y):
cdef int N = y.shape[0]
cdef int x = y[0]
cdef int i
for i in xrange(1, N):
x += y[i]
return x
def filter(np.ndarray[DTYPE_t, ndim=2] arr, int r):
cdef M = im.shape[0]
cdef N = im.shape[1]
cdef np.ndarray[DTYPE_t, ndim=2] new_arr = np.zeros([M, N], dtype=DTYPE)
cdef int p1, p2, p3, p4, nbr_elements, tmp
for x in range(M):
for y in range(N):
# Corner elements
p1 = clamp(x-r, 0, M)
p2 = clamp(y-r, 0, N)
p3 = clamp(y+r, 0, N-1)
p4 = clamp(x+r, 0, M-1)
nbr_elements = (p3-p2-1)*2+(p4-p1-1)*2+4
tmp = 0
# End points
tmp += arr[p1, p2]
tmp += arr[p1, p3]
tmp += arr[p4, p2]
tmp += arr[p4, p3]
# The rest
tmp += cython_sum(arr[p1+1:p4, p2])
tmp += cython_sum(arr[p1+1:p4, p3])
tmp += cython_sum(arr[p1, p2+1:p3])
tmp += cython_sum(arr[p4, p2+1:p3])
new_arr[x, y] = tmp/nbr_elements
return new_arr
I made a test script:
import time
import numpy as np
import square_mean_py
import square_mean_cy
N = 500
arr = np.random.randint(15, size=(N, N))
r = 8
# Timing
t = time.time()
res_py = square_mean_py.filter(arr, r)
print time.time()-t
t = time.time()
res_cy = square_mean_cy.filter(arr, r)
print time.time()-t
Which prints
9.61458301544
1.44476890564
that is a speed-up of approx. 7 times. I have seen a lot of Cython
implementations that yield a lot better speed-up, and so I was thinking that
maybe some of you see a potential way of speeding up the algorithm?
Answer: There are a few issues with your Cython script:
1. You are not giving Cython some key information, such as the types of `x, y, M` and `N` which are used in ranges.
2. I have `cdef`ed the two functions `cython_sum` and `clamp` since you don't need them at Python level.
3. What is `im` that appears in `filter` function? I am assuming you meant `arr`.
Fixing those I will rewrite/modify your Cython script like so:
from __future__ import division
import numpy as np
cimport numpy as np
from cython cimport boundscheck, wraparound
DTYPE = np.int
ctypedef np.int_t DTYPE_t
cdef inline int int_max(int a, int b): return a if a >= b else b
cdef inline int int_min(int a, int b): return a if a <= b else b
cdef int clamp3(int val, int minval, int maxval):
return int_max(minval, int_min(val, maxval))
@boundscheck(False)
cdef int cython_sum2(DTYPE_t[:] y):
cdef int N = y.shape[0]
cdef int x = y[0]
cdef int i
for i in range(1, N):
x += y[i]
return x
@boundscheck(False)
@wraparound(False)
def filter3(DTYPE_t[:,::1] arr, int r):
cdef int M = arr.shape[0]
cdef int N = arr.shape[1]
cdef np.ndarray[DTYPE_t, ndim=2, mode='c'] \
new_arr = np.zeros([M, N], dtype=DTYPE)
cdef int p1, p2, p3, p4, nbr_elements, tmp, x, y
for x in range(M):
for y in range(N):
# Corner elements
p1 = clamp3(x-r, 0, M)
p2 = clamp3(y-r, 0, N)
p3 = clamp3(y+r, 0, N-1)
p4 = clamp3(x+r, 0, M-1)
nbr_elements = (p3-p2-1)*2+(p4-p1-1)*2+4
tmp = 0
# End points
tmp += arr[p1, p2]
tmp += arr[p1, p3]
tmp += arr[p4, p2]
tmp += arr[p4, p3]
# The rest
tmp += cython_sum2(arr[p1+1:p4, p2])
tmp += cython_sum2(arr[p1+1:p4, p3])
tmp += cython_sum2(arr[p1, p2+1:p3])
tmp += cython_sum2(arr[p4, p2+1:p3])
new_arr[x, y] = <int>(tmp/nbr_elements)
return new_arr
Here is the timing on my machine:
arr = np.random.randint(15, size=(500, 500))
Original (Python) version: 7.34 s
Your Cython version: 1.98 s
New Cython version: 0.0323 s
That is almost 60 times speed up over your Cython script and over 200 times
speed-up over the original Python script.
|
How to plot grad(f(x,y))?
Question: I want to calculate and plot a gradient of any scalar function of two
variables. If you really want a concrete example, lets say f=x^2+y^2 where x
goes from -10 to 10 and same for y. How do I calculate and plot grad(f)? The
solution should be vector and I should see vector lines. I am new to python so
please use simple words.
EDIT:
@Andras Deak: thank you for your post, I tried what you suggested and instead
of your test function (fun=3*x^2-5*y^2) I used a function that I defined as
V(x,y); this is how the code looks, but it reports an error
import numpy as np
import math
import sympy
import matplotlib.pyplot as plt
def V(x,y):
t=[]
for k in range (1,3):
for l in range (1,3):
t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)* (k**2+l**2)))
term = t* np.sin(2 * math.pi * k * x/0.004) * np.cos(2 * math.pi * l * y/0.004)
return term
return term.sum()
x,y=sympy.symbols('x y')
fun=V(x,y)
gradfun=[sympy.diff(fun,var) for var in (x,y)]
numgradfun=sympy.lambdify([x,y],gradfun)
X,Y=np.meshgrid(np.arange(-10,11),np.arange(-10,11))
graddat=numgradfun(X,Y)
plt.figure()
plt.quiver(X,Y,graddat[0],graddat[1])
plt.show()
AttributeError: 'Mul' object has no attribute 'sin'
And lets say I remove sin, I get another error:
TypeError: can't multiply sequence by non-int of type 'Mul'
I read the SymPy tutorial and it says "The real power of a symbolic
computation system such as SymPy is the ability to do all sorts of
computations symbolically". I get this, I just don't get why I cannot multiply
the x and y symbols by float numbers.
What is the way around this? :( Help please!
**UPDATE**
@Andras Deak: I wanted to make things shorter so I removed many constants from
the original formulas for V(x,y) and Cn*Dm. As you pointed out, that caused
the sin function to always return 0 (i just noticed). Apologies for that. I
will update the post later today when i read your comment in details. Big
thanks!
**UPDATE 2** I changed coefficients in my expression for voltage and this is
the result:
[](http://i.stack.imgur.com/FlDLH.png)
It looks good except that the arrows point in the opposite direction (they are
supposed to go out of the reddish dot and into the blue one). Do you know how
I could change that? And if possible, could you please tell me the way to
increase the size of the arrows? I tried what was suggested in another topic
([Computing and drawing vector
fields](http://stackoverflow.com/questions/25342072/computing-and-drawing-
vector-fields)):
skip = (slice(None, None, 3), slice(None, None, 3))
This plots only every third arrow and matplotlib does the autoscale, but it
doesn't work for me (nothing happens when I add this, for any number that I
enter). You were already of huge help, I cannot thank you enough!
Answer: Here's a solution using `sympy` and `numpy`. This is the first time I use
sympy, so others will/could probably come up with much better and more elegant
solutions.
import sympy
#define symbolic vars, function
x,y=sympy.symbols('x y')
fun=3*x**2-5*y**2
#take the gradient symbolically
gradfun=[sympy.diff(fun,var) for var in (x,y)]
#turn into a bivariate lambda for numpy
numgradfun=sympy.lambdify([x,y],gradfun)
now you can use `numgradfun(1,3)` to compute the gradient at `(x,y)==(1,3)`.
This function can then be used for plotting, which you said you can do.
For plotting, you can use, for instance, `matplotlib`'s `quiver`, like so:
import numpy as np
import matplotlib.pyplot as plt
X,Y=np.meshgrid(np.arange(-10,11),np.arange(-10,11))
graddat=numgradfun(X,Y)
plt.figure()
plt.quiver(X,Y,graddat[0],graddat[1])
plt.show()
[](http://i.stack.imgur.com/iT1Bv.png)
## UPDATE
You added a specification for your function to be computed. It contains the
product of terms depending on `x` and `y`, which seems to break my above
solution. I managed to come up with a new one to suit your needs. However,
your function seems to make little sense. From your edited question:
t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)* (k**2+l**2)))
term = t* np.sin(2 * math.pi * k * x/0.004) * np.cos(2 * math.pi * l * y/0.004)
On the other hand, from your corresponding comment to this answer:
> V(x,y) = Sum over n and m of [Cn * Dm * sin(2pinx) * cos(2pimy)]; sum goes
> from -10 to 10; Cn and Dm are coefficients, and i calculated that CkDl =
> sin(2pik)/(k^2 +l^2) (i used here k and l as one of the indices from the sum
> over n and m).
I have several problems with this: both `sin(2*pi*k)` and `sin(2*pi*k/2)` (the
two competing versions of the prefactor) are always zero for integer `k`,
giving you a constant zero `V` at every `(x,y)`. Furthermore, in your code you
have magical frequency factors in the trigonometric functions, which are
missing from the comment. If you multiply your `x` by `4e-3`, you
_drastically_ change the spatial dependence of your function (by changing the
wavelength by roughly a factor of a thousand). So you should really decide
what your function is.
So here's a solution, where I assumed
> V(x,y)=sum_{k,l = 1 to 10} C_{k,l} * sin(2*pi*k*x)*cos(2*pi*l*y), with
> C_{k,l}=sin(2*pi*k/4)/((4*pi^2)*(k^2+l^2))*1e-6
This is a combination of your various versions of the function, with the
modification of `sin(2*pi*k/4)` in the prefactor in order to have a non-zero
function. I expect you to be able to fix the numerical factors to your actual
needs, after you figure out the proper mathematical model.
So here's the full code:
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
def CD(k,l):
#return sp.sin(2*sp.pi*k/2)/((4*sp.pi**2)*(k**2+l**2))*1e-6
return sp.sin(2*sp.pi*k/4)/((4*sp.pi**2)*(k**2+l**2))*1e-6
def Vkl(x,y,k,l):
return CD(k,l)*sp.sin(2*sp.pi*k*x)*sp.cos(2*sp.pi*l*y)
def V(x,y,kmax,lmax):
k,l=sp.symbols('k l',integers=True)
return sp.summation(Vkl(x,y,k,l),(k,1,kmax),(l,1,lmax))
#define symbolic vars, function
kmax=10
lmax=10
x,y=sp.symbols('x y')
fun=V(x,y,kmax,lmax)
#take the gradient symbolically
gradfun=[sp.diff(fun,var) for var in (x,y)]
#turn into bivariate lambda for numpy
numgradfun=sp.lambdify([x,y],gradfun,'numpy')
numfun=sp.lambdify([x,y],fun,'numpy')
#plot
X,Y=np.meshgrid(np.linspace(-10,10,51),np.linspace(-10,10,51))
graddat=numgradfun(X,Y)
fundat=numfun(X,Y)
hf=plt.figure()
hc=plt.contourf(X,Y,fundat,np.linspace(fundat.min(),fundat.max(),25))
plt.quiver(X,Y,graddat[0],graddat[1])
plt.colorbar(hc)
plt.show()
I defined your `V(x,y)` function using some auxiliary functions for
transparence. I left the summation cut-offs as literal parameters, `kmax` and
`lmax`: in your code these were 3, in your comment they were said to be 10,
and anyway they should be infinity.
The gradient is taken the same way as before, but when converting to a numpy
function using `lambdify` you have to set an additional string parameter,
`'numpy'`. This will allow the resulting numpy lambda to accept array input
(essentially it will use `np.sin` instead of `math.sin` and the same for
`cos`).
I also changed the definition of the grid from `array` to `np.linspace`: this
is usually more convenient. Since your function is almost constant at integer
grid points, I created a denser mesh for plotting (51 points while keeping
your original limits of (-10,10) fixed).
For clarity I included a few more plots: a `contourf` to show the value of the
function (contour lines should always be orthogonal to the gradient vectors),
and a colorbar to indicate the value of the function. Here's the result:
[](http://i.stack.imgur.com/juC87.png)
The composition is obviously not the best, but I didn't want to stray too much
from your specifications. The arrows in this figure are actually hardly
visible, but as you can see (and also evident from the definition of `V`) your
function is periodic, so if you plot the same thing with smaller limits and
less grid points, you'll see more features and larger arrows.
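Regarding the follow-up about arrow direction and size: I'd guess you actually
want to plot the field `-grad(V)` (for instance an electric field derived from
a potential), and `quiver`'s `scale` keyword controls arrow length, with
smaller values giving longer arrows. A quick sketch, with the scale value
picked arbitrarily:

    # negate the gradient to flip the arrows; tune scale to taste
    plt.quiver(X, Y, -graddat[0], -graddat[1], scale=0.5)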
|
Python NLTK: parse string using conjoint structure, getting into infinite recursion
Question: I am new in python and nltk. I am asked to create two different parse tree for
the following sentence:
Adam slept while Josh ate and the dog barked.
Based on these two constructions:
S-> S while S
S-> S and S
Here is what I wrote so far, I used [this page
(4.1)](http://www.nltk.org/book/ch08.html) as a guideline.
import nltk
grammar_string = '''
S -> S 'and' S
S -> S 'or' S
S -> S 'but' S
S -> S 'while' S
S -> S 'when' S
S -> 'Adam'|'slept'|'Josh'|'ate'|'the'|'dog'|'barked'
'''
sentence = "Adam slept while Josh ate and the dog barked"
grammar = nltk.CFG.fromstring(grammar_string)
rd_parser = nltk.RecursiveDescentParser(grammar)
sent = sentence.split()
for tree in rd_parser.parse(sent):
print(tree)
This code doesn't work. I am getting this error:
if isinstance(index, (int, slice)):
RuntimeError: maximum recursion depth exceeded in __instancecheck__
I am wondering what is wrong with my code? Is it because of this: `S ->
'Adam'|'slept'|'Josh'|...`
Thank you.
Answer: You probably want to define something like this (which is somewhat non-
conventional, by the way):
S -> P
P -> P u P | F
F -> W | W F
u -> 'and'| 'or' | 'but' | 'while' | 'when'
W -> 'Adam'|'slept'|'Josh'|'ate'|'the'|'dog'|'barked'
'F' stands for 'fragment' here. I don't guarantee that this would generate only
meaningful sentences, but it should hopefully allow the parser to terminate.
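A sketch of how this grammar might be wired up. One caveat: `P -> P u P` is
still left-recursive, which would send `RecursiveDescentParser` into the same
infinite recursion, so I've swapped in a chart parser, which handles left
recursion fine:

    import nltk

    grammar_string = '''
    S -> P
    P -> P u P | F
    F -> W | W F
    u -> 'and' | 'or' | 'but' | 'while' | 'when'
    W -> 'Adam' | 'slept' | 'Josh' | 'ate' | 'the' | 'dog' | 'barked'
    '''
    grammar = nltk.CFG.fromstring(grammar_string)
    parser = nltk.ChartParser(grammar)  # bottom-up charting copes with left recursion
    sent = "Adam slept while Josh ate and the dog barked".split()
    for tree in parser.parse(sent):
        print(tree)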
|
How to execute commands with double quotes (net start "windows search") using python 'os' module?
Question: When I execute simple command like "net start", I am getting output
successfully as shown below.
Python script:
import os
def test():
cmd = ' net start '
output = os.popen(cmd).read()
print output
test()
Output:
C:\Users\test\Desktop\service>python test.py
These Windows services are started:
Application Experience
Application Management
Background Intelligent Transfer Service
Base Filtering Engine
Task Scheduler
TCP/IP NetBIOS Helper
The command completed successfully.
C:\Users\test\Desktop\service>
But when I execute commands with quoted arguments (for example: `net start
"windows search"`) I am **NOT** getting any output.
Python script:
import os
def test():
cmd = ' net start "windows search" '
output = os.popen(cmd).read()
print output
test()
Output:
C:\Users\test\Desktop\service>python test.py
C:\Users\test\Desktop\service>
I have tried `' net start \"windows search\" '` also, but the same issue remains.
Can anyone guide me on this please?
Answer: From [the documentation](https://docs.python.org/2/library/os.html#os.popen):
> _Deprecated since version 2.6:_ This function is obsolete. Use the
> [`subprocess`](https://docs.python.org/2/library/subprocess.html#module-
> subprocess) module. Check especially the [_Replacing Older Functions with
> the subprocess
> Module_](https://docs.python.org/2/library/subprocess.html#subprocess-
> replacements) section.
subprocess.Popen(['net', 'start', 'windows search'], ...)
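If you also want to capture the output (as you did with `os.popen().read()`),
a sketch using `check_output`, which raises `CalledProcessError` on a non-zero
exit code:

    import subprocess

    # Passing the arguments as a list avoids all the quoting problems.
    output = subprocess.check_output(['net', 'start', 'windows search'])
    print output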
|
Pygame - TypeError: Missing 1 required positional argument
Question: I am building a game and I keep on running up against this error. I can't seem
to fix it. I believe the problem is either in the function "main" at the
bottom or in the classes "Level" and "Level01". If you find there is a way I
can improve my code can you also tell me as I am just learning how to build
games with OOP.
File "C:/Users/fabma/Documents/PythonGames/RPG/Scroller!.py", line 148, in main
currentLevel.drawer(display)
TypeError: drawer() missing 1 required positional argument: 'display1'
Here is my code:
import pygame
# Colours + Global constants
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
WIDTH = 800
HEIGHT = 600
SIZE = (WIDTH, HEIGHT)
# CLASSES
# Block is the common platform
class Block(pygame.sprite.Sprite):
def __init__(self, length, height, colour):
super().__init__()
# Making image
self.image = pygame.Surface([length, height])
self.image.fill(colour)
self.rect = self.image.get_rect()
# Setting Y coordinates
self.rect.y = HEIGHT * 0.95
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
# Is it touching the floor?
self.velocity = 0
self.standing = True
# Rendering image and creating some variables
self.height = 40
self.length = 40
self.sprite_x_change = 0
self.sprite_y_change = 0
self.image = pygame.Surface([self.height, self.length])
self.image.fill(GREEN)
self.rect = self.image.get_rect()
self.rect.y = HEIGHT * 0.884
self.level = None
# Mobility: Left, right, up and stop
def move_right(self):
self.sprite_x_change = 15
def move_left(self):
self.sprite_x_change = -15
def move_up(self, platform):
# Seeing if we hit anything if so then we can jump!
self.rect.y -= 2
hit_list = pygame.sprite.spritecollide(self, platform, False)
if len(hit_list) > 0 or self.rect.bottom >= HEIGHT - Block.height:
self.change_y = -10
def stop(self):
self.sprite_x_change = 0
def updater(self):
self.gravity()
platforms_hit = pygame.sprite.spritecollide(self, self.level.platforms, False)
for blocks in platforms_hit:
self.sprite_y_change = 0
# Going down
if self.sprite_y_change > 0:
self.rect.bottom = blocks.rect.top
self.velocity = 0
self.standing = True
# Going up
if self.sprite_y_change < 0:
self.rect.top = blocks.rect.bottom
self.standing = False
if self.sprite_x_change > 0:
self.rect.right = blocks.rect.left
if self.sprite_x_change < 0:
self.rect.left = blocks.rect.right
if self.sprite_x_change == 0 and self.sprite_y_change == 0:
self.rect.y = HEIGHT * 0.884
if self.standing == False:
self.velocity += 1
self.rect.x += self.sprite_x_change
self.rect.y += self.sprite_y_change
def gravity(self):
self.sprite_y_change += 0.980665*self.velocity
class Level:
def __init__(self):
# Creating groups
self.sprites = pygame.sprite.Group()
self.all_things = pygame.sprite.Group()
self.platforms = pygame.sprite.Group()
def drawer(self, display1):
display1.fill(BLUE)
self.all_things.draw(display1)
class Level01(Level):
def __init__(self, player1):
# Initialise level1
Level.__init__(self)
# Level01 things
block = Block(WIDTH, HEIGHT * 0.05, RED)
Level.all_things = self.all_things
self.sprites.add(player1)
self.platforms.add(block)
self.all_things.add(player1, block)
def main():
# Init pygame
pygame.init()
# Set screen
display = pygame.display.set_mode(SIZE)
# Creating FPS thingy
clock = pygame.time.Clock()
# Making levels + Player
player = Player()
level_1 = Level01(player)
# Choosing level
levelList = []
levelList.append(Level01)
currentLevelNumber = 0
currentLevel = levelList[currentLevelNumber]
# Game loop
loop = True
while loop == True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_RIGHT:
player.move_right()
if event.key == pygame.K_LEFT:
player.move_left()
if event.key == pygame.K_UP:
player.move_up(currentLevel.platforms)
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT and player.sprite_x_change < 0:
player.stop()
if event.key == pygame.K_RIGHT and player.sprite_x_change > 0:
player.stop()
# Update things
currentLevel.all_things.update()
currentLevel.drawer(display)
# Refresh screen
clock.tick(30)
pygame.display.update()
pygame.quit()
if __name__ == "__main__":
main()
Answer: You need to create an instance of your level rather than just appending the
class itself into your list:
levelList.append(Level01)
should be...
levelList.append(level_1)
* * *
As it stands, you're using the class object rather than an instance of it, so
the `display` you're passing gets bound to the `self` parameter (there is no
instance around to fill that slot automatically). Once you call the method on
an instance of the class, `self` is passed implicitly and your `display`
argument lands in the correct spot.
|
nlargest and nsmallest ; heapq python
Question: This is out of curiosity about the nsmallest and nlargest methods of heapq.py
module in python.
I was reading it [here](https://docs.python.org/2/library/heapq.html#) in the
docs.
The documentation doesn't say how it does so (nsmalles/nlargest) on any
iterable.
This might be a stupid question, but can I assume that these methods
internally create a heap of the iterable data structure (may be using
'heapify' method) and then return the n smallest/largest elements?
Just want to confirm my conclusion. thanks!
Answer: The algorithm for finding the `n` smallest or largest items from an iterable
with `N` items is a bit tricky. You see, you don't create a size-`N` min-heap
to find the smallest items.
Instead, you make a smaller, size-`n` max-heap with the first `n` items, then
do repeated `pushpop` operations on it with the remaining items from the
sequence. Once you're done, you pop the items from the heap and return them in
reversed order.
This process takes `O(N log(n))` time (note the small `n`) and of course only
`O(n)` space. If `n` is much less than `N`, it's much more efficient than
sorting and slicing.
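A rough illustrative sketch of that idea (not the stdlib code, which is more
careful about edge cases such as `n == 0`, key functions, and tie-breaking):

    import heapq
    from itertools import islice

    def nsmallest_sketch(n, iterable):
        it = iter(iterable)
        # size-n max-heap, simulated with negated values (heapq is a min-heap)
        heap = [-x for x in islice(it, n)]
        heapq.heapify(heap)
        for x in it:
            if -x > heap[0]:  # x is smaller than the largest of the n kept
                heapq.heapreplace(heap, -x)
        return sorted(-x for x in heap)

    print(nsmallest_sketch(3, [5, 1, 4, 2, 3]))  # [1, 2, 3]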
The [`heapq` module](https://hg.python.org/cpython/file/default/Lib/heapq.py)
contains a pure-Python implementation of this algorithm, though when you
import it, you may get a faster version of the code written in C instead (you
can read [the source for
that](https://hg.python.org/cpython/file/default/Modules/_heapqmodule.c) too,
but it's not quite as friendly unless you know the Python C API).
|
Python - create wordlist from given characters of that contain 4-5 letters and 5-6 numbers ONLY
Question: I'm looking to preform a dictionary attack and want to know how to create a
wordlist that has every 10 letter permutation of the letters a-f and 2-9
(which I can do) WHEN every word contains 5 letters (a-f) and 5 numbers (2-9)
OR 6 letters (a-f) and 4 numbers (2-9) ONLY (what I can't do).
Here's what I have so far (from
[here](http://stackoverflow.com/questions/21559039/python-how-to-generate-
wordlist-from-given-characters-of-specific-length)):
import itertools
chrs = 'abcdef23456789'
n = 2
min_length, max_length = 10, 10
for n in range(min_length, max_length+1):
for xs in itertools.product(chrs, repeat=n):
print ''.join(xs)
Thanks
Answer: Just write a function that returns the number of valid letters and numbers in
a string and test the return value.
nums = '23456789'
chars = 'abcdef'
def letter_number_count(word):
l,n = 0,0
if len(word) == 10:
for c in word:
if c in nums:
n += 1
elif c in chars:
l += 1
return l,n
Test output
>>> s = 'abcdef2345'
>>> (l,n) = letter_number_count(s)
>>> if (l,n) == (6,4) or (l,n) == (5,5):
... print "Do something"
...
Do something
>>>
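Tying that check back to the generator loop from the question — note that
brute-forcing all 14**10 combinations (about 290 billion strings) is
impractical, so this only shows the filtering; in practice you would generate
the letter and number positions combinatorially instead:

    import itertools

    chrs = 'abcdef23456789'
    for xs in itertools.product(chrs, repeat=10):
        word = ''.join(xs)
        if letter_number_count(word) in ((5, 5), (6, 4)):
            print word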
|
Python - Random baby name generator issues - (duplicating input, calling variables)
Question: I've been looking at this all afternoon and can't figure out why the gender
input is repeating itself despite only appearing to be called once. It's not
part of a loop that I can see either.
I've tried adding variables to act as a counter and tried using an if
statement to only run the input if the counter variable is less than 1, but
can't figure it out.
Edit: Thanks to the great feedback here, I found out that get_full_name was
causing the duplicated gender input in get_first_name - but now I'm running
into issues when trying to output the randomly generated first & middle names.
I tried setting the setFirst, setMiddle and setLast variables as globals,
but then I get a NameError. I also tried creating a new function to display
them, but that wasn't working either. I tried adding "self." (without the
quotes) either directly in the function or one indentation level beneath it.
I'll display the error first, then the full code.
Error:

    Traceback (most recent call last):
      File "__init__.py", line 100, in <module>
        main()
      File "__init__.py", line 92, in main
        print displayName(setFirst, setMiddle, setLast)
    NameError: global name 'setFirst' is not defined
I also get name errors trying to concatenate setFirst, setMiddle and setLast
into another variable for the full name.
Here's the code:
from os.path import abspath, join, dirname
import random
full_path = lambda filename: abspath(join(dirname(__file__), filename))
FILES = {
'first:male': full_path('dist.male.first'),
'first:female': full_path('dist.female.first'),
'last': full_path('dist.all.last'),
}
def get_name(filename):
selected = random.random() * 90
with open(filename) as name_file:
for line in name_file:
name, _, cummulative, _ = line.split()
if float(cummulative) > selected:
return name
def get_first_name(gender=None):
global determine
global setFirst
print ("First name... Enter 1 for Male, 2 for Female or 3 to be surprised! ")
determine = input()
if determine == 1:
gender = 'male'
if determine == 2:
gender = 'female'
if determine == 3:
print ("You want to be surprised!")
gender = random.choice(('male', 'female'))
return get_name(FILES['first:%s' % gender]).capitalize()
setFirst = get_first_name()
print setFirst + " "
def get_middle_name(gender=None):
global setMiddle
if determine == 1:
gender = 'male'
if determine == 2:
gender = 'female'
if determine == 3:
gender = random.choice(('male', 'female'))
return get_name(FILES['first:%s' % gender]).capitalize()
setMiddle = get_middle_name()
print setMiddle + " "
def get_last_name():
global setLast
#We will implicitly pass a Last Name until other issues are fixed
        return "Smith"
setLast = get_last_name()
print setLast
def get_full_name(gender=None):
return u"%s %s %s" % (get_first_name(gender), get_middle_name(gender), get_last_name())
#def displayName(setFirst, setMiddle, setLast):
# print setFirst + " " + setMiddle + " " + setLast
def main():
#print u"%s %s %s" % (setFirst, setMiddle, setLast)
#print displayName(setFirst, setMiddle, setLast)
f = open('output', 'a') #append output to filename output
f.write(get_full_name() + '\n') #and add a line break after each run
f.close()
if __name__ == "__main__":
main()
Even if I try passing the variables to main() like:
def main(setFirst, setMiddle, setLast):
It still gives the NameError about not being defined. What am I doing wrong?
I added this right under "import random", but now I'm getting some rogue
"None" displays - which leads me to believe there is a leak in the code
somewhere. Thoughts?
setFirst = None
setMiddle = None
setLast = None
Here is the function I created to try to track it:

    def displayName(setFirst, setMiddle, setLast):
if setFirst == None:
print ("Random Baby Name Generator")
else:
print setFirst
print setMiddle
print setLast
if setMiddle == None:
print ("Double check the middle name variable.")
if setLast == None:
print ("Double check the last name variable.")
Answer: You are calling `get_full_name()` twice, you need to save the results:
def main():
full_name = get_full_name()
print(full_name)
f = open('output', 'a') #append output to filename output
f.write(full_name + '\n') #and add a line break after each run
f.close()
You also have a few indentation issues, plus your use of globals is a bit
inefficient. Ideally, functions should do one - and only one - task; this
makes them easier to debug.
Try this different version of your code:
from os.path import abspath, join, dirname
import random
full_path = lambda filename: abspath(join(dirname(__file__), filename))
FILES = {
'first:male': full_path('dist.male.first'),
'first:female': full_path('dist.female.first'),
'last': full_path('dist.all.last'),
}
GENDER_MAP = {'1': 'male', '2': 'female'}
def get_gender():
        result = input('Select a gender: 1 for Male, 2 for Female or 3 to be surprised')  # use raw_input() on Python 2
if result not in ('1', '2', '3'):
print('{} is not a valid choice, please try again'.format(result))
return get_gender()
if result == '3':
return random.choice(('1', '2'))
return result
def get_name(filename):
selected = random.random() * 90
with open(filename) as name_file:
for line in name_file:
name, _, cummulative, _ = line.split()
if float(cummulative) > selected:
return name
    def get_name_from_file(name_type='first', gender='male'):
        if name_type in ('first', 'middle'):
            # middle names are drawn from the same first-name files
            name = get_name(FILES['first:{}'.format(gender)]).capitalize()
        else:
            name = get_name(FILES['last']).capitalize()
        return name
def get_full_name():
gender = get_gender()
gender_file = GENDER_MAP.get(gender, '')
first_name = get_name_from_file('first', gender_file)
middle_name = get_name_from_file('middle', gender_file)
last_name = get_name_from_file('last')
return '{} {} {}'.format(first_name, middle_name, last_name)
    if __name__ == '__main__':
        full_name = get_full_name()
        print(full_name)
        with open('output', 'a') as f:
            f.write('{}\n'.format(full_name))
        print('Done')
|
How to stub the context manager close method in Python?
Question: I need to do some operations that did not work well using the builtin mock
module. Then I decided to mock this by myself, using:
import builtins
import io
from unittest import mock
import my_module
builtins.open = mock.Mock()
builtins.open.return_value = io.StringIO()
builtins.open.return_value.__exit__ = lambda a, b, c, d: None
my_module.f('foo')
builtins.open.return_value.seek(0)
txt = builtins.open.return_value.read()
print(txt)
While, in my_module.py:
def f(fname):
with open(fname) as handle:
g(handle)
def g(fhandle):
fhandle.write('bar')
Using this approach, I get a ValueError whose message is "I/O operation on
closed file". I expected the lambda to work as a stub for the `__exit__`
method; why doesn't it work? How can I stub it?
Answer: Have you tried using mock's `mock_open` by any chance? This is some code I used
a while back that I believe should work. Let me know if this helps you:

    @mock.patch('__builtin__.open', new_callable=mock.mock_open)
def test_foo(self, m_open):
self.foo()
m_open.assert_called_once_with("something", 'w')
m_open().__enter__().write.assert_called_once_with("stuff")
Here is some more documentation on
[mock_open](http://www.voidspace.org.uk/python/mock/helpers.html#mock-open)
|
Error while GARCH Modeling using Python
Question: I have some sample Python code for GARCH modeling which creates data from
randomly generated numbers. I want to replace the numbers with my own csv file,
which contains a column vector, but that just doesn't work.
import numpy as np
from scipy import optimize
r=np.array([0.945532630498276,
0.614772790142383,
0.834417758890680,
0.862344782601800,
0.555858715401929,
0.641058419842652,
0.720118656981704,
0.643948007732270,
0.138790608092353,
0.279264178231250,
0.993836948076485,
0.531967023876420,
0.964455754192395,
0.873171802181126,
0.937828816793698])
print r
f=open('main_ax.csv')
b=f.read()
r=np.array(b)
print r
def GARCH11_logL(param, r):
omega, alpha, beta = param
n = len(r)
s = np.ones(n)
for i in range(3,n):
s[i] = omega + alpha*r[i-1]**2 + beta*(s[i-1]) # GARCH(1,1) model
logL = -( (-np.log(s) - r**2/s).sum() ) # calculate the sum
return logL
R = optimize.fmin(GARCH11_logL,np.array([.1,.1,.1]),args=(r,),full_output=1)
print R
print
print("omega = %.6f\nbeta = %.6f\nalpha = %.6f\n") % (R[0][0],R[0][2],R[0][1])
This program throws me the following error:
Traceback (most recent call last):
File "C:\Users\Gaurav\Desktop\Ram\garch_model.py", line 35, in <module>
R = optimize.fmin(GARCH11_logL,np.array([.1,.1,.1]),args=(r,),full_output=1)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 377, in fmin
res = _minimize_neldermead(func, x0, args, callback=callback, **opts)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 435, in _minimize_neldermead
fsim[0] = func(x0)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 285, in function_wrapper
return function(*(wrapper_args + args))
File "C:\Users\Gaurav\Desktop\Ram\garch_model.py", line 32, in GARCH11_logL
s[i] = omega + alpha*r[i-1]**2 + beta*(s[i-1]) # GARCH(1,1) model
TypeError: unsupported operand type(s) for ** or pow(): 'numpy.string_' and 'int'
Any help to fix this code will be really helpful.
Answer: The lines
f=open('main_ax.csv')
b=f.read()
r=np.array(b)
create an array with a single element that is a string. You haven't converted
the contents of the file into an array of numbers.
You can replace that code with a call to
[`numpy.loadtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html):

    r = np.loadtxt('main_ax.csv')

Since the file holds a single column there is no delimiter to worry about; if
it ever gains multiple comma-separated columns, pass `delimiter=','` as well.
|
Seeing all the values of variables in python as it is run
Question: In MatLab, unless I add ; at the end of the line, I can just run and see what
this code is doing. For example if I code
x=0
for i=1:1:3
x=x+1
end
then I will see that
x=0
x=1
x=2
x=3
appearing in the window.
That way, I can see the process of my code, and so I can easily fix the error
when there is.
How can I do the same thing in Python?
I know I can just add print to make it happen like
for i in list
print('i is:', i)
i=i+1
But this makes the whole code too messy. Is there a MatLab-like way to do
this?
Answer: I am not familiar with matlab, but from what you are explaining, you seem to
want to trace your code to see what it is doing at each step. You can do this
through pdb's set_trace.
Let me know if this is what you are looking for.
Here is a quick how-to on how to use pdb.set_trace with your code sample:
import pdb
l = [1, 2, 3, 4]
pdb.set_trace()
for i in l:
print('i is:', i)
i = i+1
So, when you run this, you will then have the ability to control the flow of
the app. You hit 'n' for the next step in your code, and you can see what it
is doing.
I strongly suggest reading this, as it is an excellent tutorial:
<https://pythonconquerstheuniverse.wordpress.com/2009/09/10/debugging-in-
python/>
Furthermore, what I actually recommend even more, is using an IDE that has a
fully functional debugger. I am partial to PyCharm, which you can download for
free here: <https://www.jetbrains.com/pycharm/download/>
|
Membership comparsion in python
Question: I am not sure whether I am doing it correctly, but please consider the code
below. The values of a and b are the same (i.e. "John"), but the membership
test (a in b) says False. Checking value equality shows True, yet somehow the
membership test says False even though the values are the same.
>>> import redis
>>> r.mset(first="John", second="Stanley", third="Junior")
>>> a = r.mget(["first"])
>>> a
['John']
>>> b = r.mget(["first"])
>>> b
['John']
>>> a in b
False
>>> a == b
True
Answer: You can check below console result.
>>> ['John'] in ['John'] # list in list
False
>>> ['John'] in [['John']] # list in double list
True
>>> 'John' in ['John'] # Probably this is what you want.
True
>>> ['John'] == ['John']
True
>>> 'John' == ['John'] # 'John' is an element in List ['John'] but not same.
False
And redis supports the 'exists' command, which checks whether a key is set
(note it tests keys, not values - 'first' is a key here, 'John' is its value):

    >>> r.exists('first')
    True
    >>> r.exists('foo')
    False
|
How to put results of subprocess in a variable
Question: I can retrieve my data only by using cURL, I think subprocess is the easier
way to do it but I cannot put the data in a variable.
For example like this
data = subprocess.call('curl "URL" -X GET', shell=True)
Is there any easy way to do this via python?
Answer: If you are trying to do this with subprocess and want the result of the call,
then you are looking for `check_output`:
data = subprocess.check_output(["curl", URL])
However, I suggest you use the [requests](http://docs.python-
requests.org/en/latest/) library in Python for what you are doing. Full of
features that will make your life easier.
Here is a simple example on using requests:
Make sure you have installed requests first. It is not a built-in library. The
URL I linked will explain how to do this.
>>> import requests
>>> d = requests.get("http://www.google.ca")
>>> print(d)
<Response [200]>
There is a bunch of data you can get from that response. If you do a `d.text`,
you will get the text response. I suggest you read the doc to understand how
to use it. It is a fully featured library with tons of features.
I suggest looking through the docs for all the other stuff you want.
|
What value to use for execution_count in code cells of iPython notebooks?
Question: What is the best value to use as an input argument for `execution_count` when
generating an iPython notebook as a json file?
I understand it should be set to `null` but a json dump automatically puts
quotes around it so I perform a search-replace after creation and remove
these. However ipython notebook still isn't happy with it. I'm on Python 2.7,
iPython notebook 4.0.5.
import json, fileinput, re
# Create the cell of python code
code_cell = {
"cell_type": "code",
"execution count": "null",
"metadata": {
"collapsed": False,
"autoscroll": False,
},
"source": ["import os\n", "import numpy as np"],
"outputs": [],
}
# Create ipython notebook dictionary
nbdict = { 'metadata': {}, \
'nbformat': 4,
'nbformat_minor': 0,
'cells': [code_cell]
}
with open("test.ipynb", 'w') as outfile:
json.dump(nbdict, outfile)
# Strip double quotes from execution_count argument.
file = fileinput.FileInput("test.ipynb", inplace=True)
for line in file:
print(re.sub(r'"null"', 'null', line))
file.close()
The error I receive is `Notebook Validation failed: Additional properties are
not allowed (u'execution count' was unexpected):`
This is even after I have `"execution count": null,` in the json file.
Answer: Looks like you are missing an underscore: `"execution count"` should be
`"execution_count"`, and then the `null` should work fine. Also, if you store
the Python value `None` instead of the string `"null"`, `json.dump` will emit
a bare `null` and you can drop the search-and-replace step entirely.
|
numberplate validation problems in python
Question: I am doing a practice assessment for school, and my friends and I have been
having problems with this code
if len(numberplate)==7:
if numberplate[0].isalpha:
if numberplate[1].isalpha:
if numberplate[2].isnumeric:
if numberplate[3].isnumeric:
if numberplate[4].isalpha:
if numberplate[5].isalpha:
if numberplate[6].isalpha:
What it should do:
* if the `numberplate` is `GT45HOK` then `print('valid input')`
* if it is `GT5HYKU` then it should `print('invalid input')`
But because it's 7 letters long it `print('valid input')`
Answer: As you can read in _jonrsharpe_ 's comments, your code doesn't work because
`isalpha` without parentheses is a bound method object, which is always truthy.
`isalpha()` (and likewise `isnumeric()`) is what you need; calling it returns
the boolean you're after.
_However_ , I would use a [regular
expression](https://developers.google.com/edu/python/regular-expressions)
here. It matches 2 capital alphabetic letters, 2 numbers, and 3 more capital
letters.
import re
numberplate = 'GT45HOK'
    r = re.compile('[A-Z]{2}[0-9]{2}[A-Z]{3}$')  # match() anchors the start; $ rejects extra trailing characters
if r.match(numberplate):
print 'valid input'
else:
print 'invalid input'
|
How do I prepare this text file for nearest neighbor algorithm?
Question: I want to prepare the text file so I can use k-NN clustering in python 2.7. I
have no idea how to approach this. Can anybody help?
The dataset is here: <http://textuploader.com/ayhqc>
Columns are separated by commas and rows are separated by newlines. Each
column describes one individual patient. The attributes are in rows in the
following order: plasma_glucose, bp, test_result, skin_thickness,
num_pregnancies, insulin, bmi, pedigree, age.
Answer: You need to transpose the data first (below) and then you can use an out of
the box algo.
    import numpy as np

    f1 = open('data.txt', 'r')   # hypothetical filename for the dataset described above
    data = np.zeros((100, 9))    # np.zeros takes a shape tuple: (patients, attributes); adjust 100 to the file
    j = 0                        # j indexes the attribute (one file row per attribute)
    for line in f1:
        row = line.split(',')
        for i in range(len(row)):
            x = row[i].strip()   # strip the trailing newline before comparing/converting
            if x == 'positive':
                x = 1
            elif x == 'negative':
                x = -1
            else:
                x = float(x)
            data[i, j] = x       # transpose: file column i (a patient) becomes array row i
        j += 1
    f1.close()
|
Accessing the state of a Python Multiprocessing.Process subclass from the parent process
Question: I am creating a simple TCP server as a stub so I can test a script that
operates a piece of test equipment, without having to have the equipment
there. The server should sit there waiting for a connection and then maintain
and update a state variable (just a list of 6 integers) in response to
commands that it receives. The parent process (a unit test class for example)
should then be able to interrogate the state at any time.
The interface of the server should be as simple as:
server = StubServer()
server.start()
'''
the client script connects with the server and
some stuff happens to change the state
'''
newState = server.getState() # newState = [93,93,93,3,3,45] for example
server.terminate()
I have subclassed Multiprocessing.Process to do this and I can start the
server off no problem. When I first tested this, in the getState() method I
just returned the instance variable _state but I found this was always just
the initial state. After some digging I couldn't find any similar examples out
there. Lots about subclassing Process, but not this specific problem.
Eventually I put together the below which uses an internal Queue() to store
the state on but this looks messy and clunky to me. Is there a better way to
do this?
import socket
from multiprocessing import Process, Queue
class StubServer(Process):
_port = 4001
_addr = '' # all addresses 0.0.0.0
_sock = None
_state = []
_queue = None
def __init__(self, initState=[93,93,93,93,93,93]):
super(StubServer, self).__init__()
self._queue = Queue()
self._state = initState
def run(self):
# Put the state into the queue
self._queue.put(self._state)
self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self._sock.bind((self._addr, self._port))
self._sock.listen(1)
waitingForConnection = True
'''
Main loop will continue until the connection is terminated. if a connection is closed, the loop returns
to the start and waits for a new connection. This means multiple tests can be run with the same server
'''
while 1:
# Wait for a connection, or go back and wait for a new message (if a connection already exists)
if waitingForConnection:
waitingForConnection = False
conn, addr = self._sock.accept()
chunk = ''
chunks = []
while '\x03' not in chunk: # '\x03' is terminating character for a message
chunk = conn.recv(8192)
if not chunk: # Connection terminated, start the loop again and wait for a new connection
waitingForConnection = True
break
chunks.append(chunk)
message = ''.join(chunks)
# Now do some stuff to parse the message, and update the state if we received a command
if isACommand(message):
_updateState(message)
conn.close()
return
def getState(self):
# This is called from the parent process, so return the object on the queue
state = self._queue.get()
# But put the state back in the queue again so it's there if this method is called again before a state update
self._queue.put(state)
return state
def _updateState(self, message):
# Do some stuff to figure out what to update then update the state
self._state[updatedElementIndex] = updatedValue
# Now empty the queue and put the new state in the queue
while not self._queue.empty():
self._queue.get()
self._queue.put(self._state)
return
Answer: As its name implies, `multiprocessing` uses different processes. At some
point, [`fork()`](https://en.wikipedia.org/wiki/Fork_%28system_call%29) is
called and the child process copies the memory of the parent, after which the
child has its own memory that is no longer shared with the parent process.
Unfortunately, you _have_ to use [tools
available](https://docs.python.org/3/library/multiprocessing.html#exchanging-
objects-between-processes) to share memory between processes, leading to the
code overhead you mention.
You can look into other ways to do parallel processing with shared memory,
but keep in mind that sharing memory between threads/processes/nodes etc. is
never easy.
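That said, for a small fixed-size state like your list of six integers, a `multiprocessing.Array` is often less clunky than the queue juggling above. A minimal sketch of just the state-sharing part (not the full server):
    from multiprocessing import Process, Array

    class StubServer(Process):
        def __init__(self, initState=[93, 93, 93, 93, 93, 93]):
            super(StubServer, self).__init__()
            # 'i' = C signed int; the Array lives in shared memory,
            # so parent and child both see the same six slots
            self._state = Array('i', initState)

        def getState(self):
            # called from the parent process; reads shared memory directly
            return list(self._state)

        def _updateState(self, index, value):
            # called from run() in the child process
            self._state[index] = value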
|
sort a YAML block mapping sequence in python
Question: I am trying to sort a YAML block mapping sequence in the way I want it... I
would like to have something like
depth: !!opencv-matrix
rows: 480
cols: 640
dt: f
data: 'x'
but everytime I do dumping, it changes to
cols: 640
data: 'x'
depth: !!opencv-matrix
dt: f
rows: 480
I checked on a simple and easy way to do it here with
ordering = ['ymlFile','depth', 'rows', 'cols', 'dt', 'data']
ordered_set = [{'depth': '!!opencv-matrix'}, {'rows' : depthSizeImg[0]}, {'cols' : depthSizeImg[1]}, {'dt' : type(img_d[0][0])}, {'data': ymlList.tolist()}]]
f = open(out_d, 'a')
f.write('%YAML:1.0 \n')
f.write(yaml.dump(data, default_flow_style=None, allow_unicode=False, indent = 4))
f.close()
But it made the YAML not in a nested way.
%YAML:1.0
- {depth: '!!opencv-matrix'}
- {rows: 323}
- {cols: 110}
- {dt: !!python/name:numpy.float32 ''}
- {data: 'x'}
How can I get the correct output?
Answer: In your example
ordered_set = [{'depth': '!!opencv-matrix'}, {'rows' : depthSizeImg[0]}, {'cols' : depthSizeImg[1]}, {'dt' : type(img_d[0][0])}, {'data': ymlList.tolist()}]]
You are dumping a list of dicts and that is what you get as YAML output.
Calling a list `ordered_set` doesn't make it a set, and including the YAML tags
(those `!!object_name` entries) in your data doesn't change them either.
The [YAML specification](http://www.yaml.org/spec/1.2/spec.html#id2761292)
uses `!!omap` (example 2.26), which combines the ordered structure of a
sequence with single-key mappings as elements:
depth: !!omap
- rows: 480
- cols: 640
- dt: f
- data: x
if you read that into PyYAML you get:
{'depth': [('rows', 480), ('cols', 640), ('dt', 'f'), ('data', 'x')]}
which means you cannot get the value of `rows` by simple keyword lookup. If
you dump the above to YAML you get the even more ugly:
depth:
- !!python/tuple [rows, 480]
- !!python/tuple [cols, 640]
- !!python/tuple [dt, f]
- !!python/tuple [data, x]
and you cannot get around that with PyYAML without defining some mapping from
`!!omap` to an ordereddict implementation and vice versa.
What you need is a more intelligent "Dumper" for your YAML ¹:
import ruamel.yaml as yaml
yaml_str = """\
depth: !!omap
- rows: 480
- cols: 640
- dt: f
- data: x
"""
data1 = yaml.load(yaml_str)
data1['depth']['data2'] = 'y'
print(yaml.dump(data1, Dumper=yaml.RoundTripDumper))
which gives:
depth: !!omap
- rows: 480
- cols: 640
- dt: f
- data: x
- data2: y
Or combine that with a smart loader (which doesn't throw away the ordering
information existing in the input), and you can leave out the `!!omap`:
import ruamel.yaml as yaml
yaml_str = """\
depth:
- rows: 480
- cols: 640 # my number of columns
- dt: f
- data: x
"""
data3 = yaml.load(yaml_str, Loader=yaml.RoundTripLoader)
print(yaml.dump(data3, Dumper=yaml.RoundTripDumper))
which gives:
depth:
- rows: 480
- cols: 640 # my number of columns
- dt: f
- data: x
(including the preserved comment).
* * *
¹ This was done using [ruamel.yaml](https://pypi.python.org/pypi/ruamel.yaml)
of which I am the author. You should be able to do the example with `data1` in
PyYAML with some effort, the other example cannot be done without a major
enhancement of PyYAML, which is exactly what ruamel.yaml is.
|
Python script breaks after first iteration
Question: I am trying to do the following in python. Read xml file with usernames, pw,
email addresses and so on. I then want to iterate through the passwords and
try to find them in a different file. if there is a match, print the username
and password. this is what i have so far:
import xml.etree.ElementTree as ET
tag = "multiRef"
tree = ET.parse('user.xml')
pwlist = open('swapped.txt', 'r')
for dataset in tree.iter(tag):
password = dataset.find("password")
pw = password.text
user = dataset.find("username").text
if pw in pwlist.read():
print user
print pw
Unfortunately, the script only prints one result and ends with no error or
anything. I know, that there have to be at least 250 results... Why is it
stopping after one result? Absolute python newb, a detailed explanation would
be much appreciated!
Answer: `if pw in pwlist.read()` shouldn't be inside the loop. The first time through
the loop, read() will return the entire file. The second time through it will
return nothing because you are at END OF FILE.
Read the contents into a list prior to the loop and then refer to the list
inside the loop.
Also consider the pattern `with open(...) as f` to make sure you are closing
the file, since I don't see an explicit close().
with open('swapped.txt', 'r') as f:
    pw_list = [line.strip() for line in f]  # strip newlines so the membership test works

for dataset in tree.iter(tag):
    ...
    if pw in pw_list:
        ...
|
Python: interpolate matrix for figure
Question: I have the following matrix R:
R
array([[ 0.014985 , 0.01499475, 0.01508112, 0.01588764, 0.02019902, 0.03698812, 0.12376358],
[ 0.547997 , 0.00300703, 0.00306113, 0.00361317, 0.23311141, 0.41010791, 0.65683355],
[ 0.7739985 , 0.48050374, 0.00157832, 0.32448644, 0.61655571, 0.70505395, 0.82841677],
[ 0.9547997 , 0.89610075, 0.75911978, 0.86489729, 0.92331114, 0.94101079, 0.96568335],
[ 0.97739985, 0.94805037, 0.87955989, 0.93244864, 0.96165557, 0.9705054 , 0.98284168]])
That is a 5x7 matrix where the rows correspond to:
tc = [100,500,1000,5000,10000]
While the columns to the following:
y = [0,.00001, .0001, .001, .01, 0.1, 1]
If I plot `R` as `pcolor` image, I get:
z_min, z_max = -np.abs(R).max(), np.abs(R).max()
fig,ax = plt.subplots(figsize=(10,7.5))
ax.pcolor(R,cmap='RdBu',vmin=z_min, vmax=z_max)
[](http://i.stack.imgur.com/42TDs.png)
I would like to interpolate the matrix in order to have a more detailed image:
xnew,ynew = np.mgrid[0:1:1000j,0:1000:1000j]
tck = interpolate.bisplrep(tc,y,R,s=0, kx=1, ky=1)
and I get the following error:
TypeError: len(x)==len(y)==len(z) must hold.
I would like to know a way to correctly interpolate the matrix `R` given `tc =
[100,500,1000,5000,10000]` and `y = [0,.00001, .0001, .001, .01, 0.1, 1]`
Answer: Use zoom from scipy.ndimage.interpolation
You can increase the "zoom" until it's good enough
from scipy.ndimage.interpolation import zoom
import matplotlib.pyplot as plt
plt.pcolor(R,cmap='RdBu')
plt.show()
R2 = zoom(R,2)
plt.pcolor(R2,cmap='RdBu')
plt.show()
R5 = zoom(R,5)
plt.pcolor(R5,cmap='RdBu')
plt.show()
R10 = zoom(R,10)
plt.pcolor(R10,cmap='RdBu')
plt.show()
You can also use different interpolation methods, see the function's docs
[](http://i.stack.imgur.com/bFYHm.png)
[](http://i.stack.imgur.com/vsbje.png)
[](http://i.stack.imgur.com/7yLtY.png)
[](http://i.stack.imgur.com/nN4pi.png)
|
Sorting hybrid lists in Python
Question: Say I have this list `l = ['the fountainhead','atlas shrugged', 1, 67, 12, 0]`
which I want to sort so that the final result be `['atlas shrugged', 'the
fountainhead', 0, 1, 12, 67]`. This means that strings and integers in the
list should be both sorted in ascending order. If I use `sorted()` the digits
appear first:
>>> sorted(l)
[0, 1, 12, 67, 'atlas shrugged', 'the fountainhead']
and if I use lambda, it cannot get past ordering the list as it contains
incomparable elements of different type:
>>> sorted(l, key=lambda x:int(x))
Traceback (most recent call last):
File "<pyshell#13>", line 1, in <module>
sorted(l, key=lambda x:int(x))
File "<pyshell#13>", line 1, in <lambda>
sorted(l, key=lambda x:int(x))
ValueError: invalid literal for int() with base 10: 'the fountainhead'
As far as I know, there is no way to utilize exception handling when working
with lambdas. This is just a simple example to illustrate the question. I'd
like to know if there is a general flexible way to sort hybrid lists in
Python. I have searched through relevant pages on SO but couldn't find a
general approach for this.
Answer: I think that the first problem here is that you have a hybrid list -- In
general, it becomes tricky to follow the code when you don't know what
operations you can do on which element since they aren't all the same type.
There is no general solution (since there is no _general_ way to know how to
compare objects of different types), but you can definitely handle the case
you have (if you must)...
import numbers
sorted(l, key=lambda x: (isinstance(x, numbers.Number), x))
should do the trick. Basically, my key function returns a `tuple`. Since
`tuple` (and all python sequences) are sorted lexicographically, python will
first look at the first element of the tuple -- In this case, it will be
`False` (`0`) if the item isn't a number so those elements will appear first.
Demo:
>>> l = [1, 2, 3, 'foo', 'bar', 4, 8, -10, 'baz']
>>> import numbers
>>> sorted(l, key=lambda x: (isinstance(x, numbers.Number), x))
['bar', 'baz', 'foo', -10, 1, 2, 3, 4, 8]
|
How to run single function in to a number of times in python
Question: I tried to run simple function n times by using the code below:
df = pd.DataFrame()
def repeat_fun(times, f, args):
for i in range(times): f(args)
def f(x):
g = np.random.normal(0, 1, 32)
mm = np.random.normal(491.22, 128.23, 32)
x = 491.22+(0.557*(mm -491.22))+(g*128.23*(np.sqrt(1-0.557**2)))
print x
repeat_fun(2,f,df)
But I want the results stored column-wise, one column per run. The function
above just prints each result as a single array. Can anyone help me figure out
this problem?
Answer: Hard to know what you mean, but I assume you want the results of `f` to be
stored as columns in a dataframe. If that's the case:
import pandas as pd
import numpy as np
df = pd.DataFrame()
def repeat_fun(times, f, args):
for i in range(times): f(i,args)
def f(iteration,df):
g = np.random.normal(0, 1, 32)
mm = np.random.normal(491.22, 128.23, 32)
x = 491.22+(0.557*(mm -491.22))+(g*128.23*(np.sqrt(1-0.557**2)))
df[iteration] = x
repeat_fun(2,f,df)
Run this and look at/print the contents of df and see if that helps.
|
How best to convert from azure blob csv format to pandas dataframe while running notebook in azure ml
Question: I have a number of large csv (tab delimited) data stored as azure blobs, and I
want to create a pandas dataframe from these. I can do this locally as
follows:
from azure.storage.blob import BlobService
import pandas as pd
import os.path
STORAGEACCOUNTNAME= 'account_name'
STORAGEACCOUNTKEY= "key"
LOCALFILENAME= 'path/to.csv'
CONTAINERNAME= 'container_name'
BLOBNAME= 'bloby_data/000000_0'
blob_service = BlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
# Only get a local copy if haven't already got it
if not os.path.isfile(LOCALFILENAME):
blob_service.get_blob_to_path(CONTAINERNAME,BLOBNAME,LOCALFILENAME)
df_customer = pd.read_csv(LOCALFILENAME, sep='\t')
However, when running the notebook on azure ML notebooks, I can't 'save a
local copy' and then read from csv, and so I'd like to do the conversion
directly (something like pd.read_azure_blob(blob_csv) or just
pd.read_csv(blob_csv) would be ideal).
I can get to the desired end result (pandas dataframe for blob csv data), if I
first create an azure ML workspace, and then read the datasets into that, and
finally using <https://github.com/Azure/Azure-MachineLearning-ClientLibrary-
Python> to access the dataset as a pandas dataframe, but I'd prefer to just
read straight from the blob storage location.
Answer: I think you want to use `get_blob_to_bytes` or `get_blob_to_text`; these
should output a string which you can use to create a dataframe as
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME, BLOBNAME)
df = pd.read_csv(StringIO(blobstring))
|
How do I reverse str(class) in Python?
Question: I need to define a new instance of a class from an informal string
representation of the same type of class. What is a clean way to do this in
Python?
program1.py:
fileHandle = open("filename.txt", "wb")
instance = className()
instance.a = 2
instance.b = 3.456
fileHandle.write(str(instance))
filename.txt (after running program1.py):
<className a=2, b=3.456>
program2.py:
instance = className()
with open("filename.txt", "r") as fileHandle:
for fileLine in fileHandle:
##### How do I grab the contents of the file line and get them in variables? #####
(instance.a, instance.b) = magicFunction(fileLine)
# I seem to have forgotten the contents of magicFunction(). Can someone remind me?
Answer: In general, the python `str` function is designed for printing things in a
**human-readable** way, not the computer-readable way you want here. If you
control `program1.py`, the `pickle` module could fit your needs.
Program1.py:
import pickle
[rest of code unrelated to printing]
pickle.dump(instance,fileHandle)
Program2.py:
instance = pickle.load(fileHandle)
|
Why zipfile module is_zipfile function cannot detect a gzip file?
Question: I am aware of this question [Why "is_zipfile" function of module "zipfile"
always returns "false"?](http://stackoverflow.com/questions/14174048/why-is-
zipfile-function-of-module-zipfile-always-returns-false). I want to seek some
more clarification and confirmation.
I have created a zip file in python using the gzip module.
If I check the zip file using the `file` command in OSX I get this
> file data.txt
data.txt: gzip compressed data, was "Slide1.html", last modified: Tue Oct 13 10:10:13 2015, max compression
I want to write a generic function to tell if the file is gzip'ed or not.
import gzip
import os
f = '/path/to/data.txt'
print os.path.exists(f) # True
with gzip.GzipFile(f) as zf:
print zf.read() # Print out content as expected
import zipfile
print zipfile.is_zipfile(f) # Give me false. Not expected
I want to use `zipfile` module but it always reports false.
I just want to have a confirmation that `zipfile` module is not compatible
with `gzip`. If so, why it is the case? Are `zip` and `gzip` considered
different format?
Answer: > I have created a zip file in python using the gzip module.
No you haven't. gzip doesn't create zip files.
> I just want to have a confirmation that zipfile module is not compatible
> with gzip.
Confirmed.
> If so, why it is the case?
A gzip file is a single file compressed with zlib with a very small header. A
zip file is multiple files, each optionally compressed with zlib, in a single
archive with a header and directory.
> Are zip and gzip considered different format?
Yes.
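If you just need a generic "is this gzip?" check, one common approach (a sketch, not part of the `gzip` or `zipfile` public API) is to test for the two magic bytes every gzip stream starts with:
    def is_gzip_file(path):
        # every gzip stream begins with the magic bytes 0x1f 0x8b
        with open(path, 'rb') as f:
            return f.read(2) == b'\x1f\x8b'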
|
Python - Reading from a text file that is being written in Windows
Question: I am using Windows 7, Python 2.7. I am trying to write to a text file with one
file ID in one program that continues writing new data/numbers for several
minutes.
In a separate program, after the writing has begun, I am trying to read from
the file that is being written in order to update and plot the data in the
file.
While the first program is writing the data, I am unable to read the data
until it finishes. Here is some example code to illustrate my point:
**Program 1:**
import time
fid = open("test1.txt", "w+")
for i in range(0, 5):
fid.write(str(i) + "\n")
print(i)
time.sleep(5)
fid.close()
**Program 2:**
fid = open("test1.txt", "r+")
dataList = fid.read().splitlines()
print(dataList)
fid.close()
Executing Program 2 while Program 1 is running does not allow me to see any
changes until Program 1 is completed.
Is there a way to fix this issue? I need to keep the reading and writing in
two separate programs.
Answer: This might be caused by buffering in program 1. You can try flushing the
output in program 1 after each write:
fid.write(str(i) + "\n")
fid.flush()
Another thing you can try is to run the Python interpreter in unbuffered mode
for program 1. Use the `python -u` option.
Also, do you need to open the file for update (mode `r+`) in program 2? If you
just want to read it, mode `r` is sufficient, or you can omit the mode when
calling `open()`.
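If program 2 should keep showing new lines as they arrive instead of being re-run, a simple polling reader is one option (a sketch; tune the sleep interval to your needs):
    import time

    with open("test1.txt", "r") as fid:
        while True:
            line = fid.readline()
            if line:
                print(line.rstrip())
            else:
                time.sleep(1)  # no new data yet, wait and retry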
|
Trying to get json data from URL using Python
Question: I am learning to get json data from a link and use that data later on. But I
am getting the error: "RuntimeError: maximum recursion depth exceeded while
calling a Python object"
Here is my code:
import json
import requests
from bs4 import BeautifulSoup
url = "http://example.com/category/page=2&YII_CSRF_TOKEN=31eb0a5d28f4dde909d3233b5a0c23bd03348f69&more_products=true"
header = {'x-requested-with': 'XMLHttpRequest'}
mainPage = requests.get(url, headers = header)
xTree = BeautifulSoup(mainPage.content, "lxml")
newDictionary=json.loads(str(xTree))
print (newDictionary)
**EDIT:** Okay I got the response data from using this slight change, here is
the new code:
import json
import requests
from bs4 import BeautifulSoup
url = "http://example.com/category/page=2&YII_CSRF_TOKEN=31eb0a5d28f4dde909d3233b5a0c23bd03348f69&more_products=true"
header = {'x-requested-with': 'XMLHttpRequest'}
mainPage = requests.get(url, headers = header)
print (mainPage.json())
Answer: Don't use beautiful soup to process a json http response. Use something like
requests:
url = "https://www.daraz.pk/womens-kurtas-shalwar-kameez/?pathInfo=womens-kurtas-shalwar-kameez&page=2&YII_CSRF_TOKEN=31eb0a5d28f4dde909d3233b5a0c23bd03348f69&more_products=true"
header = {'x-requested-with': 'XMLHttpRequest'}
t = requests.get(url, headers=True)
newDictionary=json.loads(t)
print (newDictionary)
The beautiful soup object can't be parsed with json.loads() that way.
If you have HTML data on some of those json keys then you can use beautiful
soup to parse those string values individually. If you have a key called
content on your json, containing html, you can parse it like so:
BeautifulSoup(newDictionary['content'], "lxml")
You may need to experiment with different parsers, if you have fragmentary
html.
|
python - " Import error no module named httplib2" in only some files
Question: When I import httplib2 in quickStart.py and run it from the terminal, it
works.
Now I import quickStart in another file main.py (a Google App Engine web app
python file) and try loading the page via localhost; it shows "Import error no
module named httplib2" even though both files are in the same directory. It
shows the following error:
ERROR 2015-10-13 12:41:47,128 wsgi.py:263]
Traceback (most recent call last):
File "C:\Program Files (x86)\google\google_appengine\google\appengine\runtime\wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "C:\Program Files (x86)\google\google_appengine\google\appengine\runtime\wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "C:\Program Files (x86)\google\google_appengine\google\appengine\runtime\wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "G:\dtuwiki\dtuwiki2\main.py", line 7, in <module>
import quickStart
File "G:\dtuwiki\dtuwiki2\quickStart.py", line 2, in <module>
import httplib2
ImportError: No module named httplib2
INFO 2015-10-13 18:11:47,398 module.py:809] default: "GET / HTTP/1.1" 500 -
main.py
import webapp2
import jinja2
import os
import cgi
import quickStart
template_dir = os.path.join(os.path.dirname(__file__), 'templates')
root_dir = os.path.dirname(__file__)
jinja_env = jinja2.Environment(loader=jinja2.FileSystemLoader([template_dir,root_dir]),autoescape=True)
def escapeHTML(string):
return cgi.escape(string , quote="True")
class Handler(webapp2.RequestHandler):
def write(self,*a,**kw):
#self.response.write(form %{"error":error})
self.response.out.write(*a,**kw)
def render_str(self,template,**params):
t = jinja_env.get_template(template)
return t.render(params)
def render(self , template ,**kw):
self.write(self.render_str(template,**kw))
quickStart.py
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools
import datetime
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
SCOPES = 'https://www.googleapis.com/auth/calendar.readonly'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Google Calendar API Python Quickstart'
def get_credentials():
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'calendar-python-quickstart.json')
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatability with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
I also tried following --
$ python -m pip install httplib2
Requirement already satisfied (use --upgrade to upgrade): httplib2 in /usr/local/lib/python2.7/site-packages
Cleaning up...
C:\WINDOWS\system32>python -m pip -V
pip 7.1.2 from C:\Python27\lib\site-packages (python 2.7)
C:\WINDOWS\system32>python -m pip list
google-api-python-client (1.4.2)
httplib2 (0.9.2)
Jinja2 (2.8)
oauth2client (1.5.1)
pip (7.1.2)
uritemplate (0.6)
virtualenv (13.1.2)
Answer: Google App Engine requires that any 3rd party modules be included inside the
application source tree in order to deploy it to App Engine. This means that
items inside `site-packages` will not be imported into an app running under
the development SDK and you will see an error similar to what you are
experiencing.
[Here are the
docs](https://cloud.google.com/appengine/docs/python/tools/libraries27?hl=en#vendoring)
on how to include libraries like `httplib2`.
The short of it is that you would need to `pip install -t some_dir <libname>`
and then add `some_dir` to your application's path inside of
`appengine_config.py`
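For example, a minimal `appengine_config.py` (assuming you ran `pip install -t lib httplib2`, so `some_dir` is a folder named `lib`):
    # appengine_config.py
    from google.appengine.ext import vendor

    # Add any libraries installed in the "lib" folder to the app's path
    vendor.add('lib')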
|
IPython autoreload changes in subdirectory
Question: I launch IPython from the main folder `/project`. Now if I make changes in the
file `/project/tests/some_module.py`, the changes fail to be autoreloaded in
IPython. Also, I get the following message after I save the changes and want
to run some other script in the prompt:
[autoreload of some_module failed: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/IPython/extensions/autoreload.py", line 229, in check
superreload(m, reload, self.old_objects)
ImportError: No module named some_module]
It seems it detects changes were made inside folder `/tests` but it cannot
import it. Can someone help me with this?
**Edit:** For better clarification: I launch IPython from the terminal in the
main folder. In this folder, I have another folder `tests`. Inside `tests` I
have two python files:
**some_module.py** :
def hello_world():
print "Hello World!!!"
**use_some_module.py** :
from some_module import hello_world
hello_world()
Once I launched IPython, further changes in _some_module.py_ won't be loaded
in IPython. For example, if I add a second `print "Hello Earth!!!"` in the
definition of `hello_world()`, and run `run tests/use_some_module.py`, I get
the error message shown above, and will only get the `"Hello World!!!"` print.
**Edit2** : I would like a solution where I don't need to either change the
working directory, or adding any search paths manually. I want it to be loaded
automatically with autoreload.
Answer: If all you need is to reload a changed file in Python just do the following:
from main import some_module
....
reload(some_module)
But if your reload purposes are really eager, you can do the following (taken
from [this question](http://stackoverflow.com/questions/5364050/reloading-
submodules-in-ipython)):
%load_ext autoreload
%autoreload 2
The previous code will reload all changed modules every time before executing
a new line.
_NOTE:_ You can also check [dreload](http://ipython.org/ipython-
doc/rel-0.10.1/html/interactive/reference.html#dreload) which does a recursive
reload of a module, and [%run](http://ipython.org/ipython-
doc/rel-0.10.1/html/interactive/tutorial.html#the-run-magic-command) which
allows you to run any python script and load all of its data directly into the
interactive namespace.
Hope it helps,
|
for loop returning the last object not the preceding ones python
Question:
def excel(vendor_ids):
for i in vendor_ids:
t = Test()
c = pycurl.Curl()
c.setopt(c.URL, (str("https://api.box.com/2.0/folders/%s")%(i)))
c.setopt(pycurl.HTTPHEADER, ['Authorization: Bearer %s'%(access_token)])
c.setopt(c.WRITEFUNCTION, t.body_callback)
c.perform()
c.close()
contents=(t.contents)
#print(contents)
jsondict=(json.JSONDecoder().decode(contents))
collect=(jsondict['item_collection'])
ids= (collect['entries'])
dic=[]
for k in ids:
print(k)
return k
K=excel(vendor_ids)
when i print the following print but when i return i only get the last one
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4322345554', u'name': u'rejected'}
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4392281882', u'name': u'incoming'}
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4392284514', u'name': u'rejected'}
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4866336745', u'name': u'imports'}
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4912855065', u'name': u'Incoming'}
{u'sequence_id': u'0', u'etag': u'0', u'type': u'folder', u'id': u'4912855189', u'name': u'Rejected'}
Answer: What you can do is store your data in a list and return it at the end.
Something like this:
lst=[]
for k in ids:
print(k)
lst.append(k)
return lst
But note that the outer loop over `vendor_ids` still only runs once, because
`return` exits the function on its first iteration. If collecting everything
into one list doesn't match your use case, you could use `yield` instead of
`return` to turn the function into a generator and keep iterating.
|
How to select features, using python, when dataframe has features in rows
Question: My data frame is like this: px1, px2, ... px99 are placeholders and appear
in the data frame as columns. It holds values like 5569, 5282 etc., which are
the real features to be selected. There are many thousands of these features,
and I want to filter out the important ones. I am trying to use Random Forest.
I know I can rank the px columns with Random Forest, but how do I get at the
actual features embedded within them? I am using python.
**px1 px2 px3 px4 px5 px6 px7 px8 px9 px10**
5569 5282 93
5569 5280 93 9904
5569 5282 93 93 3893 8872 3897 9904
5569 5280 5551 93 93 3995 8607
5569 5280 93 8867
5282 5569 93 9904 93
Answer: You don't need more than 2 columns, because the column position carries no information, so:
df = pds.concat([df[['px1',col]].rename(columns={col:'px2'}) for col in df.columns],\
axis=0,join='outer').dropna()
Now, because you only consider the 1st variable, look at the distribution of the second column for each `px1` value:
for label,dist in df.groupby('px1')['px2']:
dist.hist(bins=len(dist.unique()),label=label)
|
Arrow-key events in VTK on Windows
Question: Why are arrow key-press events not forwarded to the vtkRenderWindowInteractor
on Windows? Is there a workaround? Is there a general difference between
arrow-key events on Windows and Mac?
I can reproduce the problem with the following sample code. On Mac OS, I see
'Up', 'Down', 'Left' and 'Right' if I press the arrow keys. But on Windows, I
don't see anything (the callback is not entered). I wondered why this is the
case.
I use VTK 6.2.0 for python 2.7. I tested on Windows Server 2012 (similar to
Windows 8) and Windows 7 (on a virtual machine), showing both the same
behaviour.
# -*- coding: utf-8 -*-
import vtk
import sys
class ClickInteractorStyle(vtk.vtkInteractorStyleTrackballCamera):
def __init__(self, parent=None):
self.AddObserver("CharEvent",self.onKeyPressEvent)
def onKeyPressEvent(self, renderer, event):
key = self.GetInteractor().GetKeySym()
print key
self.OnChar()
def run():
sphereSource = vtk.vtkSphereSource()
sphereSource.SetCenter(0.,0.,0.)
sphereSource.SetRadius(1.)
sphereSource.Update()
sphereMapper = vtk.vtkPolyDataMapper()
sphereMapper.SetInputConnection(sphereSource.GetOutputPort())
sphereActor = vtk.vtkActor()
sphereActor.SetMapper(sphereMapper)
renderer = vtk.vtkRenderer()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtk.vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
style = ClickInteractorStyle()
renderWindowInteractor.SetInteractorStyle(style)
style.SetCurrentRenderer(renderer)
renderer.AddActor(sphereActor)
renderWindow.Render()
renderWindowInteractor.Start()
###############################################################################
# MAIN
###############################################################################
if __name__ == "__main__":
try:
run()
except:
print "Unexpected error: %s", sys.exc_info()[0]
raise
Answer: This fixed my problem: Replace
self.AddObserver("CharEvent",self.onKeyPressEvent)
with
self.AddObserver("KeyPressEvent",self.onKeyPressEvent)
|
Python - random card
Question: I am trying to make a program in python in which the computer displays a
random card when the user presses enter. The same card can not be printed
twice. My code so far is:
from random import *
used = []
number = randint(1,13)
if number == 11:
number = 'Jack'
elif number == 12:
number = 'Queen'
elif number == 13:
number = 'King'
suit = randint(1,4)
if suit == 1:
suit = 'Hearts'
elif suit == 2:
suit = 'Spades'
elif suit == 3:
suit = 'Diamonds'
elif suit == 4:
suit = 'Clubs'
end = False
while end == False :
get = raw_input('Press enter to get a card')
if get == 'end':
end == True
continue
card = number , suit
for i in (used):
if card == i:
continue
used.append(card)
print card
So far it only outputs the same card every time the user presses enter and the
program does not end when the user types 'end'. Can anybody find the mistakes?
Answer: The issue is that in the while loop you assign
card = number, suit
But `number` and `suit` were picked once before you enter the loop and
therefore will never change. You need to reassign `number` and `suit` each
time you loop.
Additionally, you are iterating over the cards in `used`, but initially there
are no cards there, so that loop will never execute. And since the only place
you add `card` to `used` is inside that `for` loop it will never add anything
to `used`.
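There is also a third bug: in `if get == 'end': end == True`, the `==` is a comparison, not an assignment, so `end` never changes and the loop can't terminate; you want `end = True`, or simply `break`. Putting it all together, a corrected loop might look like this (a sketch, assuming a hypothetical `draw_card()` helper that wraps your existing number/suit picking logic):
    used = []
    while True:
        if raw_input('Press enter to get a card') == 'end':
            break
        card = draw_card()   # re-picks number and suit on every draw
        if card in used:     # this card was already dealt, so skip it
            continue
        used.append(card)
        print card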
|
Django {% include %} Templatetag not working for a {% load %} in file
Question: I just rent a VPS to try to publish a Django project. In debug mode in my pc
all runs ok, but when i try to run debug mode in the VPS it comes an error:
Django Version: 1.8.5
Exception Type: SyntaxError
Exception Value:
invalid syntax (inbox.py, line 39)
Exception Location: /usr/lib/python3.4/importlib/__init__.py in import_module, line 109
Python Executable: /usr/bin/python3
Python Version: 3.4.3
Error during template rendering
In template /webapps/rusc/rusc/templates/base.html, error at line 66:
63 </div>
64
65 <div id="div_user">
66
{% include "usermenu.html" %}
67 </div>
68 </div>
In the "usermenu.html" i have this loads that are causing the problem
{% load notifications_tags %}
{% load inbox %}
If i load this in "base.html" the {% extends %} tag doesn't work:
Django Version: 1.8.5
Exception Type: SyntaxError
Exception Value:
invalid syntax (inbox.py, line 39)
Exception Location: /usr/lib/python3.4/importlib/__init__.py in import_module, line 109
Python Executable: /usr/bin/python3
Python Version: 3.4.3
In template /webapps/rusc/rusc/templates/rusc.html, error at line 1
invalid syntax
1
{% extends "base.html" %}
2
3
4 {% block content %}
5 <br />
6 <br />
and if i load on rusc.html i still have the **SyntaxError** but with no html
file attached, just the return with the render:
Environment:
Request Method: GET
Request URL: http://xx.xxx.xx.xx:8000/rusc/
Django Version: 1.8.5
Python Version: 3.4.3
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django_filters',
'django_tables2',
'django_messages',
'notifications',
'registration',
'autocomplete_light',
'post',
'etiqueta',
'recurs',
'usuari',
'buscador',
'cela',
'rusc.faq',
'micawber.contrib.mcdjango')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback:
File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/webapps/rusc/rusc/views.py" in ruscView
24. return render(request,"rusc.html", {'celas':celas,'notifications': notif})
File "/usr/local/lib/python3.4/dist-packages/django/shortcuts.py" in render
67. template_name, context, request=request, using=using)
File "/usr/local/lib/python3.4/dist-packages/django/template/loader.py" in render_to_string
98. template = get_template(template_name, using=using)
File "/usr/local/lib/python3.4/dist-packages/django/template/loader.py" in get_template
35. return engine.get_template(template_name, dirs)
File "/usr/local/lib/python3.4/dist-packages/django/template/backends/django.py" in get_template
30. return Template(self.engine.get_template(template_name, dirs))
File "/usr/local/lib/python3.4/dist-packages/django/template/engine.py" in get_template
167. template, origin = self.find_template(template_name, dirs)
File "/usr/local/lib/python3.4/dist-packages/django/template/engine.py" in find_template
141. source, display_name = loader(name, dirs)
File "/usr/local/lib/python3.4/dist-packages/django/template/loaders/base.py" in __call__
13. return self.load_template(template_name, template_dirs)
File "/usr/local/lib/python3.4/dist-packages/django/template/loaders/base.py" in load_template
23. template = Template(source, origin, template_name, self.engine)
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py" in __init__
190. self.nodelist = engine.compile_string(template_string, origin)
File "/usr/local/lib/python3.4/dist-packages/django/template/engine.py" in compile_string
261. return parser.parse()
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py" in parse
341. compiled_result = compile_func(self, token)
File "/usr/local/lib/python3.4/dist-packages/django/template/loader_tags.py" in do_extends
210. nodelist = parser.parse()
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py" in parse
341. compiled_result = compile_func(self, token)
File "/usr/local/lib/python3.4/dist-packages/django/template/defaulttags.py" in load
1159. lib = get_library(taglib)
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py" in get_library
1392. lib = import_library(taglib_module)
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py" in import_library
1331. mod = import_module(taglib_module)
File "/usr/lib/python3.4/importlib/__init__.py" in import_module
109. return _bootstrap._gcd_import(name[level:], package, level)
Exception Type: SyntaxError at /rusc/
Exception Value: invalid syntax (inbox.py, line 39)
Where can I load this data?
The strange thing is that I installed the same project on Windows and Ubuntu
and it works fine; this error only occurs on an Ubuntu VPS from OVH (as far as I know).
Any help would be appreciated.
**inbox.py** is a file of Django-messages: [https://github.com/arneb/django-
messages/blob/master/django_messages/templatetags/inbox.py](http://Django-
messages)
from django.template import Library, Node, TemplateSyntaxError
class InboxOutput(Node):
def __init__(self, varname=None):
self.varname = varname
def render(self, context):
try:
user = context['user']
count = user.received_messages.filter(read_at__isnull=True, recipient_deleted_at__isnull=True).count()
except (KeyError, AttributeError):
count = ''
if self.varname is not None:
context[self.varname] = count
return ""
else:
return "%s" % (count)
def do_print_inbox_count(parser, token):
"""
A templatetag to show the unread-count for a logged in user.
Returns the number of unread messages in the user's inbox.
Usage::
{% load inbox %}
{% inbox_count %}
{# or assign the value to a variable: #}
{% inbox_count as my_var %}
{{ my_var }}
"""
bits = token.contents.split()
if len(bits) > 1:
if len(bits) != 3:
raise TemplateSyntaxError("inbox_count tag takes either no arguments or exactly two arguments")
if bits[1] != 'as':
raise TemplateSyntaxError("first argument to inbox_count tag must be 'as'")
return InboxOutput(bits[2])
else:
return InboxOutput()
register = Library()
register.tag('inbox_count', do_print_inbox_count)
Answer: It seems the problem is the version of the django-messages app used on your
VPS. You are using Python 3.4, and if you just install django_messages with
pip you'll hit the old raise-exception syntax (on line 39):
raise TemplateSyntaxError, "first argument to inbox_count tag must be 'as'"
It was changed in the master branch <https://github.com/arneb/django-
messages/commit/659a3dd710051f54e3edc1d76cdfb910d7d04c1a#diff-2006ff4f62d84a3bee25f8b1823d6a5fL39>,
so if you update the django-messages app you will get rid of the SyntaxError.
|
Serial Commands for BrainTree Scientific, Inc. Syringe Pump (model bs-8000) rs232
Question: **UPDATE:** After ensuring my commands, serial config, and terminator ('\r')
were correct I got this working on 1 of 5 computers. This leads me to believe
it is an adapter issue. I plan on calling the company to see about ordering a
USB/RJ11 adapter (I had been using a Keyspan USB->DB9->RJ11 adapter on my mac)
* * *
I've read [this](http://stackoverflow.com/questions/27183874/not-receiving-
reply-from-syringe-pump-via-rs232-using-mscomm1-input) but I am still unable
to communicate with this pump. This is the python script I modified
([source](http://www.varesano.net/blog/fabio/serial%20rs232%20connections%20python)),
import time
import serial
# configure the serial connections (the parameters differs on the device you are connecting to)
ser = serial.Serial(
port='/dev/tty.USA19H142P1.1', # /dev/tty.KeySerial1 ?
baudrate=19200,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS
)
if not ser.isOpen():
ser.open()
print ser
commands = ['dia26.59', 'phn01', 'funrat', 'rat15mm', 'vol0.7', 'dirinf',
'phn02', 'funrat', 'rat7.5mm', 'vol.5', 'dirinf', 'phn03',
'funrat', 'rat15mm', 'vol0.7', 'dirwdr', 'phn04', 'funstp',
'dia26.59', 'phn01', 'funrat', 'rat15mm', 'vol1.0', 'dirinf',
'phn02', 'funrat', 'rat7.5mm', 'vol.5', 'dirinf', 'phn03',
'funrat', 'rat15mm', 'vol1.0', 'dirwdr', 'phn04', 'funstp']
for cmd in commands:
print cmd
ser.write(cmd + '\r')
time.sleep(1)
out = ''
while ser.inWaiting() > 0:
out += ser.read(1)
if out != '':
print '>>' + out
tty ports:
$ ls -lt /dev/tty* | head
crw--w---- 1 nathann tty 16, 0 Oct 13 14:13 /dev/ttys000
crw-rw-rw- 1 root wheel 31, 6 Oct 13 14:12 /dev/tty.KeySerial1
crw-rw-rw- 1 root wheel 31, 8 Oct 13 13:52 /dev/tty.USA19H142P1.1
crw-rw-rw- 1 root wheel 2, 0 Oct 13 10:00 /dev/tty
crw-rw-rw- 1 root wheel 31, 4 Oct 12 11:34 /dev/tty.Bluetooth-Incoming-Port
crw-rw-rw- 1 root wheel 4, 0 Oct 12 11:34 /dev/ttyp0
crw-rw-rw- 1 root wheel 4, 1 Oct 12 11:34 /dev/ttyp1
crw-rw-rw- 1 root wheel 4, 2 Oct 12 11:34 /dev/ttyp2
crw-rw-rw- 1 root wheel 4, 3 Oct 12 11:34 /dev/ttyp3
crw-rw-rw- 1 root wheel 4, 4 Oct 12 11:34 /dev/ttyp4
I'm not even sure if it is sending the commands. Not getting any errors or
feedback. Nothing is happening on the pump and nothing is getting returned
(`out` string is always empty)
This is my output:
(sweetcrave)nathann@glitch sweetcrave (master) $ python pumptest.py
Serial<id=0x1093af290, open=True>(port='/dev/tty.USA19H142P1.1', baudrate=19200, bytesize=7, parity='O', stopbits=2, timeout=None, xonxoff=False, rtscts=False, dsrdtr=False)
dia26.59
>>
phn01
funrat
rat15mm
vol0.7
^CTraceback (most recent call last):
File "pumptest.py", line 28, in <module>
time.sleep(1)
KeyboardInterrupt
My ultimate goal:
* set up pumps parameters
* there are three phases that are specified:
* phase 1: push liquid to end of tube
* phase 2: dispense liquid in specific rate and volume
* phase 3: pull liquid back up
* the liquid is pulled back up (phase 3) so that it won't drip from the manifold, and so the subject can't suck it out. As such, phase 1 is needed to push the
* liquid back to the outflow point.
* volume and dispense rate can be changed. Use the following formula:
* rate= volume/sec * 60
* example: .5/4 x 60 (deliver .5 ml over a 4 sec duration)=7.5
Answer: The pumps are very easy to talk to - but if you experience much trouble - then
there must be a problem waiting to be fixed.
Before you worry about sending commands to pumps from your programming code,
it's a good idea to test the pump is ready to make a computer connection.
From years of experience with these pumps I can tell you broken cables are the
MOST common problem at this level of communication difficulty; the second most
common is not plugging them into the correct hole on the back of the pump.
I suggest grabbing a known working application from a 3rd party - like mine
<http://www.SyringePumpPro.com>, installing it and using it to confirm that
your pump will communicate with a known functioning piece of software. If all
is well with the pumps and cables, SyringePumpPro will detect and display your
pump's activities in seconds. It won't cost you anything and it will let you
know the pump, serial adapter and cables are all working properly.
Your program...
I will leave aside the issue of whether your tty port is being opened etc,
however if you send the pumps anything they will answer - usually with a
sequence like
00S? for an unknown command.
Looking at your python code - I am concerned that you repeat the commands
twice. The pump only needs these commands uploaded once and will remember them
through power cycles.
Assuming your commands were getting to the pump, none of them would cause the
pump to pump - they load the pump's memory with what to do but don't actually
do it. You need the command RUN to get the pump to run what you're uploading.
The pump commands can all be uploaded in one upload and then RUN. Then it's
all about synchronizing the pumping and the stimulation in your python code.
That pumping sequence above can be done in a PPL or pump program language file
and uploaded once.
There's example PPL files in the back of the pump manual and the example you
might be interested in is Example 2.
It's called repeated dispense with suck back.
As it happens I made a looooong training video about this which is on youtube.
It might really help explain how the pumps work, how the pump's programming
language works and how to upload pump programs.
Good luck
|
Can't read csv data from gzip-compressed file which stores name of archived file with Pandas
Question: I am trying to read csv data from gzip archive file which also stores name of
the archived data file. The problem is that pandas.read_csv() picks the name
of the archived file and returns it as very first data entry in returned
DataFrame. How can I skip the name of the archived file? I looked at all
available options of pandas.read_csv() and could not find the one that would
allow me to do it.
Here is how I create my gzip archive file in python:
import pandas as pn
import numpy as np
import tarfile
a = np.ones((10, 8))
np.savetxt('ones.dat', a)
fh = tarfile.open('ones.tar.gz', 'w:gz')
fh.add('ones.dat', arcname='numpy_ones.dat')
fh.close()
f = pn.read_csv('ones.tar.gz', compression='gzip', sep='\s+', header=None)
In [32]: f
Out[32]:
0 1 2 3 4 5 6 7 8
0 numpy_ones.dat 1 1 1 1 1 1 1 1
1 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
2 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
3 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
4 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
5 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
6 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
7 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
8 1.000000000000000000e+00 1 1 1 1 1 1 1 NaN
9 NaN NaN NaN NaN NaN NaN NaN NaN NaN
I am using Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03). Numpy:
'1.9.2' Pandas: '0.16.2'
Many thanks, Masha
Answer: Use tarfile again:
fh = tarfile.open('ones.tar.gz', 'r:gz')
f = fh.extractfile('numpy_ones.dat')
df = pd.read_csv(f, delim_whitespace=True, header=None)
|
Python file reading and printing, with exceptions and terminations
Question: Hello I am a very new programmer who is self teaching Python. I have
encountered a very interesting problem and need some help in creating a
program for it. It goes like this
A hotel salesperson enters sales in a text file. Each line contains the
following, separated by semicolons: The name of the client, the service sold
(such as Dinner, Conference, Lodging, and so on), the amount of the sale, and
the date of that event. Write a program that reads such a file and displays
the total amount for each service category. Display an error if the file does
not exist or the format is incorrect.
* Prompt for the name of the file to process and issue an error message and terminate if that file can’t be opened
* Verify that each line has the correct number of items and terminate if it does not
* Verify that the dollar amount is a valid floating-‐point number and terminate if it is not
* Keep a list with the categories that are encountered (they may be different than below) and another list with the cumulative dollar amount for each category. These are two lists but the elements in one relate to the elements in the other (by position)
* Close the file when all the data has been processed
* Display the categories and the total for each one
Our Sample text file looks something like this
Bob;Dinner;10.00;January 1, 2015
Tom;Dinner;14.00;January 2, 2015
Anne;Lodging;125.00;January 3, 2015
Jerry;Lodging;125.00;January 4, 2015
Here is what I am trying to do. I am trying to get an understanding of this
and have some help from experts on Stack Overflow to solve this problem while
learning. Thank you everyone!
import sys
def main():
try:
line = infile.readline()
for line in infile:
inputFileName = input("Input file name: ")
infile = open(inputFileName, "r")
fields = line.split(";")
value = float(fields[1])
except:
print("Error: The file cannot be opened.")
sys.exit(1)
infile.close()
main()
Answer: Here's a basic sketch. This is untested so likely contains typos, logic errors
and such. Also, it doesn't check all of the error conditions you mentioned.
However, it should be enough to get your started. The main trick is to just
[throw an
exception](https://docs.python.org/3.2/reference/simple_stmts.html#raise)
where you encounter an error, and [catch
it](https://docs.python.org/3.2/reference/compound_stmts.html#except) where
you can deal with it. That immediately stops processing the file as you
wanted. The other trick is to keep a
[dictionary](https://docs.python.org/3.2/tutorial/datastructures.html) mapping
category to total so you can keep a running total by category.
def main():
# Req 1.1: ask for a filename
file_name = input("Input file name: ")
try:
# To keep things simple we do all the file processing
# in a separate function. That lets us handle
# any error in the file processing with a single
# except block
amount_by_category = process_file(file_name)
# Req 6: display the categories - python will
# display the contents of a data structure when we print() it
print('Totals: ', amount_by_category)
except Exception, e:
# Reqs 1-3: display errors
print('Error processing file:', e)
def process_file(file_name):
# Req 1.2: open the file
infile = open(file_name, 'r')
# Req 4.1: somewhere to remember the categories
amount_by_category = {}
# Reqs 2-4: we are dealing with a many line file
# Req 5: when we reach the end, python closes the file for us automatically
for line in infile:
# Req 2.1: each line should have 4 values separated by ;
fields = line.split(';')
# Req 2.2: does this line have 4 values?
if len(fields) != 4:
raise Exception('Expected 4 fields but found %s' % len(fields))
# Req 3: is the third value a number?
value = float(fields[2])
# Req 4.2: what category does this line belong to?
category = fields[1]
# Req 4.3.1: have we seen this category before?
if not category in amount_by_category:
# Req 4.3.2: accumulations start from 0
amount_by_category[category] = 0.0
# Req 4.4: increase the cumulative amount for the category
amount_by_category[category] += value
return amount_by_category
|
Python quick search in numpy array
Question: Does Numpy have a function for quick search of element in 2D array and return
its indexes? Mean for example:
a=54
array([[ 0, 1, 2, 3],
[ 4, 5, 54, 7],
[ 8, 9, 10, 11]])
So equal value will be `array[1][2]`. Of course I can make it using simple
loops- but I want something similar to:
if 54 in arr
Answer:
In [4]: import numpy as np
In [5]: my_array = np.array([[ 0, 1, 2, 3],
[ 4, 5, 54, 7],
[8, 54, 10, 54]])
In [6]: my_array
Out[6]:
array([[ 0, 1, 2, 3],
[ 4, 5, 54, 7],
[ 8, 54, 10, 54]])
In [7]: np.where(my_array == 54) #indices of all elements equal to 54
Out[7]: (array([1, 2, 2]), array([2, 1, 3])) #(row_indices, col_indices)
In [10]: temp = np.where(my_array == 54)
In [11]: zip(temp[0], temp[1]) # maybe this format is what you want
Out[11]: [(1, 2), (2, 1), (2, 3)]
|
LogisticRegression object has no attributes
Question: I'm a noob at both Django and scikit-learn trying to create a simple REST
server using these technologies to perform classification. So far I'm just
trying to get some sort of result to test that the controllers work, but the
program doesn't seem to detect any of my LogisticRegression object's
attributes.
My code:
from rest_framework.views import APIView
from .mixins import JSONResponseMixin
from django.http import HttpResponse
import numpy as np
from sklearn import svm
from sklearn.linear_model import LogisticRegression
import json
import pickle
class LogisticRegression(APIView):
def get(self, request):
return HttpResponse("Stub")
def post(self, request):
logreg = LogisticRegression()
array = '{"data":' + request.body + '}'
#print array
jobj= json.loads(array)
jarray = jobj['data']
matrix = np.asarray([[j['GravityX'], j['GravityY'], j['GravityZ'], j['true']] for j in jarray])
X = matrix[:, :3]
y = matrix[:, 3]
logreg.fit(X, y)
return HttpResponse("test")
And the result (created using Postman with a dummy data JSON in the request
body):
Request Method: POST
Request URL: http://localhost:8000/classify/logistic_regression
Django Version: 1.8.4
Exception Type: AttributeError
Exception Value:
'LogisticRegression' object has no attribute 'fit'
Exception Location: /Users/mart/myclassifier/classifierapi/views.py in post, line 31
Python Executable: /Users/mart/myclassifier/myclassifiervenv/bin/python
Python Version: 2.7.10
Python Path:
['/Users/mart/myclassifier',
'/Users/mart/myclassifier/myclassifiervenv/lib/python27.zip',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/plat-darwin',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/plat-mac',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/lib-tk',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/lib-old',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/lib-dynload',
'/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/mart/myclassifier/myclassifiervenv/lib/python2.7/site-packages']
Server time: Wed, 14 Oct 2015 02:54:32 +0000
I tried other attributes, and the corresponding no attribute error results.
Any ideas?
Answer: Your view class is itself named `LogisticRegression`, so inside `post()` the
name refers to your `APIView` subclass (which has no `fit` method), not to
scikit-learn's estimator. Changing
`from sklearn.linear_model import LogisticRegression`
to
`import sklearn.linear_model as lm`
and using
`logreg = lm.LogisticRegression()`
fixed it, because the module alias avoids the name collision.
|
Count attributes of many to many relation and filter by it
Question: I would like to do something slightly different than
[this](http://stackoverflow.com/questions/7883916/django-filter-the-model-on-
manytomany-count).
Suppose I have something like this in my models.py:
class Hipster(models.Model):
name = CharField(max_length=50)
has_iphone = BooleanField(default=True)
class Party(models.Model):
participants = models.ManyToManyField(Hipster, related_name="participants")
And then do:
hip_parties = Party.objects.filter(participants__has_iphone__istrue__count=4)
How can I do that?
UPDATE:
>>> Question.objects.filter(options__is_correct=True).annotate(options__count=Count('options')).filter(options__count=0)
[]
>>> q = Question.objects.get(id=49835)
>>> q.options.all()[0].is_correct
False
>>> q.options.all()[1].is_correct
False
>>> q.options.all()[2].is_correct
False
>>> q.options.all()[3].is_correct
False
>>> q.options.all()[4].is_correct
False
>>> q.options.all()[5].is_correct
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/patrickbassut/Programming/logus/lib/python2.7/site-packages/django/db/models/query.py", line 177, in __getitem__
return list(qs)[0]
IndexError: list index out of range
Answer: You can use annotations for this.
from django.db.models import Count
Party.objects.filter(
participants__has_iphone=True
).annotate(iphone_count=Count('participants')).filter(
iphone_count=4
)
|
Python script to download specific files from FTP and update the download directory
Question: I need some help creating a script that downloads multiple .csv files from
FTP every 24 hours, ignoring the old files and continuing to download the new
ones to stay up to date. I'm having trouble writing the pattern because the
names of the files vary from 01150728.csv, 01150904.csv to 02xxxxxx.csv,
03xxxxx.csv, and currently they have reached 30151007.csv. The script that I'm
currently using downloads all the files, but I need a line of code to do what
I described earlier.
from ftplib import FTP
import sys
import ftplib
import os
import fnmatch
os.chdir(r'______________') # Directory where the files need to be downloaded
ftp=ftplib.FTP('xxxxxxxx', 'xxxxx', 'xxxxxx') # ftp host info
ftp.cwd('______')
filematch='*csv'
for filename in ftp.nlst(filematch):
fhandle=open(filename, 'wb')
print 'Getting ' + filename
ftp.retrbinary('RETR '+ filename, fhandle.write)
fhandle.close()
ftp.quit()
Answer: You should keep a list or set of the files already fetched. The following
assumes you run the code once and don't exit.
from ftplib import FTP
import sys
import ftplib
import os
import fnmatch
os.chdir(r'______________') # Directory where the files need to be downloaded
ftp=ftplib.FTP('xxxxxxxx', 'xxxxx', 'xxxxxx') # ftp host info
ftp.cwd('______')
filematch='*csv'
import time
downloaded = []
while True: # runs forever
skipped = 0
for filename in ftp.nlst(filematch):
if filename not in downloaded:
fhandle=open(filename, 'wb')
print 'Getting ' + filename
ftp.retrbinary('RETR '+ filename, fhandle.write)
fhandle.close()
downloaded.append(filename)
else:
skipped += 1
print 'Downloaded %s, skipped %d files' % (downloaded[-1], skipped)
time.sleep(24*60*60) # sleep 24 hours after finishing last download
ftp.quit()
If you run the script each day, omit the while loop and use pickle or simply
write the list/set in a file, and load it at the start of the script.
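A minimal sketch of that persistence (assuming a state file named `downloaded.txt` next to the script):
    import os

    STATE_FILE = 'downloaded.txt'

    # load the names fetched on previous runs
    downloaded = set()
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            downloaded = set(line.strip() for line in f)

    # ...then, after each successful download:
    # downloaded.add(filename)
    # with open(STATE_FILE, 'a') as f:
    #     f.write(filename + '\n')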
|