How to add signed certificates to a bitcoin bip70 payment message? python
Question: References:
<https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki>
<https://github.com/aantonop/bitcoinbook/blob/develop/selected%20BIPs/bip-0070.mediawiki#paymentdetailspaymentrequest>
message PaymentRequest {
##optional uint32 payment_details_version = 1 [default = 1]; # 'x509+sha256' in this case.
##optional string pki_type = 2 [default = "none"];
optional bytes pki_data = 3;
##required bytes serialized_payment_details = 4;
optional bytes signature = 5;
}
The ones with `##` at the front are not a problem, I've solved them already.
`optional bytes pki_data` wants a byte encoded version of 'x509+sha256' so...
x509_bytes = open('/path/to/x509.der', 'rb').read()
pki_data = hashlib.sha256(x509_bytes)
Is the above correct?
Next `optional bytes signature`, 'digital signature over a hash of the
protocol buffer serialized variation of the PaymentRequest message'
I'm not sure how to achieve this so any suggestions would be greatly
appreciated.
Finally I have...
message X509Certificates {
repeated bytes certificate = 1;
}
`repeated bytes certificate` 'Each certificate is a DER [ITU.X690.1994] PKIX
certificate value. The certificate containing the public key of the entity
that digitally signed the PaymentRequest MUST be the first certificate.'
I only have the one cert I got from the comodo root authority so I think I
only need to supply the raw byte data of the cert to satisfy this one which
already exists in the form of `x509_bytes` above, so...
repeated bytes certificate = x509_bytes
Am I close??
Also I notice that `repeated bytes certificate` comes after `optional bytes
signature` but shouldn't I deal with that before `message PaymentRequest` so
that I can serialise it into my http response somehow?
EDIT:
For what it's worth, I'm aware that I need to import, instantiate and in some
cases serialise these messages before sending them as a request/response, but
what I'm looking for is how to manipulate and supply the required information.
Thanks :)
Answer: To add PKI data to the PaymentRequest object:
pki_data = X509Certificates()
certificates = [your_cert_der_data, root_cert_der_data]
for cert in certificates:
pki_data.certificate.append(cert)
request.pki_data = pki_data.SerializeToString()
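Note that `pki_data` is the serialized `X509Certificates` message, not a
SHA-256 hash of the certificate. Per BIP 70 you also need to set the PKI type
on the request before signing (this assumes `request` and `X509Certificates`
are instances of classes generated from BIP 70's paymentrequest.proto):
request.pki_type = 'x509+sha256'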
To add a signature (with
[pycrypto](https://www.dlitz.net/software/pycrypto/)):
from Crypto.Hash import SHA256
from Crypto.Signature import PKCS1_v1_5
from Crypto.PublicKey import RSA
# At this moment request object must contain serialized_payment_details, pki_type and pki_data
request.signature = "" # Add empty signature
request_hash = SHA256.new(request.SerializeToString())
private_key = RSA.importKey(private_key_der_data)
signer = PKCS1_v1_5.new(private_key)
request.signature = signer.sign(request_hash)
result = request.SerializeToString()
|
indentation error in python for raspberry pi
Question: I am creating a motion sensor with my Raspberry Pi in Python. I had this
working fine just printing "Motion Detected!", but now I'd like to add a
feature to also take a picture. I am getting an indentation error on the line
`camera = picamera.PiCamera()`. Removing this line causes an indentation error
on the next line. What is going on here? Can my defined callback only be 1
line of code?
# coding: utf-8
import RPi.GPIO as GPIO
import picamera
import time
GPIO.setmode(GPIO.BCM)
PIR_PIN = 7
GPIO.setup(PIR_PIN, GPIO.IN)
filename = 0
def MOTION(PIR_PIN):
print "Motion Detected!"
camera = picamera.PiCamera()
filename = 'image' + filenumber + '.jpg'
camera.capture(filename)
filenumber = filenumber + 1
print "PIR Module Test (CTRL+C to exit)"
time.sleep(2)
print "Ready"
try:
GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=MOTION)
while 1:
time.sleep(100)
except KeyboardInterrupt:
print " Quit"
GPIO.cleanup()
Answer: When looking at the indentation for the `MOTION` function, this is it (with
tabs being visualized by `\t`):
def MOTION(PIR_PIN):
print "Motion Detected!"
\t\t\t camera = picamera.PiCamera()
\t\t\t filename = 'image' + filenumber + '.jpg'
\t\t\t camera.capture(filename)
\t\t\t filenumber = filenumber + 1
As you can see, the `print` is indented with spaces, while the rest of the
lines are indented with 3 tabs first and then 3 spaces. So the print and the
following lines have different indentation, which throws Python off.
In Python indentation matters, and for a level of indentation, the indentation
needs to be exactly the same. So if you indent with 15 spaces first, you need
to keep that level for the whole function body.
In general, you should absolutely stay consistent in the way you indent:
Either use tabs, or use spaces. Don’t use both, and especially not on the same
line.
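For reference, a consistently indented version of the callback could look like
the sketch below. Two assumptions beyond the indentation fix: the original
mixes up `filename` and `filenumber`, so I use a single global counter, and I
close the camera after each capture so the next event can reopen it.
filenumber = 0
def MOTION(PIR_PIN):
    global filenumber  # persist the counter across callback calls
    print "Motion Detected!"
    camera = picamera.PiCamera()
    filename = 'image' + str(filenumber) + '.jpg'
    camera.capture(filename)
    camera.close()
    filenumber = filenumber + 1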
|
Adding a Second Text Area to a Flask App
Question: I have a simple Flask app. I tried to add a second text area to the app to add
another function. I copied the text area exactly, and I receive the following
message when I hit submit on the lower box:
Bad Request The browser (or proxy) sent a request that this server could not
understand.
Here is the code; all that was changed from the original app is adding a second
text area identical to the first. It appears OK, but the problem arises when I
hit submit, even if the text area name is changed. I don't understand how the
server sees a difference between the second and first box at this point. Here's
[the app that's being changed.](http://ven-diagram.herokuapp.com/) It's about
as sophisticated as you would expect given this question. Thx!
<!DOCTYPE html>
<html>
<head>
<title>Ven Diagram</title>
<style type="text/css">
#pagearea {
width: 100%;
margin: 0 auto;
}
textarea {
width: 48%;
padding: 0 0 0 0;
margin: 0 0 0 0;
}
input {
width: 80px;
height: 40px;
}
</style>
</head>
<body>
<div id="pagearea">
<h1>
This program allows you to match text. The text must be unicode.
Enter two text blocks to compare:
</h1>
<form action="/" method="post">
<textarea name="A" cols="100" rows="20"></textarea>
<textarea name="B" cols="100" rows="20"></textarea>
<br />
<input type="submit" value="Execute" />
</form>
</div>
<div id="pagearea">
<h1>
This will give add and subtract permutations for numbers.
</h1>
<form action="/" method="post">
<textarea name="A" cols="100" rows="20"></textarea>
<br />
<input type="submit" value="Execute" />
</form>
</div>
{% with messages = get_flashed_messages() %}
{% if messages %}
Results:
<pre>
{% for message in messages %}
{{ message }}
{% endfor %}
</pre>
{% endif %}
{% endwith %}
</body>
</html>
Here is the Python code:
#!flask/bin/python
import flask, flask.views
import os
import urllib
app = flask.Flask(__name__)
app.secret_key = "REDACTED"
class View(flask.views.MethodView):
def get(self):
return flask.render_template('index.html')
def post(self):
A = flask.request.form['A']
B = flask.request.form['B']
A = urllib.unquote(unicode(A))
B = urllib.unquote(unicode(B))
C = A.split()
D = B.split()
Both = []
for x in C:
if x in D:
Both.append(x)
for x in range(len(Both)):
Both[x]=str(Both[x])
Final = []
for x in set(Both):
Final.append(x)
MissingA = []
for x in C:
if x not in Final and x not in MissingA:
MissingA.append(x)
for x in range(len(MissingA)):
MissingA[x]=str(MissingA[x])
MissingB = []
for x in D:
if x not in Final and x not in MissingB:
MissingB.append(x)
for x in range(len(MissingB)):
MissingB[x]=str(MissingB[x])
#flask.flash("A:")
#flask.flash(A)
#flask.flash("B:")
#flask.flash(B)
#flask.flash("C:")
#flask.flash(C)
#flask.flash("D:")
#flask.flash(D)
flask.flash("Words in Both:")
flask.flash(Final)
flask.flash("Words in First Box Only:")
flask.flash(MissingA)
flask.flash("Words in Second Box Only:")
flask.flash(MissingB)
return self.get()
app.add_url_rule('/', view_func=View.as_view('main'), methods=['GET', 'POST'])
app.debug = True
if __name__ == "__main__":
# Bind to PORT if defined, otherwise default to 5000.
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
Answer: Even if you might think it's overkill, I've learned to use Flask
extensions wherever possible.
The immediate cause of the Bad Request is that your second form only contains
a textarea named `A`, while `post()` unconditionally reads
`flask.request.form['B']`; when that key is missing, Flask responds with a 400
Bad Request.
In your case I would recommend
[wtforms](http://wtforms.readthedocs.org/en/latest/) with [flask-
wtf](https://flask-wtf.readthedocs.org/en/latest/) to better handle any sort
of forms.
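A minimal sketch of what that could look like; the import paths follow the
`flask.ext.*` style of Flask-WTF current at the time, and the class and field
names are illustrative:
from flask.ext.wtf import Form
from wtforms import TextAreaField
from wtforms.validators import DataRequired
class CompareForm(Form):
    A = TextAreaField('A', validators=[DataRequired()])
    B = TextAreaField('B', validators=[DataRequired()])
In the view, `form = CompareForm()` followed by `form.validate_on_submit()`
then replaces the direct `flask.request.form['A']` lookups, so a missing field
produces a validation error instead of a 400.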
|
Convert Unicode to ANSI or UTF8 in python
Question: I am using Python 2.7.8 and I have a script which does parsing:
def parse():
with open('myfile.txt') as f:
.
.
.
print l
myfile.txt is Unicode-encoded. How can I add code to this script so that
it reads myfile.txt as ANSI, for example?
Answer: Using the `io` module solved my problem:
import io
def parse():
    with io.open(path,'r',encoding='utf_16') as f:
        .....
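If you actually need the data re-encoded as ANSI (on Western Windows that
usually means cp1252), a hedged sketch:
import io
def convert(path):
    # Read the UTF-16 file and write it back out as cp1252 ("ANSI");
    # characters with no cp1252 equivalent raise UnicodeEncodeError.
    with io.open(path, 'r', encoding='utf_16') as f:
        text = f.read()
    with io.open('myfile_ansi.txt', 'w', encoding='cp1252') as out:
        out.write(text)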
|
Generating 3D Gaussian distribution in Python
Question: I want to generate a Gaussian distribution in Python with the x and y
dimensions denoting position and the z dimension denoting the magnitude of a
certain quantity.
The distribution has a maximum value of 2e6 and a standard deviation
sigma=0.025.
In MATLAB I can do this with:
x1 = linspace(-1,1,30);
x2 = linspace(-1,1,30);
mu = [0,0];
Sigma = [.025,.025];
[X1,X2] = meshgrid(x1,x2);
F = mvnpdf([X1(:) X2(:)],mu,Sigma);
F = 314159.153*reshape(F,length(x2),length(x1));
surf(x1,x2,F);
In Python, what I have so far is:
x = np.linspace(-1,1,30)
y = np.linspace(-1,1,30)
mu = (np.median(x),np.median(y))
sigma = (.025,.025)
There is a NumPy function, numpy.random.multivariate_normal, which can supposedly
do the same as MATLAB's mvnpdf, but I am struggling to understand the
[documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html),
especially how to obtain the covariance matrix needed by
numpy.random.multivariate_normal.
Answer: As of scipy 0.14, you can use `scipy.stats.multivariate_normal.pdf()`:
<http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html>
import numpy as np
from scipy.stats import multivariate_normal
x, y = np.mgrid[-1.0:1.0:30j, -1.0:1.0:30j]
# Need an (N, 2) array of (x, y) pairs.
xy = np.column_stack([x.flat, y.flat])
mu = np.array([0.0, 0.0])
sigma = np.array([.025, .025])
covariance = np.diag(sigma**2)
z = multivariate_normal.pdf(xy, mean=mu, cov=covariance)
# Reshape back to a (30, 30) grid.
z = z.reshape(x.shape)
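To mirror the MATLAB `surf` call you can scale by the same constant the
question uses (314159.153, which brings the peak to roughly 2e6) and plot the
grid; a sketch assuming matplotlib is installed:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, 314159.153 * z)
plt.show()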
|
Right Format to Embed JSON in CSV
Question: I am trying to write a Python function that queries an API that returns various
JSON snippets, and I want to put each of these snippets (some are objects, most
are JSON arrays) into a .csv file.
What's the right way to escape all commas, [, ], " and other symbols so that
Excel can read it properly in the sheet?
Right now almost everything shifts after the first column of JSON in the file.
Parsing each json objects into their own columns is not what I'm looking to
do.
Answer: The `csv` module will take care of all of those things for you:
>>> import csv, json
>>> import StringIO
>>> outfile = StringIO.StringIO()
>>> writer = csv.writer(outfile)
>>> writer.writerow([json.dumps({"hello":"world"})]*3)
>>> print outfile.getvalue()
"{""hello"": ""world""}","{""hello"": ""world""}","{""hello"": ""world""}"
>>>
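Writing to a real file works the same way; a short sketch (Python 2, so the
file is opened in binary mode for the `csv` module):
import csv, json
rows = [{"hello": "world"}, {"data": [1, 2, 3]}]
with open('out.csv', 'wb') as f:
    writer = csv.writer(f)
    for obj in rows:
        # One JSON snippet per cell; csv quotes the commas, brackets
        # and double quotes so Excel reads each snippet as one column.
        writer.writerow([json.dumps(obj)])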
|
Java's FluentWait in Python
Question: In java selenium-webdriver package, there is a
[`FluentWait`](https://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/support/ui/FluentWait.html)
class:
> Each FluentWait instance defines the maximum amount of time to wait for a
> condition, as well as the frequency with which to check the condition.
> Furthermore, the user may configure the wait to ignore specific types of
> exceptions whilst waiting, such as NoSuchElementExceptions when searching
> for an element on the page.
In other words, it is something more than [implicit and explicit
wait](http://www.toolsqa.com/selenium-webdriver/implicit-explicit-n-fluent-wait/);
it gives you more control when waiting for an element. It can be very
handy and definitely has use cases.
Is there anything similar in [python selenium package](http://selenium-
python.readthedocs.org/), or should I implement it myself?
(I've looked through documentation for [Waits](http://selenium-
python.readthedocs.org/waits.html) \- nothing there).
Answer: I believe you can do this with Python; however, it isn't packaged as simply as
a FluentWait class. Some of this is covered in the documentation you linked,
but not extensively.
The WebDriverWait class has optional arguments for timeout, poll_frequency,
and ignored_exceptions, so you can supply them there. Then combine it with an
Expected Condition to wait for elements to appear, be clickable, etc. Here
is an example:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import ElementNotVisibleException, ElementNotSelectableException
driver = webdriver.Firefox()
# Load some webpage
wait = WebDriverWait(driver, 10, poll_frequency=1, ignored_exceptions=[ElementNotVisibleException, ElementNotSelectableException])
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//div")))
Obviously you can combine the wait/element into one statement but I figured
this way you can see where this is implemented.
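If you need a fully custom condition, closer to FluentWait's `until(Function)`,
`until()` also accepts any callable that takes the driver, for example:
element = wait.until(lambda d: d.find_element(By.XPATH, "//div"))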
|
logic for python web scraper for business names
Question: I am new to Python and was wondering if there is a way to get the business
name of a website through a Python script.
I have thousands of businesses whose names I need to validate, and was
wondering if it is possible to scale this up by looking at their website or
address and finding the registered business name for the address.
I want to ask this question here before I waste my research time on whether
this is even possible.
Thank you for any help in advance.
Answer: In certain cases, the page title of the website homepage could be an
approximation of the full business name.
The following is a very simple example of fetching a website homepage and
returning its `<title>` tag. You need
to install the requests and lxml libraries.
import requests
from lxml import etree
from StringIO import StringIO
parser = etree.HTMLParser()
urls = ['http://google.com', 'http://facebook.com', 'http://stackoverflow.com']
for url in urls:
r = requests.get(url)
html = r.text
tree = etree.parse(StringIO(html), parser)
title = tree.xpath('//title/text()')
print url, title
>>>
http://google.com ['Google']
http://facebook.com ['Welcome to Facebook - Log In, Sign Up or Learn More']
http://stackoverflow.com ['Stack Overflow']
In other cases, you might want to navigate to a 'Legal' or 'Contact Us' page
if you need to find the full legal business name. That's much trickier, because
the name isn't necessarily associated with any HTML tag; it's likely just free
text floating somewhere on the page.
|
Uploading files using Python requests module
Question: I need to upload a file using a SOAP endpoint URL. When I use the code below,
the files get loaded, but they are not in a readable
format. When I load the file using the SoapUI tool, it loads properly.
import requests
xml = '''<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:v1="http://s.sa.com/services/Attachment/v1.0">
<soapenv:Header/>
<soapenv:Body>
<v1:attachment>
<filename>FUZZY.csv</filename>
<data>cid:138641430598</data>
</v1:attachment>
</soapenv:Body>
</soapenv:Envelope>'''
target_url = 'https://s.sa.com:443/soatest/FileAttachmentService'
headers = {'Content-Type': 'text/xml','charset':'utf-8'}
r = requests.post(target_url,data=xml,headers=headers,auth=('3user1',''))
print 'r.text = ', r.text
print 'r.content = ', r.content
print 'r.status_code = ', r.status_code
New changes:-
files = {'file':open('./FUZZY.csv','rb')}
print files
r = requests.post(target_url,files=files,data=xml,headers=headers,auth=('p3user1',''))
Error:
Traceback (most recent call last):
File "soapcall_python.py", line 18, in <module>
r = requests.post(target_url,files=files,data=xml,headers=headers,auth=('p3user1',''))
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/api.py", line 88, in post
return request('post', url, data=data, **kwargs)
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/sessions.py", line 418, in request
prep = self.prepare_request(req)
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/sessions.py", line 356, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/models.py", line 297, in prepare
self.prepare_body(data, files)
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/models.py", line 432, in prepare_body
(body, content_type) = self._encode_files(files, data)
File "/opt/python2.7/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/models.py", line 109, in _encode_files
raise ValueError("Data must not be a string.")
ValueError: Data must not be a string.
Answer: You aren't sending the contents of the file anywhere. You're just sending a
reference to a file that doesn't exist anywhere that the server can see.
As the docs for [SOAP references to attachments](http://www.w3.org/TR/SOAP-attachments#SOAPReferenceToAttachements)
explain, the way you do this is to
send a MIME multipart message. If you're using the CID reference mechanism,
that `cid` isn't some arbitrary string; it has to match the `Content-ID`
header of a message in the MIME envelope.
The `requests` docs for [POST a Multipart-Encoded File](http://docs.python-
requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file) explain
how to send the contents of a file as a message within a MIME request;
briefly:
with open('FUZZY.csv', 'rb') as f:
files = {'file': f}
r = requests.post(target_url,
data=xml, headers=headers, auth=('3user1',''),
files=files)
However, this simple method doesn't give you access to the Content-ID that
will be generated under the covers for your message. So, if you want to use
the CID reference mechanism, you will need to generate the MIME envelope
manually (e.g., using
[`email.mime.multipart.MIMEMultipart`](https://docs.python.org/3.4/library/email.mime.html))
and send the entire thing as a `data` string.
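A rough sketch of that last idea, just to show where the Content-ID goes; this
is not a complete SOAP-with-Attachments implementation (the multipart boundary
and start parameter still have to be copied into the HTTP Content-Type header):
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
envelope = MIMEMultipart('related')
envelope.attach(MIMEText(xml, 'xml'))
attachment = MIMEApplication(open('FUZZY.csv', 'rb').read(), 'octet-stream')
# Must match the cid: reference inside the SOAP body.
attachment.add_header('Content-ID', '<138641430598>')
envelope.attach(attachment)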
|
Python nesting list comprehensions with variable depth
Question: I am writing a program where I use different methods to fit a dataset, and in
the final step I want to take a distribution over the models, and then test it
against a validation set to pick the optimal distribution. In order to do so,
I need lists that sum up to 1 (the total weight of all the models). In the
case of 3 models, I use the following code:
Grid = np.arange(0,1.1,0.1)
Dists = [[i,j,k] for i in Grid for j in Grid for k in Grid if i+j+k==1]
I am now looking for a way to generalize this to an arbitrary number of models,
say d, without specifying what d is beforehand. I have looked at np.tensordot
and np.outer, but couldn't figure out a way to make this work. Any ideas would
be appreciated. cheers, Leo
Answer: You are looking for
[`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product):
from itertools import product
Dists = [list(p) for p in product(Grid, repeat=3) if sum(p) == 1]
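One caveat: with `np.arange(0, 1.1, 0.1)` the grid values are inexact binary
floats (the grid's 0.7, for instance, is really 0.7000000000000001), so an
exact `sum(p) == 1` test can silently drop valid combinations. A sketch
generalizing to `d` models with a tolerance:
import numpy as np
from itertools import product
def weight_grids(d, step=0.1, tol=1e-9):
    # All length-d weight vectors on the grid whose entries sum to 1.
    grid = np.arange(0, 1 + step, step)
    return [list(p) for p in product(grid, repeat=d)
            if abs(sum(p) - 1) < tol]
Dists = weight_grids(3)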
|
python : speed dating & permutation
Question: I have 36 persons and 6 tables. I'd like to form 6 groups, one around each
table, then form 6 other groups, and 6 others, again and again, until everybody
has met everybody but nobody has met anyone twice.
So far I have come up with this script, but it produces repetitions:
people = [ [1,2,3,4,5,6],[7,8,9,10,11,12],[13,14,15,16,17,18],[19,20,21,22,23,24],[25,26,27,28,29,30],[31,32,33,34,35,36] ]
def perm():
z = 0
for X in people:
for r in range(0,z):
f = X.pop()
X.insert(0,f)
z +=1
def calcul():
for q in range(0,6):
table_1 = []
table_2 = []
table_3 = []
table_4 = []
table_5 = []
table_6 = []
for r in range(0,6):
table_1.append(people[r][0])
table_2.append(people[r][1])
table_3.append(people[r][2])
table_4.append(people[r][3])
table_5.append(people[r][4])
table_6.append(people[r][5])
print(table_1)
print(table_2)
print(table_3)
print(table_4)
print(table_5)
print(table_6)
print '--'
perm()
calcul()
and the output is :
[1, 7, 13, 19, 25, 31]
[2, 8, 14, 20, 26, 32]
[3, 9, 15, 21, 27, 33]
[4, 10, 16, 22, 28, 34]
[5, 11, 17, 23, 29, 35]
[6, 12, 18, 24, 30, 36]
--
[1, 12, 17, 22, 27, 32]
[2, 7, 18, 23, 28, 33]
[3, 8, 13, 24, 29, 34]
[4, 9, 14, 19, 30, 35]
[5, 10, 15, 20, 25, 36]
[6, 11, 16, 21, 26, 31]
--
[1, 11, 15, 19, 29, 33]
[2, 12, 16, 20, 30, 34]
[3, 7, 17, 21, 25, 35]
[4, 8, 18, 22, 26, 36]
[5, 9, 13, 23, 27, 31]
[6, 10, 14, 24, 28, 32]
--
[1, 10, 13, 22, 25, 34]
[2, 11, 14, 23, 26, 35]
[3, 12, 15, 24, 27, 36]
[4, 7, 16, 19, 28, 31]
[5, 8, 17, 20, 29, 32]
[6, 9, 18, 21, 30, 33]
--
[1, 9, 17, 19, 27, 35]
[2, 10, 18, 20, 28, 36]
[3, 11, 13, 21, 29, 31]
[4, 12, 14, 22, 30, 32]
[5, 7, 15, 23, 25, 33]
[6, 8, 16, 24, 26, 34]
--
[1, 8, 15, 22, 29, 36]
[2, 9, 16, 23, 30, 31]
[3, 10, 17, 24, 25, 32]
[4, 11, 18, 19, 26, 33]
[5, 12, 13, 20, 27, 34]
[6, 7, 14, 21, 28, 35]
--
Can someone explain to me why? And maybe how to get the result? Thanks a lot!
Answer: > **Edit** : It appears the following algorithm works only for ~~odd N~~
>
> **Edit2** : I've updated the code to include an automated test of the
> requirements. This algorithm only works if N is **prime** You can verify
> this by running the program with any prime number for N, and its odd
> successor, e.g. 53 and 55 (comment out `print_table_perms` in this case!)
>
> **Edit3** : Apparently this is a famous open problem in Mathematics
> <http://math.stackexchange.com/questions/924326/diner-permutations>
To satisfy the requirements (everyone sits with everyone else, and never sits
with the same person twice), you need N+1 rounds.
I came up with the following algorithm by working it out on paper with N=3
1 2 3
4 5 6
7 8 9
--
1 5 9
4 8 3
7 2 6
--
1 8 6
4 2 9
7 5 3
--
1 4 7
2 5 8
3 6 9
--
The algorithm works as follows: in each successive round `i`, build row `j` by
tracing the diagonal from the current `0th` element in that row, wrapping
around diagonally. You can trace this visually in the first three rounds. The
last round is a transposition of the initial matrix, because these "columns"
never have a chance to mix. In the program below we print the transposition
first.
Here's the code:
from copy import deepcopy
def gen_tables(N):
tables = []
x = 1
for i in xrange(N):
tables.append(range(x, x + N))
x += N
return tables
def print_tables(tables):
for table in tables:
print " ".join(map(str, table))
print
def print_table_perms(perms):
for perm in perms:
print_tables(perm)
def gen_table_perms(tables):
perms = []
N = len(tables[0])
for table in tables:
assert(len(table) == N)
# first, add the "columns", who won't be mixed together
perms.append(map(list, zip(*tables)))
current_tables = deepcopy(tables)
next_tables = deepcopy(tables)
# next, mix the columns with a diagonal shift (mod N)
for i in xrange(N):
perms.append(deepcopy(current_tables))
for j in xrange(N):
for k in xrange(N):
next_tables[j][k] = current_tables[(j + k) % N][k]
(current_tables, next_tables) = (next_tables, current_tables)
return perms
def verify_table_perms(perms):
N = len(perms[0][0])
expect = set((x for x in xrange(1, N * N + 1)))
v = {}
for i in xrange(1, N * N + 1):
v[i] = set((i,))
for perm in perms:
for table in perm:
for seat in table:
v[seat].update(table)
for s in v.values():
assert s == expect, s
tables = gen_tables(6)
perms = gen_table_perms(tables)
verify_table_perms(perms)
print_table_perms(perms)
Here's the output from this program:
1 7 13 19 25 31
2 8 14 20 26 32
3 9 15 21 27 33
4 10 16 22 28 34
5 11 17 23 29 35
6 12 18 24 30 36
--
1 2 3 4 5 6
7 8 9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
25 26 27 28 29 30
31 32 33 34 35 36
--
1 8 15 22 29 36
7 14 21 28 35 6
13 20 27 34 5 12
19 26 33 4 11 18
25 32 3 10 17 24
31 2 9 16 23 30
--
1 14 27 4 17 30
7 20 33 10 23 36
13 26 3 16 29 6
19 32 9 22 35 12
25 2 15 28 5 18
31 8 21 34 11 24
--
1 20 3 22 5 24
7 26 9 28 11 30
13 32 15 34 17 36
19 2 21 4 23 6
25 8 27 10 29 12
31 14 33 16 35 18
--
1 26 15 4 29 18
7 32 21 10 35 24
13 2 27 16 5 30
19 8 33 22 11 36
25 14 3 28 17 6
31 20 9 34 23 12
--
1 32 27 22 17 12
7 2 33 28 23 18
13 8 3 34 29 24
19 14 9 4 35 30
25 20 15 10 5 36
31 26 21 16 11 6
--
**Edit2:** with the automated test, this is the output
Traceback (most recent call last):
File "table_perms.py", line 65, in <module>
verify_table_perms(perms)
File "table_perms.py", line 61, in verify_table_perms
assert s == expect, s
AssertionError: set([1, 2, 3, 4, 5, 6, 7, 8, 12, 13, 14, 15, 17, 18, 19, 20, 22, 24, 25, 26, 27, 29, 30, 31, 32, 36])
Python does have
[`itertools.permutations`](https://docs.python.org/2/library/itertools.html#itertools.permutations),
but it's not very useful in this case, as we don't want _all_ permutations, we
just want a set of permutations that satisfy the requirements.
|
Django CMS test - can't find namespace
Question: I've got a very strange problem with Django CMS tests. When I run
`./manage.py test --settings=my_project.test_settings` I get this error:
> ERROR: test_guest_list_view (apps.news.tests.test_views.NewsListViewTest)
> Tests if guest can't see disabled entries
> \----------------------------------------------------------------------
> Traceback (most recent call last): File
> "/home/robert/work/projects/my_project/apps/news/tests/test_views.py", line
> 52, in test_guest_list_view response = self.client.get(self._get_list_url())
> File "/home/robert/work/projects/my_project/apps/news/tests/test_views.py",
> line 17, in _get_list_url return reverse("news:list") File
> "/home/robert/.virtualenvs/my_project/local/lib/python2.7/site-
> packages/django/core/urlresolvers.py", line 532, in reverse key)
> NoReverseMatch: u'news' is not a registered namespace
**But when I run tests only for that app, everything works fine - all tests
pass.**
That's my very simple test class so far:
# -*- coding: utf-8 -*-
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.core.urlresolvers import reverse
from django.test.utils import override_settings
from cms.test_utils.testcases import CMSTestCase
from apps.accounts.tests.factories import CustomUserFactory
from .factories import NewsFactory
from ..models import News
class BaseNewsTestCase(CMSTestCase):
def _get_list_url(self):
"""Returns URL to objects list"""
return reverse("news:list")
def _create_data_structure(self):
"""Created test data"""
# add objects
self.disabled = NewsFactory(is_visible=False)
self.enabled = NewsFactory()
NewsFactory()
NewsFactory()
self.user = CustomUserFactory(username='user', password='user')
# privileged_user
self.privileged_user = CustomUserFactory(username='p_user',
password='p_user')
# add permissions
content_type = ContentType.objects.get_for_model(News)
permissions_list = ('add_news', 'change_news', 'delete_news')
permissions = Permission.objects.filter(content_type=content_type,
codename__in=permissions_list)
self.privileged_user.user_permissions.add(*permissions)
@override_settings(ROOT_URLCONF='apps.news.tests.urls')
class NewsListViewTest(BaseNewsTestCase):
def test_guest_list_view(self):
"""Tests if guest can't see disabled entries"""
self._create_data_structure()
response = self.client.get(self._get_list_url())
objects = response.context['object_list']
self.assertEqual(len(objects), 3)
for obj in objects:
self.assertNotEqual(obj, self.disabled)
and test urls:
# -*- coding: utf-8 -*-
from django.contrib import admin
from django.conf.urls import url, patterns, include
urlpatterns = patterns(
'',
url(r'^admin/', include(admin.site.urls)),
url(r'^news/', include('apps.news.urls', namespace='news')),
url(r'', include('cms.urls')),
)
Any clue what can cause that problem? I followed this guide to test my CMS
apphook: (<http://django-cms.readthedocs.org/en/latest/extending_cms/testing.html>)
I have the same test pattern in a different app in that project, but it doesn't
throw that error.
Answer: I found a solution. Instead of using
`@override_settings(ROOT_URLCONF='myapp.tests.urls')` for my TestCases, as
suggested in [Django CMS docs](http://django-
cms.readthedocs.org/en/latest/extending_cms/testing.html#resolving-view-names)
I used the Django way found
[here](https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.SimpleTestCase.urls).
So for each TestCase I do this, for example:
class NewsListViewTest(CMSTestCase):
urls = 'apps.news.tests.urls'
|
Send mail to an IP Address using python
Question: So I am trying to send mails via a Python script. It works fine using the
usual format of the receiver address, "user@domain.tld". When I now try to
use the script with a receiver "user@[IP-Address]", all my debug output looks
good and the sendmail method works, but the mail is never received. I get the
IP address via dig from my terminal.
This is my method expecting a receiver as parameter (cut some unimportant
stuff out and obfuscated the real addresses/credentials)
def sendmail(receiver):
msg = MIMEText('This is the body of the mail.')
msg['From'] = email.utils.formataddr(('me', myaddr))
msg['To'] = email.utils.formataddr(('me', receiver))
msg['Subject'] = "Python Mail Script"
server = smtplib.SMTP(smtpServer, smtpPort)
try:
server.set_debuglevel(True)
# identify ourselves, prompting server for supported features
server.ehlo()
# If we can encrypt this session, do it
if server.has_extn('STARTTLS'):
server.starttls()
server.ehlo() # re-identify ourselves over TLS connection
server.login("me@...", "...")
server.sendmail("me@...", toAddr, msg.as_string())
finally:
server.quit()
The output I get for sending a message to an IP-Address is (again I obfuscated
the smtp server and mail/ip addresses):
send: 'ehlo [127.0.1.1]\r\n'
reply: '250-mail.example.de\r\n'
reply: '250-PIPELINING\r\n'
reply: '250-SIZE 102400000\r\n'
reply: '250-VRFY\r\n'
reply: '250-ETRN\r\n'
reply: '250-STARTTLS\r\n'
reply: '250-AUTH PLAIN LOGIN\r\n'
reply: '250-AUTH=PLAIN LOGIN\r\n'
reply: '250-ENHANCEDSTATUSCODES\r\n'
reply: '250-8BITMIME\r\n'
reply: '250 DSN\r\n'
reply: retcode (250); Msg: mail.example.de
PIPELINING
SIZE 102400000
VRFY
ETRN
STARTTLS
AUTH PLAIN LOGIN
AUTH=PLAIN LOGIN
ENHANCEDSTATUSCODES
8BITMIME
DSN
send: 'STARTTLS\r\n'
reply: '220 2.0.0 Ready to start TLS\r\n'
reply: retcode (220); Msg: 2.0.0 Ready to start TLS
send: 'ehlo [127.0.1.1]\r\n'
reply: '250-mail.example.de\r\n'
reply: '250-PIPELINING\r\n'
reply: '250-SIZE 102400000\r\n'
reply: '250-VRFY\r\n'
reply: '250-ETRN\r\n'
reply: '250-AUTH PLAIN LOGIN\r\n'
reply: '250-AUTH=PLAIN LOGIN\r\n'
reply: '250-ENHANCEDSTATUSCODES\r\n'
reply: '250-8BITMIME\r\n'
reply: '250 DSN\r\n'
reply: retcode (250); Msg: mail.example.de
PIPELINING
SIZE 102400000
VRFY
ETRN
AUTH PLAIN LOGIN
AUTH=PLAIN LOGIN
ENHANCEDSTATUSCODES
8BITMIME
DSN
send: 'AUTH PLAIN AHRob21hc0B0b3Jh6feldi5kZQBiYWdpbmVy\r\n'
reply: '235 2.7.0 Authentication successful\r\n'
reply: retcode (235); Msg: 2.7.0 Authentication successful
send: 'mail FROM:<me@example.de> size=234\r\n'
reply: '250 2.1.0 Ok\r\n'
reply: retcode (250); Msg: 2.1.0 Ok
send: 'rcpt TO:<me@[IP-ADDRESS]>\r\n'
reply: '250 2.1.5 Ok\r\n'
reply: retcode (250); Msg: 2.1.5 Ok
send: 'data\r\n'
reply: '354 End data with <CR><LF>.<CR><LF>\r\n'
reply: retcode (354); Msg: End data with <CR><LF>.<CR><LF>
data: (354, 'End data with <CR><LF>.<CR><LF>')
send: 'Content-Type: text/plain; charset="us-ascii"\r\nMIME-Version: 1.0\r\nContent-Transfer-Encoding: 7bit\r\nFrom: me <me@example.de>\r\nTo: me <me@[IP-ADDRESS]>\r\nSubject: Python Mail Script\r\n\r\nThis is the body of the mail.\r\n.\r\n'
reply: '250 2.0.0 Ok: queued as 4C5A78560196\r\n'
reply: retcode (250); Msg: 2.0.0 Ok: queued as 4C5A78560196
data: (250, '2.0.0 Ok: queued as 4C5A78560196')
send: 'quit\r\n'
reply: '221 2.0.0 Bye\r\n'
reply: retcode (221); Msg: 2.0.0 Bye
Anyone seeing any mistake or error?
Edit: Using our own Postfix SMTP server at the university (unfortunately no
access to the outside world) and sending the mail to user@[IP-ADDRESS] of this
very server, the mail arrives. Probably another sign that in the problematic
case stated above, the receiving SMTP server just doesn't allow an IP
address as destination.
Edit2: /var/log/mail.log
mail.log:Sep 9 00:34:06 mail postfix/qmgr[4854]: A30168560199: from=<me@example.de>, size=1197, nrcpt=1 (queue active)
mail.log:Sep 9 00:34:06 mail postfix/smtp[19355]: A30168560199: to=<me@[IP-ADDRESS]>, relay=none, delay=314, delays=313/0.04/0.01/0, dsn=4.4.1, status=deferred (connect to IP-ADDRESS[IP-ADDRESS]:25: Connection refused)
mail.log:Sep 9 00:38:31 mail postfix/smtpd[19907]: warning: Illegal address syntax from my_host_where_the_python_script_runs[IP of me] in RCPT command: <me@IP-ADDRESS>
So the connection from the SMTP server I am using to the recipient server is
refused, and 4 minutes later the SMTP daemon says it received a wrong syntax,
even though in the handshake where I am sending via my script it says 250 Ok.
So basically I think the recipient SMTP server refuses it. Thx guys
Answer: Domain names in email addresses are much like domain names for web sites - a
single server at a single IP address is often responsible for many different
domain names, and so in those cases you must supply the name in your request.
For example, a mail server at `1.2.3.4` may service both `abc.com` and
`xyz.com`. If you say `RCPT TO: <joe@[1.2.3.4]>`, the server has no idea if
you are trying to reach `[email protected]` or `[email protected]`. The server might
arbitrarily pick one (and guess wrong), accept the message and drop it, or
reject the message.
Unless you have somehow verified with the server operator that the server will
deliver `user@[ip]`-addressed messages to the domain you expect, you should
not assume that it will.
|
Python synchronise between threads and processes
Question: A bit of background:
I am writing a function in Django to get the next invoice number, which needs
to be sequential (no gaps), so the function looks like this:
def get_next_invoice_number():
"""
Returns the max(invoice_number) + 1 from the payment records
Does NOT pre-allocate number
"""
# TODO ensure this is thread safe
max_num = Payment.objects.aggregate(Max('invoice_number'))['invoice_number__max']
if max_num is not None:
return max_num + 1
return PaymentConfig.min_invoice_number
Now the problem is, this function only returns `max() + 1`. In my production
environment I have multiple Django processes, so if this function is called
twice for two different payments (before the first record is saved), they will
get the same invoice number.
To mitigate this problem I can override the `save()` method to call
`get_next_invoice_number()`, minimising the time gap between these function
calls, but there is still a very small window for the problem to happen.
So I want to implement a lock in the approve method, something like
from multiprocessing import Lock
lock = Lock()
class Payment(models.Model):
def approve(self):
lock.acquire()
try:
self.invoice_number = get_next_invoice_number()
self.save()
except:
pass
finally:
lock.release()
So my questions are:
1. Does this look okay?
2. The lock is for multiple processes; what about threads?
**UPDATE:**
1. As my colleague pointed out, this is not going to work when it's deployed to multiple servers; the locks will be meaningless.
2. Looks like DB transaction locking is the way to go.
Answer: The easiest way to do this, by far, is with your database's existing tools for
creating sequences. In fact, if you don't mind the value starting from `1` you
can just use Django's
[`AutoField`](https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.AutoField).
If your business requirements are such that you need to choose a starting
number, you'll have to see how to do this in the database. Here are
[some](http://stackoverflow.com/questions/117800/how-to-get-django-autofields-to-start-at-a-higher-number)
[questions](http://stackoverflow.com/questions/7504513/modify-django-autofield-start-value)
that might help.
Trying to ensure this with locks or transactions will be harder to do and
slower to perform.
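If you do go the transaction-locking route from your update, a minimal sketch,
assuming a hypothetical one-row counter table seeded with a single row; the
lock lives in the database, so it works across threads, processes and servers:
from django.db import models, transaction
class InvoiceCounter(models.Model):
    last_number = models.IntegerField()
def get_next_invoice_number():
    with transaction.atomic():
        # select_for_update() blocks concurrent transactions on this row
        # until the enclosing transaction commits, so no two callers can
        # read the same value.
        counter = InvoiceCounter.objects.select_for_update().get(pk=1)
        counter.last_number += 1
        counter.save()
        return counter.last_number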
|
Python Indentation Error on import statement?
Question: I am writing a Python script for the application Maya. In the console I get
this error:
# Error: IndentationError: file <maya console> line 2: expected an indented block #
However, this is a simple import statement. I'm not sure why I'm getting it; it
ONLY happens on the statement "import neoARLLF". If I take it out, it doesn't
appear anymore. The module is definitely in the folder with the rest of the
scripts, otherwise I'd presume I'd get an ImportError. Furthermore, all of
the rest of the script is indented correctly, and I'm not mixing tabs and
spaces; all of it is indented by 4 spaces.
import maya.cmds as mc
import neoARLLF
import neoARnameConv
reload(neoARnameConv)
reload(neoARLF)
seg = neoARLLF.MidLvlFunc()
nameC = neoARnameConv.NameConv()
def jntSegTest():
jointRad = mc.joint("joint1", q=True, rad=True)
jnts = 2
names = []
for i in xrange(1, 2, 1):
name = nameC.curConv("test", "AuxKnee", "right", "joint", "01")
names.append(name)
seg.segmentJnt("joint1", "joint2", jnts, "y", jointRad, names)
jntSegTest()
Anyone know what's up with this code? I've searched for a long time, and all
the indentation errors I found involved mixing tabs with spaces, or not
indenting properly after colons (definitions, classes, for loops, etc.).
So I'm at a loss.
Here's the code for the module neoARLLF if it helps. I presume this code has
quite a few errors in it, but I can't test the code to fix them until I can
get the import statement to work in the previous module.
# Filename: neoARLLF.py
# Created By: Gregory Smith
# Last Edited: 8/20/14
# Description: Neo Auto Rig - Low Level Functions
# Purpose: The classes in this script house all of the low level functions that will be carried out in
# external scripts.
import maya.cmds as mc
import neoARnameConv
from pymel.core import dt
nameC = neoARnameConv.NameConv()
class LowLvlFunc:
def __init__(self):
def reverseList(self, givenList):
"""Reverses the given list (eg. [1, 2, 3] would turn into [3, 2, 1]
Keyword Args
givenList - list that you want reversed
"""
newList = givenList[:: - 1]
return newList
def copyTranslate(self, source, target):
"""Copies the world-space translate values from one object to another
Keyword Args
source - object you want values copied from
target - object you want values copied to
"""
translate = mc.xform(source, q=True, ws=True, t=True)
rotPiv = mc.xform(target, q=True, rp=True)
newVec = [sum(i) for i in zip(translate, rotPiv)]
mc.xform(target, a=True, ws=True, t=(newVec[0], newVec[1], newVec[2]))
def copyRotate(self, source, target):
"""Copies the world-space rotate values from one object to another
Keyword Args
source - object you want values copied from
target - object you want values copied to
"""
rotate = mc.xform(source, q=True, ws=True, ro=True)
mc.xform(target, ws=True, ro=(rotate[0], rotate[1], rotate[2]))
def lockProtectedAttrs (self, control, lock):
"""Locks or unlocks all attributes in custom attributes text file
Keyword Arguments
control -- the control you want the attributes locked/unlocked on
lock -- if you want the control unlocked or locked (0 or 1)
"""
filePath = (mc.internalVar(usd=True)+"neo_ikFkSnapAttrs")
attrFile = open(filePath, "r")
nextLine = f.readLines()
attrList = []
while (len(nextLine)>0):
cleanLine = line.strip(nextLine)
attrList[len(attrList)] = cleanLine
print cleanLine
nextLine = f.readlines()
f.close()
def unlock:
for curAttr in attrList:
if mc.attributeExists(control, curAttr):
mc.setAttr((control+"."+curAttr), lock=False)
def lock:
for curAttr in attrArray:
if mc.attributeExists(control, curAttr):
mc.setAttr((control+"."+curAttr), lock=True)
lockOpt = {
0 : unlock,
1 : lock
}
lockOpt[lock]()
def zeroOutCustomAttr(self, control):
"""Zeroes out all user defined, custom attributes on given control
Keyword Arguments
control -- control you want attributes zeroed out on
"""
lockProtectedAttrs(control,1)
customAttrs = [mc.listAttr(control, ud=True, k=True, u=True)]
lockProtectedAttrs(control, 0)
for curAttr in customAttrs:
mc.setAttr((control+"."+curAttr), 0)
print ("Resettings attribute "+curAttr)
print ("Custom Attributes on "+control+" have been zeroed out")
class MidLvlFunc:
def __init__(self):
def segmentJnt(self, startJnt, endJnt, jointNum, primAxis, radius, name):
"""Creates 3 evenly spaced joints between 2 given joints
Keyword Args
startJnt - first joint, (ex, knee or elbow joint)
endJnt - second joint, (ex. ankle or wrist joint)
jointNum - number of segments in the chain
primAxis - primary axis of joint chain
radius - radius of other joints
name - name of auxillary joints
"""
startVec = mc.xform(q=True, ws=True, t=True, endJnt)
endVec = mc.xform(q=True, ws=True, t=True, startJnt)
startAux = mc.joint(n=name[0], p=(dt.Vector(startVec))
endAux = mc.joint(n=name[(len(name)-1)], p=(dt.Vector(endVec))
returnList = [startAux]
for i in xrange(1, jointNum, 1):
jointAux = mc.joint(n=name[i], o=(0, 0, 0), rad=radius)
if primAxis = "x":
mc.move(((endJnt.tx) / jointNum), 0, 0, joint, r=True, ls=True)
elif primAxis = "y":
mc.Move(0, ((endJnt.ty) / jointNum), 0, joint, r=True, ls=True)
else
mc.Move=(0, 0, ((endJnt.tz) / jointNum), joint, r=True, ls=True)
returnList.append(jointAux)
returnList.append(endAux)
return returnList
Answer: The problem is in your class `__init__`:
def __init__(self):
You have no code below there, so it errors on the next line. (Note that
`MidLvlFunc.__init__` further down has the same problem.) To stub out the
function, add a `pass` statement, like this:
def __init__(self):
pass
|
separate line output by groups
Question: My Python script checks `mysqldump`, and if there are any problems, the script prints:
* Dump is old for db;
* Dump is not complete for db;
* Dump is empty for db;
* MySQL dump does not exist for db;
The script logs these records to the file line by line.
My question: is there a way to format the output in the file like this?
Dump is old for db;
Dump is old for db;
Dump is old for db;
Dump is not complete for db;
Dump is not complete for db;
Dump is not complete for db;
Dump is empty for db;
Dump is empty for db;
Dump is empty for db;
Because now my file looks like:
Dump is old for db;
Dump is empty for db;
Dump is old for db;
MySQL dump does not exist for db;
...
etc
Here is my small script :)
#!/bin/env python
import psycopg2
import sys,os
from subprocess import Popen, PIPE
from datetime import datetime
import smtplib
con = None
today = datetime.now().strftime("%Y-%m-%d")
log_dump_fail = '/tmp/mysqldump_FAIL'
log_fail = open(log_dump_fail,'w').close()
log_fail = open(log_dump_fail, 'a')
sender = 'PUT_SENDER_NAME_HERE'
receiver = ['receiver_name']
smtp_daemon_host = 'localhost'
def db_backup_file_does_not_exist(db_backup_file):
if not os.path.exists(db_backup_file): return True
else: return False
def dump_health(last_dump_row, file_name,db):
last_row = last_dump_row.rsplit(" ")
tms = ''.join(last_row[4:5])
status = last_row[1:3]
if (status) and (tms != today):
log_fail.write("\nDB is old for "+ str(db) + str(file_name) + ", \nDump finished at " + str(''.join(tms)))
log_fail.write("\n-------------------------------------------")
elif not (status) and (tms == None):
log_fail.write("\nDump is not complete for "+str(db) + str(file_name) + " , end of file is not correct")
log_fail.write("\n-------------------------------------------")
suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
def humansize(nbytes):
if nbytes == 0: return '0 B'
i = 0
while nbytes >= 1024 and i < len(suffixes)-1:
nbytes /= 1024.
i += 1
f = ('%.2f' % nbytes).rstrip('0').rstrip('.')
return '%s %s' % (f, suffixes[i])
def dump_size(dump_file, file_name,db):
size = os.path.getsize(dump_file)
if (size < 1024):
human_readable = humansize(size)
log_fail.write("\nDump is empty for " +str(db) + "\n" +"\t" + str (file_name)+", file size is " + str(human_readable))
log_fail.write("\n-------------------------------------------")
def report_to_noc(subject,text):
TEXT = text
SUBJECT = subject
message = 'Subject: %s\n\n%s' % (SUBJECT, TEXT)
server = smtplib.SMTP(smtp_daemon_host)
server.sendmail(sender, receiver, message)
server.quit()
try:
con = psycopg2.connect(database='**', user='***', password='***', host='****')
cur = con.cursor()
cur.execute("""\
select ad.servicename, (select name from servers where id = ps.server_id) as servername
from packages as p, account_data as ad, package_servers as ps
where p.id=ad.package_id and
p.date_deleted IS NULL and
p.id=ps.package_id and
p.aktuel IS NULL and
p.pre_def_package_id = 4 and
p.mother_package_id !=0 and
ps.subservice_id=5 and
p.mother_package_id NOT IN (select id from packages where date_deleted IS NOT NULL)
ORDER BY servername;
""")
while (1):
row = cur.fetchone ()
if row == None:
break
db = row[0]
server_name = str(row[1])
if (''.join(server_name) == 'SKIP_THIS') or (''.join(server_name) == 'SKIP_THIS'):
continue
else:
db_backup_file = '/storage/backup/db/mysql/' + str(db) + '/current/' + str(db) + '.mysql.gz'
db_backup_file2 = '/storage/backup/' + str(''.join(server_name.split("DB"))) + '/mysql/' + str(db) + '/current/'+ str(db) + '.mysql.gz'
db_file_does_not_exist = False
db_file2_does_not_exist = False
if db_backup_file_does_not_exist(db_backup_file):
db_file_does_not_exist = True
if db_backup_file_does_not_exist(db_backup_file2):
db_file2_does_not_exist = True
if db_file_does_not_exist and db_file2_does_not_exist:
log_fail.write("\nMySQL dump does not exist for " + str(db) + "\n" + "\t" + str(db_backup_file2) + "\n" + "\t" + str(db_backup_file))
log_fail.write("\n-------------------------------------------")
continue
elif (db_file_does_not_exist) and not (db_file2_does_not_exist):
p_zcat = Popen(["zcat", db_backup_file2], stdout=PIPE)
p_tail = Popen(["tail", "-2"], stdin=p_zcat.stdout, stdout=PIPE)
dump_status = str(p_tail.communicate()[0])
dump_health(dump_status,db_backup_file2,db)
dump_size(db_backup_file2, db_backup_file2,db)
elif (db_file2_does_not_exist) and not (db_file_does_not_exist):
p_zcat = Popen(["zcat", db_backup_file], stdout=PIPE)
p_tail = Popen(["tail", "-2"], stdin=p_zcat.stdout, stdout=PIPE)
dump_status = str(p_tail.communicate()[0])
dump_health(dump_status,db_backup_file,db)
dump_size(db_backup_file,db_backup_file,db)
con.close()
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
log_fail.close()
if os.path.getsize(log_dump_fail) > 0:
subject = "Not all MySQL dumps completed successfully. Log file backup:" + str(log_dump_fail)
fh = open(log_dump_fail, 'r')
text = fh.read()
fh.close()
report_to_noc(subject,text)
else:
subject = "MySQL dump completed successfullyi for all DBs, listed in PC"
text = "Hello! \nI am notifying you that I checked mysqldump files this morning.\nThere are nothing to worry about. :)"
report_to_noc(subject,text)
Answer: You can process your log file after it has been written.
One option is to read your file and sort the lines:
lines = open('log.txt').readlines()
lines.sort()
# readlines() keeps the trailing newlines, so join with an empty string
open('log_sorted.txt', 'w').write("".join(lines))
This won't emit an empty line between log types.
Another option is to use a `Counter`:
from collections import Counter
lines = open('log.txt').readlines()
counter = Counter()
for line in lines:
counter[line] += 1
out_file = open('log_sorted.txt', 'w')
for line, num in counter.iteritems():
out_file.write(line * num + "\n")
|
Pythonanywhere MySQL connection
Question: I'm new to PythonAnywhere and am currently deploying my first app with it and
the Bottle framework. I have created a database with the online console, but I
don't know the syntax for accessing it. Is it the same syntax as when deploying
locally? Or is it something else? MySQLdb has been imported... Thanks for any
help.
Answer: Here is a [page on using MySQL at
PythonAnywhere](https://www.pythonanywhere.com/wiki/UsingMySQL). It suggests
using the following configuration for Django:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': '<your_username>$<your_database_name>',
'USER': '<your_username>',
'PASSWORD': '<your_mysql_password>',
'HOST': 'mysql.server',
}
}
And I'm almost entirely sure that these credentials will be accepted by the
[MySQLdb driver](http://mysql-python.sourceforge.net/MySQLdb.html):
db=MySQLdb.connect(
host='mysql.server',
user='<your_username>',
passwd='<your_mysql_password>',
db='<your_username>$<your_database_name>')
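A quick sanity check that the connection works (Python 2 style, to match
MySQLdb):
cursor = db.cursor()
cursor.execute("SELECT VERSION()")
print cursor.fetchone()
db.close()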
|
Limit loop frame rate
Question: Like
[pygame](http://www.pygame.org/docs/ref/time.html#pygame.time.Clock.tick), I
want to limit the frame rate of a loop. Pygame provides
**pygame.time.Clock.tick()** for this:
> If you pass the optional framerate argument the function will delay to keep
> the game running slower than the given ticks per second. This can be used to
> help limit the runtime speed of a game. By calling Clock.tick(40) once per
> frame, the program will never run at more than 40 frames per second.
But how can I do this natively in Python?
To illustrate:
import time
max_frames = 125 # 25*5
current_frame = 1
while current_frame <= max_frames:
print('frame', time.clock(), current_frame)
current_frame += 1
produce:
('frame', 0.01, 1)
('frame', 0.01, 2)
('frame', 0.01, 3)
[...]
('frame', 0.01, 124)
('frame', 0.01, 125)
I would like 25 frames per second, so
('frame', 0.01, 1)
('frame', 0.05, 2)
('frame', 0.08, 3)
[...]
('frame', 4.98, 124)
('frame', 5.00, 125)
Answer: You could just use `time.sleep(1./25)` to wait _1/25_ of a second.
while current_frame <= max_frames:
# ... do stuff
time.sleep(1./25)
Note that this will always wait that time _in addition_ to whatever time the
loop body takes anyway. Alternatively, memorize the last execution time and
wait until _that time + 1/25 of a second_.
while current_frame <= max_frames:
start = time.time()
# ... do stuff that might take significant time
time.sleep(max(1./25 - (time.time() - start), 0))
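Applied to the question's loop, a sketch that schedules each frame against the
loop's start time, so small sleep inaccuracies don't accumulate over the 125
frames:
import time
max_frames = 125
start = time.time()
for current_frame in range(1, max_frames + 1):
    # Absolute deadline for this frame at 25 frames per second.
    deadline = start + current_frame / 25.
    time.sleep(max(deadline - time.time(), 0))
    print('frame', round(time.time() - start, 2), current_frame)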
|
Python Bloomberg API not connecting from ipython notebook
Question: Based on the following code [example of a simple historical data
request](https://code.google.com/p/pyalma/source/browse/trunk/InfoProviders/Bloomberg.py)
and the Python API example provided by Bloomberg, I constructed the bdh
function below, which works fine when called directly from IPython (see the
testing lines after the function definition).
import blpapi
import pandas as pd
import datetime as dt
from optparse import OptionParser
def parseCmdLine():
parser = OptionParser(description="Retrieve reference data.")
parser.add_option("-a",
"--ip",
dest="host",
help="server name or IP (default: %default)",
metavar="ipAddress",
default="localhost")
parser.add_option("-p",
dest="port",
type="int",
help="server port (default: %default)",
metavar="tcpPort",
default=8194)
(options, args) = parser.parse_args()
return options
def bdh(secList, fieldList,startDate,endDate=dt.date.today().strftime('%Y%m%d'),periodicity='Daily'):
""" Sends a historical request to Bloomberg.
Returns a panda.Panel object.
"""
options = parseCmdLine()
# Fill SessionOptions
sessionOptions = blpapi.SessionOptions()
sessionOptions.setServerHost(options.host)
sessionOptions.setServerPort(options.port)
print "Connecting to %s:%s" % (options.host, options.port)
# Create a Session
session = blpapi.Session(sessionOptions)
# Start a Session
if not session.start():
print "Failed to start session."
return
try:
# Open service to get historical data from
if not session.openService("//blp/refdata"):
print "Failed to open //blp/refdata"
return
# Obtain previously opened service
refDataService = session.getService("//blp/refdata")
# Create and fill the requestuest for the historical data
request = refDataService.createRequest("HistoricalDataRequest")
for s in secList:
request.getElement("securities").appendValue(s)
for f in fieldList:
request.getElement("fields").appendValue(f)
request.set("periodicityAdjustment", "ACTUAL")
request.set("periodicitySelection", "DAILY")
request.set("startDate", startDate)
request.set("endDate", endDate)
print "Sending Request:", request
# Send the request
session.sendRequest(request)
# Process received events
response={}
while(True):
# We provide timeout to give the chance for Ctrl+C handling:
ev = session.nextEvent(500)
if ev.eventType() == blpapi.Event.RESPONSE or ev.eventType() == blpapi.Event.PARTIAL_RESPONSE:
for msg in ev:
secData = msg.getElement('securityData')
name = secData.getElement('security').getValue()
response[name] = {}
fieldData = secData.getElement('fieldData')
for i in range(fieldData.numValues()):
fields = fieldData.getValue(i)
for n in range(1, fields.numElements()):
date = fields.getElement(0).getValue()
field = fields.getElement(n)
try:
response[name][field.name()][date] = field.getValue()
except KeyError:
response[name][field.name()] = {}
response[name][field.name()][date] = field.getValue()
if ev.eventType() == blpapi.Event.RESPONSE:
# Response completly received, so we could exit
break
#converting the response to a panda pbject
tempdict = {}
for r in response:
td = {}
for f in response[r]:
td[f] = pd.Series(response[r][f])
tempdict[r] = pd.DataFrame(td)
data = pd.Panel(tempdict)
finally:
# Stop the session
session.stop()
return(data)
#------------------------------------------------------------
secList = ['SP1 Index', 'GC1 Comdty']
fieldList = ['PX_LAST']
beg = (dt.date.today() - dt.timedelta(30)).strftime('%Y%m%d')
testData = bdh.bdh(secList,fieldList,beg)
testData = testData.swapaxes('items','minor')
print(testData['PX_LAST'])
However, when I try to run exactly the same example (see the lines after the
bdh function definition) from an IPython notebook, I get the following error:
SystemExit Traceback (most recent call last)
<ipython-input-6-ad6708eabe39> in <module>()
----> 1 testData = bbg.bdh(tickers,fields,begin)
2 #testData = testData.swapaxes('items','minor')
3 #print(testData['PX_LAST'])
C:\Python27\bbg.py in bdh(secList, fieldList, startDate, endDate, periodicity)
33 """
34
---> 35 options = parseCmdLine()
36
37 # Fill SessionOptions
C:\Python27\bbg.py in parseCmdLine()
24 default=8194)
25
---> 26 (options, args) = parser.parse_args()
27
28 return options
C:\Python27\lib\optparse.pyc in parse_args(self, args, values)
1400 stop = self._process_args(largs, rargs, values)
1401 except (BadOptionError, OptionValueError), err:
-> 1402 self.error(str(err))
1403
1404 args = largs + rargs
C:\Python27\lib\optparse.pyc in error(self, msg)
1582 """
1583 self.print_usage(sys.stderr)
-> 1584 self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
1585
1586 def get_usage(self):
C:\Python27\lib\optparse.pyc in exit(self, status, msg)
1572 if msg:
1573 sys.stderr.write(msg)
-> 1574 sys.exit(status)
1575
1576 def error(self, msg):
SystemExit: 2
My understanding is that the options needed to connect to Bloomberg work fine
if I call the bdh function from a local IPython session, but are wrong if bdh
is called from the kernel the notebook initiates???
Hope to get some help, thanks a lot in advance.
Answer: When you call `parseCmdLine()`, it looks at `sys.argv`, which is probably not
what you're expecting: inside a notebook, `sys.argv` holds the kernel's own
launch arguments (such as its `-f` connection-file flag), which `OptionParser`
rejects, calling `sys.exit()` - hence the `SystemExit` in your traceback.
What about this?
def parseCmdLine():
parser = OptionParser(description="Retrieve reference data.")
parser.add_option("-a",
"--ip",
dest="host",
help="server name or IP (default: %default)",
metavar="ipAddress",
default="localhost")
parser.add_option("-p",
dest="port",
type="int",
help="server port (default: %default)",
metavar="tcpPort",
default=8194)
(options, args) = parser.parse_args()
return options
def bdh(secList, fieldList,startDate,endDate=dt.date.today().strftime('%Y%m%d'),periodicity='Daily', host='localhost', port=8194):
""" Sends a historical request to Bloomberg.
Returns a panda.Panel object.
"""
# Fill SessionOptions
sessionOptions = blpapi.SessionOptions()
sessionOptions.setServerHost(host)
sessionOptions.setServerPort(port)
...
if __name__ == '__main__':
options = parseCmdLine()
secList = ['SP1 Index', 'GC1 Comdty']
fieldList = ['PX_LAST']
beg = (dt.date.today() - dt.timedelta(30)).strftime('%Y%m%d')
testData = bdh.bdh(secList,fieldList,beg, host=options.host, port=options.port)
testData = testData.swapaxes('items','minor')
print(testData['PX_LAST'])
|
Accessing python dateutil relativedelta values
Question: I am very new to Python and I've run across a small problem that I haven't
been able to find an answer to by googling. I am running the following code:
from dateutil import relativedelta as rdelta
def diff_dates(date1, date2):
return rdelta.relativedelta(date1,date2)
`d1` and `d2` are two separate dates
years = diff_dates(d2,d1)
print "Years: ", years
The values that get printed out for years are the correct values that I'm
expecting. My problem is that I need to access those values and compare them
against some other values. No matter how I try to access the data, I get
similar errors:
AttributeError: relativedelta instance has no __call__ method
I need to get the years, months, and days and any help would be greatly
appreciated.
Answer: The object you get, which you call `years`, has all the information inside.
The values you want are available as its attributes:
In [12]: d = rdelta.relativedelta(datetime.datetime(1998, 10, 20, 1, 2, 3), datetime.datetime(2001, 5, 3, 3, 4, 5))
In [13]: d
Out[13]: relativedelta(years=-2, months=-6, days=-14, hours=-2, minutes=-2, seconds=-2)
In [14]: d.years
Out[14]: -2
In [15]: d.months
Out[15]: -6
|
Python Two Identical Strings are viewed as Different
Question: I have two strings that by all indication look identical:
x1 = 'N C Soft - NCSOFT_Guild Wars 2 December 2013 :: BNLX_AD_Parallax_160x600'
x2 = 'N C Soft - NCSOFT_Guild Wars 2 December 2013 :: BNLX_CT_Parallax_160X600'
However, checking for equality shows they are not.
In [312]: if x1 != x2:
.....: print 'yep'
.....:
yep
I also tried copying both strings out of the command prompt and then pasting
them back in as new variables, but they are still not equal. I'm 80% sure it's
because they're encoded in a weird way, with some odd characters inserted that
I can't see, but using type() both just show up as string.
Answer: They are not the same; using
[`difflib.ndiff()`](https://docs.python.org/2/library/difflib.html#difflib.ndiff)
shows how these two values differ very clearly:
>>> import difflib
>>> print '\n'.join(difflib.ndiff([x1], [x2]))
- N C Soft - NCSOFT_Guild Wars 2 December 2013 :: BNLX_AD_Parallax_160x600
? ^^ ^
+ N C Soft - NCSOFT_Guild Wars 2 December 2013 :: BNLX_CT_Parallax_160X600
? ^^ ^
In general, when in doubt, use
[`repr()`](https://docs.python.org/2/library/functions.html#repr) to look at
the representation. Python 2 will use escapes for any non-printable or non-
ASCII character in the string, so any 'funny' characters will stand out like a
sore thumb. In Python 3, use the [`ascii()`
function](https://docs.python.org/3/library/functions.html#ascii) for the same
result, as `repr()` there is less conservative and Unicode is rife with
character combinations that look the same at first glance.
For strings where you still cannot see what changes between the two, the above
`difflib` tool can also help point out what exactly changed.
|
Convert netcdf "land" variable to latitude and longitude with Python
Question: I have a global meteorological dataset and I want to access the data for a
certain grid (lat,lon). However, the data is compressed, i.e. the parameters
of interest do not have the dimensions (lat, lon), but "land". "land" is a 1D
array of integers.
I imported the file in python using
`import scipy.io.netcdf as netcdf`
`path = '/path/.../ncfile.nc'`
`ncfile = netcdf.netcdf_file(path,'r')`
Then I checked what variables there were and found that, e.g. the "Rainf"
variable has the dimensions (tstep, land). I researched this on the internet
and found the file landmask_gswp.nc
(<http://dods.ipsl.jussieu.fr/gswp/Fixed/landmask_gswp.nc>), which is supposed
to contain the information I need, that is, how to extract the information
(lat, lon) from "land". This file contains the variables nav_lat, nav_lon and
landmask. nav_lat and nav_lon relate, to my understanding, the coordinate
variables x and y to latitude and longitude. "landmask" is a 2D array and
contains the information ocean = 0 or land = 1. Indeed, the number of
landpoints agrees with the length of my "land" 1D array. However, I cannot
figure out how to extract the (lat, lon) information from it. Any help would
be much appreciated.
I hope I made my problem somewhat understandable; I am not experienced with
programming and/or using netcdf, so I hope that you can help out! Thanks in
advance!
Answer: You can extract variables from `ncfile` using
lat = ncfile.variables['nav_lat'][:,:]
lon = ncfile.variables['nav_lon'][:,:]
This will create 2D numpy arrays `lat` and `lon`.
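
To go from the 1D `land` index back to (lat, lon), a sketch along the following
lines should work, with `maskfile` being the opened landmask_gswp.nc (the file
that actually contains nav_lat, nav_lon and landmask). It assumes, and this is
worth verifying against the GSWP documentation, that the `land` dimension
enumerates the cells where `landmask == 1` in flattened row-major order:

    import numpy as np

    lat = maskfile.variables['nav_lat'][:, :]
    lon = maskfile.variables['nav_lon'][:, :]
    mask = maskfile.variables['landmask'][:, :]

    # 2D (y, x) indices of every land cell; entry i corresponds to
    # index i of the "land" dimension under the ordering assumption
    land_y, land_x = np.nonzero(mask == 1)

    # e.g. the coordinates of land point 9, and its rainfall time series
    print lat[land_y[9], land_x[9]], lon[land_y[9], land_x[9]]
    rainf = ncfile.variables['Rainf'][:, 9]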
|
Can I show decimal places and scientific notation on the axis of a matplotlib plot using Python 2.7?
Question: I am plotting some big numbers with matplotlib in a pyqt program using python
2.7. I have a y-axis that ranges from 1e+18 to 3e+18 (usually). I'd like to
see each tick mark show values in scientific notation with 2 decimal
places; for example, 2.35e+18 instead of just 2e+18, because values between
2e+18 and 3e+18 otherwise read just 2e+18 for several tick marks. Here is an
example of that problem.
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.linspace(0, 300, 20)
y = np.linspace(0,300, 20)
y = y*1e16
ax.plot(x,y)
ax.get_xaxis().set_major_formatter(plt.LogFormatter(10, labelOnlyBase=False))
ax.get_yaxis().set_major_formatter(plt.LogFormatter(10, labelOnlyBase=False))
plt.show()
Answer: This is really easy to do if you use the
`matplotlib.ticker.FormatStrFormatter` as opposed to the `LogFormatter`. The
following code will label everything with the format `'%.2e'`:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.linspace(0, 300, 20)
y = np.linspace(0,300, 20)
y = y*1e16
ax.plot(x,y)
ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))
plt.show()

|
Python Py2app packaging directories
Question: I am getting an error about calling a method from a group of Python files
bundled with py2app.
(1) I have read various info on the py2app importing large directories or
package groups etc. but it seems to have problems interacting with said files.
I hard coded each file to be included via my setup, however it still says it
can't call a function from my file 'random.py', which generates its own script
to run through 'happy.py' (it runs perfectly on its own and all the
dependencies are correct).
(2) To make this even more complex, the app is run 100% via the terminal, so
I'm not sure whether I will just need to send people the .exe in order to use
it, since I assume py2app will just run the script without any options for
user input.
SETUP FILE
"""
This is a setup.py script generated by py2applet
Usage:
python setup.py py2app
"""
from setuptools import setup
APP = ['happy.py']
DATA_FILES = ['happy.pyc',
'random.py',
'random.pyc',
'happy.py',
'screener.py',
'__init__.py',
'screener.pyc',
'setup.py']
OPTIONS = {'argv_emulation': True}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
py_modules=['random', 'screener', '__init__','happy',],
setup_requires=['py2app'],
)
ERROR OUT(given by .exe inside of .app, since .app runs a console error 255 with 0 info)
| | _____ _____| | / |
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: | |/ _ \ \ / / _ \ | | |
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: | | __/\ V / __/ | | |
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: |_|\___| \_/ \___|_| |_|
Sep 9 04:39:12 softroot.local happy[39888] <Notice>:
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: Traceback (most recent call last):
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: File "/Users/random/Desktop/bla/dist/happy.app/Contents/Resources/__boot__.py", line 373, in <module>
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: _run()
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: File "/Users/random/Desktop/bla/dist/happy.app/Contents/Resources/__boot__.py", line 358, in _run
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: exec(compile(source, path, 'exec'), globals(), globals())
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: File "/Users/random/Desktop/bla/dist/happy.app/Contents/Resources/happy.py", line 275, in <module>
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: print testone()
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: File "/Users/random/Desktop/bla/dist/happy.app/Contents/Resources/happy.py", line 52, in testone
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: from random import function
Sep 9 04:39:12 softroot.local happy[39888] <Notice>: ImportError: cannot import name function
Answer: Is `random.py` a custom script written by you, or is it the module from the
Python standard library? Note that the standard library already has a `random`
module, so a file with that name can shadow it (or be shadowed by it).

Have you tried copying the `random.py` from the resulting application package,
doing `import random`, checking `random.__file__` to make sure it is the right
one, and then trying to execute the failing line, `from random import function`?
|
PIR sensor Motion Detection count Write to Text File in python
Question: I need to write the PIR sensor motion-detection COUNT to a text file.
I tried with this code, and it works without writing to the text file. When I
add the write-to-file part, it gives an error at `file = open("textFile.txt", "w")`:
`IndentationError: unindent does not match any outer indentation level.`
The expected output is the last motion count number in the text file.
code is
# Import required Python libraries
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
# Define GPIO to use on Pi
GPIO_PIR = 7
# Set pin as input
GPIO.setup(GPIO_PIR,GPIO.IN)
Current_State = 0
Previous_State = 0
# I put Variable= 0 for the motion Count
Variable= 0
try:
print "Waiting for PIR to settle ..."
# Loop until PIR output is 0
while GPIO.input(GPIO_PIR)==1:
Current_State = 0
print " Ready"
# Loop until users quits with CTRL-C
while True :
# Read PIR state
Current_State = GPIO.input(GPIO_PIR)
if Current_State==1 and Previous_State==0:
# PIR is triggered
start_time=time.time()
print " Motion detected!"
# here I need to write numbers for the text file.
file = open("textFile.txt", "w")
file.write(Variable)
file.close()
Variable+=1
# Record previous state
Previous_State=1
elif Current_State==0 and Previous_State==1:
# PIR has returned to ready state
stop_time=time.time()
print " Ready ",
elapsed_time=int(stop_time-start_time)
print " (Elapsed time : " + str(elapsed_time) + " secs)"
Previous_State=0
except KeyboardInterrupt:
print " Quit"
# Reset GPIO settings
GPIO.cleanup()
Answer:
import RPi.GPIO as GPIO
import time
# Use BCM GPIO references
# instead of physical pin numbers
GPIO.setmode(GPIO.BCM)
# Define GPIO to use on Pi
GPIO_PIR = 7
print "PIR Module Test (CTRL-C to exit)"
# Set pin as input
GPIO.setup(GPIO_PIR,GPIO.IN) # Echo
Current_State = 0
Previous_State = 0
Variable=0
try:
print "Waiting for PIR to settle ..."
# Loop until PIR output is 0
while GPIO.input(GPIO_PIR)==1:
Current_State = 0
print " Ready"
# Loop until users quits with CTRL-C
while True :
# Read PIR state
Current_State = GPIO.input(GPIO_PIR)
if Current_State==1 and Previous_State==0:
# PIR is triggered
print " Motion detected!"
# Record previous state
Previous_State=1
            Variable += 1
            file = open("textFile.txt", "w")
            file.write(str(Variable))   # write() needs a string, not an int
            file.close()
elif Current_State==0 and Previous_State==1:
# PIR has returned to ready state
print " Ready"
Previous_State=0
# Wait for 10 milliseconds
time.sleep(0.01)
except KeyboardInterrupt:
print " Quit"
    # Reset GPIO settings
    GPIO.cleanup()
This is my friend's code. I believe nothing is wrong with your code apart from
a few indentation errors (i.e. inconsistent spacing); a text editor that shows
whitespace helps with that (I use Sublime Text). Note also that `file.write()`
expects a string, so the count has to be converted with `str()` first.
|
Resizing tkinter windows and contents
Question: I am using python tkinter to build a ui containing a matplotlib figure and
some buttons, but am having difficulty with resizing the window and it's
contents. I've looked at some of the examples on this site and the docs and as
I understand it, for a frame containing smaller frames to resize together they
all need to be configured individually. Each one gets a weight applied to it
to define how much of the available space it receives (is this correct?).
However, when I try to apply this as shown below none of the frames resize.
Also, if the weight is zero for columnconfigure and 1 for rowconfigure, does
that mean it will only resize in one direction?
import Tkinter
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
class Application():
def __init__(self, master):
frame2 = Tkinter.Frame(master, height=510, width=770, bg='red')
frame2.grid(row=0, column=0, sticky='nsew')
frame2.columnconfigure(0, weight=1)
frame2.rowconfigure(0, weight=1)
frame2a = Tkinter.Frame(frame2, height=80, width=770, bg='blue')
frame2a.grid(row=0, column=0, sticky='nsew')
frame2a.columnconfigure(0, weight=1)
frame2a.rowconfigure(0, weight=1)
frame2b = Tkinter.Frame(frame2, height=410, width=770, bg='green')
frame2b.grid(row=1, column= 0, sticky='nsew')
frame2b.columnconfigure(0, weight=1)
frame2b.rowconfigure(1, weight=1)
# add plot
fig = Figure(figsize=(9.5,5.2), facecolor='white')
fig.add_subplot(111)
canvas = FigureCanvasTkAgg(fig, master=frame2b)
canvas.show()
canvas.get_tk_widget().pack(side='top', fill='both', expand=1)
if __name__ == '__main__' :
root = Tkinter.Tk()
root.geometry("770x510")
app = Application(root)
root.mainloop()
Answer: Your approach using `columnconfigure` and `rowconfigure` is correct, but you
forgot one thing: You haven't used the methods on the _master_ window. So, you
basically want to do this:
def __init__(self, master):
master.columnconfigure(0, weight=1)
master.rowconfigure(0, weight=1)
To answer your other question (_Also, if the weight is zero for columnconfigure
and 1 for rowconfigure, does that mean it will only resize in one
direction?_): Yes, you're right; the widget/window would then resize in just
one direction, or in neither.
Additionally, since you are using `grid` for dynamic resizing, the `height`
and `width` parameters are obsolete.
|
What is wrong with my Python 3.4.1 (last version) code?
Question: I was trying to write this code, but I don't know where the problem is.
I need to write a program that asks a person 5 questions about a crime. The
questions are:
* Telephoned the victim?
* Was on the scene?
* Lives near the victim?
* Should(must) for the victim?
* Worked with the victim?
The program should, at the end, issue a classification of the person's
participation in the crime. If the person answers yes to two questions, she
should be classified as "suspicious"; between 3 and 4, as "accomplice"; and 5
as "killer". Otherwise, she will be classified as "innocent".

I did it like this:
questions = float(input("answer yes or no : "))
questions =
print("Telephoned the victim? "),
print("Was on the scene? "),
print ("Lives near the victim?"),
print("Should(must) for the victim? "),
print("Worked with the victim? "),``
]
for question in question:
r = raw_input(question)
if r.upper() != 'S' and r.upper() != 'N':
print ("the answer maybe is Y for Yes or N for No")
print ("start again")
exit()
else:
answer.append(r.upper())
qntd_yes = answers.count('S')
if qntd_yes == 2:
print ('Suspicious')
elif 3 <= qntd_yes <= 4:
print ('accomplice')
elif qntd_yes == 5:
print ('killer')
else:
print ('Inocent')
Here is my error:
Traceback (most recent call last):
File "/usr/lib/python3.2/py_compile.py", line 119, in compile
optimize=optimize)
File "./prog.py", line 3
questions =
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.2/py_compile.py", line 123, in compile
raise py_exc
py_compile.PyCompileError: File "./prog.py", line 3
questions =
^
SyntaxError: invalid syntax
(Of the line 1 at 30 is my code)
I think it is possible to do this more easily, but I can't; I try and try but I
cannot run it. If someone can help me I will be very happy.
Answer: Your code is full of syntax errors. Here is what I think might work, I'll need
more information to help any further.
import sys
questions = [
"Telephoned the victim? ",
"Was on the scene? ",
"Lives near the victim?",
"Should(must) for the victim? ",
"Worked with the victim? "
]
answers = []
for question in questions:
r = input(question)
if r.upper() != 'Y' and r.upper() != 'N':
print ("the answer maybe is Y for Yes or N for No")
print ("start again")
sys.exit(1)
else:
answers.append(r.upper())
qntd_yes = answers.count('Y')
if qntd_yes == 2:
print ('Suspicious')
elif qntd_yes == 3 or qntd_yes == 4:
print ('Accomplice')
elif qntd_yes == 5:
print ('Killer')
else:
print ('Innocent')
I'm not sure what you're getting at in the comments, but if you don't want to
use `sys.exit` or `r.upper` you can do this.
questions = [
"Telephoned the victim? ",
"Was on the scene? ",
"Lives near the victim?",
"Should(must) for the victim? ",
"Worked with the victim? "
]
def main():
answers = []
for question in questions:
r = input(question)
if r != 'Y' and r != 'y' and r != 'N' and r != 'n':
print ("the answer maybe is Y for Yes or N for No")
print ("start again")
return 1
else:
answers.append(r.upper())
qntd_yes = answers.count('Y')
if qntd_yes == 2:
print ('Suspicious')
elif qntd_yes == 3 or qntd_yes == 4:
print ('Accomplice')
elif qntd_yes == 5:
print ('Killer')
else:
print ('Innocent')
return 0
if __name__ == '__main__':
main()
Note: you can also just check `r` against `Y` and `N` if you only want the
user to be able to input capitals, or vice versa for lower case.
|
Track name and list elements and give them default values extracted from a file: Python
Question:
Input file format is like below:
Label A
{
Zebra,
Lion,
Python,
Cat,
Dog,
Blah,
}
Label B
{
India,
Nigeria,
Malaysia,
Peru,
}
I want to assign a default value to these elements. I was thinking I would form
a multi-dimensional list/array, which would make the assignment easy. Right now
I have the group name and the elements in variables.
Code:
    import re
    import numpy as np

    ele_array = []
    for match in re.finditer(r'Label (\w+)\s+([^\n]+)\s*\{(.*?)\}', input_file_read, re.DOTALL):
        Group_name = match.group(1)
        Group_elements = str(match.group(3).strip())
        ele_names = Group_elements.split(',')
        ele_list = [ele.strip() for ele in ele_names]
        ele_array.extend(ele_list)
    print ele_array
To the above code I also tried:
ele_matrix = np.array(ele_array)
Output:
ele_array = ['Zebra','Lion','Python','Cat','Dog','Blah','India','Nigeria','Malaysia','Peru']
I want:
([A, Zebra, Default_value]
[A, Lion, Default_value]
[A, Python, Default_value]
[A, Cat, Default_value]
[A, Dog, Default_value]
[A, Blah, Default_value]
[B, India, Default_value]
[B, Nigeria, Default_value]
[B, Malaysia, Default_value]
[B, Peru, Default_value])
It should be a **single matrix** of shape **n*3**, where **n** is the **total
number of elements** in both groups.
Answer: This should do it
import re
# Reading File Data
handle = open("input.txt", "r")
input_file_read = "".join(handle.readlines())
handle.close()
# print input_file_read
_PATTERN_ = re.compile('^[\s]*Label[\s]*(\w)+[\s\n\{]*([^\}]+)', re.MULTILINE);
_DEFAULT_VALUE = 1
_Myarray = []
for match in _PATTERN_.finditer(input_file_read):
Group_name = match.group(1).strip()
# print Group_name
elements = match.group(2).replace(',', '').split()
# print elements
# _Myarray.append([Group_name, i, _DEFAULT_VALUE] for i in elements)
for i in elements:
_Myarray.append([Group_name, i, _DEFAULT_VALUE])
print _Myarray
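
If an actual NumPy array is wanted (the question asks for an n*3 matrix), the
list of lists converts directly. One caveat, not from the original answer:
NumPy arrays are homogeneous, so the numeric default value gets coerced to a
string alongside the labels:

    import numpy as np

    matrix = np.array(_Myarray)
    print matrix.shape   # (n, 3)
    print matrix[0]      # e.g. ['A' 'Zebra' '1'] -- note the stringified default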
|
import function from standard library after importing similarly named function from other library
Question: This is a curiosity. Say I am using **IPython** interactively, which by
default imports
from numpy import sum
and after that I decide to use `sum` from the standard library. Say, to do
something like
texts = [['a','good','day'],['a','lovely','day']]
sum(texts, [])
Can I do that? How?
* _Notice that I don't need a solution for flattening a list. I want a general solution for importing the standard library as if I were importing any other library, or a way to circumvent the shadowing of a function._
Answer: You can access it under `__builtin__` (Python 2) or `builtins` (Python 3):
>>> from numpy import sum
>>> texts = [['a','good','day'],['a','lovely','day']]
>>> sum(texts, [])
Traceback (most recent call last):
[...]
TypeError: cannot perform reduce with flexible type
>>> __builtin__.sum(texts, [])
['a', 'good', 'day', 'a', 'lovely', 'day']
>>> from __builtin__ import sum
>>> sum(texts, [])
['a', 'good', 'day', 'a', 'lovely', 'day']
But two points:
(1) IPython does _not_ import numpy's sum by default-- unless you're working
in a legacy `pylab` mode, in which case you shouldn't. :-)
(2) `sum` isn't a great example because using `sum(something, [])` to
concatenate lists will show quadratic behaviour and so should generally be
avoided.
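
As an aside on point (2), a linear-time alternative for flattening one level of
nesting is `itertools.chain` from the standard library:

    >>> import itertools
    >>> list(itertools.chain.from_iterable(texts))
    ['a', 'good', 'day', 'a', 'lovely', 'day']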
|
Running subprocess on Windows causes TypeError: must be str without null characters or None, not str
Question: I'm trying to make a script that opens a program. I have a command that I
normally run manually in a Windows run (Windows Key + R).
command "C:\Riot Games\League of Legends\RADS\solutions\lol_game_client_sln\releases\0.0.1.54\deploy\League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator 95.172.65.26:8088 kbbaZXzvdjwL2tHtX7XGaEG17tJQLVBa 945850509 EUN1"
import subprocess
subprocess.call('"C:\Riot Games\League of Legends\RADS\solutions\lol_game_client_sln\releases\0.0.1.53\deploy\League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator 95.172.65.26:8088 kbbaZXzvdjwL2tHtX7XGaEG17tJQLVBa 945850509 EUN1"')
When I run this, I get an error:
Traceback (most recent call last):
File "C:\Users\Duran\Desktop\helloworld.py", line 2, in <module>
subprocess.call('"C:\Riot Games\League of Legends\RADS\solutions\lol_game_client_sln\releases\0.0.1.53\deploy\League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator 95.172.65.26:8088 kbbaZXzvdjwL2tHtX7XGaEG17tJQLVBa 945850509 EUN1"')
File "C:\Python34\lib\subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python34\lib\subprocess.py", line 858, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1111, in _execute_child
startupinfo)
TypeError: must be str without null characters or None, not str
You probably already saw I'm pretty new to this but if someone could help me I
would really appreciate it.
Answer: `"\0"` is NUL character -- `ord('\0') == 0`. It leads to the `TypeError` in
your question.
Use `r"\0"` to get two symbols: the backslash plus the decimal zero --
`list(map(ord, r'\0')) == [92, 48]`.
from subprocess import check_call
check_call([
r"C:\Riot Games\League of Legends\RADS\solutions\lol_game_client_sln\releases\0.0.1.54\deploy\League of Legends.exe",
"8394",
"LoLLauncher.exe",
"",
"spectator 95.172.65.26:8088 kbbaZXzvdjwL2tHtX7XGaEG17tJQLVBa 945850509 EUN1"
])
Note: there are no quotes inside quotes.
|
not able to get float value from python script in C# windows app
Question: I have a Python script which reads a DataTable passed from C#. But when I
try to get the value of a variable in the Python script, it does not fetch the
value. This issue occurs only for variables of type float. For example, when I
try to fetch the value of "ErrorRate" (in the Python script), it returns only
0.0, though the actual value is 37.55.
// C# script
// dt is datatable from C# being passed to python
var engine = Python.CreateEngine();
ScriptScope scope = engine.CreateScope();
ScriptRuntime runTime = Python.CreateRuntime();
ScriptEngine pyEngine = runTime.GetEngine("py");
MemoryStream ms = new MemoryStream();
scope.SetVariable("dt",dtmp);
runTime.IO.SetOutput(ms, new StreamWriter(ms));
ScriptSource ss = pyEngine.CreateScriptSourceFromString(
txtPythonCode.Text.Trim(),
SourceCodeKind.Statements);
ss.Execute(scope);
dynamic v1 = scope.GetVariable("Error");
dynamic v3 = scope.GetVariable<float>("ErrorRate");
string str = ReadFromStream(ms);
// Python script
import clr
clr.AddReference('System.Data')
from System import Data
from System.Data import DataTable
TW = "Button1"
TotalCnt =0;
ErrorRate = 0.0;
Error =0;
# Function definition is here
def GetErrorRate(erate,total):
ErrorRate = float( erate / TotalCnt )*100
print("Error rate - ",float( erate / TotalCnt )*100);
return;
for row in dt.Rows:
if TW == row[3]:
print("TotalCnt - ",TotalCnt)
elif row[3]=="X":
TotalCnt = TotalCnt +1
elif row[3]=="Y":
TotalCnt = TotalCnt +1
elif row[3]=="Z":
TotalCnt = TotalCnt +1
elif row[3]==" Wire":
print()
else:
TotalCnt = TotalCnt +1
Error = Error +1
GetErrorRate(Error,TotalCnt)
Really don't know what could be the reason for the issue.
Answer: As [PM 2Ring](http://stackoverflow.com/users/4014959/pm-2ring) observed, you
are probably doing integer division. This means any decimal part of the number
gets discarded. Now, the result as you told us should be `37.55`. To get that
you multiply by `100`, so the result of `erate / TotalCnt` should be
`0.3755`. With integer division, however, the result of that is `0`. You then
convert this `0` into `0.0` with the `float` constructor, multiply by `100`,
and there you get the result, `0.0`.

The solution is very simple: just use `TotalCnt = 0.0` instead of
`TotalCnt = 0;`. That makes `TotalCnt` a `float` from the start, so every
operation using it is a floating-point operation.
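
A quick illustration of the difference (Python 2 semantics, with made-up
numbers in the same ballpark as the question):

    >>> 3755 / 10000          # two ints: integer division, fraction discarded
    0
    >>> float(3755 / 10000)   # too late -- the truncation already happened
    0.0
    >>> 3755 / 10000.0        # one float operand: true division
    0.3755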
**PS:** You don't need semicolons `;` after statements in Python.
|
How to extract specific columns from a space separated file in Python?
Question: I'm trying to process a file from the protein data bank which is separated by
spaces (not \t). I have a .txt file and I want to extract specific rows and,
from those rows, I want to extract only a few columns. I need to do it in
Python. I tried first on the command line and used the awk command with no
problem, but I have no idea how to do the same in Python.
Here is an extract of my file:
[...]
SEQRES 6 B 80 ALA LEU SER ILE LYS LYS ALA GLN THR PRO GLN GLN TRP
SEQRES 7 B 80 LYS PRO
HELIX 1 1 THR A 68 SER A 81 1 14
HELIX 2 2 CYS A 97 LEU A 110 1 14
HELIX 3 3 ASN A 122 SER A 133 1 12
[...]
For example, I'd like to take only the 'HELIX' rows and then the 4th, 6th, 7th
and 9th columns. I started reading the file line by line with a for loop and
then extracted those rows starting with 'HELIX'... and that's all.
EDIT: This is the code I have right now, but the print doesn't work properly,
only prints the first line of each block (HELIX SHEET AND DBREF)
#!/usr/bin/python
import sys
for line in open(sys.argv[1]):
if 'HELIX' in line:
helix = line.split()
elif 'SHEET'in line:
sheet = line.split()
elif 'DBREF' in line:
dbref = line.split()
print (helix), (sheet), (dbref)
Answer: Have a look at the CSV library. <https://docs.python.org/2/library/csv.html>
The following code should do the trick
>>> import csv
>>> with open('my-file.txt', 'rb') as myfile:
... spamreader = csv.reader(myfile, delimiter=' ', )
... for row in spamreader:
... print row[3]
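
One caveat with the snippet above: `csv.reader` with `delimiter=' '` treats
every single space as a separator, so the runs of spaces in PDB files produce
lots of empty fields. A plain `str.split()` sketch side-steps that (the indices
below are 0-based guesses at the question's 4th, 6th, 7th and 9th columns):

    with open('my-file.txt') as myfile:
        for line in myfile:
            if line.startswith('HELIX'):
                cols = line.split()  # splits on any run of whitespace
                print cols[3], cols[5], cols[6], cols[8]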
|
python 3.4 spirograph error
Question:
from turtle import*
from math import*
def Xcord(R,r,p,t):
x= (R-r) * cos(t) - (r+p) * cos((R-r)//r*t)
def Ycord (R,r,p,t):
y= (R-r) * sin(t) - (r+p) * sin((R-r)//r*t)
def t_iter(R,r,p):
t=0
down()
goto(Xcord(R,r,p,t),Ycord(R,r,p,t))
while (t < 2 * pi):
t = t+.01
Xcord(R,r,p,t)
Ycord(R,r,p,t)
up()
return
def main():
R=100
r=4
p=int(input("Please enter a number between 10 and 100: "))
if p < 10 or p > 100:
input(" Incorrect value of p!")
t_iter(R,r,p)
input("Hit enter to close porgram")
bye()
main()
I get this error:
Traceback (most recent call last):
File "C:/Users/Coscio/Desktop/spirals.py", line 31, in <module>
main()
File "C:/Users/Coscio/Desktop/spirals.py", line 27, in main
t_iter(R,r,p)
File "C:/Users/Coscio/Desktop/spirals.py", line 13, in t_iter
goto(Xcord(R,r,p,t),Ycord(R,r,p,t))
File "<string>", line 1, in goto
File "D:\Python\lib\turtle.py", line 1774, in goto
self._goto(Vec2D(*x))
TypeError: type object argument after * must be a sequence, not NoneType
Answer:
    from turtle import *
    from math import *

    def Xcord(R, r, p, t):
        return (R-r) * cos(t) - (r+p) * cos((R-r)//r*t)

    def Ycord(R, r, p, t):
        return (R-r) * sin(t) - (r+p) * sin((R-r)//r*t)

    def t_iter(R, r, p):
        # move to the starting point with the pen up, then draw
        t = 0
        up()
        goto(Xcord(R, r, p, t), Ycord(R, r, p, t))
        down()
        while t < 2 * pi:
            t = t + .01
            goto(Xcord(R, r, p, t), Ycord(R, r, p, t))
        up()

    def validate():
        while True:
            p = int(input("Please enter a number between 10 and 100: "))
            if 10 <= p <= 100:
                break
        return p

    def main():
        speed(0)
        R = 100
        r = 4
        p = validate()
        t_iter(R, r, p)
        input("Hit enter to close program")
        bye()

    main()
|
Running a module that I've created out of Enthought Canopy
Question: I've created a module and would like to access it through another Python
script in Enthought Canopy. When I attempt to do the same thing using Python
directly through the command line, this works just fine -- I just import
myfile.py. Additionally, I know that my default Python distribution on this
machine is Enthought Canopy. Anyone know why I'm not able to access the module
I've created from within a Python script in the Canopy editor? It just says
there is 'No module named myfile', even though myfile.py is in the same
directory.
Answer: There was an issue with the current working directory--it's not set by default
to where the file is saved. `cd`ing into the directory with the module in it
fixed it.
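
If changing the working directory is inconvenient, another sketch is to put the
module's directory on `sys.path` before importing it (the path below is a
placeholder):

    import os
    import sys

    sys.path.insert(0, os.path.abspath('/path/to/directory/containing/myfile'))
    import myfile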
|
In Python, how can I export a 3D isosurface into Blender
Question: I have some 3D (x,y,z,value) data in python and I can visualize the
isosurfaces in Mayavi. How can I export this isosurface into a file that I can
read into Blender?
Here is some example code:
import numpy
from mayavi import mlab
x, y, z = numpy.ogrid[-5:5:64j, -5:5:64j, -5:5:64j]
values = x * x * 0.5 + y * y + z * z * 2.0
mlab.contour3d(values, contours=[.5])
mlab.show()
Answer: Using @timday's suggestion, I added the following code to save the isosurface
in a wavefront (.obj) format:
mlab.savefig('surface.obj')
Then, I can open it in Blender with File>>Import>>Wavefront (.obj)
I had to scale down the image considerably (~100x) to make it visible in the
Blender viewport.

The origin is set way off to the side of the actual object, so the object is
easier to deal with if I use Object>>Transform>>Origin to Geometry

After adding some lighting and a plane, the object looks pretty good!

|
Sierpinski triangle recursion using turtle graphics
Question: I am trying to write a program that draws a sierpinski tree with python using
turtle. Here is my idea:
import turtle
def draw_sierpinski(length,depth):
window = turtle.Screen()
t = turtle.Turtle()
if depth==0:
for i in range(0,3):
t.fd(length)
t.left(120)
else:
draw_sierpinski(length/2,depth-1)
t.fd(length/2)
draw_sierpinski(length/2,depth-1)
t.bk(length/2)
t.left(60)
t.fd(length/2)
t.right(60)
draw_sierpinski(length/2,depth-1)
window.exitonclick()
draw_sierpinski(500,1)
The program does not reach the 2nd line after the else statement and I don't
know why. Can anyone help me?
Answer: I don't think you should be creating the turtle or window object inside the
function. Since `draw_sierpinski` gets called four times if you originally
call it with depth 1, you'll create four separate windows with four
separate turtles, each one drawing only a single triangle. Instead, I think
you should have only one window and one turtle.
import turtle
def draw_sierpinski(length,depth):
if depth==0:
for i in range(0,3):
t.fd(length)
t.left(120)
else:
draw_sierpinski(length/2,depth-1)
t.fd(length/2)
draw_sierpinski(length/2,depth-1)
t.bk(length/2)
t.left(60)
t.fd(length/2)
t.right(60)
draw_sierpinski(length/2,depth-1)
window = turtle.Screen()
t = turtle.Turtle()
draw_sierpinski(500,1)
window.exitonclick()
Result:

* * *
These results look pretty good for a depth 1 triangle, but what about when we
call `draw_sierpinski(100,2)`?

Ooh, not so good. This occurs because the function should draw the shape, and
then return the turtle to its original starting position and angle. But as is
evident from the depth 1 image, the turtle doesn't return to its starting
position; it ends up halfway up the left slope. You need some additional logic
to send it back home.
import turtle
def draw_sierpinski(length,depth):
if depth==0:
for i in range(0,3):
t.fd(length)
t.left(120)
else:
draw_sierpinski(length/2,depth-1)
t.fd(length/2)
draw_sierpinski(length/2,depth-1)
t.bk(length/2)
t.left(60)
t.fd(length/2)
t.right(60)
draw_sierpinski(length/2,depth-1)
t.left(60)
t.bk(length/2)
t.right(60)
window = turtle.Screen()
t = turtle.Turtle()
draw_sierpinski(100,2)
window.exitonclick()
Result:

|
Python's datetime strptime() inconsistent between machines
Question: I'm stumped. The date-cleaning functions I wrote work in Python 2.7.5 on my
Mac but not in 2.7.6 on my Ubuntu server.
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> date = datetime.strptime('2013-08-15 10:23:05 PDT', '%Y-%m-%d %H:%M:%S %Z')
>>> print(date)
2013-08-15 10:23:05
Why does this not work in 2.7.6 on Ubuntu?
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> date = datetime.strptime('2013-08-15 10:23:05 PDT', '%Y-%m-%d %H:%M:%S %Z')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
(data_string, format))
ValueError: time data '2013-08-15 10:23:05 PDT' does not match format '%Y-%m-%d %H:%M:%S %Z'
Edit: I tried using the timezone offset with the lowercase %z, but still get
an error (although a different one):
>>> date = datetime.strptime('2013-08-15 10:23:05 -0700', '%Y-%m-%d %H:%M:%S %z')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 317, in _strptime
(bad_directive, format))
ValueError: 'z' is a bad directive in format '%Y-%m-%d %H:%M:%S %z'
Answer: Timezone abbreviations are ambiguous. [For
example](http://stackoverflow.com/a/22364003/190597), EST can mean Eastern
Standard Time in the US, or it could mean Eastern Summer Time in Australia.
Therefore, datetime strings which contain timezone abbreviations can not be
reliably parsed into timezone-aware datetime objects.
`strptime`'s `'%Z'` format will only match UTC, GMT or the timezone
abbreviation listed in `time.tzname`, which is machine-locale dependent.
If you can change the datetime strings to ones containing UTC offsets, then
you could use [dateutil](http://niemeyer.net/python-dateutil) to parse the
strings into timezone-aware datetime objects:
import dateutil
import dateutil.parser as DP
date = DP.parse('2013-08-15 10:23:05 -0700')
print(repr(date))
# datetime.datetime(2013, 8, 15, 10, 23, 5, tzinfo=tzoffset(None, -25200))
|
Parsing Relaxed Dates in Python
Question: Is there any package (like Natty in Java) in python that is able to parse a
relaxed date such as:
"Third day of September, 1988"
I tried using parsedatetime, and I get the correct month and year, but not the
correct day.
>>>>import parsedatetime
>>>>cal = parsedatetime.Calendar()
>>>>cal.parse("Third day of September, 1988")
((1988, 9, 1, 14, 2, 15, 2, 253, 1), 1)
Am I using the module wrong? If not, is there an alternative package I can use
that can get the correct result?
Answer: I'm not sure there is any direct replacement for Natty on Python. As you found
`parsedatetime` is close but not quite what you want. `parsedatetime` will
accept `"3rd September, 1988"` . It doesn't like `day of`.
If you want Natty and you are willing to run Java from Python then you may
want to see this [article](http://baojie.org/blog/2014/06/16/call-java-from-python/)
about different ways of doing that.
|
Python format Decimal near zero
Question: I'm trying to parse a near 0 number using the decimal.Decimal python module:
>>> import decimal
>>> from decimal import Decimal
>>> Decimal("0.00000161")
Decimal('0.00000161')
>>> Decimal("0.00000061")
Decimal('6.1E-7')
>>>
What would be the best way to print "0.00000061" instead of "6.1E-7"?
Answer:
In [157]: from decimal import Decimal
In [158]: x = Decimal("0.00000061")
In [159]: format(x, 'f')
Out[159]: '0.00000061'
|
Where is the FiPy "base directory"?
Question: I have recently installed the FiPy package onto my Macbook, with all the
dependencies, through MacPorts. I have no troubles calling FiPy and NumPy as
packages in Python.
Now that I have it working I want to go through the examples. However, I
cannot find the "base directory" or FiPy Directory in my computer.
How can I find the base directory? Do I even have the base directory if I have
installed all this via Macports?
As a note I am using Python27.
Please, help! Thanks.
Answer: From the FiPy docs (<http://www.ctcms.nist.gov/fipy/README.html>):
> When references are made to file system paths, it is assumed that the
> current working directory is the FiPy distribution directory, refered to as
> the “base directory”, such that:
>
> examples/diffusion/steadyState/mesh1D.py
>
> will correspond to, e.g.:
>
> /some/where/FiPy-X.Y/examples/diffusion/steadyState/mesh1D.py
This just means the working directory if the FiPy repository is cloned or
tarball unpacked and then the directory is changed to `fipy/`. It will have
`setup.py` and `examples/` in there. If you install FiPy without cloning or
using the tarball (e.g. using pip instead) the distribution directory (base
directory) won't be readily available.
It isn't the path returned from `import fipy; print(fipy.__file__)`. That's
the installation path.
|
WebDriverException: Message: 'ChromeDriver executable needs to be available in the path
Question: Every time I run my python algorithm I am getting this error in cmd:
Traceback (most recent call last):
File "newScrape.py", line 1, in <module>
import SScraper as SS
File "build\bdist.win32\egg\SScraper\__init__.py", line 45, in <module>
File "C:\Python27\lib\site-packages\selenium-2.43.0-py2.7.egg\selenium\webdriv
er\chrome\webdriver.py", line 59, in __init__
self.service.start()
File "C:\Python27\lib\site-packages\selenium-2.43.0-py2.7.egg\selenium\webdriv
er\chrome\service.py", line 68, in start
and read up at http://code.google.com/p/selenium/wiki/ChromeDriver")
selenium.common.exceptions.WebDriverException: Message: 'ChromeDriver executable
needs to be available in the path. Please download from http://
chromedriver.storage.googleapis.com/index.html and read up at htt
p://code.google.com/p/selenium/wiki/ChromeDriver'
Here is my environment PATH: C:\Program
Files\Java\jdk1.8.0_05\bin;C:\Python27;C:\Python27\Lib\site-
packages\;C:\Python27\Scripts\; C:\FILES\chromedriver.exe;
What am I doing wrong?
Answer: You need to replace `C:\FILES\chromedriver.exe` with just `C:\FILES` since the
`PATH` environment variable contains the semi-colon separated list of
_directories_.
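
Alternatively, if you would rather not touch `PATH` at all, the Python bindings
let you pass the driver location straight to the constructor via the
`executable_path` keyword:

    from selenium import webdriver

    driver = webdriver.Chrome(executable_path=r"C:\FILES\chromedriver.exe")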
|
How can i find the string inside single quotes in python regex
Question: I want to match the string inside these lines
key: value
key: 'value'
I am using this regex
`re.compile(r"(.*key: )(.*)")`
but \2 always catches single quotes as well.
I tried many things like
`(.*key: )'?(.*)'?` but didn't work
I am trying like this
line = regex.sub(r"\1'blah'\2", line)
Answer: I think you are trying to capture the value that is not present within single
quotes as well as the one that is within single quotes.
key:\s*'?([^'\n]*)'?
Group index 1 contains the value of the field `key`.
[DEMO](http://regex101.com/r/bX3wF8/1)
>>> import re
>>> s = """key: value
... key: 'value'"""
>>> m = re.findall(r"key:\s*'?([^'\n]*)'?", s, re.M)
>>> m
['value', 'value']
|
Get full path for a file in python
Question: I'm trying to create a csv of paths to wav files in a series of directories.
Each row should correspond to a directory. Each cell in a row should contain
the path to a single file. The script below is "almost" working. It creates a
CSV with the files as cells. However, os.path.realpath and os.path.abspath
don't include the direct parent directory of the file. So, instead of
"/root/directory/file.wav", I'm getting "/root/file.wav".
import fnmatch
import os
import csv
with open('filelist.csv', 'wb') as csvfile:
lister = csv.writer(csvfile, delimiter=',')
for root, dirnames, filenames in os.walk(os.getcwd()):
matches = []
for filename in fnmatch.filter(filenames, '*.wav'):
matches.append(os.path.realpath(filename))
if len(matches) > 0:
print matches
lister.writerow(matches)
Answer: You need to `join` the `root` path to get a relative path from the current
directory, _then_ you can call `abspath`.*
You can see this in every one of the examples for
[`os.walk`](https://docs.python.org/2/library/os.html#os.walk) in the docs,
like this one:
import os
for root, dirs, files in os.walk(top, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
* * *
So, for your code:
with open('filelist.csv', 'wb') as csvfile:
lister = csv.writer(csvfile, delimiter=',')
for root, dirnames, filenames in os.walk(os.getcwd()):
matches = []
for filename in fnmatch.filter(filenames, '*.wav'):
matches.append(os.path.abspath(os.path.join(root, filename)))
if len(matches) > 0:
print matches
lister.writerow(matches)
* * *
* Alternatively, you can start the `walk` with an absolute path and not have to `abspath` each file… but only if you understand what that means for symlinks and are happy with that. If you don't know, use `abspath` on each file.
|
sending video stream from server to client using socket programming in python
Question: I am trying to send a video stream from server to client using Python sockets,
but I am facing errors. Here are my server-side and client-side code; there is
some problem with sending the frame from the server to the client.
Here is my **SERVER SIDE** code
#!/usr/bin/env python
import socket
import cv2
import json
host = ' '
port = 50058
backlog = 5
size = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host,port))
s.listen(backlog)
vc=cv2.VideoCapture(0)
if vc.isOpened():
rval, frame = vc.read()
while 1:
client, address = s.accept()
rval, frame = vc.read()
#data = client.recv(size)
if rval:
#print "recieved data " + str(data)
#print "sending data to %s" % str(address)
#p=json.dumps(frame)
client.send(frame)
client.close()
And my **CLIENT SIDE** code:
#!/usr/bin/env python
"""
A simple echo client
"""
import socket
import cv2
import json
host = '169.254.132.51'
port = 50058
size = 1024000000
cv2.namedWindow("preview")
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host,port))
while True:
data = s.recv()
#frame=json.loads(data)
cv2.imshow("preview", data)
s.close()
Answer: 1) The server closes the connection after one send, but the client seems to
expect more than one frame.

2) SOCK_STREAM (TCP) only provides a byte stream without packet delimiting; you
need to build the delimiting yourself (see the length-prefix sketch below).

3) `recvall` isn't a socket method, and `s.recv()` needs a buffer-size argument.
Maybe you should try implementing it with remote multiprocessing:
<https://docs.python.org/2/library/multiprocessing.html#using-a-remote-manager>
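
For point 2, one common pattern is to length-prefix every frame with `struct`,
so the receiver knows exactly where each frame ends. A minimal sketch (not tied
to the question's variable names; the OpenCV frame still needs serializing
first, e.g. with `cv2.imencode`):

    import struct

    def send_frame(sock, data):
        # 4-byte big-endian length header, then the payload
        sock.sendall(struct.pack('!I', len(data)) + data)

    def recv_exact(sock, n):
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise EOFError('socket closed mid-frame')
            buf += chunk
        return buf

    def recv_frame(sock):
        (length,) = struct.unpack('!I', recv_exact(sock, 4))
        return recv_exact(sock, length)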
|
understanding decorators in python
Question: I have an object I can't touch, but I would like to log its method calls. I
don't really understand decorators, but I was wondering if there is a more
elegant way to do something like this:
import pprint
class myTest(object):
def __init__(self):
self.a = 1
def test(self, **kwargs):
for k, v in kwargs.items():
setattr(self, k, v)
def wrap(o, method, **kwargs):
for k, v in kwargs.items():
print '%s set to %s' %(k, v)
getattr(o, method)(**kwargs)
o = myTest()
d = {'b':2,'c': 3}
wrap(o, 'test', **d)
pprint.pprint(vars(o))
such that I can decorate a regular method call and do it somehow like this:
@mydecorator
o.test(**d)
and would get a similar result. Then I would not have to replace all the
method calls in the script
Answer: The question mentioned in the comments has some excellent explanations of
decorators, so you should read that first: [How can I make a chain of
function decorators in Python?](http://stackoverflow.com/questions/739654/how-can-i-make-a-chain-of-function-decorators-in-python)
As to your specific question: since you can't touch the `myTest` object
definition, you can just change its methods at runtime! You can do this just
to your instance:
import pprint
class myTest(object):
def __init__(self):
self.a = 1
def test(self, **kwargs):
for k, v in kwargs.items():
setattr(self, k, v)
def log_calls(fn):
def wrapped(*args, **kwargs):
print '%s called with args %s and kwargs %s' % (repr(fn), repr(args), repr(kwargs))
fn(*args, **kwargs)
return wrapped
o = myTest()
o.test = log_calls(o.test) # equivalent to applying the decorator at method
# definition, but only applies to this instance of myTest
d = {'b':2, 'c':3}
o.test(**d)
pprint.pprint(vars(o))
Outputs:
<bound method myTest.test of <__main__.myTest object at 0x0000000002EE07F0>> called with args () and kwargs {'c': 3, 'b': 2}
{'a': 1, 'b': 2, 'c': 3, 'test': <function wrapped at 0x0000000002EDDEB8>}
Or modify the class itself:
myTest.test = log_calls(myTest.test)
o = myTest()
d = {'b':2, 'c':3}
o.test(**d)
pprint.pprint(vars(o))
This is called monkey patching - you can find some more info here: [What is
monkey patch?](http://stackoverflow.com/questions/5626193/what-is-monkey-patch)
|
How to drag images from Chrome to PyQt?
Question: I want to drag a picture from the browser but can not get the URL!
if ev.mimeData().hasUrls():
ev.mimeData().urls()
This code works well with images from Firefox (though I sometimes get the link
to the image rather than the image's source URL; I already have an idea how to
fix that). However, the same code returns an empty list when I drag an image
from Chrome. So, what's the problem?
* * *
I have tried dragging an image onto a default `QLineEdit` widget, and the src
URL was dropped automatically in both Firefox and Chrome!
————
**Experimental result:**

PyQt4 - Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 (Ubuntu 14.04)
Firefox 32.0: only works with the URL; I can't find an image type.
Google Chrome Version 37.0.2062.120 (64-bit): the URL is empty and no image is found.
PyQt5 - Python 3.4.0 (default, Apr 11 2014, 13:05:11) [GCC 4.8.2] on linux2 (Ubuntu 14.04)
The same.
Answer: All data dragged and dropped onto a `QWidget` (in PyQt) arrives as common MIME
types ([`QtCore.QMimeData`](http://pyqt.sourceforge.net/Docs/PyQt4/qmimedata.html)).
It is possible that the data is not a URL (`text/uri-list`), as in your problem:
the MIME data may carry only an image type (`image/*`) and no URL (though
sometimes it carries both). However, you can get the image data from
`QtCore.QMimeData` directly.

Use the method [`bool QMimeData.hasImage (self)`](http://pyqt.sourceforge.net/Docs/PyQt4/qmimedata.html#hasImage)
to check whether the type is an image, and load it into a `QImage` by calling
[`QVariant QMimeData.imageData (self)`](http://pyqt.sourceforge.net/Docs/PyQt4/qmimedata.html#imageData).
See this example of how to implement it:
import sys
from PyQt4 import QtGui
class QCustomQWidget (QtGui.QWidget):
def __init__ (self, parentQWidget = None):
super(QCustomQWidget, self).__init__(parentQWidget)
self.mimeQLabel = QtGui.QLabel()
allQHBoxLayout = QtGui.QHBoxLayout()
allQHBoxLayout.addWidget(self.mimeQLabel)
self.setLayout(allQHBoxLayout)
self.setAcceptDrops(True)
def dragEnterEvent (self, eventQDragEnterEvent):
eventQDragEnterEvent.acceptProposedAction()
def dropEvent (self, eventQDropEvent):
self.mimeQLabel.clear()
if eventQDropEvent.mimeData().hasImage():
mimeQImage = QtGui.QImage(eventQDropEvent.mimeData().imageData())
self.mimeQLabel.setPixmap(QtGui.QPixmap.fromImage(mimeQImage))
elif eventQDropEvent.mimeData().hasUrls():
self.mimeQLabel.setText('\n'.join([str(url) for url in eventQDropEvent.mimeData().urls()]))
QtGui.QWidget.dropEvent(self, eventQDropEvent)
app = QtGui.QApplication(sys.argv)
myQCustomQWidget = QCustomQWidget()
myQCustomQWidget.show()
sys.exit(app.exec_())
* * *
**Experimental result:**

PyQt4 - Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Firefox 31.0: image & URL work fine.
Google Chrome Version 37.0.2062.103 m: only works with the URL; I can't find an image type.
|
Python change module name to its realname
Question: `app.py`:
import sys
for name, module in sys.modules.iteritems():
print name
I get:
encodings
site
__builtin__
sysconfig
__main__
....
as you can see, the module name is changed to `__main__`.
I know this behaviour (`if __name__ == '__main__'`), but how can I change the
module name back to its real name?
Answer: Take a look at `module` for your module:
print module
Output:
<module '__main__' from 'app.py'>
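
If you actually need the real name back, one sketch is to derive it from each
module's file, assuming the module was loaded from a .py file (built-in modules
have no `__file__`, and in Python 2 some `sys.modules` entries can be `None`):

    import os
    import sys

    for name, module in sys.modules.iteritems():
        path = getattr(module, '__file__', None)
        if path:
            # strip the directory and the .py/.pyc extension
            print os.path.splitext(os.path.basename(path))[0]
        else:
            print name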
|
Python Paramiko Error
Question: I wrote a Python program. I have a conf file containing router configuration
commands, and I want to execute these commands through Paramiko. I have a
problem; the error message is below. Can you help me, please?
# CODE:
#!/usr/bin/env python
import paramiko
ip="10.100.1.200"
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip,username="admin",password="pass")
text=open("conf")
for komut in text.readlines():
stdin, stdout, stderror = ssh.exec_command(komut)
for line in stdout.readlines():
print line.strip()
ssh.close()
text.close()
# error message:
Traceback (most recent call last):
File "./configmaker.py", line 13, in <module>
stdin, stdout, stderror = ssh.exec_command(str(komut.strip()))
File "/usr/lib/python2.7/dist-packages/paramiko/client.py", line 370, in exec_command
chan = self._transport.open_session()
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 662, in open_session
return self.open_channel('session')
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 764, in open_channel
raise e
EOFError
Answer: Try with this code:
import paramiko
if __name__ == "__main__":
ip = "127.0.0.1"
username = "admin"
password = "root"
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip,username=username,password=password)
ssh_transport = ssh.get_transport()
for command in ("ls /tmp", "date"):
chan = ssh_transport.open_session()
chan.exec_command(command)
exit_code = chan.recv_exit_status()
stdin = chan.makefile('wb', -1) # pylint: disable-msg=W0612
stdout = chan.makefile('rb', -1)
stderr = chan.makefile_stderr('rb', -1) # pylint: disable-msg=W0612
output = stdout.read()
print output
|
computing the determinants of matrices in an array using python
Question: Say you have a numpy array of matrices, e.g. an array of dimension (n,m,m).
Think of it as n matrices, each of size mxm. Is there a way (not using a loop)
of computing the determinant of each matrix in one go?
Answer: You can calculate the determinant of numpy arrays using
[`numpy.linalg.det`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html)
as shown below:
import numpy as np
N = 10
M = 4
# Generate N random MxM arrays
arrays = np.array([np.random.random((M,M)) for _ in range(N)])
dets = np.linalg.det(arrays)
print(dets)
# array([-0.20353081, 0.01632881, -0.17733447, -0.01518313, -0.23457492,
# 0.00284906, 0.16210605, 0.03887231, 0.07726804, -0.05107936])
In the example above I have 10 matrices of size 4x4 (randomly generated as an
example). `dets` is a numpy array of 10 numbers, your determinants.
|
When importing a function it runs the whole script?
Question: I'm new to Python and doing an assignment. It's meant to be done on Linux,
but as I'm doing it by myself on my own computer, I'm doing it on Windows.
I've been trying to use the test system that we use, which looks like this:
>>> import file
>>> file.function(x)
"Answer that we want"
Then we run it through the Linux terminal. I've been trying to create my own
way of doing this by making a test file which imports the file and runs the
function. But instead of just running the function, it runs the whole script,
even though it was never asked to do that.

    import file
    file.function(x)

That's pretty much what I've been doing, but it runs the whole "file". I've
also tried `from file import function`; it does the same.

What kind of script can I use to produce the "answer that we want" for the
test file? When we run it through the Linux terminal, it says whether it has
failed or scored.
Answer: `import`ing a file is equivalent to running it.
When you `import` a file (module), a new module object is created, and upon
executing the module, every new identifier is put into the object as an
attribute.
So if you don't want the module to do anything upon importing, rewrite it so
it only has assignments and function definitions.
If you want it to run something only when invoked directly, you can do
A = whatever
def b():
...
    if __name__ == '__main__':
        # code here is executed only on direct execution, not on import
This holds whether you do `import module` or `from module import
function`, as these do the same thing; only the final binding differs:

`import module` does:
* Check `sys.modules`, and if the module name isn't contained there, import it.
* Assign the identifier `module` to the module object.
`from module import function` does
* Check `sys.modules`, and if the module name isn't contained there, import it. (Same step as above).
* Assign the identifier `function` to the module object's attribute `function`.
|
Python gc.get_referents() returning references that are unknown to inspect module
Question: I am trying to debug a memory leak and have tracked it down to a single
object, call it "parent".
`gc.get_referents(parent)` indicates that it is effectively gaining more and
more references to the object that is leaking. I'm trying to find out more
information about how it is happening, however, `inspect.getmembers(parent)`
knows nothing about these references that gc.get_referents does know about:
i.e.
import gc
import inspect
parent = someObject()
dependents = gc.get_referents(parent)
fromInspect = [b for (a,b) in inspect.getmembers(parent) if b in dependents]
notFromInspect = [b for (a,b) in inspect.getmembers(parent) if b not in dependents]
print len(fromInspect)
>>> 3
print len(notFromInspect)
>>> 69
So there are lots of objects (69 of them!) that the gc module knows about, but
inspect does not.
How does `gc.get_referents()` construct the list of "referent" objects for a
Python object?
Do I need to look at slots? Something else?
Answer: In my case, it turned out that "parent" was a custom python object implemented
in C/C++.
The documentation here describes how you can write your own "container"
objects in Python:
<https://docs.python.org/2/c-api/typeobj.html#Py_TPFLAGS_HAVE_GC>
All that such objects have to do to then interface with the garbage collector
is provide two function pointers,
[`tp_traverse`](https://docs.python.org/2/c-api/typeobj.html#c.PyTypeObject.tp_traverse)
and `tp_clear`.
`tp_traverse` allows a custom type object to tell the Garbage collector
anything it wants about which objects this object owns.
So in my case, this is how I am seeing references which are unknown to the
`inspect` module.
Unfortunately for me, I still haven't found the actual cause of the leak -
except now I know it's related to the C/C++ code.
|
Looking for built-in, invertible, list-of-list-accepting constructor/deconstructor pair for pandas dataframes
Question: Are there built-in ways to construct/deconstruct a dataframe from/to a Python
list-of-Python-lists?
As far as the constructor (let's call it `make_df` for now) that I'm looking
for goes, I want to be able to write the initialization of a dataframe from
literal values, including columns of arbitrary types, in an easily-readable
form, like this:
df = make_df([[9.75, 1],
[6.375, 2],
[9., 3],
[0.25, 1],
[1.875, 2],
[3.75, 3],
[8.625, 1]],
['d', 'i'])
For the deconstructor, I want to essentially recover from a dataframe `df` the
arguments one would need to pass to such `make_df` to re-create `df`.
AFAIK,
1. [officially](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) at least, the pandas.DataFrame constructor accepts only a numpy ndarray, a dict, or another DataFrame (and not a simple Python list-of-lists) as its first argument;
2. the pandas.DataFrame.values property does not preserve the original data types.
I can roll my own functions to do this (e.g., see below), but I would prefer
to stick to built-in methods, if available. (The Pandas API is pretty big, and
some of its names not what I would expect, so it is quite possible that I have
missed one or both of these functions.)
* * *
FWIW, below is a hand-rolled version of what I described above, minimally
tested. (I doubt that it would be able to handle every possible corner-case.)
import pandas as pd
import collections as co
import pandas.util.testing as pdt
def make_df(values, columns):
return pd.DataFrame(co.OrderedDict([(columns[i],
[row[i] for row in values])
for i in range(len(columns))]))
def unmake_df(dataframe):
columns = list(dataframe.columns)
return ([[dataframe[c][i] for c in columns] for i in dataframe.index],
columns)
values = [[9.75, 1],
[6.375, 2],
[9., 3],
[0.25, 1],
[1.875, 2],
[3.75, 3],
[8.625, 1]]
columns = ['d', 'i']
df = make_df(values, columns)
Here's what the output of the call to `make_df` above produced:
>>> df
d i
0 9.750 1
1 6.375 2
2 9.000 3
3 0.250 1
4 1.875 2
5 3.750 3
6 8.625 1
A simple check of the round-trip1:
>>> df == make_df(*unmake_df(df))
True
>>> (values, columns) == unmake_df(make_df(*(values, columns)))
True
BTW, this is an example of the loss of the original values' types:
>>> df.values
array([[ 9.75 , 1. ],
[ 6.375, 2. ],
[ 9. , 3. ],
[ 0.25 , 1. ],
[ 1.875, 2. ],
[ 3.75 , 3. ],
[ 8.625, 1. ]])
Notice how the values in the second column are no longer integers, as they
were originally.
Hence,
>>> df == make_df(df.values, columns)
False
* * *
1 In order to be able to use `==` to test for equality between dataframes
above, I resorted to a little monkey-patching:
def pd_DataFrame___eq__(self, other):
try:
pdt.assert_frame_equal(self, other,
check_index_type=True,
check_column_type=True,
check_frame_type=True)
except:
return False
else:
return True
pd.DataFrame.__eq__ = pd_DataFrame___eq__
Without this hack, expressions of the form `dataframe_0 == dataframe_1` would
have evaluated to dataframe objects, not simple boolean values.
Answer: I'm not sure what documentation you are reading, because the link you give
explicitly says that the default constructor accepts _other list-like objects_
(one of which is a list of lists).
In [6]: pandas.DataFrame([['a', 1], ['b', 2]])
Out[6]:
0 1
0 a 1
1 b 2
[2 rows x 2 columns]
In [7]: t = pandas.DataFrame([['a', 1], ['b', 2]])
In [8]: t.to_dict()
Out[8]: {0: {0: 'a', 1: 'b'}, 1: {0: 1, 1: 2}}
Notice that I use `to_dict` at the end, rather than trying to get back the
original list of lists. This is because it is an ill-posed problem to get the
list arguments back (unless you make an overkill decorator or something to
actually store the ordered arguments that the constructor was called with).
The reason is that a pandas DataFrame, by default, is not an _ordered_ data
structure, at least in the column dimension. You could have permuted the order
of the column data at construction time, and you would get the "same"
DataFrame.
Since there can be many differing notions of equality between two DataFrames
(e.g. same columns even including type, or just same named columns, or some
columns and in same order, or just same columns in mixed order, etc.) --
pandas defaults to trying to be the least specific about it (Python's
principle of least astonishment).
So it would not be good design for the default or built-in constructors to
choose an overly specific idea of equality for the purposes of returning the
DataFrame back down to its arguments.
For that reason, using `to_dict` is better since the resulting keys will
encode the column information, and you can choose to check for column types or
ordering however you want to for your own application. You can even discard
the keys by iterating the `dict` and simply pumping the contents into a list
of lists if you really want to.
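For instance, a minimal sketch of that round-trip through `to_dict` (the sorted ordering used here is an arbitrary choice, since the dict itself is unordered):

    import pandas as pd

    df = pd.DataFrame([['a', 1], ['b', 2]])
    d = df.to_dict()  # {0: {0: 'a', 1: 'b'}, 1: {0: 1, 1: 2}}
    cols = sorted(d.keys())
    rows = sorted(d[cols[0]].keys())
    # rebuild a list of lists, one inner list per row
    values = [[d[c][r] for c in cols] for r in rows]
    print values  # [['a', 1], ['b', 2]]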
In other words, because order _might not_ matter among the columns, the
"inverse" of the list-of-list constructor maps backwards into a bigger set,
namely all the permutations of the same column data. So the inverse you're
looking for is not well-defined without assuming more structure -- and casual
users of a DataFrame might not want or need to make those extra assumptions to
get the invertibility.
As mentioned elsewhere, you should use `DataFrame.equals` to do equality
checking among DataFrames. For finer control, `pandas.util.testing.assert_frame_equal`
(used in the question's monkey-patch) has many options that let you specify the
specific kind of equality testing that makes sense for your application, while
`equals` remains a reasonably generic default.
|
WebStorm startup error on Lubuntu 14.04
Question: I am new to Linux. I am using Lubuntu 14.04. Python version is 2.7.6.
I have installed WebStorm 8 in the following location:
david@david:/usr/opt/webstorm/bin$
When I run following command in bin folder:
./webstorm.sh
It gives me following error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 398, in <module>
import UserDict
File "/usr/lib/python2.7/UserDict.py", line 83, in <module>
import _abcoll
File "/usr/lib/python2.7/_abcoll.py", line 11, in <module>
from abc import ABCMeta, abstractmethod
File "/usr/lib/python2.7/abc.py", line 8, in <module>
from _weakrefset import WeakSet
ImportError: No module named _weakrefset
I have installed "weakrefset" by using following command (and it gave me
message of successful installation):
sudo pip install weakrefset
But problem is still there and Webstorm is not starting up.
webstorm.sh is as follows:
#!/usr/bin/python
import socket
import struct
import sys
import os
import time
# see com.intellij.idea.SocketLock for the server side of this interface
RUN_PATH = '/usr/opt/webstorm/bin/webstorm.sh'
CONFIG_PATH = '/home/david/.WebStorm8/config'
args = []
skip_next = False
for i, arg in enumerate(sys.argv[1:]):
if arg == '-h' or arg == '-?' or arg == '--help':
print(('Usage:\n' + \
' {0} -h |-? | --help\n' + \
' {0} [-l|--line line] file[:line]\n' + \
' {0} diff file1 file2').format(sys.argv[0]))
exit(0)
elif arg == 'diff' and i == 0:
args.append(arg)
elif arg == '-l' or arg == '--line':
args.append(arg)
skip_next = True
elif skip_next:
args.append(arg)
skip_next = False
else:
if ':' in arg:
file_path, line_number = arg.rsplit(':', 1)
if line_number.isdigit():
args.append('-l')
args.append(line_number)
args.append(os.path.abspath(file_path))
else:
args.append(os.path.abspath(arg))
else:
args.append(os.path.abspath(arg))
def launch_with_port(port):
found = False
s = socket.socket()
s.settimeout(0.3)
try:
s.connect(('127.0.0.1', port))
except:
return False
while True:
try:
path_len = struct.unpack(">h", s.recv(2))[0]
path = s.recv(path_len)
path = os.path.abspath(path)
if os.path.abspath(path) == os.path.abspath(CONFIG_PATH):
found = True
break
except:
break
if found:
if args:
cmd = "activate " + os.getcwd() + "\0" + "\0".join(args)
encoded = struct.pack(">h", len(cmd)) + cmd
s.send(encoded)
time.sleep(0.5) # don't close socket immediately
return True
return False
port = -1
try:
f = open(os.path.join(CONFIG_PATH, 'port'))
port = int(f.read())
except Exception:
type, value, traceback = sys.exc_info()
print(value)
port = -1
if port == -1:
# SocketLock actually allows up to 50 ports, but the checking takes too long
for port in range(6942, 6942+10):
if launch_with_port(port): exit()
else:
if launch_with_port(port): exit()
if sys.platform == "darwin":
# Mac OS: RUN_PATH is *.app path
if len(args):
args.insert(0, "--args")
os.execvp("open", ["-a", RUN_PATH] + args)
else:
# unix common
bin_dir, bin_file = os.path.split(RUN_PATH)
os.chdir(bin_dir)
os.execv(bin_file, [bin_file] + args)
Can someone guide me to solve this problem?
Answer: This might be a problem with python-virtualenv that was fixed in python-virtualenv
1.4.9-3ubuntu1. Please see: <https://bugs.launchpad.net/ubuntu/+source/python-
virtualenv/+bug/662611>
See also <http://devnet.jetbrains.com/message/5514381#5514381>
|
Ansible ec2 only provision required servers
Question: I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
hosts: 127.0.0.1
connection: local
roles:
- aws
- name: Configure {{ application_name }} servers
hosts: webservers
sudo: yes
sudo_user: root
remote_user: ubuntu
vars:
- setup_git_repo: no
- update_apt_cache: yes
vars_files:
- env_vars/common.yml
- env_vars/remote.yml
roles:
- common
- db
- memcached
- web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2
instance; it also creates a host group [webservers] and adds the created
instance IP to it.
The Configure {{ application_name }} servers step then configures that server,
installing everything I need.
So far so good, this all does exactly what I want and everything seems to
work.
Here's where I'm stuck. I want to be able to fire up an ec2 instance for
different roles. Ideally I'd create a dbserver, a webserver and maybe a
memcached server. I'd like to be able to deploy any part(s) of this
infrastructure in isolation, e.g. create and provision just the db servers
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the
host group it is for, like so:
- name: Provision webservers
hosts: webservers
connection: local
roles:
- aws
- name: Provision dbservers
hosts: dbservers
connection: local
roles:
- aws
- name: Provision memcachedservers
hosts: memcachedservers
connection: local
roles:
- aws
but those groups don't exist until after the respective step is complete, so I
don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to
understand how that would help me. I've also looked through countless examples
of ansible ec2 provisioning projects, they are all invariably either
provisioning pre-existing ec2 instances, or just create a single instance and
install everything on it.
Answer: In the end I realised it made much more sense to just separate the different
parts of the stack into separate playbooks, with a full-stack playbook that
called each of them.
My remote hosts file stayed largely the same as above. An example of one of
the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
hosts: apiservers #important bit
connection: local #important bit
vars:
- host_group: apiservers
- security_group: blah
roles:
- aws
- name: Configure {{ application_name }} apiservers
hosts: apiservers:!127.0.0.1 #important bit
sudo: yes
sudo_user: root
remote_user: ubuntu
vars_files:
- env_vars/common.yml
- env_vars/remote.yml
vars:
- setup_git_repo: no
- update_apt_cache: yes
roles:
- common
- db
- memcached
- web
This means that the first step of each layer's play adds a new host to the
apiservers group, with the second step (Configure ... apiservers) then being
able to exclude the localhost without getting a no hosts matching error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
hosts: all
- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner w/regards to ansible, so please do take this for what
it is, some guy's attempt to find something that works. There may be better
options and this could violate best practice all over the place.
|
Asking for advice on Django deployment settings with Apache and mod_wsgi
Question: I have deployed Django with Apache and mod_wsgi following the official
documentation and other posts. While I have my site working I am concerned
that I may have gotten my setup wrong. I'd like some advice on my setup and if
it is following best practices. Please let me know if you see problems with
this setup. Thanks, Lee
wsgi.py
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../../")))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../")))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "DjangoProject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
settings.py
...
ALLOWED_HOSTS = ['DjangoProject.example.com']
STATIC_ROOT = "/var/www/DjangoProject/static/"
STATIC_URL = '/static/'
....
/etc/apache2/apache2.conf - other settings are above this line
...
WSGIPythonPath /var/www/DjangoProject/DjangoProject:/var/www/DjangoProject/env/lib/python2.6/site-packages
/etc/apache2/httpd.conf - no other settings but this line deployed
WSGIPythonPath /var/www/DjangoProject:/var/www/DjangoProject/DjangoProject:/var/www/DjangoProject/env/lib/python2.6/site-packages
/etc/apache2/sites-available/default
NameVirtualHost *:8080
<VirtualHost *:8080>
ServerAdmin webmaster@localhost
DocumentRoot /var/www
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
<VirtualHost *:80>
##############################
## DjangoProject WSGI ##
##############################
ServerName DjangoProject.example.com
Alias /favicon.ico /var/www/DjangoProject/DjangoProject/static/favicon.ico
AliasMatch ^/([^/]*\.css) /var/www/DjangoProject/MyApp/static/MyApp/css/$1
Alias /media/ /var/www/DjangoProject/DjangoProject/media/
Alias /static/ /var/www/DjangoProject/MyApp/static/
<Directory /var/www/DjangoProject/MyApp/static>
Order deny,allow
Allow from all
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 1 seconds"
ExpiresByType text/html "access plus 1 seconds"
ExpiresByType image/gif "access plus 10080 minutes"
ExpiresByType image/jpeg "access plus 10080 minutes"
ExpiresByType image/png "access plus 10080 minutes"
ExpiresByType text/css "access plus 60 minutes"
ExpiresByType text/javascript "access plus 60 minutes"
ExpiresByType application/x-javascript "access plus 60 minutes"
ExpiresByType text/xml "access plus 60 minutes"
ExpiresByType text/xml "access plus 60 minutes"
</IfModule>
</Directory>
<Directory /var/www/DjangoProject/DjangoProject/media>
Order deny,allow
Allow from all
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 1 seconds"
ExpiresByType text/html "access plus 1 seconds"
ExpiresByType image/gif "access plus 10080 minutes"
ExpiresByType image/jpeg "access plus 10080 minutes"
ExpiresByType image/png "access plus 10080 minutes"
ExpiresByType text/css "access plus 60 minutes"
ExpiresByType text/javascript "access plus 60 minutes"
ExpiresByType application/x-javascript "access plus 60 minutes"
ExpiresByType text/xml "access plus 60 minutes"
</IfModule>
</Directory>
WSGIDaemonProcess DjangoProject.example.com processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup DjangoProject.example.com
WSGIScriptAlias /MyApp /var/www/DjangoProject/DjangoProject/wsgi.py
WSGIScriptAlias / /var/www/DjangoProject/DjangoProject/wsgi.py
<Directory /var/www/DjangoProject/DjangoProject>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 1 seconds"
ExpiresByType text/html "access plus 1 seconds"
ExpiresByType image/gif "access plus 10080 minutes"
ExpiresByType image/jpeg "access plus 10080 minutes"
ExpiresByType image/png "access plus 10080 minutes"
ExpiresByType text/css "access plus 60 minutes"
ExpiresByType text/javascript "access plus 60 minutes"
ExpiresByType application/x-javascript "access plus 60 minutes"
ExpiresByType text/xml "access plus 60 minutes"
</IfModule>
</Directory>
AddType audio/mpeg .mp1 .mp2 .mp3 .mpg .mpeg
</VirtualHost>
Answer: Setting:
DocumentRoot /var/www
as you have is dangerous for a start.
You should never set DocumentRoot directory to be a parent directory of where
your Django project is being stored. If you stuff up other parts of your
configuration it could result in your Django settings file being downloadable,
including any database passwords.
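A safer arrangement points DocumentRoot at a directory that contains nothing sensitive and keeps the project outside it entirely; a minimal sketch (the paths here are illustrative, not taken from the configuration above):

    DocumentRoot /var/www/html
    WSGIScriptAlias / /srv/DjangoProject/DjangoProject/wsgi.py
    Alias /static/ /srv/DjangoProject/static/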
|
Calculating string difference in python
Question: I am trying to calculate the number of characters that differ between two
strings in python. Ideally I want a function just like strdif in C. I see
ndiff in python's difflib, but that returns a Differ object whereas I want a
simple integer (e.g.: "10011" vs "00110" returns 3). I know the answer must be
simple, but I can't figure it out, and I'd rather use a library function
instead of writing it myself.
Answer:
s1,s2 = "10011", "00110"
print sum(a!=b for a,b in zip(s1,s2)) + abs(len(s1)-len(s2))
should work fine.
or as John Clements points out
print sum(a!=b for a,b in map(None,s1,s2))
which avoids the extra length check ... and will be slightly faster if the
strings are typically the same length (and its an awesome solution!) ...
or even more terse (now it's starting to enter black-magic land, where enough
reader comprehension is lost that I would probably not recommend actually
implementing it like this in anything that may be seen by others; and if you
do, make sure to add lots of comments)
from operator import ne
print sum(map(ne, s1, s2))
|
adding backslash to symbols in python while using the regex library
Question: So I need to selectively add backslashes to characters that already have pre-
defined meaning (such as + and *) in the re library in Python.
Say that I am given an array
arr = ["five", "+", "two", "*", "zero", "=", "five"]
are there any functions within re (or Python) that will allow me to add a \
to "+" and "*", so it will look like
arr = ["five", "\+", "two", "\*", "zero", "=", "five"] ?
I can technically do an exhaustive search and compare each element within arr to
each of the symbols and add \ accordingly, but that is far from ideal. Any help
would be appreciated!
Answer: Use [`re.escape`](https://docs.python.org/2/library/re.html#re.escape) with
either [`map`](https://docs.python.org/2/library/functions.html#map) or a list
comprehension:
>>> import re
>>> arr = ["five", "+", "two", "*", "zero", "=", "five"]
>>> map(re.escape, arr)
['five', '\\+', 'two', '\\*', 'zero', '\\=', 'five']
>>> [re.escape(x) for x in arr]
['five', '\\+', 'two', '\\*', 'zero', '\\=', 'five']
|
Python, Tkinter: Get Text from Checkbutton when Checked
Question: I'm working with the following code. My goal is to get the text of a
Checkbutton when it is checked, and append that text to a list. I want to
write the code dynamically, because the size of list 'x' may change. Here's
what I have so far:
from Tkinter import *
root = Tk()
global j
j = []
x = ['hello', 'this', 'is', 'a', 'list']
def chkbox_checked():
j.append(c.cget("text"))
for i in x:
c = Checkbutton(root, text=i, command=chkbox_checked)
c.grid(sticky=W)
mainloop()
print j
My output so far for j has been:
['list', 'list', 'list', 'list', 'list'] #depending on how many Checkbuttons I select
I'm looking for an output that is like this:
['this', 'list'] #depending on the Checkbuttons that I select; this would be the output if I
#selected Checkbuttons "this" and "list".
I've experimented with the "variable" option in the Checkbutton, but I can't
seem to connect the dots. Can anyone point me in the right direction? I have a
feeling it's relatively straightforward. Thanks!
Answer: The problem is that the variable c in the for loop is reassigned on every
iteration, so every callback sees the last Checkbutton; that's why only the last
element, `list`, gets appended.
One solution is to use lambda functions.
def chkbox_checked(text):
return lambda : j.append(text)
for i in x:
c = Checkbutton(root, text=i, command=chkbox_checked(i))
c.grid(sticky=W)
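Since the question also mentions experimenting with the "variable" option: here is a minimal sketch of that approach, which has the added benefit of handling un-checking (the lambda version above keeps appending on every toggle). The "Done" button is an assumption added so the variables can be read before the window is destroyed:

    from Tkinter import *

    root = Tk()
    x = ['hello', 'this', 'is', 'a', 'list']
    ivars = []
    checked = []

    def collect():
        # read every variable while the widgets still exist, then close
        checked[:] = [text for text, v in zip(x, ivars) if v.get()]
        root.destroy()

    for i in x:
        v = IntVar()  # one variable per Checkbutton, toggled between 0 and 1
        Checkbutton(root, text=i, variable=v).grid(sticky=W)
        ivars.append(v)
    Button(root, text='Done', command=collect).grid(sticky=W)
    mainloop()
    print checked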
|
Python3 datetime negative number
Question: I am trying to time the run time of a function using datetime.now() and
subtracting the two timestamps to get a floating-point number of seconds.
tStart = datetime.now()
*function*
tEnd = datetime.now()
# microseconds for more precision then convert to seconds
time = tEnd.microsecond - tStart.microsecond
print(listLength, "\t", '{0:.6f}'.format(time/1000000))
Example Error:
3250 0.043427
3500 -0.940100
3750 0.075566
Am I wrong for assuming that I can convert back to normal seconds with a
standard conversion? Is there a better way of doing this? I tried using
`seconds`, but it only gives hundredths-of-a-second precision, so I don't get
the 6 decimal digits I need.
Answer: If you just want to calculate the offset in seconds between two dates, the
_time_ module may be more what you are looking for. It expresses dates in
seconds since the UNIX epoch, so subtracting two dates gives natively a result
in seconds :
import time
tStart = time.time()
*some code*
print time.time() - tStart
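Alternatively, if you want to stay with datetime: subtracting two datetime objects yields a timedelta, whose total_seconds() (Python 2.7+) already carries microsecond precision. This avoids the negative results that come from comparing only the microsecond fields across a seconds boundary:

    from datetime import datetime

    tStart = datetime.now()
    # *function*
    tEnd = datetime.now()
    print('{0:.6f}'.format((tEnd - tStart).total_seconds()))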
|
how to know which quarter does the current month belongs to ? (in python )
Question: I want to know which quarter (Q1, Q2, Q3, Q4) the current month belongs to
in Python. I'm fetching the current date by importing the time module as follows:
import time
print "Current date " + time.strftime("%x")
any idea how to do it ?
Answer: [`strftime`](https://docs.python.org/2/library/time.html#time.strftime) does
not know about quarters, but you can calculate them from the month:
1. Use [`time.localtime`](https://docs.python.org/2/library/time.html#time.localtime) to retrieve the current time in the current timezone. This function returns a named tuple with year, month, day of month, hour, minute, second, weekday, day of year, and time zone offset. You will only need the month (`tm_mon`).
2. Use the month to calculate the quarter. If the first quarter starts with January and ends with March, the second quarter starts with April and ends with June, etc., then this is as easy as subtracting 1 from the month, dividing by 3 without remainder, and adding 1 (months 1..3: (0..2) // 3 == 0, 0 + 1 == 1; months 4..6: (3..5) // 3 == 1, 1 + 1 == 2, etc.), as in the sketch below. If your definition of what a quarter is differs (e.g. companies may choose different start dates for their financial quarters), you have to adjust the calculation accordingly.
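Putting both steps together, a minimal sketch:

    import time

    month = time.localtime().tm_mon        # 1..12
    quarter = (month - 1) // 3 + 1         # Jan-Mar -> 1, Apr-Jun -> 2, ...
    print "Current month %d is in Q%d" % (month, quarter)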
|
Git and file renaming and replacing
Question: I don't generally have a problem with renaming with git, but I've run across a
really difficult problem I'm trying to work out.
For various reasons, I have a situation where we have a file `dir1/file`. Due
to some long ago decisions, it's in completely the wrong place and needs to be
moved to `dir2/file`.
However there's a lot of code that needs to be changed and for various reasons
we have to keep the file in the new place and the old place for a while.
So, the natural(ish) approach would be to do this:
git mv dir1/file dir2/file
git commit -a
so far so good:
> git diff master --name-status --find-renames
R100 dir1/file dir2/file
So then we do
ln -s ../dir2/file dir1/file
git commit -a
but this happens
> git diff master --name-status --find-renames
A dir2/file
T dir1/file
And if anyone changes `dir1/file` on master and I try to pull it I get told
there's a merge conflict with `dir1/file` and `dir2/file` is left unchanged.
I thought from reading other posts that git tracked content, but it seems to
be tracking filenames as well as content. And completely missing the fact that
the content has moved.
So how on earth do I get git to recognise that I have in fact renamed a file
and then added a new file which just happens to have the same name as the old
one?
Note: I'd rather not do this as multiple pushes. There's several files like
this that are affected and the chances that someone is doing changes to one of
them in parallel are quite high and there's no guarantee they will be able to
do the pull to get the rename and then the pull to get the soft link.
An additional example: I was removing a function from a Python module's `__init__.py`
which should never have been in there; the `__init__.py` should have been
empty. This too is not spotted as a rename, even though the contents of the
new file are 99% identical to the original `__init__.py` and the contents of
the new `__init__.py` are 0% identical to the old contents. Everything is fine
until I add a file with the same name.
Answer: Git does, in fact, track content rather than—or rather, we should say "in
addition to"—names. The diff goes wrong because `git diff` (necessarily) tries
to map names and compare the contents of two separate commits (or one commit
and the current working directory, or one commit and the current index, etc.,
but these are just variations on the theme of "compare two commits").
More specifically, _when_ `git diff` compares trees1 _`T1`_ and _`T2`_ , it
assumes by default that the only _candidates_ for a rename are those where
some file-name exists in _`T1`_ but not in _`T2`_ , and some other (different)
file name exists in _`T2`_ but not in _`T1`_.
Thus, when you make the first commit, you have two commits—let's call these A
and B—with two trees where `dir1/file` "goes missing" from A and `dir2/file`
appears in B. That's a candidate for rename-detection, and because the file
contents are 100% identical, git easily spots the rename and gives you the
`R100` diff output.
When you make the second commit, you add commit C with a third tree. Comparing
B and C works fine: `dir2/file` appears in both, and the new symlink
`dir1/file` appears only in C, and the diff output from this pair is fine too.
The problem comes in when comparing A and C: now `dir1/file` appears in both,
while `dir2/file` is only in C, and `git diff` does not realize that there's
a rename candidate.
There is a flag, `--find-copies-harder`—or you may specify `-C` more than
once—that (rather unsurprisingly) makes the copy/rename detection code work
harder. In this case git will consider the possibility that a file that
"appears unchanged" (has the same name in both trees) might have been copied
or renamed to another file that "appears new" (exists in second tree but not
in first). This is not enabled by default because the fully-general version is
extremely computationally-intensive.
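For example, to re-run the comparison from the question with the harder detection turned on:

    git diff --find-copies-harder --name-status master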
* * *
Unfortunately, there is no way to control the diff options used when computing
diff-sets for `git merge`. The merge command sets some defaults (-M50%, etc.)
and does several diffs, and does not let you set `--find-copies-harder`. So
even if this works for a manual `git diff`, it won't solve your merge
conflict.
Note that when you do a merge,2 git computes just two sets of diffs: that from
the merge-base3 to the current `HEAD`, and that from the merge-base to the
merged-in commit (git merges a commit, not a branch: the fact that the result
merges that branch, when that commit is the tip of a branch, is a sort of
"intentional coincidence"). So it _is_ possible to make the rename as one
commit, and the symlink as a second, but to get `git merge` to "see" the
rename, you must also do two separate `git merge`s. It's not particularly
pleasing, but to fix this, you would have to make git's `diff` machinery
smarter, so that it could at least figure out that a file-type-change makes
for much greater chance of finding a rename if it "finds copies/renames a bit
harder".
(Note that adding this to the diff machinery would fix both issues—git diff
not seeing the rename, and git merge not seeing the rename—all at once.)
* * *
1By "trees" here I mean full file trees, rather than git's `tree` objects.
2More specifically, this is the case for a two-parent merge. Octopus merges
are handled differently. I have not dug into the innards of octopus merges and
can't really say anything more about those.
3The merge-base depends on the two (or more) commits to be merged, and to
complicate things, with the default (`recursive`) strategy, if there are
multiple merge-base candidates, git computes a "virtual merge base", which is
not necessarily the same as any actual commit. The details are not something I
can explain properly here: I know the general idea but not the specifics
within git, and in any case it's rarely important and not directly relevant to
your issue. There's a fairly nice example
[here](http://codicesoftware.blogspot.com/2011/09/merge-recursive-
strategy.html), if you want to read more, although the example uses some
rather Clearcase-like terminology.
|
pycrypto random not supported on GAE?
Question: I have deployed an App Engine application that uses pycrypto. I installed
pycrypto locally, but when I deploy to App Engine it says:
TargetAppError: Traceback (most recent call last):
File "/base/data/home/apps/s~shared-playground/55de226e3bc6746b0c2a029d52be624810ea0d14.376065013735366090/mimic/__mimic/target_env.py", line 968, in RunScript
loader.load_module('__main__')
File "/base/data/home/apps/s~shared-playground/55de226e3bc6746b0c2a029d52be624810ea0d14.376065013735366090/mimic/__mimic/target_env.py", line 316, in load_module
return self.env.LoadModule(self, fullname)
File "/base/data/home/apps/s~shared-playground/55de226e3bc6746b0c2a029d52be624810ea0d14.376065013735366090/mimic/__mimic/target_env.py", line 725, in LoadModule
exec(code, module.__dict__) # pylint: disable-msg=W0122
File "helloworld.py", line 2, in <module>
from pycrypto import Random
ImportError: No module named pycrypto
I have the following app.yaml:
application: my-app-id
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /favicon\.ico
static_files: favicon.ico
upload: favicon\.ico
- url: /.*
script: helloworld.app
libraries:
- name: webapp2
version: "2.5.2"
- name: pycrypto
version: "2.6"
My code is as follows:
import webapp2
from Crypto.Cipher import AES
from Crypto import Random
from google.appengine.api import users
class MainPage(webapp2.RequestHandler):
def get(self):
user = users.get_current_user()
if user:
self.response.headers['Content-Type'] = 'text/plain'
iv = Random.new().read(AES.block_size)
key = b'Sixteen byte key'
cipher = AES.new(key, AES.MODE_CFB, iv)
msg = iv + cipher.encrypt(b'Attack at dawn')
self.response.out.write('Hello, '+ msg + ': ' + user.nickname())
else:
self.redirect(users.create_login_url(self.request.uri))
app = webapp2.WSGIApplication([
('/', MainPage)
], debug=True)
The cause of the error seems fairly simple: there is no module named pycrypto.
However, the following
[thread](http://stackoverflow.com/questions/12504489/pycrypto-and-google-app-
engine) suggests there is. What is the cause of this error then? Please advise,
thanks.
Answer: App Engine provides third-party libraries in its sandbox; find the link[1]
below for the 3rd-party libraries supported by App Engine. Also, you can try
changing the version to **"latest"** instead of 2.6 in app.yaml. Note as well
that the traceback shows `from pycrypto import Random`: the library is declared
as `pycrypto` in app.yaml, but in code it must be imported as `Crypto`, as in
the handler shown above.
|
How to multiply two columns of a data file and add them in python?
Question: I want to multiply the contents of two columns of a file generated from a bash
script and then add the products up to get the summation. The problem is the file
contains a special character in the last column which I cannot get rid of.
How do I read the following file, multiply col. 3 by col. 5, and then add
the products up? I want to get sum = 4*821+3*28+4*1+6*1+5*13 ...
    0 0 4 0 821.00 95.02%
    0 0 3 0 28.00 3.24%
    0 1 4 0 1.00 0.12%
    0 0 6 0 1.00 0.12%
    0 0 5 0 13.00 1.50%
Normally I could have done it by
flist = glob.glob(filename)
fdata = []
for f in flist:
load = np.loadtxt(f)
fdata.append(load)
fdata_arry=np.array(fdata)
print fdata_arry
c = fdata_arry[:,2]*fdata_arry[:,4]
d = np.sum(c)
but in this case the last col. has a % sign which makes it difficult to read.
Is there a way to ignore the last column? This should be pretty basic but I'm
just a new programmer so your help is much appreciated. Thanks!
Answer:
    import csv

    dat = open('dat', 'r')
    reader = csv.reader(dat, delimiter=' ')
    prod = 0
    for row in reader:
        # multiply column 3 (index 2) by column 5 (index 4); the trailing
        # %-column is simply never converted
        prod += float(row[2]) * float(row[4])
    print prod
    # 3443.0
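Alternatively, since the question started from `np.loadtxt`: its `usecols` argument selects (and converts) only the named columns, so the troublesome %-column is never parsed and the original numpy approach works unchanged. A sketch, assuming the column layout shown above:

    import numpy as np

    arr = np.loadtxt('dat', usecols=(2, 4))  # only col. 3 and col. 5 are read
    print np.sum(arr[:, 0] * arr[:, 1])      # 3443.0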
|
update user installed packages with pip
Question: I'm using a bunch of python packages for my research that I install in my home
directory using the `--user` option of pip. There are also some packages that
were installed by the package manager of my distribution for other things. I
would like to have a pip command that only upgrade the packages I installed
myself with the `--user` option.
I tried the recommended version `pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U`, but this seems to work only inside virtualenvs. `pip freeze --local` shows packages that are installed for my user and system-wide.
Is there a way to upgrade only the packages installed locally for my user?
Answer: I've come across this problem as well and created the following script. The
script updates any packages found by pip and which reside within the user's
home directory. Usually, this should be the packages that were installed with
`pip install --user`.
#!/usr/bin/env python
# Starting point for this script:
# http://stackoverflow.com/a/5839291
import os
import pip
import subprocess
homedir = os.getenv('HOME')
homepkg = []
for dist in pip.get_installed_distributions():
if not dist.location.startswith(homedir):
continue
homepkg.append(dist.project_name)
if len(homepkg) == 0:
print('No locally-installed packages, nothing to update.')
raise SystemExit
# --no-deps is required because --upgrade by default is recursive and would try
# to update packages that are not from homedir (e.g. from /usr).
subprocess.call(['pip', 'install', '--user', '--upgrade', '--no-deps'] + homepkg)
|
How can I parallel parsing in python?
Question: I have the following code which converts a graph from an edge list to an adjacency
matrix:
for line in open('graph.txt'):
converted = [sparse_to_dense.get(int(ID)) for ID in line.split()]
i = converted[0]
j = converted[1]
I.append(i)
J.append(j)
n = max([max(I), max(J)]) + 1
data = [1]*len(I)
return coo_matrix((data, (I,J)), shape=(n,n), dtype='i1')
This code is awfully slow -- on my machine, conversion of 500k edges takes
hours. On the other hand, I/O is obviously not the bottleneck (I can read the full
file into memory almost instantaneously), so I think there is room for
parallelism. But I'm not sure how to proceed: should I read the file in parallel
or something?
Answer: One way to do it is with multiprocessing; a sketch along the lines below (not
checked against real data and could be further improved; `sparse_to_dense` and
`fname` are assumed to come from the surrounding code, as in the question):
    import multiprocessing

    from scipy.sparse import coo_matrix

    NUM_WORKERS = 4

    class Worker(multiprocessing.Process):
        def __init__(self, queue, results):
            multiprocessing.Process.__init__(self)
            self.q = queue
            self.results = results

        def run(self):
            while True:
                item = self.q.get()      # blocking get
                if item is None:         # sentinel: no more work
                    break
                lineno, linecontents = item
                converted = [sparse_to_dense.get(int(ID)) for ID in linecontents.split()]
                self.results.put((converted[0], converted[1]))

    def main():
        q = multiprocessing.Queue()
        results = multiprocessing.Queue()
        num_lines = 0
        for i, l in enumerate(open(fname)):
            q.put((i, l))
            num_lines += 1
        for _ in xrange(NUM_WORKERS):
            q.put(None)                  # one sentinel per worker
        workers = []
        for _ in xrange(NUM_WORKERS):
            w = Worker(q, results)
            w.start()
            workers.append(w)
        I, J = [], []
        for _ in xrange(num_lines):      # exactly one result per input line
            i, j = results.get()
            I.append(i)
            J.append(j)
        for w in workers:
            w.join()
        n = max([max(I), max(J)]) + 1
        data = [1] * len(I)
        return coo_matrix((data, (I, J)), shape=(n, n), dtype='i1')
|
Failure prediction from sensor data using Machine Learning
Question: I am going to do a research project which involves predicting imminent failure
of an engine using time data obtained from sensors. The data basically
contains the readings of various embedded sensors every 10 minutes for many
months. Such data is available for about 100 or so different units (all are
the same engine model), along with the time of failure.
While I do have a reasonably good understanding of Machine Learning, I am at a
loss of approaching this. I have done a few projects that involved static
datasets (using SVMs, Neural Nets, Logistic Regression etc.) and even one on
predicting time series. But this is quite different. While the project
involves time data, it is hardly a matter of predicting the future values.
Rather it is a case of anomaly detection on sequential time data.
Please could you give some ideas as to how I could approach it? I'm
particularly interested in Neural Networks/ Deep Learning, so any ideas on
using them for this task would also be welcome. I would prefer to use Python
or R, although I would be open to using something else if it was particularly
geared for this sort of task. Also, could you give me some formal terms I
could use to search for relevant literature?
Thanks
Answer: As a general comment, try hard to express everything that you know about the
physical system in a model, then use that model for inference. I worked on
such problems in my dissertation: [Unified Prediction and Diagnosis in
Engineering Systems by means of Distributed Belief
Networks](http://riso.sourceforge.net/docs/dodier-dissertation.pdf) (see
chapter 6). I can say more if you provide additional details about your
problem domain.
Don't expect general machine learning models (neural networks, SVM, etc) to
figure out the structure of the problem for you. Having the right form of the
model is much, much more important than having a general model + lots of data
-- this is the summary of my experience.
|
Python program to search videos using Bing
Question: I have been trying to search videos using the Bing search engine, but every time I
try I get the error HTTPError: HTTP Error 403: Forbidden
import urllib
import urllib2
import json
def main():
query = "'pyscripter'"
print bing_search(query, 'Video')
def bing_search(query, search_type):
#search_type: Web, Image, News, Video
key= 'LsE7jElMmTDfbrnCEmrCmCEBbaPxMG5BvKr9CsfmSNS'
query = urllib.quote(query)
#create credential for authentication
user_agent = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; FDM; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 1.1.4322)'
credentials = (':%s' % key).encode('base64')[:-1]
auth = 'Basic %s' % credentials
url = 'https://api.datamarket.azure.com/Data.ashx/Bing/Search/'+search_type+'?Query=%27'+query+'%27&$top=5&$format=json'
request = urllib2.Request(url)
request.add_header('Authorization', auth)
request.add_header('User-Agent', user_agent)
request_opener = urllib2.build_opener()
response = request_opener.open(request)
response_data = response.read()
json_result = json.loads(response_data)
result_list = json_result['d']['results']
print result_list
return result_list
if __name__ == '__main__':
main()
The error shown is:
Traceback (most recent call last):
File "<module1>", line 30, in <module>
File "<module1>", line 7, in main
File "<module1>", line 22, in bing_search
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
Before trying this I worked with the YouTube search API, which worked fine. The
only problem was that it was limited to the videos present in the YouTube
database. What I want is a list of the URLs of all the videos related to the
keyword anywhere on the internet. So I started with the Bing search engine. Any help
regarding this would be appreciated.
Answer: I had the same issue.
A web server may return a 403 Forbidden HTTP status code in response to a
request from a client for a web page or resource to indicate that the server
can be reached and understood the request, but refuses to take any further
action. Status code 403 responses are the result of the web server being
configured to deny access, for some reason, to the requested resource by the
client.
In my case, I forgot to activate the "Bing Search" subscription, so go to
<https://datamarket.azure.com/dataset/bing/search> and activate the "Bing
Search" subscription.
|
aws crontab not working using subprocess
Question: Crontab is running, but it is not executing the aws CLI command. I am using
Python and subprocess.Popen.
import subprocess
does not work...
proc = subprocess.Popen("aws rds describe-db-instances > /tmp/testoutput.txt", stdout=subprocess.PIPE, shell=True)
does work...
proc = subprocess.Popen("echo $(date) > /tmp/testoutput.txt", stdout=subprocess.PIPE, shell=True)
Same user and same permissions, and the same permissions on the AWS credentials;
`aws rds describe-db-instances` works from the command line.
Answer: You can add the credentials to your environment variables or create a
configuration file; the AWS CLI will automatically locate the credentials.
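A minimal sketch of making the cron job independent of the login environment; the key names are the standard AWS variables, and the values here are placeholders:

    import os
    import subprocess

    env = dict(os.environ)
    env.update({
        'AWS_ACCESS_KEY_ID': '<your-key-id>',          # placeholder
        'AWS_SECRET_ACCESS_KEY': '<your-secret>',      # placeholder
        'PATH': '/usr/local/bin:/usr/bin:/bin',        # cron's PATH often lacks the aws binary
    })
    proc = subprocess.Popen("aws rds describe-db-instances > /tmp/testoutput.txt",
                            stdout=subprocess.PIPE, shell=True, env=env)
    proc.communicate()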
|
Python : Searching a directory for files in a list
Question: ## Context
I have a list of filenames (many tens of thousands) that I would like to
locate within a directory. All files that are located must be copied
to a single output folder.
Using Python, what would be, in your opinion, the most efficient* strategy
for doing this? I'm not looking for a solution, but a good strategy to get
started.
## To break this down:
* How the list of filenames should be stored and the method of searching if a filename is in the list?
* How to go through an entire directory, folder by folder, and consider each file in turn?
* How to copy the file (least processing time)?
## Caveat
*efficient in the sense that the script execution should not 'hog' the system's resources; other, more important applications may be running concurrently.
Many thanks!
Answer:
import os
import shutil
filenames_i_want = set() # fill this with the filenames you want
dest_dir = 'whatever'
src_dir = 'whatever'
for (dirpath, dirnames, filenames) in os.walk(src_dir):
for fname in filenames:
if fname in filenames_i_want:
shutil.copy(os.path.join(dirpath, fname), dest_dir)
If this proves too slow [use a
profiler](https://docs.python.org/2/library/profile.html) to figure out the
slow parts and optimize from there.
If you find that `shutil.copy` is slow, refer to ["Python copy larger file too
slow"](http://stackoverflow.com/questions/21799210/python-copy-larger-file-
too-slow).
|
Unable to receive more than 20 MQTT messages using Mosquitto/Paho for Python
Question: I'm using the Mosquitto (now Paho) python MQTT client to connect to a HiveMQ
broker. The code is very basic, as taken from their documentation here -
<https://pypi.python.org/pypi/paho-mqtt>
#SUBSCRIBER
import paho.mqtt.client as mqtt
# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, rc):
client.subscribe("GB/node0/", 2)
# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
print "Topic: ", msg.topic+'\nMessage: '+str(msg.payload)
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.15.4", 1883, 60)
# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
client.loop_forever()
As you notice in client.subscribe(), the QoS is 2. This is the only
modification to the official documentation's code.
The publishing client's code is-
#PUBLISHER
import paho.mqtt.client as mqtt
mqttc = mqtt.Client("python_pub")
mqttc.connect("192.168.15.4", 1883, 60)
mqttc.publish("GB/node0/", "Hello, World baby!",2)
Here also, the QoS is 2.
This modification of QoS results in only 20 messages being received by the
subscriber. Upon further probing, I realized the problem is probably due to
`max_inflight_messages_set()`, which is an option function which sets the
maximum number of messages with QoS>0 that can be part way through their
network flow at once. The default is 20.
However, changing it to any other value does not work. Also, why does the
client think these messages are still inflight when they've been received? How
do I resolve the problem? How do I ensure that the client understands these
messages are not "inflight" and have been delivered?
Answer: Try calling `mqttc.loop(2, 10)` after the `mqttc.publish()` in the publisher so
the publisher can handle the QoS 2 acknowledgement from the broker that it has
received the publish.
The 2-second timeout and the 10 packets are probably more than is needed, but it
should work.
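Applied to the publisher above, that would look roughly like this:

    import paho.mqtt.client as mqtt

    mqttc = mqtt.Client("python_pub")
    mqttc.connect("192.168.15.4", 1883, 60)
    mqttc.publish("GB/node0/", "Hello, World baby!", 2)
    mqttc.loop(2, 10)  # let the client finish the QoS 2 handshake before exiting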
|
Python IRC bot disconnecting after 3 pings
Question: So I have coded a small Twitch IRC bot, but it's getting disconnected: the bot
just keeps ponging the pings, and after 3 pongs my bot receives 0 data from Twitch
and disappears from the viewer list.
Here is the code (the important part):
    readbuffer = ""
    while 1:
        readbuffer = readbuffer + s.recv(4000)
        temp = string.split(readbuffer, "\n")
        readbuffer = temp.pop()
        for line in temp:
            print line
            words = line.rstrip().split()
            # check the first token of the line for the PING command
            if words and words[0] == "PING":
                s.sendall("PONG %s\r\n" % words[1])
It's a function that is deployed as a thread 2 times with different arguments.
The thing is, I see the 2 bots on the twitch.tv viewer list at first for around 5
minutes, then after exactly 3 pings Twitch no longer pings or sends anything.
Ask me for more code if you'd like more information, please.
Answer: Maybe unrelated:
Interpret the socket as a file:
<https://docs.python.org/2/library/socket.html#socket.socket.makefile>
    f = s.makefile()
    for line in f:
        print 'Read:', line
        command, arguments = line.rstrip().split(' ', 1)
        if command == 'PING':
            f.write('PONG ' + arguments + '\r\n')
            f.flush()  # the file wrapper is buffered; flush so the PONG is actually sent
That makes so many things so much easier. Please try that and comment if the
problem persists.
|
Adding tuples of lists to a list in python
Question: I don't understand the syntax of adding tuples whose elements are also
elements of a list to another list that encompasses all of the information.
I'm trying to create a trajectory list that contains tuples of flight data of
a projectile during flight. I want to use tuples so that I can see all of the
information for each moment in time.
import random
import math
gg = -9.81 # gravity m/s**2
tt = 10 **-9 # seconds
wind = random.randint(-10,10) # m/s
#position
x=random.randint(0,100) # m/s
#projectile
v0 = float(raw_input('Enter the initial velocity (m/s) -> '));
theta = float(raw_input('Enter the initial launch angle of projectile (degrees) -> '));
theta *= (180/3.14159265359)
xx = [x]
yy = [.000000000000000000001]
dz =[v0]
time = [0];
data = ( time, xx, yy, dz)
traj = [data]
while yy[-1] >0:
traj.append ( math.sqrt( (traj[-1][3][-1] * math.sin(theta) + gg * tt)** 2+ (traj[-1][4] * math.cos(theta) -wind) ** 2 )) # velocity
traj.append ( traj[-1][2][-1] + dz[-1] * math.sin(theta) * tt + .5* gg * tt) # y position
traj.append ( traj[-1][1][-1] * dz[-1] * math.cos(theta) - wind * tt) # x position
traj.append ( traj[-1][0][-1] + tt) # time
print traj
Edit: I would input integers for the initial angle and velocity (i.e.-45,45).
Expected outputs would be a list of tuples containing four elements
corresponding to the time, x coordinate, y coordinate, and velocity,
respectively. Currently, I'm receiving a tuple index out of range error.
Answer: Where you have
traj[-1][4]
in your first `traj.append` line, `traj[-1]` is `data`, and `data` is only
four elements long, so the last item is at index 3.
|
How to mock an object returned by a mocked object?
Question: I don't quite have the hang of Python mocks.
Clearly I don't want my test code to call the actual method `requests.post()`
in the following method, so I want to mock its behavior instead:
def try_post(self, url, body):
r = requests.post(url, data=body)
msg = str(r.status_code) + " " + r.content + "\n"
if r.status_code >= 300:
sys.stderr.write("Error: POST returned " + msg)
My question: How do I mock the object returned by `requests.post()`, i.e. the
response object?
For example, I'd like to write one test where r.status_code is 200 and another
where r.status_code is 300 so I can test the conditional logic. Also, I'd need
to mock r.content to return some string.
My non-working code follows:
from monitor_writer import MonitorWriter
import mock
import unittest
class TestMonitorWriter(unittest.TestCase):
@mock.patch('monitor_writer.requests')
def test_conforming_write(self, mock_requests):
xml_frag = """
<InstantaneousDemand>
</InstantaneousDemand>
"""
mock_requests.status_code.return_value = 200
mock_requests.content.return_value = "OK"
writer = MonitorWriter()
writer.update(xml_frag)
self.assertTrue(mock_requests.post.called, "Failed to call requests.post")
This test fails with `TypeError: expected a character buffer object` because
r.status_code and r.content evaluate to mock.MagicMock objects, not strings,
and the try_post() method is attempting to concatenate them.
Answer: You want to mock `requests.post` directly, rather than the entire `requests`
module:
class TestMonitorWriter(unittest.TestCase):
@mock.patch('monitor_writer.requests.post')
def test_conforming_write(self, mock_post):
xml_frag = """
<InstantaneousDemand>
</InstantaneousDemand>
"""
response = mock.MagicMock()
response.status_code = 200
response.content = "OK"
mock_post.return_value = response
writer = MonitorWriter()
writer.update(xml_frag)
self.assertTrue(mock_post.called, "Failed to call requests.post")
Once we've mocked the function you're actually calling, we create a mock
response object, set `status_code` and `content` on the mocked response to the
desired values, and finally assign the mocked response to the `return_value`
of our mocked `post` function.
|
How could I get square root of a symbol in python?
Question: I am using math.sqrt in Python to compute the square root of something, but
that "something" is a symbol instead of a known value. I am using that
"something" as an intermediate variable for later use.
import math
from math import sqrt
x = Symbol('x')
y = math.sqrt(x)
print(y)
However, I get the error message
File "/Library/Python/2.7/site-packages/sympy/core/expr.py", line 207, in __float__
raise TypeError("can't convert expression to float")
TypeError: can't convert expression to float
It looks like there are no other packages for me to compute a square root with except
"math.sqrt"; does anyone know how I could get around this problem?
Answer: When using `sympy`, one should use the functions in `sympy` that operate on
symbols, rather than the `math.*` functions that operate on floating point
numbers, e.g.
from sympy import *
x = Symbol('x')
y = sqrt(x)
print(y)
In this case, the code is using `sympy`'s `sqrt()` function.
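And once a concrete value for x is available later on, the symbolic expression can be turned into a number with subs/evalf, or compiled into a plain numeric function with lambdify:

    from sympy import Symbol, sqrt, lambdify

    x = Symbol('x')
    y = sqrt(x)
    print(y.subs(x, 2).evalf())  # 1.41421356237310
    f = lambdify(x, y)           # a plain callable for repeated numeric use
    print(f(2.0))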
|
pytest fixtures doesn't work - why?
Question: I'm trying to use PyTest and I cannot work out how to set up fixtures. I've tried
the following code:
import pytest
import random
@pytest.fixture()
def setup():
a = random.randint(0,10)
def test(setup):
assert 3 > a
if __name__ == '__main__':
pytest.main()
And I am getting `"NameError: name 'a' is not defined".`
Also, the example from the official documentation doesn't work. What's wrong? I need
functionality similar to **setUp/tearDown**, but I don't want to use unittest.
Can someone provide an example with working fixtures (both the setUp type and the
tearDown type)? I want to write some tests as functions and some tests as
methods inside classes, so my second question asks for a working example
of using fixtures with classes/methods. I just need to see working examples of
fixtures in Python.
Is there a different **python3** unit testing framework with assertions as
simple as in **PyTest**?
Answer: Fixtures don’t work like this. They cannot magically transfer the name `a`
from one function’s (`setup`) local scope to another’s (`test`). Instead, your
`setup` function must explicitly return the object that will be passed as the
`setup` argument to your `test` function. For example:
import pytest
import random
class TestSetup:
def __init__(self):
self.a = random.randint(0, 10)
@pytest.fixture()
def setup():
return TestSetup()
def test(setup):
assert 0 <= setup.a <= 10
if __name__ == '__main__':
pytest.main()
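The question also asked for tearDown-style behaviour; a minimal sketch using pytest's finalizer mechanism (the fixture receives the built-in `request` object and registers a cleanup callback, reusing the `TestSetup` class from above):

    import pytest

    @pytest.fixture()
    def setup(request):
        obj = TestSetup()              # the setUp part
        def teardown():
            pass                       # the tearDown part: release resources here
        request.addfinalizer(teardown)
        return obj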
|
How to set a environment variable in the current shell with Python?
Question: I want to set an environment variable with a
[Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29)
script, influencing the shell I am starting the script in. Here is what I mean
python -c "import os;os.system('export TESTW=1')"
But the command
echo ${TESTW}
returns nothing. Also with the expression
python -c "import os;os.environ['TEST']='1'"
it does not work.
Is there another way to do this in the direct sense? Or is it better to write
the variables in a file which I execute from 'outside' of the Python script?
Answer: You can influence the environment via
[putenv](https://docs.python.org/2/library/os.html#os.putenv), but it will not
influence the caller's environment, only the environment of forked children. It's
really much better to set up the environment before launching the Python script.
One variant I can propose: you create a bash script and a Python script. In the
bash script you call the Python script with parameters, one parameter per
environment variable. E.g.:
#!/bin/bash
export TESTV1=$(python you_program.py testv1)
export TESTV2=$(python you_program.py testv2)
and `you_program.py testv1` returns the value for just that one environment variable.
|
Updating data with bulkloader
Question: I'm using a Python script to upload some data to my App Engine backend.
Here is it's definition in bulkloader.yaml
- kind: Subcategory
connector: csv
connector_options:
encoding: utf-8
property_map:
- property: __key__
external_name: id
export_transform: transform.key_id_or_name_as_string
import_transform: transform.none_if_empty(int)
- property: name
external_name: name
- property: categoryId
external_name: categoryId
export_transform: transform.key_id_or_name_as_string
import_transform: transform.none_if_empty(int)
- property: language
external_name: language
- property: active
external_name: active
import_transform: bool
The problem is that the active column changes dynamically later, and the next
time I upload the same data it is replaced with false because the
column doesn't exist in the CSV.
I tried removing the column from the bulkloader config, but then the column just
disappears. That's probably because entities are replaced rather than updated. Is
there a way to preserve already existing columns without replacing/deleting
them when uploading?
Answer: Sorry, there is no easy way to merge properties with the bulk loader. As you
noticed, it's creating entities based on your CSV data, then storing them with
known keys, overwriting existing entities. To accomplish a merge, you'll need
to bulk download the existing entities, perform the merge in your dataset,
then load the merged entities.
There are hooks for doing fancy things during the loader logic which might
help implement a merge tool. See this related SO question: [Merge multiple
columns in bulkloader](http://stackoverflow.com/questions/11261081/merge-
multiple-columns-in-bulkloader) But you'd still need to fetch the properties
to merge from the datastore somehow. The datastore can't perform the merge on
its own.
|
Python ElementTree: How to add SubElement at VERY specific position?
Question: I want to add a subelement to an xml file, but in a very specific position,
not appended to the end.
The standard way is:
subi = ET.SubElement(root[0][0], 'subi')
which is fine.
but: Let's say, root[0][0] already has two children, hence accessible via
root[0][0][0] and root[0][0][1].
And I want "subi" to become the new middle child, root[0][0][1], making the
original second child become the third child root[0][0][2].
Is there a way to do that? (My experiences with life and nature would say no,
but I have high hopes for python=)
Answer: You can use
[`Element.insert`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.insert)
method. It allows you to specify an index.
For example, to insert before the 3rd (index: 2) element:
>>> import xml.etree.ElementTree as ET
>>>
>>> root = ET.fromstring('''
... <root>
... <first></first>
... <second></second>
... <third></third>
... </root>
... ''')
>>>
>>> new = ET.Element('new')
>>> root.insert(2, new) # <-----------
>>> print(ET.tostring(root))
<root>
<first />
<second />
<new /><third />
</root>
|
No module named linear_modelsklearn._model
Question: from sklearn.linear_modelsklearn._model import SGDClassifier
No module named linear_modelsklearn._model
I am on OSX version 10.9.4
Python 2.7.6
numpy 1.9.0
scipy 0.14.0
scikit-learn 0.15.2
What I am missing? Thanks.
(import sklearn works fine but not the model)
Answer: Your import seems wrong, try:
from sklearn.linear_model import SGDClassifier
|
Can a mock side_effect iterator be reset after it has been exhausted?
Question: `mock.reset_mock()` will not reset a side effect iterator. Is there a way to
do this without creating the mock again?
>>> from mock import MagicMock
>>> mock = MagicMock(side_effect = [1,2])
>>> mock(), mock()
(1, 2)
>>> mock()
Traceback (most recent call last):
File "<pyshell#114>", line 1, in <module>
mock()
File "C:\Python27\Lib\site-packages\mock.py", line 955, in __call__
return _mock_self._mock_call(*args, **kwargs)
File "C:\Python27\Lib\site-packages\mock.py", line 1013, in _mock_call
result = next(effect)
StopIteration
>>> mock.reset_mock()
>>> mock()
Traceback (most recent call last):
...
StopIteration
>>> mock = MagicMock(side_effect = [1,2])
>>> mock(), mock()
(1, 2)
>>>
The intent is to re-use the mock in subsequent tests but I suspect that, like
a generator, it cannot be _restarted_.
So (better late than never) after being pointed in the right direction I
looked in `mock.py` and found that `side_effect` is an iterator object (which
cannot be _reset_ once exhausted):
def __set_side_effect(self, value):
value = _try_iter(value)
...
def _try_iter(obj):
...
try:
return iter(obj)
except TypeError:
# XXXX backwards compatibility
# but this will blow up on first call - so maybe we should fail early?
return obj
and `def reset_mock()` does not address the side effect.
Answer: As user2357112 commented, reassigning `side_effect` will solve your problem.
>>> from mock import MagicMock
>>>
>>> lst = [1, 2]
>>> mock = MagicMock(side_effect=lst)
>>> mock(), mock()
(1, 2)
>>> mock.side_effect = lst # <-------
>>> mock(), mock()
(1, 2)
|
Python eval function with numpy arrays via string input with dictionaries
Question: I am implementing code in Python with the variables stored in numpy vectors.
I need to perform a simple elementwise operation, something like
(vec1+vec2^2)/vec3, where elements are summed and multiplied position by
position (the analog of MATLAB's elementwise .* operation).
The problem is that in my code I have a dictionary which stores all the vectors:
var = {'a':np.array([1,2,2]),'b':np.array([2,1,3]),'c':np.array([3])}
The third vector is just one number, which should multiply each element of the
other arrays, like 3*[1,2,3]. At the same time I have a formula which is
provided as a string:
formula = '2*a*(b/c)**2'
I am replacing the formula using Regexp:
formula_for_dict_variables = re.sub(r'([A-z][A-z0-9]*)', r'%(\1)s', formula)
which produces result:
2*%(a)s*(%(b)s/%(c)s)**2
and substitute the dictionary variables:
eval(formula%var)
When I have just pure numbers (not numpy arrays) everything works, but when I
place numpy arrays in the dict I receive an error.
1. Could you give an example of how I can solve this problem, or maybe suggest a different approach, given that the vectors are stored in a dictionary and the formula is a string input?
2. I can also store the variables in any other container. The problem is that I don't know the names of the variables or the formula before the code runs (they are provided by the user).
3. I also think iterating through each element of the vectors will probably be slow, given that Python for loops are slow.
Answer: Using [numexpr](https://github.com/pydata/numexpr), you could do this:
In [143]: import numexpr as ne
In [146]: ne.evaluate('2*a*(b/c)**2', local_dict=var)
Out[146]: array([ 0.88888889, 0.44444444, 4. ])
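If you'd rather avoid the dependency, note that the string substitution step is what breaks with arrays: `formula % var` splices the `repr` of each array into the string, which is not valid syntax. You can instead hand the dict to `eval` as its namespace, since numpy operators are already elementwise - a minimal sketch:
    import numpy as np
    var = {'a': np.array([1, 2, 2]), 'b': np.array([2, 1, 3]), 'c': np.array([3.0])}
    # The empty __builtins__ dict limits what a user-supplied formula can reach
    result = eval('2*a*(b/c)**2', {'__builtins__': {}}, var)
This stays vectorized (no Python-level loop over elements), though numexpr will typically be faster on large arrays.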
|
TypeError: 'str' does not support the buffer interface in 3.4.1
Question: I've created a server in Python. The code is:
from socket import *
serverSocket = socket(AF_INET, SOCK_STREAM)
localHost = # my address
port = 5369
serverSocket.bind( (localHost, port) )
serverSocket.listen(1)
while True:
print ("Ready to serve.....")
connectionSocket, addr = serverSocket.accept()
try:
message = connectionSocket.recv(1024)
filename = message.split()[1]
f = open(filename[1:])
outputdata = f.read()
for i in range(0, len(outputdata) ):
connectionSocket.send(outputdata[i])
connectionSocket.close()
except IOError:
connectionSocket.send('404 Not Found')
connectionSocket.close()
serverSocket.close()
I executed the file. It gives an error:
Traceback (most recent call last):
File:"D;\.....\server.py", line 25, in <module>
connectionSocket.send(outputdata[i])
TypeError: 'str' does not support the buffer interface.
After scanning through similar queries on Stack Overflow, I found out that from
Python 3.0 on, the data needs to be converted to bytes. If that's the case,
then where and how should I change/parse the data?
Answer: You opened your file in _text mode_ , which means that the contents are
decoded to unicode strings.
When writing to the socket you need to write `bytes` instead. You can easily
get bytes from a file by opening it in binary mode instead:
with open(filename[1:], 'rb') as f:
connectionSocket.send(f.read())
Note that I send the data to the socket _in one go_ ; no need to send the data
byte by byte here.
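Note that the `except IOError` branch has the same problem: `'404 Not Found'` is a `str`. Encode it, or use a bytes literal, before sending:
    connectionSocket.send(b'404 Not Found')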
|
Serve static files with openshift and python
Question: I've set up [rawdog](http://offog.org/code/rawdog/) in an OpenShift
application and installed the Python and cron cartridges. I get the
application to serve the generated HTML, however the css stylesheet is not
available.
I use the following wsgi.py file.
#!/usr/bin/python
import os
SERVE_FILE = os.path.join(os.environ['OPENSHIFT_DATA_DIR'], 'www', 'index.html')
virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
pass
def application(environ, start_response):
status = '200 OK'
output = open(SERVE_FILE, 'r').read()
response_headers = [('Content-type', 'text/html'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
if __name__ == '__main__':
from wsgiref.simple_server import make_server
httpd = make_server('localhost', 8051, application)
# Wait for a single request, serve it and quit.
httpd.handle_request()
The css file should be in the same 'folder' as index.html,
<link rel="stylesheet" href="style.css" type="text/css">
The css file is in `~/app-root/repo/wsgi/static/css/style.css`. I also have
a`~/app-root/repo/wsgi/.htaccess` file that contains:
RewriteEngine on
RewriteRule ^/style.css static/css/style.css [R]
So far it doesn't seem to be working. Any ideas on how I can make static
content available on OpenShift using Apache?
Answer: Your `<link rel="stylesheet" href="style.css" type="text/css">` should
probably look like `<link rel="stylesheet" href="static/css/style.css"
type="text/css">`
|
Syntax error for down()?
Question: I'm a beginner computer science student and using python in computer science
1. My assignment is to write a program that creates a spirograph. I think that
the code is all right to do that, but when I run it, an error message pops up
that says syntax error and it highlights `down()`, which is a common turtle
command. I have no idea why. It said syntax error for `main()`, but then I
restarted python and now it says there's an error in `down()`. Here's the
code:
from turtle import *
from math import *
def xValue(R,r,p,t):
x=(R-r)*cos(t)-(r+p)*cos((R-r)/r*t)
def yValue(R,r,p,t):
y=(R-r)*sin(t)-(r+p)*sin((R-r)/r*t)
def initialPosistion():
t=2*pi
up()
goto(xValue(R,r,p,t),yValue(R,r,p,t)
down()
def iterating(R,r,p):
t = 2*pi
while t < 0:
t = t-0.01
goto(xValue(R,r,p,t),yValue(R,r,p,t)
up()
def main():
R = 100
r = 4
p = int(input("Enter p(10-100): "))
if p < 10 or p > 100:
input("Incorrect value for p!")
iterating(R,r,p)
input("Hit enter to close...")
main()
Answer: You missed a closing `)` at the end of this line:
    goto(xValue(R,r,p,t),yValue(R,r,p,t))
The `goto` call inside `iterating` is missing its closing `)` in the same way
and needs the same fix.
|
Importing correctly with pytest
Question: I just got set up to use pytest with Python 2.6. It has worked well so far
with the exception of handling "import" statements: I can't seem to get pytest
to respond to imports in the same way that my program does.
My directory structure is as follows:
src/
main.py
util.py
test/
test_util.py
geom/
vector.py
region.py
test/
test_vector.py
test_region.py
To run, I call `python main.py` from src/.
In main.py, I import both vector and region with
from geom.region import Region
from geom.vector import Vector
In vector.py, I import region with
from geom.region import Region
These all work fine when I run the code in a standard run. However, when I
call "py.test" from src/, it consistently exits with import errors.
* * *
## Some Problems and My Solution Attempts
My first problem was that, when running "test/test_foo.py", py.test could not
"import foo.py" directly. I solved this by using the "imp" tool. In
"test_util.py":
import imp
util = imp.load_source("util", "util.py")
This works great for many files. It also seems to imply that when pytest is
running "path/test/test_foo.py" to test "path/foo.py", it is based in the
directory "path".
However, this fails for "test_vector.py". Pytest can find and import the
`vector` module, but it **cannot** locate any of `vector`'s imports. The
following imports (from "vector.py") both fail when using pytest:
from geom.region import *
from region import *
These both give errors of the form
ImportError: No module named [geom.region / region]
I don't know what to do next to solve this problem; my understanding of
imports in Python is limited.
**What is the proper way to handle imports when using pytest?**
* * *
## Edit: Extremely Hacky Solution
In `vector.py`, I changed the import statement from
from geom.region import Region
to simply
from region import Region
This makes the import relative to the directory of "vector.py".
Next, in "test/test_vector.py", I add the directory of "vector.py" to the path
as follows:
import sys, os
sys.path.append(os.path.realpath(os.path.dirname(__file__)+"/.."))
This enables Python to find "../region.py" from "geom/test/test_vector.py".
This works, but it seems extremely problematic because I am adding a ton of
new directories to the path. What I'm looking for is either
1) An import strategy that is compatible with pytest, or
2) An option in pytest that makes it compatible with my import strategy
So I am leaving this question open for answers of these kinds.
Answer: **_import_** looks in the following directories to find a module:
1. The **home directory** of the program. This is the directory of your root script. When you are running pytest, the home directory is where pytest is installed (/usr/local/bin probably). It doesn't matter that you run it from your src directory - the location of the pytest script determines the home directory. That is why it doesn't find your modules.
2. **PYTHONPATH**. This is an environment variable. You can set it from the command line of your operating system. In Linux/Unix systems you can do this by executing: '_export PYTHONPATH=/your/custom/path_ ' If you wanted Python to find your modules from the test directory you should include the src path in this variable.
3. The **standard libraries** directory. This is the directory where all your libraries are installed.
4. There is a less common option using a **pth** file.
**_sys.path_** is the result of combining the **home directory** ,
**PYTHONPATH** and the **standard libraries** directory. What you are doing,
modifying **_sys.path_** is correct. It is something I do regularly. You could
try using **PYTHONPATH** if you don't like messing with **_sys.path_**
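As one concrete way of doing that, a minimal sketch (assuming the `src/` layout from the question) is to put the path tweak in a `conftest.py` at the top of your source tree, which pytest executes before collecting tests:
    # src/conftest.py
    import os
    import sys
    # Make src/ importable so 'from geom.region import Region' resolves
    # under py.test just as it does when running main.py from src/.
    sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))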
|
Convert markdown image format to html
Question: I am using the markdown2 library in python to convert markdown to html.
However, it does not support the common image formatting
>
> [![alt text][some link]][img src link]
>
which should convert to
<a href="some link" rel="alt text"><img src="img src link"/></a>
But I am mostly getting, from this and other markdown libraries
import markdown2
markdown2.markdown('[![hi][http://google.com]][http://example.com/png]')
u'<p>[![hi][http://google.com]][http://example.com/png]</p>\n'
What options do I have if I want to have images in my markdown files that are
eventually converted to html.
Answer: Found it in [their source code](https://github.com/trentm/python-
markdown2/blob/master/lib/markdown2.py#L1086): markdown2 follows the standard
Markdown image syntax, which is slightly different from what you are using. An
inline image is written `![alt text](image url)`, and a clickable image is a
link wrapping an image, `[![alt text](image url)](link url)`. The
reference-style form `![alt text][id]` only works when `[id]: url` is defined
elsewhere in the document.
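For a quick check with the standard inline syntax (the exact attribute order of the generated HTML may vary between versions) - something like:
    >>> import markdown2
    >>> markdown2.markdown('[![hi](http://example.com/img.png)](http://google.com)')
    u'<p><a href="http://google.com"><img src="http://example.com/img.png" alt="hi" /></a></p>\n'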
|
Making/download a csv file with python
Question: I want to make a CSV with the Closed values of stocks, by downloading them
from Yahoo Finance using python 2.7 on Windows. The file is call "Historical
Prices". I have the tickers in a list and i want to know if I can make a csv
file with the closed values in a row. For example:
AAPL,109,87,110.06,
GOOG,2123.546,213,56,
(and so on)
So far my script is this:
import urllib2
import csv
nasdaqlisted = urllib2.urlopen("ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt")
raw = nasdaqlisted.read().split("\r\n")
del raw[0]
del raw[-1]
tickerslist = []
for l in raw:
linea = l.split("|")
tickerslist.append(linea[0])
del tickerslist[-1]
def closed(tickerslist):
url = urllib2.urlopen("http://real-chart.finance.yahoo.com/table.csv?s=" +tickerslist+ "&d=8&e=13&f=2014&g=d&a=11&b=12&c=1980&ignore=.csv")
raw = url.read().split("\r\n")
del raw[0]
del raw[-1]
closed_us = open("closed-us.csv","w")
for i in tickerslist:
cierres.write(closed(i))
closed_us.close()
Thank you very much!
P.D.: I found this question, which may help: [Download a .csv file with
Python](http://stackoverflow.com/questions/21500918/download-a-csv-file-with-
python). But it doesn't work for me - "requests" cannot be imported (I guess
because I use Python 2.7 instead of 3.3).
Answer: Everything you have looks good. For a CSV file, it's generally simple enough
just to write a normal text file and put commas where needed - no need for
anything fancy. In your example:
def closed(tickerslist):
returnLine = tickerslist
url = urllib2.urlopen("http://real-chart.finance.yahoo.com/table.csv?s=" +tickerslist+ "&d=8&e=13&f=2014&g=d&a=11&b=12&c=1980&ignore=.csv")
raw = url.read().split("\r\n")[1:-1]
# if you wanted to grab the last number on each line...
for line in raw:
            returnLine += ',' + line[line.rfind(',')+1:]
return returnLine+'\n'
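To tie it together, a sketch of the write loop (this also fixes the undefined `cierres` from the question):
    tickers = ["AAPL", "GOOG"]  # or the full tickerslist built above
    with open("closed-us.csv", "w") as closed_us:
        for ticker in tickers:
            closed_us.write(closed(ticker))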
If you're interested in only a small portion of a file, the 'read' method has
an optional parameter for bytes to read. So you don't have to download the
whole file to find something at the beginning. This parameter is also helpful
if you wanted to download a larger file in a separate thread and update the
main thread (usually a gui) about the progress.
Hope this helps.
|
To close a QtGui Window
Question: I am a beginner in Python. Recently I got stuck on a problem, stated as
follows: I needed a progress bar in my app, so I googled and found similar
code. With this code, even when the progress reaches 100%, the main window
does not close (while the progress window closes). Please help me resolve
this issue.
After searching I found the following code:
from threading import *
import sys
import time
from PyQt4 import QtGui
from PyQt4 import QtCore
class QCustomThread (QtCore.QThread):
startLoad = QtCore.pyqtSignal(int)
progressLoad = QtCore.pyqtSignal(int)
statusLoad = QtCore.pyqtSignal(bool)
def __init__ (self, parentQWidget = None):
super(QCustomThread, self).__init__(parentQWidget)
self.wasCanceled = False
def run (self):
# Simulate data load estimation
numberOfprogress = 100
self.startLoad.emit(numberOfprogress)
for progress in range(numberOfprogress + 1):
# Delay
time.sleep(0.1)
if not self.wasCanceled:
self.progressLoad.emit(progress)
else:
break
self.statusLoad.emit(True if progress == numberOfprogress else False)
self.exit(0)
def cancel (self):
self.wasCanceled = True
class QCustomMainWindow (QtGui.QMainWindow):
def __init__ (self):
super(QCustomMainWindow, self).__init__()
self.loadingQProgressDialog = QtGui.QProgressDialog(self)
self.loadingQProgressDialog.setLabelText('Processing')
self.loadingQProgressDialog.setCancelButtonText('Cancel')
self.loadingQProgressDialog.setWindowModality(QtCore.Qt.WindowModal)
self.startWork()
def startWork (self):
myQCustomThread = QCustomThread(self)
def startLoadCallBack (numberOfprogress):
self.loadingQProgressDialog.setMinimum(0)
self.loadingQProgressDialog.setMaximum(numberOfprogress)
self.loadingQProgressDialog.show()
def progressLoadCallBack (progress):
self.loadingQProgressDialog.setValue(progress)
def statusLoadCallBack (flag):
print 'SUCCESSFUL' if flag else 'FAILED'
myQCustomThread.startLoad.connect(startLoadCallBack)
myQCustomThread.progressLoad.connect(progressLoadCallBack)
myQCustomThread.statusLoad.connect(statusLoadCallBack)
self.loadingQProgressDialog.canceled.connect(myQCustomThread.cancel)
myQCustomThread.start()
self.loadingQProgressDialog.hide()
def app():
myQApplication = QtGui.QApplication(sys.argv)
myQCustomMainWindow = QCustomMainWindow()
myQCustomMainWindow.show()
print 'main complete'
## myQCustomMainWindow.loadingQProgressDialog.hide()
(myQApplication.exec_())
def deep():
print 'hello'
app()
print 'hi'
if __name__=="__main__":
deep()
deep()
Answer: You found my
[answer](http://stackoverflow.com/questions/25583418/implementing-
qprogressbar-against-a-class/25584954#25584954) about progress bars ;). But
that answer shows how to implement a `QThread` that opens a file, not how to
close the window once progress reaches 100%. To simply close the window when
the progress is successful, call
[`self.close()`](http://pyqt.sourceforge.net/Docs/PyQt4/qwidget.html#close) in
`def statusLoadCallBack (flag)`:
import sys
import time
from PyQt4 import QtGui
from PyQt4 import QtCore
class QCustomThread (QtCore.QThread):
startLoad = QtCore.pyqtSignal(int)
progressLoad = QtCore.pyqtSignal(int)
statusLoad = QtCore.pyqtSignal(bool)
def __init__ (self, parentQWidget = None):
super(QCustomThread, self).__init__(parentQWidget)
self.wasCanceled = False
def run (self):
# Simulate data load estimation
numberOfprogress = 100
self.startLoad.emit(numberOfprogress)
for progress in range(numberOfprogress + 1):
# Delay
time.sleep(0.001)
if not self.wasCanceled:
self.progressLoad.emit(progress)
else:
break
self.statusLoad.emit(True if progress == numberOfprogress else False)
self.exit(0)
def cancel (self):
self.wasCanceled = True
class QCustomMainWindow (QtGui.QMainWindow):
def __init__ (self):
super(QCustomMainWindow, self).__init__()
self.loadingQProgressDialog = QtGui.QProgressDialog(self)
self.loadingQProgressDialog.setLabelText('Processing')
self.loadingQProgressDialog.setCancelButtonText('Cancel')
self.loadingQProgressDialog.setWindowModality(QtCore.Qt.WindowModal)
self.startWork()
def startWork (self):
myQCustomThread = QCustomThread(self)
def startLoadCallBack (numberOfprogress):
self.loadingQProgressDialog.setMinimum(0)
self.loadingQProgressDialog.setMaximum(numberOfprogress)
self.loadingQProgressDialog.show()
def progressLoadCallBack (progress):
self.loadingQProgressDialog.setValue(progress)
def statusLoadCallBack (flag):
print 'SUCCESSFUL' if flag else 'FAILED'
if flag:
self.close()
myQCustomThread.startLoad.connect(startLoadCallBack)
myQCustomThread.progressLoad.connect(progressLoadCallBack)
myQCustomThread.statusLoad.connect(statusLoadCallBack)
self.loadingQProgressDialog.canceled.connect(myQCustomThread.cancel)
myQCustomThread.start()
def app():
myQApplication = QtGui.QApplication(sys.argv)
myQCustomMainWindow = QCustomMainWindow()
myQCustomMainWindow.show()
print 'main complete'
(myQApplication.exec_())
def deep():
print 'hello'
app()
print 'hi'
if __name__=="__main__":
deep()
deep()
|
MySql table into JSON format
Question: I would like to convert about 100 tables from MySQL to MongoDB, so I think
the easiest way will be to import the data as JSON. Is there a generic query
to convert MySQL data to JSON format without using PHP/Ruby/Python?
SELECT CONCAT(CONCAT('{"CompanyId":', company_id),"}") AS JSON FROM company;
Answer: Try this:
SET @schema = 'test_db';
SET @table = 'test';
SELECT CONCAT(
'SELECT CONCAT(TRIM(TRAILING ', QUOTE(','), ' FROM CONCAT(', QUOTE('{'), ',',
GROUP_CONCAT(QUOTE('"'), ',', QUOTE(COLUMN_NAME), ',',
QUOTE('"'), ',', QUOTE(':'), ',', QUOTE('"'), ',', COLUMN_NAME, ',',
QUOTE('"'),',', QUOTE(',')),
')), ''}'') FROM ', @table
)
INTO @qry FROM
(SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS c
WHERE TABLE_SCHEMA = @schema AND TABLE_NAME = @table) t;
SELECT @qry;
PREPARE stmt FROM @qry;
EXECUTE stmt;
The above query generates the table data in JSON format. Using it, you can
prepare a stored procedure that takes database and table names as input
parameters and populates your data. If you want to exclude some of the columns
in a table, just modify the query that selects from the `INFORMATION_SCHEMA`
database, like `SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS c WHERE
TABLE_SCHEMA = @schema AND TABLE_NAME = @table AND COLUMN_NAME NOT IN
(**columns to exclude**)`.
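As an aside - assuming a newer MySQL than the original setup may have had - MySQL 5.7+ ships native JSON functions, which make the per-row case much simpler:
    SELECT JSON_OBJECT('CompanyId', company_id) AS json FROM company;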
|
Acora doesn't work with gzip-opened files
Question: I'm trying to run Acora parsing on a file, which works as expected on plain
text files. When I try to run it on gzipped files using the python gzip module
(which is supposed to allow transparent reading of compressed files) I receive
nothing in return. This is not a case of not rewinding the file to the
beginning, I tried it from fresh with both compressed and uncompressed.
from acora import AcoraBuilder
f1 = open('input_file.txt', 'r')
ac = AcoraBuilder(tokens).build()
ac.filefindall(f1) ## Works as expected
import gzip
f2 = gzip.open('input_file.txt.gz', 'r')
ac = AcoraBuilder(tokens).build()
ac.filefindall(f2) ## Doesn't work, returns no results
Please let me know if this is something I'm missing.
Answer: I think Acora doesn't support compressed file objects directly. You need to
decompress the data before using it:
from acora import AcoraBuilder
import gzip
with gzip.GzipFile('input_file.txt.gz', 'rb') as fp:
f2 = fp.read()
ac = AcoraBuilder(tokens).build()
ac.findall(f2)
Anyway, Acora has native support for searching in files:
keywords = ['Import', 'FAQ', 'Acora', 'NotHere'.upper()]
builder = AcoraBuilder([s.encode('ascii') for s in keywords])
ac = builder.build()
found = set(kw for kw, pos in ac.filefind('README.rst'))
|
Mayavi2 standalone script with command line arguments
Question: I am trying to parse command line arguments in a MayaVi2 standalone script.
However, the `mayavi2.standalone()` function eats the command line arguments
before I can. For example:
#! /usr/bin/python
import sys, argparse
from mayavi.scripts import mayavi2
from mayavi import mlab
@mayavi2.standalone
def view():
mayavi.new_scene()
mlab.test_plot3d()
def parseCmdLineArgs():
parser = argparse.ArgumentParser(description='Simple plotting using MayaVi2')
parser.add_argument('--scale', dest='scale', action='store',help='Sets the axis scaling')
parser.set_defaults(scale=1.0)
args = parser.parse_args(sys.argv[1:])
return args
if __name__ == '__main__':
args=parseCmdLineArgs()
print "Scale=%g" % args.scale
view()
If I call this script `plot.py` and run it as
$ plot.py -h
I get the `mayavi2.standalone()` help message and not the one for my own
parser.
Answer: From the source code of the `mayavi2` module on
[GitHub](https://github.com/enthought/mayavi/blob/master/mayavi/scripts/mayavi2.py),
it can be seen that it has code in the global namespace, some of which performs
command-line parsing at import time. Since Python code runs from top to
bottom, the problem can be solved by reordering the code so that your own
parsing happens before the import:
import sys, argparse
def parseCmdLineArgs():
parser = argparse.ArgumentParser(description='Simple plotting using MayaVi2')
parser.add_argument('--scale', dest='scale', action='store',help='Sets the axis scaling')
parser.set_defaults(scale=1.0)
args = parser.parse_args(sys.argv[1:])
return args
if __name__ == '__main__':
args=parseCmdLineArgs()
from mayavi.scripts import mayavi2
from mayavi import mlab
@mayavi2.standalone
def view():
mayavi.new_scene()
mlab.test_plot3d()
if __name__ == '__main__':
print "Scale=%g" % args.scale
view()
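If `mayavi2` itself also needs to consume some of the options, `argparse`'s standard `parse_known_args` lets your parser take what it recognizes and leave the rest alone - a sketch of the change inside `parseCmdLineArgs`:
    # Unrecognized options end up in `rest` instead of raising an error
    args, rest = parser.parse_known_args(sys.argv[1:])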
|
Passing php array to python , doesn't work
Question: I'm trying to send a PHP array encoded as JSON to a Python script, but it
doesn't work. Here's my code:
<?php
$data = array('as', 'df', 'gh');
// $result = shell_exec('python /path/to/myScript.py ' . escapeshellarg(json_encode($data)));
$result = system('pythomPath/python scriptPath/myscript.py ' . escapeshellarg(json_encode($data)).' 2>&1',$result);
// Decode the result
$resultData = json_decode($result, true);
// This will contain: array('status' => 'Yes!')
var_dump($resultData);
?>
python :
import sys, json
# Load the data that PHP sent us
try:
data = json.loads(sys.argv[1])
except:
print ("ERROR")
sys.exit(1)
# Processing
result = data[0]
# Sending (to PHP)
print (json.dumps(result))
Answer: The data you're sending to `json.loads` is not valid JSON; check it. If you
want to convert that array to JSON, just use `json.dumps` as you did at the
end of the code.
But that isn't the only error. If you send this array as a parameter through
sys.argv, you won't get the expected result: the script receives the
parameter as a string, not a list.
Try this approach to handle it as a list, then convert it to JSON:
data = eval(sys.argv[1])[0]
print (json.dumps(data))
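If the input can't be trusted, `ast.literal_eval` is a safer drop-in for `eval`, since it only accepts Python literals - a sketch under the same assumption that the argument arrives as a list-like string:
    import ast
    import json
    import sys
    data = ast.literal_eval(sys.argv[1])
    print (json.dumps(data[0]))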
|
Weird TypeError from json.dumps
Question: In python 3.4.0, using `json.dumps()` throws me a TypeError in one case but
works like a charm in other case (which I think is equivalent to the first
one).
I have a dict where keys are strings and values are numbers and other dicts
(i.e. something like `{'x': 1.234, 'y': -5.678, 'z': {'a': 4, 'b': 0, 'c':
-6}}`).
This fails (the stacktrace is not from this particular code snippet but from
my larger script which I won't paste here but it is essentialy the same):
>>> x = dict(foo()) # obtain the data and make a new dict of it to really be sure
>>> import json
>>> json.dumps(x)
Traceback (most recent call last):
File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/pydevd.py", line 1733, in <module>
debugger.run(setup['file'], None, None)
File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/pydevd.py", line 1226, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/_pydev_execfile.py", line 38, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc) #execute the script
File "/mnt/data/gandalv/School/PhD/Other work/Krachy/code/recalculate.py", line 54, in <module>
ls[1] = json.dumps(f)
File "/usr/lib/python3.4/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.4/json/encoder.py", line 192, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.4/json/encoder.py", line 250, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.4/json/encoder.py", line 173, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 306 is not JSON serializable
The `306` is one of the values in one of the inner dicts in `x`. It is not
always the same number - sometimes it is a different number contained in the
dict, apparently because dicts are unordered.
However, this works like a charm:
>>> x = foo() # obtain the data and make a new dict of it to really be sure
>>> import ast
>>> import json
>>> x2 = ast.literal_eval(repr(x))
>>> x == x2
True
>>> json.dumps(x2)
"{...}" # the json representation of dict as it should be
Could anyone, please, tell me why does this happen or what could be the cause?
The most confusing part is that those two dicts (the original one and the one
obtained through evaluation of the representation of the original one) are
equal but the `dumps()` function behaves differently for each of them.
Answer: The cause was that the numbers inside the `dict` were not ordinary Python
`int`s but `numpy.int64`s, which are not supported by the json encoder.
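If converting the values up front is inconvenient, `json.dumps` accepts a `default` callable that it invokes for any object it can't serialize natively - a small sketch:
    import json
    import numpy as np
    x = {'a': np.int64(306)}
    # int() accepts numpy integer scalars, so the value round-trips cleanly
    print(json.dumps(x, default=int))  # {"a": 306}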
|
Python - Use to tablib to import Excel (xls, xlsx) files
Question: I'm having trouble figuring out how to import Excel files into my Python
script. I'm only a few days into Python so I'm guessing it's something very
obvious I'm missing. I'm using Python 3 and the tablib module. From the
examples on the tablib site, I've worked out how to save files in xls format
def saveXLS(self, name, data):
# Form the dataset with the accompanying headers
dataTab = tablib.Dataset()
dataTab.headers = data[0][:]
for i in range(1,len(data)):
dataTab.append(data[i][:])
with open(self.saveDir + name + ".xls", 'wb') as f:
f.write(dataTab.xls)
(I know that loop is horrible and un-Pythonic, but it's important I get
results at the moment as it's for work). At the moment, I open the Excel
workbook and save it as a text file (I should point out that all my data is
tab-delimited and consists of strings, even for numbers).
I open it like this
def loadTxt(self,name, fileType, data):
if( fileType == "txt"):
with open(self.currentWorkingDir + "\\" + name + ".txt",'r') as f:
reader=csv.reader(f,delimiter='\t')
for X in reader:
data.append(X)
I tried copying the "dbf" example on the tablib website
(<http://tablib.readthedocs.org/en/latest/api/>) to get
def loadXLS(self):
self.data = tablib.Dataset()
self.data = open('Data.xlsx').read()
return self.datav
And I get an error (as I expected, as I pulled it from my ass)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 637:
character maps to <undefined>.
I really have no clue how to figure this out unfortunately, so any advice
would be really appreciated.
Answer: You've probably figured it out by now, but for the next person, you need to
read the Excel file as binary:
    import tablib
    # Open in binary mode so tablib receives raw bytes, not decoded text
    my_input_stream = open("my_file.xlsx", "rb")
    my_dataset = tablib.import_set(my_input_stream)
    # Preview a few rows
    my_dataset[1:5]
|
Opening a postgres connection in psycopg2 causes python to crash
Question: I'm getting the following error message when I try to open up a connection to
a postgres database. Perhaps it's related to OpenSSL, but I can't understand
the error message. Can anyone help?
>>> import psycopg2
>>> conn = psycopg2.connect(host = '', port = , dbname
= '', user = '', password = '')
Auto configuration failed
12848:error:02001015:system library:fopen:Is a directory:.\crypto\bio\bss_file.c
:169:fopen('D:/Build/OpenSSL/openssl-1.0.1h-vc9-x64/ssl/openssl.cnf','rb')
12848:error:2006D002:BIO routines:BIO_new_file:system lib:.\crypto\bio\bss_file.
c:174:
12848:error:0E078002:configuration file routines:DEF_LOAD:system lib:.\crypto\co
nf\conf_def.c:199:
Answer: One problem that I can think of is that your installation may not have been
linked/built properly to use openssl. If you haven't tried the packages listed
in the docs yet, maybe you could give it a try.
When I look at the [docs](http://initd.org/psycopg/docs/install.html):
> **Microsoft Windows:**
>
> Jason Erickson maintains a packaged [Windows port of
> Psycopg](http://www.stickpeople.com/projects/python/win-psycopg/) with
> installation executable. Download. Double click. Done.
So you could try to install it from there. Or you can try the pip-friendly,
Windows-friendly (note: I didn't try it myself)
[psycopg2-windows](https://github.com/nwcell/psycopg2-windows) package.
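Another workaround sometimes reported for this exact traceback (note the hard-coded build path to `openssl.cnf`) is to point the `OPENSSL_CONF` environment variable at a valid config file before the SSL layer initializes - the path below is hypothetical, adjust it to your install:
    import os
    # Must run before OpenSSL reads its configuration
    os.environ['OPENSSL_CONF'] = r'C:\OpenSSL-Win64\bin\openssl.cfg'
    import psycopg2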
|
Receive HTTP POST response by python
Question: I use the following example: <http://www.w3schools.com/php/php_forms.asp>
When I run it from browser, I see the results in the browser:
Welcome John
Your email address is [email protected]
When I run python POST http request:
import httplib, urllib
params = urllib.urlencode({'@name': 'John','@email': '[email protected]'})
headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/html"}
conn = httplib.HTTPConnection("10.0.0.201")
conn.request("POST","/welcome.php",params, headers)
response = conn.getresponse()
print "Status"
print response.status
print "Reason"
print response.reason
print "Read"
print response.read()
conn.close()
I see the following:
Status
200
Reason
OK
Read
<html>
<body>
Welcome <br>
Your email address is:
</body>
</html>
The question is: How to receive POST request data in python?
Answer: You are using the wrong form names _and_ the wrong HTTP method. There are no
`@` characters at their starts:
params = urllib.urlencode({'name': 'John','email': '[email protected]'})
Next, the form you point to uses _GET_ , not _POST_ as the handling method, so
you'll have to add these parameters to the URL instead:
conn.request("GET", "/welcome.php?" + params, '', headers)
You are doing yourself a disservice by trying to drive the `HTTPConnection()`
manually. You could use
[`urllib2.urlopen()`](https://docs.python.org/2/library/urllib2.html#urllib2.urlopen)
instead for example:
from urllib2 import urlopen
from urllib import urlencode
params = urlencode({'name': 'John','email': '[email protected]'})
response = urlopen('http://10.0.0.201/welcome.php?' + params)
print response.read()
or you could make use of the [`requests` library](http://docs.python-
requests.org/en/latest/) (separate install) to make it yourself much easier
still:
import requests
params = {'name': 'John','email': '[email protected]'}
response = requests.get('http://10.0.0.201/welcome.php', params=params)
print response.content
|
Python importing global variables inconsistent behaviour
Question: I'm having trouble importing a variable from a different python file to my
current one. I know this has been asked several time previously and I have
tried almost all those solutions, but no use.
In file top.py:
import sys, getopt, pdb
import argparse
import my_parser
my_parser.start_parse(6)
my_parser.in_out(2)
print "info: ",my_parser.verilog_inps
print "N1 data: ",my_parser.ckt_data["N1"]
In file parser.py (the first few lines only):
from collections import defaultdict
ckt_data = {}
global verilog_inps
verilog_inps = []
global verilog_outs
verilog_outs = []
global levels
levels = []
level_dict = defaultdict(list)
class ckt_elements:
delay = 0
inp_ = {}
out_ = {}
level = 0
change = False
prev = {}
typ_ = ""
def start_parse(a):
ckt_data["N1"] = a
def in_out(a):
verilog_inps = [a,a+1,a+2]
The strange thing is that I am able to access some variables and I am not able
to do so for others (I declared the inaccessible ones global to see if that
helps but no)
The aforementioned global variables are being modified in functions in
parser.py.
So, my question: Why this strange behaviour? Am I doing something wrong? Using
python 2.7
Please let me know if the question is not clear enough (I am at a loss to
explain this better)
**EDIT**
I have solved the issue I am facing by using a global definition file. In a
separate file, I have declared the variables and then imported the file into
all relevant files. (using `import globals`)
In any case, I am very curious to know what was wrong with my previous
approach.
Answer: The problem is that this function:
def in_out(a):
verilog_inps = [a,a+1,a+2]
doesn't actually affect the global `verilog_inps` variable. It's assigning to a
local. You need to put the `global` declaration inside each function where you
want to assign to the global variable:
def in_out(a):
global verilog_inps
verilog_inps = [a,a+1,a+2]
Otherwise, assignments inside a function cause the Python bytecode compiler to
create a local variable with the same name and target the assignment to that
variable. Note also that `global` statements at module level (as in your
parser.py) have no effect; the declaration only matters inside the function
that performs the assignment.
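The rule in a nutshell - reading a global needs no declaration, assigning to one does:
    counter = 0
    def read_ok():
        print counter        # reading a module-level name: no declaration needed
    def write_needs_global():
        global counter
        counter += 1         # assigning: requires the global statement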
|
Django - Always receiving __init__() takes exactly 2 arguments (5 given) when trying to store oauth2 credentials in db
Question: I've been trying to save credentials after a user uses their youtube account
to authenticate their account. I've been following this example to store my
new created credentials in my database for later use.
<https://code.google.com/p/google-api-python-
client/source/browse/samples/django_sample/>. In it we are supposed to create
a credentials model for django like below.
from django.contrib.auth.models import User
from django.db import models
from oauth2client.django_orm import CredentialsField
...
class CredentialsModel(models.Model):
id = models.ForeignKey(User, primary_key=True)
credential = CredentialsField()
I'm using South so I had to create a custom migration as it did not like the
custom "CredentialsField" in my model. I copies a user's migration from this
repo
<https://github.com/ssutee/watna_location/blob/master/location/migrations/0010_auto__add_credentialsmodel.py#L19>,
shown below.
def forwards(self, orm):
# Adding model 'CredentialsModel'
db.create_table('location_credentialsmodel', (
('id', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'], primary_key=True)),
('credential', self.gf('oauth2client.django_orm.CredentialsField')(null=True)),
))
db.send_create_signal('location', ['CredentialsModel'])
Now every time I run my app, it crashes on
storage = Storage(CredentialsModel, 'id', user, 'credential')
with the error "**init**() takes exactly 2 arguments (5 given)". I'm pretty
sure that it should be taking 5 arguments, not 2 judging from the docs. Does
anyone have an idea for what I may be doing wrong?
Answer: And the solution hit me right after I posted this. I needed to change my
import statement from the python general version of
from oauth2client.file import Storage
to the Django specific version of
from oauth2client.django_orm import Storage
|
Using Python to append a list of names to a single URL during a crawl of a URL
Question: I am looking to create a script that would crawl my URL and would append a
name to the end of the url during a search. For example
192.168.1.100/map/**foo**
I would then want to parse the response; if the status code is 200 and the
content-length is 83, I would output the URL to a text file. If both
conditions are not matched, I will skip printing the URL.
Here is my rough idea. I am looking for some general direction.
I would start with the URL and read either an array or a list, depending on
the parameter length. I would then parse the response and look for the
conditions; if true, I would write the URL to a text document, else continue
the loop.
Thoughts? I'm not looking for you to write my code just to point me in the
general direction.
Thanks
Here is the latest revision based on @German Petrov code.
import requests
import urlparse
url = "http://192.168.1.2/map/ShowPage.ashx?="
names = ["admin","backup","contact","index","logs","news","reboot",
"register","test","users"]
with open("/root/Desktop/urls.txt", 'a') as urls:
for name in names:
newUrl = url + name
r = requests.get(newUrl)
c = r.content
if r.status_code == 200 and (("Unknown" in c) <>1):
urls.write(url + '\n')
WOW!!! SUPER STUPID ERROR on my part. It helps if I add http:// to the url....
Also no need to do the urlparse.urljoin as it cuts off my full url.
This is why I like my super easy C# ide :-D
Answer: For the following, you have to install `requests`: <http://docs.python-
requests.org/en/latest/user/install/#install>
First
import requests
import urlparse
baseurl= "192.168.1.2/map/ShowPage.ashx?="
names = ["admin","backup","contact","index","logs","news","reboot",
"register","test","users"]
for name in names:
print urlparse.urljoin(baseurl, name)
gives you the following output:
    http://192.168.1.2/map/admin
    http://192.168.1.2/map/backup
    http://192.168.1.2/map/contact
    http://192.168.1.2/map/index
    http://192.168.1.2/map/logs
    http://192.168.1.2/map/news
    http://192.168.1.2/map/reboot
    http://192.168.1.2/map/register
    http://192.168.1.2/map/test
    http://192.168.1.2/map/users
Then you can update the code with get call:
import requests
import urlparse
baseurl = "192.168.1.2/map/ShowPage.ashx?="
names = ["admin","backup","contact","index","logs","news","reboot",
"register","test","users"]
with open("C:\\urls.txt", 'a') as urls:
for name in names:
url = urlparse.urljoin(baseurl, name)
r = requests.get(url)
if r.status_code == 200 and int(r.headers['content-length']) > 73:
                urls.write(url + '\n') # Write it to the file, one URL per line
|
Extracting all tweets on a topic from yesterday using Python's Tweepy?
Question: It seems that the API allows people to dig back through tweets posted a couple
of days ago. Since I don't need to stream tweets instantaneously and yet I want
to collect all tweets on a particular topic (ie: fast car) for a period, I
think running a python script collecting all the tweets on a topic from
"yesterday" will do. The following codes does something like that, but I can
only get a pre-specified amount (ie: 200), I can bumped up the number to very
large (ie: 50,000) but is there a better way to capture ALL the tweets on a
topic the day before?
import tweepy
import time
ckey = ""
csecret = ""
atoken = ""
asecret = ""
OAUTH_KEYS = {'consumer_key':ckey, 'consumer_secret':csecret,
'access_token_key':atoken, 'access_token_secret':asecret}
auth = tweepy.OAuthHandler(OAUTH_KEYS['consumer_key'], OAUTH_KEYS['consumer_secret'])
api = tweepy.API(auth)
# Extract the first "xxx" tweets related to "fast car"
for tweet in tweepy.Cursor(api.search, q='fast-car', since='2014-09-14', until='2014-09-15').items(200): # need to figure out how to extract all tweets in the previous day
if tweet.geo != None:
print "////////////////////////////////"
print "Tweet created:", tweet.created_at
print ""
Answer: To attempt to answer this question: I could set the number of items retrieved
to a hypothetically huge number, for example items(999999999), and once the
script extracts all tweets from the previous day, it will stop automatically.
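For what it's worth, `Cursor.items()` can also be called with no argument, in which case it iterates until the API stops returning results, so the arbitrary huge number isn't needed:
    # Iterates over every result the search endpoint will return
    for tweet in tweepy.Cursor(api.search, q='fast-car',
                               since='2014-09-14', until='2014-09-15').items():
        print tweet.created_at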
|