Searching for key words in python
Question: If I ask a question in Python and the answer is `chicken`, I want to output
something related to chicken. And, if the answer is `beef` I want to output
something related to beef, dependent on the answer provided.
How could I structure this? Should I have multiple lists with key words and
related answers? Newbie.
Answer: I would use a `dict` of `list`s:
import random
similar_words = {
'chicken': ['poultry', 'wings'],
'beef': ['cow', 'ground-beef', 'roast-beef', 'steak'],
}
word = raw_input("Enter a word: ").strip()
if word in similar_words:
print random.choice(similar_words[word])
else:
print "Not found!"
See the Python manual on [Data
Structures](https://docs.python.org/2/tutorial/datastructures.html) for more
information. Note that I'm also using
[`random.choice()`](https://docs.python.org/2/library/random.html#random.choice)
to select a random item from each list.
Here's the output of it running:
$ python words.py
Enter a word: chicken
poultry
$ python words.py
Enter a word: beef
cow
$
**Edit** : You were asking in the comments how you could do this if the words
were contained inside a whole sentence. Here's one example:
words = raw_input("Enter a word or sentence: ").strip().split()
for word in words:
if word.lower() in similar_words:
print random.choice(similar_words[word.lower()])
else:
print "Not found!"
Here, we're using `split()` to split the sentence into a `list` of words. Then
we loop through each word, and see if (the lowercase version of) the word
exists in our `dict`, and do the same thing as we did above with a single
word.
|
Regex doesn't return all img tags - Python
Question: I have a python script that downloads the html and the images shown in the
html so I can open the file locally.
It works fine; the only problem is that there is a certain div whose images
don't get downloaded/found by the regex. I have no idea why, though. It's not a
huge problem, but I'd like to know the reason.
This is the important part of the script:
url = "http://www.somedomain.com"
urlContent = urllib2.urlopen(url).read()
#Write originalHtml to file
f = open("originalHtml",'w')
f.write(urlContent)
f.close()
# HTML image tag: some_text
imgUrls = re.findall('img .*?src="(.*?)"', urlContent)
After that I loop over the links, one by one, downloading the images and
replacing the links in the html so the "src" points to the local path where I
downloaded it. The script takes care of relative links and direct links.
However, part of the images never gets downloaded. This is html that doesn't
get picked up :
<img src="/images/news/den-mcx80001.jpg" style="width:60px;height:36px;margin-top:12px; margin-bottom:12px; margin-left:17px; margin-right:17px;float:left; ">
This however does get picked up:
<img class="productimg" style="width:72px;height:74px;margin-top:15px; margin-bottom:15px; margin-left:3px; margin-right:28px " src="/images/01_prdarticledocs/ImagesSmall/jpr/jpr-prx718xlf.jpg" alt="jpr-prx718xlf">
I'm not an expert in regexes, far from it, but it does seem that it should
pick up both, no?
Answer: Fixed with BeautifulSoup, as the comments suggested. Here is a code snippet for
anyone looking for a script that downloads the HTML along with its images, saves
them, and relinks the images in the HTML to local relative links.
import urllib2
import re
from BeautifulSoup import BeautifulSoup
from os.path import basename
from urlparse import urlsplit
#get content of a url and save (not necessary) the originalhtml
url = "http://www.someDomain.com"
urlContent = urllib2.urlopen(url).read()
page = BeautifulSoup(urlContent)
f = open("originalHtml",'w')
f.write(urlContent)
f.close()
#Find all images in the file, put them in imgUrls
imgUrls = page.findAll('img')
imagesDict = {}
# download all images
for image in imgUrls:
try:
#get src tag and download file, save link and local link in dict
imgUrl = image['src']
imgData = urllib2.urlopen(imgUrl).read()
fileName = basename(urlsplit(imgUrl)[2])
location = "images/" + fileName;
imagesDict[location] = imgUrl
print "loc=" + location
output = open(location,'wb')
output.write(imgData)
output.close()
except:
#not so clean solution to catch hard-linked images ('http://somedomain.com/img/image.jpg
try:
imgData = urllib2.urlopen(url + imgUrl).read()
fileName = basename(urlsplit(imgUrl)[2])
location = "images/" + fileName
imagesDict[location] = imgUrl
print "loc=" + location
output = open(location,'wb')
output.write(imgData)
output.close()
except:
print "Double ERROR"
print "Error" + imgUrl
pass
#Replace the old links to new local links
for dictKey in imagesDict:
urlContent = re.sub(imagesDict[dictKey], dictKey, urlContent)
#save HTML
f = open("imagesReplaced.html", 'w')
f.write(urlContent)
f.close()
|
run bash command in python and display on browser
Question: I have this simple python code:
import os
#os.system ("bash -c 'ls /home/'")
script = "ls /home/user/"
os.system(script)
How do I use PHP to display the output?
Answer: Not with HTML by itself, but you can use PHP to run the Python script and echo
its output into the HTML using shell_exec, e.g.
<?php $output = shell_exec("python scriptName.py");
echo '<p>'.$output.'<p>'; ?>
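On the Python side, the script just has to write its result to stdout so that
`shell_exec` can capture it; a minimal sketch (my own variant using `subprocess`
instead of `os.system`, with a hypothetical path):
    import subprocess
    # Capture the command output and print it, so PHP's shell_exec() receives it.
    output = subprocess.check_output(["ls", "/home/user/"])
    print(output.decode())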
|
How to use windows authentication to connect to MS SQL server from windows workstation in another domain with Python
Question: I'm trying to connect to `SQL server 2000` installed on `Windows server 2003`
from `Windows Server 2008 R2` using `Python 3.4` and `pyodbc` module. Those
servers are in different AD domains. `Windows only` authentication is enabled
on SQL server and I can't change that.
drv = '{SQL server}'
svr = 'sql.my-domain.local'
usr = 'my-domain.local\testuser'
pwd = 'password'
db = 'testdb'
pyodbc.connect(driver=drv, server=svr, user=usr, password=pwd, database=db)
The connection above fails with the following error:
pyodbc.Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQLServer]
Login failed for user 'svx-iroot.local\\sqlexecutive'.
Reason: Not associated with a trusted SQL Server connection. (18452) (SQLDriverConnect)")
There are some questions, for example [this
one](http://stackoverflow.com/questions/16515420/connecting-to-ms-sql-server-with-windows-authentication-using-python),
suggesting to add `trusted_connection='yes'` to the `pyodbc` connection for
Windows authentication support, but in this case it does not help: with that
option local credentials are used, while I need to provide credentials
explicitly because the originating workstation is in a different AD domain.
Creation of `User DSN` in `ODBC Data Source Administrator` with `SQL Server`
driver fails with the same error mentioned above.
Is there a way to make this work?
Meanwhile I installed the `FreeTDS` driver for Windows from
<http://sourceforge.net/projects/freetdswindows/> and a connection test using
the `tsql` utility does work:
tsql -S sql.my-domain.local -U my-domain.local\testuser -P password
But the `FreeTDS` driver is not available in `ODBC Data Source Administrator`.
`FreeTDS` is traditionally used with `unixODBC`. Is it possible to use
this driver in a Windows environment with `pyodbc`?
Update:
It turns out `FreeTDS` binaries mentioned above include `unixODBC` as well.
Configuration of `freetds.conf`, `odbc.ini` and `odbcinst.ini` was made like
described, for example,
[here](http://stackoverflow.com/questions/16925825/having-troubles-with-unixodbc-freetds-and-pyodbc).
But at this point I don't understand how `pyodbc` is supposed to know that the
`FreeTDS` driver exists. And indeed a connection attempt with the `FreeTDS`
driver fails with the following error:
pyodbc.Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager]
Data source name not found and no default driver specified (0) (SQLDriverConnect)')
`Pyodbc` only knows about drivers available in `ODBC Data Source
Administrator`:
[](http://i.stack.imgur.com/9zSJo.png)
There are 2 ways to move forward. First option is to make `ODBC Data Source
Administrator` aware of `FreeTDS` driver. To achieve that a new value needs to
be created in registry key `HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\ODBC
Drivers` with name `FreeTDS` and value `Installed`. Then a new key `FreeTDS`
is created in `HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI` and settings for
`FreeTDS` driver are set as string values in this registry key.
[](http://i.stack.imgur.com/O4B9F.png)
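For reference, a sketch of how those registry entries could be created
programmatically with Python's `winreg` module; note that the value names
`Driver`/`Setup` and the DLL path below are my assumptions, not taken from the
FreeTDS documentation, and the script has to run as administrator:
    import winreg
    ODBCINST = r"SOFTWARE\ODBC\ODBCINST.INI"
    # Announce the driver in the "ODBC Drivers" list.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, ODBCINST + r"\ODBC Drivers") as key:
        winreg.SetValueEx(key, "FreeTDS", 0, winreg.REG_SZ, "Installed")
    # Create the driver's own key and its settings (path is hypothetical).
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, ODBCINST + r"\FreeTDS") as key:
        winreg.SetValueEx(key, "Driver", 0, winreg.REG_SZ, r"C:\FreeTDS\bin\tdsodbc.dll")
        winreg.SetValueEx(key, "Setup", 0, winreg.REG_SZ, r"C:\FreeTDS\bin\tdsodbc.dll")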
After completion of this procedure the `FreeTDS` driver became available in `ODBC
Data Source Administrator`, but the connection still failed. An attempt to create
a `User DSN` in `ODBC Data Source Administrator` with `FreeTDS` fails with error
code `193`, which is caused by the incompatibility of the 64-bit `ODBC Data
Source Administrator` and the 32-bit version of `FreeTDS`. I don't have a 64-bit
version of `FreeTDS` available. Potentially it could be possible to compile it
from source.
Another option is to make `pyodbc` use another driver manager (`unixODBC`)
instead of `ODBC Data Source Administrator`. Don't know how to approach that
yet.
Answer: I ended up using [pymssql](https://github.com/pymssql/pymssql "pymssql")
version 2.1.3 installed with a wheel obtained from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#pymssql>. It has `FreeTDS`
included and it worked right out of the box:
import pymssql
conn = pymssql.connect(
host=r'sql.my-domain.local',
user=r'my-domain.local\testuser',
password='password',
database='testdb'
)
cursor = conn.cursor()
cursor.execute('SELECT * FROM testtable')
|
Django migrations throw 1072 - key column 'car_make_id' doesn't exist in table
Question: Here's the simplified task and setup (Django 1.8, MySQL, Python 2.7). I've got:
class Car(models.Model):
make = models.ForeignKey(CarMake)
class Bike(models.Model):
make = models.ForeignKey(BikeMake)
class CarMake(models.Model):
name = models.CharField(max_length=32)
class BikeMake(models.Model):
name = models.CharField(max_length=32)
Now, I need to ditch the **BikeMake** model completely so I update the
**CarMake** model with values from **BikeMake** and also update the foreign
key relationship in **Bike**.
I've created the following migration, which updates the **CarMake** with names
from **BikeMake** , adds temporary field **Bike.car_make** , migrates data
from **Bike.make** to **Bike.car_make** , removes the **Bike.make** field and
renames **Bike.car_make** to **Bike.make**.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
def update_car_makes(apps, schema_editor):
"""Update CarMakes with BikeMakes"""
BikeMake = apps.get_model('my_app', 'BikeMake')
CarMake = apps.get_model('my_app', 'CarMake')
for item in BikeMake.objects.all():
if not CarMake.objects.filter(name=item.name).exists():
CarMake.objects.create(name=item.name)
def remove_car_makers(apps, schema_editor):
"""Restore original CarMake (exclude BikeMake)"""
pass
def migrate_to_car_make(apps, schema_editor):
"""Set Bike.car_make according to Bike.make"""
CarMake = apps.get_model('my_app', 'CarMake')
Bike = apps.get_model('my_app', 'Bike')
for item in Bike.objects.all():
old_make = item.make
new_make = CarMake.objects.get(name=old_make.name)
item.car_make = new_make
item.save()
def reverse_migrate_to_car_make(apps, schema_editor):
pass
def dummy_forwards(apps, schema_editor):
# Empty forward migration needed for having custom backwards migration
pass
def restore_make_column_data(apps, schema_editor):
BikeMake = apps.get_model('products', 'BikeMake')
Bike = apps.get_model('products', 'Bike')
for item in Bike.objects.all():
old_make = item.bike_make
new_make = BikeMake.objects.get(name=old_make.name)
item.make = new_make
item.save()
class Migration(migrations.Migration):
dependencies = [('my_app', '0001_blah_blah')]
operations = [
migrations.RunPython(
update_car_makes,
reverse_code=remove_car_makers
),
migrations.AddField(
model_name='bike',
name='car_make',
field=models.ForeignKey(default=1, to='my_app.CarMake'),
preserve_default=False
),
migrations.RunPython(
migrate_to_car_make,
reverse_code=reverse_migrate_to_car_make
),
migrations.RunPython(
dummy_forwards,
reverse_code=restore_make_column_data
),
migrations.RemoveField(
model_name='bike',
name='make',
),
migrations.RenameField(
model_name='bike',
old_name='car_make',
new_name='make'
)
]
And when I try to run it, I get the #1072 error on running the last operation:
`migrations.RenameField`. Now the interesting part is that from DB POV
everything is complete, data migrated, column renamed, only the migration
isn't marked as done and error is thrown.
Also if I just move the `migrations.RenameField` to a separate migration file
and run two migrations in a row – everything works fine and it doesn't raise
the #1072 error.
In addition, I've tried inserting a breakpoint just before the
`migrations.RenameField` and I verified that **Bike.car_make** column exists
and I can fetch normally all objects of **Bike** model at that point.
The MySQL query, that results in error is following:
CREATE INDEX `my_app_bike_c2036163` ON `my_app_bike` (`car_make_id`)
Any ideas how to fix it and have it within one migration file? Thanks in
advance!
**UPDATE 04.02.16**
As @kvikshaug pointed out, this happens because Django creates indexes and
constraints after performing all operations, i.e. the raw SQL for creating an
index and/or constraint is generated at the time the corresponding operation is
executed (in my case `AddField`), but that query is actually run at the very
end, hence the error.
One possible solution for relatively small schemas could be to use Django's
[`RunSQL`](https://docs.djangoproject.com/en/1.9/ref/migration-
operations/#runsql) and type the raw queries yourself, but that's quite
cumbersome + you've got to create constraints yourself.
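For illustration, a rough sketch of what such a `RunSQL` operation might look
like (the SQL below is guessed MySQL syntax with made-up index/constraint
names, not taken from the actual schema):
    # Hypothetical replacement for the AddField/RemoveField/RenameField trio,
    # doing the column rename, index and constraint creation in hand-written SQL.
    migrations.RunSQL(
        sql=[
            "ALTER TABLE my_app_bike CHANGE car_make_id make_id INT NOT NULL;",
            "CREATE INDEX my_app_bike_make_id_idx ON my_app_bike (make_id);",
            "ALTER TABLE my_app_bike ADD CONSTRAINT my_app_bike_make_id_fk "
            "FOREIGN KEY (make_id) REFERENCES my_app_carmake (id);",
        ],
    ),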
So I went with separating the renaming migration.
Answer: Django migrations create indexes after performing all operations. Your second
operation, adding field `car_make`, makes Django add the `CREATE INDEX`
command which you noted causes the error:
CREATE INDEX `my_app_bike_c2036163` ON `my_app_bike` (`car_make_id`)
**Even though you later renamed the field, Django still tries to create the
index for the now-missing `car_make` field, which is why you get the error.** You
can see this clearly by running `sqlmigrate`:
$ ./manage.py sqlmigrate my_app 0002_blah_blah
BEGIN;
--
-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:
-- Raw Python operation
--
--
-- Add field car_make to bike
--
ALTER TABLE "my_app_bike" ADD COLUMN "car_make_id" integer DEFAULT 1 NOT NULL;
ALTER TABLE "my_app_bike" ALTER COLUMN "car_make_id" DROP DEFAULT;
--
-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:
-- Raw Python operation
--
--
-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:
-- Raw Python operation
--
--
-- Remove field make from bike
--
ALTER TABLE "my_app_bike" DROP CONSTRAINT "my_app_bike_make_id_5615ed11_fk_my_app_bikemake_id";
ALTER TABLE "my_app_bike" DROP COLUMN "make_id" CASCADE;
--
-- Rename field car_make on bike to make
--
ALTER TABLE "my_app_bike" RENAME COLUMN "car_make_id" TO "make_id";
CREATE INDEX "my_app_bike_78e8ca60" ON "my_app_bike" ("car_make_id");
ALTER TABLE "my_app_bike" ADD CONSTRAINT "my_app_bike_car_make_id_6c42be09_fk_my_app_carmake_id" FOREIGN KEY ("car_make_id") REFERENCES "my_app_carmake" ("id") DEFERRABLE INITIALLY DEFERRED;
COMMIT;
You could try to report this as a bug (or search; maybe it's already
reported), but you're probably best off following Alasdair's suggestion and
just separating the migrations.
|
python turtle graphics window won't open
Question: I have a little piece of code from a tutorial which should work fine, but I
can't get the turtle graphics window to show (I'm on Windows 10 using Python
2.7.10). The code looks like this:
import turtle
def draw_square():
window = turtle.Screen()
window.bgcolor("red")
brad = turtle.Turtle()
brad.forward(100)
window.exitonclick()
However, when I execute it nothing happens, I don't even get an error message.
Instead, the shell just says
================================ RESTART ================================
and displays the little Windows circle (indicating it is working on something)
but the turtle graphics window does not pop up.
I have tried repairing my python installation and additionally installing the
x86 version, but I get the same outcome on the other installation, too.
Does anyone please know how to fix this?
Thank you, Tomislav
Answer: Functions don't do anything unless you call them. Try:
import turtle
def draw_square():
window = turtle.Screen()
window.bgcolor("red")
brad = turtle.Turtle()
brad.forward(100)
window.exitonclick()
draw_square()
|
How do you remove a middle initial from a name in python?
Question: I'm trying to figure out something that seems like it should be simple. I'm
trying to remove the middle initial from names, but I'm not sure how to do it
without making a replace() for every letter of the alphabet. This is what I'm
looking for:
(Starts as)
"John D Smith"
"Robert B Johnson"
(Ends as)
"John Smith"
"Robert Johnson"
What is the simplest way of accomplishing the above in Python? The middle
initial is random, but is always surrounded by white space.
Answer: This may help, based on your post:
>>> import re
>>> re.sub(' [A-Z]* ', ' ', "John D Smith")
'John Smith'
>>> re.sub(' [A-Z]* ', ' ', "Robert B Johnson")
'Robert Johnson'
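If the middle initial is always a single letter surrounded by whitespace, a
non-regex sketch (my own variant, not part of the answer above) also works:
    def drop_middle_initial(name):
        parts = name.split()
        # Drop the middle token only when it is a single letter.
        if len(parts) == 3 and len(parts[1]) == 1:
            return parts[0] + ' ' + parts[2]
        return name
    print(drop_middle_initial('John D Smith'))      # John Smith
    print(drop_middle_initial('Robert B Johnson'))  # Robert Johnson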
|
Python - Script that appends rows; checks for duplicates before writing
Question: I'm writing a script that has a for loop to extract a list of variables from
each 'data_i.csv' file in a folder, then appends that list as a new row in a
single 'output.csv' file.
My objective is to define the headers of the file once and then append data to
the 'output.csv' container-file so it will function as a backlog for a
standard measurement. The first time I run the script it will add all the
files in the folder. Next time I run it, I want it to only append files that
have been added since. I thought one way of doing this would be to check for
duplicates, but the codes I found for that so far only searched for
consecutive duplicates.
Do you have suggestions?
Here's how I made it so far:
import csv, os
# Find csv files
for csvFilename in os.listdir('.'):
if not csvFilename.endswith('.csv'):
continue
# Read in csv file and choose certain cells
csvRows = []
csvFileObj = open(csvFilename)
csvData = csv.reader(csvFileObj,delimiter=' ',skipinitialspace='True')
csvLines = list(csvData)
cellID = csvLines[4][3]
# Read in several variables...
csvRows = [cellID]
csvFileObj.close()
resultFile = open("Output.csv", 'a') #open in 'append' modus
wr = csv.writer(resultFile)
wr.writerows([csvRows])
csvFileObj.close()
resultFile.close()
This is the final script after mgc's answer:
import csv, os
f = open('Output.csv', 'r+')
merged_files = csv.reader(f)
merged_files = list()
for csvFilename in os.listdir('.'):
if not csvFilename.endswith('_spm.txt'):
continue
if csvFilename in merged_files:
continue
csvRows = []
csvFileObj = open(csvFilename)
csvData = csv.reader(csvFileObj,delimiter=' ',skipinitialspace='True')
csvLines = list(csvData)
waferID = csvLines[4][3]
temperature = csvLines[21][2]
csvRows = [waferID, temperature]
merged_files.append(csvRows)
csvFileObj.close()
wr = csv.writer(f)
wr.writerows(merged_files)
f.close()
Answer: You can keep track of the name of each file already handled. If this log file
doesn't need to be human-readable, you can use
[pickle](https://docs.python.org/3/library/pickle.html). At the start of your
script, you can do:
import pickle
try:
with open('merged_log', 'rb') as f:
merged_files = pickle.load(f)
except FileNotFoundError:
merged_files = set()
Then you can add a condition to skip files previously processed:
if filename in merged_files: continue
Then, when you are processing a file, you can do:
merged_files.add(filename)
And save the variable at the end of your script (so it will be reused on the
next run):
with open('merged_log', 'wb') as f:
pickle.dump(merged_files, f)
(However, there are other options for your problem; for example, you can
slightly change the name of a file once it has been processed, like changing
the extension from `.csv` to `.csv_`, or move processed files into a subfolder,
etc.)
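The rename idea from the parenthesis, as a tiny sketch (reusing the
`csvFilename` variable from your loop):
    import os
    # After a file has been merged, rename it so the next run skips it.
    os.rename(csvFilename, csvFilename + '_')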
Also, in the example in your question, I don't think that you need to open
(and close) your output file on each iteration of your `for` loop. Open it
once before the loop, write what you have to write, then close it once you
have left the loop, as sketched below.
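Putting both points together, a rough sketch of how the whole script could look
(variable names follow your script; the column index for `cellID` is the one
from your first version):
    import csv, os, pickle
    # Load the set of already-merged file names, if the log exists.
    try:
        with open('merged_log', 'rb') as f:
            merged_files = pickle.load(f)
    except FileNotFoundError:
        merged_files = set()
    # Open the output file once, outside the loop.
    with open('Output.csv', 'a') as resultFile:
        wr = csv.writer(resultFile)
        for csvFilename in os.listdir('.'):
            if not csvFilename.endswith('.csv') or csvFilename == 'Output.csv':
                continue
            if csvFilename in merged_files:
                continue
            with open(csvFilename) as csvFileObj:
                csvLines = list(csv.reader(csvFileObj, delimiter=' ', skipinitialspace=True))
            wr.writerow([csvLines[4][3]])   # cellID, plus any other variables
            merged_files.add(csvFilename)
    # Remember what has been processed for the next run.
    with open('merged_log', 'wb') as f:
        pickle.dump(merged_files, f)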
|
math.log(x) returns unexpected results in Python
Question: I am attempting to do some basic dB calculations using Python. If I use either
Excel or a scientific calculator:
20*log(0.1) = -20
in Python:
20*log(0.1) = -46.0517018599
to simplify further, Excel and Scientific calc:
log(0.1) = -1
Python:
log(0.1) = -2.30258509299
I start my script with
import math
log = math.log
Can someone explain why this is and how I can fix it?
Answer: `math.log` is actually `ln` (with a base of _e_). To get the expected results,
use `math.log(0.1, 10)` or `math.log10(0.1)`.
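A quick interactive check (Python 2 shown, to match the values in the question):
    >>> import math
    >>> print math.log(0.1)        # natural log (base e)
    -2.30258509299
    >>> print math.log10(0.1)      # base-10 log
    -1.0
    >>> print 20 * math.log10(0.1)
    -20.0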
|
Problems with decoding bytes into string or ASCII in python 3
Question: I'm having a problem decoding received bytes with python 3. I'm controlling an
arduino via a serial connection and read it with the following code:
import serial
arduino = serial.Serial('/dev/ttyACM0', baudrate=9600, timeout=20)
print(arduino.isOpen())
myData = arduino.readline()
print(myData)
The outcome I get looks like
`b'\xe1\x02\xc1\x032\x82\x83\x10\x83\xb2\x80\xb0\x92\x0b\xa0'` or
`b'\xe1\x02"\xe1\x00\x83\x92\x810\x82\xb2\x82\x91\xb2\n'` and tried to decode
it the usual way via `myData.decode('utf-8')` and I get the error
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 1:
invalid start byte`. I tried other decodings (ASCII, cp437, hex, utf-16), but
always face the same error.
Do you have any suggestions, how I can decode the received bytes or which
decoding the arduino requires? I already tried to decode it piece by piece
using a for loop, but I always face the same error message.
And is there a general way to avoid decoding problems or to find out, which
decoding I have to use?
Thanks in advance.
Answer: As @jsbueno said in the comments this is not a decoding problem, it is
probably because the byte data being received is actually binary data. I had a
very similar problem when reading binary data (bytes) from a file.
There are 2 options to use here, the first one being the struct module:
import struct
a = open("somedata.img", "rb")
b = a.read(2)
file_size, = struct.unpack("i",a.read(4))
`struct.unpack` returns a tuple; the trailing comma in `file_size, = ...` unpacks
it, or to get an integer directly you can use `struct.unpack('i', a.read(4))[0]`
Another way which I used if you want to store the data in a numpy array is:
import numpy as np
f = open("somefile.img", "r")
a = np.fromfile(f, dtype=np.uint32)
|
Unable to store terminal output of subprocess with python
Question: My code has two potential outcomes in the terminal: `Can't connect RFCOMM
socket: Permission denied` and `Can't connect RFCOMM socket: Host is down`. I
need to store either result as a string in a variable, but everything I've
tried has failed. This is the code that I thought would do it:
from subprocess import check_output
out = check_output(["sudo", "rfcomm", "connect", "0", "AA:BB:CC:DD:EE:FF", "10"])
print "output: %s" % out
Instead I get nothing:
user:~/home $./foo.py
Can't connect RFCOMM socket: Permission denied
output:
Another attempt:
proc = subprocess.Popen(["sudo rfcom connect 0 AA:BB:CC:DD:EE:FF 10"], stdout=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
print "output: %s" % out, err
This at least gives me something when I print. Unfortunately it's "None"
telling me there is no error and not the actual output:
user:~/home $./foo.py
Can't connect RFCOMM socket: Permission denied
output: None
I've already tried [this](http://stackoverflow.com/questions/2502833/store-
output-of-subprocess-popen-call-in-a-string)
[this](http://stackoverflow.com/questions/19243020/in-python-get-the-output-
of-system-command-as-a-string)
[this](http://stackoverflow.com/questions/3503879/assign-output-of-os-system-
to-a-variable-and-prevent-it-from-being-displayed-on)
[this](http://stackoverflow.com/questions/4514751/pipe-subprocess-standard-
output-to-a-variable) and probably a couple others. I'm sure I'm missing a
piece of critical knowledge somewhere. Thanks for any pointers!
Answer: `rfcomm` is apparently writing its output to standard error, yet you are only
capturing standard output. To capture both, include `stderr=subprocess.STDOUT`
in the call to `check_output`:
subprocess.check_output(["sudo", "rfcomm", "connect", "0", "AA:BB:CC:DD:EE:FF", "10"],
stderr=subprocess.STDOUT)
|
IPython scripting - Exit script with status code
Question: I am trying to use ipython to script git pre-commit hooks, since it has a nice
syntax for running shell commands and converting the stdout result into a list
of strings (which makes for easy processing).
I need to return a status code != 0 from the ipython script so that git pre-
commit hook will abort the commit.
Consider this example script
#! /bin/ipython
import sys
sys.exit(1)
running it from the shell
$ ipython test.ipy
but then checking the status code with `$ echo $?` always returns `0`
Is there a way to make ipython return a non-zero status code?
Answer: Open up ipython and try running that code interactively.
In [1]: import sys
In [2]: sys.exit(1)
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1
To exit: use 'exit', 'quit', or Ctrl-D.
So, ipython is catching the SystemExit exception. I would suggest using a
different interpreter for this particular job, nice as ipython is.
Alternatively you can use:
import os
os._exit(1)
However, this skips all sorts of important cleanup code (e.g. `finally`
blocks) and is generally a Bad Idea.
Edit:
This seems to work. After writing it I can hear some alarm bells in the
distance and there are red flags waving. Not sure what that's about. Anyway,
create a new script `/usr/bin/ipysh` with:
#!/usr/bin/python
import sys
from IPython.core import interactiveshell
shell = interactiveshell.InteractiveShell()
shell.safe_execfile(sys.argv[1], {}, raise_exceptions=True,
exit_ignore=False)
Make that executable, then set your hook's hashbang to `#!/usr/bin/ipysh`.
|
How to install mysql-connector for python 3.5.1?
Question: I'm on Python 3.5.1 and I am having trouble installing the MySQL connector:
pip install --allow-external mysql-connector-python-rf mysql-connector-python-rf
is not working, and neither is the normal pip command for mysql-connector-python-rf.
I am getting the following message:
error: option --single-version-externally-managed not recognized
Any ideas?
Answer: There is no `mysql-connector` for `python 3.5.1` as of now, but you can use
`pymysql` to connect to MySQL from Python 3.5.1!
import pymysql
conn = pymysql.connect(host='localhost', port=port_no, user='db_user', passwd='password', db='db_name')
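Once connected, the usual DB-API pattern applies; a small usage sketch (table
and column names here are made up):
    cur = conn.cursor()
    cur.execute('SELECT id, name FROM some_table')
    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()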
|
Building a generic XML parser in Python?
Question: I am a newbie with one week of experience writing Python scripts.
I am trying to write a generic parser (Library for all my future jobs) which
parses any input XML without any prior knowledge of tags.
* Parse input XML.
* Get the values from the XML and Set the values basing on the tags.
* Use these values in the rest of the job.
I am using the "xml.etree.ElementTree" library and i am able to parse the XML
in the below mentioned way.
#!/usr/bin/python
import os
import xml.etree.ElementTree as etree
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('start reading XML property file')
filename = "mood_ib_history_parameters_DEV.xml"
logger.info('getting the current location')
__currentlocation__ = os.getcwd()
__fullpath__ = os.path.join(__currentlocation__,filename)
logger.info('start parsing the XML property file')
tree = etree.parse(__fullpath__)
root = tree.getroot()
hive_db = root.find("hive_db").text
EDGE_HIVE_CONN = root.find("EDGE_HIVE_CONN").text
target_dir = root.find("target_dir").text
to_email_alias = root.find("to_email_alias").text
to_email_cc = root.find("to_email_cc").text
from_email_alias = root.find("from_email_alias").text
dburl = root.find("dburl").text
SQOOP_EDGE_CONN = root.find("SQOOP_EDGE_CONN").text
user_name = root.find("user_name").text
password = root.find("password").text
IB_log_table = root.find("IB_log_table").text
SR_DG_master_table = root.find("SR_DG_master_table").text
SR_DG_table = root.find("SR_DG_table").text
logger.info('Hive DB %s', hive_db)
logger.info('Hive DB %s', hive_db)
logger.info('Edge Hive Connection %s', EDGE_HIVE_CONN)
logger.info('Target Directory %s', target_dir)
logger.info('To Email address %s', to_email_alias)
logger.info('CC Email address %s', to_email_cc)
logger.info('From Email address %s', from_email_alias)
logger.info('DB URL %s',dburl)
logger.info('Sqoop Edge node connection %s',SQOOP_EDGE_CONN)
logger.info('Log table name %s',IB_log_table)
logger.info('Master table name %s',SR_DG_master_table)
logger.info('Data governance table name %s',SR_DG_table)
Now the question is: if I want to parse an XML file without any knowledge of the
tags and elements and use the values, how do I do it? I have gone through
multiple tutorials, but all of them parse the XML by using the tags, like below:
SQOOP_EDGE_CONN = root.find("SQOOP_EDGE_CONN").text
Can anybody point me to the right tutorial, library, or code snippet to parse
the XML dynamically?
Answer: I think the official documentation is pretty clear and contains some examples:
<https://docs.python.org/3/library/xml.etree.elementtree.html>
The main part you need to implement is loop over the child nodes (potentially
recursively):
for child in root:
# child.tag contains the tag name, child.attrib contains the attributes
print(child.tag, child.attrib)
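If the file can contain nested elements, a small recursive sketch (assuming
`root` was obtained as in your code) collects every tag/text pair without
knowing the tags in advance:
    def collect_values(element, values=None):
        """Recursively build a {tag: text} dict for the whole tree."""
        if values is None:
            values = {}
        for child in element:
            if child.text and child.text.strip():
                values[child.tag] = child.text.strip()
            collect_values(child, values)
        return values
    config = collect_values(root)
    for tag, text in config.items():
        logger.info('%s = %s', tag, text)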
|
How to group words whose Levenshtein distance is more than 80 percent in Python
Question: Suppose I have a list:-
person_name = ['zakesh', 'oldman LLC', 'bikash', 'goldman LLC', 'zikash','rakesh']
I am trying to group the list in such a way so the [Levenshtein
distance](https://en.wikipedia.org/wiki/Levenshtein_distance) between two
strings is maximum. For finding out the ratio between two words, I am using a
python package [fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy).
**Examples :-**
>>> from fuzzywuzzy import fuzz
>>> combined_list = ['rakesh', 'zakesh', 'bikash', 'zikash', 'goldman LLC', 'oldman LLC']
>>> fuzz.ratio('goldman LLC', 'oldman LLC')
95
>>> fuzz.ratio('rakesh', 'zakesh')
83
>>> fuzz.ratio('bikash', 'zikash')
83
>>>
**My end goal:**
> My end goal is to group the words such that Levenshtein distance between
> them is more than 80 percent?
My list should look something like this :-
person_name = ['bikash', 'zikash', 'rakesh', 'zakesh', 'goldman LLC', 'oldman LLC'] because the distance between `bikash` and `zikash` is very high so they should be together.
Code:
I am trying to achieve this by sorting, where the key function should be
`fuzz.ratio`. The code below is not working, but this is the angle from which I
am approaching the problem.
from fuzzywuzzy import fuzz
combined_list = ['rakesh', 'zakesh', 'bikash', 'zikash', 'goldman LLC', 'oldman LLC']
combined_list.sort(key=lambda x, y: fuzz.ratio(x, y))
print combined_list
> Could anyone help me to combine the words so that Levenshtein distance
> between them is more than 80 percent?
Answer: This groups the names
from fuzzywuzzy import fuzz
combined_list = ['rakesh', 'zakesh', 'bikash', 'zikash', 'goldman LLC', 'oldman LLC']
combined_list.append('bakesh')
print('input names:', combined_list)
grs = list() # groups of names with distance > 80
for name in combined_list:
for g in grs:
if all(fuzz.ratio(name, w) > 80 for w in g):
g.append(name)
break
else:
grs.append([name, ])
print('output groups:', grs)
outlist = [el for g in grs for el in g]
print('output list:', outlist)
producing
input names: ['rakesh', 'zakesh', 'bikash', 'zikash', 'goldman LLC', 'oldman LLC', 'bakesh']
output groups: [['rakesh', 'zakesh', 'bakesh'], ['bikash', 'zikash'], ['goldman LLC', 'oldman LLC']]
output list: ['rakesh', 'zakesh', 'bakesh', 'bikash', 'zikash', 'goldman LLC', 'oldman LLC']
As you can see, the names are grouped correctly, but the order may not be the
one you desire.
|
how to write a matrix to a file in python with this format?
Question: I need to write a matrix to a file with the format `(i, j, a[i,j])` row by
row, but I don't know how to do it. I tried `np.savetxt(f, A,
fmt='%1d', newline='\n')`, but it writes only the matrix values and doesn't
write i, j!
Answer:
import numpy as np
a = np.arange(12).reshape(4,3)
a_with_index = np.array([idx+(val,) for idx, val in np.ndenumerate(a)])
np.savetxt('/tmp/out', a_with_index, fmt='%d')
writes to /tmp/out the contents
0 0 0
0 1 1
0 2 2
1 0 3
1 1 4
1 2 5
2 0 6
2 1 7
2 2 8
3 0 9
3 1 10
3 2 11
|
Python 2.7 : LookupError: unknown encoding: cp65001
Question: I have installed Python 2 (64-bit) on Windows 8.1 (64-bit) and wanted to check
the pip version, so I ran `pip --version`, but it gives the following error.
C:\Users\ADMIN>pip --version
Traceback (most recent call last):
File "c:\dev\python27\lib\runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "c:\dev\python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\dev\Python27\Scripts\pip.exe\__main__.py", line 5, in <module>
File "c:\dev\python27\lib\site-packages\pip\__init__.py", line 15, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "c:\dev\python27\lib\site-packages\pip\vcs\mercurial.py", line 10, in <module>
from pip.download import path_to_url
File "c:\dev\python27\lib\site-packages\pip\download.py", line 35, in <module>
from pip.utils.ui import DownloadProgressBar, DownloadProgressSpinner
File "c:\dev\python27\lib\site-packages\pip\utils\ui.py", line 51, in <module>
_BaseBar = _select_progress_class(IncrementalBar, Bar)
File "c:\dev\python27\lib\site-packages\pip\utils\ui.py", line 44, in _select_progress_class
six.text_type().join(characters).encode(encoding)
LookupError: unknown encoding: cp65001
Note: The same command works fine for Python 3. I have uninstalled both and
installed them again, but still no success.
Answer: The error means that Unicode characters that your script are trying to print
can't be represented using the current console character encoding.
Also try to run `set PYTHONIOENCODING=UTF-8` and then execute `pip --version`
again **without** reloading the terminal; if everything goes well, add
`PYTHONIOENCODING` as an environment variable with the value `UTF-8`. See the
[How to set the path and environment variables in
Windows](http://www.computerhope.com/issues/ch000549.htm) article to get info
on how to add a Windows environment variable.
Also you can try to install [win-unicode-
console](https://github.com/Drekin/win-unicode-console) with pip:
pip install win-unicode-console
Then reload your terminal and try to execute `pip --version`
However, you can also follow the suggestions from the [Windows cmd encoding
change causes Python
crash](http://stackoverflow.com/questions/878972/windows-cmd-encoding-change-causes-python-crash?answertab=active#tab-top)
answer, because you have the **same problem**.
|
Python parse XML from online web service
Question: I have been trying to use Python to parse an XML document that I get from a
webserver. The link to the XML is
<http://gagnaveita.vegagerdin.is/api/faerd2014_1>. It does not matter which
library I use, I always end up with really weird results; it doesn't parse the
file. Also, whenever I try to save the file, it doesn't display like XML at
all. Any idea how to parse a file like that?
Answer: When I open this link in a browser it shows me XML data,
but when I try to read it in a script I get a JSON file.
The same happens when I use the `wget` command (on Linux) - I get a JSON file.
Maybe you have the same situation.
Or maybe the browser gets JSON data but uses its own method to display it, and
that is why I see XML on screen :)
* * *
**EDIT:** I found the answer - the server checks the `Accept` header. If it
contains `xml`, then it sends an XML file.
Try with and without the `headers`:
import requests
headers = {
'Accept': 'application/xml',
}
r = requests.get('http://gagnaveita.vegagerdin.is/api/faerd2014_1', headers=headers)
print(r.content)
#print(r.json())
|
Python circular imports with inheritance
Question: I have a parent and child class, where a parent's method returns an instance
of the child. Both classes are in separate files `classA.py` and `classB.py`.
In order to avoid circular imports when I import `classA` I added the `classB`
import to the end of `classA.py` (as shown below). Everything worked well and
I was able to properly use `classA` in my code.
Now I'm having issues if I want to use ONLY `classB`. For example, if I run
from classB import ClassB
I get the following error:
File "classA.py", line 269, in <module>
from classB import ClassB
ImportError: cannot import name ClassB
If I run:
from classA import ClassA
from classB import ClassB
then everything works perfectly and I can use both classes. Is there a way to
only import `classB` or must I ALWAYS first import `classA` and then `classB`?
classA.py
class ClassA():
def __init__(self, ...):
....
def someMethod(self, ...):
...
return ClassB(...)
from classB import ClassB
classB.py
from classA import ClassA
class ClassB(ClassA):
def __init__(self, ...):
super(ClassB, self).__init__(...)
Answer: The obvious solution is to put both classes into the same file (the same
`module`). They are tightly related, so it makes perfect sense, and no "hacks"
(placing the import at the end of the file) or workarounds (a special order of
imports) will be needed.
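A minimal sketch of that layout, with both classes in a single hypothetical
`classes.py`:
    # classes.py
    class ClassA(object):
        def some_method(self):
            # ClassB is defined below in the same module, so no import is needed.
            return ClassB()
    class ClassB(ClassA):
        def __init__(self):
            super(ClassB, self).__init__()
    # client code elsewhere:
    # from classes import ClassA, ClassB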
Check also these sources: [How many Python classes should I put in one
file?](http://stackoverflow.com/questions/106896/how-many-python-classes-
should-i-put-in-one-file), [Is it considered Pythonic to have multiple classes
defined in the same
file?](http://programmers.stackexchange.com/questions/209982/is-it-considered-
pythonic-to-have-multiple-classes-defined-in-the-same-file).
|
How to dynamically modify CSS using Python?
Question: I'm creating a Flask-based web app. I need to modify the CSS of an element
dynamically.
To be more specific, I have a file that I want to read from Python. Based on
what I read from the file, I want to modify the CSS of an element. Just to
give you an idea,
with open('xyz') as f:
if f.readline() == 'foo':
$("#baz").css("visibility", "none")
I've tried using Opal with Ruby, however I'm unable to setup the 'opal-jquery'
stuff. Any guidance down either paths is appreciated.
Answer: If you are configuring the CSS as part of the backend rendering process, then
treat the style change as any other variable in your template:
from flask import Flask, render_template_string
app = Flask(__name__)
index_page = """<html><body><h2 style="color: {{ color }}">Index page</h2></body></html>"""
@app.route('/')
def index():
with open('/path/to/file.txt') as f:
color = f.readlines()[-1]
return render_template_string(index_page, color=color)
if __name__ == '__main__':
app.run()
If `file.txt` ends in, for example, `blue`, then the header will be blue. Same
goes for `red`, `yellow`, etc.
The fact that you're changing a CSS value instead of, perhaps, a value in a
table makes no difference to Jinja.
If, instead, you're trying to update the CSS value after the server has
rendered the page and sent it off to the client, then you need to use
JavaScript. In your backend, add an additional view to get that data from the
server:
@app.route('/lastline')
def last_line():
with open('/path/to/file.txt') as f:
color = f.readlines()[-1]
return color
Access this endpoint from the client with jQuery:
<script>
function updateCSS() {
$.ajax({
url: "{{ url_for('last_line') }}",
method: 'GET',
success: function(data) {
$("#baz").css("visibility", data);
}
});
}
setInterval(updateCSS, 1000);
</script>
This will check the file every 1 second and update your element's CSS
accordingly.
|
Matlab installation (LD_LIBRARY_PATH) messes up other library files
Question: I am trying to install Matlab on a Linux machine, but setting LD_LIBRARY_PATH
(as the installation requires) breaks other library files. I am not a Linux
expert, but I have tried several things and cannot get it working correctly. I
have even contacted Matlab support, got the issue elevated to the dev team,
and was basically told "haha sucks to suck". I have seen a few other people
online have had the same issue, but either their questions were never answered
or they had a slightly different problem and their solution didn't apply to
me.
**Installing on a VM running Ubuntu:**
I set LD_LIBRARY_PATH as the instructions say, then it breaks network files. I
can ping google.com, but I cannot nslookup google.com or visit it in a
browser. Nslookup provides this error:
nslookup: /usr/local/MATLAB/MATLAB_Runtime/v90/bin/glnxa64/libcrypto.so.1.0.0: no version information available (required by /usr/lib/libdns.so.100)
03-Feb-2016 11:32:22.361 ENGINE_by_id failed (crypto failure)
03-Feb-2016 11:32:22.362 error:25070067:DSO support routines:DSO_load:could not load the shared library:dso_lib.c:244:
03-Feb-2016 11:32:22.363 error:260B6084:engine routines:DYNAMIC_LOAD:dso not found:eng_dyn.c:447:
03-Feb-2016 11:32:22.363 error:2606A074:engine routines:ENGINE_by_id:no such engine:eng_list.c:418:id=gost
(null): dst_lib_init: crypto failure
The installation worked though (I can run my Java programs that reference
compiled Matlab functions). Unsetting LD_LIBRARY_PATH fixes the network files
but then I can't run programs anymore.
**Installing on EC2 instance:**
On an EC2 instance it does not break the network files (nslookup is fine).
Instead it messes up Python library files. Trying to use any aws cli command,
I get the error:
File "/usr/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 16, in <module>
import botocore.session
File "/usr/lib/python2.7/dist-packages/botocore/session.py", line 25, in <module>
import botocore.config
File "/usr/lib/python2.7/dist-packages/botocore/config.py", line 18, in <module>
from botocore.compat import six
File "/usr/lib/python2.7/dist-packages/botocore/compat.py", line 139, in <module>
import xml.etree.cElementTree
File "/usr/lib64/python2.7/xml/etree/cElementTree.py", line 3, in <module>
from _elementtree import *
ImportError: PyCapsule_Import could not import module "pyexpat"
Printing sys.path in Python shows lib-dynload is already there though, so that
doesn't seem to be the problem.
And when trying to run the program, I get:
Exception in thread "main" java.lang.LinkageError: libXt.so.6: cannot open shared object file: No such file or directory
at com.mathworks.toolbox.javabuilder.internal.DynamicLibraryUtils.dlopen(Native Method)
at com.mathworks.toolbox.javabuilder.internal.DynamicLibraryUtils.loadLibraryAndBindNativeMethods(DynamicLibraryUtils.java:134)
at com.mathworks.toolbox.javabuilder.internal.MWMCR.<clinit>(MWMCR.java:1529)
at VectorAddExample.VectorAddExampleMCRFactory.newInstance(VectorAddExampleMCRFactory.java:48)
at VectorAddExample.VectorAddExampleMCRFactory.newInstance(VectorAddExampleMCRFactory.java:59)
at VectorAddExample.VectorAddClass.<init>(VectorAddClass.java:62)
at com.mypackage.Example.main(Example.java:13)
I'm at a brick wall and really have no clue how to proceed.
Answer: Maybe something else already needs LD_LIBRARY_PATH set in order to work. Make
sure you prepend rather than overwrite:
export LD_LIBRARY_PATH=new/path:$LD_LIBRARY_PATH
**Edit** :
OK, if LD_LIBRARY_PATH was initially empty, this suggests that Matlab comes
with shared libraries that are incompatible with your system ones:
nslookup: /usr/local/MATLAB/MATLAB_Runtime/v90/bin/glnxa64/libcrypto.so.1.0.0: no version information available (required by /usr/lib/libdns.so.100)
suggests that `/usr/lib/libdns.so.100` needs `libcrypto.so.1.0.0`, which is
now being resolved to the one that comes with MATLAB, which is incompatible.
You can check the dependencies of a dll by
ldd /usr/lib/libcrypto.so.1.0.0
and hopefully you can find a configuration that keeps both MATLAB and your
system happy. Unfortunately, this may involve a lot of trial and error.
If there is no such configuration, you can try setting LD_LIBRARY_PATH only
when you run MATLAB:
LD_LIBRARY_PATH=$MATLAB_LD_LIBRARY_PATH matlab
**Edit 2** :
Well, for the Python issue, it seems to boil down to `pyexpat`, which is a
wrapper around the standard `expat` XML parser. Try doing (name guessed since
I don't have a Linux right now):
ldd /usr/local/lib/python2.7/site-packages/libpyexpat.so
and see what that depends on. Probably, it will be `libexpat.so`, which is now
being resolved to MATLAB's version.
|
pyspark json not working
Question: I am trying to parse JSON data
in Pyspark's map function. I am interested in extracting the field
"fees":481000 from the [json data](https://bitcoin.toshi.io/api/v0/blocks/395545)
on line #21. If I do this in plain Python (that is, without Pyspark), I can do
this with the following and it works!!!
import json
f=open("block_395545.json")
lines = f.read()
json_data = json.loads(lines)
fee_data = json_data["fees"]
print fee_data
But if I put this inside a map function as follows, it does not work :(
def get_tx_fee(line):
json_data = json.loads(line)
fee_data = json_data["fees"]
print fee_data
return fee_data
lines_rdd = sc.textFile("file:///block_395545.json")
tx_fee = lines_rdd.map(get_tx_fee)
Any idea how to fix this?
Here is the code after incorporating @zero323's suggestion:
from pyspark import SparkConf, SparkContext
import json
conf = SparkConf().setMaster("local").setAppName("bitcoin_TransactionFee_calcultor")
sc = SparkContext(conf=conf)
content_rdd = (sc.wholeTextFiles("file:///home/ubuntu/unix_practice/spark-example/block_json_395545.txt")
.map(lambda kv: kv[0])
.map(json.loads)
)
content_rdd.take(10)
and here are the errors after running spark-submit:
Traceback (most recent call last):
File "/home/ubuntu/unix_practice/spark-example/bctxfee_text.py", line 48, in <module>
content_rdd.take(10)
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1299, in take
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 916, in runJob
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 36, in deco
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1295, in takeUpToNumLeft
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1295, in takeUpToNumLeft
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
16/02/03 22:42:42 INFO SparkContext: Invoking stop() from shutdown hook
16/02/03 22:42:42 INFO SparkUI: Stopped Spark web UI at http://ec2-52-72-36-43.compute-1.amazonaws.com:4040
16/02/03 22:42:42 INFO DAGScheduler: Stopping DAGScheduler
16/02/03 22:42:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/02/03 22:42:42 INFO MemoryStore: MemoryStore cleared
16/02/03 22:42:42 INFO BlockManager: BlockManager stopped
16/02/03 22:42:42 INFO BlockManagerMaster: BlockManagerMaster stopped
16/02/03 22:42:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/02/03 22:42:42 INFO SparkContext: Successfully stopped SparkContext
16/02/03 22:42:42 INFO ShutdownHookManager: Shutdown hook called
16/02/03 22:42:42 INFO ShutdownHookManager: Deleting directory /tmp/spark-8c82aefa-b0f4-4365-b7a9-cdb7468aed68/pyspark-47daf835-b869-48cc-9c3d-b836f2ff26b8
16/02/03 22:42:42 INFO ShutdownHookManager: Deleting directory /tmp/spark-8c82aefa-b0f4-4365-b7a9-cdb7468aed68
Answer: `SparkContext.textFile` splits data by line, so it won't work with multiline
JSON. If the files are relatively small you can try
[`SparkContext.wholeTextFiles`](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext.wholeTextFiles)
instead:
(sc.wholeTextFiles("file:///block_395545.json")
.map(lambda kv: kv[1])
.map(json.loads))
If not, you can try to leverage the document structure to create a custom Hadoop
`InputFormat`, [use a custom
delimiter](http://stackoverflow.com/a/31836069/1560062), or try to fix the data
after loading using methods which can access multiple records, like
`mapPartitions` / `mapPartitionsWithIndex`.
|
Efficient Vector Bit-Data "Rotation" / "Rearrangement" in Memory [e.g. in Python, Numpy]
Question: How does one efficiently convert an 8-element-long array of e.g. uint8s
into its "rotated" counterpart, where e.g. the original 8 bits of the first
element are spread across all vector elements as the MSB, the bits of the second
element become the second MSB, and so on? Working and slow example:
import numpy as np
original = np.random.randint(0, 255, 8).astype(np.uint8) # some random example vector
[np.binary_repr(i, width=8) for i in original] # original data
=>['01111111',
'00100111',
'01110111',
'00100010',
'00111101',
'10010000',
'10000100',
'10101000']
rotated = np.packbits(np.unpackbits(original).reshape(-1,8).T) # <= SLOW ROTATION
[np.binary_repr(i, width=8) for i in rotated] # this is should be the result
=>['00000111',  # what were rows originally
'10100000', # are now columns
'11111001',
'10101100',
'10001001',
'11101010',
'11110000',
'11101000']
So in the end, I want to reorder the layout of how the BITS are "filed" into RAM.
As you can see I have a working example in Numpy; it is not super slow (here ~
21 µs), however I would like to do this exercise with data structures on the
order of ~2k * 1 million bits. Therefore the usage of the numpy or C bool dtype is
too wasteful (factor 8 overhead).
Any C bit shuffling magic or SSE instructions or general answers are welcome!
Answer: I suggest looking at the source provided
[here](http://programming.sirrida.de/calcperm.php)
In particular, calcperm.cpp. This is a simple bit permutation problem.
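One well-known member of that family is the 8x8 bit-matrix transpose, which is the same operation your `unpackbits`/`packbits` snippet performs. As a rough, unvectorized sketch of the bit trick (useful for checking correctness, not a drop-in speed-up for the numpy version), it can be written in pure Python like this:

    def transpose8(byte_vals):
        # Pack the 8 input bytes into one 64-bit integer, row 0 in the most
        # significant byte, the MSB of each byte being the leftmost bit of its row.
        x = 0
        for b in byte_vals:
            x = (x << 8) | b
        # Swap bits in 2x2 blocks, then 4x4 blocks, then 8x8 blocks.
        x = (x & 0xAA55AA55AA55AA55) | ((x & 0x00AA00AA00AA00AA) << 7) | ((x >> 7) & 0x00AA00AA00AA00AA)
        x = (x & 0xCCCC3333CCCC3333) | ((x & 0x0000CCCC0000CCCC) << 14) | ((x >> 14) & 0x0000CCCC0000CCCC)
        x = (x & 0xF0F0F0F00F0F0F0F) | ((x & 0x00000000F0F0F0F0) << 28) | ((x >> 28) & 0x00000000F0F0F0F0)
        # Unpack the columns: output byte j holds bit j (counted from the MSB)
        # of every input byte.
        return [(x >> (8 * i)) & 0xFF for i in range(7, -1, -1)]

For real speed on ~2k x 1M bits you would apply the same masking and shifting to whole 64-bit words at a time, in C or with numpy's bitwise operators on uint64 views, rather than looping in Python.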
|
How to fetch a file name automatically into a data frame instead of manually specifying it
Question: I am trying to automate my Spark code in Scala or Python, and here is what I am
trying to do.
The format of the files in the S3 bucket is filename_2016_02_01.csv.gz.
From the S3 bucket the Spark code should be able to pick the file name and create
a DataFrame. For
example Dataframe=sqlContext.read.format("com.databricks.spark.csv").options(header="true").options(delimiter=",").options(inferSchema="true").load("s3://bucketname/filename_2016-01-29.csv.gz")
So every day when I run the job it should pick that particular day's file
and create a DataFrame instead of me specifying the file name.
Any Idea on how to write code for this condition ?
Thanks in Advance.
Answer: If I understood you correctly, you want the file name to change automatically
based on that day's date. If that's the case,
here is a Scala solution.
I'm using [joda-time](http://mvnrepository.com/artifact/joda-time) to generate
that date.
import org.joda.time.format.DateTimeFormat
import org.joda.time.{DateTimeZone, DateTime}
...
val today = DateTime.now(DateTimeZone.UTC).toString(DateTimeFormat.forPattern("yyyy_MM_dd"))
val fileName = "filename_" + today + ".csv.gz"
...
Python solution:
from datetime import datetime
today = datetime.utcnow().strftime('%Y_%m_%d')
file_name = 'filename_' + today + '.csv.gz'
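Either way, the generated name can then be plugged into the load call from your question (the bucket name below is just the placeholder from your example):

    df = (sqlContext.read.format("com.databricks.spark.csv")
          .options(header="true", delimiter=",", inferSchema="true")
          .load("s3://bucketname/" + file_name))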
|
Python - check for multiple items in 2d array
Question: I have looked around but cannot find anybody asking what I'm trying to do:
Let me give you a bit of background:
I am making a game in Python where the player moves around the grid searching
for treasure chests, and I am trying to randomly generate 10 chest locations
in an 8x8 grid.
My grid was created by doing the following:
grid = []
I then fill the grid with 'subgrids'
Because my game is based around a grid design, this allowed me to separate the
individual rows.
for i in range(8):
grid.append([])
This makes me 8 empty lists inside the main 'grid' list.
What I am trying to do next is randomly generate the chest locations, and map
them to another list called 'chestLocations', which also uses 10 subgrids (one
for each unique chest location). This is so I can create Y and X variables,
which are relative to the grid list.
Here is my GenerateChestLocations() function:
def GenerateChestLocations():
global chestY
global chestX
counter = 10
chestY = []
chestX = []
while counter > 0:
posY = random.randint(0,7)
posX = random.randint(1,8)
value = GetValue(posY,posX)
if value == gridChar:
pass
elif value == playerChar:
continue
chestY.append(posY)
chestX.append(posX)
counter -= 1
for a in range(len(chestY)):
chestLocations[a].append(chestY[a])
visitedChests[a].append(chestY[a])
for i in range(len(chestX)):
chestLocations[i].append(chestX[i])
visitedChests[i].append(chestX[i])
for subItem in range(len(visitedChests)):
visitedChests[subItem].append(0)
return
(BTW, the variables used in this are defined at the start of my program, and
are as follows:)
The GetValue() function just returns the value of the grid item for those Y
and X coordinates.
visitedChests is another grid, which needs to be an exact duplicate of
chestLocations, but with an extra item in each 'subgrid' to hold the number of
times that the user has landed on the chest.
My problem is I cannot work out how to detect whether the randomly generated
posY and posX integers already exist in the chestLocations list.
How do I create the detection so that if an item with the same
coordinates already exists, it will just 'continue' to run the whole while loop again?
Thanks for reading btw ;)
Answer: Use the stdlib:
import random
from itertools import product
num_bandits = 5
num_chests = 10
all_locns = list(product(range(0,8), range(1,9)))
chest_locns = random.sample(all_locns, num_chests)
unused_locns = [loc for loc in all_locns if loc not in chest_locns]
bandit_locns = random.sample(unused_locns, num_bandits)
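Because `random.sample` draws without replacement, the chest locations are guaranteed to be distinct, so no duplicate check is needed. If you prefer to keep your original `while` loop instead, the detection you asked about is just a membership test on (y, x) pairs; a rough sketch:

    import random

    chest_positions = []                      # list of (posY, posX) tuples
    while len(chest_positions) < 10:
        pos = (random.randint(0, 7), random.randint(1, 8))
        if pos in chest_positions:            # same coordinates already drawn -> try again
            continue
        chest_positions.append(pos)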
|
How to combine records based on date using python connected components?
Question: I have a list of records (person_id, start_date, end_date) as follows:
person_records = [['1', '08/01/2011', '08/31/2011'],
['1', '09/01/2011', '09/30/2011'],
['1', '11/01/2011', '11/30/2011'],
['1', '12/01/2011', '12/31/2011'],
['1', '01/01/2012', '01/31/2012'],
['1', '03/01/2012', '03/31/2012']]
The records for each person are sorted in an ascending order of start_date.
The periods are consolidated by combining the records based on the dates and
recording the start_date of the first period as the start date and the
end_date of the last period as the end date. BUT, if the time between the end
of one period and the start of the next is 32 days or less, we should treat
this as a continuous period. Otherwise, we treat this as two periods:
consolidated_person_records = [['1', '08/01/2011', '09/30/2011'],
['1', '11/01/2011', '03/31/2012']]
Is there any way to do this using the python connected components?
Answer: I thought about your question, and I originally wrote a routine that would map
the date intervals into a 1D binary array, where each entry in the array is a
day, and consecutive days are consecutive entries. With this data structure,
you can perform dilation and erosion to fill in small gaps, thus merging the
intervals, and then map the consolidated intervals back into date ranges. Thus
we use standard raster connected components logic to solve your problem, as
per your idea (a graph-based connected components could work as well...)
This works fine, and I can post the code if you are really interested, but
then I wondered what the advantages are of the former approach over the simple
routine of just iterating through the (pre-sorted) date ranges and merging the
next into the current if the gap is small.
Here is the code for the simple routine, and it takes about 120 microseconds
to run using the sample data. If you expand the sample data by repeating it
10,000 times, this routine takes about 1 sec on my computer.
When I timed the morphology based solution, it was about 2x slower. It might
work better under certain circumstances, but I would suggest we try simple
first, and see if there's a real problem that requires a different algorithmic
approach.
from datetime import datetime
from datetime import timedelta
import numpy as np
The sample data provided in the question:
SAMPLE_DATA = [['1', '08/01/2011', '08/31/2011'],
['1', '09/01/2011', '09/30/2011'],
['1', '11/01/2011', '11/30/2011'],
['1', '12/01/2011', '12/31/2011'],
['1', '01/01/2012', '01/31/2012'],
['1', '03/01/2012', '03/31/2012'],
['2', '11/11/2011', '11/30/2011'],
['2', '12/11/2011', '12/31/2011'],
['2', '01/11/2014', '01/31/2014'],
['2', '03/11/2014', '03/31/2014']]
The simple approach:
def simple_method(in_data=SAMPLE_DATA, person='1', fill_gap_days=31, printit=False):
date_format_str = "%m/%d/%Y"
dat = np.array(in_data)
dat = dat[dat[:, 0] == person, 1:] # just this person's data
# assume date intervals are already sorted by start date
new_intervals = []
cur_start = None
cur_end = None
gap_days = timedelta(days=fill_gap_days)
for (s_str, e_str) in dat:
dt_start = datetime.strptime(s_str, date_format_str)
dt_end = datetime.strptime(e_str, date_format_str)
if cur_end is None:
cur_start = dt_start
cur_end = dt_end
continue
else:
if cur_end + gap_days >= dt_start:
# merge, keep existing cur_start, extend cur_end
cur_end = dt_end
else:
# new interval, save previous and reset current to this
new_intervals.append((cur_start, cur_end))
cur_start = dt_start
cur_end = dt_end
# make sure final interval is saved
new_intervals.append((cur_start, cur_end))
if printit:
print_it(person, new_intervals, date_format_str)
return new_intervals
And here's the simple pretty printing function to print the ranges.
def print_it(person, consolidated_ranges, fmt):
for (s, e) in consolidated_ranges:
print(person, s.strftime(fmt), e.strftime(fmt))
Running in ipython as follows. Note that printing the result can be turned off
for timing the computation.
In [10]: _ = simple_method(printit=True)
1 08/01/2011 09/30/2011
1 11/01/2011 03/31/2012
Running in ipython with %timeit macro:
In [8]: %timeit simple_method(in_data=SAMPLE_DATA)
10000 loops, best of 3: 118 µs per loop
In [9]: %timeit simple_method(in_data=SAMPLE_DATA*10000)
1 loops, best of 3: 1.06 s per loop
[EDIT 8 Feb 2016: To make a long answer longer...] As I prefaced in my
response, I did create a morphological / 1D connected components version and
in my timing it was about 2x slower. But for the sake of completeness, I'll
show the morphological method, and maybe others will have insight on if
there's a big area for speed-up left somewhere in it.
#using same imports as previous code with one more
import calendar as cal
def make_occupancy_array(start_year, end_year):
"""
Represents the time between the start and end years, inclusively, as a 1-D array
of 'pixels', where each pixel corresponds to a day. Consecutive days are thus
mapped to consecutive pixels. We can perform morphology on this 1D array to
close small gaps between date ranges.
"""
years_days = [(yr, 366 if cal.isleap(yr) else 365) for yr in range(start_year, end_year+1)]
YD = np.array(years_days) # like [ (2011, 365), (2012, 366), ... ] in ndarray form
total_num_days = YD[:, 1].sum()
occupancy = np.zeros((total_num_days,), dtype='int')
return YD, occupancy
With the occupancy array to represent the time intervals, we need two
functions to map from dates to positions in the array and the inverse.
def map_date_to_position(dt, YD):
"""
Maps the datetime value to a position in the occupancy array
"""
# the start position is the offset to day 1 of dt1's year,
# plus the day of year - 1 for dt1 (day of year is 1-based indexed)
yr = dt.year
assert yr in YD[:, 0] # guard...YD should include all years for this person's dates
position = YD[YD[:, 0] < yr, 1].sum() # the sum of the days in year before this year
position += dt.timetuple().tm_yday - 1
return position
def map_position_to_date(pos, YD):
"""
Inverse of map_date_to_position, this maps a position in the
occupancy array back to a datetime value
"""
yr_offsets = np.cumsum(YD[:, 1])
day_offsets = yr_offsets - pos
idx = np.flatnonzero(day_offsets > 0)[0]
year = YD[idx, 0]
day_of_year = pos if idx == 0 else pos - yr_offsets[idx-1]
# construct datetime as first of year plus day offset in year
dt = datetime.strptime(str(year), "%Y")
dt += timedelta(days=int(day_of_year)+1)
return dt
The following function fills the relevant part of the occupancy array given
start and end dates (inclusive) and optionally extends the end of the range by
a gap-filling margin (like 1-sided dilation).
def set_occupancy(dt1, dt2, YD, occupancy, fill_gap_days=0):
"""
For a date range starting dt1 and ending, inclusively, dt2,
sets the corresponding 'pixels' in occupancy vector to 1.
If fill_gap_days > 0, then the end 'pixel' is extended
(dilated) by this many positions, so that we can fill
the gaps between intervals that are close to each other.
"""
pos1 = map_date_to_position(dt1, YD)
pos2 = map_date_to_position(dt2, YD) + fill_gap_days
occupancy[pos1:pos2] = 1
Once we have the consolidated intervals in the occupancy array, we need to
read them back out into date intervals, optionally performing 1-sided erosion
if we'd previously done gap filling.
def get_occupancy_intervals(OCC, fill_gap_days=0):
"""
Find the runs in the OCC array corresponding
to the 'dilated' consecutive positions, and then
'erode' back to the correct end dates by subtracting
the fill_gap_days.
"""
starts = np.flatnonzero(np.diff(OCC) > 0) # where runs of nonzeros start
ends = np.flatnonzero(np.diff(OCC) < 0) # where runs of nonzeros end
ends -= fill_gap_days # erode back to original length prior to dilation
return [(s, e) for (s, e) in zip(starts, ends)]
Putting it all together...
def morphology_method(in_data=SAMPLE_DATA, person='1', fill_gap_days=31, printit=False):
date_format_str = "%m/%d/%Y"
dat = np.array(in_data)
dat = dat[dat[:, 0] == person, 1:] # just this person's data
# for the intervals of this person, get starting and ending years
# we assume the data is already sorted
#start_year = datetime.strptime(dat[0, 0], date_format_str)
#end_year = datetime.strptime(dat[-1, 1], date_format_str)
start_times = [datetime.strptime(d, date_format_str) for d in dat[:, 0]]
end_times = [datetime.strptime(d, date_format_str) for d in dat[:, 1]]
start_year = start_times[0].year
end_year = end_times[-1].year
# create the occupancy array, dilated so that each interval
# is extended by fill_gap_days to 'fill in' the small gaps
# between intervals
YD, OCC = make_occupancy_array(start_year, end_year)
for (s, e) in zip(start_times, end_times):
set_occupancy(s, e, YD, OCC, fill_gap_days)
# return the intervals from OCC after having filled gaps,
# and trim end dates back to original position.
consolidated_pos = get_occupancy_intervals(OCC, fill_gap_days)
# map positions back to date-times
consolidated_ranges = [(map_position_to_date(s, YD), map_position_to_date(e, YD)) for
(s, e) in consolidated_pos]
if printit:
print_it(person, consolidated_ranges, date_format_str)
return consolidated_ranges
|
audio over python tcp error
Question: I am writing simple Python TCP code to send over a wav file; however, I seem
to be getting stuck. Can someone explain why my code is not working correctly?
Server Code
import socket, time
import scipy.io.wavfile
import numpy as np
def Main():
host = ''
port = 3333
MAX = 65535
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host,port))
s.listen(1)
print "Listening on port..." + str(port)
c, addr = s.accept()
print "Connection from: " + str(addr)
wavFile = np.array([],dtype='int16')
i = 0
while True:
data = c.recvfrom(MAX)
if not data:
break
# print ++i
# wavfile = np.append(wavfile,data)
print data
timestr = time.strftime("%y%m%d-%h%m%s")
print timestr
# wavF = open(timestr + ".wav", "rw+")
scipy.io.wavfile.write(timestr + ".wav",44100, data)
c.close()
if __name__ == '__main__':
Main()
Client Code
host, port = "", 3333
import sys , socket
import scipy.io.wavfile
# create a tcp/ip socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# connect the socket to the port where the server is listening
server_address = (host, port)
print >>sys.stderr, 'connecting to %s port %s' % server_address
input_data = scipy.io.wavfile.read('Voice 005.wav',)
audio = input_data[1]
sock.connect(server_address)
print 'have connected'
try:
# send data
sock.sendall(audio)
print "sent" + str(audio)
sock.close()
except:
print('something failed sending data')
finally:
print >>sys.stderr, 'closing socket'
print "done sending"
sock.close()
Please help, someone. I want to send an audio file to my embedded device with
TCP since it is crucial data to be processed on the embedded device.
Answer: Not sure why you go to the trouble of using `scipy` and `numpy` for this,
since you can just use the `array` module to create binary arrays that will
hold the wave file. Can you adapt and use the simple client/server example
below?
(Note: I've copy/pasted a small Windows sound file called 'tada.wav' to the
same folder to use with the test scripts.)
Code for the server script:
import socket
HOST = '' # Symbolic name meaning all available interfaces
PORT = 50007 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
print('Listening...')
conn, addr = s.accept()
print('Connected by', addr)
outfile = open("newfile.wav", 'ab')
while True:
data = conn.recv(1024)
if not data: break
outfile.write(data)
conn.close()
outfile.close()
print ("Completed.")
Code for the client:
from array import array
from os import stat
import socket
arr = array('B') # create binary array to hold the wave file
result = stat("tada.wav") # sample file is in the same folder
f = open("tada.wav", 'rb')
arr.fromfile(f, result.st_size) # using file size as the array length
print("Length of data: " + str(len(arr)))
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send(arr)
print('Finished sending...')
s.close()
print('done.')
This works for me (though only tested by running both on localhost) and I end
up with a second wave file that's an exact copy of the one sent by the client
through the socket.
|
Having trouble changing background color in Python
Question: I'm practicing coding in Python. Here is what I am testing and messing around
with.
import tkinter as tk
from tkinter import ttk
class gui_programming(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.pack(side="left", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (StartPage, Page1):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
frame =
frame.config(bg="red")
class Page1(tk.Frame):
def __init__(selfself, parent, controller):
tk.Frame.__init__(self, parent)
app = gui_programming()
app.geometry("400x200+10+10")
app.mainloop()
However, I am curious and stumped as to what frame should be set to so that I
can change the background color to something other than the default.
Answer: `frame` shouldn't be set to anything. The object itself is a frame:
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.config(bg="red")
|
Can A Python Program Open An Text File On The Web?
Question: I am making a program, and was wondering if a .txt file can be hosted on the
web and be accessed by the open() function. Does anyone know about this?
Answer: You can't use `open()`, but you could use the [`requests`](http://docs.python-
requests.org/en/master/) library to do it.
import requests
url_to_txt_file = ""
print(requests.get(url_to_txt_file).text)
Alternatively, you could use
[`urllib.request`](https://docs.python.org/3.5/library/urllib.request.html#module-
urllib.request).
import urllib.request
url_to_txt_file = ""
print(urllib.request.urlopen(url_to_txt_file).read())
|
Value error while generating indexes using PCA in scikit-learn
Question: Using the following function I am trying to generate an index from the data:
Function:
import numpy as np
from sklearn.decomposition import PCA
def pca_index(data,components=1,indx=1):
corrs = np.asarray(data.cov())
pca = PCA(n_components = components).fit(corrs)
trns = pca.transform(data)
index=np.dot(trns[0:indx],pca.explained_variance_ratio_[0:indx])
return index
Index: generation from principal components
index = pca_index(data=mydata,components=3,indx=2)
The following error is generated when I call the function:
Traceback (most recent call last):
File "<ipython-input-411-35115ef28e61>", line 1, in <module>
index = pca_index(data=mydata,components=3,indx=2)
File "<ipython-input-410-49c0174a047a>", line 15, in pca_index
index=np.dot(trns[0:indx],pca.explained_variance_ratio_[0:indx])
ValueError: shapes (2,3) and (2,) not aligned: 3 (dim 1) != 2 (dim 0)
Can anyone help with the error?
According to my understanding there is some error at the following point when
I am passing the subscript indices as a variable (indx):
trns[0:indx], pca.explained_variance_ratio_[0:indx]
Answer: In `np.dot` you are trying to multiply a matrix having dimensions (2,3) with a
matrix having dimensions (2,), i.e. a vector.
However, you can only multiply NxM to MxP, e.g. (3,2) to (2,1) or (2,3) to
(3,1).
In your example the second matrix has dimensions of (2,) which, in numpy
terms, is similar but not the same as (2,1). You can reshape a vector into a
matrix with `vector.reshape([2,1])`.
You might also transpose your first matrix, thus converting its dimensions from
(2,3) to (3,2).
However, make sure that you multiply the appropriate matrices, as the result will
differ from what you might expect.
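A quick illustration of the shape rules with throwaway arrays (not your PCA data):

    import numpy as np

    a = np.ones((2, 3))    # shape (2, 3)
    b = np.ones(2)         # shape (2,)

    # np.dot(a, b)         # ValueError: shapes (2,3) and (2,) not aligned
    np.dot(a.T, b)         # OK: (3, 2) . (2,)   -> shape (3,)
    np.dot(b, a)           # OK: (2,)   . (2, 3) -> shape (3,)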
|
turtle module seems to lack turtle.demo()
Question: When I try to do the following in python 2.7.8 shell:
>>> import turtle
>>> turtle.demo()
I get the following error:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
turtle.demo()
AttributeError: 'module' object has no attribute 'demo'
This is the first time in my life I have tried using turtle, and I assumed it would be
installed when installing Python. I can't find it anywhere on my computer and
don't know how to get it in the correct place. Can anybody please explain what
I have to do to get at least turtle.demo() working?
Answer: Since `demo1()` and `demo2()` are defined under an `if __name__ == "__main__":`
statement, don't try to run them from inside a Python shell; instead, run the
turtle.py library as if it were a standalone Python program:
python C:\Python27\Lib\lib-tk\turtle.py
that should invoke the two demonstration routines, one after the other. Worth
watching just to see the _undo_ feature in action.
|
eliminate text after certain character in python pipeline- with slice?
Question: This is a short script I've written to refine and validate a large dataset
that I have.
# The purpose of this script is the refinement of the job data attained from the
# JSI as it is rendered by the `csv generator` contributed by Luis for purposes
# of presentation on the dashboard map.
import csv
# The number of columns
num_headers = 9
# Remove invalid characters from records
def url_escaper(data):
for line in data:
yield line.replace('&','&')
# Be sure to configure input & output files
with open("adzuna_input_THRESHOLD.csv", 'r') as file_in, open("adzuna_output_GO.csv", 'w') as file_out:
csv_in = csv.reader( url_escaper( file_in ) )
csv_out = csv.writer(file_out)
# Get rid of rows that have the wrong number of columns
# and rows that have only whitespace for a columnar value
for i, row in enumerate(csv_in, start=1):
if not [e for e in row if not e.strip()]:
if len(row) == num_headers:
csv_out.writerow(row)
else:
print "line %d is malformed" % i
I have one field that is structured like so:
`finance|statistics|lisp`
I've seen ways to do this using other utilities like
[R](http://stackoverflow.com/questions/17847189/remove-text-after-final-
period-in-string), but I want to ideally achieve the same effect within the
scope of this python code.
Maybe I can iterate over all the characters of all the columnar values,
perhaps as a list, and if I see a `|` I can dispose of the `|` and all the
text that follows it within the scope of the column value.
I think surely it can be achieved with slices as they do
[here](http://stackoverflow.com/questions/17891443/how-to-delete-everything-
after-a-certain-character-in-a-string), but I don't quite understand how the
indices with slices work, and I can't see how I could include this process
harmoniously within the cascade of the current script pipeline.
With regex I guess it's something like this
(?:|)(.*)
Answer: Why not use string's `split` method?
In[4]: 'finance|statistics|lisp'.split('|')[0]
Out[4]: 'finance'
It does not fail with an exception when you do not have the separator character in
the string either:
In[5]: 'finance/statistics/lisp'.split('|')[0]
Out[5]: 'finance/statistics/lisp'
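To fold that into the pipeline from the question, rewrite the column before the row is written out. Assuming, purely for illustration, that the pipe-delimited field is the column at index 3:

    for i, row in enumerate(csv_in, start=1):
        if not [e for e in row if not e.strip()]:
            if len(row) == num_headers:
                row[3] = row[3].split('|')[0]  # keep only the text before the first '|'
                csv_out.writerow(row)
            else:
                print "line %d is malformed" % i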
|
ConfigParser and Scrapy: NoSectionError
Question: I have an issue with my Scrapy crawler when I launch it.
I used ConfigParser in order to have a small config.ini to set my table name,
which I create each time I launch the crawler to scrape. That is a basic way to
scrape, but I'm still a noob with Scrapy and Python.
I get the following errors:
File "c:\python27\lib\ConfigParser.py", line 279, in options
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'SectionOne'
2016-02-04 15:10:57 [twisted] CRITICAL:
Here is my config.py:
import ConfigParser
import os
Config = ConfigParser.ConfigParser()
Config.read(os.getcwd() + '/config.ini')
def ConfigSectionMap(section):
dict1 = {}
options = Config.options(section)
for option in options:
try:
dict1[option] = Config.get(section, option)
if dict1[option] == -1:
DebugPrint("skip: %s" % option)
except:
print("exception on %s!" % option)
dict1[option] = None
return dict1
Here is my config.ini
[SectionOne]
nom_table: Seche_cheveux
Here is my pipeline.py:
import sqlite3
from datetime import date, datetime
import os
from config import *
TableName = ConfigSectionMap("SectionOne")['nom_table']
print TableName
class sqlite3Pipeline(object):
def __init__(self):
#initialisation de la base et connexion
try:
#self.setupDBCon()
self.con = sqlite3.connect(os.getcwd() + '/db.sqlite')
self.cur = self.con.cursor()
self.table_name = TableName
self.createTables()
except sqlite3.Error as e:
raise e
def createTables(self):
self.createMgTable()
def closeDB(self):
self.con.close()
def __del__(self):
self.closeDB()
def createMgTable(self, table_name):
self.cur.execute("CREATE TABLE IF NOT EXISTS" + table_name + "(\
nom TEXT UNIQUE, \
url TEXT UNIQUE, \
prix TEXT, \
stock TEXT, \
revendeur TEXT, \
livraison TEXT, \
img TEXT UNIQUE, \
detail TEXT UNIQUE, \
bullet TEXT UNIQUE, \
created_at DATE \
)")
def process_item(self, item, spider):
self.storeInDb(item)
return item
def storeInDb(self,item):
etc....
etc...
Please can you tell me how I can handle ConfigParser with the Scrapy crawler? And,
if possible, tell me what I'm doing wrong.
For info, when I start each file separately, all the print functions I included work
well.
Answer: `ConfigParser.read()` tends to fail silently when the config file is not
found. There's probably a change in the current working directory
(`os.getcwd()`) that prevents it from finding `config.ini`.
If your `config.ini` file is next to your `config.py`, you can use this
instead:
Config.read(os.path.join(os.path.dirname(__file__), 'config.ini'))
|
Wrapping a commandline program with pstream
Question: I want to be able to read and write to a program from C++. It seems like
pstream can do the job, but I find the documentation difficult to understand
and have not yet found an example.
I have set up the following minimum working example. This opens Python, which
in turn (1) prints `hello`, (2) asks for input, and (3) prints `hello2`:
#include <iostream>
#include <cstdio>
#include "pstream.h"
using namespace std;
int main(){
std::cout << "start";
redi::pstream proc(R"(python -c "if 1:
print 'hello'
raw_input()
print 'hello2'
")");
std::string line;
//std::cout.flush();
while (std::getline(proc.out(), line)){
std::cout << " " << "stdout: " << line << '\n';
}
std::cout << "end";
return 0;
}
If I run this with the "ask input" part commented out (i.e. `#raw_input()`), I
get as output:
start stdout: hello
stdout: hello2
end
But if I leave the "ask input" part in (i.e. uncommented `raw_input()`), all I
get is blank, not even `start`, but rather what seems like a program waiting
for input.
My question is, how can one interact with this pstream, how can one establish
a little read-write-read-write session? Why does the program not even show
`start` or the first `hello`?
**EDIT:**
I don't seem to be making much progress. I don't think I really grasp what is
going on. Here are some further attempts with commentary.
**1) It seems like I can successfully feed raw_input**
I prove this by writing to the child's stderr:
int main(){
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
sys.stdin.flush()
sys.stderr.write('hello2 '+ a)
sys.stderr.flush()
")");
string line;
getline(proc.out(), line);
cout << line << endl;
proc.write("foo",3).flush();
cout << "end" << endl;
return 0;
}
output:
start
hello
end
hello2 foo
But it locks if I try to read from the stdout again
int main(){
...
a = raw_input()
sys.stdin.flush()
print 'hello2', a
sys.stdout.flush()
")");
...
proc.write("foo",3).flush();
std::getline(proc.out(), line);
cout << line << endl;
...
}
output
start
hello
**2) I can't get the readsome approach to work at all**
int main(){
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
sys.stdin.flush()
")");
std::streamsize n;
char buf[1024];
while ((n = proc.out().readsome(buf, sizeof(buf))) > 0)
std::cout.write(buf, n).flush();
proc.write("foo",3).flush();
cout << "end" << endl;
return 0;
}
output
start
end
Traceback (most recent call last):
File "<string>", line 5, in <module>
IOError: [Errno 32] Broken pipe
The output contains a Python error; it seems like the C++ program finished
while the Python pipe was still open.
**Question:** Can anyone provide a working example of how this sequential
communication should be coded?
Answer: > But if I leave the "ask input" part in (i.e. uncommented raw_input()), all I
> get is blank, not even start, but rather what seems like a program waiting
> for input.
The Python process _is_ waiting for input, from its stdin, which is connected
to a pipe in your C++ program. If you don't write to the pstream then the
Python process will never receive anything.
The reason you don't see "start" is that Python thinks it's not connected to a
terminal, so it doesn't bother flushing every write to stdout. Try `import
sys` and then `sys.stdout.flush()` after printing in the Python program. If
you need it to be interactive then you need to flush regularly, or set stdout
to non-buffered mode (I don't know how to do that in Python).
You should also be aware that just using `getline` in a loop will block
waiting for more input, and if the Python process is _also_ blocking waiting
for input you have a deadlock. See the usage example on the [pstreams home
page](http://pstreams.sourceforge.net/#usage) showing how to use `readsome()`
for non-blocking reads. That will allow you to read as much as is available,
process it, then send a response back to the child process, so that it
produces more output.
**EDIT:**
> I don't think I really grasp what is going on.
Your problems are not really problems with pstreams or python, you're just not
thinking through the interactions between two communicating processes and what
each is waiting for.
Get a pen and paper and draw state diagrams or some kind of chart that shows
where the two processes have got to, and what they are waiting for.
> **1) It seems like I can successfully feed raw_input**
Yes, but you're doing it wrong. `raw_input()` reads a line, you aren't writing
a line, you're writing three characters, `"foo"`. That's not a line.
That means the python process keeps trying to read from its stdin. The parent
C++ process writes the three characters then exits, running the `pstream`
destructor which closes the pipes. Closing the pipes causes the Python process
to get EOF, so it stops reading (after only getting three characters, not a
whole line). The Python process then prints to stderr, which is connected to
your terminal, because you didn't tell the `pstream` to attach a pipe to the
child's stderr, and so you see that output.
> But it locks if I try to read from the stdout again
Because now the parent C++ process doesn't exit, so doesn't close the pipes,
so the child Python process doesn't read EOF and keeps waiting for more input.
The parent C++ process is _also_ waiting for input, but that will never come.
If you want to send a line to be read by `raw_input()` then write a newline!
This works fine, because it sends a newline, which causes the Python process
to get past the `raw_input()` line:
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
print 'hello2', a
sys.stdout.flush()
")");
string line;
getline(proc, line);
cout << line << endl;
proc << "foo" << endl; // write to child FOLLOWED BY NEWLINE!
std::getline(proc, line); // read child's response
cout << line << endl;
cout << "end" << endl;
N.B. you don't need to use `proc.out()` because you haven't attached a pipe to
the process' stderr, so it always reads from `proc.out()`. You would only need
to use that when reading from _both_ stdout and stderr, where you would use
`proc.out()` and `proc.err()` to distinguish them.
> **2) I can't get the readsome approach to work at all**
Again, you have the same problem that you're only writing three characters and
so the Python processes waits forever. The C++ process is trying to read as
well, so it also waits forever. Deadlock.
If you fix that by sending a newline (as shown above) you have another
problem: the C++ program will run so fast that it will get to the `while` loop
that calls `readsome` before the Python process has even started. It will find
nothing to read in the pipe, and so the first `readsome` call returns 0 and
you exit the loop. Then the C++ program gets to the second `while` loop, and
the child python process _still_ hasn't started printing anything yet, so that
loop also reads nothing and exits. Then the whole C++ program exits, and
finally the Python child is ready to run and tries to print "hello", but by
then its parent is gone and it can't write to the pipe.
You need `readsome` to keep trying if there's nothing to read the _first_ time
you call it, so it waits long enough for the first data to be readable.
For your simple program you don't really need `readsome` because the Python
process only writes a single line at a time, so you can just read it with
`getline`. But if it might write more than one line you need to be able to
keep reading until there's no more data coming, which `readsome` can do (it
reads only if there's data available). But you also need some way to tell
whether more data is still going to come (maybe the child is busy doing some
calculations before it sends more data) or if it's really finished. There's no
general way to know that, it depends on what the child process is doing. Maybe
you need the child to send some sentinel value, like `"---END OF
RESPONSE---"`, which the parent can look for to know when to stop trying to
read more.
For the purposes of your simple example, let's just assume that if `readsome`
gets more than 4 bytes it received the whole response:
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
sys.stdin.flush()
print 'hello2', a
sys.stdout.flush()
")");
string reply;
streamsize n;
char buf[1024];
while ((n = proc.readsome(buf, sizeof(buf))) != -1)
{
if (n > 0)
reply.append(buf, n);
else
{
// Didn't read anything. Is that a problem?
// Need to try to process the content of 'reply' and see if
// it's what we're expecting, or if it seems to be incomplete.
//
// Let's assume that if we've already read more than 4 characters
// it's a complete response and there's no more to come:
if (reply.length() > 3)
break;
}
}
cout << reply << std::flush;
proc << "foo" << std::endl;
while (getline(proc, reply)) // maybe use readsome again here
cout << reply << std::endl;
cout << "end" << endl;
This loops while `readsome() != -1`, so it keeps retrying if it reads nothing
and only stops the loop if there's an error. In the loop body it decides what
to do if nothing was read. You'll need to insert your own logic in here that
makes sense for whatever you're trying to do, but basically if `readsome()`
hasn't read _anything_ yet, then you should loop and retry. That makes the C++
program wait long enough for the Python program to print something.
You'd probably want to split out the `while` loop into a separate function
that reads a whole reply into a `std::string` and returns it, so that you can
re-use that function every time you want to read a response. If the child
sends some sentinel value that function would be easy to write, as it would
just stop every time it receives the sentinel string.
|
How to randomly pick numbers from ranked groups in python, to create a list of specific length
Question: I am trying to create a sequence of length 6 which consists of numbers
randomly picked from ranked groups. _The first element of the sequence has to
be drawn from the first group, and the last element has to be drawn from the
last group_.
Let the new sequence be called "seq". Then, if
a = [1,2,3]
b = [9]
c = [5,6]
d = [11,12,4]
seq[0] in a == 1
seq[-1] in d == 1
The intermediate elements have to come from lists a, b, c, d. But, if the second
element is randomly drawn from 'a', then the third one has to be drawn either
from a later 'a' element, or from b/c/d. Similarly, if the third element is
drawn from 'c', then the other ones have to come from later ranks like d. The
groups are ranked this way.
The number of groups given now is arbitrary (maximum of 6 groups). The length
of the sequence (len(seq) == 6) is standard.
One element from **each** group has to be in the final sequence. Repetition of
elements is not allowed. All group elements are unique (and they are always
numbers in the range of 1-12).
Answer: How about this:
from random import choice, randint
v = [[1, 2, 3],
[9],
[5, 6],
[11, 12, 4]]
def whatever(values, n=6):
first = [choice(values[0])]
last = [choice(values[-1])]
seq = []
k = 0
while len(seq) < n -2:
k = randint(k, len(values)-1)
seq.append(choice(values[k]))
return first + seq + last
print whatever(v, 6)
|
set up trac with wsgi
Question: I have followed the following steps:
1. download trac 1.0.9
2. install using `python2.7 ./setup.py install`; this is an altinstall of Python on CentOS 6 64-bit
3. created the repository with `trac-admin operationalintelligence initenv`
4. trying to set up Apache, but it is not working
I have selinux enabled and have run `chcon -R -t httpd_sys_content_t
/usr/share/trac`
> <Location /trac/operationalintelligence>
>
> SetHandler mod_python
>
> PythonHandler trac.web.modpython_frontend
>
> PythonOption TracEnv "/usr/share/trac/operationalintelligence"
>
> PythonOption TracUriRoot "/usr/share/trac/operationalintelligence"
>
> SetEnv PYTHON_EGG_CACHE /tmp
>
> PythonInterpreter trac
> </Location>
Apache fails when trying to restart.
With the following config I am receiving the error below:
Invalid command 'PythonInterpreter', perhaps misspelled or defined by a module not included in the server configuration
#Alias /trac /usr/share/trac/operationalintelligence
#<Directory /usr/share/trac/operationalintelligence>
# SetHandler mod_python
# PythonInterpreter main_interpreter
# PythonHandler trac.web.modpython_frontend
# PythonOption TracEnv /usr/share/trac/operationalintelligence
# PythonOption TracUriRoot /trac
#</Directory>
I have now tried the WSGI configuration, which is progressing:
WSGIScriptAlias /trac /usr/share/trac/operationalintelligence/cgi-bin/trac.wsgi
<Directory /usr/share/trac/operationalintelligence/cgi-bin>
WSGIApplicationGroup %{GLOBAL}
# For Apache 2.2
<IfModule !mod_authz_core.c>
Order deny,allow
Allow from all
</IfModule>
# For Apache 2.4
<IfModule mod_authz_core.c>
Require all granted
</IfModule>
</Directory>
The error I receive in the httpd error log is:
[Thu Feb 04 17:40:52 2016] [error] [client 47.73.16.6] mod_wsgi (pid=30558): Exception occurred processing WSGI script '/usr/share/trac/operationalintelligence/cgi-bin/trac.wsgi'.
[Thu Feb 04 17:40:52 2016] [error] [client 47.73.16.6] Traceback (most recent call last):
[Thu Feb 04 17:40:52 2016] [error] [client 47.73.16.6] File "/usr/share/trac/operationalintelligence/cgi-bin/trac.wsgi", line 30, in application
[Thu Feb 04 17:40:52 2016] [error] [client 47.73.16.6] from trac.web.main import dispatch_request
[Thu Feb 04 17:40:52 2016] [error] [client 47.73.16.6] ImportError: No module named trac.web.main
[Thu Feb 04 17:40:58 2016] [error] [client 47.73.16.6] mod_wsgi (pid=30553): Exception occurred processing WSGI script '/usr/share/trac/operationalintelligence/cgi-bin/trac.wsgi'.
[Thu Feb 04 17:40:58 2016] [error] [client 47.73.16.6] Traceback (most recent call last):
[Thu Feb 04 17:40:58 2016] [error] [client 47.73.16.6] File "/usr/share/trac/operationalintelligence/cgi-bin/trac.wsgi", line 30, in application
[Thu Feb 04 17:40:58 2016] [error] [client 47.73.16.6] from trac.web.main import dispatch_request
[Thu Feb 04 17:40:58 2016] [error] [client 47.73.16.6] ImportError: No module named trac.web.main
^C
Answer: The problem with ModPython looks to be due to mod_python not being loaded.
Try:
a2enmod mod_python
The problem with WSGI looks to be due to the Trac package not being found on
your Python path. It could be a permissions issue.
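A quick way to check the second point is to see which Python can actually import Trac, for example:

    python -c "import trac; print trac.__file__"
    python2.7 -c "import trac; print trac.__file__"

If the import only succeeds for the python2.7 altinstall you used for `setup.py install`, while the default Python (the one mod_wsgi is typically built against) fails, that mismatch is the likely cause.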
|
Python specific format output with itertools.product
Question: I am trying with the code given below.
import itertools
f=[[0], [2], [3]]
e=[['x']if f[j][0]==0 else range(f[j][0]) for j in range(len(f))]
print(e)
List1_=[]
for i in itertools.product(e):
List1_.append(i)
print(List1_)
I am expecting a result like the one given below:
[('x', 0, 0), ('x', 0, 1), ('x', 1, 0), ('x', 1, 1), ('x', 2, 0), ('x', 2,1)]
but I am getting this output instead:
[(['x'],), ([0, 1],), ([0, 1, 2],)]
Answer: You can use
[`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product):
>>> from itertools import product
>>> list(product(range(3), range(4)))
[(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3), (2, 0), (2, 1), (2, 2), (2, 3)]
>>>
>>> def print_array(l):
... for p in product(*map(range, l)):
... print p
...
>>> print_array([3, 4])
(0, 0)
(0, 1)
(0, 2)
[...]
* * *
Here is a little bit of explanation on why your program doesn't work as
expected. Rather than a list of iterables, `product` takes a variable number
of arguments. You can unpack a list into arguments for a function using the
star `*`:
e = [['x'], [0, 1], [0, 1, 2]]
product(e) # won't work
product(e[0], e[1], e[2]) # ok when e's length is exactly 3
product(*e) # works for any e -> equivalent to product(e[0], e[1], ....)
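Applied to your own loop, the only change needed is the `*` when calling `product`:

    List1_ = []
    for i in itertools.product(*e):   # note the *, which unpacks the list of lists
        List1_.append(i)
    print(List1_)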
|
What is wrong here? (Attribute Error __len__)
Question:
import Tkinter
class buttton(Tkinter.Button):
def __init__(self,frame,action=None):
if action==None:
action=self.action
Tkinter.Button.__init__(self,frame,command=action)
self.pack(frame)
def action(self):
None
root=Tkinter.Tk()
button=buttton(root)
root.mainloop()
Upon running this program, in which I am unable to find any problems, I am
met with a rather cryptic error...
Traceback (most recent call last):
File "C:/Users/19CRF01/Desktop/ab.py", line 14, in <module>
button=buttton(root)
File "C:/Users/19CRF01/Desktop/ab.py", line 8, in __init__
self.pack(frame)
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1940, in pack_configure
+ self._options(cnf, kw))
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1162, in _options
cnf = _cnfmerge(cnf)
File "C:\Python27\lib\lib-tk\Tkinter.py", line 114, in _cnfmerge
for c in _flatten(cnfs):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1898, in __getattr__
return getattr(self.tk, attr)
AttributeError: __len__
I would be more than happy for any help!
Answer: Here is your problem:
self.pack(frame)
`self.pack` doesn't take a frame argument. Remove `frame`, and it should run
fine, like so:
self.pack()
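For reference, here is a sketch of the whole script with only that one line changed (the rest kept as in the question), which runs without the error:

    import Tkinter

    class buttton(Tkinter.Button):
        def __init__(self, frame, action=None):
            if action == None:
                action = self.action
            Tkinter.Button.__init__(self, frame, command=action)
            self.pack()   # was: self.pack(frame)

        def action(self):
            None

    root = Tkinter.Tk()
    button = buttton(root)
    root.mainloop()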
|
unicode text to Japanese text for Rakuten Web service API
Question: Hi, I'm using the Rakuten Web Service API to play around with it in IPython
Notebook. I successfully loaded the product ranking data using this URL
([https://app.rakuten.co.jp/services/api/IchibaItem/Ranking/20120927?format=json&applicationId=1074393356181806125](https://app.rakuten.co.jp/services/api/IchibaItem/Ranking/20120927?format=json&applicationId=1074393356181806125))
My question is that since the Japanese text is Unicode, I cannot read the
text. How can I handle this?
Here is my code in the IPython Notebook:
import requests
import urllib2
url = 'https://app.rakuten.co.jp/services/api/IchibaItem/Ranking/20120927?format=json&page=1&applicationId=1074393356181806125'
r = requests.get(url)
res = r.json()
res['title']
Current output for title for example:
u'\u3010\u697d\u5929\u5e02\u5834\u3011\u30e9\u30f3\u30ad\u30f3\u30b0\u5e02\u5834 \u3010\u7dcf\u5408\u3011'
When I run `print(res['title'])`, I get this error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: ordinal not in range(128)
Answer: That is the representation of a Unicode string, see
[`repr`](https://docs.python.org/3.5/library/functions.html?highlight=repr#repr).
Just _print_ the actual text instead of showing the representation:
print(res['title'])
Printing Unicode is tricky however; e.g. for Windows see [Python, Unicode, and
the Windows console](http://stackoverflow.com/questions/5419/python-unicode-
and-the-windows-console).
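If the console or pipe you are printing to cannot handle Unicode under Python 2, a common workaround is to encode explicitly before printing; a small sketch:

    title = res['title']            # a unicode object
    print title.encode('utf-8')     # emit UTF-8 bytes explicitly (Python 2)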
|
python sort itemgetter equivalent for N-dimensional nested lists
Question: To sort a nested 2D list (list2D) by the nth element of the second dimension
in python 2.7 I can use
import operator
sorted(list2D, key=operator.itemgetter(n))
How can I sort the second dimension of a 3D list (list3D) based on the nth
elements of the second and third dimensions?
This would sort list3D below, currently in order 4,7,2.
list3D = [[[5,4,5,7],
[0,1,1,0]],
[[2,7,5,7],
[1,0,0,1]],
[[1,2,5,7],
[1,1,1,0]]]
into sort3D below, where it is sorted by index [0][1] of the second and third
dimensions, as 2,4,7:
sort3D = [[[1,2,5,7],
[1,1,1,0]],
[[5,4,5,7],
[0,1,1,0]],
[[2,7,5,7],
[1,0,0,1]]]
Answer: [`sorted()`](https://docs.python.org/2/library/functions.html#sorted) can
accept a lambda that will be given an element and should return something
sortable (here your value) to sort the results:
>>> list3D = [[[5,4,5,7],
[0,1,1,0]],
[[2,7,5,7],
[1,0,0,1]],
[[1,2,5,7],
[1,1,1,0]]]
>>> sorted(list3D, key=lambda x: x[0][1])
[[[1, 2, 5, 7], [1, 1, 1, 0]], [[5, 4, 5, 7], [0, 1, 1, 0]], [[2, 7, 5, 7], [1, 0, 0, 1]]]
The docs have a [great
appendix](https://docs.python.org/2/howto/sorting.html#key-functions) about
using `key=`.
|
Correct Regex for Acronyms In Python
Question: I want to find so-called acronyms in text. Is this the correct way of defining
the regex for it? My idea is that if something starts with a capital and ends
with a capital letter, it is an acronym. Is this correct?
import re
test_string = "Department of Something is called DOS,
or DoS, or (DiS) or D.O.S. in United State of America, U.S.A./ USA"
pattern3=r'([A-Z][a-zA-Z]*[A-Z]|(?:[A-Z]\.)+)'
print re.findall(pattern3, test_string)
and the out put is:
['DOS', 'DoS', 'DiS', 'D.O.S.', 'U.S.A.', 'USA']
Answer: I think you can use the word boundary `\b` anchor for what you want to do:
>>> regex = r"\b[A-Z][a-zA-Z\.]*[A-Z]\b\.?"
>>> re.findall(regex, "AbIA AoP U.S.A.")
['AbIA', 'AoP', 'U.S.A.']
|
Python: how to retrieve some values from the elements of a list?
Question: I have a list of elements like this:
`mylist=['event_100of1000', 'event_17of1000', 'event_1000of1000',...]`
How can I extract only the "number" of the event and produce another list, along
the lines of:
`extracted_list=['100','17','1000',...]`?
Answer: You can use 'regex' for this....
>>> import re
>>> mylist=['event_100of1000', 'event_17of1000', 'event_1000of1000']
>>> [re.findall('_(\d+)', i)[0] for i in mylist]
['100', '17', '1000']
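If the pattern is always `event_<number>of<total>`, plain string splitting works too and avoids regex entirely:

    >>> [i.split('_')[1].split('of')[0] for i in mylist]
    ['100', '17', '1000']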
|
ImportError: No module named eventlet
Question: I have installed the eventlet library in Python using `pip install eventlet`.
But when I tried to import eventlet, this error occurred:
$python
Python 2.7.10 (default, Oct 23 2015, 18:05:06)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import eventlet
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named eventlet
I tried to install it again but I got this :
$pip install eventlet
Requirement already satisfied (use --upgrade to upgrade): eventlet in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/eventlet-0.18.1-py3.5.egg
Requirement already satisfied (use --upgrade to upgrade): greenlet>=0.3 in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/greenlet-0.4.9-py3.5-macosx-10.6-intel.egg (from eventlet)
How to rectify this error?
P.S : I am using Python 2.7
Answer: This question is not specific to Eventlet, it's just about managing multiple
versions of Python on OSX.
Your `pip` command installed eventlet into
`/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5`; note the
version in the path.
It means you actually have two Python versions installed: 2.7 and 3.5 and
`pip` works with 3.5.
Your options:
* (recommended) use separate virtualenv [1] for every project, explicitly specify python version when creating virtualenv using `virtualenv --python=python2.7 /path/to/new/venv`
* run python3 and use eventlet in latest Python
* run `pip2 install eventlet`
* symlink pip to pip2 `ln -snf $(which pip2) $(which pip)`
[1] <http://docs.python-guide.org/en/latest/dev/virtualenvs/>
|
Beautiful soup and bottlenose, how to parse correctly
Question: I am currently trying to extract strings from the response of a bottlenose
Amazon API request. Without wanting to cause [Russian hackers to pwn my
webapp](http://stackoverflow.com/questions/1732348/regex-match-open-tags-
except-xhtml-self-contained-tags), I am trying to use Beautiful Soup, following
[this small webpage as a
guide.](http://pythonprojectwatch.blogspot.co.uk/2011/12/making-new-amazon-
product-api-easy-to.html)
My current code:
import bottlenose as BN
import lxml
from bs4 import BeautifulSoup
amazon = BN.Amazon('MyAmznID','MyAmznSK','MyAmznAssTag',Region='UK', Parser=BeautifulSoup)
rank = amazon.ItemLookup(ItemId="0198596790",ResponseGroup="SalesRank")
soup = BeautifulSoup(rank)
print rank
print soup.find('SalesRank').string
This is what the current output from bottlenose looks like:
<?xml version="1.0" ?><html><body><itemlookupresponse xmlns="http://webservices.amazon.com/AWSECommerceService/2011-08-01"><operationrequest><httpheaders><header name="UserAgent" value="Python-urllib/2.7"></header></httpheaders><requestid>53f15ff4-3588-4e63-af6f-279bddc7c243</requestid><arguments><argument name="AWSAccessKeyId" value="################"></argument><argument name="AssociateTag" value="#########-##"></argument><argument name="ItemId" value="0198596790"></argument><argument name="Operation" value="ItemLookup"></argument><argument name="ResponseGroup" value="SalesRank"></argument><argument name="Service" value="AWSECommerceService"></argument><argument name="Timestamp" value="2016-02-04T11:05:48Z"></argument><argument name="Version" value="2011-08-01"></argument><argument name="Signature" value="################+##################="></argument></arguments><requestprocessingtime>0.0234130000000000</requestprocessingtime></operationrequest><items><request><isvalid>True</isvalid><itemlookuprequest><idtype>ASIN</idtype><itemid>0198596790</itemid><responsegroup>SalesRank</responsegroup><variationpage>All</variationpage></itemlookuprequest></request><item><asin>0198596790</asin><salesrank>124435</salesrank></item></items></itemlookupresponse></body></html>
So the bottlenose section works, but the soup section gives an error response:
Traceback (most recent call last):
File "/Users/Fuck/Documents/Amazon/Bottlenose_amzn_prog/test.py", line 12, in <module>
print soup.find(Rank).string
NameError: name 'soup' is not defined
I am trying to extract the digits between the 'SalesRank' tags, but failing.
Answer: OK, so I have ignored the option to specify a parser in the bottlenose line.
Instead I just specify BeautifulSoup with XML parsing later.
import bottlenose as BN
import lxml
from bs4 import BeautifulSoup
amazon = BN.Amazon('##############','##############','##########',Region='UK')
rank = amazon.ItemLookup(ItemId="specifiedItemId",ResponseGroup="SalesRank")
soup = BeautifulSoup(rank, "xml")
print " "
print soup.SalesRank
I am a fairly novice user of Python, so sometimes it's the simple things that
get me.
|
Trouble with raspberry pi and OpenCV
Question: I have a project on a Raspberry Pi and I am using Python. However, I have a
problem with OpenCV when I am trying to run this code:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('frame',gray)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I get this error:
> OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file
> /home/pi/opencv-3.1.0/modules/imgproc/src/color.cpp, line 8000
Traceback (most recent call last):
File "test.py", line 11, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/color.cpp:8000: error: (-215) scn == 3 || scn == 4 in function cvtColor"
I have Python 3.4.2,OpenCV 3.1.0 and Numpy 1.8.2.
Answer: So I found an answer. All I had to do was run this code on my raspberry pi:
sudo modprobe bcm2835-v4l2
Thank you for all the help.
|
Can't figure out why numpy.log10 outputs nan?
Question: So I have a 500k array of floating-point values. When I try:
np.log10(my_long_array)
270k numbers get replaced with nan, and they are not that small. For
example:
In [1]: import numpy as np
In [2]: t = -0.055488893531690543
In [3]: np.log10(t)
/home/aydar/anaconda3/bin/ipython:1: RuntimeWarning: invalid value encountered in log10
#!/home/aydar/anaconda3/bin/python3
Out[3]: nan
In [4]: type(t)
Out[4]: float
What am I missing?
Answer: The logarithm of a negative number is undefined, hence the `nan`.
From the [docs to
`numpy.log10`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.log10.html):
> Returns: y : ndarray
>
> The logarithm to the base 10 of x, element-wise. **NaNs are returned where x
> is negative**.
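If negative entries are expected in your data, one option is to take the logarithm only where it is defined and keep `nan` (or some sentinel) elsewhere; a small sketch:

    import numpy as np

    a = np.array([10.0, -0.0554, 100.0])
    out = np.full_like(a, np.nan)
    positive = a > 0
    out[positive] = np.log10(a[positive])   # -> array([  1.,  nan,   2.])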
|
Python script not able to read user input when run remotely
Question: I am trying to remotely execute a simple Python script userinfo.py present on
remotehost.
Below is the source code of userinfo.py [using Python 2.7.10]:
#############
print "Userinfo :"
name=raw_input("Enter your name")
age=raw_input("Enter your age")
print "Name"+name+"\nAge"+age
#############
But the script behaves abnormally when run remotely.
[user@localhost]# ssh remotehost python /home/userinfo.py
> Userinfo :
>
> Enter your nameEnter your ageName
>
> Age
>
> [user@localhost]#
>
> Execution summary ::
During execution, it doesn't print anything; it directly waits for user input,
and when I just press the Enter key it displays the output as above.
I would like to know why it is not behaving as expected when raw_input is used.
When values are passed as arguments, it works fine.
> [user@localhost]# ssh remotehost python userinfo.py xyz 20
>
> Userinfo :
>
> Name xyz
>
> Age 20
>
> [user@localhost]#
Below is the changed code.
###########
import sys
print "Userinfo :"
name=sys.argv[1]
age=sys.argv[2]
print "Name "+name+"\nAge "+age
############
I would like to know why the interactive way is not working as expected and what
the fix may be.
Answer: In a regular terminal, the raw_input prompt is flushed immediately, meaning
you will see the prompt "Enter Your Name".
If you run this script through ssh, it buffers the output until the script is
finished and only then prints everything in the buffer.
What you need is to run python unbuffered, which will force stdout to flush
after every output, and thus display to your ssh session. This can be
accomplished several ways.
ssh user@remotehost python -u script.py
or make the file executable _and_ unbuffered by adding the following to the top of
your .py script. Be sure to use your actual python path here:
#!/usr/bin/python -u
and then make it executable
sudo chmod +x script.py
then
ssh user@remotehost ./script.py
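Another option that often helps with interactive prompts is forcing pseudo-terminal allocation with `ssh -t`, so the remote script sees a real terminal; this is a suggestion to try rather than something guaranteed for every setup:
ssh -t user@remotehost python -u /home/userinfo.py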
|
Python unit test: testcase class with own constructor fails in standard library
Question: I have this plain vanilla unit test, which works as expected, as long as I leave
out the constructor.
import sys
import unittest
class Instance_test(unittest.TestCase):
def __init__(self):
super(Instance_test, self).__init__()
self.attribute = "new"
def test_something(self):
pass
def test_other(self):
self.assertTrue(True)
pass
def setUp(self):
pass
def tearDown(self):
pass
def suite():
return unittest.makeSuite(Instance_test, "test")
def main():
runner = unittest.TextTestRunner(sys.stdout)
runner.run(suite())
if __name__ == "__main__":
main()
With the constructor in place I get this backtrace:
Traceback (most recent call last):
File "f:\gt\check.py", line 31, in main()
File "f:\gt\check.py", line 28, in main
runner.run(suite())
File "f:\gt\check.py", line 24, in suite
return unittest.makeSuite(Instance_test, "test")
File "C:\Python34\lib\unittest\loader.py", line 374, in makeSuite
testCaseClass)
File "C:\Python34\lib\unittest\loader.py", line 70, in loadTestsFromTestCase
loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
File "C:\Python34\lib\unittest\suite.py", line 24, in __init__
self.addTests(tests)
File "C:\Python34\lib\unittest\suite.py", line 60, in addTests
for test in tests:
TypeError: __init__() takes 1 positional argument but 2 were given
What's wrong and how else could I have a central attribute to be shared by
different test_xxx methods?
Answer: I would use unittest.TestCase's setUp() and tearDown() methods instead of
`__init__`. Just do the same thing you're doing, except in the `setUp` method.
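A minimal sketch of the setUp version (assuming the attribute only needs to exist before each test runs):

import unittest

class Instance_test(unittest.TestCase):
    def setUp(self):
        # runs before every test_* method
        self.attribute = "new"

    def test_something(self):
        self.assertEqual(self.attribute, "new")

If the attribute is expensive to build and can safely be shared, setUpClass() is the once-per-class alternative.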
|
Can't get Spark to work on IPython Notebook in Windows
Question: I have installed Spark on a Windows 10 box, and the installation works fine
from the PySpark console. But recently I have tried to configure IPython
Notebook to work with the Spark installation. I have set the following
environment variables and paths:
os.environ['SPARK_HOME'] = "E:/Spark/spark-1.6.0-bin-hadoop2.6"
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/bin")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/pyspark")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9- src.zip")
sys.path.append("C:/Program Files/Java/jdk1.8.0_51/bin")
This works fine for creating the SparkContext and also for code like
sc.parallelize([1, 2, 3])
But when I write the following
file = sc.textFile("E:/scripts.sql")
words = sc.count()
I get the following error
Py4JJavaError Traceback (most recent call last)
<ipython-input-22-3c172daac960> in <module>()
1 file = sc.textFile("E:/scripts.sql")
----> 2 file.count()
E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in count(self)
1002 3
1003 """
-> 1004 return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
1005
1006 def stats(self):
E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in sum(self)
993 6.0
994 """
--> 995 return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
996
997 def count(self):
E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in fold(self, zeroValue, op)
867 # zeroValue provided to each partition is unique from the one provided
868 # to the final reduce call
--> 869 vals = self.mapPartitions(func).collect()
870 return reduce(op, vals, zeroValue)
871
E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in collect(self)
769 """
770 with SCCallSiteSync(self.context) as css:
--> 771 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
772 return list(_load_from_socket(port, self._jrdd_deserializer))
773
E:\Spark\spark-1.6.0-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py in __call__(self, *args)
811 answer = self.gateway_client.send_command(command)
812 return_value = get_return_value(
--> 813 answer, self.gateway_client, self.target_id, self.name)
814
815 for temp_arg in temp_args:
E:\Spark\spark-1.6.0-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
306 raise Py4JJavaError(
307 "An error occurred while calling {0}{1}{2}.\n".
--> 308 format(target_id, ".", name), value)
309 else:
310 raise Py4JError(Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 8, localhost): org.apache.spark.SparkException: Python worker did not connect back in time at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:136)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:131)
... 12 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker did not connect back in time
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:136)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:131)
... 12 more
Please help resolve this as I am on a short time project.
Answer: Try escaping the backslashes.
file = sc.textFile("E:\\scripts.sql")
_Edited to add a second item to look at:_
Also, I notice you called:
words = sc.count()
Try this instead, which worked on my Windows 10 install:
file = sc.textFile("E:/scripts.sql")
words = file.count()
|
PyQt5: load Qt Designer UI into Python script (loadUiType): how to check error cause?
Question: I design the GUI in Qt Designer, then I load the UI file in my Python 3 script
with the loadUiType method:
def __init__(self):
super(Main, self).__init__()
self.setupUi(self)
All works fine. Then I make a little revolution in my form design, and it
includes a lot of renaming. So I take that Qt Designer UI file (an XML file)
and edit it in a text editor. Maybe I make some typos. Now I get an
error during the start of the Python script, on the line self.setupUi(self):
> File "string", line 671, in setupUi
>
> TypeError: argument 1 has unexpected type 'QRadioButton'
So, something goes wrong in the process of importing the XML file, but the error
type does not tell me enough to find the mistake.
I double-checked all my QRadioButton widgets. No idea.
I open the UI with Designer - it opens without error messages.
I convert the UI into PY (pyuic5) - no errors.
The `.ui` file is [here](https://gist.github.com/KubaO/601b3c867f6d3cd6d9c9).
What can be the way to find the error in such a closed process as setupUI?
Answer: In this .ui file the widgets have the same names as the main window's slots. As
we subclass the main window, both the widgets and the slots are in the same
namespace, so the statement `self.zero_fix = QtWidgets.QRadioButton(self.frame_4)`
in the compiled .ui file overwrites the slot `zero_fix()`. Renaming either the
widget or the slot resolves the collision.
|
Correctly installing pyOpenSSL for Python (Windows)
Question: I'm trying to make an application that automatically updates a Google
spreadsheet.
requires pyOpenSSL in order to work. Without it, it throws this error:
> CryptoUnavailableError: No crypto library available
Using `pip`, I type the command:
pip install pyopenssl
And import using:
from OpenSSL import SSL
When I try to run the code, I receive the following error:
> ImportError: No module named cryptography.hazmat.bindings.openssl.binding
I've tried reinstalling pyOpenSSL multiple times, and also tried reinstalling
the cryptography dependency (as well as attempting to install previous
versions of pyOpenSSL).
This problem is documented a few times, but the only solution I haven't tried
is doing a fresh install of python, or the OS.
Any suggestions? Thanks in advance.
Answer: **This is how I solved it on my Ubuntu desktop. On Windows you need to figure
out the equivalent steps, but the real reason for this problem is the same on
both Linux and Windows.**
pyOpenSSL 14.x+ uses the cffi-based cryptography package; this may be the cause of
your issue - cffi needs the libffi (or libffi-dev) system package, which is a new
non-Python dependency.
First do this
sudo apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
and then
pip install cryptography
Note the key package here is _libffi-dev_. I think instead of `apt-get` you
can also use `pip install` for the Python packages if you have pip installed already.
**In the meantime, this is what the documentation says about the pyOpenSSL binding:**
> This is a “Hazardous Materials” module. You should ONLY use it if you’re
> 100% absolutely sure that you know what you’re doing because this module is
> full of land mines, dragons, and dinosaurs with laser guns.
That's a pretty bold warning I must say
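For the original Windows question, one thing worth trying (an assumption, not a verified fix) is upgrading pip so it can pick up the pre-built cryptography wheels, then reinstalling the two packages:
python -m pip install --upgrade pip setuptools
pip install --upgrade cryptography pyOpenSSL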
|
ENML to plain text converter for Python
Question: Port enml library from javascript (enml.js)
ENML.PlainTextOfENML from evernote-sdk-js works well for me and I would like to
find a good port of this tool for Python.
I tried to use these libraries, but got errors:
<https://github.com/CarlLee/ENML_PY>
ImportError: No module named bs4
<https://github.com/wanasit/enml-py>
last updated Feb 2013, without documentation,
ImportError: No module named internals
**For example**
I would like to get:
any sort of liquid damage to an Apple product will void your warranty.
from
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"><en-note>any sort of liquid damage to an Apple product will void your warranty.</en-note>
**The part of code in which I want to use enml:**
views.py
title_contents = {}
for note in result_list.notes:
content = note_store.getNoteContent(auth_token,
note_store.getNote(note.guid,
True,False, False, False).guid)
title_contents[note.title] = content
return render_to_response('oauth/callback.html', {'notebooks': notebooks,
'result_list': result_list,
'title_contents': title_contents})
callback.html
.....
<ul>
{% for title, content in title_contents.items %}
<li><b>{{ title }}</b><br>{{ content }}</li>
{% endfor %}
</ul>
Answer: This combination accomplishes all the things needed:
from fenml import ENMLToHTML
# the fenml.py is my internal fork of the
# https://github.com/CarlLee/ENML_PY/blob/master/__init__.py
# with slightly modified code.
from bs4 import BeautifulSoup
import html2text
....
title_contents[note.title] = html2text.html2text(BeautifulSoup(ENMLToHTML(content)).prettify())
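For reference, the first ImportError above (`No module named bs4`) is normally resolved just by installing the packages this snippet depends on:
pip install beautifulsoup4 html2text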
|
Weighted smoothing of a 1D array - Python
Question: I am quite new to Python and I have an array of some parameter detections;
some of the values were detected incorrectly (like 4555555):
array = [1, 20, 55, 33, 4555555, 1]
And I want to somehow smooth it. Right now I'm doing that with a weighted
mean:
def smoothify(array):
for i in range(1, len(array) - 2):
array[i] = 0.7 * array[i] + 0.15 * (array[i - 1] + array[i + 1])
return array
But it works pretty badly. Of course, we can take a weighted mean of more than 3
elements, but that results in copy-pasting... I tried to find some native
functions for that, but I failed.
Could you please help me with that?
P.S. Sorry if it's a noob question :(
Thanks for your time, Best regards, Anna
Answer: Would suggest
[numpy.average](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.average.html)
to help you with this. the trick is getting the weights calculated - below I
zip up the three lists - one the same as the original array, the next one step
ahead, the next one step behind. Once we have the weights, we feed them into
the `np.average` function
import numpy as np
array = [1, 20, 55, 33, 4555555, 1]
arrayCompare = zip(array, array[1:] + [0], [0] + array)
weights = [.7 * x + .15 * (y + z) for x, y, z in arrayCompare]
avg = np.average(array, weights=weights)
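If the goal is a smoothed array of the same length (rather than a single averaged value), a sketch of the same [0.15, 0.7, 0.15] weighting applied as a convolution; this assumes zero padding at the edges is acceptable:

import numpy as np

array = [1, 20, 55, 33, 4555555, 1]
kernel = np.array([0.15, 0.7, 0.15])
# mode='same' keeps the output the same length as the input
smoothed = np.convolve(array, kernel, mode='same')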
|
NET-SNMP + Python Mac Address shows as \x00\
Question: Hey, I'm trying to get the MAC address via ipNetToMediaPhysAddress, which works
fine when using the netsnmp.snmpget command, but when saving that into a
variable (a tuple?) and printing it out via "print", the MAC address looks like
this:
('\x00\n\xb7\x9c\x93\x80',)
Code looks like this,
mac = netsnmp.Varbind("ipNetToMediaPhysAddress."+i+"."+ipadd)
macadd = netsnmp.snmpget(mac, Version = 2, DestHost = ip, Community = comm)
print '%-15s' % macadd
So what do I need to do? I just want it to look like a normal MAC address.
Answer: Maybe a call to hexlify is enough
from binascii import hexlify
mac = netsnmp.Varbind("ipNetToMediaPhysAddress."+i+"."+ipadd)
macadd = netsnmp.snmpget(mac, Version = 2, DestHost = ip, Community = comm)
print hexlify(macadd[0])
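If you want the conventional colon-separated form, a small sketch (assuming Python 2, where the returned value is a byte string):

mac_bytes = macadd[0]
print ':'.join('%02x' % ord(c) for c in mac_bytes)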
|
How to access c++ object methods from inside vector with SWIG Python
Question: I have two C++ classes: Foo and Bar. The constructor for Foo looks like this:
Foo(std::vector<Bar *> * bars);
The constructor for Bar and one of its member functions are the following:
Bar(int data)
int getData()
In my interface file I have:
%module mySwig
%{
#include "Foo.h"
#include "Bar.h"
#include <vector>
%}
%include "Foo.h"
%include "Bar.h"
%include "std_vector.i"
namespace std {
%template(VectorOfBars) vector<Bar *>;
}
So then in python I do the following:
import mySwig
myBar = mySwig.Bar(5)
And now I need to create a `std::vector<Bar *> *` object to pass into the Foo
constructor, so I try the following:
vector = mySwig.VectorOfBars()
vector.push_back(myBar)
To test if this was successful I try:
print vector
print vector[0]
print "data: vector[0].getData()
If the result of the third print out is still "5" then it was successful, but
instead I get what I'm assuming is a pointer value instead
<mySwig.VectorOfBars; proxy of <Swig Object of type 'std::vector< Bar *,std::allocator< Bar * > > *' at 0xb6a9c4d0> >
<mySwig.Bar; proxy of <Swig Object of type 'std::vector< Bar * >::value_type' at 0xb6a9c3b0> >
3069958048
What am I doing wrong? How can I create the vector of Bar pointer objects that
I need to pass in to make a Foo object? Why am I getting a pointer value back
instead of the actual value?
Answer: I figured it out! The problem was that the type `<mySwig.Bar; proxy of <Swig
Object of type 'std::vector< Bar * >::value_type' at 0xb6a9c3b0> >` translates
to a `Bar *` and not a `Bar` object. Normally SWIG handles pointers for you
but in this case it didn't so when I called a method on the `Bar *` object it
gave me back bogus data.
In reality this is exactly what I needed to pass to the Foo constructor. I was
just mistaken in thinking that SWIG would automatically dereference the
pointer to Bar so that I could access its `data` attribute. When I changed the
template declaration from
%template(VectorOfBars) vector<Bar *>;
to the line
%template(VectorOfBars) vector<Bar>;
then I was able to access `data` but then it was incorrect for passing to
Foo's constructor. I had it right all along.
|
Parsing HTML in Python - Some pages work and some don't...?
Question: Using the following script:
from lxml import html
import requests
gameUrl = 'http://store.401games.ca/catalog/2415520/caylus'
page = requests.get(gameUrl)
tree = html.fromstring(page.content)
stock = tree.xpath('//*[@id="stock"]/span[1]/div/*/text()')[0]
print stock
It will correctly display the stock level listed on the page. (1 at this time)
gameUrl = 'http://store.401games.ca/catalog/2415324/ticket-to-ride'
It displays the stock as 68, which is incorrect. (I have no idea where 68 is
even coming from).
I tried this with a LOT of pages from this site and 90% of them work correctly
using this script. But the other 10% fail and give random numbers...some are
quite different like 68 instead of 30. Or 1100 instead of 30. Some are closer,
like 12 instead of 9. I have no idea what is happening.
Does anyone have an idea of what may be the problem?
Answer: If you would open the page in the browser, you would see the `Quantity: 68`
flashing before it changes to `Quantity: 30`.
At first, I thought there is an XHR request that dynamically gets the product
availability from a certain endpoint after the page is loaded and almost
started to provide a usual answer about browser automation, but the problem
here is different.
If you would open the Network tab in browser developer tools, you may see the
`store.js` javascript file being loaded. At the beginning of the script, you
can see:
if(stock>30) { $('div.availability span').text( "30" ); }
var instock = $('div.availability').text();
instock = instock.replace("In-Stock", "Quantity");
What it means is that, if the quantity is more than 30, it is "manually" set
to 30.
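If you want the scraped value to match what the browser ends up displaying, you could apply the same cap after parsing; a sketch, assuming the 30 limit taken from store.js is what you want to reproduce and the scraped value is a plain integer string:

displayed_stock = min(int(stock), 30)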
|
Easy way to tell apart python multiprocessing's OS processes
Question: **Summary**
I'd like to use the Python multiprocessing module to run multiple jobs in
parallel on a Linux server. Further, I'd like to be able to look at the
running processes with `top` or `ps` and `kill` one of them but let the others
run.
However, what I'm seeing is that every process launched from the Python
multiprocessing module looks identical in the `ps -f` output.
All I'm seeing is this:
fermion:workspace ross$ ps -f
UID PID PPID C STIME TTY TIME CMD
501 32257 32256 0 8:52PM ttys000 0:00.04 -bash
501 32333 32257 0 9:05PM ttys000 0:00.04 python ./parallel_jobs.py
501 32334 32333 0 9:05PM ttys000 0:00.00 python ./parallel_jobs.py
501 32335 32333 0 9:05PM ttys000 0:00.00 python ./parallel_jobs.py
501 32336 32333 0 9:05PM ttys000 0:00.00 python ./parallel_jobs.py
501 32272 32271 0 8:53PM ttys001 0:00.05 -bash
**Is there any way to get something more descriptive in the CMD column? Do I
need to just keep track of PIDs in log files? Or is there another option?**
**Background**
I am doing some batch processing where some jobs can run for hours. I need to
be able to run some of those jobs in parallel to save time. And all those
parallel jobs need to complete successfully before I can run another job that
depends on them all. However, if one job is misbehaving I want to be able to
kill it while letting the others complete... and this goes one where I have
one job, then parallel jobs, then a few more jobs in sequence, then some more
parallel jobs...
**Example code**
This is some dummy code that outlines the concept of what I'm trying to do.
#!/usr/bin/env python
import time
import multiprocessing
def open_zoo_cages():
print('Opening zoo cages...')
def crossing_road(animal, sleep_time):
print('An ' + animal + ' is crossing the road')
for i in range(5):
print("It's a wide road for " + animal + " to cross...")
time.sleep(sleep_time)
print('The ' + animal + ' is across.')
def aardvark():
crossing_road('aardvark', 2)
def badger():
crossing_road('badger', 4)
def cougar():
crossing_road('cougar', 3)
def clean_the_road():
print('Cleaning off the road of animal droppings...')
def print_exit_code(process):
print(process.name + " exit code: " + str(process.exitcode))
def main():
# Run a single job that must finish before running some jobs in parallel
open_zoo_cages()
# Run some jobs in parallel
amos = multiprocessing.Process(name='aardvark Amos', target=aardvark)
betty = multiprocessing.Process(name='badger Betty', target=badger)
carl = multiprocessing.Process(name='cougar Carl', target=cougar)
amos.start()
betty.start()
carl.start()
amos.join()
betty.join()
carl.join()
print_exit_code(amos)
print_exit_code(betty)
print_exit_code(carl)
# Run another job (clean_the_road) if all the parallel jobs finished in
# success. Otherwise end in error.
if amos.exitcode == 0 and betty.exitcode == 0 and carl.exitcode == 0:
clean_the_road()
else:
sys.exit('Not all animals finished crossing')
if __name__ == '__main__':
main()
Also, I noted that putting one of the functions in another Python module
doesn't change what goes in the `ps` command column for the associated
process.
**Output**
fermion:workspace ross$ ./parallel_jobs.py
Opening zoo cages...
An aardvark is crossing the road
It's a wide road for aardvark to cross...
An badger is crossing the road
It's a wide road for badger to cross...
An cougar is crossing the road
It's a wide road for cougar to cross...
It's a wide road for aardvark to cross...
It's a wide road for cougar to cross...
It's a wide road for aardvark to cross...
It's a wide road for badger to cross...
It's a wide road for cougar to cross...
It's a wide road for aardvark to cross...
It's a wide road for badger to cross...
It's a wide road for aardvark to cross...
It's a wide road for cougar to cross...
The aardvark is across.
It's a wide road for badger to cross...
It's a wide road for cougar to cross...
The cougar is across.
It's a wide road for badger to cross...
The badger is across.
aardvark Amos exit code: 0
badger Betty exit code: 0
cougar Carl exit code: 0
Cleaning off the road of animal droppings...
Answer: The nice easy answer: have each process open a descriptively named file handle, and
then use lsof.
f = open('/tmp/hippo.txt','w')
This will give you the PID of your process:
lsof | grep "hippo"
It's not the most Pythonic answer, but so what :)
My initial answer was the easy way. Here is an incomplete, tiny example of a
larger concept: adding a signal handler to the class being run as a subprocess
lets you issue a `kill` (SIGTERM in this sketch) to dump out info. You can even
use it to dump, on demand, how much is left to process in a given subprocess:
import signal, logging, os, Queue

class Foo():
    def __init__(self, name):
        self.myname = name
        # an external `kill <pid>` (SIGTERM) will now trigger my_callback
        signal.signal(signal.SIGTERM, self.my_callback)
        self.myqueue = Queue.Queue()

    def my_callback(self, signum, frame):
        # signal handlers receive (signum, frame); report who we are and how much is queued
        logging.error("%s %s %s", self.myname, os.getpid(), self.myqueue.qsize())
Or you can do this, which I think may be what you really want:
import multiprocessing, time
def foo():
time.sleep(60)
if __name__ == "__main__":
process = [
multiprocessing.Process(name="a",target=foo),
multiprocessing.Process(name="b",target=foo),
multiprocessing.Process(name="c",target=foo),
]
for p in process:
p.start()
for p in process:
print(p.name, p.pid)
for p in process:
p.join()
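If changing what `ps` and `top` display is the real goal, the third-party `setproctitle` package lets each child process rename itself; a sketch, assuming you are free to add the dependency (pip install setproctitle):

import time
import multiprocessing
from setproctitle import setproctitle

def foo(title):
    setproctitle(title)   # this string is what ps/top will show for the child
    time.sleep(60)

if __name__ == "__main__":
    p = multiprocessing.Process(name="aardvark Amos", target=foo,
                                args=("parallel_jobs: aardvark Amos",))
    p.start()
    p.join()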
|
My code fails to iterate on some section of the code
Question: I have this Python function that is supposed to balance the number of brackets
in any supplied parameter. It works well for the required problem in the
exercise, but when the parameter contains 4 or more brackets the function fails
to balance it. Below is the code.
def closedBracket(string_input):
import unicodedata, re
left_bracket = re.compile('[\u0028]')
right_bracket = re.compile('[\u0029]')
left_bracket = len(left_bracket.findall(string_input))
right_bracket = len(right_bracket.findall(string_input))
if left_bracket > right_bracket:
# Remove the leftmost bracket or add one last right bracket
count = 0
largest_bracket = (left_bracket - right_bracket)
while count <= largest_bracket:
final_string = str(string_input) + '\u0029'
count += 1
print('False', final_string)
elif left_bracket < right_bracket:
# Remove the last Rightmost Bracket or add another leftmost bracket
count = 0
largest_bracket = (right_bracket - left_bracket)
while count <= largest_bracket:
final_string = '\u0028' + str(string_input)
count += 1
print('False', final_string)
else:
print('True')
closedBracket("(5+3)*2)")
The output is "False ((5+3)*2)"
The above function works on the supplied parameter, but when I run it on the
parameter below, the output is wrong, so I need advice on what to do.
def closedBracket(string_input):
import unicodedata, re
left_bracket = re.compile('[\u0028]')
right_bracket = re.compile('[\u0029]')
left_bracket = len(left_bracket.findall(string_input))
right_bracket = len(right_bracket.findall(string_input))
if left_bracket > right_bracket:
# Remove the leftmost bracket or add one last right bracket
count = 0
largest_bracket = (left_bracket - right_bracket)
while count <= largest_bracket:
final_string = str(string_input) + '\u0029'
count += 1
print('False', final_string)
elif left_bracket < right_bracket:
# Remove the last Rightmost Bracket or add another leftmost bracket
count = 0
largest_bracket = (right_bracket - left_bracket)
while count <= largest_bracket:
final_string = '\u0028' + str(string_input)
count += 1
print('False', final_string)
else:
print('True')
closedBracket("((((5+3)*2)")
The output is "False ((((5+3)*2))" and this is wrong
Answer: In the loop you have this line:
final_string = str(string_input) + '\u0029'
This means that on each iteration you take the original _input string_ and add `)` to
it. So no matter how many iterations you do, `final_string` will always be equal to
`input_string` with a single `)` appended.
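A minimal fix is to append all of the missing brackets in one go instead of rebuilding from the input each time; a sketch of just the unbalanced branches:

if left_bracket > right_bracket:
    final_string = string_input + ')' * (left_bracket - right_bracket)
    print('False', final_string)
elif left_bracket < right_bracket:
    final_string = '(' * (right_bracket - left_bracket) + string_input
    print('False', final_string)
else:
    print('True')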
|
GZip: Python - How to get the file name of a particular line using gzip.open
Question: I have a '.tgz' file, I'm reading the content of the '.tgz' file using
gzip.open. What is the fastest way to get the file name of a particular line?
with gzip.open('sample.tgz','r') as fin:
for line in fin:
error = checkForError(line)
if error:
# get the file name in which that particular error line was present
else:
pass
Answer: This functions but quite how useful it is, is anyone's guess.
Check <https://en.wikipedia.org/wiki/Tar_%28computing%29#Header> for the
header format.
import gzip
fin = gzip.open("test.txt.tar","r")
for i in fin:
if "ustar" in i:
fname,b = i.split("0",1)
print fname
else:
print i
fin.close()
Note: this is for python 2.7.6 on Linux, assuming the archive was created with
`tar czvf`
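A more robust route is the standard-library tarfile module, which understands the archive structure directly instead of scanning for header bytes; a sketch, assuming checkForError() works on individual lines as in the question:

import tarfile

with tarfile.open('sample.tgz', 'r:gz') as tar:
    for member in tar.getmembers():
        if not member.isfile():
            continue
        fileobj = tar.extractfile(member)
        for line in fileobj:
            if checkForError(line):
                print member.name  # the file this error line came from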
|
Raising exception in a generator, handle it elsewhere and vice versa in python
Question: I'm thinking in a direction that is more advanced, and it is difficult to find
solutions to this problem. Before coming to any decision, I thought of asking for
expert advice to address it.
The enhanced generators have new methods .send() and .throw() that allow the
caller to pass messages or to raise exceptions _into_ the generator
(coroutine).
From python documentation: This can be very handy, especially the .throw()
method that requests the generator to handle exceptions raised in the caller.
**Request #1: Any example code for the above statement. I didn't find any code
snippets for this explanation.**
However, I'm considering the inverse problem as well: can a generator raise an
exception, pass it to the caller, let the caller "repair" it, and continue the
generator's own execution? That is what I would like to call a "reverse
throw".
**Request #2: Any example code for the above statement. I didn't find any code
snippets for this explanation.**
Simply raising exceptions in the generator is not OK. I tried "raise
SomeException" in the generator, and that didn't work, because after a "raise"
the generator can no longer be executed --- it simply stops, and further
attempts to run the generator cause the StopIteration exception. In other
words, "raise" is much more deadly than "yield": one can resume itself after
yielding to the caller but a "raise" sends itself to the dead end.
I wonder if there are simple ways to do the "reverse throw" in Python? That
will enable us to write coroutines that cooperate by throwing exceptions at
each other. But why use exceptions? Well, I dunno... it all began as some
rough idea.
CASE STUDY CODE:
class MyException(Exception):pass
def handleError(func):
''' handle an error'''
errors =[]
def wrapper(arg1):
result = func(arg1)
for err in findError(result):
errors.append(err)
print errors
return result
return wrapper
def findError(result):
'''
Find an error if any
'''
print result
for k, v in result.iteritems():
error_nr = v % 2
if error_nr ==0:
pass
elif error_nr > 0:
yield MyException
@handleError
def numGen(input):
''' This function take the input and generates 10 random numbers. 10 random numbers are saved in result dictionary with indices. Find error decorator is called based on the result dictionary'''
from random import randint
result= {}
errors = []
for i in range(9):
j = (randint(0,4))
result[i] = input + j
return result
if __name__ == '__main__':
numGen(4)
Could anyone please explain both ideas based on the case-study example (raising an
exception in a generator and handling it elsewhere, and vice versa)? I would like
the pros and cons of both methods.
Thanks in advance.
Looking for an answer drawing from credible and/or official sources.
Answer: # Request #1 (Example for .throw())
I have never actually used this, but you _could_ use it to change behaviour in
the generator after the fact. You can also do this with `.send` of course, but
then you'll need to deal with it in the line with the `yield` expressions
(which might be in several locations in the code), rather than centralized
with a try-except block.
def getstuff():
i=0
try:
while True:
yield i
i+=1
except ValueError:
while True:
yield i**2
i+=1
generator = getstuff()
print("Get some numbers...")
print(next(generator))
print(next(generator))
print(next(generator))
print("Oh, actually, I want squares!")
print(generator.throw(ValueError))
print(next(generator))
print(next(generator))
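# Request #2 (a "reverse throw" sketch)

As noted in the question, a plain `raise` ends the generator for good. The usual workaround is to `yield` the exception object (or some sentinel) to the caller and let the caller `.send()` a repaired value back, after which the generator carries on. A minimal sketch of that pattern (the names are made up for illustration):

def numbers(raw_values):
    for raw in raw_values:
        try:
            yield int(raw)
        except ValueError as exc:
            # hand the problem to the caller instead of raising;
            # whatever the caller sends back is used as the repaired value
            repaired = yield exc
            yield repaired

gen = numbers(['1', '2', 'oops', '4'])
for item in gen:
    if isinstance(item, Exception):
        item = gen.send(0)  # the caller "repairs" the bad value
    print(item)

The trade-off versus .throw() is that control flow stays explicit at every yield point, but the caller has to check what kind of object it received.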
|
Wagtail one page with different content
Question: I am new to Wagtail and Python, so I could appreciate some help with my
problem.
I have a web app (wagtail web site + rest api backend).
On my website I have 2 pages:
* HomePage with list of accessible objects (e.g. photos)
* PhotoPage with a detailed information on photo
What I want to achieve:
* When I click on photo on homepage I am redirected to the photopage
* I fill the photopage with information I got from backend
* And the photopage url is something like this <http://example.com/photo?id=12345>
So, I want to
* have 1 model for photopage
* fill photopage based on a requested url (i.e. from homepage I redirect user to example.com/photo?id=12345 and it is filled with information on photo with id=12345)
I guess there should be some middleware to parse requested url to get the id
and fill the page with info from API. Is there any standard solution to this
issue?
Answer: Page objects (and any classes that inherit from Page) have a get_context
method that can be used to add context before your templates are rendered.
class PhotoPage(Page):
# your model definition ...
def get_context(self, request):
context = super(PhotoPage, self).get_context(request)
photo_pk = request.GET.get('id',None)
photo = get_object_or_404(YourPhotoModel,pk=photo_pk) # if no matching photo, return 404. You can do whatever you like instead :)
context['photo'] = photo
return context
Now in your photo template you can access your Photo model instance
directly...
{{ photo.some_attribute }}
{{ photo.some_other_attribute }}
|
Python email - 8bit MIME support
Question: I'm writing a simple MUA application, and I have trouble generating the
message.
I want my program to automatically detect whether the SMTP server supports
`8bit MIME`, and if it does, generate a message where the plain-text part is
encoded as 8 bit. In the MIME headers it should look like this:
`Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-
Encoding: 8bit`
The major problem is that Python 3.4's email package has no `8-bit` encoder, just
`base64` and `quoted printable`.
In my case it looks like this:
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
msg = MIMEMultipart()
msg['From'] = '[email protected]'
msg['To'] = '[email protected]'
msg['Subject'] = 'subject'
text = MIMEText("text here".encode('utf-8'), _charset='utf-8')
msg.attach(text)
# then sending...
The `text.as_string()` call returns
'Content-Type: text/plain; charset="utf-8"\nMIME-Version: 1.0\nContent-Transfer-Encoding: base64\n\ndGV4dCBoZXJl\n'
This message is quite good, but I want `8-bit` encoding, not
`base64`.
The question is: am I really forced to use `base64` encoding?
In `email.encoders` there are only `encode_base64` and `encode_quopri` functions.
Answer: Default body encoding for utf-8 is BASE64 which can be replaced locally:
from email import charset
ch = charset.Charset('utf-8')
ch.body_encoding = '8bit'
text = MIMEText("")
text.set_charset(ch)
text.set_payload("text here")
text.replace_header('Content-Transfer-Encoding', '8bit')
msg.attach(text)
or globally:
from email import charset
charset.add_charset('utf-8', charset.SHORTEST, '8bit')
text = MIMEText("text here".encode('utf-8'), _charset='utf-8')
text.replace_header('Content-Transfer-Encoding', '8bit')
msg.attach(text)
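For the detection part of the question, smtplib can report whether the server advertised 8BITMIME in its EHLO response; a sketch with a hypothetical host name:

import smtplib

server = smtplib.SMTP('mail.example.com')  # hypothetical server
server.ehlo()
use_8bit = server.has_extn('8bitmime')
server.quit()

You could then pick the 8bit or base64 body encoding depending on use_8bit.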
|
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/w9/1zsm5zp53jn8c0n0l4zrhzj40000gn/T/pip-build-mphahblv/http
Question: I have a problem while trying to install the package **http** using pip3:
$ pip3 install http
result is:
Collecting http
Using cached http-0.02.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/w9/1zsm5zp53jn8c0n0l4zrhzj40000gn/T/pip-build-mphahblv/http/setup.py", line 3, in <module>
import http
File "/private/var/folders/w9/1zsm5zp53jn8c0n0l4zrhzj40000gn/T/pip-build-mphahblv/http/http/__init__.py", line 17, in <module>
from request import Request
ImportError: No module named 'request'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/w9/1zsm5zp53jn8c0n0l4zrhzj40000gn/T/pip-build-mphahblv/http
Answer: You need to pip install requests first
|
Python - Creating an .ini or config file in the user's home directory
Question: This is probably a simple answer but I'm **really** new when it comes to
Python. I've been given existing code, and I'm trying to create a config file
for it, whereas previously it had everything hardcoded. I want to have a
`default` config that's with the source code, but then have the program create
a copy of that config file in the user's home directory (ex. ~/.config) when
they run it for the first time. That way they can edit the file in their home
directory rather than edit the source code. (At least I've been told that's
what's best.) I'm really bad with syntax and whatnot but I know it should be
something like:
try import ~/.config/cfg.ini
except if file doesn't exist, import defaultconfig and create a copy of
defaultconfig called `cfg.ini` in `~/.config` (the directory ~/.config would
have to be created then too)
Currently I just have `config.py` with my source code and then I use import
config, and config.(variable name) when I need the value.
I know this is formatted poorly, sorry.
Answer: It's generally not possible to import source code from a different directory
structure in Python. You should use a `configparser` to read and write data.
I divided the program into two files: first you run `write_default_config.py`
to generate the default config. Then on every invocation of `main.py` the
script checks for the existence of a user config file, and otherwise copies it
over from the source directory after creating the necessary folder.
**write_default_config.py:**
import configparser
config = configparser.ConfigParser()
config.add_section('section')
config['section']['setting_1'] = "hello"
config['section']['setting_2'] = "goodbye"
with open("default_config.ini", 'w') as f:
config.write(f)
**main.py:**
import os, shutil, configparser
user_config_dir = os.path.expanduser("~") + "/.config/Nick_H"
user_config = user_config_dir + "/user_config.ini"
if not os.path.isfile(user_config):
os.makedirs(user_config_dir, exist_ok=True)
shutil.copyfile("default_config.ini", user_config)
config = configparser.ConfigParser()
config.read(user_config)
print(config['section']['setting_1'])
print(config['section']['setting_2'])
You always have to read and write config data using sections. This is a hassle if
you don't feel you need sections, but there seems to be no way around it.
My example writes two strings and then prints them just to show how things
work; you should replace this with your own information.
|
rename files with list of special characters in python
Question: What's an efficient way to remove a list of special characters from a
filename? I want to replace 'spaces' with '.' and '(', ')', '[',']' with '_'.
I can do it for one, but I'm not sure how to replace multiple characters.
import os
import sys
files = os.listdir(os.getcwd())
for f in files:
os.rename(f, f.replace(' ', '.'))
Answer: You could do a for loop that checks each character in the file name and
replace:
import os
files = os.listdir(os.getcwd())
under_score = ['(',')','[',']'] #Anything to be replaced with '_' put in this list.
dot = [' '] #Anything to be replaced with '.' put in this list.
for f in files:
copy_f = f
for char in copy_f:
if (char in dot): copy_f = copy_f.replace(char, '.')
if (char in under_score): copy_f = copy_f.replace(char,'_')
os.rename(f,copy_f)
The trick with this is the second for loop runs **len(copy_f)** times which
will certainly replace all characters that match the criteria :) Also, there
was no need for this import:
import sys
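An alternative that avoids the inner character loop is a single regular-expression pass per file name; a sketch doing the same two substitutions:

import os
import re

for f in os.listdir(os.getcwd()):
    new_name = re.sub(r'[()\[\]]', '_', f).replace(' ', '.')
    if new_name != f:
        os.rename(f, new_name)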
|
Odoo - Python | File name too long, when processing images
Question: Today I was writing some code that resizes a few images that have already been
uploaded. It is actually a def function, called when a button is pressed
in the Odoo web client.
Code:
@api.multi
def resize_image(self):
for record in self:
Image.open(self.foto1).resize((800, 600),Image.ANTIALIAS).save(self.foto1, quality=100)
Error:
File "/opt/odoo/addons/asset/asset.py", line 209, in resize_image
Image.open(self.foto1).resize((800, 600),Image.ANTIALIAS).save(self.foto1, quality=100)
IOError: [Errno 36] File name too long: '/9j/4AAQSkZJRgABAQEASABIAAD/7QFKUGhvdG9zaG9wIDMuMAA4QklNBAQAAAAAAREcAVoAAxslRxwCAAACAAIcAhkAJkFGLVMgR
I can't give the full "file name" from the error because it's probably the image bytes themselves...
Answer:
import cStringIO
from PIL import Image
@api.multi
def resize_image(self):
    for record in self:
        # decode the base64 field, resize in memory, re-encode back into the field (assuming JPEG images)
        img = Image.open(cStringIO.StringIO(record.foto1.decode('base64')))
        buf = cStringIO.StringIO()
        img.resize((800, 600), Image.ANTIALIAS).save(buf, format='JPEG', quality=100)
        record.foto1 = buf.getvalue().encode('base64')
|
ValueError: Found arrays with inconsistent numbers of samples [1,299]
Question: Here are the data files
[here](https://d3c33hcgiwev3.cloudfront.net/_3e251a91db9262835c9e5855ae9e6573_perceptron-
test.csv?Expires=1454976000&Signature=Qn~RxsR1tlP2pruLzkiIaAVO988Q2RaY1A9DEOINYYSJGRtX7pssvFc-09rbyWLwGrzdaAg5wAf0dXWyA6DPaJ2cBjcKu3iris~TuAR4liGjj2uFaV6pFdAYgrpXaLKNpOe5J~J3l7LZJ4qz7Sp3Prm0EQuFAGf5hsf0OErAmvs_&Key-
Pair-Id=APKAJLTNE6QMUY6HBC5A) and
[here](https://d3c33hcgiwev3.cloudfront.net/_3e251a91db9262835c9e5855ae9e6573_perceptron-
train.csv?Expires=1454976000&Signature=LRNSab2UYWhM7JdvEz3NkT6YDKZBnP-OhM-
hyGGOKXxrvoyywj4VVhP8j3wQmFacR5vnhNeNNKcaQHbnX1OB8CmVDVg074Y79OwBcdAwbkyVd2FWYRIdQXRZ2lURrRIaPSI-
NPUpcO6KAb0WHeqYbqo-lC91qrm0yG4Hemvy7ZQ_&Key-Pair-Id=APKAJLTNE6QMUY6HBC5A).
You can download them by clicking on the links. I am using Pandas, Numpy
and Python 3.
Here is my code:
import pandas as pa
import numpy as nu
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
def get_accuracy(X_train, y_train, X_test, y_test):
perceptron = Perceptron()
perceptron.fit(X_train, y_train)
perceptron.transform(X_train)
prediction = perceptron.predict(X_test)
result = accuracy_score(y_test, prediction)
return result
test_data = pa.read_csv("C:/Users/Roman/Downloads/perceptron-test.csv")
test_data.columns = ["class", "f1", "f2"]
train_data = pa.read_csv("C:/Users/Roman/Downloads/perceptron-train.csv")
train_data.columns = ["class", "f1", "f2"]
scaler = StandardScaler()
scaler.fit_transform(train_data[train_data.columns[1:]]).reshape(-1,1)
X_train = scaler.transform(train_data[train_data.columns[1:]])
scaler.fit_transform(train_data[train_data.columns[0]])
y_train = scaler.transform(train_data[train_data.columns[0]])
scaler.fit_transform(test_data[test_data.columns[1:]])
X_test = scaler.transform(test_data[test_data.columns[1:]])
scaler.fit_transform(test_data[test_data.columns[0]])
y_test = scaler.transform(test_data[test_data.columns[0]])
scaled_accuracy = get_accuracy(nu.ravel(X_train), nu.ravel(y_train), nu.ravel(X_test), nu.ravel(y_test))
print(scaled_accuracy)
And here is error that I get:
Traceback (most recent call last):
File "C:/Users/Roman/PycharmProjects/data_project-1/lecture_2_perceptron.py", line 33, in <module>
scaled_accuracy = get_accuracy(nu.ravel(X_train), nu.ravel(y_train), nu.ravel(X_test), nu.ravel(y_test))
File "C:/Users/Roman/PycharmProjects/data_project-1/lecture_2_perceptron.py", line 9, in get_accuracy
perceptron.fit(X_train, y_train)
File "C:\Users\Roman\AppData\Roaming\Python\Python35\site-packages\sklearn\linear_model\stochastic_gradient.py", line 545, in fit
sample_weight=sample_weight)
File "C:\Users\Roman\AppData\Roaming\Python\Python35\site-packages\sklearn\linear_model\stochastic_gradient.py", line 389, in _fit
X, y = check_X_y(X, y, 'csr', dtype=np.float64, order="C")
File "C:\Users\Roman\AppData\Roaming\Python\Python35\site-packages\sklearn\utils\validation.py", line 520, in check_X_y
check_consistent_length(X, y)
File "C:\Users\Roman\AppData\Roaming\Python\Python35\site-packages\sklearn\utils\validation.py", line 176, in check_consistent_length
"%s" % str(uniques))
**ValueError: Found arrays with inconsistent numbers of samples: [ 1 299]**
Without scaling the data everything works fine, but after scaling it does not.
Answer: You should not call `fit_transform` each time you use scaler. You should `fit`
it once, on the training data, and later only `transform`; otherwise you get
different representations for training and testing (leading to the error provided).
There is also no point in scaling the labels.
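A sketch of what that looks like for this data (fit the scaler on the training features only, reuse it for the test features, and leave the class column alone):

scaler = StandardScaler()
X_train = scaler.fit_transform(train_data[["f1", "f2"]])
X_test = scaler.transform(test_data[["f1", "f2"]])
y_train = train_data["class"]
y_test = test_data["class"]
print(get_accuracy(X_train, y_train, X_test, y_test))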
|
div tag not populating, using selenium python, inner HTML
Question: I'm trying to access what appears to be a hidden table within a div tag on the
following page:
[whoscored.com](https://www.whoscored.com/Matches/959574/LiveStatistics/England-
Premier-League-2015-2016-West-Bromwich-Albion-Stoke)
...under the link "Passing"
from selenium import webdriver
driver = webdriver.Chrome()
base_url = "https://www.whoscored.com/Matches/959574/LiveStatistics/England-Premier-League-2015-2016-West-Bromwich-Albion-Stoke"
driver.get(base_url)
First i click the link:
elem = driver.find_element_by_link_text("Passing")
elem.click()
driver.implicitly_wait(10)
Next, I try to get the innerhtml of the tag where it appears this table
resides.
demo_div = driver.find_element_by_id("live-player-home-passing")
print demo_div.get_attribute('innerHTML')
print driver.execute_script("return arguments[0].innerHTML", demo_div)
But the innerhtml comes up empty in that tag. Very frustrating, because I see
the data on the page, but can't figure out a way to grab it.
Any ideas? I would greatly appreciate any help.
Edit: Here is the HTML:
<div id="live-player-home-passing" class="statistics-table-tab">
<div id="statistics-table-home-passing-loading"></div>
<div id="statistics-table-home-passing"></div>
<div id="statistics-table-home-passing-column-legend"></div>
</div>
The data is within 3rd tag, but only visible when I do "Inspect Element":
<div id="live-player-home-passing" class="statistics-table-tab" style="display: block;">
<div id="statistics-table-home-passing" data-fwsc="1">
<table id="top-player-stats-summary-grid" class="grid with-centered-columns hover">
<thead id="player-table-statistics-head">
.....
</thead>
</table>
</div>
Answer: The page source is what came from the server; Inspect Element shows your
browser's current representation of the DOM.
Try to get the innerHTML directly from the web element rather than through a JS call:
table = driver.find_element_by_id("top-player-stats-summary-grid")
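Since the table is filled in by JavaScript after the click, it is also worth waiting explicitly for it to appear instead of relying on implicitly_wait; a sketch using Selenium's explicit waits:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

table = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "top-player-stats-summary-grid"))
)
print table.get_attribute('innerHTML')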
|
Inconsistent results when concatenating parsed csv files
Question: I am puzzled with the following problem. I have a set of csv files, which I
parse iterativly. Before collecting the dataframes in a list, I apply some
function (as simple as `tmp_df*2`) to each of the `tmp_df`. It all worked
perfectly fine at first glance, until I've realized I have inconsistencies
with the results from run to run. For example, when I apply `df.std()` I might
receive for a first run:
In[2]: df1.std()
Out[2]:
some_int 15281.99
some_float 5.302293
and for a second run after:
In[3]: df2.std()
Out[3]:
some_int 15281.99
some_float 6.691013
Strangely, I don't observe inconsistencies like this one when I don't
manipulate the parsed data (simply comment out `tmp_df = tmp_df*2`). I also
noticed that for the columns with `int` datatypes the results are
consistent from run to run, which does not hold for `floats`. I suspect it has
to do with floating-point precision. I also cannot establish a pattern for how they
vary; it might be that I have the same results for two or three consecutive
runs. Maybe someone has an idea if I am missing something here. I am working
on a replication example, I'll edit asap, as I cannot share the underlying
data. Maybe someone can shed some light on this in the meantime. I am using
win8.1, pandas 17.1, python 3.4.3.
Code example:
import pandas as pd
import numpy as np
data_list = list()
csv_files = ['a.csv', 'b.csv', 'c.csv']
for csv_file in csv_files:
# load csv_file
tmp_df = pd.read_csv(csv_file, index_col='ID', dtype=np.float64)
# replace infs by na
tmp_df.replace([np.inf, -np.inf], np.nan, inplace=True)
# manipulate tmp_df
tmp_df = tmp_df*2
data_list.append(tmp_df)
df = pd.concat(data_list, ignore_index=True)
df.reset_index(inplace=True)
Update:
Running the same code and data on a UX system works perfectly fine.
Edit: I have managed to re-create the problem, it should run on win and ux.
I've tested on win8.1 facing the same problem when `with_function=True`
(typically after 1-5 runs); on ux it runs without problems.
`with_function=False` runs without differences on win and ux. I can also
reject the hypothesis that it is related to an `int` or `float` issue, as the
simulated `int` values also differ...
Here is the code:
import pandas as pd
import numpy as np
from pathlib import Path
from tempfile import gettempdir
def simulate_csv_data(tmp_dir,num_files=5):
""" simulate a csv files
:param tmp_dir: Path, csv files are saved to
:param num_files: int, how many csv files to simulate
:return:
"""
rows = 20000
columns = 5
np.random.seed(1282)
for file_num in range(num_files):
file_path = tmp_dir.joinpath(''.join(['df_', str(file_num), '.csv']))
simulated_df = pd.DataFrame(np.random.standard_normal((rows, columns)))
simulated_df['some_int'] = np.random.randint(0,100)
simulated_df.to_csv(str(file_path))
def get_csv_data(tmp_dir,num_files=5, with_function=True):
""" Collect various csv files and return a concatenated dfs
:param tmp_dir: Path, csv files are saved to
:param num_files: int, how many csv files to simulate
:param with_function: Bool, apply function to tmp_dataframe
:return:
"""
data_list = list()
for file_num in range(num_files):
# current file path
file_path = tmp_dir.joinpath(''.join(['df_', str(file_num), '.csv']))
# load csv_file
tmp_df = pd.read_csv(str(file_path), dtype=np.float64)
# replace infs by na
tmp_df.replace([np.inf, -np.inf], np.nan, inplace=True)
# apply function to tmp_dataframe
if with_function:
tmp_df = tmp_df*2
data_list.append(tmp_df)
df = pd.concat(data_list, ignore_index=True)
df.reset_index(inplace=True)
return df
def main():
# INPUT ----------------------------------------------
num_files = 5
with_function = True
max_comparisons = 50
# ----------------------------------------------------
tmp_dir = gettempdir()
# use temporary "non_existing" dir for new file
tmp_csv_folder = Path(tmp_dir).joinpath('csv_files_sdfs2eqqf')
# if exists already don't simulate data/files again
if tmp_csv_folder.exists() is False:
tmp_csv_folder.mkdir()
print('Simulating temp files...')
simulate_csv_data(tmp_csv_folder, num_files)
print('Getting benchmark data frame...')
df1 = get_csv_data(tmp_csv_folder, num_files, with_function)
df_is_same = True
count_runs = 0
# Run until different df is found or max runs exceeded
print('Comparing data frames...')
while df_is_same:
# get another data frame
df2 = get_csv_data(tmp_csv_folder, num_files, with_function)
count_runs += 1
# compare data frames
if df1.equals(df2) is False:
df_is_same = False
print('Found unequal df after {} runs'.format(count_runs))
# print out a standard deviations (arbitrary example)
print('Std Run1: \n {}'.format(df1.std()))
print('Std Run2: \n {}'.format(df2.std()))
if count_runs > max_comparisons:
df_is_same = False
print('No unequal df found after {} runs'.format(count_runs))
print('Delete the following folder if no longer needed: "{}"'.format(
str(tmp_csv_folder)))
if __name__ == '__main__':
main()
Answer: Your variations are caused by something else, like input data changing between
executions, or source code changes.
Float precision never gives different results between executions.
By the way, clean up your examples and you will find the bug. At the moment you say
something about an int but display a decimal value instead!!
|
convert python script to exe and run as windows service
Question: I just created a Python script that solves a problem I have, but I want to
convert this script to an exe file so it can run on any Windows machine without
needing to install Python on it. I have searched for how I could convert the .py
to an exe and run it, and I found that I could use a tool called py2exe. The
problem is that I want to convert my file to an exe and run it as a Windows
service, continuously, on my PC.
Here is my script:
import socket, sys, serial
HOST = '' # Symbolic name, meaning all available interfaces
PORT = 8888 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
#Bind socket to local host and port
try:
s.bind((HOST, PORT))
except socket.error as msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
print 'Socket bind complete'
#Start listening on socket
s.listen(10)
print 'Socket now listening'
# try:
#now keep talking with the client
while 1:
#wait to accept a connection - blocking call
conn, addr = s.accept()
# print('Connected with {}:{}'.format(addr[0], addr[1]))
str = conn.recv(100)
n_str = str[8:]
last_c = n_str.find('%')
last_str = n_str[:last_c]
final_str = last_str.replace('+',' ')[:-3]
print(final_str)
try:
pole = serial.Serial('COM4')
pole.write(' \r\n')
pole.write(final_str+'\r\n')
pole.close()
except:
print(Exception.message)
s.close()
Could I have some help here?
Answer: Python is an interpreted language, not a compiled one. As such, it needs its
interpreter in order to be executed.
Bearing that in mind, you can use this: <http://www.py2exe.org>
More options given here: [a good python to exe
compiler?](https://stackoverflow.com/questions/14165398/a-good-python-to-exe-
compiler)
or even better, in here: <https://wiki.python.org/moin/DistributionUtilities>
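Neither link covers the "run as a Windows service" part. Two common approaches, offered as suggestions rather than tested recipes: wrap the built exe with NSSM (for example nssm install MyService C:\path\to\yourscript.exe, where the service name and path are placeholders), or write the service in Python with the pywin32 win32serviceutil framework and build that with py2exe.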
|
Django upload and process file with no data retention
Question: Python: 2.7.11
Django: 1.9
I want to upload a csv file to Django and analyze it with a Python class. No
saving is allowed and the file is only needed to reach the class to be
analyzed. I'm using Dropzone.js for the form but I don't understand how I
should configure/program the views to achieve this.
<form action="/upload/" method="post" enctype="multipart/form-data" class="dropzone" id="dropzone">
{% csrf_token %}
<div class="fallback">
<input name="file" type="file" multiple />
</div>
</form>
I have found an
[article](https://amatellanes.wordpress.com/2013/11/05/dropzonejs-django-how-
to-build-a-file-upload-form/) about this but it describes saving and is based
on Django 1.5.
view.py
def upload(request):
if request.method == 'POST':
file = FileUploadForm(request.POST)
if file.is_valid():
return HttpResponseRedirect('/upload/')
else:
file = FileUploadForm()
return render(request, 'app/upload.html', {'file': file})
forms.py
from django import forms
class FileUploadForm(forms.Form):
file = forms.FileField()
**Closing Update:** The most important difference between the helping answer
and my situation is that I had to decode my input. See the following line as
mine csv_file in handle_csv_data:
StringIO(content.read().decode('utf-8-sig'))
Answer: Access the csv file in the view function. If you are using python 3, you must
wrap the `InMemoryUploadedFile` in a `TextIOWrapper` to parse it with the
`csv` module.
In this example the csv is parsed and passed back as a list named 'content'
that will be displayed as a table.
**views.py**
import csv
import io # python 3 only
from django.shortcuts import render
def handle_csv_data(csv_file):
csv_file = io.TextIOWrapper(csv_file) # python 3 only
dialect = csv.Sniffer().sniff(csv_file.read(1024), delimiters=";,")
csv_file.seek(0)
reader = csv.reader(csv_file, dialect)
return list(reader)
def upload_csv(request):
csv_content=[]
if request.method == 'POST':
csv_file = request.FILES['file'].file
csv_content = handle_csv_data(csv_file)
    return render(request, 'upload.html', {'content': csv_content})
Your original code did not use django's form framework correctly, so I just
dropped that from this example. You should also implement error handling for when
the uploaded file is invalid or missing.
**upload.html**
<form action="/upload/"
method="post"
enctype="multipart/form-data"
class="dropzone"
id="dropzone">
{% csrf_token %}
<div class="fallback">
<input name="file" type="file"/>
<input type="submit"/>
</div>
</form>
{% if content %}
<table>
{% for row in content %}
<tr>
{% for col in row %}
<td>{{ col }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
{% endif %}
I've added a 'submit' button so this works without the dropzone thing. I also
removed 'multiple' from the file input, to keep the example simple. Finally
there's a table if the template receives content from a parsed csv. But when
using dropzone.js, you have to use a javascript callback function to display
the table.
|
How to pass params to a ML Pipeline.fit method?
Question: I am trying to build a clustering mechanism using
* Google Dataproc + Spark
* Google Bigquery
* Create a job using Spark ML KMeans+pipeline
As follows:
* * *
1. Create user level based feature table in bigquery
Example: How the feature table looks like
`userid |x1 |x2 |x3 |x4 |x5 |x6 |x7 |x8 |x9 |x10
00013 |0.01 | 0 |0 |0 |0 |0 |0 |0.06 |0.09 | 0.001`
2. Spin up a default setting cluster, am using gcloud command line interface to create the cluster and run jobs as shown [here](https://cloud.google.com/hadoop/examples/bigquery-connector-mapreduce-example)
3. Using the starter code provided, I read the BQ table, convert RDD into a Dataframe and pass to KMeans model/pipeline:
#!/usr/bin/python
"""BigQuery I/O PySpark example."""
import json
import pprint
import subprocess
import pyspark
import numpy as np
from pyspark.ml.clustering import KMeans
from pyspark import SparkContext
from pyspark.ml import Pipeline
from pyspark.sql import SQLContext
from pyspark.mllib.linalg import Vectors, _convert_to_vector
from pyspark.sql.types import Row
from pyspark.mllib.common import callMLlibFunc, callJavaFunc, _py2java, _java2py
sc = pyspark.SparkContext()
# Use the Google Cloud Storage bucket for temporary BigQuery export data used by the InputFormat.
# This assumes the Google Cloud Storage connector for Hadoop is configured.
bucket = sc._jsc.hadoopConfiguration().get('fs.gs.system.bucket')
project = sc._jsc.hadoopConfiguration().get('fs.gs.project.id')
input_directory ='gs://{}/hadoop/tmp/bigquery/pyspark_input'.format(bucket)
conf = {# Input Parameters
'mapred.bq.project.id': project,
'mapred.bq.gcs.bucket': bucket,
'mapred.bq.temp.gcs.path': input_directory,
'mapred.bq.input.project.id': 'my-project',
'mapred.bq.input.dataset.id': 'tempData',
'mapred.bq.input.table.id': 'userFeatureInBQ'}
# Load data in from BigQuery.
table_data = sc.newAPIHadoopRDD(
'com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat',
'org.apache.hadoop.io.LongWritable',
'com.google.gson.JsonObject',conf=conf)
# Transform the userid-Feature table into feature_data RDD
feature_data = (
table_data
.map(lambda (_, record): json.loads(record))
.map(lambda x:(x['x0'],x['x1'],x['x2'],x['x3'],x['x4'],
x['x5'],x['x6'],x['x7'],x['x8'],
x['x9'],x['x10'])))
# Function to convert each line in RDD into an array, return the vector
def parseVector(values):
array = np.array([float(v) for v in values])
return _convert_to_vector(array)
# Convert the RDD into a row wise RDD
data = feature_data.map(parseVector)
row_rdd = data.map(lambda x: Row(x))
sqlContext = SQLContext(sc)
# cache the RDD to improve performance
row_rdd.cache()
# Create a Dataframe
df = sqlContext.createDataFrame(row_rdd, ["features"])
# cache the Dataframe
df.cache()
Here is the Schema and head() which I print to the console:
|-- features: vector (nullable = true)
[Row(features=DenseVector([0.01,0,0,0,0,0,0,0.06,0.09,0.001]))]
* * *
4. Run the clustering KMeans algorithm in following manner
* Run the model multiple times
* With different parameters (Namely, change the #clusters and init_mode)
* Calculate error or Cost metric
* Choose best model-parameter combination
* Create pipeline with KMeans as an estimator
* Pass multiple parameters using paramMap
#Define the paramMap & model
paramMap = ({'k':3,'initMode':'kmeans||'},{'k':3,'initMode':'random'},
{'k':4,'initMode':'kmeans||'},{'k':4,'initMode':'random'},
{'k':5,'initMode':'kmeans||'},{'k':5,'initMode':'random'},
{'k':6,'initMode':'kmeans||'},{'k':6,'initMode':'random'},
{'k':7,'initMode':'kmeans||'},{'k':7,'initMode':'random'},
{'k':8,'initMode':'kmeans||'},{'k':8,'initMode':'random'},
{'k':9,'initMode':'kmeans||'},{'k':9,'initMode':'random'},
{'k':10,'initMode':'kmeans||'},{'k':10,'initMode':'random'})
km = KMeans()
# Create a Pipeline with estimator stage
pipeline = Pipeline(stages=[km])
# Call & fit the pipeline with the paramMap
models = pipeline.fit(df, paramMap)
print models
* * *
I get the following output with a warning
`7:03:24 WARN org.apache.spark.mllib.clustering.KMeans: The input data was not
directly cached, which may hurt performance if its parent RDDs are also
uncached. [PipelineModel_443dbf939b7bd3bf7bfc,
PipelineModel_4b64bb761f4efe51da50, PipelineModel_4f858411ac19beacc1a4,
PipelineModel_4f58b894f1d14d79b936, PipelineModel_4b8194f7a5e6be6eaf33,
PipelineModel_4fc5b6370bff1b4d7dba, PipelineModel_43e0a196f16cfd3dae57,
PipelineModel_47318a54000b6826b20e, PipelineModel_411bbe1c32db6bf0a92b,
PipelineModel_421ea1364d8c4c9968c8, PipelineModel_4acf9cdbfda184b00328,
PipelineModel_42d1a0c61c5e45cdb3cd, PipelineModel_4f0db3c394bcc2bb9352,
PipelineModel_441697f2748328de251c, PipelineModel_4a64ae517d270a1e0d5a,
PipelineModel_4372bc8db92b184c05b0]`
* * *
#Print the cluster centers:
for model in models:
print vars(model)
print model.stages[0].clusterCenters()
print model.extractParamMap()
Output: `[array([7.64676638e-07, 3.58531391e-01, 1.68879698e-03,
0.00000000e+00, 1.53477043e-02, 1.25822915e-02, 0.00000000e+00,
6.93060772e-07, 1.41766847e-03, 1.60941306e-02], array([2.36494105e-06,
1.87719732e-02, 3.73829379e-03, 0.00000000e+00, 4.20724542e-02,
2.28675684e-02, 0.00000000e+00, 5.45002249e-06, 1.17331153e-02,
1.24364600e-02])`
* * *
Here it the list of questions and need help with:
* I get a list with only 2 cluster centers as arrays for all models,
* It seems the KMeans models is defaulting to k=2 when I try to access the pipeline? Why would this happen?
* The last loop is supposed to access the pipelineModel and the 0th stage and run the clusterCenter() method? Is this the right method?
* Why do I get the error that data is uncached?
* I could not find how to compute the WSSSE or any equivalent method like .computeCost()(for mllib) when using a pipeline? How can I compare the different models based on different parameters?
* I tried the following code to run the .computeCost method as defined in the source code [here](https://fossies.org/dox/spark-1.6.0/classpyspark_1_1mllib_1_1clustering_1_1KMeansModel.html#a6608cfb730cbd5eabb6ba722565317f5):
* This defeats the purpose of running KMeans model and model selection in parallel using pipeline, however I have tried the following code:
#computeError
def computeCost(model, rdd):
"""Return the K-means cost (sum of squared distances of
points to their nearest center) for this model on the given data."""
cost = callMLlibFunc("computeCostKmeansModel",
rdd.map(_convert_to_vector),
[_convert_to_vector(c) for c in model.clusterCenters()])
return cost
cost= np.zeros(len(paramMap))
for i in range(len(paramMap)):
cost[i] = cost[i] + computeCost(model[i].stages[0], feature_data)
print cost
This prints out the following at the end of the loop:
`[ 634035.00294687 634035.00294687 634035.00294687 634035.00294687
634035.00294687 634035.00294687 634035.00294687 634035.00294687
634035.00294687 634035.00294687 634035.00294687 634035.00294687
634035.00294687 634035.00294687 634035.00294687 634035.00294687]`
* The cost/error calculated is the same for each model? Again cannot access the pipelineModel with the correct parameters.
Any help/ guidance is much appreciated! Thanks!
Answer: Your param map is not properly defined. It should map from the specific parameter
objects to the values, not from arbitrary names. You get `k` equal to 2 because the
parameters you pass are not utilized and every model uses exactly the same
default parameters.
Lets start with example data:
import numpy as np
from pyspark.mllib.linalg import Vectors
df = (sc.textFile("data/mllib/kmeans_data.txt")
.map(lambda s: Vectors.dense(np.fromstring(s, dtype=np.float64, sep=" ")))
.zipWithIndex()
.toDF(["features", "id"]))
and a `Pipeline`:
from pyspark.ml.clustering import KMeans
from pyspark.ml import Pipeline
km = KMeans()
pipeline = Pipeline(stages=[km])
As mentioned above parameter map should use specific parameters as the keys.
For example:
params = [
{km.k: 2, km.initMode: "k-means||"},
{km.k: 3, km.initMode: "k-means||"},
{km.k: 4, km.initMode: "k-means||"}
]
models = pipeline.fit(df, params=params)
assert [len(m.stages[0].clusterCenters()) for m in models] == [2, 3, 4]
Notes:
* correct `initMode` for K-means|| is `k-means||` not `kmeans||`.
  * using a parameter map in a Pipeline doesn't mean that models are trained in parallel. Spark parallelizes the training process over the data, not over the params. It is nothing more than a convenience method.
  * you get the warning about uncached data because the actual input to K-Means is not the `DataFrame` but a transformed RDD.
|
Passing custom arguments to a Blender Operator as if it were a function
Question: I created a python script in Blender which obtains information about an
object. Said information is then stored in a list of numpy arrays for later
use. Initially, I wanted to use that information to have the camera move in a
certain way, but running the script freezes the 3D enviornment until the end
of execution.
Multiple folks suggest using Operators, but Operators (as far as I know) only
accept arguments in a very special and inconvenient way. For instance, here we
have an example operator
import bpy
class DialogOperator(bpy.types.Operator):
bl_idname = "object.dialog_operator"
bl_label = "Simple Dialog Operator"
my_float = bpy.props.FloatProperty(name="Some Floating Point")
my_bool = bpy.props.BoolProperty(name="Toggle Option")
my_string = bpy.props.StringProperty(name="String Value")
def execute(self, context):
message = "Popup Values: %f, %d, '%s'" % \
(self.my_float, self.my_bool, self.my_string)
self.report({'INFO'}, message)
return {'FINISHED'}
def invoke(self, context, event):
wm = context.window_manager
return wm.invoke_props_dialog(self)
bpy.utils.register_class(DialogOperator)
# test call
bpy.ops.object.dialog_operator('INVOKE_DEFAULT')
One can optionally write
# test call
bpy.ops.object.dialog_operator('INVOKE_DEFAULT', my_float=2.3, ...)
in order to set values for the parameters defined within the operator. My
problem is that only fields within the operator of the form
"bpy.props.****Property" can be assigned in this way.
Is there anyone who knows of a way to pass **ANY** desired set of argument to
an operator in the same way arguments can be passed to a function?
NOTE: An ugly way I thought of to pass the arguments indirectly is by declaring
the set of variables you want to pass to be global variables...
Answer: To move your camera and get an updated 3DView, try using a [modal
operator](https://www.blender.org/api/blender_python_api_current/bpy.types.Operator.html#modal-
execution). When `execute()` is called and it returns `{'RUNNING_MODAL'}`, the
operator's `modal()` method is called repeatedly until it returns
`{'FINISHED'}` or `{'CANCELLED'}`. Within `modal()` you can change things and
get the 3DView updated between calls.
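A minimal sketch of such a modal operator (the class name and idname are made up for illustration, written against the 2.7x API the docs linked in this answer describe):
import bpy

class MoveCameraModal(bpy.types.Operator):
    bl_idname = "view3d.move_camera_modal"
    bl_label = "Move Camera Modal"

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            # change something on every tick; the 3D View redraws between calls
            context.scene.camera.location.x += 0.01
        return {'RUNNING_MODAL'}

    def execute(self, context):
        wm = context.window_manager
        self._timer = wm.event_timer_add(0.1, context.window)  # drives the TIMER events
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(MoveCameraModal)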
Using
[`bpy.props`](https://www.blender.org/api/blender_python_api_current/bpy.props.html)
to add properties to an operator class is how you get properties that work
within blender. While we can add normal attributes that the operator can use
within itself, we must use `bpy.props` to define the properties available to
an operator that are set during the operator's execute and affect its
operation. These properties can be adjusted by the user in the operator
properties panel and affect the undo/redo of the operator.
There are "Vector" versions of Bool, Int and Float properties so you can have
multiple values in one property. For more flexibility you might want to use a
[PropertyGroup](https://www.blender.org/api/blender_python_api_current/bpy.props.html#collection-
example) or look at defining custom [get and set
functions](https://www.blender.org/api/blender_python_api_current/bpy.props.html#get-
set-example).
Other options you have for storing values are adding properties to the scene
or object, you can also use module properties (an addon is a python module) or
use
[AddonPreferences](https://www.blender.org/api/blender_python_api_current/bpy.types.AddonPreferences.html).
|
migrating to mysql in django
Question: I want to migrate from sqlite3 to mysql in django. I used this command:
python manage.py dumpdata > datadump.json
after that I changed setting of my django server and configured it with my new
mysql database and then used following command:
python manage.py loaddata datadump.json
but I got this error :
> integrityError: Problem installing fixtures: The row in table
> 'django_admin_log' with primary key '20' has an invalid foregin key:
> django_admin_log.user_id contains a value '19' that does not have a
> corresponding value in auth_user.id.
Answer: You have a consistency error in your data: the django_admin_log table refers to
an auth_user row that does not exist. sqlite does not enforce foreign key
constraints, but mysql does. You need to fix the data and then you can import it
into mysql.
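If fixing the offending rows by hand is impractical and the admin click-history is not needed, a common workaround is to leave it (and the content types) out of the dump, e.g.:
python manage.py dumpdata --exclude admin.logentry --exclude contenttypes > datadump.json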
|
Browser() in Python shows errors in IDLE
Question: I have some code here which is basically a bot that spams a specific google
form:
while True:
browser = Browser()
print("Form Filling Begun")
browser.visit('https://docs.google.com/forms/d/1Lyoox1FIpOP5nceVHqmdA3Exqf8PMCxaBgWIYQ67yX8/viewform?c=0&w=1')
browser.fill('entry.1796849606', 'test')
browser.fill('entry.1233774681', 'test')
browser.fill('entry.1687034525', 'test')
browser.fill('entry.2085519362', 'test')
browser.fill('entry.2085519362', 'test')
browser.fill('entry.87435301', 'test')
browser.find_by_name('entry.434307791', 'test')
browser.find_by_name('submit').click()
print("Form Filled")
browser.quit()
time.sleep(10)
When I ran it in IDLE, it said there was a problem on line 2. That Browser()
wasn't defined. Now, I know that my friend was able to accomplish this with
this code, so I'm not sure what's wrong. I'm sure it's a really basic problem,
but I'm a complete n00b at Python, so I don't really know what to do.
Thanks,
Sam.
EDIT: So I went on the Splinter website and put its own sample code in IDLE,
to see what happens and this is what the console was showing
Traceback (most recent call last):
File "C:/Users/Sam/Desktop/test.py", line 3, in <module>
with Browser() as browser:
File "C:\Python27\lib\site-packages\splinter\browser.py", line 63, in Browser
return driver(*args, **kwargs)
File "C:\Python27\lib\site-packages\splinter\driver\webdriver\firefox.py", line 39, in __init__
self.driver = Firefox(firefox_profile)
File "C:\Python27\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 82, in __init__
keep_alive=True)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 87, in __init__
self.start_session(desired_capabilities, browser_profile)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 141, in start_session
'desiredCapabilities': desired_capabilities,
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 201, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 193, in check_response
raise exception_class(message, screen, stacktrace)
WebDriverException: Message: a.addEventListener is not a function
I'm out of my depth here. When it opened a browser window, the URL was
`about:blank&utm_content=firstrun`. What does that mean??
Answer: Did you import Browser? Try:
from splinter.browser import Browser
|
Key echo in Python in separate thread doesn't display first key stroke
Question: I would try to post a minimal working example, but unfortunately this problem
just requires a lot of pieces so I have stripped it down best I can.
First of all, I'm using a simple script that simulates pressing keys through a
function call. This is tweaked from
[here](http://stackoverflow.com/a/13290031/2924421).
import ctypes
SendInput = ctypes.windll.user32.SendInput
PUL = ctypes.POINTER(ctypes.c_ulong)
class KeyBdInput(ctypes.Structure):
_fields_ = [("wVk", ctypes.c_ushort),
("wScan", ctypes.c_ushort),
("dwFlags", ctypes.c_ulong),
("time", ctypes.c_ulong),
("dwExtraInfo", PUL)]
class HardwareInput(ctypes.Structure):
_fields_ = [("uMsg", ctypes.c_ulong),
("wParamL", ctypes.c_short),
("wParamH", ctypes.c_ushort)]
class MouseInput(ctypes.Structure):
_fields_ = [("dx", ctypes.c_long),
("dy", ctypes.c_long),
("mouseData", ctypes.c_ulong),
("dwFlags", ctypes.c_ulong),
("time",ctypes.c_ulong),
("dwExtraInfo", PUL)]
class Input_I(ctypes.Union):
_fields_ = [("ki", KeyBdInput),
("mi", MouseInput),
("hi", HardwareInput)]
class Input(ctypes.Structure):
_fields_ = [("type", ctypes.c_ulong),
("ii", Input_I)]
def getKeyCode(unicodeKey):
k = unicodeKey
curKeyCode = 0
if k == "up": curKeyCode = 0x26
elif k == "down": curKeyCode = 0x28
elif k == "left": curKeyCode = 0x25
elif k == "right": curKeyCode = 0x27
elif k == "home": curKeyCode = 0x24
elif k == "end": curKeyCode = 0x23
elif k == "insert": curKeyCode = 0x2D
elif k == "pgup": curKeyCode = 0x21
elif k == "pgdn": curKeyCode = 0x22
elif k == "delete": curKeyCode = 0x2E
elif k == "\n": curKeyCode = 0x0D
if curKeyCode == 0:
return 0, int(unicodeKey.encode("hex"), 16), 0x0004
else:
return curKeyCode, 0, 0
def PressKey(unicodeKey):
key, unikey, uniflag = getKeyCode(unicodeKey)
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( key, unikey, uniflag, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))
def ReleaseKey(unicodeKey):
key, unikey, uniflag = getKeyCode(unicodeKey)
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( key, unikey, uniflag + 0x0002, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))
I stored this in a file named keyPress.py.
Using this, I wanted to make a simple program that could detect what the user
was typing while they were typing it in the python shell. The idea was that I
would use msvcrt.getch() to get the key pressed, then the script above to make
it seem like it was still pressed (and "echo" the key press in a sense")
Here is the code:
import keyPress
import msvcrt
import threading
def getKey():
k = msvcrt.getch()
# Escaped Key: 224 is on the keyboard, 0 is on the numpad
if int(k.encode("hex"), 16) == 224 or int(k.encode("hex"), 16) == 0:
k = msvcrt.getch()
if k == "H": k = "up"
elif k == "P": k = "down"
elif k == "K": k = "left"
elif k == "M": k = "right"
elif k == "G": k = "home"
elif k == "O": k = "end"
elif k == "R": k = "insert"
elif k == "I": k = "pgup"
elif k == "Q": k = "pgdn"
elif k == "S": k = "delete"
# Fix weird linebreak
if k == "\r":
k = "\n"
return k
def actualGetKeys():
while True:
k = getKey()
keyPress.PressKey(k)
keyPress.ReleaseKey(k)
def getKeys():
p = threading.Thread(target=actualGetKeys)
p.daemon = True
p.start()
I stored this in a file named keyGet.py.
This is all working very well, except that whenever the user presses enter,
the first key isn't displayed on the screen. The console still knows that you
typed it, it just doesn't show up there. Something like this:
[](http://i.stack.imgur.com/lVD8H.png)
What is happening? I've tried many many things and I can't seem to get this
behavior to change.
I am now able to get this essentially working, as in it can capture key input
asynchronously while a script is running, and execute with the text of each
command you type into a command prompt (so you could, say, store these to an
array). The only problem I am running into is something like this:
[](http://i.stack.imgur.com/CxiS6.png)
I know this is due to essentially having to have a robot retype their input
after they type it, I'm just wondering if there is a way to do this that
prevents that input from actually being displayed when the robot types it, so
it acts just like the user would expect.
Answer: Here is the resulting code, basically written by eryksun's comments because
somehow he knows all.
This is called readcmd.py
# Some if this is from http://nullege.com/codes/show/src@e@i@einstein-HEAD@Python25Einstein@[email protected]/380/win32api.GetStdHandle
# and
# http://nullege.com/codes/show/src@v@i@VistA-HEAD@Python@[email protected]/901/win32console.GetStdHandle.PeekConsoleInput
from ctypes import *
import time
import threading
from win32api import STD_INPUT_HANDLE, STD_OUTPUT_HANDLE
from win32console import GetStdHandle, KEY_EVENT, ENABLE_WINDOW_INPUT, ENABLE_MOUSE_INPUT, ENABLE_ECHO_INPUT, ENABLE_LINE_INPUT, ENABLE_PROCESSED_INPUT
import keyPress
class CaptureLines():
def __init__(self):
self.stopLock = threading.Lock()
self.isCapturingInputLines = False
self.inputLinesHookCallback = CFUNCTYPE(c_int)(self.inputLinesHook)
self.pyosInputHookPointer = c_void_p.in_dll(pythonapi, "PyOS_InputHook")
self.originalPyOsInputHookPointerValue = self.pyosInputHookPointer.value
self.readHandle = GetStdHandle(STD_INPUT_HANDLE)
self.readHandle.SetConsoleMode(ENABLE_LINE_INPUT|ENABLE_ECHO_INPUT|ENABLE_PROCESSED_INPUT)
def inputLinesHook(self):
self.readHandle.SetConsoleMode(ENABLE_LINE_INPUT|ENABLE_ECHO_INPUT|ENABLE_PROCESSED_INPUT)
inputChars = self.readHandle.ReadConsole(10000000)
self.readHandle.SetConsoleMode(ENABLE_LINE_INPUT|ENABLE_PROCESSED_INPUT)
if inputChars == "\r\n":
keyPress.KeyPress("\n")
return 0
inputChars = inputChars[:-2]
inputChars += "\n"
for c in inputChars:
keyPress.KeyPress(c)
self.inputCallback(inputChars)
return 0
def startCapture(self, inputCallback):
self.stopLock.acquire()
try:
if self.isCapturingInputLines:
raise Exception("Already capturing keystrokes")
self.isCapturingInputLines = True
self.inputCallback = inputCallback
self.pyosInputHookPointer.value = cast(self.inputLinesHookCallback, c_void_p).value
except Exception as e:
self.stopLock.release()
raise
self.stopLock.release()
def stopCapture(self):
self.stopLock.acquire()
try:
if not self.isCapturingInputLines:
raise Exception("Keystrokes already aren't being captured")
self.readHandle.SetConsoleMode(ENABLE_LINE_INPUT|ENABLE_ECHO_INPUT|ENABLE_PROCESSED_INPUT)
self.isCapturingInputLines = False
self.pyosInputHookPointer.value = self.originalPyOsInputHookPointerValue
except Exception as e:
self.stopLock.release()
raise
self.stopLock.release()
And here is keyPress.py
# Modified from http://stackoverflow.com/a/13615802/2924421
import ctypes
from ctypes import wintypes
import time
user32 = ctypes.WinDLL('user32', use_last_error=True)
INPUT_MOUSE = 0
INPUT_KEYBOARD = 1
INPUT_HARDWARE = 2
KEYEVENTF_EXTENDEDKEY = 0x0001
KEYEVENTF_KEYUP = 0x0002
KEYEVENTF_UNICODE = 0x0004
KEYEVENTF_SCANCODE = 0x0008
MAPVK_VK_TO_VSC = 0
# C struct definitions
wintypes.ULONG_PTR = wintypes.WPARAM
SendInput = ctypes.windll.user32.SendInput
PUL = ctypes.POINTER(ctypes.c_ulong)
class KEYBDINPUT(ctypes.Structure):
_fields_ = (("wVk", wintypes.WORD),
("wScan", wintypes.WORD),
("dwFlags", wintypes.DWORD),
("time", wintypes.DWORD),
("dwExtraInfo", wintypes.ULONG_PTR))
class MOUSEINPUT(ctypes.Structure):
_fields_ = (("dx", wintypes.LONG),
("dy", wintypes.LONG),
("mouseData", wintypes.DWORD),
("dwFlags", wintypes.DWORD),
("time", wintypes.DWORD),
("dwExtraInfo", wintypes.ULONG_PTR))
class HARDWAREINPUT(ctypes.Structure):
_fields_ = (("uMsg", wintypes.DWORD),
("wParamL", wintypes.WORD),
("wParamH", wintypes.WORD))
class INPUT(ctypes.Structure):
class _INPUT(ctypes.Union):
_fields_ = (("ki", KEYBDINPUT),
("mi", MOUSEINPUT),
("hi", HARDWAREINPUT))
_anonymous_ = ("_input",)
_fields_ = (("type", wintypes.DWORD),
("_input", _INPUT))
LPINPUT = ctypes.POINTER(INPUT)
def _check_count(result, func, args):
if result == 0:
raise ctypes.WinError(ctypes.get_last_error())
return args
user32.SendInput.errcheck = _check_count
user32.SendInput.argtypes = (wintypes.UINT, # nInputs
LPINPUT, # pInputs
ctypes.c_int) # cbSize
def KeyDown(unicodeKey):
key, unikey, uniflag = GetKeyCode(unicodeKey)
x = INPUT( type=INPUT_KEYBOARD, ki= KEYBDINPUT( key, unikey, uniflag, 0))
user32.SendInput(1, ctypes.byref(x), ctypes.sizeof(x))
def KeyUp(unicodeKey):
key, unikey, uniflag = GetKeyCode(unicodeKey)
extra = ctypes.c_ulong(0)
x = INPUT( type=INPUT_KEYBOARD, ki= KEYBDINPUT( key, unikey, uniflag | KEYEVENTF_KEYUP, 0))
user32.SendInput(1, ctypes.byref(x), ctypes.sizeof(x))
def KeyPress(unicodeKey):
time.sleep(0.0001)
KeyDown(unicodeKey)
time.sleep(0.0001)
KeyUp(unicodeKey)
time.sleep(0.0001)
def GetKeyCode(unicodeKey):
k = unicodeKey
curKeyCode = 0
if k == "up": curKeyCode = 0x26
elif k == "down": curKeyCode = 0x28
elif k == "left": curKeyCode = 0x25
elif k == "right": curKeyCode = 0x27
elif k == "home": curKeyCode = 0x24
elif k == "end": curKeyCode = 0x23
elif k == "insert": curKeyCode = 0x2D
elif k == "pgup": curKeyCode = 0x21
elif k == "pgdn": curKeyCode = 0x22
elif k == "delete": curKeyCode = 0x2E
elif k == "\n": curKeyCode = 0x0D
if curKeyCode == 0:
return 0, int(unicodeKey.encode("hex"), 16), KEYEVENTF_UNICODE
else:
return curKeyCode, 0, 0
|
Python - Exclude contents of one file from another / removing duplicate lines amongst two files
Question: First off, I'm using Python 2.7.9..... Now, I'm trying to find the most
efficient way to compare the lines of one text file (file A) to the lines of
another text file (file B) and write all lines that are unique to file A into
a new file (file A\B).
Actually, I've written a short script that does this, but it is beyond slow...
I need the script to be able to handle files of up to 70 MB (each, A&B), which
is unthinkable with this 'bad' boy:
import string
naked = string.strip
kiss = ''.join
def main():
list1 = raw_input("Enter name of .txt-file to clean!\n")
list2 = raw_input("Enter name of .txt-file to exclude!\n")
action(list1, list2)
raw_input("Done!\nPress [ENTER] to exit!")
def action(list1, list2):
f = open(kiss([list1, '.txt']), "r")
g = open(kiss([list2, '.txt']), "r")
h = open(kiss([list1, '_without_', list2, '.txt']), "w")
h_w = h.write
reset = g.seek
found = False
for i in f:
found = [True for j in g if naked(i) == naked(j)]
if not found:
h_w(kiss([naked(i), '\n']))
else:
found = False
reset(0)
f.close()
g.close()
h.close()
main()
Yeah... does anyone have any idea how to do this more efficiently?! Thanks in
advance!
Answer:
def read_file(filename):
with open(filename) as src:
return [line.strip() for line in src.readlines()]
def main():
list1 = raw_input("Enter name of .txt-file to clean!\n")
list2 = raw_input("Enter name of .txt-file to exclude!\n")
file1 = read_file(list1)
file2 = read_file(list2)
file3 = open('new_file.txt', 'w')
for line in file1:
if line not in file2:
file3.write(str(line) + '\n') # writes to a new file
file3.close()
print 'Completed'
main()
I am not sure this is the fastest way, but it will do the trick. You can also use
the "diff" or "comm" Linux commands to get the required output.
|
How to link html files together using Django 1.9?
Question: I am currently programming in the atom code editor and python 3.4.0 as well as
Django 1.9. I am new to django coding.
[](http://i.stack.imgur.com/z5o1S.png)
I have managed to link the html files and the css files, but I don't know how
to link the html files together.
I have tried the `<a href=" ">` code because normally that works (I have
previous knowledge in HTML) however, everytime I click on the link, it says
webpage not found.
I am trying to link base.html file to the generalq.html file.
Does anyone know any tips?
Answer: You have to put a URL in the `href` of your `<a>` element that is handled by
your `urls.py` and `views.py` files:
In `blog/templates/blog/base.html`:
<a href="/blog/general/">My link</a>
In `compproject/urls.py`:
from django.conf.urls import include, url
urlpatterns = [
url(r'^blog/', include('blog.urls')),
]
In `blog/urls.py`:
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^general/$', views.general, name='general'),
]
In `blog/views.py`:
from django.shortcuts import render
def general(request):
return render(request, 'blog/generalq.html', {})
|
Change information in a CSV file using info from the first one in python
Question: I'm trying to edit a CSV file using informations from a first one. That
doesn't seem simple to me as I should filter multiple things. Let's explain my
problem.
I have two CSV files, let's say patch.csv and origin.csv. Output csv file
should have the same pattern as origin.csv, but with corrected values.
I want to replace trip_headsign column fields in origin.csv using
forward_line_name column in patch.csv if direction_id field in origin.csv row
is 0, or using backward_line_name if direction_id is 1.
I want to do this only if the part of the line_id value in patch.csv between
":" and ":" symbols is the same as the part of route_id value in origin.csv
before the ":" symbol.
I know how to replace a whole line, but not only some parts, especially that I
sometimes have to look only part of a value.
Here is a sample of origin.csv:
route_id,service_id,trip_id,trip_headsign,direction_id,block_id
210210109:001,2913,70405957139549,70405957,0,
210210109:001,2916,70405961139553,70405961,1,
and a sample of patch.csv:
line_id,line_code,line_name,forward_line_name,forward_direction,backward_line_name,backward_direction,line_color,line_sort,network_id,commercial_mode_id,contributor_id,geometry_id,line_opening_time,line_closing_time
OIF:100110010:10OIF439,10,Boulogne Pont de Saint-Cloud - Gare d'Austerlitz,BOULOGNE / PONT DE ST CLOUD - GARE D'AUSTERLITZ,OIF:SA:8754700,GARE D'AUSTERLITZ - BOULOGNE / PONT DE ST CLOUD,OIF:SA:59400,DFB039,91,OIF:439,metro,OIF,geometry:line:100110010:10,05:30:00,25:47:00
OIF:210210109:001OIF30,001,FFOURCHES LONGUEVILLE PROVINS,Place Mérot - GARE DE LONGUEVILLE,,GARE DE LONGUEVILLE - Place Mérot,OIF:SA:63:49,000000 1,OIF:30,bus,OIF,,05:39:00,19:50:00
Each file has hundred of lines I need to parse and edit this way.
Based on mhopeng answer, I obtained that code:
#!/usr/bin/env python2
from __future__ import print_function
import fileinput
import sys
# first get the route info from patch.csv
f = open(sys.argv[1])
d = open(sys.argv[2])
# ignore header line
#line1 = f.readline()
#line2 = d.readline()
# get line of data
for line1 in f.readline():
line1 = f.readline().split(',')
route_id = line1[0].split(':')[1] # '210210109'
route_forward = line1[3]
route_backward = line1[5]
line_code = line1[1]
# process origin.csv and replace lines in-place
for line in fileinput.input(sys.argv[2], inplace=1):
line2 = d.readline().split(',')
num_route = line2[0].split(':')[0]
# prevent lines with same route_id but different code to be considered as the same line
if line.startswith(route_id) and (num_route == line_code):
if line.startswith(route_id):
newline = line.split(',')
if newline[4] == 0:
newline[3] = route_backward
else:
newline[3] = route_forward
print('\t'.join(newline),end="")
else:
print(line,end="")
But unfortunately, that doesn't push the right forward or backward_line_name
in trip_headsign (always forward), and finally triggers that error, before
finishing parsing the file:
Traceback (most recent call last):
  File "./GTFS_enhancer_headsigns.py", line 28, in <module>
    if newline[4] == 0:
IndexError: list index out of range
Thanks for your help on this.
Answer: pandas is convenient for handling csv files. I would use something like this:
import pandas as pd
origin = pd.read_csv('origin.csv',index_col=None)
patch = pd.read_csv('patch.csv', index_col=None)
# Create match_keys for matching origin.csv from patch.line_id
patch['match_key'] = [x.split(':')[1] for x in patch.line_id.values]
origin['match_key'] = [x.split(':')[0] for x in origin.route_id.values]
for i,key in enumerate(origin.match_key.values):
p = patch[patch.match_key == key]
if len(p) == 1:
if (origin.direction_id[i] == 0):
origin.trip_headsign[i] = p.forward_line_name.values[0]
elif (origin.direction_id[i] == 1):
origin.trip_headsign[i] = p.backward_line_name.values[0]
origin.to_csv('new_origin.csv',index=False)
|
"SSLError certificate verify failed" for every domain/url
Question: I broke the SSL setup of my machine. Every `requests` call now ends in a
`certificate verify failed` error.
I am not sure what caused this, but I moved some module that I had installed
via `pip install -e .` and reinstalled it. After that I noticed the error.
I tried `sudo apt-get install libffi-dev` and `pip install requests[security]
--user --upgrade` but it did not help.
Here the whole output:
import requests; requests.get('https://www.google.com')
---------------------------------------------------------------------------
SSLError Traceback (most recent call last)
<ipython-input-1-b4a9dae5ffaa> in <module>()
1 import requests
----> 2 requests.get('https://www.google.com')
/home/my_computer/.local/lib/python2.7/site-packages/requests/api.pyc in get(url, params, **kwargs)
65
66 kwargs.setdefault('allow_redirects', True)
---> 67 return request('get', url, params=params, **kwargs)
68
69
/home/my_computer/.local/lib/python2.7/site-packages/requests/api.pyc in request(method, url, **kwargs)
51 # cases, and look like a memory leak in others.
52 with sessions.Session() as session:
---> 53 return session.request(method=method, url=url, **kwargs)
54
55
/home/my_computer/.local/lib/python2.7/site-packages/requests/sessions.pyc in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
466 }
467 send_kwargs.update(settings)
--> 468 resp = self.send(prep, **send_kwargs)
469
470 return resp
/home/my_computer/.local/lib/python2.7/site-packages/requests/sessions.pyc in send(self, request, **kwargs)
574
575 # Send the request
--> 576 r = adapter.send(request, **kwargs)
577
578 # Total elapsed time of the request (approximately)
/home/my_computer/.local/lib/python2.7/site-packages/requests/adapters.pyc in send(self, request, stream, timeout, verify, cert, proxies)
445 except (_SSLError, _HTTPError) as e:
446 if isinstance(e, _SSLError):
--> 447 raise SSLError(e, request=request)
448 elif isinstance(e, ReadTimeoutError):
449 raise ReadTimeout(e, request=request)
SSLError: bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)
Answer: It is the same issue as here: [SSL3_GET_SERVER_CERTIFICATE certificate verify
failed on Python when requesting (only)
*.google.com](http://stackoverflow.com/questions/34646942/ssl3-get-server-
certificate-certificate-verify-failed-on-python-when-requesting?rq=1)
To fix it, one needs to run:
pip uninstall -y certifi && pip install certifi==2015.04.28
|
How does process join() work?
Question: I am trying to understand multiprocessing in Python, I wrote the following
program:
from multiprocessing import Process
numOfLoops = 10
#function for each process
def func():
a = float(0.0)
for i in xrange(0, numOfLoops):
a += 0.5
print a
processes = []
numOfProcesses = 2
#create the processes
for i in xrange(0, numOfProcesses):
processes.append(Process(target=func))
for process in processes:
process.start() #Start the processes
for process in processes:
process.join() #wait for each process to terminate
print "shouldn't this statement be printed at the end??"
I created two processes that executes function func(). I used join() method to
wait for each process to terminate before proceeding with the program. Doesn't
this mean that the last print statement should be printed at the end of the
program after the two processes have executed their function? But my output
was:
shouldn't this statement be printed at the end??
1
1
2
2
3
3
4
4
5
5
6
6
7
7
8
8
9
9
10
10
Which is not what I expected. Can you explain what's going on?
Answer: It is very simple: it just waits for each of the running processes to finish
and, when this happens, it returns.
The reason why it is called `join` is that it is joining the processes into a single
one. [](http://i.stack.imgur.com/IRpP1.jpg)
|
NameError: guesses not defined
Question: I'm just starting out on Python and I'm wondering exactly why my variable
guesses is not defined. I feel as if it's an indentation issue, but once I
change the indentation I usually come upon a syntax error. Any help
understanding this issue would be greatly appreciated.
import random
def game():
guesses = []
secret_num = random.randint(1, 10)
while len(guesses) < 5:
try:
guess = int(input("Guess a number between 1 and 10 "))
except ValueError:
print("{} isn't a number!".format(guess))
else:
if guess == secret_num:
print("You got it! My number was {}".format(secret_num))
break
elif guess < secret_num:
print("My number is higher than {}".format(guess))
else:
print("My number is lower tha {}".format(guess))
guesses.append(guess)
else:
print("You didn't get it my secret number was {}".format(secret_num))
play_again = input("Do you want to play again? Y/N")
if play_again.lower() != 'n':
game()
else:
print("Bye thanks for playing!")
Answer: This doesn't throw any errors on my computer. Note you'll have to call the
game() function if you want to actually run the code.
import random
def game():
guesses = []
secret_num = random.randint(1, 10)
while len(guesses) < 5:
try:
guess = int(input("Guess a number between 1 and 10 "))
except ValueError:
print("{} isn't a number!".format(guess))
else:
if guess == secret_num:
print("You got it! My number was {}".format(secret_num))
break
elif guess < secret_num:
print("My number is higher than {}".format(guess))
else:
print("My number is lower tha {}".format(guess))
guesses.append(guess)
else:
print("You didn't get it my secret number was {}".format(secret_num))
play_again = input("Do you want to play again? Y/N")
if play_again.lower() != 'n':
game()
else:
print("Bye thanks for playing!")
game() # to run the code
|
PyGTK in webbrowser?
Question: I use PyGTK quite a lot, and my code is hosted on
[github](https://github.com/ralphembree). I know that there are many websites
that can run Python code, but I was hoping that maybe there is a website that
can handle the new windows created in my code in the browser, so that people
could test my code without needing to download it. Is there such a website?
Answer: No, there is not.
If you switch to Python 3 and `from gi.repository import Gtk`, then you can
use [Broadway](https://developer.gnome.org/gtk3/stable/gtk-broadway.html) to
run a GTK application in your browser. However, this only works locally, not
as a website.
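For completeness, Broadway is typically started locally along these lines (a sketch; the display number is arbitrary and your_app.py is a placeholder):
broadwayd :5 &
GDK_BACKEND=broadway BROADWAY_DISPLAY=:5 python3 your_app.py
The application then shows up in a browser pointed at 127.0.0.1 on port 8080 plus the display number (8085 here), but only for clients that can reach that machine.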
|
Use Python to search and pull data from Excel
Question:
import csv
subject = ['emergency*', 'new ticket*', 'problem with*']
from_to = ['chris*', 'timothy*', 'daniel*', 'david*', 'jason*']
a = open('D:\testfile.csv', 'w')
New to python. So, here's what I'd like to do.
1) Open an excel csv file
2) Search for specific keywords that are in a list
3) If the keywords are found, pull the data that is in the D,E,F columns only.
(Since that is where the keywords will be)
4) Write this data to a new file
Example. Search testfile.csv for any of the keywords in the from_to list. If
these keywords appear ONLY in the D or E columns of excel AND if the
corresponding column F is not equal to the subject list, then write a new file
that has the columns of D,E,F and the associated lines, however many there
are, with it
Also, I put the stars next to the names/items in the list to denote a
wildcard, eg if the from_to contains chris.gmail.com or daniel@yahoo.
Answer: This solution has a list of keywords and an input file that you read line by
line, tokenizing each line into a list of strings, then iterating over your keyword
list to check if any of them is in the tokenized line. If any keyword is found,
it writes columns D, E and F of that line to the output file.
keywrds = ["word1", "word2", "etc"]
with open ("myfile.csv") as fin:
with open ("outfile.txt") as fout:
for line in fin:
line_tokens = line.split(",")
for word in keywrds:
if word in line_tokens:
fout.write(line_tokens[3:6].join(" ") + "\n")
|
Python - sorting a list of tuples by an uneven list
Question: I have a deck of cards built by the following code:
import itertools
suits = "DCHS"
ranks = "23456789TJQKA"
cardDeck = list(set(itertools.product(ranks, suits)))
I want to sort the deck of card by ranks. Doing a sorted(cardDeck, key=lambda
x: x[0]) sorts the list by ranks alphabetically (23456789AJKQT) but I would
like to find a way to maintain the order of ranks (23456789TJQKA).
I've been playing with trying to get a lambda function for the key= parameter
that will iterate through the characters in the ranks string but so far I'm up
against a wall. Maybe I need to make suits and ranks list of characters rather
than strings?
Answer: You already have the ranking in your `ranks` string. Just use that in your
sorter:
print sorted(cardDeck, key=lambda x: ranks.index(x[0]))
You can see the [full thing here](https://ideone.com/nEHa0Q):
import itertools
suits = "DCHS"
ranks = "23456789TJQKA"
cardDeck = list(set(itertools.product(ranks, suits)))
print sorted(cardDeck, key=lambda x: ranks.index(x[0]))
Gives:
[('2', 'S'), ('2', 'C'), ('2', 'H'), ('2', 'D'), ('3', 'D'), ('3', 'H'), ('3', 'C'), ('3', 'S'), ('4', 'D'), ('4', 'S'), ('4', 'C'), ('4', 'H'), ('5', 'H'), ('5', 'S'), ('5', 'D'), ('5', 'C'), ('6', 'C'), ('6', 'D'), ('6', 'H'), ('6', 'S'), ('7', 'C'), ('7', 'D'), ('7', 'S'), ('7', 'H'), ('8', 'S'), ('8', 'C'), ('8', 'H'), ('8', 'D'), ('9', 'H'), ('9', 'S'), ('9', 'D'), ('9', 'C'), ('T', 'H'), ('T', 'C'), ('T', 'D'), ('T', 'S'), ('J', 'S'), ('J', 'C'), ('J', 'H'), ('J', 'D'), ('Q', 'H'), ('Q', 'C'), ('Q', 'D'), ('Q', 'S'), ('K', 'S'), ('K', 'H'), ('K', 'D'), ('K', 'C'), ('A', 'S'), ('A', 'D'), ('A', 'H'), ('A', 'C')]
|
Import WeakMethod error in Django 1.9 with python 3.3
Question: I'm using django 1.9.1 with python 3.3. Getting following error when I'm
running runserver
File "/home/virtualenv/python3.3.5/lib/python3.3/site-packages/django/dispatch/__init__.py", line 9, in <module>
from django.dispatch.dispatcher import Signal, receiver # NOQA
File "/home/virtualenv/python3.3.5/lib/python3.3/site-packages/django/dispatch/dispatcher.py", line 14, in <module>
from weakref import WeakMethod
ImportError: cannot import name WeakMethod
As I was reading WeakMethod of weakref has been introduced in python 3.4, and
its not exist in weakref of python 3.3.
Any suggestions on how to fix the same error with python 3.3.
Answer: Django 1.9.x does not support Python 3.3:
<https://docs.djangoproject.com/en/1.9/faq/install/#what-python-version-can-i-
use-with-django>
> Typically, we will support a Python version up to and including the first
> Django LTS release whose security support ends after security support for
> that version of Python ends. For example, Python 3.3 security support ends
> September 2017 and Django 1.8 LTS security support ends April 2018.
> **Therefore Django 1.8 is the last version to support Python 3.3.**
You can either downgrade to Django 1.8 or upgrade your Python interpreter to
3.4 or higher.
|
In Python, can I define an instance method map() for the list class?
Question: I was hoping to define an instance method `map()` or `join()` for the list
class (for array). For example, for `map()`:
class list:
def map(self, fn):
result = []
for i in self:
result.append(fn(i))
return result
print [1, 3, 5].map(str)
Is it possible in Python to do that for the `list` class? (if not, can the
last line `[1, 3, 5].map(str)` be made to work?)
Answer: Your code creates a new variable called `list`, which hides the original. You
can do a little better by inheriting:
class mylist(list):
def map(self, fn):
result = []
for i in self:
result.append(fn(i))
return result
mylist([1, 3, 5]).map(str)
Note that it is not possible to override `[]` to generate anything other than
a `builtins.list`
* * *
So that leaves monkeypatching the builtin. [There's a module for
that](https://github.com/clarete/forbiddenfruit), `forbiddenfruit`, which in
its own words:
> may lead you to hell if used on production code.
If hell is where you're aiming for, then this is what you want:
from forbiddenfruit import curse
def list_map(self, fn):
result = []
for i in self:
result.append(fn(i))
return result
curse(list, "map", list_map)
print [1, 3, 5].map(str)
|
Integrating Behave or Lettuce with Python unittest
Question: I'm looking at BDD with Python. Verification of results is a drag, because the
results being verified are not printed on failure.
Compare Behave output:
AssertionError:
File "C:\Python27\lib\site-packages\behave\model.py", line 1456, in run
match.run(runner.context)
File "C:\Python27\lib\site-packages\behave\model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "steps\EcuProperties.py", line 28, in step_impl
assert vin == context.driver.find_element_by_xpath("//table[@id='infoTable']/tbody/tr[4]/td[2]").text
to SpecFlow+NUnit output:
Scenario: Verify VIN in Retrieve ECU properties -> Failed on thread #0
[ERROR] String lengths are both 16. Strings differ at index 15.
Expected: "ABCDEFGH12345679"
But was: "ABCDEFGH12345678"
--------------------------^
Finding failure causes is way faster with the SpecFlow output. To get the
variable contents on error, they have to be put into a string manually.
From the [Lettuce tutorial](http://lettuce.it/tutorial/simple.html#lettuce-
id4):
assert world.number == expected, \
"Got %d" % world.number
From the [Behave tutorial](https://pythonhosted.org/behave/tutorial.html#step-
parameters):
if text not in context.response:
fail('%r not in %r' % (text, context.response))
Compare this to [Python
unittest](https://docs.python.org/3/library/unittest.html):
self.assertEqual('foo2'.upper(), 'FOO')
resulting in:
Failure
Expected :'FOO2'
Actual :'FOO'
<Click to see difference>
Traceback (most recent call last):
File "test.py", line 6, in test_upper
self.assertEqual('foo2'.upper(), 'FOO')
AssertionError: 'FOO2' != 'FOO'
However, the methods from Python unittest cannot be used outside a `TestCase`
instance.
Is there a good way of getting all the niceness of Python unittest integrated
into Behave or Lettuce?
Answer: [nose](https://nose.readthedocs.org/en/latest/) includes a package that takes
all the class-based asserts that `unittest` provides and turns them into plain
functions, the module's [documentation
states](https://nose.readthedocs.org/en/latest/testing_tools.html?highlight=nose.tools#module-
nose.tools):
> The nose.tools module provides [...] all of the same `assertX` methods found
> in `unittest.TestCase` (only spelled in [PEP 8#function-
> names](https://www.python.org/dev/peps/pep-0008#function-names) fashion, so
> `assert_equal` rather than `assertEqual`).
For instance:
from nose.tools import assert_equal
@given("foo is 'blah'")
def step_impl(context):
assert_equal(context.foo, "blah")
You can add custom messages to assertions just like you would with the
`.assertX` methods of `unittest`.
That's what I use for the test suites that I run with Behave.
|
Display an image with Python
Question: I tried to use IPython.display with the following code:
from IPython.display import display, Image
display(Image(filename='MyImage.png'))
I also tried to use matplotlib with the following code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
plt.imshow(mpimg.imread('MyImage.png'))
In both cases, nothing is displayed, not even an error message.
Answer: In a much simpler way, you can do the same using PIL:
import Image  # note: with the Pillow fork, this import is written as: from PIL import Image
image = Image.open('image.jpg')
image.show()
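If you prefer to stay with the matplotlib approach from the question, the missing piece is usually an explicit show() call (a sketch, assuming a plain non-interactive script):
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread('MyImage.png')
plt.imshow(img)
plt.show()  # without this, nothing is rendered outside interactive environments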
|
Crossbar 0.12.1 : No module named django - wsgi error
Question: I get an error while I launch crossbar 0.12.1 that I did not have with
version 0.11:
[Controller 210] crossbar.error.invalid_configuration:
WSGI app module 'myproject.wsgi' import failed: No module named django -
Python search path was [u'/myproject', '/opt/crossbar/site-packages/crossbar/worker', '/opt/crossbar/bin', '/opt/crossbar/lib_pypy/extensions', '/opt/crossbar/lib_pypy', '/opt/crossbar/lib-python/2.7', '/opt/crossbar/lib-python/2.7/lib-tk', '/opt/crossbar/lib-python/2.7/plat-linux2', '/opt/crossbar/site-packages']
I have not changed anything else other than the crossbar update.
My config.json is still the same, with the pythonpath of my project within
the options:
{
"workers": [
{
"type": "router",
"options": {
"pythonpath": ["/myproject"]
},
"realms": [
{
"name": "realm1",
"roles": [
{
"name": "anonymous",
"permissions": [
{
"uri": "*",
"publish": true,
"subscribe": true,
"call": true,
"register": true
}
]
}
]
}
],
"transports": [
{
"type": "web",
"endpoint": {
"type": "tcp",
"port": 80
},
"paths": {
"/": {
"type": "wsgi",
"module": "myproject.wsgi",
"object": "application"
},
etc...
Do you have an idea ? Thanks.
Answer: It seems that `"pythonpath": ["/myproject"]` replaces other python path
configs from your dist-packages. Look for an option that **adds** `/myproject`
instead of replacing the current path settings.
Or - add the path to your project to the machine's python path and don't
provide crossbar with any python path, so it will pick up the existing one.
Something like (depends on OS):
$ sudo nano /usr/lib/python2.7/dist-packages/myproject.pth
Then:
/home/username/path/to/myproject
|
Understanding @property decorator and inheritance
Question: Python 3 here, just in case it's important.
I'm trying to properly understand how to implement inheritance when
`@property` is used, and I've already searched StackOverflow and read like 20
similar questions, to no avail because the problems they are trying to solve
are subtly different. This is the code I'm using for testing:
class Example:
def __init__(self):
self.__data = None
@property
def data(self):
return self.__data
@data.setter
def data(self, data):
self.__data = data
class Example2(Example):
def __init__(self):
super().__init__()
@property
def data(self):
return super().data # Works!
@data.setter
def data(self, data):
data = '2' + data
#Example.data = data # Works, but I want to avoid using the parent name explicitly
#super().data = data # Raises AttributeError: 'super' object has no attribute 'data'
#super().data.fset(self, data) # Raises AttributeError: 'NoneType' object has no attribute 'fset'
#super(self.__class__, self.__class__).data = data # Raises AttributeError: 'super' object has no attribute 'data'
super(self.__class__, self.__class__).data.fset(self, data) # Works!
a = Example2()
a.data = 'element a'
print(a.data)
What I can't understand is why `super().data` works in `Example2` getter, but
not in setter. I mean, why in the setter a class bound method is needed, but
in the getter an instance bound method works?
Could anyone please point me to an explanation or explain why I'm getting
`AttributeError` in three of the five different calls I'm testing?
Yes, I know, I could use `Example.data` in the setter, but that's not needed
in the getter and a) I would prefer not to use the parent class name
explicitly if possible and b) I don't understand the asymmetry between getter
and setter.
Answer: you should do something like this:
class Example:
def __init__(self):
self._data = None
@property
def data(self):
return self._data
@data.setter
def data(self, data):
self._data = data
class Example2(Example):
def __init__(self):
super().__init__()
@Example.data.setter
def data(self, data):
data = '2' + data
self._data = data
a = Example2()
a.data = 'element a'
print(a.data)
You are getting the AttributeError because the class does not have the data
attribute; the instance has it.
If you want to override the @property, just do it:
class Example:
def __init__(self):
self._data = None
@property
def data(self):
return self._data
@data.setter
def data(self, data):
self._data = data
class Example2(Example):
def __init__(self):
super().__init__()
@property
def data(self):
return self._data
@data.setter
def data(self, data):
data = '2' + data
self._data = data
|
Call program from subprocess.Popen in python: `OSError: .. No such file or directory`
Question: I run the following command from my bash script:
myProgram --name test1 --index 0
But now I want to run it from within a python script so I have tried the
following:
#!/usr/bin/python
from threading import Thread
import time
import subprocess
print "hello Python"
subprocess.Popen("myProgram --name test1 --index 0")
But I get the error:
hello Python
Traceback (most recent call last):
File "./myPythonProgram.py", line 8, in <module>
subprocess.Popen("myProgram --name test1 --index 0")
File "/usr/lib64/python2.6/subprocess.py", line 623, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1141, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Is the correct way to call this??
Answer: You need to pass the command as a list:
subprocess.Popen("myProgram --name test1 --index 0".split())
|
What's the advantage of putting nginx in front of uWSGI?
Question: I see a lot of people running their python app, with nginx, which then
communicates to nginx. uWSGI can run directly as a web server, and it looks
quite fast and scalable, so what's the purpose of putting nginx in front of
that?
Answer: [uWSGI documentation](http://uwsgi-
docs.readthedocs.org/en/latest/StaticFiles.html#serving-static-files-with-
uwsgi-updated-to-1-9) answers this question:
> Generally your webserver of choice (Nginx, Mongrel2, etc.) will serve static
> files efficiently and quickly and will simply forward dynamic requests to
> uWSGI backend nodes.
>
> The uWSGI project has ISPs and PaaS (that is, the hosting market) as the
> main target, where generally you would want to avoid generating disk I/O on
> a central server and have each user-dedicated area handle (and account for)
> that itself. More importantly still, you want to allow customers to
> customize the way they serve static assets without bothering your system
> administrator(s).
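As an illustration of that division of labour, a typical nginx location setup looks roughly like this (a sketch; the socket address and paths are placeholders):
location /static/ {
    alias /srv/myapp/static/;     # nginx serves static files itself
}

location / {
    include uwsgi_params;         # dynamic requests are forwarded to uWSGI
    uwsgi_pass 127.0.0.1:3031;
}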
|
no module named fuzzywuzzy
Question: I installed fuzzywuzzy with pip for python3. When I do pip list I see
fuzzywuzzy (0.8.1)
However when I try to import is I get an error.
Python 3.4.0 (default, Jun 19 2015, 14:20:21)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import fuzzywuzzy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'fuzzywuzzy'
>>>
Does anyone have experience with this problem?
Answer: Are you sure you ran `pip3` and not just `pip`? The latter only installs
Python 2 packages.
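Running pip through the interpreter you actually use removes the ambiguity, e.g.:
python3 -m pip install fuzzywuzzy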
|
osx install packages inside virtualenv
Question: I tried to start virtualenv WITHOUT sudo but unfortunately it cannot find
(Permission denied) /lib/python2.7/site-packages/easy_install.py. So I did:
sudo virtualenv name_env
The problem is that now `pip` is the global version (not the one inside the virtualenv):
`which pip` returns `/usr/local/bin/pip`, so I cannot install any package inside the
environment. If I start virtualenv without sudo:
virtualenv name_env
    OSError: Command /Users/andrea/package_lambda/bin/python2.7 -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip wheel failed with error code 2

Any suggestions?
Answer: Don't use `sudo` just because you can!
I suggest you install a separate Python environment using `brew`, then install
`pip`, and then `virtualenv`. This way you fix the underlying permission problem
instead of working around it with `sudo`.
I would follow this method:
brew install pyenv
pyenv install 2.7.11
Or list the versions you already have installed with:
pyenv versions
This way, you can install different versions and switch between them as you
wish, for instance:
pyenv global 2.7.11
And then you can install `pip` like so:
python -m easy_install pip
and then install `virtualenv` like so:
python -m pip install virtualenv
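From there, creating and using the environment should not require `sudo` at all; a minimal sketch, reusing the environment name from the question (`requests` is just an example package):

    python -m virtualenv name_env
    source name_env/bin/activate
    pip install requests  # now lands inside name_env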
|
random.sample in Python 3 (jupyter notebook)
Question: When using Canopy I can do
from scipy import *
import pylab as py
import random
aa = random.sample(arange(1,4,0.5),1)
whereas in the Jupyter notebook it complains with the following:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-e33979a68ee1> in <module>()
----> 1 cc = random.sample(arange(1,4,0.5), 1)
/u/rscratch/bm485/anaconda3/lib/python3.5/random.py in sample(self, population, k)
309 population = tuple(population)
310 if not isinstance(population, _Sequence):
--> 311 raise TypeError("Population must be a sequence or set. For dicts, use list(d).")
312 randbelow = self._randbelow
313 n = len(population)
TypeError: Population must be a sequence or set. For dicts, use list(d).
How can I fix this?
Answer: Please avoid star imports, wherever possible. It is often not clear what you
actually imported and shadowed in the end.
import random
from scipy import *
print(random)
Gives:
<module 'numpy.random' from '/Users/ch/miniconda/envs/sci34/lib/python3.4/site-packages/numpy/random/__init__.py'>
Whereas
from scipy import *
import random
print(random)
prints:
<module 'random' from '/Users/ch/miniconda/envs/sci34/lib/python3.4/random.py'>
Takehome message:
Prefer `import scipy as sp` or better `from scipy import <subpackage>` to
avoid shadowing your random function. Thanks @ev-br for the remark.
* * *
If you are dealing with numpy's ndarrays, prefer the methods in `numpy.random`
over their counterparts in Python's `random` module. The issue you see here is
that Python and numpy have different definitions of what a sequence is. For more
details see the discussion here: <https://github.com/numpy/numpy/issues/2776>
What you want is probably
[`np.random.choice`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html#numpy-
random-choice):
numpy.random.choice(a, size=None, replace=True, p=None)
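For the call in the question, the numpy equivalent would look roughly like this (a small sketch; note that `size=1` still returns an array, so take its first element if you want a scalar):

    import numpy as np

    # Draw one value from 1.0, 1.5, ..., 3.5
    aa = np.random.choice(np.arange(1, 4, 0.5), size=1)
    print(aa[0])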
|
Python mysql connector returns tuple
Question: I am connecting to a MySQL database via the MySQL connector and running a simple
query to pull a list of IDs. I need to loop over that list and pass the IDs into
some other code. For some reason I am getting a list of tuples instead. Is this
expected behavior? If not, what am I doing wrong? Here is the snippet of my
code:
import mysql.connector
conn = mysql.connector.connect(host='127.0.0.1', database='t', user='r', password='pwd')
cursor = conn.cursor()
query = ( "select id from T where updated < '%s'" % (run_date) )
cursor.execute(query)
for row in cursor:
print (row)
cursor.close()
I am getting the following back (from an INT field in d/b):
(Decimal('991837'),)
(Decimal('991838'),)
(Decimal('991839'),)
(Decimal('991871'),)
(Decimal('991879'),)
(Decimal('991899'),)
(Decimal('992051'),)
(Decimal('992299'),)
(Decimal('992309'),)
Answer: Yes, this is expected behavior. Using the cursor as an iterable is basically
equivalent to looping over it using the `fetchone()` method. From the
[documentation for `fetchone()`](https://dev.mysql.com/doc/connector-
python/en/connector-python-api-mysqlcursor-fetchone.html) (emphasis mine):
> This method retrieves the next row of a query result set and returns a
> single **sequence** , or None if no more rows are available. By default,
> **the returned tuple** consists of data returned by the MySQL server,
> converted to Python objects. If the cursor is a raw cursor, no such
> conversion occurs;
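If you just want the plain values, unpack each one-element row while iterating; a minimal sketch based on the query in the question:

    for (row_id,) in cursor:
        # row_id is the Decimal from the single selected column
        print(int(row_id))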
|
Boto DynamoDb - JSONResponseError: 400 Bad Request - really weird behaviour
Question: I'm working with the Boto DynamoDb2 API and I'm experiencing something really
strange. First off, I'm using an IAM Role for authentication and the code I'm
about to show is being run on an EC2 instance with the attached Role. The Role
has full administrative permissions and I'm positive the issue isn't related
to permissions.
I have code that creates a table and then adds an item to that table. However,
the first time I call `put_item`, it throws an exception, but it works the next
time right afterwards. Here's a dump from my Python interpreter:
Python 2.7.10 (default, Aug 11 2015, 23:39:10)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto.dynamodb2
>>> from boto.dynamodb2.table import Table
>>> from boto.dynamodb2.fields import HashKey
>>> from boto.dynamodb2.types import NUMBER
>>>
>>> table = Table.create('myTablse',
... schema=[HashKey('index', data_type=NUMBER)],
... connection=boto.dynamodb2.connect_to_region('eu-west-1'))
>>>
>>> table.put_item(data={
... 'index': 4,
... 'sequence': 'sdfsdf34rfdsa'
... })
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/table.py", line 821, in put_item
return item.save(overwrite=overwrite)
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/items.py", line 455, in save
returned = self.table._put_item(final_data, expects=expects)
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/table.py", line 835, in _put_item
self.connection.put_item(self.table_name, item_data, **kwargs)
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 1510, in put_item
body=json.dumps(params))
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 2842, in make_request
retry_handler=self._retry_handler)
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/connection.py", line 954, in _mexe
status = retry_handler(response, i, next_sleep)
File "/home/ec2-user/fyp/venv/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 2885, in _retry_handler
data)
boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{u'message': u'Requested resource not found', u'__type': u'com.amazonaws.dynamodb.v20120810#ResourceNotFoundException'}
>>> table.put_item(data={
... 'index': 4,
... 'sequence': 'sdfsdf34rfdsa'
... })
True
>>>
Can someone tell me what's going on here?
Answer: Figured this out. When the CreateTable API call returns a 200, that doesn't mean
the table has been created or is ready to use yet; table creation happens
asynchronously.
So basically, you need to wait until the table is ready before you can add
items to it. Here is my solution:
# In addition to the imports shown in the question's session:
import time
from boto.exception import JSONResponseError
con = boto.dynamodb2.connect_to_region('eu-west-1')
table = Table.create('myTables',
schema=[HashKey('index', data_type=NUMBER)],
connection=con)
while True:
try:
r=con.describe_table('myTables')
if r and r['Table']['TableStatus'] == 'ACTIVE':
break
except JSONResponseError, e:
if 'resource not found' in e.message:
pass
else:
raise
time.sleep(1)
table.put_item(data={
'index': 4,
'sequence': 'sdfsdf34rfdsa'
})
Hope this helps someone else!
|
How to redirect C-level streams in Python in Windows?
Question: Eli Bendersky has explained thoroughly how to "[Redirecting all kinds of
stdout in Python](http://eli.thegreenplace.net/2015/redirecting-all-kinds-of-
stdout-in-python/)", and specifically Redirecting C-level streams, e.g. stdout
of a shared library (dll). However, the example is for Linux and does not work
on Windows, mainly due to the following lines:
libc = ctypes.CDLL(None)
c_stdout = ctypes.c_void_p.in_dll(libc, 'stdout')
How can we make it work in Windows?
Answer: I found the answer buried in [Drekin's
code](https://github.com/Drekin/readline-hooks/blob/master/readline_hooks.py).
Based on that, I made a small change to [Eli Bendersky's
example](http://eli.thegreenplace.net/2015/redirecting-all-kinds-of-stdout-in-
python/):
Update: This code has been tested on Python 3.4 64-bit on Windows and Python
3.5 64-bit on Linux. For Python 3.5 on Windows, please see eryksun's comment.
from contextlib import contextmanager
import ctypes
import io
import os
import sys
import tempfile
import ctypes.util
from ctypes import *
import platform
if platform.system() == "Linux":
libc = ctypes.CDLL(None)
c_stdout = ctypes.c_void_p.in_dll(libc, 'stdout')
if platform.system() == "Windows":
class FILE(ctypes.Structure):
_fields_ = [
("_ptr", c_char_p),
("_cnt", c_int),
("_base", c_char_p),
("_flag", c_int),
("_file", c_int),
("_charbuf", c_int),
("_bufsize", c_int),
("_tmpfname", c_char_p),
]
# Gives you the name of the library that you should really use (and then load through ctypes.CDLL
msvcrt = CDLL(ctypes.util.find_msvcrt())
libc = msvcrt # libc was used in the original example in _redirect_stdout()
iob_func = msvcrt.__iob_func
iob_func.restype = POINTER(FILE)
iob_func.argtypes = []
array = iob_func()
s_stdin = addressof(array[0])
c_stdout = addressof(array[1])
@contextmanager
def stdout_redirector(stream):
# The original fd stdout points to. Usually 1 on POSIX systems.
original_stdout_fd = sys.stdout.fileno()
def _redirect_stdout(to_fd):
"""Redirect stdout to the given file descriptor."""
# Flush the C-level buffer stdout
libc.fflush(c_stdout)
# Flush and close sys.stdout - also closes the file descriptor (fd)
sys.stdout.close()
# Make original_stdout_fd point to the same file as to_fd
os.dup2(to_fd, original_stdout_fd)
# Create a new sys.stdout that points to the redirected fd
sys.stdout = io.TextIOWrapper(os.fdopen(original_stdout_fd, 'wb'))
# Save a copy of the original stdout fd in saved_stdout_fd
saved_stdout_fd = os.dup(original_stdout_fd)
try:
# Create a temporary file and redirect stdout to it
tfile = tempfile.TemporaryFile(mode='w+b')
_redirect_stdout(tfile.fileno())
# Yield to caller, then redirect stdout back to the saved fd
yield
_redirect_stdout(saved_stdout_fd)
# Copy contents of temporary file to the given stream
tfile.flush()
tfile.seek(0, io.SEEK_SET)
stream.write(tfile.read())
finally:
tfile.close()
os.close(saved_stdout_fd)
if __name__ == '__main__':
f = io.BytesIO()
print('...')
with stdout_redirector(f):
print('foobar')
print(12)
libc.puts(b'this comes from C')
os.system('echo and this is from echo')
print('Got stdout:"\n{0}\n"'.format(f.getvalue().decode('utf-8')))
print('Resuming normal operation...')
|
How to separate warnings from errors found in stderr when using Popen.communicate?
Question: I used Python's
[`subprocess.Popen`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen)
to execute a command and capture its output:
p = Popen(cmd, stdout=PIPE, stderr=PIPE,shell=True)
stdout, stderr = p.communicate()
I want to use `stderr` to tell users when there was an error and exit my
script:
if stderr !='':
return {'error':stderr}
But now I've found that `stderr` can contain warnings that could be safely
ignored, so my script should not exit, but continue to finish the job.
Is there a way to separate warnings from errors in `stderr`?
Answer: univerio is correct, in that there is no specific meaning for any bytes you
discover in `stderr`... think of it as "standard _not_ -output" instead of
"standard error". However, on most operating systems, you can use the
process's exit status (or "return code") to skip most of the progress bars and
other non-error output.
A `Popen` object has a field called
[`returncode`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.returncode),
which is used to store whatever value the subprocess returned when it exited.
This value is `None` until 1) the process terminates, and 2) you collect its
exit status with either the
[`poll`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.poll)
or
[`wait`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.wait)
methods (at least on Unix-like systems). Since you're using
[`communicate`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate),
which _always_ does both 1 and 2, `p.returncode` should always be an integer
by the time you care about it.
As a general rule, a `0` exit status indicates success, while any other value
indicates failure. If you trust the programs you're calling to return proper
values, you can use this to skip most of the junk output on `stderr`:
# ...same as before...
stdout, stderr = p.communicate()
if p.returncode and stderr:
return {'error': stderr}
If the bytes found in `stderr` weren't important enough to produce a non-`0`
exit status, they're not important enough for you to report, either.
To test this, you can write a few tiny scripts that produce `stderr` output
and then [`exit`](https://docs.python.org/3/library/sys.html#sys.exit), either
successfully or not.
`warnings.py`:
import sys
print('This is spam on stderr.', file=sys.stderr)
sys.exit(0)
`errors.py`:
import sys
print('This is a real error message.', file=sys.stderr)
sys.exit(1)
This still leaves the task of separating spinning batons and other progress-
report spam from actual error messages, but you'll only have to do that for
processes that failed... and maybe not even then, since the "not dead yet!"
messages might actually be useful in those cases.
PS: In Python 3, `stdout` and `stderr` will be `bytes` objects, so you'll want
to [`decode`](https://docs.python.org/3/library/stdtypes.html#bytes.decode)
them before treating them like strings.
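Putting the pieces together, a minimal sketch (assuming the `errors.py` test script above sits next to it and a `python3` executable is available):

    import subprocess

    # Run the failing test script and capture both streams.
    p = subprocess.Popen(['python3', 'errors.py'],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()

    # Only report stderr as an error when the exit status is non-zero.
    if p.returncode and stderr:
        print({'error': stderr.decode('utf-8')})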
|