no module named crypto.cipher
Question: I've been trying my hand at encryption for a while now. I recently got
hold of this Python-based crypter named
[PythonCrypter](https://github.com/jbertman/PythonCrypter).
I'm fairly new to Python, and when I try to run the CodeSection.py file via the
terminal, I get an error saying `from Crypto.Cipher import AES ImportError: No
Module Named Crypto.Cipher`
What am I doing wrong?
Answer: In order to use the pycrypto library, you should install it with:
pip install pycrypto
or
easy_install pycrypto
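Once installed, a quick sanity check (assuming a standard pycrypto install) is to
retry the import from the error message:

    from Crypto.Cipher import AES  # should now succeed without ImportError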
|
Add matrix in X-axis using matplotlib
Question: I am trying to add matrices on the X-axis using matplotlib. The code I wrote is:
#!/bin/python
import sys
import numpy as np
import math
import decimal
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from matplotlib import rcParams

def plot():
    N = 6
    ind = np.arange(N)
    ind_label = ['1X', '2X', '3X', '4X', '5X', '6X']
    y = [1.60, 1.65, 1.70, 1.75, 1.80]
    m1 = [1.62, 1.64, 1.64, 1.71, 1.7, 1.68]
    m2 = [1.61, 1.7, 1.7, 1.8, 1.75, 1.75]
    m3 = [1.63, 1.69, 1.7, 1.67, 1.64, 1.61]
    width = 0.2
    fig = plt.figure()
    ax = fig.add_subplot(111)
    rec_m1 = ax.bar(ind, m1, width, color='r', align='center')
    rec_m2 = ax.bar(ind+width, m2, width, color='g', align='center')
    rec_m3 = ax.bar(ind+width*2, m3, width, color='b', align='center')
    ax.set_ylabel('Value', fontsize=20)
    ax.set_xlabel('Matrix', fontsize=20)
    ax.tick_params(axis='both', labelsize=17)
    ax.set_xticks(ind+width)
    ax.set_xticklabels(ind_label, fontsize=18)
    ax.axis([-0.2, 5.6, 1.58, 1.82])
    ax.legend((rec_m1[0], rec_m2[0], rec_m3[0]),
              ('Method 1', 'Method 2', 'Method 3'),
              loc="upper right", shadow=True)
    plt.grid()
    plt.tight_layout()
    plt.show()

if __name__ == '__main__':
    plot()
The current output figure is: <http://i.stack.imgur.com/DJVxA.png>
However, the most painful part is adding the X-labels. I show the expected output
in this figure: <http://i.stack.imgur.com/b6h3U.png>
I tried the method mentioned in the post [How to display a matrix in the
Matplotlib annotations](http://stackoverflow.com/questions/26329177/how-to-display-a-matrix-in-the-matplotlib-annotations),
but it does not work in my case. Any help or thoughts would be appreciated.
Thanks a lot!
Answer: You were almost there; with the links provided in your question, you can solve
it as follows:
ax.set_xticklabels([r"$\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right]$",
                    r"$\left[ \begin{array}{cc} 0 & 1 \\ 5 & 0 \end{array}\right]$",
                    r"$\left[ \begin{array}{cc} 0 & 1 \\ 10 & 0 \end{array}\right]$",
                    r"$\left[ \begin{array}{cc} 0 & 1 \\ 30 & 0 \end{array}\right]$",
                    r"$\left[ \begin{array}{cc} 0 & 1 \\ 50 & 0 \end{array}\right]$",
                    r"$\left[ \begin{array}{cc} 0 & 1 \\ 100 & 0 \end{array}\right]$"])
The total code becomes:
#!/bin/python
import sys
import numpy as np
import math
import decimal
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from matplotlib import rcParams

def plot():
    N = 6
    ind = np.arange(N)
    ind_label = ['1X', '2X', '3X', '4X', '5X', '6X']
    y = [1.60, 1.65, 1.70, 1.75, 1.80]
    m1 = [1.62, 1.64, 1.64, 1.71, 1.7, 1.68]
    m2 = [1.61, 1.7, 1.7, 1.8, 1.75, 1.75]
    m3 = [1.63, 1.69, 1.7, 1.67, 1.64, 1.61]
    width = 0.2
    rcParams['text.usetex'] = True
    fig = plt.figure()
    ax = fig.add_subplot(111)
    rec_m1 = ax.bar(ind, m1, width, color='r', align='center')
    rec_m2 = ax.bar(ind+width, m2, width, color='g', align='center')
    rec_m3 = ax.bar(ind+width*2, m3, width, color='b', align='center')
    ax.set_ylabel('Value', fontsize=20)
    ax.set_xlabel('Matrix', fontsize=20)
    ax.tick_params(axis='both', labelsize=17)
    ax.set_xticks(ind+width)
    ax.set_xticklabels(ind_label, fontsize=18)
    ax.axis([-0.2, 5.6, 1.58, 1.82])
    ax.legend((rec_m1[0], rec_m2[0], rec_m3[0]),
              ('Method 1', 'Method 2', 'Method 3'),
              loc="upper right", shadow=True)
    ax.set_xticklabels([r"$\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right]$",
                        r"$\left[ \begin{array}{cc} 0 & 1 \\ 5 & 0 \end{array}\right]$",
                        r"$\left[ \begin{array}{cc} 0 & 1 \\ 10 & 0 \end{array}\right]$",
                        r"$\left[ \begin{array}{cc} 0 & 1 \\ 30 & 0 \end{array}\right]$",
                        r"$\left[ \begin{array}{cc} 0 & 1 \\ 50 & 0 \end{array}\right]$",
                        r"$\left[ \begin{array}{cc} 0 & 1 \\ 100 & 0 \end{array}\right]$"])
    ax.xaxis.set_tick_params(pad=15)
    plt.grid()
    plt.tight_layout()
    plt.show()

if __name__ == '__main__':
    plot()
A few notes:
* You will require LaTeX on your system.
* This can take a while to render: Matplotlib was developed to create high quality plots, and here it additionally renders LaTeX underneath for each label.
* I have offset the tick labels using `xaxis.set_tick_params(pad=15)`, because with the brackets around the matrix, the tick labels ended up inside the plot.
* There are probably ways (e.g. using more `rcParams`) to change the font size or the LaTeX font used.
The resulting figure is: [figure: the grouped bar chart with the 2x2 matrices rendered as LaTeX tick labels]
|
Delay load/calculate values in Python
Question: Many times I run into the situation where I want a static/global/constant
variable that may take time to calculate, may be reused many times in a single
run, and may not be used at all during a single run.
One example would be a filter that I apply to an image that takes time to
calculate or load from disk, but is only used on some runs of the program (if
the filter was requested), but could be applied to many images during that
run.
How can I accomplish this?
Answer: **Simple Method**
One solution is to have an init function for the module and call it whenever you
are about to enter code that uses the static constants. It in turn checks whether
the value has already been calculated, and if not, calculates it.
This doesn't seem Pythonic, and it seems like it could be encapsulated and made
reusable, reducing the manual checking of initialization and the use of the
`global` statement.
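A minimal sketch of this pattern (`expensive_load` is a hypothetical placeholder
for the slow computation):

    _filter = None  # module-level cache, starts uninitialized

    def get_filter():
        global _filter
        if _filter is None:
            _filter = expensive_load()  # hypothetical; runs only on first use
        return _filter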
**Advanced Method**
I created a module that can make variables that behave just like the value
they are delay-loading, but they don't calculate the value until necessary.
The module can be found at:
<https://gist.github.com/coderforlife/de80081322d535582581>
Trivial Example:
from delayed import delayed
del_list = delayed(lambda: [1,2,3,4,5])
# At this point the lambda function has not been called
print(del_list[0])
# Now that we have used the variable, the lambda has been called
More Advanced Example (better accomplished by pre-calculating the value, but
you get the idea):
def __is_prime(p, primes):
    return all((p % prime != 0) for prime in primes)

def __get_prime():
    primes = [3, 5, 7, 11]
    for p in xrange(13, 50000, 2):
        if __is_prime(p, primes):
            primes.append(p)
    return primes[-1]

prime = delayed(__get_prime, int)
This is accomplished by creating a new class for the delay-loaded object that
has all of the magic methods of the underlying type and forwards other
attribute requests to the underlying object. Whenever a magic method or
attribute is requested, it loads the object if necessary, then continues on.
Once the object is loaded, the loading function is 'deleted' so it can't go
through it again.
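A stripped-down sketch of the forwarding idea (not the gist itself; it omits the
magic-method generation, which the real module needs because magic methods
bypass `__getattr__`):

    class Delayed(object):
        def __init__(self, loader):
            self._loader = loader
            self._value = None

        def _load(self):
            if self._loader is not None:
                self._value = self._loader()   # compute on first access
                self._loader = None            # 'delete' the loader afterwards
            return self._value

        def __getattr__(self, name):
            # only called for attributes not found on Delayed itself,
            # so plain attribute access is forwarded to the real object
            return getattr(self._load(), name)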
I have tested the module with lots of basic types and it works. I am sure there
are plenty of classes for which it doesn't work. I am willing to hear
suggestions for improvements.
|
Pandas variable numbers of columns to binary matrix
Question: I am currently working with a data set (CSV file) which doesn't have a
fixed number of columns. However, I want to convert it to a binary matrix which
has a fixed number of columns.
As an example, the current data set is like this (no headers):
a,b,x,z,y
b,e,w,t,u,o,s,z,i
z,o,w
o,p,w,z,a
I want this to be converted to the following (first row is the header):
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z
1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1
0,1,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,0,0,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1
1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,1
The main problem I am experiencing is the varied number of columns in the data
set. The pseudocode or logic I was considering is this:

header = [a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z]
data_frame = csv file
df = new data frame
for each row in data_frame:
    for each item in row:
        create pandas Series
        if item in header:
            append '1' to Series
        else:
            append '0' to Series
    append series to df
Finally, the matrix should be written to another CSV file.
I have fair knowledge of Python but not of pandas. Therefore, I am kindly asking
for some help here, as I can't seem to find a way to do this. Thank you!
Answer: Here is one way to do it using `pd.get_dummies()`.
import pandas as pd
# read your csv data; the separator must not be ',', for example set tab `\t`
# =======================================================================
# I just read from clipboard
df = pd.read_clipboard(header=None, sep='\t')
df
0
0 a,b,x,z,y
1 b,e,w,t,u,o,s,z,i
2 z,o,w
3 o,p,w,z,a
# step 1
# =========================
df1 = df.groupby(level=0).apply(lambda group: pd.Series(group.values.ravel().tolist()[0].split(',')))
df1
0 0 a
1 b
2 x
3 z
4 y
1 0 b
1 e
2 w
3 t
4 u
..
7 z
8 i
2 0 z
1 o
2 w
3 0 o
1 p
2 w
3 z
4 a
dtype: object
# step 2
# =========================
pd.get_dummies(df1).groupby(level=0).agg(max)
a b e ... x y z
0 1 1 0 ... 1 1 1
1 0 1 1 ... 0 0 1
2 0 0 0 ... 0 0 1
3 1 0 0 ... 0 0 1
[4 rows x 13 columns]
# step 3, to_csv()
# =========================
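Writing the result out could look like this (a sketch; the filename is
arbitrary):

    pd.get_dummies(df1).groupby(level=0).agg(max).to_csv('binary_matrix.csv', index=False)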
|
Dronekit API Python: How to connect to the same vehicle from 2 different processes?
Question: I am looking for help working with the same vehicle from 2 different
processes.
I have one SITL instance running. I am trying to connect to this same instance
from both the main process of my DroneKit script and from a subprocess spawned
in the same script.
Both connections work fine (MPAPIConnection object returned in both cases,
with the same @ reference), but in the subprocess the connection object does
not appear to be live, and the vehicle parameters are not updated.
In the example below, the location returned by the main process while the drone
is moving is the actual location, but the location returned by the subprocess
remains stuck at the initial location from when the subprocess was first started.
Example:
import time
from pymavlink import mavutil
import multiprocessing

class OtherProcess(multiprocessing.Process):
    def __init__(self):
        super(OtherProcess, self).__init__()

    def run(self):
        sp_api = local_connect()
        sp_v = sp_api.get_vehicles()[0]
        while True:
            print "SubProcess : " + str(sp_v.location)
            time.sleep(1)

api = local_connect()
v = api.get_vehicles()[0]
sp = OtherProcess()
sp.start()
while True:
    print "MainProcess : " + str(v.location)
    time.sleep(1)
So is there a way to access the same vehicle from different processes within
the same mavproxy instance ?
Answer: You should try this again - DKPY2 (just released) uses stand-alone
scripts and is designed with the idea that each Vehicle object returned by the
connect() function is completely independent. It is certainly possible to
connect to separate vehicles in the same script (same process), so very likely
you can connect to the same vehicle from separate processes as well.
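A sketch of what each process would run independently in DKPY2 (the SITL
connection string here is an assumption; use whatever your instance exposes):

    from dronekit import connect

    vehicle = connect('127.0.0.1:14550', wait_ready=True)
    print vehicle.location.global_frame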
|
Python, Pygame Blitting to screen (argument 1 must be pygame.Surface)
Question: I am creating a list of blocks that I want to loop through and draw to
the screen in pygame using the blit function. However, an error is thrown that
says (argument 1 must be pygame.Surface, not block). I have made other games in
pygame, drawing objects to a screen, but never putting them in a list first. Any
help would be greatly appreciated.
import pygame
import time

class block:
    def __init__(self, image, posX, posY, name):
        # Take passed image and assign to block
        self.image = image
        # Create a rect for block for collisions
        # self.rect = self.image.get_rect()
        # Create a position from passed integers
        self.posX = posX
        self.posY = posY
        # Set id of the block
        self.name = name

    def draw(self, screen):
        screen.blit(self.image, (self.posX, self.posY))

    def getName(self):
        return self.name

class worldGeneration():
    def __init__(self):
        self.blocks = []

    def worldGen(self):
        grassImg = pygame.image.load("grass.png")
        dirtImg = pygame.image.load("dirt.png")
        for x in range(0, 640, 32):
            block1 = block(grassImg, 0 + x, 416, x)
            self.blocks.append(block1)
        for i in range(416, 480, 32):
            block1 = block(dirtImg, 0 + x, 448, x)
            self.blocks.append(block1)

    def getBlocks(self, x):
        return self.blocks[x]

    def draw(self, x, screen):
        screen.blit(self.blocks[x])
def gameUpdate(screen, world):
    screen.fill((147, 200, 250))
    for x in range(0, len(world.blocks)):
        world.draw(x, screen)
    pygame.display.flip()

def gameLoop():
    height = 480
    width = 640
    screen = pygame.display.set_mode((width, height))
    clock = pygame.time.Clock()
    world = worldGeneration()
    world.worldGen()
    running = True
    while running:
        clock.tick(60)
        pygame.display.set_caption(str(clock.get_fps()))
        for e in pygame.event.get():
            if e.type == pygame.QUIT:
                running = False
            if e.type == pygame.KEYDOWN and e.key == pygame.K_ESCAPE:
                running = False
        gameUpdate(screen, world)

gameLoop()
Answer: Do it this way:
1. Create a Surface.
2. Blit block by block to it, changing coordinates accordingly.
3. Get the screen Surface and blit the resulting one onto it.
4. flip() the display!
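A minimal sketch of those four steps using the classes from the question (the
original error comes from `screen.blit(self.blocks[x])`: blit() needs a
pygame.Surface, i.e. the block's image, not the block object itself):

    background = pygame.Surface((640, 480))         # 1. create a surface
    for b in world.blocks:                          # 2. blit each block's image onto it
        background.blit(b.image, (b.posX, b.posY))
    screen.blit(background, (0, 0))                 # 3. blit the result to the screen
    pygame.display.flip()                           # 4. flip the display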
|
What happened to ifilter?
Question: In comparing the documentation for itertools between Python
[2](https://docs.python.org/2/library/itertools.html) and
[3](https://docs.python.org/3/library/itertools.html), I noticed `ifilter`,
`imap`, and `izip` are missing from Python 3. I suspect this is because many
builtin functions were converted to return iterators and replaced the former
itertools variants, but it is unclear in this case.
Is it true that `ifilter`, `imap`, `izip` are now equivalent to `filter`, `map`,
`zip` in Python 3? If not, where can I find the rationale for why certain
functions were removed from the current `itertools`?
Answer: Python 2.3 introduced the `itertools` module, which defined variants of the
global `zip()`, `map()`, and `filter()` functions that returned iterators
instead of lists. In Python 3, those global functions return iterators, so
those functions in the itertools module have been eliminated.

* Instead of `itertools.izip()`, just use the global `zip()` function.
* Instead of `itertools.imap()`, just use `map()`.
* `itertools.ifilter()` becomes `filter()`.
* The itertools module still exists in Python 3, it just doesn’t have the functions that have migrated to the global namespace. The 2to3 script is smart enough to remove the specific imports that no longer exist, while leaving other imports intact.
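A quick check in a Python 3 shell confirms the builtins are now the lazy
variants (the object address is elided):

    >>> map(str.upper, ['a', 'b'])     # an iterator, not a list
    <map object at 0x...>
    >>> list(zip('ab', [1, 2]))
    [('a', 1), ('b', 2)]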
Read more
[here](http://www.cmi.ac.in/~madhavan/courses/prog2-2012/docs/diveintopython3/porting-code-to-python-3-with-2to3.html).
|
asyncio cannot read stdin on Windows
Question: I'm trying to read stdin asynchronously on Windows 7 64-bit and Python 3.4.3.
I tried this, inspired by an [SO
answer](http://stackoverflow.com/questions/25351999/what-file-descriptor-object-does-python-asyncios-loop-add-reader-expect#answer-25352042):
import asyncio
import sys

def reader():
    print('Received:', sys.stdin.readline())

loop = asyncio.get_event_loop()
task = loop.add_reader(sys.stdin.fileno(), reader)
loop.run_forever()
loop.close()
However, it raises an `OSError: [WinError 10038] An operation was attempted on
something that is not a socket`.
Could a file-like object like `stdin` be wrapped in a class to give it the API
of a socket? I have [asked this question
separately](http://stackoverflow.com/questions/31511563/file-to-socket-adapter-in-python),
but if the solution is simple, please answer here.
Assuming that I cannot wrap a file-like object to make it a socket, I tried
using streams, as inspired by [this
gist](https://gist.github.com/nathan-hoad/8966377):
import asyncio
import sys

@asyncio.coroutine
def stdio(loop):
    reader = asyncio.StreamReader(loop=loop)
    reader_protocol = asyncio.StreamReaderProtocol(reader)
    yield from loop.connect_read_pipe(lambda: reader_protocol, sys.stdin)

@asyncio.coroutine
def async_input(loop):
    reader = yield from stdio(loop)
    line = yield from reader.readline()
    return line.decode().replace('\r', '').replace('\n', '')

@asyncio.coroutine
def main(loop):
    name = yield from async_input(loop)
    print('Hello ', name)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
loop.close()
And that raises a `NotImplementedError` in
`asyncio.base_events._make_read_pipe_transport`
Please advise how to read `stdin` using `asyncio` on Windows...
Answer: The `NotImplementedError` exception is raised because the [connect pipes
coroutines](https://docs.python.org/3/library/asyncio-eventloop.html#connect-pipes)
are not supported by the `SelectorEventLoop`, which is the default event loop in
`asyncio`. You need to use a
[`ProactorEventLoop`](https://docs.python.org/3/library/asyncio-eventloops.html#asyncio.ProactorEventLoop)
to support pipes on Windows.
However, it would still not work, because apparently the `connect_read_pipe`
and `connect_write_pipe` functions don't support `stdin`/`stdout`/`stderr` or
files on Windows as of Python 3.5.1.
One way to read from `stdin` with asynchronous behavior is to use a thread with
the loop's `run_in_executor` method. Here is a simple example for reference:
import asyncio
import sys

async def aio_readline(loop):
    while True:
        line = await loop.run_in_executor(None, sys.stdin.readline)
        print('Got line:', line, end='')

loop = asyncio.get_event_loop()
loop.run_until_complete(aio_readline(loop))
loop.close()
In the example, the function `sys.stdin.readline()` is called within another
thread by the `loop.run_in_executor` method. The thread remains blocked until
`stdin` receives a linefeed; in the meantime the loop is free to execute other
coroutines if they exist.
|
Python: Why is my program running when I insert pieces one by one, but not when all together?
Question: So I created this very easy paper-rock-scissors game in Python (sorry,
the strings are not in English).
Anyway, I am running it in IDLE, and it works when I insert it like this:
1. from random import randint --> ENTER
2. def my function --> ENTER
3. call my function --> ENTER
This way it runs as expected.
But when I paste it all together and then press ENTER, it does nothing, and when
I call my function again, it says it's not defined. Also, when I save it as .py
and run it, the window only pops up for about 0.5 seconds.
My code is:
from random import randint

def myProgram():
    kpk = raw_input("Kivi, paber või kaarid?")
    random = randint(0, 2)
    result = ""
    if kpk == "kivi":
        kpk = 0
        if random == 0:
            result = "VIIK"
        elif random == 1:
            result = "Kivi < Paber --- KAOTASID"
        else:
            result = "kivi > kaarid --- VOITSID"
    elif kpk == "paber":
        kpk = 1
        if random == 0:
            result = "kivi < paber --- VOITSID"
        elif random == 1:
            result = "VIIK"
        else:
            result = "paber < kaarid --- KAOTASID"
    elif kpk == "käärid":
        kpk = 2
        if random == 0:
            result = "kivi > kaarid --- KAOTASID"
        elif random == 1:
            result = "paber > kaarid --- VOITSID"
        else:
            result = "VIIK"
    else:
        print "Sisend peab olema uks kolmest: Kivi, paber või kaarid"
        myProgram()
    print result
    kas = raw_input("Kas soovite uuesti mangida? (Y/N)")
    if kas == "Y" or kas == "y" or kas == "jah":
        myProgram()
    else:
        print "Mang on läbi!"

myProgram()
Answer: I am guessing this is the minimal version of your issue -

>>> def blah():
...     return 1
... blah()
  File "<stdin>", line 3
    blah()
    ^
SyntaxError: invalid syntax
>>> blah()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'blah' is not defined
In the Python shell/IDLE, you need to enter a newline (empty line) after a block
has ended, so that Python can interpret the block (this is valid for any block:
function definitions, loops, `with` blocks, etc.).
In the above, I copied and pasted the code

def blah():
    return 1
blah()

without a newline after the function definition, which gives the syntax error.
One way (as you already observed) is to copy-paste the function and the function
call separately (and make sure the function gets defined by pressing
enter/adding a newline at the end). (I am not sure if this is some kind of bug
in the Python shell, but it has been like that for both Python 2.x and
Python 3.x.)
Please note this applies only when copy-pasting code into the shell, not when
saving your script as a .py file and running it. When running the code as a
script, the extra newlines are not needed (though it is better to add them for
readability).
|
Python subprocess.popen returns empty string
Question:

import subprocess
cd = ['sudo', './interface', '-a', '</tmp/vol.js']
p = subprocess.Popen(cd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

The above code returns an empty string, but when I run the same command
directly, i.e. `sudo ./interface -a </tmp/vol.js`, it works totally fine.
Answer: You are using _shell-specific redirection syntax_ (`</tmp/vol.js`), but
`subprocess.Popen()` doesn't run your command in a shell.
You'd need to either run the command in a shell (pass in a _string_, not a list,
and set `shell=True`), or open `vol.js` yourself and feed it to the process
through the `stdin` pipe:

import subprocess
command = ('sudo', './interface', '-a')
with open('/tmp/vol.js') as input_stream:
    p = subprocess.Popen(command, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, stdin=input_stream)
    stdout, stderr = p.communicate()

Note that I passed in the open `vol.js` file object as the input stream here.
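For completeness, a sketch of the shell variant (note the single command string
instead of a list):

    p = subprocess.Popen('sudo ./interface -a </tmp/vol.js', shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()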
|
Python call function from module in a subprocess
Question: I would like to retrieve the stdout, stderr and return code of a
module function called from the main program. I thought subprocess was the key,
but I haven't succeeded in submitting the module function to subprocess.
What I have:
#my_module.py
def run(args):
    do stuff
    print this
    return THAT

if __name__ == "__main__":
    args = read_args()
    run(args)

.

#my_main_script.py
import subprocess as sp
import my_module

p = sp.Popen(my_module.run(args), stdout=sp.PIPE, stderr=sp.PIPE)
out, err = p.communicate()
result = p.resultcode
What occurs: apparently the subprocess module does something with the THAT
returned from my_module.run(), provoking the crash:
if `THAT = list_of_lists` error: `AttributeError: "sublist" object has no
attribute rfind`
if `THAT = ["a","b",0]` error: `TypeError: execv() arg 2 must contain only
strings`
if `THAT = ["a","b"]` error: `OSError: [Errno 2] No such file or directory`
So subprocess apparently wants THAT to be a list containing paths to files?
Answer: You are not using subprocess the right way:

sp.Popen(["./my_module.py", "arg1", "arg2"], stdout=sp.PIPE, stderr=sp.PIPE)

By the way, you won't get any return code if you do not exit your program
properly with the `sys.exit(retcode)` function.
The final scripts would look like this:

#my_module.py
def run(args):
    do stuff
    print this
    return THAT

if __name__ == "__main__":
    import sys
    args = read_args()
    sys.exit(run(args))

#my_main_script.py
import subprocess as sp

p = sp.Popen(["./my_module.py", "arg1", "arg2"], stdout=sp.PIPE, stderr=sp.PIPE)
out, err = p.communicate()
result = p.returncode
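If my_module.py lacks a shebang line or execute permission, a sketch of an
alternative is to invoke it through the interpreter explicitly:

    import subprocess as sp
    import sys
    p = sp.Popen([sys.executable, 'my_module.py', 'arg1', 'arg2'],
                 stdout=sp.PIPE, stderr=sp.PIPE)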
|
BeautifulSoup - getting value from the resultant tags
Question: The code below is my Python code, using Beautiful Soup, for getting a
specified value from a given URL.
from bs4 import BeautifulSoup
import urllib
import re
book = urllib.urlopen(url)
bookpage = book.read()
book.close()
booksoup=BeautifulSoup(bookpage)
bookTags=booksoup.findAll('div',{"class":"hotel_large_photp_score featured_review_score"})
print bookTags
and when I print `bookTags`, this is what I get (below):
<div class="hotel_large_photp_score featured_review_score">
<a class="big_review_score_detailed js-big_review_score_detailed ind_rev_total hp_review_score" data-component="track" data-hash="cPWbOTCcdSdCBYSbfYCWXT" data-stage="1" data-tab-link="" data-track="click" href="#blockdisplay4" onclick="return false;" rel="reviews" style="">
<span data-component="track" data-hash="cPWbOTCcdSdCBYSbfYCWXT" data-stage="5" data-track="click">
<span class=" ">
Very good
</span>
<span class="rating">
<span class="average">8.2</span><span class="out_of">/<span class="best">10</span></span>
</span>
</span>
</a>
<span class="trackit score_from_number_of_reviews">
Score from <strong class="count">229</strong> reviews
</span>
What I need is the value `8.2` from the tag `<span
class="average">8.2</span>`. Please help me get that value.
Answer: Try

average = bookTags[0].find("span", {"class": "average"})
print average.text

(`bookTags` comes from `findAll()`, so it is a list of tags; take the first
element before calling `find`.)
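If you need the score as a number rather than text, a float conversion (a small
sketch) works:

    score = float(average.text.strip())  # 8.2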
|
error using MySQLdb
Question: I am a newbie to Python. I have simple code to connect to a database
using MySQLdb and Python 3.4, running on localhost.
Python code:

#!c:/Python34/python.exe -u
import cgitb, cgi
import sys
import MySQLdb

conn = MySQLdb.connect(host='localhost', port=3306,
                       db='example2',
                       user='root',
                       password='tiger')
cursor = conn.cursor()
if conn:
    print("success")
But I get an error while executing the code:
Traceback (most recent call last):
  File "<pyshell#27>", line 4, in <module>
    password='tiger')
  File "C:\Python34\lib\site-packages\MySQLdb\__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "C:\Python34\lib\site-packages\MySQLdb\connections.py", line 204, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
TypeError: an integer is required (got type str)
Please give some suggestions.
Answer: You need to remove the single quotes from around the port so it is
passed as an integer (that is what the TypeError is complaining about):

conn = MySQLdb.connect(host='localhost',
                       port=3306,
                       database='example2',
                       user='root',
                       password='tiger')
|
Basic Maths Game, Python: 'input expected at most 1 arguements, got 4'
Question: I've been working with Python for a little while now, and I recently
decided I wanted to make a basic maths game that would choose two random numbers
from a list of 1-9 and make you add them together. If your answer was right, it
would say well done; if it was wrong, it would say try again next time.
I'm having a problem with one line of code, though, because it comes up with an
error message saying:

input expected at most 1 arguments, got 4

for this line of code:

ans=int(input(num1,'+',num2,'= '))

I've looked it up several times and have found answers to the problem, but not
in the way I need them. Here is the whole code:
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9]
import random
num1 = random.choice(nums)
num2 = random.choice(nums)
sum = num1 + num2
correct_ans = sum
ans = int(input(num1, '+', num2, '= '))
if ans == correct_ans:
    print('Correct, Well Done!')
else:
    print('Wrong, Try Again Next Time!')
Answer: ## Input takes 1 arg
You are passing 4 args to `input`, which accepts only 1 argument:

input(num1, '+', num2, '= ')  # 4 args!

Try:

question = '{} + {} = '.format(num1, num2)
ans = input(question)
## Putting it all together, with a note about types

import random

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9]
num_1 = random.choice(nums)
num_2 = random.choice(nums)

# This sum is a numeric sum, not a string concatenation
correct_ans = num_1 + num_2

# Build the question string from our numbers
question = '{} + {} = '.format(num_1, num_2)
ans = input(question)

# At this point ans is still a string, so convert it
ans = int(ans)

# Now you are comparing two ints
if ans == correct_ans:
    print('Correct!')
else:
    print('Wrong!')
## Test run
python math_game.py
3 + 3 = 6
Correct!
python math_game.py
7 + 2 = 1
Wrong!
|
Handle/catch RuntimeWarnings in connection with igraph python
Question: I am running the code
testgraph = igraph.Graph.Degree_Sequence(degseq,method = "vl")
which sometimes throws the warning
RuntimeWarning: Cannot shuffle graph, maybe there is only a single one? at gengraph_graph_molloy_hash.cpp:332
I would like to catch this warning so I can stop working with degree sequences
that admit only a single graph.
I tried

degseq = [1, 2, 2, 3]
try:
    testgraph = igraph.Graph.Degree_Sequence(degseq, method="vl")
except RuntimeWarning:
    print degseq
else:
    print "go on"

which returns the warning and then "go on".
I tried upgrading the warning to an exception with

warnings.simplefilter('error', 'Cannot shuffle graph')
degseq = [1, 2, 2, 3]
try:
    testgraph = igraph.Graph.Degree_Sequence(degseq, method="vl")
except RuntimeWarning:
    print degseq
else:
    print "go on"
among others, and now something weird happens! It returns
testgraph = igraph.Graph.Degree_Sequence(degseq,method = "vl")
MemoryError: Error at src/attributes.c:284: not enough memory to allocate attribute hashes, Out of memory
How do I make Python catch the RuntimeWarning? And why does the new exception
occur when I upgrade the warning to an exception?
Answer: You can try this to catch all warnings during an igraph call:

from warnings import catch_warnings

with catch_warnings(record=True) as caught_warnings:
    testgraph = igraph.Graph.Degree_Sequence(degseq, method="vl")
if caught_warnings:
    # caught_warnings is a list of warning objects; do something with them here
    print degseq
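If you only care about this specific warning type, a sketch of a narrower check
(each caught item exposes the warning class via its `category` attribute):

    if any(issubclass(w.category, RuntimeWarning) for w in caught_warnings):
        print degseq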
Regarding the `MemoryError` that you see: it is actually a bug in the Python
interface of igraph. Internally, the C core of the igraph library (which does
not know anything about the host language it is embedded in) raises a warning
for your degree sequence. This is then turned into a Python warning, which is
then turned into an exception by your warning handler. However, the C core of
the igraph library does not anticipate that igraph core warnings are sometimes
turned into exceptions, so it happily proceeds with the execution of the code
for `Graph.Degree_Sequence`, notices the warning-that-was-turned-into-an-exception
a few internal calls later, and wrongly attributes it to the failure of some
other memory allocation (that has nothing to do with your original warning).
|
Find the year with the most number of people alive in Python
Question: Given a list of people with their birth and end years (all between `1900` and
`2000`), find the year with the most number of people alive.
Here is my somewhat brute-force solution:
def most_populated(population, single=True):
    years = dict()
    for person in population:
        for year in xrange(person[0], person[1]):
            if year in years:
                years[year] += 1
            else:
                years[year] = 0
    return max(years, key=years.get) if single else \
        [key for key, val in years.iteritems() if val == max(years.values())]

print most_populated([(1920, 1939), (1911, 1944),
                      (1920, 1955), (1938, 1939)])
print most_populated([(1920, 1939), (1911, 1944),
                      (1920, 1955), (1938, 1939), (1937, 1940)], False)
I'm trying to find a more efficient way to solve this problem in Python. Both
readability and efficiency count. Moreover, for some reason my code won't print
`[1938, 1939]` while it should.
**Update**
Input is a `list` of tuples, where the first element of each tuple is the year
the person was born, and the second element is the year of death.
**Update 2**
The end year (2nd element of the tuple) also counts as a year the person is
alive (so if the person dies in `Sept 1939` (we don't care about the month), he
is actually alive in 1939, at least part of it). That should fix the missing
1939 in the results.
**Best solution?**
While readability counts in favor of
[@joran-beasley](http://stackoverflow.com/a/31522489/1554153), for bigger input
the most efficient algorithm was provided by
[@njzk2](http://stackoverflow.com/a/31523813/1554153). Thanks @hannes-ovrén for
providing the analysis in an [IPython notebook on
Gist](https://gist.github.com/hovren/bf5335ba45cd7eee811e).
Answer:

>>> from collections import Counter
>>> from itertools import chain
>>> def most_pop(pop):
...     pop_flat = chain.from_iterable(range(i, j+1) for i, j in pop)
...     return Counter(pop_flat).most_common()
...
>>> most_pop([(1920, 1939), (1911, 1944), (1920, 1955), (1938, 1939)])[0]
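`most_common()` returns `(year, count)` pairs ordered by count. For this input,
1938 and 1939 tie at four people alive, so the first element is one of those
two years, e.g. `(1938, 4)`.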
|
Python Pandas - Don't sort bar graph on y axis values
Question: I am a beginner in Python. I have a Series with Date and a count of
some observations, as below:
Date Count
2003 10
2005 50
2015 12
2004 12
2003 15
2008 10
2004 05
I want to plot the count against the year as a bar graph **(x axis as year and
y axis as count)**. I am using the code below:

import pandas as pd
pd.value_counts(sfdf.Date_year).plot(kind='bar')

I am getting a bar graph which is automatically sorted on the count, so I am
not able to clearly visualize how the count is distributed over the years. Is
there any way to stop sorting the data on the count and instead sort on the x
axis values (i.e. year)?
Answer: The following code uses `groupby()` to join the multiple instances of
the same year together, and then calls `sum()` on the groupby object to sum
them up. By default `groupby()` pushes the grouped column to the dataframe
index. I think `groupby()` sorts automatically, but just in case, `sort(axis=0)`
will sort the index. All that then remains is to plot:

df = pd.DataFrame([[2003,10],[2005,50],[2015,12],[2004,12],[2003,15],[2008,10],[2004,5]],
                  columns=['Date','Count'])
df.groupby('Date').sum().sort(axis=0).plot(kind='bar')
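A note if you are on a newer pandas than this answer assumes: `DataFrame.sort()`
has since been removed, and `sort_index()` is the replacement:

    df.groupby('Date').sum().sort_index().plot(kind='bar')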
|
Pydot Error involving parsing ':' character followed by number
Question: So I was using pydot in Python 2.7 from Anaconda and noticed I keep
getting errors when I attempt to use certain strings in pydot.
I have isolated the error to:
import pydot
graph = pydot.Dot(graph_type='digraph', rankdir = 'LR')
S = 'Total Flow Count ' + ':' + str(3)
legend = pydot.Node('Legend', label=S, shape='rectangle')
graph.add_node(legend)
Whenever I run this I get the following output:
Traceback (most recent call last):
  File "path\of\my\code\errorisolate.py", line 13, in <module>
    graph.write_png('example5graph.png')
  File "c:\Anaconda\lib\site-packages\pydot.py", line 1609, in <lambda>
    lambda path, f=frmt, prog=self.prog: self.write(path, format=f, prog=prog))
  File "c:\Anaconda\lib\site-packages\pydot.py", line 1703, in write
    dot_fd.write(self.create(prog, format))
  File "c:\Anaconda\lib\site-packages\pydot.py", line 1803, in create
    status, stderr_output))
InvocationException: Program terminated with status: 6. stderr follows: Error: c:\users\sidharth\appdata\local\temp\tmpxvwsls:3: syntax error near line 3
context: Legend [shape=rectangle, label=Total Flow Count >>> : <<< 3];
## Analysis/Work So Far:
Somehow the combination of a colon character ':' followed by a number in str()
format seems to raise an error. I tried to fix it by appending an 'r' in front,
since I know that is a way to fix errors involving the '\n' character. But even
then, no luck.
## Changes:
I removed the r, as it appears to be causing a little confusion. I had kept
r':' hoping to emulate the solution to the problem of non-compiling newlines
'\n', since pydot requires them to be listed as r'\n', where the r is
explicitly not defined.
As per:
[Pydot not playing well with line
breaks?](http://stackoverflow.com/questions/24442094/pydot-not-playing-well-with-line-breaks)
Answer: I found [issue number 38](https://github.com/erocarrera/pydot/issues/38),
which says we cannot use special symbols (like a colon) in node names or labels.
The reason it highlights is:
> As with issue 28: the problem with the colon in node names is that Graphviz
> will use it to specify a port where to attach edges; it's a Graphviz
> artifact. The way pydot supports them is to allow them in names; if you wish
> to simply have colon characters in the name, simply add quotes to the string.
>
> For instance (note the double quotes in the actual string):
>
>
> node = pydot.Node('"Testnode:###@"')
>
> print node.get_name()
> '"Testnode:###@"'
>
Though you may be better off not having a colon in your name.
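Applied to the snippet from the question, a sketch of that quoting workaround
would be to wrap the whole label in literal double quotes:

    S = '"Total Flow Count : 3"'
    legend = pydot.Node('Legend', label=S, shape='rectangle')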
|
Remove accents in Windows username causing troubles with softwares and libraries
Question: I have an accent in my Windows username, `Clément`. Therefore, there
is an accent in my user directory, `C:\Users\Clément\`. This causes trouble for
some software and libraries.
For example, I recently installed Python Anaconda and I can't import the
packages (matplotlib, nltk, ...) without getting a `UnicodeDecodeError` because
of the path.
**My question is:** Is it possible to remove the accent from the Windows
username and change `C:\Users\Clément\` to `C:\Users\Clement\` without breaking
other software? Or should I reinstall Windows?
Answer: A much easier solution would be to use Anaconda3, which is based on Python 3.
Python 3 handles Unicode strings natively, and it's much rarer to see issues
with non-ASCII characters in paths.
|
Write a CSV from Urlib and manage encoding properly
Question: I need to put the content of a vector/list into a CSV. I'm running
into trouble with the "Python encoding problem", obviously.
Here is the code we're talking about:
import pdb
#pdb.set_trace()
import sys
sys.version_info
import csv
from bs4 import *
import urllib.request

rows = list()

def parse_csv(content, delimiter=';'):
    csv_data = []
    for line in content.split('\n'):
        csv_data.append([x.strip() for x in line.split(delimiter)])  # strips spaces also
    return csv_data

list_url = parse_csv(open('url.csv', 'rU').read())

for i in range(0, len(list_url)):
    url = str(list_url[i][0])  # read URL from an array coming from a URL CSV
    page = urllib.request.urlopen(url)
    soup = BeautifulSoup(page.read(), "html.parser")
    for h1 in soup.find_all('h1'):
        rows.append(h1.get_text().encode('utf-8').strip())  # looks for titles
    for h2 in soup.find_all('h2'):
        rows.append(h2.get_text().encode('utf-8').strip())
    restricted_webpage = soup.find("div", {"id": "ingredients"})
    readable_restricted = str(restricted_webpage)
    soup2 = BeautifulSoup(readable_restricted, "html.parser")
    for td in soup2.find_all('td'):
        rows.append(td.get_text().encode('utf-8').strip())

print(rows)

if sys.version_info >= (3, 0, 0):
    f = open('FB.csv', 'w', newline='')
else:
    f = open('FB.csv', 'wb')

wr = csv.writer(f)
wr.writerows(rows)
If I do that, my CSV looks like this (which is not fine at all):
65,100,117,108,116,32,83,109,97,108,108,32,68,111,103
68,101,32,112,108,117,115,32,100,101,32,56,47,32,49,48,32,109,111,105,115,46
67,101,110,100,114,101,115,32,98,114,117,116,101,115,32,40,37,41
55,46,52
67,101,108,108,117,108,111,115,101,32,98,114,117,116,101,32,40,37,41
49,46,54
[... and so on for well over a hundred more rows; every row is the decimal byte
values of one scraped string, e.g. the first row spells "Adult Small Dog".]
This implies I would have to decode the CSV to use it in the future, which
doesn't work for me. I need something "visual".
If I try to replace every

encode('utf-8').strip()

with

strip()

I receive the error:
>
> wr.writerows(rows) UnicodeEncodeError: 'ascii' codec can't encode
> character '\xe8' in position 8: ordinal not in range(128)
>
`strip` is very useful here because it helps me get rid of the '\n' characters
I don't really like.
Here is a second thing I've tried: to get rid of those é'èàù% and other
troublemakers by converting to ASCII with this:

yourstring = yourstring.encode('ascii', 'ignore').decode('ascii')
The result was not bad, but not good enough:
"
",A,d,u,l,t, ,S,m,a,l,l, ,D,o,g
D,e, ,p,l,u,s, ,d,e, ,8,/, ,1,0, ,m,o,i,s,.
"
", , , , , , , , , , , , , , , , ,C,e,n,d,r,e,s, ,b,r,u,t,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,7,.,4,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,e,l,l,u,l,o,s,e, ,b,r,u,t,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,6,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,F,i,b,r,e,s, ,a,l,i,m,e,n,t,a,i,r,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,6,.,6,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,a,t,i,r,e, ,g,r,a,s,s,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,6,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,l,i,n,o,l,i,q,u,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,.,1,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,c,a,l,c,u,l,e, ,s,e,l,o,n, ,N,R,C,8,5,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,6,5,2,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,m,e,s,u,r,e,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,9,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,H,u,m,i,d,i,t, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,9,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,x,t,r,a,i,t, ,n,o,n, ,a,z,o,t, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,0,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,O,m,g,a, ,6, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,.,1,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,P,r,o,t,i,n,e, ,b,r,u,t,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,m,i,d,o,n, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,5,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,h,l,o,r,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,4,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,u,i,v,r,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,I,o,d,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,F,e,r, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,6,7,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,a,n,g,a,n,s,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,6,8,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,Z,i,n,c, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,4,2,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,B,i,o,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,.,1,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,h,o,l,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,6,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,f,o,l,i,q,u,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,3,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,A, ,(,U,I,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,2,0,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,1, ,T,h,i,a,m,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,7,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,2, ,R,i,b,o,f,l,a,v,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,9,.,6,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,3, ,N,i,a,c,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,9,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,5, ,A,c,i,d,e, ,p,a,n,t,o,t,h,n,i,q,u,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,4,7,.,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,6, ,P,y,r,i,d,o,x,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,7,7,.,1,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,C, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,D,3, ,(,U,I,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,8,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,E, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,6,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,r,g,i,n,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,5,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,L,u,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,t,h,i,o,n,i,n,e, ,C,y,s,t,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,1,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,T,a,u,r,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,9,0,0,.,0,"
", , , , , , , ,
"
",B,e,a,g,l,e, ,A,d,u,l,t
B,e,a,g,l,e, ,a,d,u,l,t,e,",", , ,p,a,r,t,i,r, ,d,e, ,1,2, ,m,o,i,s
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,a,r,a,c,h,i,d,o,n,i,q,u,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,0,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,e,n,d,r,e,s, ,b,r,u,t,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,6,.,1,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,B,i,o,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,.,9,2,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,F,i,b,r,e,s, ,a,l,i,m,e,n,t,a,i,r,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,l,i,n,o,l,i,q,u,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,.,4,2,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,L,u,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,c,a,l,c,u,l,e, ,s,e,l,o,n, ,N,R,C,8,5,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,4,2,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,m,e,s,u,r,e,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,5,8,4,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,t,h,i,o,n,i,n,e, ,C,y,s,t,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,9,2,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,H,u,m,i,d,i,t, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,9,.,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,x,t,r,a,i,t, ,n,o,n, ,a,z,o,t, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,1,.,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,O,m,g,a, ,3, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,6,4,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,O,m,g,a, ,6, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,.,6,6,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,P,h,o,s,p,h,o,r,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,P,r,o,t,i,n,e, ,b,r,u,t,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,7,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,m,i,d,o,n, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,4,.,4,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,T,a,u,r,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,5,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,A, ,(,U,I,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,9,0,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,C, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,E, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,6,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,a,l,c,i,u,m, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,P,o,l,y,p,h,n,o,l,s, ,d,e, ,t,h, ,v,e,r,t, ,e,t, ,d,e, ,r,a,i,s,i,n,s, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,5,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,h,l,o,r,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,6,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,u,i,v,r,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,I,o,d,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,.,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,F,e,r, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,1,2,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,a,g,n,s,i,u,m, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,0,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,a,n,g,a,n,s,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,7,1,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,P,o,t,a,s,s,i,u,m, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,S,l,n,i,u,m, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,2,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,S,o,d,i,u,m, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,4,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,Z,i,n,c, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,0,1,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,h,o,l,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,2,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,f,o,l,i,q,u,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,2,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,1, ,T,h,i,a,m,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,5,.,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,1,2, ,C,y,a,n,o,c,o,b,a,l,a,m,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,1,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,2, ,R,i,b,o,f,l,a,v,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,6,.,2,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,3, ,N,i,a,c,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,5,8,.,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,5, ,A,c,i,d,e, ,p,a,n,t,o,t,h,n,i,q,u,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,3,7,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,B,6, ,P,y,r,i,d,o,x,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,7,2,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,V,i,t,a,m,i,n,e, ,D,3, ,(,U,I,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,8,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,r,g,i,n,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,5,2,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,L,-,l,y,s,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,0,8,"
", , , , , , , ,
"
",J,a,c,k, ,R,u,s,s,e,l,l, ,A,d,u,l,t
J,a,c,k, ,R,u,s,s,e,l,l, ,T,e,r,r,i,e,r, ,a,d,u,l,t,e,",", , ,p,a,r,t,i,r, ,d,e, ,1,0, ,m,o,i,s
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,a,r,a,c,h,i,d,o,n,i,q,u,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,0,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,e,n,d,r,e,s, ,b,r,u,t,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,5,.,9,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,B,i,o,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,.,0,7,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,F,i,b,r,e,s, ,a,l,i,m,e,n,t,a,i,r,e,s, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,7,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,D,L,-,m,t,h,i,o,n,i,n,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,6,5,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,P,A,/,D,H,A, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,0,.,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,M,a,t,i,r,e, ,g,r,a,s,s,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,6,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,e,l,l,u,l,o,s,e, ,b,r,u,t,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,.,3,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,C,h,l,o,r,u,r,e, ,d,e, ,g,l,u,c,o,s,a,m,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,4,9,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,G,l,u,c,o,s,a,m,i,n,e, ,p,l,u,s, ,c,h,o,n,d,r,o,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,5,0,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,L,-,c,a,r,n,i,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,1,5,0,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,A,c,i,d,e, ,l,i,n,o,l,i,q,u,e, ,(,%,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,2,.,8,8,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,L,u,t,i,n,e, ,(,m,g,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,5,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,c,a,l,c,u,l,e, ,s,e,l,o,n, ,N,R,C,8,5,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,7,1,6,.,0,"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,E,n,e,r,g,i,e, ,m,t,a,b,o,l,i,s,a,b,l,e, ,(,m,e,s,u,r,e,), ,(,k,c,a,l,/,k,g,),"
", , , , , , , ,
"
", , , , , , , , , , , , , , , , ,3,9,6,4,.,0,"
", , , , , , , ,
When I add a `strip` on top of that, the CSV produced is empty.
With just a `strip`, I get this error instead:
> UnicodeEncodeError: 'ascii' codec can't encode character '\xe8' in position
> 8: ordinal not in range(128)
I read and tried most of this [stack-
topic](http://stackoverflow.com/questions/9942594/unicodeencodeerror-ascii-
codec-cant-encode-character-u-xa0-in-position-20/9942822#9942822)
The ASCII route isn't very nice because I lose information, and I do need it.
How can I properly write my list into this FB.csv?
Answer: Test your code with these instructions at the beginning of your script:
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
You should take a look [here](http://stackoverflow.com/questions/3828723/why-
we-need-sys-setdefaultencodingutf-8-in-a-py-script)
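Note also that the one-character-per-column output shown in the question is the classic symptom of passing a bare string (instead of a list of fields) to `csv.writer.writerow()`. A minimal sketch of writing UTF-8 rows on Python 2, assuming each record is a list of unicode strings (the sample `rows` data is hypothetical):

# -*- coding: utf-8 -*-
import csv

rows = [[u'Protéine brute (%)', u'27.0']]  # hypothetical data
with open('FB.csv', 'wb') as f:
    writer = csv.writer(f)
    for row in rows:
        # writerow() wants a sequence of fields; a bare string is iterated
        # character by character, which produces output like the above
        writer.writerow([field.encode('utf-8') for field in row])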
|
a bug with csv module in python
Question:
import csv
with open('database.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(row['NAME'])
I have a CSV file whose first row is a header. On Linux, the code reads
row['NAME'] and prints only the names from the NAME column, but when I run it
on Windows, it says:
C:\Users\Desktop>python py.py
Traceback (most recent call last):
File "py.py", line 5, in <module>
print(row['NAME'])
KeyError: 'NAME'
WHY?
Answer: If you are using a Python 2 version, you need to open the csv with 'rb',
i.e.:
with open('database.csv', 'rb') as csvfile:....
For reference, check out <https://docs.python.org/2/library/csv.html>, as the
reader documentation includes a note about this.
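On Python 3, the csv module wants text mode with `newline=''` instead of binary mode (a sketch):

import csv

with open('database.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        print(row['NAME'])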
|
Converting a string of numbers to hex and back to dec pandas python
Question: I currently have a string of values which I retrieved after filtering through
data from a csv file. Ultimately I had to do some filtering of the data, but I
have the same numbers as a list, dataframe, or array. I just need to take the
numbers, convert each to hex, take the first 8 digits of the hex, and convert
that back to decimal for each element. Lastly, I also need to convert the last
8 digits of the same hex back to decimal for each value.
I cannot provide a snippet because it is sensitive data, but here is an
example.
I basically have something like this
>>> list_A
[52894036, 78893201, 45790373]
If I convert it to a dataframe and call `df.dtypes`, it says `dtype: object`
and I can convert the values of Column A to bool, int, or string, but the
dtype is always an object.
It does not matter whether it is a function, or just a simple loop. I have
been trying many methods and am unable to attain the results I need. But
ultimately the data is taken from different csv files and will never be the
same values or list size.
Answer: I'm not completely following your question, but this ought to cover most of
what you're trying to do. I'll do it in pandas since you seem to be doing it
that way, but as far as this question itself is concerned you could do it just
as easily with standard python.
import pandas as pd
df=pd.DataFrame({ 'a':[52894036999, 78893201999, 45790373999] })
df['b'] = df['a'].apply( hex )
df['c'] = df['b'].str[:10]
df['d'] = df['c'].apply( lambda x: int(x,base=0) )
df
a b c d
0 52894036999 0xc50baf407L 0xc50baf40 3305877312
1 78893201999 0x125e66ba4fL 0x125e66ba 308176570
2 45790373999 0xaa951a86fL 0xaa951a86 2861898374
Note that pandas basically has 2 main dtypes: numbers (ints or floats) and
objects. Objects can be anything, including strings.
df.dtypes
a int64
b object
c object
d int64
Sometimes it can be more informative to check the type of the individual
elements of a column, rather than the dtype of the whole column. Here we'll
look at the first row and you can see that the middle two columns are strings.
for col in df.columns:
print( type(df[col].iloc[0]) )
<type 'numpy.int64'>
<type 'str'>
<type 'str'>
<type 'numpy.int64'>
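For the last 8 hex digits the question also asks about, a similar sketch (assuming the trailing 'L' printed by Python 2 longs should be stripped first):

df['e'] = df['b'].str.rstrip('L').str[-8:]          # last 8 hex digits
df['f'] = df['e'].apply(lambda x: int(x, base=16))  # back to decimal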
|
BeautifulSoup: RuntimeError: maximum recursion depth exceeded
Question: I can't avoid the maximum recursion depth Python RuntimeError using
BeautifulSoup.
I'm trying to recurse over nested sections of code and pull out the content.
The prettified HTML looks like this (don't ask why it looks like this :)):
<div><code><code><code><code>Code in here</code></code></code></code></div>
The function I'm passing my soup object to is:
def _strip_descendent_code(self, soup):
sys.setrecursionlimit(2000)
# soup = BeautifulSoup(html, 'lxml')
for code in soup.findAll('code'):
s = ""
for c in code.descendents:
if not isinstance(c, NavigableString):
if c.name != code.name:
continue
elif c.name == code.name:
if isinstance(c, NavigableString):
s += str(c)
else:
continue
code.append(s)
return str(soup)
You can see I'm trying to increase the default recursion limit, but this is not
a solution. I've increased it to the point where C hits the machine's memory
limit, and the function above still never completes.
Any help getting this to work, and pointing out the error(s), would be much
appreciated.
The stack trace repeats this:
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1234, in find
l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1255, in find_all
return self._find_all(name, attrs, text, limit, generator, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 529, in _find_all
i = next(generator)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1269, in descendants
stopNode = self._last_descendant().next_element
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 284, in _last_descendant
if is_initialized and self.next_sibling:
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 997, in __getattr__
return self.find(tag)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1234, in find
l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1255, in find_all
return self._find_all(name, attrs, text, limit, generator, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 529, in _find_all
i = next(generator)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1269, in descendants
stopNode = self._last_descendant().next_element
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 284, in _last_descendant
if is_initialized and self.next_sibling:
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 997, in __getattr__
return self.find(tag)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1234, in find
l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1255, in find_all
return self._find_all(name, attrs, text, limit, generator, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 512, in _find_all
strainer = SoupStrainer(name, attrs, text, **kwargs)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1548, in __init__
self.text = self._normalize_search_value(text)
File "/Users/almccann/.virtualenvs/evernoteghost/lib/python3.4/site-packages/bs4/element.py", line 1553, in _normalize_search_value
if (isinstance(value, str) or isinstance(value, collections.Callable) or hasattr(value, 'match')
RuntimeError: maximum recursion depth exceeded while calling a Python object
Answer: I encountered this problem and browsed a lot of web pages. I'll summarize two
methods to solve it.
However, I think we should first understand why it happens. Python limits the
recursion depth (the default is 1000). We can see this number with `print
sys.getrecursionlimit()`. I guess that **BeautifulSoup uses recursion to find
child elements**. When the recursion goes deeper than 1000 levels,
`RuntimeError: maximum recursion depth exceeded` will appear.
**First method:** use `sys.setrecursionlimit()` to raise the recursion limit.
You can obviously set 1000000, but that may cause a `segmentation fault`.
**Second method:** use `try-except`. If `maximum recursion depth exceeded`
appears, our algorithm might have problems. Generally speaking, we can use
loops instead of recursion. In your question, we could preprocess the HTML with
`replace()` or a regular expression in advance.
Finally, I give an example.
from bs4 import BeautifulSoup
import sys
#sys.setrecursionlimit(10000)
try:
doc = ''.join(['<br>' for x in range(1000)])
soup = BeautifulSoup(doc, 'html.parser')
a = soup.find('br')
for i in a:
print i
except:
print 'failed'
If the `#` is removed, raising the limit lets it parse and print `doc`.
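As a concrete example of the preprocessing idea for the nested `<code>` tags in your HTML, a sketch that collapses the runs with a regular expression before parsing (assuming the nesting itself carries no information):

import re

html = '<div><code><code><code><code>Code in here</code></code></code></code></div>'
html = re.sub(r'(?:<code>)+', '<code>', html)    # collapse runs of opening tags
html = re.sub(r'(?:</code>)+', '</code>', html)  # collapse runs of closing tags
# BeautifulSoup now only sees a single <code> level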
Hope this helps.
|
How to implement ZCA Whitening? Python
Question: I'm trying to implement **ZCA whitening** and found some articles on it, but
they are a bit confusing. Can someone shed some light on this for me?
Any tip or help is appreciated!
Here is the articles i read :
<http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf>
<http://bbabenko.tumblr.com/post/86756017649/learning-low-level-vision-
feautres-in-10-lines-of>
I tried several things but most of them I didn't understand, and I got stuck at
some step. Right now I have this as a base to start from again:
dtype = np.float32
data = np.loadtxt("../inputData/train.csv", dtype=dtype, delimiter=',', skiprows=1)
img = ((data[1,1:]).reshape((28,28)).astype('uint8')*255)
Answer: Is your data stored in an m x n matrix, where m is the dimension of the data and
n is the total number of cases? If not, you should resize your data. For
instance, if your images are of size 28x28 and you have only one image, you
should have a 1x784 vector. You could use this function:
import numpy as np
def flatten_matrix(matrix):
vector = matrix.flatten('F')  # column-major (MATLAB-style) flatten
vector = vector.reshape(1, len(vector))
return vector
Then you apply ZCA Whitening to your training set using:
def zca_whitening(inputs):
sigma = np.dot(inputs, inputs.T)/inputs.shape[1] #Correlation matrix
U,S,V = np.linalg.svd(sigma) #Singular Value Decomposition
epsilon = 0.1 #Whitening constant, it prevents division by zero
# S from np.linalg.svd is already the 1-D vector of singular values
ZCAMatrix = np.dot(np.dot(U, np.diag(1.0/np.sqrt(S + epsilon))), U.T) #ZCA Whitening matrix
return np.dot(ZCAMatrix, inputs) #Data whitening
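For example (a sketch, assuming `data` is the array loaded above, with the label in column 0 and one flattened image per row — both assumptions about your CSV layout):

inputs = data[:, 1:].T           # dimensions x cases, dropping the label column
whitened = zca_whitening(inputs)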
It is important to save the `ZCAMatrix`; you should multiply your test cases
by it if you want to predict after training the neural net.
Finally, I invite you to take the Stanford UFLDL Tutorials at
<http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial> or
<http://ufldl.stanford.edu/tutorial/> . They have pretty good explanations and
also some programming exercises on MATLAB, however, almost all the functions
found on MATLAB are on Numpy by the same name. I hope this may give an
insight.
|
Display a grid of images in wxPython
Question: I am trying to display a grid of images in wxPython. I am using a GridSizer to
create a grid to which I add my staticbitmaps. But for some reason I only see
one image at the first position in the grid. I am not sure where I am going
wrong. Here is my code and the corresponding output.
import wx
import sys
import glob
MAIN_PANEL = wx.NewId()
class CommunicationApp(wx.App):
"""This is the class for the communication tool application.
"""
def __init__(self, config=None, redirect=False):
"""Instantiates an application.
"""
wx.App.__init__(self, redirect=redirect)
self.cfg = config
self.mainframe = CommunicationFrame(config=config,
redirect=redirect)
self.mainframe.Show()
def OnInit(self):
# self.SetTopWindow(self.mainframe)
return True
class CommunicationFrame(wx.Frame):
"""Frame of the Communication Application.
"""
def __init__(self, config, redirect=False):
"""Initialize the frame.
"""
wx.Frame.__init__(self, parent=None,
title="CMC Communication Tool",
style=wx.DEFAULT_FRAME_STYLE)
self.imgs = glob.glob('../img/img*.png')
self.panel = CommuniationPanel(parent=self,
pid=MAIN_PANEL,
config=config)
# # Gridbagsizer.
nrows, ncols = 3, 4
self.grid = wx.GridSizer(rows=nrows, cols=ncols)
# Add images to the grid.
for r in xrange(nrows):
for c in xrange(ncols):
_n = ncols * r + c
_tmp = wx.Image(self.imgs[_n],
wx.BITMAP_TYPE_ANY)
_temp = wx.StaticBitmap(self.panel, wx.ID_ANY,
wx.BitmapFromImage(_tmp))
self.grid.Add(_temp, 0, wx.EXPAND)
self.grid.Fit(self)
# set to full screen.
# self.ShowFullScreen(not self.IsFullScreen(), 0)
class CommuniationPanel(wx.Panel):
"""Panel of the Communication application frame.
"""
def __init__(self, parent, pid, config):
"""Initialize the panel.
"""
wx.Panel.__init__(self, parent=parent, id=pid)
# CALLBACK BINDINGS
# Bind keyboard events.
self.Bind(wx.EVT_KEY_UP, self.on_key_up)
def on_key_up(self, evt):
"""Handles Key UP events.
"""
code = evt.GetKeyCode()
print code, wx.WXK_ESCAPE
if code == wx.WXK_ESCAPE:
sys.exit(0)
def main():
app = CommunicationApp()
app.MainLoop()
if __name__ == '__main__':
main()
Here is the output (screenshot omitted): only the first image is shown, in the top-left grid cell.
What I would like to see (or expected to see) is a 3x4 grid of the different
images.
What is the problem with my code, and how do I fix it?
Answer: You appear to have forgotten to set a sizer for your panel.
Try putting this in the frame's `__init__`:
self.panel.SetSizer(self.grid)
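i.e., at the end of `CommunicationFrame.__init__`, after the loop that fills the grid (a sketch):

# after adding all the StaticBitmaps to self.grid:
self.panel.SetSizer(self.grid)
self.grid.Fit(self)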
|
Openstack Neutron: How to update port vif-model using update_port() api
Question: I wanted to update vif_model of already created port. I use following command
in CLI
neutron port-update --binding:vif_model=avp <port_id>
How to achieve the same using python apis of neutron. I'm trying to use
update_port() api. But not sure about parameters.
>>> from neutronclient.neutron import client
>>> neutron = client.Client('2.0', endpoint_url=neutron_endpoint, token=tok)
>>> help(neutron.update_port)
gives below hint :
> Help on function with_params in module neutronclient.v2_0.client:
>
> with_params(*args, **kwargs)
Answer: I looked at the
[Nova](https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L360)
source for an example of how the `update_port` call is usually used; it looks
like you would call it something like this (using the same `binding:vif_model`
key as your CLI command):
client.update_port('1fe691a6-f3a0-4586-b126-9fabb11e962a',
{'port':
{'binding:vif_model': 'avp'}})
|
Django migration fails with "__fake__.DoesNotExist: Permission matching query does not exist."
Question: In a Django 1.8 project, I have a migration that worked fine, [when it had the
following
code](https://github.com/geometalab/osmaxx/blob/378ddc5043f1fd80727067de19316f30d1f725b5/osmaxx-
py/osmaxx/contrib/auth/migrations/0002_add_default_usergroup_osmaxx.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
from django.conf import settings
def update_site_forward(apps, schema_editor):
"""Add group osmaxx."""
Group = apps.get_model("auth", "Group")
Group.objects.create(name=settings.OSMAXX_FRONTEND_USER_GROUP)
def update_site_backward(apps, schema_editor):
"""Revert add group osmaxx."""
Group = apps.get_model("auth", "Group")
Group.objects.get(name=settings.OSMAXX_FRONTEND_USER_GROUP).delete()
class Migration(migrations.Migration):
dependencies = [
('auth', '0001_initial'),
]
operations = [
migrations.RunPython(update_site_forward, update_site_backward),
]
This group is created in a migration, because it shall be available in all
installations of the web app. To make it more useful, I wanted to also give it
a default permission, so I changed `update_site_forward` to:
def update_site_forward(apps, schema_editor):
"""Add group osmaxx."""
Group = apps.get_model("auth", "Group")
Permission = apps.get_model("auth", "Permission")
ContentType = apps.get_model("contenttypes", "ContentType")
ExtractionOrder = apps.get_model("excerptexport", "ExtractionOrder")
group = Group.objects.create(name=settings.OSMAXX_FRONTEND_USER_GROUP)
content_type = ContentType.objects.get_for_model(ExtractionOrder)
permission = Permission.objects.get(codename='add_extractionorder',
content_type=content_type) # line 16
group.permissions.add(permission)
and `Migration.dependencies` to:
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('excerptexport', '0001_initial'),
('auth', '0001_initial'),
]
While applying the migration (after first reverting it) (`python3 manage.py
migrate auth 0001 && python3 manage.py migrate`) worked, migrating a newly
created PostgreSQL database with this and all other migrations (`python3
manage.py migrate`) fails:
Operations to perform:
Synchronize unmigrated apps: debug_toolbar, django_extensions, messages, humanize, social_auth, kombu_transport_django, staticfiles
Apply all migrations: excerptexport, admin, sites, contenttypes, sessions, default, stored_messages, auth
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying auth.0002_add_default_usergroup_osmaxx...Traceback (most recent call last):
File "manage.py", line 17, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/base.py", line 393, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/base.py", line 444, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/commands/migrate.py", line 221, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.4/dist-packages/django/db/migrations/executor.py", line 110, in migrate
self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.4/dist-packages/django/db/migrations/executor.py", line 148, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/local/lib/python3.4/dist-packages/django/db/migrations/migration.py", line 115, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/local/lib/python3.4/dist-packages/django/db/migrations/operations/special.py", line 183, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/osmaxx/source/osmaxx/contrib/auth/migrations/0002_add_default_usergroup_osmaxx.py", line 16, in update_site_forward
permission = Permission.objects.get(codename='add_extractionorder', content_type=content_type)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/manager.py", line 127, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/query.py", line 334, in get
self.model._meta.object_name
__fake__.DoesNotExist: Permission matching query does not exist.
What am I doing wrong?
Answer: The default permissions are created in a `post_migrate` signal handler,
_after_ the migrations have run. This won't be a problem if your updated code
runs as part of the second `manage.py migrate` run, but it is a problem in the
test suite and any new deployment.
The easy fix is to change this line:
permission = Permission.objects.get(codename='add_extractionorder',
content_type=content_type) # line 16
to this:
permission, created = Permission.objects.get_or_create(codename='add_extractionorder',
content_type=content_type)
The signal handler that creates the default permissions will never create a
duplicate permission, so it is safe to create it if it doesn't exist already.
|
More Efficient Way to Create array
Question: I am a bit new to Python, so I am wondering if there is a more efficient way of
accomplishing something. Basically, I need to create an array of values which
come from one of two other arrays depending on a random number (0 or 1).
Currently it's easy to implement using a for loop; however, I am curious
whether there is a more elegant/Pythonic way to accomplish this, as the version
below seems too clunky for how Python is designed:
import random
xySet = ['x', 'y']
xP = [10.1, 11.2, 12.3]
yP = [12.5, 13.2, 14.1]
nObser = 10
x = []
p = []
randVals = [random.randint(0,1) for i in range(nObser)]
print randVals
for i in range(nObser):
x.append(xySet[randVals[i]])
if randVals[i]:
p.append(xP[random.randint(0,2)])
else:
p.append(yP[random.randint(0,2)])
print x
print p
This gives me the correct output I would expect:
[1, 1, 1, 0, 1, 1, 1, 0, 1, 0]
['y', 'y', 'y', 'x', 'y', 'y', 'y', 'x', 'y', 'x']
[12.3, 11.2, 10.1, 13.2, 10.1, 11.2, 12.3, 14.1, 10.1, 13.2]
Thanks!
Answer: You could use a pair of list comprehensions
>>> from random import choice
>>> x = [choice(xySet) for _ in range(nObser)]
>>> p = [choice(xP) if v == 'x' else choice(yP) for v in x]
>>> x
['y', 'y', 'y', 'x', 'x', 'x', 'y', 'x', 'x', 'x']
>>> p
[14.1, 13.2, 14.1, 10.1, 10.1, 12.3, 13.2, 12.3, 11.2, 10.1]
In the first list comprehension, you use
[`random.choice`](https://docs.python.org/3/library/random.html#random.choice)
to randomly pick `'x'` or `'y'`.
Then in the second list comprehension, you iterate over your `x` list and
sample (again using `random.choice`) from the appropriate list.
|
Python - Find all intersection points of 2 graphs
Question: I'm trying to find all the intersection points of two graphs and display them
on the final plot. I've looked around and tried multiple things, but I haven't
been able to obtain what I'm looking for.
Currently, I'm attempting to generate a list wherein the intersection points
would be listed, though I keep getting the following error:
> The truth value of an array with more than one element is ambiguous. Use
> `a.any()` or `a.all()`.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
x = np.arange(-7.0, 7.0, 0.05)
def y(x):
return np.sin(x)*(0.003*x**4 - 0.1*x**3 + x**2 + 4*x + 3)
def g(x):
return -10 * np.arctan(x)
def intersection(x):
if (y(x) - g(x)) == 0:
print y.all(x)
plt.plot(x, y(x), '-')
plt.plot(x, g(x), '-')
plt.show()
Answer: It's similar to:
[Intersection of two graphs in Python, find the x
value:](http://stackoverflow.com/questions/28766692/intersection-of-two-
graphs-in-python-find-the-x-value)
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
x = np.arange(-7.0, 7.0, 0.05)
y = np.sin(x)*(0.003*x**4 - 0.1*x**3 + x**2 + 4*x + 3)
g = -10 * np.arctan(x)
def intersection():
idx = np.argwhere(np.isclose(y, g, atol=10)).reshape(-1)
print idx
plt.plot(x, y, '-')
plt.plot(x, g, '-')
plt.show()
intersection()
Edit: note that this version works with lists of values rather than functions.
|
No migrations to apply but Django is still trying to create a new content type
Question: I pushed a new release to a server last week which included a database
migration for a new table. This completed as expected, and works, but now on
every deployment, when the server runs its migrations, I'm seeing no migrations
to apply, but also a unique key error on content types:
Running migrations:
No migrations to apply.
...
File "/var/www/django/myproj/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.IntegrityError
(1062, "Duplicate entry 'djangocms_newsletter-signup' for key 'django_content_type_app_label_45f3b1d93ec8c61c_uniq'")
The table is in the database, the content type for the `Signup` model is in
the content types table, the migration is in the migrations table... so why
does Django try to create a new content type still?
Migrations are ran as part of a straightforward post deployment script that I
use for all projects;
#!/bin/bash
set -e
PROJ_PATH=/var/www/django/myproj
cd $PROJ_PATH
echo 'clear-out...'
find $PROJ_PATH -name "*.pyc" -exec rm -rf '{}' ';'
find $PROJ_PATH -name "*.pyo" -exec rm -rf '{}' ';'
echo 'set DJANGO_SETTINGS_MODULE'
export DJANGO_SETTINGS_MODULE="project.settings.local_override"
echo 'activating...'
source ../bin/activate
echo 'pip install...'
pip install -r requirements.txt --no-deps
echo 'migrate...'
python manage.py migrate --noinput
echo 'collectstatic...'
python manage.py collectstatic --noinput
echo 'restart apache...'
sudo service apache2 restart
deactivate
At the moment, I've been clearing the app name from the existing content type
in order to allow the deployment to succeed, but now I just want to understand
what the issue is so that I can resolve it in future.
Full Traceback;
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 165, in handle
emit_post_migrate_signal(created_models, self.verbosity, self.interactive, connection.alias)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/core/management/sql.py", line 268, in emit_post_migrate_signal
using=db)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 198, in send
response = receiver(signal=self, sender=sender, **named)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/contrib/contenttypes/management.py", line 56, in update_contenttypes
ContentType.objects.using(using).bulk_create(cts)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/models/query.py", line 409, in bulk_create
self._batched_insert(objs_without_pk, fields, batch_size)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/models/query.py", line 938, in _batched_insert
using=self.db)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/models/manager.py", line 92, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/models/query.py", line 921, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 921, in execute_sql
cursor.execute(sql, params)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 129, in execute
return self.cursor.execute(query, args)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/var/www/django/myproj/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.
IntegrityError: (1062, "Duplicate entry 'djangocms_newsletter-signup' for key 'django_content_type_app_label_45f3b1d93ec8c61c_uniq'")
execute bash /var/www/django/myproj/myproj/shell_scripts/production_post_deployment.sh
Answer: After coming across this issue again, I debugged it using the shell on the
server, as it hasn't been possible to reproduce elsewhere, and found that the
issue is caused by cached ContentType objects.
Let me illustrate:
>>> content_types = dict((ct.model, ct) for ct in ContentType.objects.filter(app_label='consoles'))
>>> content_types
{}
>>> content_types = dict((ct.model, ct) for ct in ContentType.objects.filter(app_label='news'))
>>> content_types
{u'latestnews': <ContentType: latest news>}
>>> from django.core.cache import cache
>>> cache.clear()
>>> content_types = dict((ct.model, ct) for ct in ContentType.objects.filter(app_label='consoles'))
>>> content_types
{u'runner': <ContentType: runner>, u'importedclient': <ContentType: imported client>, u'placetype': <ContentType: place type>, u'invoice': <ContentType: invoice>, u'placetypetotal': <ContentType: place type total>, u'clientconsoles': <ContentType: client consoles>, u'emailtemplate': <ContentType: email template>, u'event': <ContentType: event>, u'occupation': <ContentType: occupation>, u'console': <ContentType: console>, u'meettheexperts': <ContentType: meet the experts>, u'auditlog': <ContentType: audit log entry>, u'proxyuser': <ContentType: user>, u'consoleuser': <ContentType: Console User>, u'paidby': <ContentType: paid by>, u'meettheexpertstransaction': <ContentType: Meet the experts Transaction>, u'clientuserconsole': <ContentType: client user console>, u'paymentmethod': <ContentType: payment method>, u'transaction': <ContentType: transaction>, u'country': <ContentType: Country>, u'client': <ContentType: client>, u'place': <ContentType: place>, u'consoletotal': <ContentType: console total>}
So a fix to the custom cached manager, which was caching all content types,
resolved the issue.
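If you run into the same symptom, Django's default manager also exposes a way to flush its in-memory cache (a sketch):

from django.contrib.contenttypes.models import ContentType

ContentType.objects.clear_cache()  # drop any cached ContentType lookups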
|
Inspect a large dataframe for errors arising during merge/combine in python
Question: I hope this is an appropriate question for here. If not, let me know, and I
will remove it immediately.
Question:
How can I use python to inspect (visually?) a large dataset for errors that
arise during combination?
Background:
I am working with several large (but not, you know "Big") datasets that I
combine to form one larger dataset. This new set is ~2.5G in size, so it does
not fit in most spreadsheet programs, or at least not in the ones I've tried
(MS Excel, OpenOffice).
The process to create the final dataset uses fuzzy matching (via
`fuzzywuzzy`), and I want to inspect the results of the matching to see if
there are any errors introduced.
As of now, I have tried importing the entire set into a `pandas` dataframe.
This DF has 64 columns, so when I simply do something like `df.head()` the
resulting displayed info obviously does not show all the columns; I thus ruled
out just iterating through multiple `.head()` calls.
There is a similar question about visualizing specific aspects of a dataframe
[here](http://stackoverflow.com/questions/28813057/inspecting-and-visualizing-
gaps-blanks-and-structure-in-large-dataframes). My question is different, I
think, because I don't need to visualize anything about the underlying
structure or types. I just want to visually inspect areas I suspect might have
errors.
Answer: How about slicing out 10-12 rows and then transposing, so that you have a 64-row
x 12-column dataframe? This should be readable, provided you don't have very
long index names.
import pandas as pd
import numpy as np
# Set max number of rows, 64 would be enough here but I'm trying to be safe
pd.set_option('display.max_rows', 500)
df = pd.DataFrame(np.random.randn(1000,64))
nstart = 100
# Slice 13 rows starting at nstart, and transpose that...
df.iloc[nstart:(nstart+13)].T
I'm sparing you the output here, but try running the above code.
|
Python rounding of random floats to nearest points on a 2D uniform mesh grid
Question: Despite numpy & scipy's many rounding functions, I cannot find one that allows
me to discretize randomized floats with respect to nodes in a 2D uniform mesh
grid. For example,
# create mesh grid
n = 11
l = 16.
x = np.linspace(-l/2, l/2, n)
y = np.linspace(-l/2, l/2, n)
xx, yy = np.meshgrid(x, y, sparse=True)
>>> xx
array([-8. , -6.4, -4.8, -3.2, -1.6, 0. , 1.6, 3.2, 4.8, 6.4, 8. ])
>>> yy
array([[-8. ],
[-6.4],
[-4.8],
[-3.2],
[-1.6],
[ 0. ],
[ 1.6],
[ 3.2],
[ 4.8],
[ 6.4],
[ 8. ]])
If I have `m` normally distributed random floats
`a=np.random.multivariate_normal([0,0], [[l/2,0],[0,l/2]], m)`, how can I
round them to the nearest mesh nodes (assuming periodic boundaries)?
Answer: In another SO question I helped the questioner use a scipy nearest-neighbor
interpolator.
[Repeating Scipy's
griddata](http://stackoverflow.com/questions/29987168/repeating-scipys-
griddata/29991385#29991385)
Working from that I figured out a solution to your problem. In
`scipy.interpolate` I find a `NearestNDInterpolator`. From that I find that
`scipy.spatial` has a method for finding nearest neighbors:
In [936]: from scipy import interpolate
In [937]: interpolate.NearestNDInterpolator?
...
Docstring:
NearestNDInterpolator(points, values)
Nearest-neighbour interpolation in N dimensions.
...
Uses ``scipy.spatial.cKDTree``
In [938]: from scipy import spatial
In [939]: spatial.cKDTree?
`cKDTree` is used in 2 steps; create a tree from the grid, and query the
nearest neighbors.
From your meshgrid I can create a `(n,2)` array of points
In [940]: xx, yy = np.meshgrid(x, y)
In [941]: xygrid=np.array([xx,yy])
In [942]: xygrid=xygrid.reshape(2,-1).T # clumsy
create a search tree:
In [943]: tree=spatial.cKDTree(xygrid)
test set of points, (10,2):
In [944]: a=np.random.multivariate_normal([0,0], [[l/2,0],[0,l/2]],10)
Query of the search tree gives 2 arrays, one of distances from nearest
neighbor, and one with the indexes:
In [945]: I=tree.query(a)
In [946]: I
Out[946]:
(array([ 0.70739099, 0.9894934 , 0.44489157, 0.3930144 , 0.273121 ,
0.3537348 , 0.32661876, 0.55540787, 0.58433421, 0.538722 ]),
array([61, 72, 85, 72, 82, 39, 38, 62, 25, 59]))
Comparing the `a` points with the `xygrid` nearest-neighbor grid points shows
that they indeed appear to be close. A scatter plot would show this better.
In [947]: a
Out[947]:
array([[ 1.44861113, -0.69100176],
[ 1.00827575, 0.80693026],
[ 4.37200745, 3.07854676],
[ 1.2193471 , 1.50220587],
[ 0.12668563, 2.95803754],
[ 1.4758331 , -3.53122635],
[ 0.28425494, -3.03913067],
[ 2.8203361 , 0.40538034],
[-3.67726571, -4.46285921],
[-1.07228578, -0.10834709]])
In [948]: xygrid[I[1],:]
Out[948]:
array([[ 1.6, 0. ],
[ 1.6, 1.6],
[ 4.8, 3.2],
[ 1.6, 1.6],
[ 0. , 3.2],
[ 1.6, -3.2],
[ 0. , -3.2],
[ 3.2, 0. ],
[-3.2, -4.8],
[-1.6, 0. ]])
A solution in `rth`'s link also uses `cKDTree`. I'm just filling in the
details on how to work from your griddata. [Finding index of nearest point in
numpy arrays of x and y
coordinates](https://stackoverflow.com/questions/10818546/finding-index-of-
nearest-point-in-numpy-arrays-of-x-and-y-coordinates)
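The question also asks for periodic boundaries; recent SciPy versions let `cKDTree` handle those directly via its `boxsize` argument (a sketch — `boxsize` requires coordinates in `[0, boxsize)`, so shift and wrap first):

grid_p = (xygrid + l/2) % l                  # map [-l/2, l/2] onto [0, l)
tree_p = spatial.cKDTree(grid_p, boxsize=l)
dist, idx = tree_p.query((a + l/2) % l)      # distances now wrap around the torus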
|
Python curses handling stdout from another thread
Question: I'm running two threads in my python program, one thread which uses python
curses to run a menu system and waits for input, and one thread which does
analysis based on menu choices and outputs it's status via the built in
`print()` function. My problem here is that print doesn't play well with
curses, as, if `curses.echo()` is on, then it prints to the line where I am
waiting for input, and if `curses.noecho()` is used, then the output is not
displayed at all.
Since I want control over where and when the output is displayed, my solution
to this initially was to set `window.timeout(1000)` and then have the input
loop like this:
try:
c = window.getkey()
except:
c = -1 #timeout or error in input
if c == -1:
check_for_input()
elif c == 'KEY_RESIZE':
...
This works quite well to allow me to check for output from stdout every
second, and then if need be update the menu, while still allowing user input.
The problem that I'm having is that I have no idea how to capture stdout and
choose to display it when I need to. Is this at all possible?
Answer: So I figured this one out, but as a disclaimer, I have no idea if this is
thread safe (no problems thus far though).
It's possible to capture the output of print using the python library
[io](https://docs.python.org/3/library/io.html), and more specifically
[`StringIO`](https://docs.python.org/3/library/io.html#StringIO) from that
library.
N.B. This is for Python3
Essentially, the solution was to set `sys.stdout` to an instance of
`io.StringIO` and read from that.
external_output = None
stdout_buff = io.StringIO()
sys.stdout = stdout_buff
stream_pos = 0 # last read position of the stdout stream.
while True: #input loop
...
if stdout_buff.tell() > stream_pos:
stdout_buff.seek(stream_pos)
external_output = stdout_buff.read()
stream_pos = stdout_buff.tell()
...
Below I've included a short example of the menu system I was using in case the
above isn't clear to anyone having this issue, in the hopes that this will
clear it up.
Cheers!
* * *
# Unmodified Version
So the menu's display and event loop used to look a lot like this (note that
this is a simplified version, so a lot of the code for displaying the menu and
echoing what a user types has been left out). This basic example displays a
menu and allows the user to exit the program, type digits into their selection,
or submit their selection, which is then printed out.
import sys
import curses
def menu(stdscr):
# initial startup settings
curses.start_color()
curses.use_default_colors()
stdscr.timeout(1000) #timeout the input loop every 1000 milliseconds
user_selection = ''
# other unrelated initial variables
while True: #display loop
stdscr.clear()
# the following is actually in a function to handle automatically
# taking care of fitting output to the screen and keeping
# track of line numbers, etc. but for demonstration purposes
# I'm using the this
start_y = 0
stdscr.addstr(start_y, 0, 'Menu Options:')
stdscr.addstr(start_y+1, 0, '1) option 1')
stdscr.addstr(start_y+2, 0, '2) option 2')
stdscr.addstr(start_y+3, 0, '3) option 3')
stdscr.addstr(start_y+4, 0, '4) option 4')
while True: #input loop
c = stdscr.getkey()
if c == 'KEY_RESIZE':
handle_window_resize() # handle changing stored widths and height of window
break #break to redraw screen
elif c.isdigit():
# if user typed a digit, add that to the selection string
# users may only select digits as their options
user_selection += c
elif c == '\n':
# user hit enter to submit their selection
if len(user_selection) > 0:
return user_selection
elif c == 'q':
sys.exit()
result = curses.wrapper(menu)
print(result)
In this example the problem still occurs that any output from a thread running
simultaneously to this one will be printed at the cursor of `stdscr` where the
program is currently waiting for input from the user.
* * *
# Modified Version
import sys
import curses
from io import StringIO
def menu(stdscr):
# initial startup settings
curses.start_color()
curses.use_default_colors()
stdscr.timeout(1000) #timeout the input loop every 1000 milliseconds
user_selection = ''
# other unrelated initial variables
# output handling variables
external_output = None # latest output from stdout
external_nlines = 2 # number of lines at top to leave for external output
stdout_buff = StringIO()
sys.stdout = stdout_buff
stream_pos = 0 # last read position of the stdout stream.
while True: #display loop
stdscr.clear()
# the following is actually in a function to handle automatically
# taking care of fitting output to the screen and keeping
# track of line numbers, etc. but for demonstration purposes
# I'm using the this
if external_output is not None:
stdscr.addstr(0, 0, "stdout: " + external_output)
start_y = external_nlines
stdscr.addstr(start_y, 0, 'Menu Options:')
stdscr.addstr(start_y+1, 0, '1) option 1')
stdscr.addstr(start_y+2, 0, '2) option 2')
stdscr.addstr(start_y+3, 0, '3) option 3')
stdscr.addstr(start_y+4, 0, '4) option 4')
while True: #input loop
try:
c = stdscr.getkey()
except:
c = -1 # 1000ms timeout or error
if c == -1:
if stdout_buff.tell() > stream_pos:
# current stdout_buff pos is greater than last read
# stream position, so there is unread output
stdout_buff.seek(stream_pos)
external_output = stdout_buff.read().strip() #strip whitespace
stream_pos = stdout_buff.tell() #set stream_pos to end of stdout_buff
break #redraw screen with new output
elif c == 'KEY_RESIZE':
handle_window_resize() # handle changing stored widths and height of window
break #break to redraw screen
elif c.isdigit():
# if user typed a digit, add that to the selection string
# users may only select digits as their options
user_selection += c
elif c == '\n':
# user hit enter to submit their selection
if len(user_selection) > 0:
sys.stdout = sys.__stdout__ # reset stdout to normal
return user_selection
elif c == 'q':
sys.stdout = sys.__stdout__ # reset stdout to normal
sys.exit()
result = curses.wrapper(menu)
print(result)
|
Why does my list return characters instead of items?
Question: This is a very basic question but I'm having trouble understanding why this
is.
Taking a file with lists separated by new lines, if I use this script...
#!/usr/bin/python
import sys
fil = open(sys.argv[1])
for line in fil:
linearr = line.strip('\n')
print linearr[0]
I get back a "[" (the first character of the list, the beginning bracket).
Why does this print this character instead of the first item in the list? Is
it because the lists already exist in the file and you're not making them
within the script?
The lists look like this...
['fig|357276.26.peg.4402', 'COG0776']
and it should print the first item but doesn't.
Answer: `linearr` is a string, so you are printing the first character of that string.
Did you mean to call `split()`? `split()` is what you need to turn a string
into a list.
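Since the lines in your file look like Python list literals, another option is `ast.literal_eval`, which parses each line back into a real list (a sketch):

import ast
import sys

fil = open(sys.argv[1])
for line in fil:
    linearr = ast.literal_eval(line.strip())  # "['a', 'b']" -> ['a', 'b']
    print linearr[0]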
|
How do I apply my python code to all of the files in a folder at once, and how do I create a new name for each subsequent output file?
Question: The code I am working with takes in a .pdf file, and outputs a .txt file. My
question is, how do I create a loop (probably a for loop) which runs the code
over and over again on all files in a folder which end in ".pdf"? Furthermore,
how do I change the output each time the loop runs so that I can write a new
file each time, that has the same name as the input file (ie. 1_pet.pdf >
1_pet.txt, 2_pet.pdf > 2_pet.txt, etc.)
Here is the code so far:
path="2_pet.pdf"
content = getPDFContent(path)
encoded = content.encode("utf-8")
text_file = open("Output.txt", "w")
text_file.write(encoded)
text_file.close()
Answer: One way to operate on all PDF files in a directory is to invoke `glob.glob()`
and iterate over the results, deriving each output name from the input name:
import glob
import os

for path in glob.glob('*.pdf'):
    content = getPDFContent(path)
    encoded = content.encode("utf-8")
    out_name = os.path.splitext(path)[0] + ".txt"  # 1_pet.pdf -> 1_pet.txt
    text_file = open(out_name, "w")
    text_file.write(encoded)
    text_file.close()
Another way is to allow the user to specify the files:
import sys
for path in sys.argv[1:]:
...
Then the user runs your script like `python foo.py *.pdf`.
|
Why is Python 3 is considerably slower than Python 2?
Question: I've been trying to understand why Python 3 actually takes much more time than
Python 2 in certain situations. Below are a few cases I've verified, comparing
Python 3.4 with Python 2.7.
Note: I've gone through some of the questions like [Why is there no xrange
function in Python3?](http://stackoverflow.com/questions/15014310) and [loop
in python3 much slower than
python2](http://stackoverflow.com/questions/25678127) and [Same code slower in
Python3 as compared to Python2](http://stackoverflow.com/questions/14911122),
but I feel that I didn't get the actual reason behind this issue.
I've tried this piece of code to show how it is making difference:
MAX_NUM = 3*10**7
# This is to make compatible with py3.4.
try:
xrange
except:
xrange = range
def foo():
i = MAX_NUM
while i> 0:
i -= 1
def foo_for():
for i in xrange(MAX_NUM):
pass
When I ran this program with py3.4 and py2.7, I got the results below.
Note: These stats came from a `64 bit` machine with a `2.6GHz` processor; the
time was measured with `time.time()` around a single loop.
Output : Python 3.4
-----------------
2.6392083168029785
0.9724123477935791
Output: Python 2.7
------------------
1.5131521225
0.475143909454
I really don't think that any changes were applied to `while` or `xrange` from
2.7 to 3.4. I know `range` started acting like `xrange` in py3.4, but as the
documentation says
> `range()` now behaves like `xrange()` used to behave, except it works with
> values of arbitrary size. The latter no longer exists.
this means the change from `xrange` to `range` is essentially a name change,
plus support for values of arbitrary size.
I've verified the disassembled byte code as well.
Below is the disassembled byte code for function `foo()`:
Python 3.4:
---------------
13 0 LOAD_GLOBAL 0 (MAX_NUM)
3 STORE_FAST 0 (i)
14 6 SETUP_LOOP 26 (to 35)
>> 9 LOAD_FAST 0 (i)
12 LOAD_CONST 1 (0)
15 COMPARE_OP 4 (>)
18 POP_JUMP_IF_FALSE 34
15 21 LOAD_FAST 0 (i)
24 LOAD_CONST 2 (1)
27 INPLACE_SUBTRACT
28 STORE_FAST 0 (i)
31 JUMP_ABSOLUTE 9
>> 34 POP_BLOCK
>> 35 LOAD_CONST 0 (None)
38 RETURN_VALUE
python 2.7
-------------
13 0 LOAD_GLOBAL 0 (MAX_NUM)
3 STORE_FAST 0 (i)
14 6 SETUP_LOOP 26 (to 35)
>> 9 LOAD_FAST 0 (i)
12 LOAD_CONST 1 (0)
15 COMPARE_OP 4 (>)
18 POP_JUMP_IF_FALSE 34
15 21 LOAD_FAST 0 (i)
24 LOAD_CONST 2 (1)
27 INPLACE_SUBTRACT
28 STORE_FAST 0 (i)
31 JUMP_ABSOLUTE 9
>> 34 POP_BLOCK
>> 35 LOAD_CONST 0 (None)
38 RETURN_VALUE
And below is the disassembled byte code for function `foo_for()`:
Python: 3.4
19 0 SETUP_LOOP 20 (to 23)
3 LOAD_GLOBAL 0 (xrange)
6 LOAD_GLOBAL 1 (MAX_NUM)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 6 (to 22)
16 STORE_FAST 0 (i)
20 19 JUMP_ABSOLUTE 13
>> 22 POP_BLOCK
>> 23 LOAD_CONST 0 (None)
26 RETURN_VALUE
Python: 2.7
-------------
19 0 SETUP_LOOP 20 (to 23)
3 LOAD_GLOBAL 0 (xrange)
6 LOAD_GLOBAL 1 (MAX_NUM)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 6 (to 22)
16 STORE_FAST 0 (i)
20 19 JUMP_ABSOLUTE 13
>> 22 POP_BLOCK
>> 23 LOAD_CONST 0 (None)
26 RETURN_VALUE
If we compare the byte codes, both versions produce the same disassembly.
Now I'm wondering what change from 2.7 to 3.4 is really causing this huge
change in execution time in the given piece of code.
Answer: The difference is in the implementation of the `int` type. Python 3.x uses the
arbitrary-sized integer type (`long` in 2.x) exclusively, while in Python 2.x
for values up to `sys.maxint` a simpler `int` type is used that uses a simple
C `long` under the hood.
Once you limit your loops to _`long`_ integers, Python 3.x is faster:
>>> from timeit import timeit
>>> import sys
>>> MAX_NUM = 3*10**3
>>> def bar():
... i = MAX_NUM + sys.maxsize
... while i > sys.maxsize:
... i -= 1
...
Python 2:
>>> timeit(bar, number=10000)
5.704327821731567
Python 3:
>>> timeit(bar, number=10000)
3.7299320790334605
I used `sys.maxsize` as `sys.maxint` was dropped from Python 3, but the
integer value is basically the same.
The speed difference in Python 2 is thus limited to the first (2 ** 63) - 1
integers on 64-bit, (2 ** 31) - 1 integers on 32 bit systems.
Since you cannot use the `long` type with `xrange()` on Python 2, I did not
include a comparison for that function.
|
python: how to delete row and modify specific list string from CSV
Question: This is my first time posting a question so I apologise in advance if I make
any mistakes.
I am currently attempting to create a custom python program (pretty much a
parser) that takes in data as such:
junk
junk
junk
junk
junk
junk
fields title title title
data_type d_type d_type d_type
data1 data2 data 3
data4 data 5 data6
data7 data8 data9
junk
Where my desired output is:
title title title
data1 data2 data3
data4 data5 data6
data7 data8 data9
Here is the working portion of my code that I have thus far:
import csv
import itertools
with open('file.log','rb') as csvfile:
rowlist = csv.reader(csvfile, delimiter = '\t')
for row in itertools.islice(rowlist,6,12):
print row
Whenever the above code is run it produces a series of lists as seen here
['fields','title1', 'title2', 'title3']
['data_type','d_type','d_type', 'd_type']
['data1', 'data2', 'data3']
['data4', 'data5', 'data6']
['data7', 'data8', 'data9']
The first data entry (data1, data4, data7) in each list is always a number,
whereas the other data entries may be any string/number/character.
`itertools` has solved cutting off the top and bottom of the file; however, I
am still struggling to
* delete the "data_type" line
* remove 'fields', that is: ['fields','title1', 'title2', 'title3']----->['title1', 'title2', 'title3']
I have seen some solutions that remove/overwrite lines; however, I do not have
a lot of memory to spare, so I must keep opening/closing/writing to a minimum.
Any and all help is extremely appreciated.
Answer: Just slice each row:
from itertools import islice

for row in islice(rowlist, 6, 12):
    if row[0] == "data_type":
        continue
    elif row[0] == "fields":
        print(row[1:])
    else:
        print(row)
If you are writing the rows to a file instead, slice the same way and use a
`csv.writer` (here `fileobj` is your open output file):
writer = csv.writer(fileobj, delimiter='\t')
for row in islice(rowlist, 6, 12):
    if row[0] == "data_type":
        continue
    elif row[0] == "fields":
        writer.writerow(row[1:])
    else:
        writer.writerow(row)
If you are actually trying to overwrite the original file, you can write the
lines to a
[tempfile](https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile)
and replace the original file with
[shutil.move](https://docs.python.org/2/library/shutil.html#shutil.move):
import csv
from itertools import islice
from shutil import move
from tempfile import NamedTemporaryFile

with open('file.csv', 'rb') as csvfile, NamedTemporaryFile(dir=".", delete=False) as temp:
    rowlist = csv.reader(csvfile, delimiter='\t')
    writer = csv.writer(temp, delimiter='\t')
    for row in islice(rowlist, 6, 12):
        if row[0] == "data_type":
            continue
        elif row[0] == "fields":
            writer.writerow(row[1:])
        else:
            writer.writerow(row)

move(temp.name, "file.csv")
|
Python error urllib.request error in pycharm
Question: I'm getting an error when running this code in PyCharm:
import random
import urllib.request
def download_web_image(url):
name = random.randrange(1, 1000)
full_name = str(name) + ".jpg"
urllib.request.urlretrive(url, full_name)
download_web_image("https://realpython.com/learn/python-first-steps/images/pythonlogo.jpg")
Answer: You misspelt `urlretrieve` (`urlretrive` -> `urlretrieve`):
urllib.request.urlretrieve(url, full_name)
|
numpy.random has no attribute 'choice'
Question: I am using python 2.7.2 |EPD 7.1-1 (64-bit) and for some reason
numpy.random.choice is not working:
from the terminal window:
d-108-179-168-72:~ home$ python
Enthought Python Distribution -- www.enthought.com
Version: 7.1-1 (64-bit)
Python 2.7.2 |EPD 7.1-1 (64-bit)| (default, Jul 3 2011, 15:56:02)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "packages", "demo" or "enthought" for more information.
>>> import numpy as np
>>> np.random.choice(5, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'choice'
Any ideas what the problem could be?
Thanks
Answer: I think it could be the version of numpy your distribution is using. From [the
documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html)
choice was only added in 1.7.0 and from [the enthought
package](https://www.enthought.com/products/epd/package-index/) I can see it
only has 1.6.1 in version 7.2, a later version than your own. You may wish to
upgrade your version of numpy.
|
Django UpdateView with ImageField attribute
Question: I am having trouble making an UpdateView class. I was able to make CreateView
and UpdateView work without any custom form before adding the ImageField, but
now the ImageField is causing problems. Fortunately, CreateView is still
working fine.
Following is my code for CreateView:
class CreatePostView(FormView):
form_class = PostForm
template_name = 'edit_post.html'
def get_success_url(self):
return reverse('post-list')
def form_valid(self, form):
form.save(commit=True)
# messages.success(self.request, 'File uploaded!')
return super(CreatePostView, self).form_valid(form)
def get_context_data(self, **kwargs):
context = super(CreatePostView, self).get_context_data(**kwargs)
context['action'] = reverse('post-new')
return context
However, I tried to do the UpdateView (via FormView). Following is my code:
class UpdatePostView(SingleObjectMixin,FormView):
model = Post
form_class = PostForm
tempate_name = 'edit_post.html'
# fields = ['title', 'description','content','published','upvote','downvote','image','thumbImage']
def get_success_url(self):
return reverse('post-list')
def form_valid(self, form):
form.save(commit=True)
# messages.success(self.request, 'File uploaded!')
return super(UpdatePostView, self).form_valid(form)
def get_context_data(self, **kwargs):
context = super(UpdatePostView, self).get_context_data(**kwargs)
context['action'] = reverse('post-edit',
kwargs={'pk': self.get_object().id})
return context
When I try to run the updateView, its giving me following error:
> AttributeError at /posts/edit/23/
>
> 'UpdatePostView' object has no attribute 'get_object'
>
> Request Method: GET Request URL: <http://localhost:8000/posts/edit/23/>
> Django Version: 1.8.2 Exception Type: AttributeError Exception Value:
>
> 'UpdatePostView' object has no attribute 'get_object'
>
> Exception Location: /home/PostFunctions/mysite/post/views.py in
> get_context_data, line 72 Python Executable: /usr/bin/python Python Version:
> 2.7.6
Following is my url.py :
#ex : /posts/edit/3/
url(r'^edit/(?P<pk>\d+)/$', post.views.UpdatePostView.as_view(),
name='post-edit',),
Answer: I have a form to update a model with an ImageField. I extend a ModelForm for my
model (which is PostForm for you, I guess).
But my CustomUpdateView extends UpdateView, from Django's generic views.
from django.views.generic.edit import UpdateView
from django.shortcuts import get_object_or_404
class CustomUpdateView(UpdateView):
template_name = 'some_template.html'
form_class = CustomModelForm
success_url = '/some/url'
    def get_object(self): # and you have to override a get_object method
        return get_object_or_404(YourModel, id=self.kwargs.get('pk'))  # with your URLconf, pk arrives in self.kwargs, not request.GET
You just have to define a get_object method; UpdateView will update the object
with the values from the form, but it needs get_object to find the object you
want to update. `get_object_or_404()` works like the `get()` function on a
model, so replace `id` with the name of your id field.
Hope it helps
|
How to do Parameterization/Data driven testing using selenium in Python
Question: I'm learning automation, and I have a set of login IDs and passwords in an
Excel file that I'm using to log in to and out of the amazon.com website.
The issue I'm facing is figuring out how to hover over the "hello" element on the
Amazon home page and click on login. I have tried mouse_hover() and clicking
via XPath. What I want is that after I get to the login page, I log in with one
set of credentials, log out, and do the same with the next login ID/password.
Here's the code I'm trying to run:
import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import Select
# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(30)
driver.maximize_window()
# navigate to the application home page
driver.get("http://www.amazon.com/")
if 'Sign Out' in driver.page_source:
pass
else:
mouse_over("//*[@id='nav-link-yourAccount]")
hover = driver.find_element_by_xpath("//*[@id='nav-link-yourAccount]")
hover.click()
logi = driver.find_element_by_xpath("//*[@id='nav-flyout-ya-signin']")
logi.click()
# username = driver.find_element_by_id("login_login_username")
# username.send_keys("student2")
# password= driver.find_element_by_id("login_login_password")
# password.send_keys("Testing1")
# loginbutton=driver.find_element_by_id("login_submit")
# loginbutton.click()
Answer: Remove this line from the code
mouse_over("//*[@id='nav-link-yourAccount]")
Correct XPath:
hover = driver.find_element_by_xpath("//*[@id='nav-link-yourAccount']")
The rest is OK.
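If the plain click on the hover target is still flaky, Selenium's ActionChains can
perform the hover explicitly; a minimal sketch, reusing the element IDs from the
question:

    from selenium.webdriver.common.action_chains import ActionChains

    account_menu = driver.find_element_by_xpath("//*[@id='nav-link-yourAccount']")
    ActionChains(driver).move_to_element(account_menu).perform()  # hover over the menu
    driver.find_element_by_xpath("//*[@id='nav-flyout-ya-signin']").click()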
|
How to remove '\n' from scrapy output in python
Question: I am trying to output to CSV, but I realized that when scraping TripAdvisor I
am getting many carriage returns, so the array grows past 30 entries while there
are only 10 reviews, and I end up with many missing fields. Is there a way to
remove the carriage returns?
The spider:
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from scrapingtest.items import ScrapingTestingItem
from collections import OrderedDict
import json
from scrapy.selector.lxmlsel import HtmlXPathSelector
import csv
import html2text
import unicodedata
class scrapingtestspider(Spider):
name = "scrapytesting"
allowed_domains = ["tripadvisor.in"]
base_uri = ["tripadvisor.in"]
start_urls = [
"http://www.tripadvisor.in/Hotel_Review-g297679-d736080-Reviews-Ooty_Elk_Hill_A_Sterling_Holidays_Resort-Ooty_Tamil_Nadu.html"]
def parse(self, response):
item = ScrapingTestingItem()
sel = HtmlXPathSelector(response)
converter = html2text.HTML2Text()
sites = sel.xpath('//a[contains(text(), "Next")]/@href').extract()
## dummy_test = [ "" for k in range(10)]
item['reviews'] = sel.xpath('//div[@class="col2of2"]//p[@class="partial_entry"]/text()').extract()
item['subjects'] = sel.xpath('//span[@class="noQuotes"]/text()').extract()
item['stars'] = sel.xpath('//*[@class="rating reviewItemInline"]//img/@alt').extract()
item['names'] = sel.xpath('//*[@class="username mo"]/span/text()').extract()
item['location'] = sel.xpath('//*[@class="location"]/text()').extract()
item['date'] = sel.xpath('//*[@class="ratingDate relativeDate"]/@title').extract()
item['date'] += sel.xpath('//div[@class="col2of2"]//span[@class="ratingDate"]/text()').extract()
startingrange = len(sel.xpath('//*[@class="ratingDate relativeDate"]/@title').extract())
for j in range(startingrange,len(item['date'])):
item['date'][j] = item['date'][j][9:].strip()
for i in range(len(item['stars'])):
item['stars'][i] = item['stars'][i][:1].strip()
for o in range(len(item['reviews'])):
print unicodedata.normalize('NFKD', unicode(item['reviews'][o])).encode('ascii', 'ignore')
for y in range(len(item['subjects'])):
item['subjects'][y] = unicodedata.normalize('NFKD', unicode(item['subjects'][y])).encode('ascii', 'ignore')
yield item
# print item['reviews']
if(sites and len(sites) > 0):
for site in sites:
yield Request(url="http://tripadvisor.in" + site, callback=self.parse)
Is there a regex that I could use in the for loop to replace them? I tried
replace but that did not do a thing. And also, why does Scrapy do that?
Answer: Simple solution after reading the list docs.
while "\n" in some_list: some_list.remove("\n")
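If the entries are strings that merely contain newlines rather than standalone
"\n" elements, stripping each extracted string inside parse() is more robust; a
sketch using the item fields from the question:

    item['reviews'] = [r.strip() for r in item['reviews'] if r.strip()]
    item['subjects'] = [s.strip() for s in item['subjects'] if s.strip()]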
|
How to find the median and standard deviation by job in the below data in python?
Question: I have CSV file like below and i want to calculate the median and standard
deviation of salaries by each job
salaries.csv
City Job Salary
Delhi Doctors 500
Delhi Lawyers 400
Delhi Plumbers 100
London Doctors 800
London Lawyers 700
London Plumbers 300
Tokyo Doctors 900
Tokyo Lawyers 800
Tokyo Plumbers 400
From the above data i want to calculate the median and standard deviation of
salaries by each job
Expected Output 1:
JOB salary_median
Doctors 400
Lawyers 700
plumbers 300
Expected Output 2:

    JOB       salary_std_dev
    Doctors   500
    Lawyers   600
    plumbers  400
Answer: This is a perfect task for pandas:
import pandas as pd
df = pd.DataFrame([
['Delhi', 'Doctors', 500],
['Delhi', 'Lawyers', 400],
['Delhi', 'Plumbers', 100],
['London', 'Doctors', 800],
['London', 'Lawyers', 700],
['London', 'Plumbers', 300],
['Tokyo', 'Doctors', 900],
['Tokyo', 'Lawyers', 800],
['Tokyo', 'Plumbers', 400]],
columns = ['City', 'Job', 'Salary'])
print df.groupby('Job').Salary.median().to_frame('salary_median')
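The standard deviation by job works the same way:

    print df.groupby('Job').Salary.std().to_frame('salary_std_dev')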
|
Boost Python, propagate C++ callbacks to Python causing segmentation fault
Question: I have the following listener in C++ that receives a Python object to
propagate the callbacks.
class PyClient {
private:
std::vector<DipSubscription *> subs;
subsFactory *sub;
class GeneralDataListener: public SubscriptionListener {
private:
PyClient * client;
public:
GeneralDataListener(PyClient *c):client(c){
client->pyListener.attr("log_message")("Handler created");
}
void handleMessage(Subscription *sub, Data &message) {
// Lock the execution of this method
PyGILState_STATE state = PyGILState_Ensure();
client->pyListener.attr("log_message")("Data received for topic");
...
// This method ends modifying the value of the Python object
topicEntity.attr("save_value")(valueKey, extractDipValue(valueKey.c_str(), message))
// Release the lock
PyGILState_Release(state);
}
void connected(Subscription *sub) {
client->pyListener.attr("connected")(sub->getTopicName());
}
void disconnected(Subscription *sub, char* reason) {
std::string s_reason(reason);
client->pyListener.attr("disconnected")(sub->getTopicName(), s_reason);
}
void handleException(Subscription *sub, Exception &ex) {
client->pyListener.attr("handle_exception")(sub->getTopicName())(ex.what());
}
};
GeneralDataListener *handler;
public:
python::object pyListener;
PyClient(python::object pyList): pyListener(pyList) {
std::ostringstream iss;
iss << "Listener" << getpid();
sub = Sub::create(iss.str().c_str());
createSubscriptions();
}
~PyClient() {
for (unsigned int i = 0; i < subs.size(); i++) {
if (subs[i] == NULL) {
continue;
}
sub->destroySubscription(subs[i]);
}
}
};
BOOST_PYTHON_MODULE(pytest)
{
// There is no need to expose more methods as will be used as callbacks
Py_Initialize();
PyEval_InitThreads();
python::class_<PyClient>("PyClient", python::init<python::object>())
.def("pokeHandler", &PyClient::pokeHandler);
};
Then, I have my Python program, which is like this:
import sys
import time
import pytest
class Entity(object):
def __init__(self, entity, mapping):
self.entity = entity
self.mapping = mapping
self.values = {}
for field in mapping:
self.values[field] = ""
self.updated = False
def save_value(self, field, value):
self.values[field] = value
self.updated = True
class PyListener(object):
def __init__(self):
self.listeners = 0
self.mapping = ["value"]
self.path_entity = {}
self.path_entity["path/to/node"] = Entity('Name', self.mapping)
def connected(self, topic):
print "%s topic connected" % topic
def disconnected(self, topic, reason):
print "%s topic disconnected, reason: %s" % (topic, reason)
def handle_message(self, topic):
print "Handling message from topic %s" % topic
def handle_exception(self, topic, exception):
print "Exception %s in topic %s" % (exception, topic)
def log_message(self, message):
print message
def sample(self):
for path, entity in self.path_entity.iteritems():
if not entity.updated:
return False
            sample = " ".join([entity.values[field] for field in entity.mapping])
print "%d %s %d %s" % (0, entity.entity, 4324, sample)
entity.updated = False
return True
if __name__ == "__main__":
sys.settrace(trace)
py_listener = PyListener()
sub = pytest.PyClient(py_listener)
while True:
if py_listener.sample():
break
So, finally, my problem seems to be that once the `while True` loop in the
Python program starts checking whether the entity is updated, I randomly get a
segmentation fault when the C++ listener tries to invoke the callback.
The same happens if I just use time.sleep in the Python script and call sample
from time to time. I know it would be solved if I called sample from the C++
code, but this script will be run by another Python module that calls the
sample method at a given delay. So the expected behaviour is for the C++ side
to update the values of the entities and the Python script to just read them.
I've debugged the error with gdb, but the stack trace I'm getting is not very
informative:
#0 0x00007ffff7a83717 in PyFrame_New () from /lib64/libpython2.7.so.1.0
#1 0x00007ffff7af58dc in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#2 0x00007ffff7af718d in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#3 0x00007ffff7af7292 in PyEval_EvalCode () from /lib64/libpython2.7.so.1.0
#4 0x00007ffff7b106cf in run_mod () from /lib64/libpython2.7.so.1.0
#5 0x00007ffff7b1188e in PyRun_FileExFlags () from /lib64/libpython2.7.so.1.0
#6 0x00007ffff7b12b19 in PyRun_SimpleFileExFlags () from /lib64/libpython2.7.so.1.0
#7 0x00007ffff7b23b1f in Py_Main () from /lib64/libpython2.7.so.1.0
#8 0x00007ffff6d50af5 in __libc_start_main () from /lib64/libc.so.6
#9 0x0000000000400721 in _start ()
And if I debug with sys.settrace inside Python, the last line before the
segmentation fault is always in the sample method, but it may vary.
I'm not sure how to solve this communication problem, so any advice pointing in
the right direction would be much appreciated.
**Edit** Modify the PyDipClient reference to PyClient.
What is happening is that I start the program from the Python main method; if
the C++ listener then tries to call back the Python listener, it crashes with
the segmentation fault. The only thread I believe is created is when I create a
subscription, but that is code inside a library whose exact workings I don't know.
If I remove all the callbacks to the Python listener and drive the methods
from Python (like calling the pokeHandler), everything works perfectly.
Answer: The most likely culprit is that the [Global Interpreter
Lock](http://wiki.python.org/moin/GlobalInterpreterLock) (GIL) is not being
held by a thread when it is invoking Python code, resulting in undefined
behavior. Verify all paths that make Python calls, such as
`GeneralDataListener`'s functions, acquire the GIL before invoking Python
code. If copies of `PyClient` are being made, then `pyListener` needs to be
managed in a manner that allows the GIL to be held when it is copied and
destroyed.
Furthermore, consider the [rule of
three](http://stackoverflow.com/q/4172722/1053968) for `PyClient`. Do the
copy-constructor and assignment operator need to do anything with regards to
the subscription?
* * *
The GIL is a mutex around the CPython interpreter. This mutex prevents
parallel operations to be performed on Python objects. Thus, at any point in
time, a max of one thread, the one that has acquired the GIL, is allowed to
perform operations on Python objects. When multiple threads are present,
invoking Python code whilst not holding the GIL results in undefined behavior.
C or C++ threads are sometimes referred to as alien threads in the Python
documentation. The Python interpreter has no ability to control the alien
thread. Therefore, alien threads are responsible for managing the GIL to
permit concurrent or parallel execution with Python threads.
In the current code:
* `GeneralDataListener::handle_message()` manages the GIL in a non-exception safe manner. For example, if the listener's `log_message()` method throws an exception, the stack will unwind and not release the GIL as `PyGILState_Release()` will not be invoked.
void handleMessage(...)
{
PyGILState_STATE state = PyGILState_Ensure();
client->pyListener.attr("log_message")(...);
...
PyGILState_Release(state); // Not called if Python throws.
}
* `GeneralDataListener::connected()`, `GeneralDataListener::disconnected()`, and `GeneralDataListener::handleException()` are explicitly invoking Python code, but do not explicitly manage the GIL. If the caller does not own the GIL, then undefined behavior is invoked as Python code is being executed without the GIL.
void connected(...)
{
// GIL not being explicitly managed.
client->pyListener.attr("connected")(...);
}
* `PyClient`'s implicitly created copy-constructor and assignment operator do not manage the GIL, but may indirectly invoke Python code when copying the `pyListener` data member. If copies are being made, then the caller needs to hold the GIL when the `PyClient::pyListener` object is being copied and destroyed. If the `pyListener` is not managed on the free space, then the caller must be Python aware and have acquired the GIL during the destruction of the entire `PyClient` object.
To resolve these, consider:
* Using a [Resource Acquisition Is Initialization](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) (RAII) guard class to help manage the GIL in an exception safe manner. For example, with the following gil_lock class, when a gil_lock object is created, the calling thread will acquire the GIL. When the gil_lock object is destructed, it releases the GIL
/// @brief RAII class used to lock and unlock the GIL.
class gil_lock
{
public:
gil_lock() { state_ = PyGILState_Ensure(); }
~gil_lock() { PyGILState_Release(state_); }
private:
PyGILState_STATE state_;
};
...
void handleMessage(...)
{
gil_lock lock;
client->pyListener.attr("log_message")(...);
...
}
* Explicitly manage the GIL in any code path that is invokes Python code from within an alien thread.
void connected(...)
{
gil_lock lock;
client->pyListener.attr("connected")(...);
}
* Making `PyClient` non-copyable or explicitly creating the copy-constructor and assignment operator. If copies are being made, then change `pyListener` to be held by a type that allows for explicit destruction while the GIL is being held. One solution is to use a `boost::shared_ptr<python::object>` that manages a copy of the `python::object` provided to the `PyClient` during construction, and has a custom deleter that is GIL aware. Alternatively, one could use something like [`boost::optional`](http://www.boost.org/doc/libs/1_58_0/libs/optional/doc/html/index.html).
class PyClient
{
public:
PyClient(const boost::python::object& object)
: pyListener(
new boost::python::object(object), // GIL locked, so copy.
[](boost::python::object* object) // Delete needs GIL.
{
gil_lock lock;
delete object;
}
)
{
...
}
private:
    boost::shared_ptr<boost::python::object> pyListener;
};
Note that by managing the `boost::python::object` on the free-space, one can
freely copy the `shared_ptr` without holding the GIL. On the other hand, if
one was using something like `boost::optional` to manage the Python object,
then one would need to hold the GIL during copy-construction, assignment, and
destruction.
Consider reading [this](http://stackoverflow.com/a/20828366/1053968) answer
for more details on callbacks into Python and subtle details, such as GIL
management during copy-construction and destruction.
|
Parse multiple subcommands in python simultaneously or other way to group parsed arguments
Question: I am converting Bash shell installer utility to Python 2.7 and need to
implement complex CLI so I am able to parse tens of parameters (potentially up
to ~150). These are names of Puppet class variables in addition to a dozen of
generic deployment options, which where available in shell version.
However, after I started to add more variables I faced several challenges:

1. I need to group parameters into separate dictionaries so deployment options are separated from Puppet variables. If they are thrown into the same bucket, I will have to write some logic to sort them, potentially renaming parameters, and then dictionary merges will not be trivial.
2. There might be variables with the same name but belonging to different Puppet classes, so I thought subcommands would allow me to filter what goes where and avoid name collisions.

At the moment I have implemented parameter parsing by simply adding multiple
parsers:
parser = argparse.ArgumentParser(description='deployment parameters.')
env_select = parser.add_argument_group(None, 'Environment selection')
env_select.add_argument('-c', '--client_id', help='Client name to use.')
env_select.add_argument('-e', '--environment', help='Environment name to use.')
setup_type = parser.add_argument_group(None, 'What kind of setup should be done:')
setup_type.add_argument('-i', '--install', choices=ANSWERS, metavar='', action=StoreBool, help='Yy/Nn Do normal install and configuration')
# MORE setup options
...
args, unk = parser.parse_known_args()
config['deploy_cfg'].update(args.__dict__)
pup_class1_parser = argparse.ArgumentParser(description=None)
pup_class1 = pup_class1_parser.add_argument_group(None, 'Puppet variables')
pup_class1.add_argument('--ad_domain', help='AD/LDAP domain name.')
pup_class1.add_argument('--ad_host', help='AD/LDAP server name.')
# Rest of the parameters
args, unk = pup_class1_parser.parse_known_args()
config['pup_class1'] = dict({})
config['pup_class1'].update(args.__dict__)
# Same for class2, class3 and so on.
The problem with this approach is that it does not solve issue 2. Also, the first
parser consumes the "-h" option, so the rest of the parameters are not shown in help.
I have tried to use [example selected as an
answer](http://stackoverflow.com/questions/10448200/how-to-parse-multiple-sub-
commands-using-python-argparse) but I was not able to use both commands at
once.
## This function takes the 'extra' attribute from global namespace and re-parses it to create separate namespaces for all other chained commands.
def parse_extra (parser, namespace):
namespaces = []
extra = namespace.extra
while extra:
n = parser.parse_args(extra)
extra = n.extra
namespaces.append(n)
return namespaces
pp = pprint.PrettyPrinter(indent=4)
argparser=argparse.ArgumentParser()
subparsers = argparser.add_subparsers(help='sub-command help', dest='subparser_name')
parser_a = subparsers.add_parser('command_a', help = "command_a help")
## Setup options for parser_a
parser_a.add_argument('--opt_a1', help='option a1')
parser_a.add_argument('--opt_a2', help='option a2')
parser_b = subparsers.add_parser('command_b', help = "command_b help")
## Setup options for parser_a
parser_b.add_argument('--opt_b1', help='option b1')
parser_b.add_argument('--opt_b2', help='option b2')
## Add nargs="*" for zero or more other commands
argparser.add_argument('extra', nargs = "*", help = 'Other commands')
namespace = argparser.parse_args()
pp.pprint(namespace)
extra_namespaces = parse_extra( argparser, namespace )
pp.pprint(extra_namespaces)
Results me in:
$ python argtest.py command_b --opt_b1 b1 --opt_b2 b2 command_a --opt_a1 a1
usage: argtest.py [-h] {command_a,command_b} ... [extra [extra ...]]
argtest.py: error: unrecognized arguments: command_a --opt_a1 a1
I got the same result when I tried to define a parent with two child parsers.
**QUESTIONS**
1. Can I somehow use parser.add_argument_group for argument parsing, or is it just for grouping in the help printout? It would solve issue 1 without the missing-help side effect. Passing it as `parse_known_args(namespace=argument_group)` (if I recall my experiments correctly) gets all the variables (that's OK) but also gets all the Python object internals in the resulting dict (that's bad for hieradata YAML).
2. What I am missing in the second example to allow to use multiple subcommands? Or is that impossible with argparse?
3. Any other suggestion to group command line variables? I have looked at Click, but did not find any advantages over standard argparse for my task.
Note: I am a sysadmin, not a programmer, so go gently on me for the
non-object-style coding. :)
Thank you
**RESOLVED** Argument grouping solved via the answer suggested by
[hpaulj](http://stackoverflow.com/users/901925/hpaulj).
import argparse
import pprint
parser = argparse.ArgumentParser()
group_list = ['group1', 'group2']
group1 = parser.add_argument_group('group1')
group1.add_argument('--test11', help="test11")
group1.add_argument('--test12', help="test12")
group2 = parser.add_argument_group('group2')
group2.add_argument('--test21', help="test21")
group2.add_argument('--test22', help="test22")
args = parser.parse_args()
pp = pprint.PrettyPrinter(indent=4)
d = dict({})
for group in parser._action_groups:
if group.title in group_list:
d[group.title]={a.dest:getattr(args,a.dest,None) for a in group._group_actions}
print "Parsed arguments"
pp.pprint(d)
This gets me the desired result for issue No. 1, until I have multiple
parameters with the same name. The solution may look ugly, but at least it works
as expected.
python argtest4.py --test22 aa --test11 yy11 --test21 aaa21
Parsed arguments
{ 'group1': { 'test11': 'yy11', 'test12': None},
'group2': { 'test21': 'aaa21', 'test22': 'aa'}}
Answer: Your question is too complicated to understand and respond to in one try. But
I'll throw out some preliminary ideas.
Yes, `argument_groups` are just a way of grouping arguments in the help. They
have no effect on parsing.
Another recent SO asked about parsing groups of arguments:
[Is it possible to only parse one argument group's parameters with
argparse?](http://stackoverflow.com/questions/31519997/is-it-possible-to-only-
parse-one-argument-groups-parameters-with-argparse)
That poster initially wanted to use a group as a parser, but the `argparse`
class structure does not allow that. `argparse` is written in object style.
`parser=ArgumentParser(...)` creates one class of object,
`parser.add_argument(...)` creates another, `add_argument_group(...)` yet
another. You customize it by subclassing `ArgumentParser` or `HelpFormatter`
or `Action` classes, etc.
I mentioned a `parents` mechanism. You define one or more parent parsers, and
use those to populate your 'main' parser. They could be run independently (with
parse_known_args), while the 'main' is used to handle help.
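A minimal sketch of that mechanism, reusing a couple of option names from the
question:

    import argparse

    # one parent parser per option group; add_help=False so only 'main' owns -h
    deploy = argparse.ArgumentParser(add_help=False)
    deploy.add_argument('-c', '--client_id')

    puppet = argparse.ArgumentParser(add_help=False)
    puppet.add_argument('--ad_domain')

    # 'main' inherits all options from the parents and handles help
    main = argparse.ArgumentParser(parents=[deploy, puppet])
    args = main.parse_args()

    # each parent can also be run on its own to pick out just its options
    deploy_args, _ = deploy.parse_known_args()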
We also discussed grouping the arguments after parsing. A `namespace` is a
simple object, in which each argument is an attribute. It can also be
converted to a dictionary. It is easy to pull groups of items from a
dictionary.
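For example, assuming the parser and config dict from your first snippet:

    args = parser.parse_args()
    d = vars(args)                            # the namespace as a plain dictionary
    env_keys = ('client_id', 'environment')
    config['deploy_cfg'] = {k: d[k] for k in env_keys if k in d}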
There have been SO questions about using multiple subparsers. That's an awkward
proposition. Possible, but not easy. Subparsers are like issuing a command to
a system program. You generally issue one command per call. You don't nest
them or issue sequences. You let shell piping and scripts handle multiple
actions.
`IPython` uses `argparse` to parse its inputs. It traps help first, and issues
its own message. Most arguments come from config files, so it is possible to
set values with default configs, custom configs and in the commandline. It's
an example of naming a very large set of arguments.
Subparsers let you use the same argument name, but since you cannot invoke
multiple subparsers in one call, that doesn't help much. And even if you
could invoke several subparsers, they would still put the arguments in the
same namespace. Also, `argparse` tries to handle flagged arguments in an
order-independent manner, so a `--foo` at the end of the command line gets
parsed the same as though it were at the start.
There was an SO question where we discussed using argument names ('dest') like
`'group1.argument1'`, and I've even discussed using nested namespaces. I could
look those up if it would help.
* * *
Another thought: load `sys.argv` and partition it before passing it to one or
more parsers. You could split it on some keyword, or on prefixes, etc.
|
Passing Python functions to Gnuplot
Question: Plotting a Python function in Gnuplot is not straightforward although there
are some solutions. For example, one could either cast its values into an
array or manually translate its expression into Gnuplot’s syntax. Here is an
example that uses the module [`Gnuplot.py`](http://gnuplot-py.sourceforge.net/
"Gnuplot.py \(website unavailable at the moment for some strange reason\)") as
an interface:
#!/usr/bin/env python
import Gnuplot
import numpy as np
## define function ##
func = lambda x, x0, y0, w: y0 * np.exp( -4*np.log(2) * ( (x-x0) / w )**2 )
# also works with a regular function:
# def func(x, x0, y0, w):
# return y0 * np.exp( -4*np.log(2) * ( (x-x0) / w )**2 )
popt = (10.1, 5, 2)
## linspace ##
x = np.linspace(0, 20, num=1000) # (x min, x max, number of points)
y = func(x, *popt)
func_linspace = Gnuplot.Data(x, y, with_='lines', title='linspace')
## expression “translation” (lambda only) ##
func_translation = Gnuplot.Func(
'{y0} * exp( -4*log(2) * ( (x-{x0}) / {w} )**2 )'.format(
x0=popt[0],
y0=popt[1],
w=popt[2],
),
title='expression translation')
## plot ##
g = Gnuplot.Gnuplot()
g.plot(func_linspace, func_translation)
The first method works fine with a decent number of points but fails when
zooming in too much or moving the window outside the array’s limits, while
the second one works at any zoom level. To illustrate this point, let’s zoom
in on the output of the previous script:

For this reason, it would be interesting to find a way to plot Python
functions (lambda or regular functions) as **Gnuplot functions**. I can think
of two solution: automatically translating the expression (works only for
“simple” lambda functions”), or having Gnuplot directly use the Python
function.
## First solution: expression translation (simple lambda functions only)
This method would not only be tricky to automate, it would also be impossible
to implement with elaborate functions. However we could still use this method
for simple lambda functions. To sketch the behaviour of an implementation:
>>> def lambda_to_gnuplot(func, popt):
... # determine if translation is possible
... # extract function expression and replace parameters with values
... return func_expression # str
>>> lambda_to_gnuplot(
... lambda x, x0, y0, w: y0 * np.exp( -4*np.log(2) * ( (x-x0) / w )**2),
... (10.1, 5, 2))
'5 * exp( -4*log(2) * ( (x-10.1) / 2 )**2 )'
_Would there be a way to implement this`lambda_to_gnuplot` function in
python?_
## Second solution: directly passing the Python function to Gnuplot
The “perfect” solution would be having Gnuplot use the Python function. In my
most daring dreams, it is something like:
>>> def func(x, x0, y0, w):
... if x < x0:
... return 0
... else:
... return y0 * np.exp( -4*np.log(2) * ( (x-x0) / w )**2)
>>> func_direct = Gnuplot.PyFunction(lambda x: func(x, 10.1, 5, 2))
>>> g.plot(func_direct)
This is the easiest solution to use, but its implementation would be very
tough, if not impossible. _Any hints on how this solution might be
implemented?_ The answer may of course bypass `Gnuplot.py`.
Answer: I am not sure if I'm fully answering your question, but you could try
executing your python script as a system call within gnuplot passing the
argument(s).
For instance, imagine the simple python script `test.py`:
import sys
x=float(sys.argv[1])
print x**2
which will return the square of the argument when called like this from a
shell:
:~$ python test.py 2
4.0
:~$ python test.py 3
9.0
:~$ python test.py 4
16.0
Now, within gnuplot, turn this into a function:
gnuplot> f(x) = real(system(sprintf("python test.py %g", x)))
gnuplot> print f(1)
1.0
gnuplot> print f(2)
4.0
gnuplot> print f(3)
9.0
gnuplot> print f(4)
16.0
I added the `real()` so that the string output from the system call is
converted to float. This now allows you to use it as a regular gnuplot
function. I don't need to mention this will take a lot longer to execute than
just `plot x**2`:
f(x) = real(system(sprintf("python test.py %g", x)))
plot f(x)
[](http://i.stack.imgur.com/cG4Ds.png)
|
Fitting a Binomial distribution with pymc raises ZeroProbability error for certain FillValues
Question: I'm not sure if I found a bug in pymc. It seems like fitting a Binomial with
missing data can produce a `ZeroProbability` error depending on the chosen
fill_value that masks missing data. But maybe I'm using it wrongly. I tried
the following example with the current master branch from github. I'm aware of
the [bug concerning Binomial distributions in pymc
2.3.4](https://stackoverflow.com/questions/28778725/designing-a-simple-
binomial-distribution-throws-core-dump-in-pymc), but this seems to be a
different issue.
I fitted a Binomial distribution with pymc and everything worked as I
expected:
import scipy as sp
import pymc
def make_model(observed_values):
p = pymc.Uniform('p', lower = 0.0, upper = 1.0, value = 0.1)
values = pymc.Binomial('values', n = 10* sp.ones_like(observed_values), p = p * sp.ones_like(observed_values),\
value = observed_values, observed = True, plot = False)
values = pymc.Binomial('values', n = 10, p = p,\
value = observed_values, observed = True, plot = False)
return locals()
sp.random.seed(0)
observed_values = sp.random.binomial(n = 10.0, p = 0.1, size = 100)
M1 = pymc.MCMC(make_model(observed_values))
M1.sample(iter=10000, burn=1000, thin=10)
pymc.Matplot.plot(M1)
M1.summary()
Output:
[-----------------100%-----------------] 10000 of 10000 complete in 0.7 sec
Plotting p
p:
Mean SD MC Error 95% HPD interval
------------------------------------------------------------------
0.093 0.007 0.0 [ 0.081 0.107]
Posterior quantiles:
2.5 25 50 75 97.5
|---------------|===============|===============|---------------|
0.08 0.088 0.093 0.097 0.106
Now, I tried a very similar situation with the difference that one observed
value would be missing:
mask = sp.zeros_like(observed_values)
mask[0] = True
masked_values = sp.ma.masked_array(observed_values, mask = mask, fill_value = 999999)
M2 = pymc.MCMC(make_model(masked_values))
M2.sample(iter=10000, burn=1000, thin=10)
pymc.Matplot.plot(M2)
M2.summary()
Unexpectedly, I got a `ZeroProbability` error:
---------------------------------------------------------------------------
ZeroProbability Traceback (most recent call last)
<ipython-input-16-4f945f269628> in <module>()
----> 1 M2 = pymc.MCMC(make_model(masked_values))
2 M2.sample(iter=10000, burn=1000, thin=10)
3 pymc.Matplot.plot(M2)
4 M2.summary()
<ipython-input-12-cb8707bb911f> in make_model(observed_values)
4 def make_model(observed_values):
5 p = pymc.Uniform('p', lower = 0.0, upper = 1.0, value = 0.1)
----> 6 values = pymc.Binomial('values', n = 10* sp.ones_like(observed_values), p = p * sp.ones_like(observed_values), value = observed_values, observed = True, plot = False)
7 values = pymc.Binomial('values', n = 10, p = p, value = observed_values, observed = True, plot = False)
8 return locals()
/home/fabian/anaconda/lib/python2.7/site-packages/pymc/distributions.pyc in __init__(self, *args, **kwds)
318 logp_partial_gradients=logp_partial_gradients,
319 dtype=dtype,
--> 320 **arg_dict_out)
321
322 new_class.__name__ = name
/home/fabian/anaconda/lib/python2.7/site-packages/pymc/PyMCObjects.pyc in __init__(self, logp, doc, name, parents, random, trace, value, dtype, rseed, observed, cache_depth, plot, verbose, isdata, check_logp, logp_partial_gradients)
773 if check_logp:
774 # Check initial value
--> 775 if not isinstance(self.logp, float):
776 raise ValueError(
777 "Stochastic " +
/home/fabian/anaconda/lib/python2.7/site-packages/pymc/PyMCObjects.pyc in get_logp(self)
930 (self._value, self._parents.value))
931 else:
--> 932 raise ZeroProbability(self.errmsg)
933
934 return logp
ZeroProbability: Stochastic values's value is outside its support,
or it forbids its parents' current values.
However, if I change the fill value in the masked array to 1, fitting works
again:
masked_values2 = sp.ma.masked_array(observed_values, mask = mask, fill_value = 1)
M3 = pymc.MCMC(make_model(masked_values2))
M3.sample(iter=10000, burn=1000, thin=10)
pymc.Matplot.plot(M3)
M3.summary()
Output:
[-----------------100%-----------------] 10000 of 10000 complete in 2.1 sec
Plotting p
p:
Mean SD MC Error 95% HPD interval
------------------------------------------------------------------
0.092 0.007 0.0 [ 0.079 0.105]
Posterior quantiles:
2.5 25 50 75 97.5
|---------------|===============|===============|---------------|
0.079 0.088 0.092 0.097 0.105
values:
Mean SD MC Error 95% HPD interval
------------------------------------------------------------------
1.15 0.886 0.029 [ 0. 3.]
Posterior quantiles:
2.5 25 50 75 97.5
|---------------|===============|===============|---------------|
0.0 1.0 1.0 2.0 3.0
Is this a bug or is there a problem with my model? Thanks for any help!
Answer: I asked the pymc developers on GitHub, where I got an answer
from fonnesbeck (<https://github.com/pymc-
devs/pymc/issues/47#issuecomment-129301002>):
> This is because you filled the missing values with 999999, which is outside
> the support of the variable. You need to give it a valid value, but not one
> that occurs in the non-missing data."
To make sure that the missing value is not in the non-missing data the
following was suggested:
> You can give it a non-integer value in order to avoid the problem that you
> cite. For example, if you fill with, say, 1.5 that should work.
>
> (...) PyMC calculates the log-probability at the first iteration, and
> therefore the values inserted for the missing values at the first iteration
> have to be valid. If you give discrete values a floating point value, it
> should end up getting truncated when converted to integers, so that will
> work."
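Applied to the example above, that means filling with a value inside the support
(0 to 10 here) that cannot occur in the observed integer data, e.g.:

    masked_values = sp.ma.masked_array(observed_values, mask=mask, fill_value=1.5)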
|
Output from two processes on shell (Python,Linux)
Question: My Python script executes a program that sends its output to the shell.
However, the script should simultaneously execute its own commands, which also
write their output to the shell. Is it possible to do that? Will I see output
both from the script and from the process? Here is the call that produces output
on the shell:
s.exe_cmd('./upgrade')
So, will I be able to write
print "my output..."
and see it on the shell too?
Answer: Assuming you're using `subprocess`, both will print to the console.
import subprocess
p = subprocess.Popen("sleep 1 && echo from inside subprocess",
shell=True)
print("will print asynchronously")
p.wait()
This is an example program that you can try out
|
Python Import using __init__.py instead of adding to sys.path
Question: This is my file structure.
/working dir
__init__.py
main.py
/packages
__init__.py
snafu.py
/subfolder1
__init__.py
foo.py
/subfolder2
__init__.py
bar.py
/many_more
...
If I run `main.py`, it will try to import `from subfolder1.foo import
something`. But `foo.py` will try to `import subfolder2`, which won't work
because `subfolder2` is not found.
It would be way too much work to go into every file and change every import
statement to `from packages.a_subfolder.whatever import something`.
I have gotten it to work by adding `/packages` to the `sys.path`, but I would
prefer not to do this. Is there a way to fix this using `__init__.py` files?
Would adding `import *` to the /packages **__init__.py** file work?
The many_more/ folders are third-party packages I downloaded. Since I work on
this on different computers, instead of installing the packages on every
computer, the project just uses the copies in the folder. For example, to use
Google Drive in your program you need about 10 different packages to get it to
work.
Answer: In your case it seems you want to import modules present in the parent directory.
Having the following code in the file from which you want to import the module in
the parent directory should work:
import sys
sys.path.append('.')
sys.path.append('..')
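A variant that does not depend on the current working directory is to derive the
path from the importing file's own location; a sketch, assuming main.py sits next
to the packages/ folder:

    import os
    import sys

    # make packages/ importable no matter where the script is launched from
    sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'packages'))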
|
Django Web Development Server Not Working
Question: I'm trying my hand at Django, have just started, and frankly speaking I'm a
complete beginner at this. Recently I encountered a problem which I guess is
linked to some settings, but I'm not able to understand and solve it. Whenever
I write
python manage.py runserver
on the command line, I get the following error.
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 162, in __init__
common_passwords_lines = gzip.open(password_list_path).read().decode('utf-8'
).splitlines()
File "C:\Python34\lib\gzip.py", line 52, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "C:\Python34\lib\gzip.py", line 181, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Python34\\lib\\site
-packages\\django-1.9-py3.4.egg\\django\\contrib\\auth\\common-passwords.txt.gz'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\core\managemen
t\__init__.py", line 331, in execute_from_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\core\managemen
t\__init__.py", line 305, in execute
django.setup()
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\__init__.py",
line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\apps\registry.
py", line 115, in populate
app_config.ready()
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\admin\
apps.py", line 22, in ready
self.module.autodiscover()
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\admin\
__init__.py", line 26, in autodiscover
autodiscover_modules('admin', register_to=site)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\utils\module_l
oading.py", line 50, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\a
dmin.py", line 7, in <module>
from django.contrib.auth.forms import (
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\f
orms.py", line 261, in <module>
class SetPasswordForm(forms.Form):
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\f
orms.py", line 271, in SetPasswordForm
help_text=password_validation.password_validators_help_text_html())
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 85, in password_validators_help_text_html
help_texts = password_validators_help_texts(password_validators)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 74, in password_validators_help_texts
password_validators = get_default_password_validators()
File "C:\Python34\lib\functools.py", line 434, in wrapper
result = user_function(*args, **kwds)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 21, in get_default_password_validators
return get_password_validators(settings.AUTH_PASSWORD_VALIDATORS)
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 32, in get_password_validators
validators.append(klass(**validator.get('OPTIONS', {})))
File "C:\Python34\lib\site-packages\django-1.9-py3.4.egg\django\contrib\auth\p
assword_validation.py", line 164, in __init__
with open(password_list_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Python34\\lib\\site
-packages\\django-1.9-py3.4.egg\\django\\contrib\\auth\\common-passwords.txt.gz'
I have tried setting the environment variable and all sorts of things, but the
problem persists. Please help!
Answer: Django 1.9 is not out yet. If you are new to Django, use the latest release,
currently [1.8.3](https://docs.djangoproject.com/en/1.8/releases/1.8.3/).
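If you installed the 1.9 pre-release by accident, one way to switch back,
assuming pip manages your packages:

    pip uninstall django
    pip install django==1.8.3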
|
KeyError: SPARK_HOME during SparkConf initialization
Question: I am a spark newbie and I want to run a Python script from the command line. I
have tested pyspark interactively and it works. I get this error when trying
to create the sc:
File "test.py", line 10, in <module>
conf=(SparkConf().setMaster('local').setAppName('a').setSparkHome('/home/dirk/spark-1.4.1-bin-hadoop2.6/bin'))
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/conf.py", line 104, in __init__
SparkContext._ensure_initialized()
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/context.py", line 229, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway()
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/java_gateway.py", line 48, in launch_gateway
SPARK_HOME = os.environ["SPARK_HOME"]
File "/usr/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'SPARK_HOME'
Answer: It seems like there are two problems here.
The first one is a path you use. `SPARK_HOME` should point to the root
directory of the Spark installation so in your case it should probably be
`/home/dirk/spark-1.4.1-bin-hadoop2.6` not `/home/dirk/spark-1.4.1-bin-
hadoop2.6/bin`.
The second problem is a way how you use `setSparkHome`. If you check [a
docstring](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/conf.py#L130)
its goal is to
> set path where Spark is installed on worker nodes
`SparkConf` constructor assumes that `SPARK_HOME` on master is already set.
[It
calls](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/conf.py#L104)
`pyspark.context.SparkContext._ensure_initialized` [which
calls](https://github.com/apache/spark/blob/49351c7f597c67950cc65e5014a89fad31b9a6f7/python/pyspark/context.py#L234)
`pyspark.java_gateway.launch_gateway`, [which tries to
acccess](https://github.com/apache/spark/blob/49351c7f597c67950cc65e5014a89fad31b9a6f7/python/pyspark/java_gateway.py#L48)
`SPARK_HOME` and fails.
To deal with this you should set `SPARK_HOME` before you create `SparkConf`.
import os
os.environ["SPARK_HOME"] = "/home/dirk/spark-1.4.1-bin-hadoop2.6"
conf = (SparkConf().setMaster('local').setAppName('a'))
|
How to ensure a file is closed for writing in Python?
Question: The issue described [here](http://stackoverflow.com/questions/31447975/python-
open-write-on-windows-permission-issue-ioerror-for-file-created) looked
initially like it was solvable by just having the spreadsheet closed in Excel
before running the program.
It transpires, however, that having Excel closed is a necessary, but not
sufficient, condition. The issue still occurs, but not on every Windows
machine, and not every time (sometimes it occurs after a single execution,
sometimes two).
I've modified the program such that it now reads from one spreadsheet and
writes to a different one, still the issue presents itself. I even go on to
programmatically kill any lingering Python processes before running the
program. Still no joy.
The `openpyxl` `save()` function instantiates `ZipFile` thus:
archive = ZipFile(filename, 'w', ZIP_DEFLATED, allowZip64=True)
... with `Zipfile` then using that to attempt to open the file in mode `'wb'`
thus:
if isinstance(file, basestring):
self._filePassed = 0
self.filename = file
modeDict = {'r' : 'rb', 'w': 'wb', 'a' : 'r+b'}
try:
self.fp = open(file, modeDict[mode])
except IOError:
if mode == 'a':
mode = key = 'w'
self.fp = open(file, modeDict[mode])
else:
raise
According to [the
docs](https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-
files):
> On Windows, 'b' appended to the mode opens the file in binary mode, so there
> are also modes like 'rb', 'wb', and 'r+b'. Python on Windows makes a
> distinction between text and binary files; the end-of-line characters in
> text files are automatically altered slightly when data is read or written.
> This behind-the-scenes modification to file data is fine for ASCII text
> files, but it’ll corrupt binary data like that in JPEG or EXE files. Be very
> careful to use binary mode when reading and writing such files. On Unix, it
> doesn’t hurt to append a 'b' to the mode, so you can use it platform-
> independently for all binary files.
... which explains why mode 'wb' must be used.
Is there something in Python file opening that could possibly leave the file
in some state of "openness"?
**Windows:** 8
**Python:** 2.7.10
**openpyxl:** latest
Answer: Two suggestions:
First is to use `with` to close the file correctly.
with open("some.xls", "wb") as excel_file:
#Do something
At the end of that the file will close on its own (see
[this](https://docs.python.org/2/reference/compound_stmts.html#with)).
You can also make a copy of the file and work on the copied file.
import shutil
shutil.copyfile(src, dst)
<https://docs.python.org/2/library/shutil.html#shutil.copyfile>
|
How to set site-packages directory for local Python 2.7 install
Question: I'm trying to run a certain script in Python, but it requires some other
modules (setuptools) - I don't have write permissions for our /usr/ directory
to install them, so I'm trying to install a local version of Python 2.7 to run
it (not in /usr/).
When I try to run the ez_setup script for setuptools, it tries to access:
> /usr/local/lib/python2.7/site-packages/
But that's not the installation I want (I get write permission error). I can
point the ez_setup script to wherever, but I'm not sure how to get Python to
use it. In my 2.7 install I ran:
> site.getsitepackages()
And got ['/usr/local/lib/python2.7/site-packages', '/usr/local/lib/site-
python'].
Is there a way to change the default site-packages directory so I can do local
installs? Like in the Python installation directory itself?
Thanks, Kaleb
Answer: You really want to read up on Python [virtual
environments](http://docs.python-guide.org/en/latest/dev/virtualenvs/), which
allow you to create a "local" Python tree into which you can install your own
packages and such without requiring root access.
For example, assuming that you have the `virtualenv` command available (you
may need to install this first; it's available as a package for most major
distributions), you can create a new virtual environment like this:
virtualenv --system-site-packages myenv
The `--system-site-packages` option will make any packages in your system
site-packages directory visible to this environment, as well as anything you
install locally. Then activate the environment:
. myenv/bin/activate
And install stuff:
pip install apackage
**NB** For no good reason, `pydoc` will stop working for you in this
configuration. You can just drop a new `pydoc` script into `myenv/bin/pydoc`
that looks like this:
#!/usr/bin/env python
import pydoc
if __name__ == '__main__':
pydoc.cli()
|
How to delete repeating lines starting with specific word in python
Question: I have an inputfile of the form
All tests start with the word "Test" and all errors start with the word
"error"
Test1
Error1
Error1
Error2
Test1
Error3
Test2
Error1
Error4
Test2
Error5
Error1
Test3
Error1
I want it in the format:
Test1
Error1
Error1
Error2
Error3 // Removed test1
Test2
Error1
Error4
Error5
Error1
Test3
Error1
Basically, while going through the file, it should delete repeated test names
and write the lines in the same order to an output file. Following is my code:
import os
import sys
import optparse
def delete_duplicate(inputfile,outputfile):
output = open(outputfile, "w")
from collections import OrderedDict
input = open(inputfile, "r")
lines = (line.strip() for line in input)
unique_lines = OrderedDict.fromkeys((line for line in lines if line))
for unique_line in unique_lines:
output.write(unique_line)
output.write("\n")
My code removes duplicate lines and gives result as below:
Test1
Error1
Error2
Error3
Test2
Error4
Error5
Test3
It works fine for test names but not for errors. Can anybody help?
Answer: All you need is to keep the lines that start with `Test` in a set; if a line is
already in the set, just don't write it to the output file:

    def delete_duplicate(inputfile, outputfile):
        seen = set()  # test names already written out
        with open(outputfile, "w") as output, open(inputfile, "r") as input:
            for line in input:
                line = line.strip()
                if line and line not in seen:
                    output.write(line + '\n')
                    if line.startswith('Test'):
                        seen.add(line)

The advantage of a `set` is that membership checks and insertions are O(1).
|
python subprocess & popen invalid syntax
Question: I am new to scripting. I have this line in bash I'm trying to write in python.
numcpu = ($(cat /proc/cpuinfo | grep 'physical id' | awk '{print $NF}' | sort | uniq | wc -l))
I have tried using sub and Popen and couldn't get it to work. Here's the line:
numcpu = sub.Popen('($(cat /proc/cpuinfo | grep 'physical id' | awk '{print $NF}' | sort | uniq | wc -l))',stdout=sub.PIPE,stderr=sub.PIPE)
It keeps throwing errors. Any ideas on what the problem is, or another way I
could do it? Maybe by importing and calling os?
I am using Python v2.6 and I can't upgrade.
Answer: So how should Python know where the string starts or ends if you use the same
quotes as are contained in the string? Switch to using `"` double quotes to
enclose your shell command.
Also, you'll need to tell `subprocess` to use a shell to run your command,
otherwise it'll just try to run the command directly (and thus not support
shell syntax):
numcpu = sub.Popen("($(cat /proc/cpuinfo | grep 'physical id' | awk '{print $NF}' | sort | uniq | wc -l))",
shell=True, stdout=sub.PIPE, stderr=sub.PIPE)
You may want to just read from `proc/cpuinfo` directly, and process that in
Python:
cpu_ids = set()
with open('/proc/cpuinfo') as cpuinfo:
for line in cpuinfo:
if 'physical id' in line:
cpu_ids.add(line.rsplit(None, 1)[-1])
numcpu = len(cpu_ids)
|
Elasticsearch Python client Reindex Timedout
Question: I'm trying to reindex using the Elasticsearch python client, using
<https://elasticsearch-
py.readthedocs.org/en/master/helpers.html#elasticsearch.helpers.reindex>. But
I keep getting the following exception:
`elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by -
ReadTimeout`
The stacktrace of the error is
Traceback (most recent call last):
File "~/es_test.py", line 33, in <module>
main()
File "~/es_test.py", line 30, in main
target_index='users-2')
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 306, in reindex
chunk_size=chunk_size, **kwargs)
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 182, in bulk
for ok, item in streaming_bulk(client, actions, **kwargs):
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 124, in streaming_bulk
raise e
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeout(HTTPSConnectionPool(host='myhost', port=9243): Read timed out. (read timeout=10))
Is there anyway to prevent this exception besides increasing the timeout?
**EDIT:** python code
from elasticsearch import Elasticsearch, RequestsHttpConnection, helpers
es = Elasticsearch(connection_class=RequestsHttpConnection,
host='myhost',
port=9243,
http_auth=HTTPBasicAuth(username, password),
use_ssl=True,
verify_certs=True,
timeout=600)
helpers.reindex(es, source_index=old_index, target_index=new_index)
Answer: It may be happening because of an OutOfMemoryError for Java heap space, which
means you are not giving Elasticsearch enough memory for what you want to do.
Look at your logs under `/var/log/elasticsearch` to see if there is an exception
like that.
<https://github.com/elastic/elasticsearch/issues/2636>
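If memory pressure on the cluster is the cause, shrinking the bulk chunks (the
`chunk_size` visible in the traceback above) can also keep each request small
enough to finish within the timeout; a sketch:

    helpers.reindex(es, source_index=old_index, target_index=new_index,
                    chunk_size=100)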
|
A way to display currency?
Question: I have database values such as the following:
price currency
10.99 USD
13.99 EUR
Is there a python library or something that I can use for help with displaying
the currency properly on a storefront? For example, it should be:
10.99 USD ==> $10.99
13.99 EUR (france) ==> 13,99 €
Answer: You could use [babel](http://babel.pocoo.org/docs/api/numbers/) like this:
import babel.numbers as numbers
print(numbers.format_currency(10.99, 'USD', locale='en_US'))
$10.99
print(numbers.format_currency(13.99, 'EUR', locale='fr_FR'))
13,99 €
PS. Under the hood, [money uses
babel](https://github.com/carlospalol/money#installation) for locale-aware
formatting.
|
How to get points inside multiple polydata
Question: I use `vtkAppendPolyData` to merge multiple polydata into one polydata, and
`vtkSelectEnclosedPoints` to get points inside the polydata.
Here is the python code using `tvtk.api`:
from tvtk.api import tvtk
# create some random points
points = np.random.randn(9999, 3)
pd_points = tvtk.PolyData()
pd_points.points = points
pd_points.verts = np.arange(len(points)).reshape(-1, 1)
# create two polydata
cube1 = tvtk.CubeSource()
cube1.update()
cube2 = tvtk.CubeSource()
cube2.center = (0.5, 0, 0)
cube2.update()
# merge the two polydata into one:
append = tvtk.AppendPolyData()
append.add_input_data(cube1.output)
append.add_input_data(cube2.output)
append.update()
# select points inside polydata
sep = tvtk.SelectEnclosedPoints()
sep.set_input_data(pd_points)
sep.set_surface_data(append.output)
sep.update()
# remove outside points
tp = tvtk.ThresholdPoints()
tp.threshold_by_upper(0.5)
tp.set_input_data(sep.output)
tp.update()
res = tp.output
res.point_data.remove_array(0)
the result looks like:
[](http://i.stack.imgur.com/5CTQQ.png)
As you can see, the points inside both polydata are not included.
I don't want to use a for loop, because I have a lot of polydata to clip the
data with.
Answer: The surface you created is not a manifold and `vtkSelectEnclosedPoints` works
with manifolds only. Try to use `vtkBooleanOperationPolyDataFilter` with
`SetOperationToUnion()` instead of `vtkAppendPolyData`.
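A sketch of how that might look with tvtk, assuming it wraps the filter with its
usual trait and method naming (untested):

    # union of the two cubes produces a single closed (manifold) surface
    boolean = tvtk.BooleanOperationPolyDataFilter()
    boolean.operation = 'union'
    boolean.set_input_data(0, cube1.output)
    boolean.set_input_data(1, cube2.output)
    boolean.update()
    sep.set_surface_data(boolean.output)  # instead of append.output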
|
Datetime comparisons in python
Question: I have a file with two different dates: one has a timestamp and one does not.
I need to read the file, disregard the timestamp, and compare the two dates.
If the two dates are the same then I need to spit it to the output file and
disregard any other rows. I'm having trouble knowing if I should be using a
datetime function on the input and formatting the date there and then simply
seeing if the two are equivalent? Or should I be using a timedelta?
I've tried a couple different ways but haven't had success.
df = pd.read_csv("File.csv", dtype={'DATETIMESTAMP': np.datetime64, 'DATE':np.datetime64})
Gives me : TypeError: the dtype < M8 is not supported for parsing, pass this
column using parse_dates instead
I've also tried to just remove the timestamp and then compare, but the strings
end up with different date formats and that doesn't work either.
df['RemoveTimestamp'] = df['DATETIMESTAMP'].apply(lambda x: x[:10])
df = df[df['RemoveTimestamp'] == df['DATE']]
Any guidance appreciated.
Here is my sample input CSV file:
"DATE", "DATETIMESTAMP"
"8/6/2014","2014-08-06T10:18:38.000Z"
"1/15/2013","2013-01-15T08:57:38.000Z"
"3/7/2013","2013-03-07T16:57:18.000Z"
"12/4/2012","2012-12-04T10:59:37.000Z"
"5/6/2014","2014-05-06T11:07:46.000Z"
"2/13/2013","2013-02-13T15:51:42.000Z"
Answer:
import pandas as pd
import numpy as np
# your data, both columns are in string
# ================================================
df = pd.read_csv('sample_data.csv')
df
            DATE             DATETIMESTAMP
    0   8/6/2014  2014-08-06T10:18:38.000Z
    1  1/15/2013  2013-01-15T08:57:38.000Z
    2   3/7/2013  2013-03-07T16:57:18.000Z
    3  12/4/2012  2012-12-04T10:59:37.000Z
    4   5/6/2014  2014-05-06T11:07:46.000Z
    5  2/13/2013  2013-02-13T15:51:42.000Z
# processing
# =================================================
# convert string to datetime
df['DATE'] = pd.to_datetime(df['DATE'])
df['DATETIMESTAMP'] = pd.to_datetime(df['DATETIMESTAMP'])
# cast timestamp to date
df['DATETIMESTAMP'] = df['DATETIMESTAMP'].values.astype('<M8[D]')
             DATE DATETIMESTAMP
    0  2014-08-06    2014-08-06
    1  2013-01-15    2013-01-15
    2  2013-03-07    2013-03-07
    3  2012-12-04    2012-12-04
    4  2014-05-06    2014-05-06
    5  2013-02-13    2013-02-13
# compare
df['DATE'] == df['DATETIMESTAMP']
0 True
1 True
2 True
3 True
4 True
5 True
dtype: bool
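An equivalent and arguably more readable way to drop the time component, assuming a reasonably recent pandas with the `.dt` accessor, is:

    # same comparison without the '<M8[D]' cast
    df['DATE'].dt.date == df['DATETIMESTAMP'].dt.date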
|
Use NLTK to find reasons within text
Question: For my project at work I am tasked with going through a bunch of user
generated text, and in some of that text are reasons for cancelling their
internet service, as well as how often that reason is occurring. It could be
they are moving, just don't like it, or bad service, etc.
While this may not necessarily be a Python question, I am wondering if there
is some way I can use NLTK or Textblob in some way to determine reasons for
cancellation. I highly doubt there is anything automated for such a
specialized task and I realize that I may have to build a neural net, but any
suggestions on how to tackle this problem would be appreciated.
This is what I have thought about so far: 1) Use stemming and tokenization and
tally up most frequent words. Easy method, not that accurate. 2) n-grams.
Computationally intensive, but may hold some promise. 3) POS tagging and
chunking, maybe find words which follow conjunctions such as "because". 4) Go
through all text fields manually and keep a note of reasons for cancellation.
Not efficient, defeats the whole purpose of finding some algorithm. 5) NN,
have absolutely no idea, and I have no idea if it is feasible.
I would really appreciate any advice on this.
Answer: > Don't worry if this answer is too general or you can't understand something
> - this is academic stuff and needs some basic preparations. Feel free to
> contact me with questions, if you want (ask for my mail in comment or smth,
> we'll figure something out).
I think that this question is more suited for
[CrossValidated](http://stats.stackexchange.com/).
Anyway, first thing that you need to do is to create some training set. You
need to find as many documents with reasons as you can and annotate them,
marking phrases specifying reason. The more documents the better. If you're
gonna work with user reports - use example reports, so that training data and
real data will come from the same source. This is how you'll build some kind
of corpus for your processing.
Then you have to specify what features you'll need. This may be POS tag,
n-gram feature, lemma/stem, etc. This needs experimentation and some practice.
Here I'd use some n-gram features (probably 2-gram or 3-gram) and maybe some
knowledge based on WordNet.
Last step is building you chunker or annotator. It is a component that will
take your training set, analyse it and learn what it should mark. You'll meet
something called "semantic gap" - this term describes situation when your
program "learned" something else than you wanted (it's a simplification). For
example, you may use such a set of features, that your chunker will learn
finding "I don't " phrases instead of reason phrases. It is really dependent
on your training set and feature set. If that happens, you should try changing
your feature set, and after a while - try working on the training set, as it
may not be representative.
How to build such chunker? For your case I'd use HMM (Hidden Markov Model) or
- even better - CRF (Conditional Random Field). These two are statistical
methods commonly used for stream annotation, and your text is basically a
stream of tokens. Another approach could be using any "standard" classifier
(from Naive Bayes, through some decision trees, NN to SVM) and using it on
every n-gram in text.
Of course choosing feature set is highly dependent on chosen method, so read
some about them and choose wisely.
PS. This is oversimplified answer, missing many important things about
training set preparation, choosing features, preprocessing your corpora,
finding sources for them, etc. This is not walk-through - these are basic
steps that you should explore yourself.
PPS. Not sure, but NLTK may have some CRF or HMM implementation. If not, I can
recommend [scikit-learn](http://scikit-learn.org/stable/modules/hmm.html) for
Markov and
[CRFP++](http://www.chokkan.org/software/crfsuite/api/crfsuite_hpp_api.html)
for CRF. Look out - the latter is powerful, but is a b*tch to install and to
use from Java or python.
**==EDIT==**
Shortly about features:
First, what kinds of features can we imagine? (A short code sketch follows the list.)
* lemma/stem - you find stems or lemmas for each word in your corpus, choose the most important (usually those will have the highest frequency, or at least you'll start there) and then represent each word/n-gram as a binary vector, stating whether the represented word or sequence after stemming/lemmatization contains that feature lemma/stem
* n-grams - similar to above, but instead of single words you choose the most important sequences of length n. "n-gram" means "sequence of length n", so e.g. bigrams (2-grams) for "I sat on a bench" will be: "I sat", "sat on", "on a", "a bench".
* skipgrams - similar to n-grams, but containing "gaps" in the original sentence. For example, biskipgrams with gap size 3 for "Quick brown fox jumped over something" (sorry, I can't remember this phrase right now :P ) will be: ["Quick", "over"], ["brown", "something"]. In general, n-skipgrams with gap size m are obtained by getting a word, skipping m, getting a word, etc. until you have n words.
* POS tags - I've always mistaken them for "positional" tags, but this is an acronym for "Part Of Speech". It is useful when you need to find phrases that have a common grammatical structure, not common words.
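To make these concrete, here is a minimal sketch of extracting each feature type with NLTK (assuming a recent NLTK 3.x with the `punkt` tokenizer and POS tagger models downloaded; `skipgrams` only exists in newer releases):

    import nltk
    from nltk.util import ngrams, skipgrams
    from nltk.stem import PorterStemmer

    sentence = "I cancelled the service because the connection was too slow"
    tokens = nltk.word_tokenize(sentence)

    stems = [PorterStemmer().stem(t) for t in tokens]  # stems
    bigrams = list(ngrams(tokens, 2))                  # 2-grams
    skips = list(skipgrams(tokens, 2, 3))              # 2-skipgrams, gap size <= 3
    pos_tags = nltk.pos_tag(tokens)                    # POS tags

    print(bigrams[:3])  # first few bigrams
    print(pos_tags[:3]) # first few (word, tag) pairs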
Of course you can combine them - for example use skipgrams of lemmas, or POS
tags of lemmas, or even *-grams (choose your favourite :P) of POS-tags of
lemmas.
What would be the sense of using POS tag of lemma? That would describe part of
speech of basic form of word, so it would simplify your feature to facts like
"this is a noun" instead of "this is plural female noun".
Remember that choosing features is one of the most important parts of the
whole process (the other is data preparation, but that deserves the whole
semester of courses, and feature selection can be handled in 3-4 lectures, so
I'm trying to put basics here). You need some kind of intuition while
"hunting" for chunks - for example, if I wanted to find all expressions about
colors, I'd probably try using 2- or 3-grams of words, represented as binary
vector described whether such n-gram contains most popular color names and
modifiers (like "light", "dark", etc) and POS tag. Even if you'd miss some
colors (say, "magenta") you could find them in text if your method (I'd go
with CRF again, this is wonderful tool for this kind of tasks) generalized
learned knowledge enough.
|
Python mock Patch os.environ and return value
Question: Unit testing conn() using mock:
app.py
    import os
    import mysql.connector
    from urlparse import urlparse

    def conn():
        try:
            if 'DATABASE_URL' in os.environ:
                url = urlparse(os.environ['DATABASE_URL'])
                g.db = mysql.connector.connect(user=url.username, password=url.password,
                                               host=url.hostname, database=url.path[1:])
        except mysql.connector.Error as err:
            return "Error"
test.py
    def test_conn(self):
        with patch('app.mysql.connector') as mock_mysql:
            with patch('app.os.environ') as mock_environ:
                conn()
                mock_mysql.connect.assert_called_with("credentials")
Error: _AssertionError_ - `mock_mysql.connect` is not called. I believe this is
because 'DATABASE_URL' is not in my patched os.environ, and because of that the
call to mock_mysql.connect is never made.
Questions:
1. What changes do I need to make to get this test code to work?
2. Do I also have to patch `urlparse`?
Answer:

    import os
    import mock
    import mysql.connector
    from urlparse import urlparse

    @mock.patch.dict(os.environ, {'DATABASE_URL': 'mytemp'})
    def conn():
        print os.environ['DATABASE_URL']
        try:
            if 'DATABASE_URL' in os.environ:
                url = urlparse(os.environ['DATABASE_URL'])
                g.db = mysql.connector.connect(user=url.username, password=url.password,
                                               host=url.hostname, database=url.path[1:])
        except mysql.connector.Error as err:
            return "Error"
You can try it this way. Note that, unlike `mock.patch`, `mock.patch.dict` does
not pass a mock object into the decorated function, so `conn` keeps its normal
signature and is called as usual.
Or
If you don't want to modify your original function, try this:

    import os
    import mock

    def func():
        print os.environ["mytemp"]

    def test_func():
        k = mock.patch.dict(os.environ, {'mytemp': 'mytemp'})
        k.start()
        func()
        k.stop()

    test_func()
|
Pair of python processes communication
Question: I need to implement 2 python processes:
1. a python client process that will connect to a https server on the web
2. a python server process that will server 1 or 2 clients (UIs) over ssh if client is outside network, or just tcp if client is inside LAN.
Info will flow from 1 to 2 only; then 2 will communicate back and forth with
its clients. 1 will communicate back and forth with the https server and pass
info to 2. 2 is a sort of slave of 1. 2 can still run if 1 is down, but can
provide no info to its clients. 1 can run on its own, but it's important to
have the UI look at what's going on in 1 (through 2). If 2 crashes, 1 still
runs, and that's very important.
Looking at the std lib, I see a number of modules that can help.
1. socketserver
2. multiprocessing
3. subprocess
4. ssh (nothing in stdlib)
Answer: You may be best advised to do this using a **Message Queue** \- it will give
you a solution that is:
1. scalable
2. secure
3. manageable via a web interface
4. SSL-capable
I would suggest that you have a look at RabbitMQ - there is a lovely Python
3.x interface with it.
You could of course do this with TCP Sockets also - but why not let the MQ
handle the nasty synchronisation stuff - and you can concentrate on
implementing the business processes.
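A minimal sketch with the pika client (one common Python interface to RabbitMQ), assuming a broker running on localhost; the exact API differs a little between pika versions:

    import pika

    # --- process 1: publish status updates to a queue ---
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='status')
    channel.basic_publish(exchange='', routing_key='status', body='build ready')
    connection.close()

    # --- process 2 (a separate script): consume them ---
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='status')

    def on_message(ch, method, properties, body):
        print(body)

    channel.basic_consume(on_message, queue='status', no_ack=True)
    channel.start_consuming()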
|
How to update a graph using matplotlib
Question: I'm using Panda and matplotlib to draw graphs in Python. I would like a live
updating gaph. Here is my code:
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import time
import numpy as np
import MySQLdb
import pandas
    def animate():
        conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="sentiment_index", use_unicode=True, charset="utf8")
        c = conn.cursor()
        query = """ SELECT t_date , score FROM mytable where t_date BETWEEN Date_SUB(NOW(), Interval 2 DAY) AND NOW()"""
        c.execute(query)
        rows = c.fetchall()
        df = pandas.read_sql(query, conn, index_col=['t_date'])
        df.plot()
        plt.show()

    animate()
I thought about using FuncAnimation but didn't get the right result. Any help
please?
Answer: The documentation is a bit light on explanation of how to use FuncAnimation.
However, there are [examples in the
gallery](http://matplotlib.sourceforge.net/examples/index.html) and blog
tutorials, such as [Jake
Vanderplas's](http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-
tutorial/) and [Sam Dolan's PDF](http://sam-
dolan.staff.shef.ac.uk/mas212/lectures/l5.pdf).
This example from Jake Vanderplas's tutorial is perhaps the "Hello World" of
matplotlib animation:
    from __future__ import division
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    def init():
        return [line]

    def animate(i, ax, line):
        x = np.linspace(0, 2*np.pi, N) + i/(N*2)
        ax.set_xlim(x.min(), x.max())
        line.set_data(x, np.sin(x))
        return [line]

    N = 100
    fig, ax = plt.subplots()
    line, = ax.plot([], [])
    ax.set_xlim(0, 2*np.pi)
    ax.set_ylim(-1, 1)

    ani = animation.FuncAnimation(
        fig, animate, init_func=init, interval=0, frames=int(4*np.pi*N),
        repeat=True, blit=True, fargs=[ax, line])
    plt.show()
Change various values or lines of code and see what happens. See what happens
if you change `return [line]` to something else. If you study and play with
these examples, you can learn how the pieces fit together.
Once you understand this example, you should be able to modify it to fit your
goal.
If you have trouble, post your code and describe what error message or
misbehavior you see.
Some tips:
* Since animation requires calling `line.set_data`, I don't think you can use Pandas' `df.plot()`. In fact, I'm not sure if the Pandas DataFrame is useful here. You might be better off sucking the data into lists or NumPy arrays and passing those to `line.set` as above, without getting Pandas involved.
* Opening a connection to the database should be done once. `animate` gets called many times. So it is better to define `conn` and `c` and `query` \-- anything that does not change with each call to `animate` \-- outside of `animate`, and pass them back as arguments to `animate` via the `fargs` parameter.
|
Pyspark VS native python speed
Question: I am running Spark on a Ubuntu VM (4GB, 2 cores). I am doing this simple test
of a word count. I am comparing it to a simple Python `dict()` counter. I find
that Pyspark is 5x slower (takes more time).
Is this because of the initialisation or do I need to tune a parameter?
    import sys, os
    sys.path.append('/home/dirk/spark-1.4.1-bin-hadoop2.6/python')
    os.environ['SPARK_HOME'] = '/home/dirk/spark-1.4.1-bin-hadoop2.6/'
    os.environ['SPARK_WORKER_CORES'] = '2'
    os.environ['SPARK_WORKER_MEMORY'] = '2g'

    import time
    import py4j
    from pyspark import SparkContext, SparkConf
    from operator import add

    conf = (SparkConf().setMaster('local').setAppName('app'))
    sc = SparkContext(conf=conf)

    f = 'big3.txt'

    s = time.time()
    lines = sc.textFile(f)
    counts = lines.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(add)
    output = counts.collect()
    print len(output)
    for (word, count) in output:
        if count > 100: print word, count
    sc.stop()
    print 'elapsed', time.time() - s

    s = time.time()
    f1 = open(f, 'r')
    freq = {}
    for line in f1:
        words = line.split(' ')
        for w in words:
            if w not in freq:
                freq[w] = 0
            freq[w] += 1
    f1.close()
    print len(freq)
    for (w, c) in freq.iteritems():
        if c > 100: print w, c
    print 'elapsed', time.time() - s
Answer: As mentioned in comments, Spark isn't ideal for such short-lived jobs. You'd
better run it on hundreds of files, each being at least a few megabytes in
size. Even then it might saturate the HDD reading speed before the CPU gets
100% utilized, ideally you'd have at least a few computers with SSDs to get
real advantage from Spark :)
|
Get Python type of Django's model field?
Question: How can I get corresponding Python type of a Django model's field class ?
    from django.db import models

    class MyModel(models.Model):
        value = models.DecimalField()

    type(MyModel._meta.get_field('value'))  # <class 'django.db.models.fields.DecimalField'>
I'm looking how can I get corresponding python type for field's value -
`decimal.Decimal` in this case.
Any idea ?
p.s. I've attempted to work around this with field's `default` attribute, but
it probably won't work in all cases where field has no default value defined.
Answer: I don't think you can determine the actual Python type programmatically there.
Part of this is due to Python's dynamic typing. If you look at the doc for
[converting values to python
objects](https://docs.djangoproject.com/en/1.8/howto/custom-model-
fields/#converting-values-to-python-objects), there is no hard _predefined_
type for a field: you can write a custom field that returns object in
different types depending on the database value. The doc of [model
fields](https://docs.djangoproject.com/en/1.8/ref/models/fields/) specifies
what Python type corresponds to each field type, so you can do this
"statically".
But why would you need to know the Python types in advance in order to
serialize them? The serialize modules are supposed to do this for you, just
throw them the objects you need to serialize. Python is a dynamically typed
language.
|
complex ajax query using parse api
Question: I'm dealing with parse.com's APIs. I want to make complex queries in my
JavaScript function.
I couldn't manage to write the equivalent of the `params` section in the code
below for an AJAX GET request. Any help appreciated.
For example, to retrieve scores between 1000 and 3000, they give this example
in Python:
    import json, httplib, urllib

    connection = httplib.HTTPSConnection('api.parse.com', 443)
    params = urllib.urlencode({"where": json.dumps({
        "score": {
            "$gte": 1000,
            "$lte": 3000
        }
    })})
    connection.connect()
    connection.request('GET', '/1/classes/GameScore?%s' % params, '', {
        "X-Parse-Application-Id": "pcHvGniu2bbGggMofkcVKQUK91g5V3U5g4ZGWifK",
        "X-Parse-REST-API-Key": "5pI5vb7dNu8mRY6yPFTOtpIdpUHeqv5gMChFaQxK"
    })
    result = json.loads(connection.getresponse().read())
    print result
Answer: The code in javascript can be even shorter:
    $.ajax({
        type: "GET",
        beforeSend: function (request) {
            request.setRequestHeader("X-Parse-Application-Id", 'pcHvGniu2bbGggMofkcVKQUK91g5V3U5g4ZGWifK');
            request.setRequestHeader("X-Parse-REST-API-Key", '5pI5vb7dNu8mRY6yPFTOtpIdpUHeqv5gMChFaQxK');
        },
        url: "https://api.parse.com/1/classes/GameScore",
        data: "where=" + escape(JSON.stringify({"score": {"$gte": 1000, "$lte": 3000}})),
        processData: false,
        success: function(msg) {
            console.log(JSON.parse(msg));
        }
    });
jQuery will append `data` to the URL as a query string; the final request URL will be:

    https://api.parse.com/1/classes/GameScore?where=%7B%22score%22%3A%7B%22%24gte%22%3A1000%2C%22%24lte%22%3A3000%7D%7D
And of course it's OK to construct the url by ourselves:
    var url = "https://api.parse.com/1/classes/GameScore?where=" + escape(JSON.stringify({"score": {"$gte": 1000, "$lte": 3000}}));

    $.ajax({
        type: "GET",
        beforeSend: function (request) {
            request.setRequestHeader("X-Parse-Application-Id", 'pcHvGniu2bbGggMofkcVKQUK91g5V3U5g4ZGWifK');
            request.setRequestHeader("X-Parse-REST-API-Key", '5pI5vb7dNu8mRY6yPFTOtpIdpUHeqv5gMChFaQxK');
        },
        url: url,
        processData: false,
        success: function(msg) {
            console.log(JSON.parse(msg));
        }
    });
The request URL is the same as the previous one.
|
Overwrite/Don't overwrite option for Dropbox API Upload
Question: I have found the following script from an old StackOverflow question which
gives an example of how to connect to my Dropbox via the API:
import dropbox
client = dropbox.client.DropboxClient(<auth_token>)
print 'linked account: ', client.account_info()
f = open('working-draft.txt', 'rb')
response = client.put_file('/magnum-opus.txt', f)
print 'uploaded: ', response
folder_metadata = client.metadata('/')
print 'metadata: ', folder_metadata
f, metadata = client.get_file_and_metadata('/magnum-opus.txt')
out = open('magnum-opus.txt', 'wb')
out.write(f.read())
out.close()
print metadata
I've played around with this and got it to upload the files I want to the
Dropbox sub folders I want. However, I am unsure how to specify whether to
overwrite a file already present on Dropbox with the same name or not. There
do not appear to be any example scripts written that cover this online
already.
The API documentation makes reference to an overwrite option as part of
chunked uploading
[here](https://www.dropbox.com/developers/core/docs/python#ChunkedUploader),
but I am unsure how I would work this into the example script shown above.
Can anyone assist?
Answer: The documentation for `put_file` is here:
<https://www.dropbox.com/developers/core/docs/python#DropboxClient.put_file>.
Just pass `overwrite=True` to overwrite. (The default is `False`.)
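Applied to the example above:

    # overwrite the existing Dropbox file instead of having the upload renamed
    response = client.put_file('/magnum-opus.txt', f, overwrite=True)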
|
NumPy: Importing a Sparse Matrix from R into Python
Question: I have a matrix in R that is very large and sparse, created with the 'Matrix'
package, and I want to handle in python + numpy. The R object is in the csc
format, and if I export it using the function writeMM in the Matrix package,
the output looks something like this:
%%MatrixMarket matrix coordinate real general
4589 17366 160441
22 1 5.954510725783322
36 1 29.77255362891661
41 1 23.81804290313329
74 1 5.954510725783322
116 1 59.54510725783322
127 1 11.909021451566645
159 1 17.863532177349967
Where the first column is the row, the second one the column, and the third
one is the value.
I was wondering how could I import that into python. I see that scipy has a
module to operate with column-compressed sparse matrices, but it has no
function to create one from a file.
Answer: You can use
[scipy.io.mmread](http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.mmread.html#scipy.io.mmread)
which does exactly what you want.
    In [10]: from scipy.io import mmread

    In [11]: mmread("sparse_from_file")
    Out[11]:
    <4589x17366 sparse matrix of type '<class 'numpy.float64'>'
        with 7 stored elements in COOrdinate format>
Note the result is a COO sparse matrix. If you want a `csc_matrix` you can
then use `sparse.coo_matrix.tocsc`.
Now you mention you want to handle this _very large and sparse matrix_ with
numpy. That might turn out to be impractical since numpy operates on dense
arrays only and if your matrix is indeed very large and sparse you probably
can't afford to store it in dense format.
So you could be better off sticking with the most efficient `scipy.sparse`
format for your use case.
|
Error running *requests*
Question: Installed Python/pip on OS X 10.10 using the instructions here:
<http://docs.python-guide.org/en/latest/starting/install/osx/>
Only installed to gain access to Pip
Installed requests using `pip install requests`
Ran script
import requests
feed_url = 'http://www.mylocation.com/the_data'
#grab xml from URL
r = requests.get(feed_url)
r.text
print r
Error is
Traceback (most recent call last):
File "get_all_listings.py", line 1, in <module>
import requests
File "/usr/local/lib/python2.7/site-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python2.7/site-packages/requests/utils.py", line 12, in <module>
import cgi
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/cgi.py", line 50, in <module>
import mimetools
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/mimetools.py", line 6, in <module>
import tempfile
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: dlopen(/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so, 2): Symbol not found: __PyErr_ReplaceException
Referenced from: /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so
Expected in: flat namespace
in /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so
Output from **otool -L $(which python)**
/usr/local/bin/python:
/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/Python (compatibility version 2.7.0, current version 2.7.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0)
Answer: `python` was pointing to the wrong version.
Fixed by running `hash -d python`.
|
Identifying Lists of Numbers With Regular Expressions in Python
Question: I'm working with data from an online math tutor program, and I'd like to be
able to identify some features of those problems. For example, for the
following question:
Find the median of the 7 numbers in the following list:
[22, 13, 5, 16, 4, 12, 30]
I'd like to know if
1. the problem includes a list,
2. how long the longest list in the problem is, and
3. how many numbers are in the problem total.
So for the problem above, it has a list, the list is 7 numbers long, and there
are 8 numbers in the problem total.
I've written the following regex script that can identify positive and
negative numbers and floats, but I can't figure out how to identify a series
of numbers that are in a list:
'[-+]{0,1}[0-9]+\.{0,1}(?! )[0-9]+'
Additionally, the data is poorly formatted, all of the following examples are
possible for what a list of numbers can look like:
[1, 2, 3]
1, 2, 3
1,2,3.
1, 2, 3, 4, 5
I've been working on this for a few days now, and have stopped being able to
make any progress on it. Can anyone help? It might not even be a problem to
solve with a regex, I'm just not sure how to go about it from this point.
Answer: Assuming you get the input as a string - you can use
[`re.findall`](https://docs.python.org/2/library/re.html#re.findall) to
extract only the numbers out of it:
import re
s = """[1, -2, 3]
1, 2, 3
1,2,3.
1, 2, 3, 4, 5"""
res = re.findall(r'-?\d+', s)
print res # ['1', '-2', '3', '1', '2', '3', '1', '2', '3', '1', '2', '3', '4', '5']
# and if you want to turn the strings into numbers:
print map(int, res) # [1, -2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 4, 5]
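To cover the other two points (whether the problem contains a list at all, and how long the longest list is), one rough approach is to treat a "list" as any run of two or more comma-separated numbers, with brackets optional. A sketch under that assumption:

    import re

    s = "Find the median of the 7 numbers in the following list: [22, 13, 5, 16, 4, 12, 30]"

    num = r'-?\d+(?:\.\d+)?'                           # one number, optionally negative/float
    list_pattern = num + r'(?:\s*,\s*' + num + r')+'   # two or more numbers joined by commas

    lists = re.findall(list_pattern, s)
    has_list = bool(lists)                                                  # 1. True
    longest = max(len(re.findall(num, l)) for l in lists) if lists else 0   # 2. 7
    total = len(re.findall(num, s))                                         # 3. 8

    print has_list, longest, total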
|
How to print current logging configuration used by the python logging module?
Question: I'm using the python logging module.
I update the logging config using logging.dictConfig().
I would like a way to read the current configuration (e.g. level) used by each
logger and print it.
How can I get and print this information?
Answer: If what you want is the logging level for a particular logger, then you can
use -
[`logger.getEffectiveLevel()`](https://docs.python.org/2/library/logging.html#logging.Logger.getEffectiveLevel)
, this would give the integer value for the currently logging level for the
logger, and then you can use it with
[`logging.getLevelName()`](https://docs.python.org/2/library/logging.html#logging.getLevelName)
, to get the string representation for that level.
Example -
>>> import logging
>>> l = logging.getLogger(__name__)
>>> l.setLevel(logging.DEBUG)
>>> logging.getLevelName(l.getEffectiveLevel())
'DEBUG'
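If you want to do this for every configured logger rather than one you already hold a reference to, one possibility is to walk the logging module's internal registry. Note that `Logger.manager.loggerDict` is an implementation detail rather than a documented API, so treat this as a sketch:

    import logging

    for name, logger in logging.Logger.manager.loggerDict.items():
        # the dict also contains PlaceHolder objects for intermediate dotted names
        if isinstance(logger, logging.Logger):
            print name, logging.getLevelName(logger.getEffectiveLevel())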
|
python ImportError without import call
Question: Okay, I'm stumped on this one. I've looked around but I can't find anything
and can't figure out how to debug this. Basically, python is throwing an
`ImportError` at a line of code where I'm not importing anything. I have a
decently large module `ICgen` which contains the module `ICgen_settings`.
Traceback (most recent call last):
File "<ipython-input-5-105e3826f255>", line 1, in <module>
IC = ICgen.load('IC.p')
File "diskpy/ICgen/ICgen.py", line 339, in load
ICobj.settings.load(input_dict['settings'])
File "diskpy/ICgen/ICgen_settings.py", line 484, in load
tmp_dict = pickle.load(open(settings_filename, 'rb'))
ImportError: No module named ICgen_settings
This doesn't make any sense to me. It clearly has found `ICgen_settings` since
it's throwing the error from within it. Furthermore, I'm not doing an `import`
call when it throws the error!
Any ideas?
Answer: When you try to `pickle.load` the `tmp_dict`, the modules for any of the
objects in the incoming data stream need to be loaded.
So yes, you were in fact doing an `import` call when the error was thrown: you
were unpickling an object of some type that needed `ICgen_settings`. NB:
unpickling code can run arbitrary Python statements. Never unpickle objects
you don't trust!
Now, as to why it's "clearly found `ICgen_settings`": No, just being in a file
called `ICgen_settings.py` does not mean that the line `import ICgen_settings`
will succeed. Whether the import succeeds depends on `sys.path`, which
originates from your `PYTHONPATH` environment variable. It also depends on the
module layout of `ICgen_settings`: normally, it would be an `ICgen_settings`
**folder** (not file) containing a `__init__.py` file.
|
Python Dynamic Import Can't Find Packages in Virtualenv
Question: So, I have a directory structure:
main.py
\_ modules/
\_ a.py
\_ b.py
In main.py, modules are dynamically loaded at runtime, depending on which
modules are specified. (This allows for a hypothetical `c.py` to be added,
`main.py` to be rerun, and the program to detect the addition of `c.py` and
run it.)
The problem is that `b.py` imports a module installed via pip (in a
virtualenv). (I'm going to refer to it as a library to help avoid confusion.)
When `b.py` is run directly (`python b.py`) the library imports just fine.
When the shell is opened and the library is imported by hand, it works.
But, when `main.py` is run and `b.py` is dynamically imported (using
`pkgutil.iter_modules` to detect the modules and then
`importlib.import_module` to import the required ones), the library that
`b.py` imports isn't found - an `ImportError: No module` is thrown.
To recap: a module imports an installed library, and this works when the
module is run directly or when the library in question is imported manually in
the python interpreter, but when the module is dynamically imported, the
library isn't found. What gives?
Answer: Virtualenvs are notorious. They tinker around with paths lots of times and
mess up quite a lot of things.
You need to check the `sys.path` list inside `b.py` in both scenarios. This
will most probably not be the same.
You'd need to set `sys.path` to include the directory of the library that
fails to import. You can check `sys.path` in `main.py`, and if it is correct
there, it would mean that one of your other imports is changing `sys.path` in
a way that removes that entry.
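A quick way to compare the two environments, as a sketch:

    # put this at the top of b.py, and again in main.py, then diff the output
    import sys
    print(sys.path)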
|
Full-matrix approach to backpropagation in Artificial Neural Network
Question: I am learning Artificial Neural Network (ANN) recently and have got a code
working and running in Python for the same based on mini-batch training. I
followed the book of [Michael Nilson's Neural Networks and Deep
Learning](http://neuralnetworksanddeeplearning.com/chap1.html) where there is
step by step explanation of each and every algorithm for the beginners. There
is also a fully working code for handwritten digit recognition which works
fine for me too.
However, I am trying to tweak the code a bit by means of passing the whole
mini-batch together to train by backpropagation in the matrix form. I have
also developed working code for that, but the code runs really slowly. Is
there any way I can implement a full matrix-based approach to mini-batch
learning of the network with the backpropagation algorithm?
    import numpy as np
    import pandas as pd

    class Network:

        def __init__(self, sizes):
            self.layers = len(sizes)
            self.sizes = sizes
            self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
            self.weights = [np.random.randn(y, x) for y, x in zip(sizes[1:], sizes[:-1])]

        def feed_forward(self, a):
            for w, b in zip(self.weights, self.biases):
                a = sigmoid(np.dot(w, a) + b)
            return a

        # Calculate the cost derivative (Gradient of C w.r.t. 'a' - Nabla C(a))
        def cost_derivative(self, output_activation, y):
            return (output_activation - y)

        def update_mini_batch(self, mini_batch, eta):
            from scipy.linalg import block_diag
            n = len(mini_batch)
            xs = [x for x, y in mini_batch]
            features = block_diag(*xs)
            ys = [y for x, y in mini_batch]
            responses = block_diag(*ys)
            ws = [a for a in self.weights for i in xrange(n)]
            new_list = []
            k = 0
            while (k < len(ws)):
                new_list.append(ws[k: k + n])
                k += n
            weights = [block_diag(*elems) for elems in new_list]
            bs = [b for b in self.biases for i in xrange(n)]
            new_list2 = []
            j = 0
            while (j < len(bs)):
                new_list2.append(bs[j: j + n])
                j += n
            biases = [block_diag(*elems) for elems in new_list2]
            baises_dim_1 = [np.dot(np.ones((n*b.shape[0], b.shape[0])), b) for b in self.biases]
            biases_dim_2 = [np.dot(b, np.ones((b.shape[1], n*b.shape[1]))) for b in baises_dim_1]
            weights_dim_1 = [np.dot(np.ones((n*w.shape[0], w.shape[0])), w) for w in self.weights]
            weights_dim_2 = [np.dot(w, np.ones((w.shape[1], n*w.shape[1]))) for w in weights_dim_1]
            nabla_b = [np.zeros(b.shape) for b in biases_dim_2]
            nabla_w = [np.zeros(w.shape) for w in weights_dim_2]
            delta_b = [np.zeros(b.shape) for b in self.biases]
            delta_w = [np.zeros(w.shape) for w in self.weights]
            zs = []
            activation = features
            activations = [features]
            for w, b in zip(weights, biases):
                z = np.dot(w, activation) + b
                zs.append(z)
                activation = sigmoid(z)
                activations.append(activation)
            delta = self.cost_derivative(activations[-1], responses) * sigmoid_prime(zs[-1])
            nabla_b[-1] = delta
            nabla_w[-1] = np.dot(delta, activations[-2].transpose())
            for l in xrange(2, self.layers):
                z = zs[-l]  # the weighted input for that layer
                activation_prime = sigmoid_prime(z)  # the derivative of activation for the layer
                delta = np.dot(weights[-l + 1].transpose(), delta) * activation_prime  # calculate the adjustment term (delta) for that layer
                nabla_b[-l] = delta  # calculate the bias adjustments - by means of using eq-BP3.
                nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())  # calculate the weight adjustments - by means of using eq-BP4.
            delta_b = [self.split_cases(b, n) for b in nabla_b]
            delta_w = [self.split_cases(w, n) for w in nabla_w]
            self.weights = [w - (eta/n) * nw for w, nw in zip(self.weights, delta_w)]
            self.biases = [b - (eta/n) * nb for b, nb in zip(self.biases, delta_b)]

        def split_cases(self, mat, mini_batch_size):
            i = 0
            j = 0
            dim1 = mat.shape[0]/mini_batch_size
            dim2 = mat.shape[1]/mini_batch_size
            sum_samples = np.zeros((dim1, dim2))
            while i < len(mat):
                sum_samples = sum_samples + mat[i: i + dim1, j: j + dim2]
                i += dim1
                j += dim2
            return sum_samples

        """Stochastic Gradient Descent for training in epochs"""
        def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
            n = len(training_data)
            if test_data:
                n_test = len(test_data)
            for j in xrange(epochs):
                np.random.shuffle(training_data)  # for each epoch the mini-batches are selected randomly
                mini_batches = [training_data[k: k+mini_batch_size] for k in xrange(0, n, mini_batch_size)]  # select equal sizes of mini-batches for the epochs (last mini_batch size might differ however)
                c = 1
                for mini_batch in mini_batches:
                    print "Updating mini-batch {0}".format(c)
                    self.update_mini_batch(mini_batch, eta)
                    c += 1
                if test_data:
                    print "Epoch {0}: {1}/{2}".format(j, self.evaluate(test_data), n_test)
                else:
                    print "Epoch {0} completed.".format(j)

        def evaluate(self, test_data):
            test_results = [(np.argmax(self.feed_forward(x)), y) for (x, y) in test_data]
            return (sum(int(x == y) for x, y in test_results))

        def export_results(self, test_data):
            results = [(np.argmax(self.feed_forward(x)), y) for (x, y) in test_data]
            k = pd.DataFrame(results)
            k.to_csv('net_results.csv')

    # Global functions
    ## Activation function (sigmoid)
    @np.vectorize
    def sigmoid(z):
        return 1.0/(1.0 + np.exp(-z))

    ## Activation derivative (sigmoid_prime)
    @np.vectorize
    def sigmoid_prime(z):
        return sigmoid(z)*(1 - sigmoid(z))
Answer: Here is my code. The time taken to iterate 30 epochs reduces from 800+ seconds
to 200+ seconds on my machine.
As I am new to python, I use what is readily available. This snippet only
requires numpy to run.
Give it a try.
    def feedforward2(self, a):
        zs = []
        activations = [a]
        activation = a
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        return (zs, activations)

    def update_mini_batch2(self, mini_batch, eta):
        batch_size = len(mini_batch)
        # transform to (input x batch_size) matrix
        x = np.asarray([_x.ravel() for _x, _y in mini_batch]).transpose()
        # transform to (output x batch_size) matrix
        y = np.asarray([_y.ravel() for _x, _y in mini_batch]).transpose()
        nabla_b, nabla_w = self.backprop2(x, y)
        self.weights = [w - (eta / batch_size) * nw for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b - (eta / batch_size) * nb for b, nb in zip(self.biases, nabla_b)]
        return

    def backprop2(self, x, y):
        nabla_b = [0 for i in self.biases]
        nabla_w = [0 for i in self.weights]
        # feedforward
        zs, activations = self.feedforward2(x)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta.sum(1).reshape([len(delta), 1])  # reshape to (n x 1) matrix
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in xrange(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
            nabla_b[-l] = delta.sum(1).reshape([len(delta), 1])  # reshape to (n x 1) matrix
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return (nabla_b, nabla_w)
|
Wrong variable data type?
Question: I have a class which is intended to grab the user name and email from Git
commits:

    class BitbucketData(object):

        def get_user_name(self):
            proc = subprocess.Popen("git --no-pager show -s --format='%an'", stdout=subprocess.PIPE)
            committer_name = proc.stdout.read()
            if committer_name:
                return committer_name

        def get_user_email(self):
            proc = subprocess.Popen("git --no-pager show -s --format='%aE'", stdout=subprocess.PIPE)
            committer_email = proc.stdout.read()
            if committer_email:
                return committer_email
It is then used to send notifications to users (below is the working version:
all `sender` and `receiver` data is set explicitly, not in variables; the
variable assignments are commented out here):

    class Services(object):

        def sendmail(self, event):
            repo = BitbucketData()
            #to_address = repo.get_user_email()
            #to_address = '[email protected]'
            #to_name = repo.get_user_name()
            #to_name = 'Test user'
            subject = 'Bamboo build and deploy ready'
            sender = 'Bamboo agent <[email protected]>'
            text_subtype = 'plain'
            message = """
            Hello, {}.
            Your build ready.
            Link to scenario: URL
            Link to build and deploy results: {})
            """.format('Test user', os.environ['bamboo_resultsUrl'])
            msg = MIMEText(message, text_subtype)
            msg['Subject'] = subject
            msg['From'] = sender
            msg['To'] = 'Test user <[email protected]>'
            smtpconnect = smtplib.SMTP('outlook.office365.com', 587)
            smtpconnect.set_debuglevel(1)
            smtpconnect.starttls()
            smtpconnect.login('[email protected]', 'password')
            smtpconnect.sendmail('[email protected]', '[email protected]', msg.as_string())
            smtpconnect.quit()
            print('Mail sent')
            print(repo.get_user_email())
The problem is: if I use data from variables (e.g. `to_address =
'[email protected]'`, or the `BitbucketData()` class with `to_address =
repo.get_user_email()`), I get an error from the **Office365** server:
>
> ...
> reply: '250 2.1.0 Sender OK\r\n'
> reply: retcode (250); Msg: 2.1.0 Sender OK
> send: "rcpt TO:<'[email protected]'>\r\n"
> reply: '501 5.1.3 Invalid address\r\n'
> reply: retcode (501); Msg: 5.1.3 Invalid address
> ...
> File "C:\Python27\lib\smtplib.py", line 742, in sendmail
> raise SMTPRecipientsRefused(senderrs)
> smtplib.SMTPRecipientsRefused: {"'[email protected]'\n": (501, '5.1.3
> Invalid address')}
>
When using variables, the code looks like this:
    class Services(object):

        def sendmail(self, event):
            repo = BitbucketData()
            to_address = repo.get_user_email()
            #to_address = '[email protected]'
            to_name = repo.get_user_name()
            #to_name = 'Test user'
            from_address = '[email protected]'
            subject = 'Bamboo build and deploy ready'
            sender = 'Bamboo agent <[email protected]>'
            text_subtype = 'plain'
            message = """
            Hello, {}.
            Your build ready.
            Link to scenario: URL
            Link to build and deploy results: {})
            """.format(to_name, os.environ['bamboo_resultsUrl'])
            msg = MIMEText(message, text_subtype)
            msg['Subject'] = subject
            msg['From'] = sender
            msg['To'] = to_name
            smtpconnect = smtplib.SMTP('outlook.office365.com', 587)
            smtpconnect.set_debuglevel(1)
            smtpconnect.starttls()
            smtpconnect.login('[email protected]', 'password')
            smtpconnect.sendmail(from_address, to_address, msg.as_string())
            smtpconnect.quit()
            print('Mail sent')
            print(repo.get_user_email())
What am I (or Microsoft's SMTP server...) doing wrong here?
UPD
    def sendmail(self, event):
        repo = BitbucketData()
        print(repr(repo.get_user_email()))
        ...
Gives me:
>
> ...
> Creating new AutoEnv config
> "'[email protected]'\n"
> send: 'ehlo pc-user.kyiv.domain.net\r\n'
> ...
>
Answer: Seems like you are receiving the following as output from your git commands -
"'[email protected]'\n"
You are directly passing this to smtpconnect, which is causing the issue, as
this is not a valid email address. You need to extract the actual string from
it. One way to do that is the
[`ast.literal_eval()`](https://docs.python.org/2/library/ast.html#ast.literal_eval)
function. Example -
>>> import ast
>>> e = ast.literal_eval("'[email protected]'\n")
>>> print(e)
[email protected]
You would need to do this when returning the email from the `get_user_email()`
function. Most probably `committer_name` has this issue as well, so you may
want to do the same there.
* * *
From documentation of `ast.literal_eval()` \-
> **ast.literal_eval(node_or_string)**
>
> Safely evaluate an expression node or a Unicode or Latin-1 encoded string
> containing a Python literal or container display. The string or node
> provided may only consist of the following Python literal structures:
> strings, numbers, tuples, lists, dicts, booleans, and None.
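Applied to the class above, a minimal sketch (assuming `import ast` at the top of the module):

    def get_user_email(self):
        proc = subprocess.Popen("git --no-pager show -s --format='%aE'", stdout=subprocess.PIPE)
        committer_email = proc.stdout.read()
        if committer_email:
            # turns the quoted, newline-terminated output "'[email protected]'\n"
            # into a plain address string
            return ast.literal_eval(committer_email)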
|
"InsecurePlatformWarning" while installing django
Question: I am using:
-CentOS 6.6 on my Mac on virtual environment,
-Python 2.6.6 I am using this [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-the-django-web-framework-on-centos-7) to install Django:
I am new to Python and Django and i found this tutorial easy to use so i was
following the steps for Install through pip in a Virtualenv as i managed to
install pip. it was working fine till i run pip install django command i got
this output:
(djenv)[root@blue djProject]# pip install django
/root/djProject/djenv/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Collecting django
/root/djProject/djenv/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Using cached Django-1.8.3-py2.py3-none-any.whl
Installing collected packages: django
Successfully installed django-1.8.3
And after that when i try checking the version using django-admin --version
command i got this error which i don't understand:
(djenv)[root@blue djProject]# clear; django-admin --version
Traceback (most recent call last):
File "/root/djProject/djenv/bin/django-admin", line 7, in <module>
from django.core.management import execute_from_command_line
File "/root/djProject/djenv/lib/python2.6/site-packages/django/__init__.py", line 1, in <module>
from django.utils.version import get_version
File "/root/djProject/djenv/lib/python2.6/site-packages/django/utils/version.py", line 7, in <module>
from django.utils.lru_cache import lru_cache
File "/root/djProject/djenv/lib/python2.6/site-packages/django/utils/lru_cache.py", line 28
fasttypes = {int, str, frozenset, type(None)},
^
SyntaxError: invalid syntax
Can anyone please help me with this error please.
Answer: Did you follow the link that it says contains more information?
It says:
> If you encounter this warning, it is strongly recommended you upgrade to a
> newer Python version, or that you use pyOpenSSL as described in the OpenSSL
> / PyOpenSSL section.
Python 2.6.x is very old.
|
NoReverseMatch at / Reverse for 'password_change_done' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
Question: my views part:
    @login_required
    def password_change(request,
                        template_name='register.html',
                        post_change_redirect=None,
                        password_change_form=PasswordChangeForm):
        if post_change_redirect is None:
            post_change_redirect = reverse('student:login')
        else:
            post_change_redirect = reverse_url(post_change_redirect)
        if request.method == 'POST':
            form = password_change_form(user=request.user, data=request.POST)
            if form.is_valid():
                form.save()
                update_session_auth_hash(request, form.user)
                return HttpResponseRedirect(post_change_redirect)
        else:
            form = password_change_form(user=request.user)
        context = {
            'form': form,
            'title': _('Password change'),
        }
        return TemplateResponse(request, template_name, context)

    @login_required
    def password_change_done(request,
                             template_name='password_change_done.html'):
        context = {
            'title': _('Password change successful'),
        }
        return TemplateResponse(request, template_name, context)
my project urls:
    from django.conf.urls import include, url
    from django.contrib import admin

    urlpatterns = [
        url(r'^stu/', include('student.urls', namespace='student')),
        url(r'^admin/', include(admin.site.urls)),
    ]
corresponding app urls:
    from django.conf.urls import url, patterns
    from django.contrib.auth import views as auth_views
    from . import views

    urlpatterns = [
        url(r'^register/$', views.register, name='register'),
        url(r'^login/$', views.user_login, name='login'),
        url(r'^logout/$', views.user_logout, name='logout'),
        url(r'^change-password/$', auth_views.password_change, name='password_reset'),
        url(r'^password-change-done/$', views.password_change_done, name='password_change_done'),
        url(r'^restricted/', views.restricted, name='restricted'),
        url(r'^mains/', views.mains, name='mains'),
    ]
my traceback:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/stu/change-password/
Django Version: 1.8.1
Python Version: 3.4.3
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'student')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback:
File "C:\Python34\lib\site-packages\django\core\handlers\base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Python34\lib\site-packages\django\views\decorators\debug.py" in sensitive_post_parameters_wrapper
76. return view(request, *args, **kwargs)
File "C:\Python34\lib\site-packages\django\utils\decorators.py" in _wrapped_view
110. response = view_func(request, *args, **kwargs)
File "C:\Python34\lib\site-packages\django\contrib\auth\decorators.py" in _wrapped_view
22. return view_func(request, *args, **kwargs)
File "C:\Python34\lib\site-packages\django\contrib\auth\views.py" in password_change
293. post_change_redirect = reverse('password_change_done')
File "C:\Python34\lib\site-packages\django\core\urlresolvers.py" in reverse
579. return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "C:\Python34\lib\site-packages\django\core\urlresolvers.py" in _reverse_with_prefix
496. (lookup_view_s, args, kwargs, len(patterns), patterns))
Exception Type: NoReverseMatch at /stu/change-password/
Exception Value: Reverse for 'password_change_done' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
I'm following the django.contrib.auth docs and trying to implement these views
in my own app. Everything works fine: I've made my own login and logout views.
But when I add the `'password_change'` and `'password_change_done'` views, it
throws the above exception.
I think the `'password_change_done'` view is not able to resolve the URL
because I'm using namespacing in my project-level URLs. How can I force it to
use my namespace, i.e. `'student'`?
Please help me to make it correct.
Thanks in advance!
Answer: You should be able to use the `current_app` parameter to tell the URL resolver
where your views are.
url(r'^change-password/$', auth_views.password_change, {'current_app': 'student'}, name='password_reset'),
url(r'^password-change-done/$', views.password_change_done, {'current_app': 'student'}, name='password_change_done'),
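Alternatively, since the built-in view only falls back to `reverse('password_change_done')` when no redirect is given, you should be able to pass the namespaced target explicitly (a sketch reusing the URL names above; Django 1.8's `password_change` runs the value through `resolve_url`, which accepts URL names):

    url(r'^change-password/$', auth_views.password_change,
        {'post_change_redirect': 'student:password_change_done'}, name='password_reset'),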
|
Create random, unique variable names for objects
Question: I'm playing around with Python and any programming language for the first
time, so please bear with me. I started an online class two weeks ago, but try
to develop a small game at the side to learn faster (and have fun). It's a
text adventure, but is shall have random encounters and fights with enemies
that have random equipment that the players can then loot.
This is my problem: If I create random objects/weapons for my random
encounter, I need to make sure that the object has a unique name. The way the
level is designed there could in theory be an infinite number of objects (it
can be open ended, with encounters just popping up).
This is my approach so far
    class Item_NPCs:  # create objects for NPCs
        def __init__(self, item_type, item_number):
            # e.g. item_type 1 = weapons, item_type 2 = potions, etc.
            if item_type == 1 and item_number == 1:
                self.property1 = 5
                self.property2 = 4
            if item_type == 1 and item_number == 2:
                pass  # etc. :)

    def prepare_encounter():
        inventory_NPC = []  # empty list for stuff the NPC carries around
        XXX = Class_Item(item_type, item_number)  # I might randomize the arguments.
What is important is that I want "XXX" to be unique and random, so that no
object exists more than once and can later be put into the player's inventory.
How to do that?
Joe
Answer: Why do you need it to be random? You could simply use a list, and append
every new object to the list, with its index being its unique identifier:

    items = []
    items.append(Class_Item(item_type, item_number))
But if you really need a random identifier, maybe you can use a dictionary:

    items = dict()
    items[random_id] = Class_Item(item_type, item_number)

This requires `random_id` to be hashable (but it should be, if it is a number
or a string).
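If you want identifiers that are both unique and random without managing them yourself, Python's `uuid` module is one option (a sketch):

    import uuid

    items = dict()
    random_id = uuid.uuid4().hex  # e.g. '9f1c0d2a...'; unique for all practical purposes
    items[random_id] = Class_Item(item_type, item_number)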
|
Need table to populate with search results from search bar
Question: I have a webpage with a search bar and a bunch of dropdown menus but just the
search bar is important right now. Anyways I have it working where when I
click the go button after the search bar it brings up a table, but wont put
the searched item in the table as I thought it would. Im using a
SimpleHTTPServer through Python
    <form role="form" id="form">
        <fieldset>
            <div class="form-group">
                <div class="row">
                    <label for="search"> </label>
                    <div class="col-lg-7 col-lg-offset-2">
                        <input type="text" class="form-control" name="search" id="search" placeholder="Search for..." />
                    </div>
                    <div class="col-lg-2">
                        <br>
                        <button type="button" class="btn btn-success" id="submit"> Go! </button>
                    </div>
                </div>
            </div>
        </fieldset>
    </form>
JS:

    $('#submit').click(function(e){
        e.preventDefault();
        var search = $('#search').val();
        $.get('post2.html', function(returndata){
            $('#output').html(returndata);
        });
    });
    <table class="table">
        <thead>
            <tr>
                <th> Previous Searches: </th>
            </tr>
        </thead>
        <tr>
            <script>document.write(search)</script>
        </tr>
    </table>
Answer: Your "search" is out of scope. It only exists in your anonymous function for
.click(). You somehow need to pass "search" into your HTML, but that's not
really possible with your current setup.
What I'd recommend is using something like Handlebars, which allows you to
define templated HTML with variable placeholders, compile them, and then
insert variable values. For example, you might define in your HTML:
    <script type="text/x-handlebars-template" id="table-data-template">
        <table class="table">
            <thead>
                <tr>
                    <th> Previous Searches: </th>
                </tr>
            </thead>
            <tr>
                {{data}}
            </tr>
        </table>
    </script>
In your JS, you'd do something like:
    $('#submit').click(function(e){
        e.preventDefault();
        var renderData = Handlebars.compile($("#table-data-template").html());
        var search = $('#search').val();
        $('#output').html(renderData({data: search}));
    });
Super clean. But you'll have to spend some time reading up on Handlebars and
all that.
If you aren't doing a lot of this template-based work in your app and
therefore Handlebars might be overkill, you could simply define the HTML
template inside your JS like so:
    $('#submit').click(function(e){
        e.preventDefault();
        var search = $('#search').val();
        $('#output').html(
            "<table class='table'>" +
            "<thead><tr><th> Previous Searches: </th></tr></thead>" +
            "<tr>" + search + "</tr>" +
            "</table>");
    });
So you're literally writing out the string of HTML that would be injected into
your output and concatenating your search result into it. Not super clean, but
if you're only doing this once, it gets the job done.
|
"working outside of application context" in Google App Engine with Python remote API
Question: I deployed a simple project on Google App Engine, and the project is working
there. Previously I was able to get the list of users from the datastore
through the remote API console, but now this no longer works. To start the
remote API console I'm connecting this way (running from the project
directory):
I found that the easiest way to load all modules is to run:
s~project> help()
help>modules
help>quit
After that I try to get the list of users:
s~project> from app.models import User
s~project> User.query().fetch()
The last row results in this:
WARNING:root:suspended generator _run_to_list(query.py:964) raised RuntimeError(working outside of application context)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/utils.py", line 142, in positional_wrapper
return wrapped(*args, **kwds)
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/query.py", line 1187, in fetch
return self.fetch_async(limit, **q_options).get_result()
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/tasklets.py", line 325, in get_result
self.check_success()
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/tasklets.py", line 368, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/query.py", line 964, in _run_to_list
batch = yield rpc
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/tasklets.py", line 454, in _on_rpc_completion
result = rpc.get_result()
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/datastore/datastore_query.py", line 3014, in __query_result_hook
self.__results = self._process_results(query_result.result_list())
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/datastore/datastore_query.py", line 3047, in _process_results
for result in results]
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/datastore/datastore_rpc.py", line 185, in pb_to_query_result
return self.pb_to_entity(pb)
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/model.py", line 662, in pb_to_entity
entity = modelclass._from_pb(pb, key=key, set_key=False)
File "/home/krasen/Programs/+/google_appengine_PYTHON_SDK/google/appengine/ext/ndb/model.py", line 3119, in _from_pb
ent = cls()
File "app/models.py", line 68, in __init__
if self.email == current_app.config['MAIL_ADMIN']:
File "/home/krasen/Programs/workspacePycharm/0010_1user_profile/venv/local/lib/python2.7/site-packages/werkzeug/local.py", line 338, in __getattr__
return getattr(self._get_current_object(), name)
File "/home/krasen/Programs/workspacePycharm/0010_1user_profile/venv/local/lib/python2.7/site-packages/werkzeug/local.py", line 297, in _get_current_object
return self.__local()
File "/home/krasen/Programs/workspacePycharm/0010_1user_profile/venv/local/lib/python2.7/site-packages/flask/globals.py", line 34, in _find_app
raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
I'm very new to Python. I found
<http://kronosapiens.github.io/blog/2014/08/14/understanding-contexts-in-flask.html>
but couldn't understand from it how to start working inside the app context.
I have tried on linux with python versions:
* 2.7.6(on clean installed linux)
* 2.7.9
I have tried with google app engine SDK versions:
* 1.9.17
* 1.9.23
* 1.9.24
Answer: The problem was in the User class itself. There I have:
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
# FLASKY_ADMIN configuration variable, so as soon as that email address appears in a registration request it can be given the correct role
if self.role is None:
print ("there is no role for this user")
if self.email == current_app.config['MAIL_ADMIN']:
self.role = ndb.Key('Role', 'administrator')
The problematic line is this one:
if self.email == current_app.config['MAIL_ADMIN']:
I didn't manage to make it work with this line, so I removed it, and now the
users are retrieved.
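That said, the error itself says the query runs outside a Flask application
context, so pushing one before querying might let you keep the line. A minimal
sketch, assuming your Flask app instance is importable (the import path here is
hypothetical; use your actual app object or factory):
    from app import app  # hypothetical import; adjust to your project layout
    with app.app_context():
        from app.models import User
        users = User.query().fetch()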
|
Plot of 3D matrix with colour scale - Python
Question: I would like to plot a 3D matrix - essentially a box of numbers, each labelled
by an `x, y, z` triad of coordinates - by assigning a different colour to each
`x, y, z` point according to its magnitude (for example, bigger
numbers in red and smaller numbers in blue). I cannot plot sections of the
matrix; rather, I need to plot the whole matrix at once.
If we call `matrix3D` my matrix, its elements are built this way:
matrix3D[x][y][z] = np.exp(-(x**2+y**2+z**2))
How can I obtain the desired plot?
EDIT: Using Mayavi2 Contour3D(), I have tried to write the following:
from mayavi import mlab
X = np.arange(0, n_x, 1)
Y = np.arange(0, n_z, 1)
Z = np.arange(0, n_z, 1)
X, Y, Z = np.meshgrid(X, Y, Z)
obj = mlab.contour3d(X, Y, Z, matrix3D, contours=4, transparent=True)
where `n_x, n_y, n_z` are the dimension of the 3 axes. How can I actually see
and/or save the image now?
Answer: If you need to plot the whole thing I think you're best taking a look at
[mayavi](http://docs.enthought.com/mayavi/mayavi/mlab.html). This will let you
plot a volume and you should be able to get the results you need.
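To answer your EDIT directly: once `mlab.contour3d` has been called, you can
display the scene with `mlab.show()` or write it to disk with `mlab.savefig()`.
A minimal sketch, reusing the `X, Y, Z, matrix3D` from your EDIT (the file name
is just an example):
    from mayavi import mlab
    obj = mlab.contour3d(X, Y, Z, matrix3D, contours=4, transparent=True)
    mlab.savefig('matrix3D.png')  # save the current scene to disk
    mlab.show()                   # open the interactive window (blocks)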
I know you said you need to plot the whole thing at once, but this might still
be of _some_ use. You can use `contourf` to plot like this:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.arange(10)
z = np.arange(10)
# build the cube on a full grid; chained fancy indexing such as
# matrix3D[x][y][z] = ... would only write into a temporary copy
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
matrix3D = np.exp(-(X**2 + Y**2 + Z**2))
fig = plt.figure()
ax = fig.add_subplot(plt.subplot(1, 1, 1))
ax.contourf(x, y, matrix3D[:, :, 3])
plt.show()
This gives you a _slice_ of the 3D matrix (in this example the 4th slice).
[](http://i.stack.imgur.com/8yM6E.png)
|
How to reach the mediafire direct link that is hidden behind a captcha?
Question: I wrote a python program to download a file from the internet :
url = "http://download2163.mediafire.com/icum151v51zg/55rll9s5ioshz5n/Alcohol52_FE_2-0-3-6850.exe"
file_name ='file'
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
buffer = u.read()
f.write(buffer)
f.close()
And it works correctly. The problem is that in this program the link that is
used to download the file is not constant! The file that I want to download
was uploaded using mediafire. I found out that the link of this page
(<http://www.mediafire.com/download/55rll9s5ioshz5n/Alcohol52_FE_2-0-3-6850.exe>)
is constant, and in this page I found the link that I put in my program. In
fact, by right-clicking the "download (6.77 MB)" button and selecting the
option to copy the link, I obtained the direct link that I used in my program:
<http://download2163.mediafire.com/icum151v51zg/55rll9s5ioshz5n/Alcohol52_FE_2-0-3-6850.exe>
But this second direct link - the one I really need - is variable!
I have found a way to obtain this variable and important direct link: using
the first, constant link
(<http://www.mediafire.com/download/55rll9s5ioshz5n/Alcohol52_FE_2-0-3-6850.exe>)
I downloaded the HTML page, and inside this HTML file I found the direct
link that I needed!
The problem is: sometimes when my python program tries to download the HTML
page it downloads the right page containing the direct link, but sometimes it
downloads the wrong one, with the captcha, so the direct link cannot be found.
I'm looking for a way to avoid this captcha and to be sure that my python
program always downloads the correct HTML page with the direct link inside!
Any suggestions?
* * *
If there isn't any way, does anyone know how I can obtain the direct link of a
file that I want to upload to the internet and have downloaded by my
python program?
Answer: You'll need to look for something in the page that you can use to always
identify the link. For example, the download link is in a div element with
class "download-link". You can parse the HTML for that div, then grab the link
from its child element. There are other possibilities too. For example, you
could look for something unique and constant in the URL of interest and use a
regular expression to select for it after grabbing all links from the page.
I'd highly recommend looking into the BeautifulSoup library, which will allow
you to easily parse HTML.
EDIT: Okay, I didn't notice this because I initially looked at the page in my
browser, but apparently mediafire only populates the download div with
javascript after the page has loaded, which makes scraping the link a lot
harder. Thankfully, they still have to include the download link and using an
ugly, hideous little hack we can grab it:
First, you'll need this regex for URLs:
<http://daringfireball.net/2010/07/improved_regex_for_matching_urls>
Then grab the page contents and parse with beautifulsoup as such:
soup = BeautifulSoup(page)
div_tag = soup.find_all(class_="download_link")[0]
script_tag = div_tag("script")[0]
link = re.findall(regex, script_tag.contents[0])[0]
Here's my whole working code:
import requests
import re
from bs4 import BeautifulSoup
pre_regex = r"""(?xi)
\b
( # Capture 1: entire matched URL
(?:
[a-z][\w-]+: # URL protocol and colon
(?:
/{1,3} # 1-3 slashes
| # or
[a-z0-9%] # Single letter or digit or '%'
# (Trying not to match e.g. "URI::Escape")
)
| # or
www\d{0,3}[.] # "www.", "www1.", "www2." ... "www999."
| # or
[a-z0-9.\-]+[.][a-z]{2,4}/ # looks like domain name followed by a slash
)
(?: # One or more:
[^\s()<>]+ # Run of non-space, non-()<>
| # or
\(([^\s()<>]+|(\([^\s()<>]+\)))*\) # balanced parens, up to 2 levels
)+
(?: # End with:
\(([^\s()<>]+|(\([^\s()<>]+\)))*\) # balanced parens, up to 2 levels
| # or
[^\s`!()\[\]{};:'".,<>?] # not a space or one of these punct chars
)
)"""
regex = re.compile(pre_regex)
url = "http://www.mediafire.com/download/raju14e8aq6azbo/Getting+Started+with+MediaFire.pdf"
s = requests.session()
result = s.get(url)
soup = BeautifulSoup(result.content)
div_tag = soup.find_all(class_="download_link")[0]
script_tag = div_tag("script")[0]
link = re.findall(regex, script_tag.contents[0])[0][0]
print link
|
sklearn GridSearchCV (Scoring Function error)
Question: I was wondering if you can help me out with an error I am receiving when running
grid search. I think it might be due to a misunderstanding of how grid search
actually works.
I am now running an application where I need grid search to evaluate best
parameters using a different scoring function. I am using
RandomForestClassifier to fit a large X dataset to a characterization vector Y
which is a list of 0s and 1s. (completely binary). My scoring function (MCC)
requires the prediction input and actual input to be completely binary.
However, for some reason I keep getting the ValueError: multiclass is not
supported.
My understanding is that grid search does cross validation on the data
set, comes up with a prediction input that is based on the cross validation,
then inserts the characterization vector and the prediction into the function.
Since my characterization vector is completely binary, my prediction vector
should also be binary as well and cause no problem when evaluating the score.
When I run random forest with a single defined parameter (without using grid
search), inserting the predicted data and characterization vector into MCC
scoring functions runs perfectly fine. So I am a little lost on how running
the grid search would cause any errors.
Snapshot of Data:
print len(X)
print X[0]
print len(Y)
print Y[2990:3000]
17463699
[38.110903683955435, 38.110903683955435, 38.110903683955435, 9.899495124816895, 294.7808837890625, 292.3835754394531, 293.81494140625, 291.11065673828125, 293.51739501953125, 283.6424865722656, 13.580912590026855, 4.976086616516113, 1.1271398067474365, 0.9465181231498718, 0.5066819190979004, 0.1808401197195053, 0.0]
17463699
[0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Code:
def overall_average_score(actual,prediction):
precision = precision_recall_fscore_support(actual, prediction, average = 'binary')[0]
recall = precision_recall_fscore_support(actual, prediction, average = 'binary')[1]
f1_score = precision_recall_fscore_support(actual, prediction, average = 'binary')[2]
total_score = matthews_corrcoef(actual, prediction)+accuracy_score(actual, prediction)+precision+recall+f1_score
return total_score/5
grid_scorer = make_scorer(overall_average_score, greater_is_better=True)
parameters = {'n_estimators': [10,20,30], 'max_features': ['auto','sqrt','log2',0.5,0.3], }
random = RandomForestClassifier()
clf = grid_search.GridSearchCV(random, parameters, cv = 5, scoring = grid_scorer)
clf.fit(X,Y)
Error:
ValueError Traceback (most recent call last)
<ipython-input-39-a8686eb798b2> in <module>()
18 random = RandomForestClassifier()
19 clf = grid_search.GridSearchCV(random, parameters, cv = 5, scoring = grid_scorer)
---> 20 clf.fit(X,Y)
21
22
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/grid_search.pyc in fit(self, X, y)
730
731 """
--> 732 return self._fit(X, y, ParameterGrid(self.param_grid))
733
734
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/grid_search.pyc in _fit(self, X, y, parameter_iterable)
503 self.fit_params, return_parameters=True,
504 error_score=self.error_score)
--> 505 for parameters in parameter_iterable
506 for train, test in cv)
507
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
657 self._iterating = True
658 for function, args, kwargs in iterable:
--> 659 self.dispatch(function, args, kwargs)
660
661 if pre_dispatch == "all" or n_jobs == 1:
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch(self, func, args, kwargs)
404 """
405 if self._pool is None:
--> 406 job = ImmediateApply(func, args, kwargs)
407 index = len(self._jobs)
408 if not _verbosity_filter(index, self.verbose):
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __init__(self, func, args, kwargs)
138 # Don't delay the application, to avoid keeping the input
139 # arguments in memory
--> 140 self.results = func(*args, **kwargs)
141
142 def get(self):
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, error_score)
1476
1477 else:
-> 1478 test_score = _score(estimator, X_test, y_test, scorer)
1479 if return_train_score:
1480 train_score = _score(estimator, X_train, y_train, scorer)
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _score(estimator, X_test, y_test, scorer)
1532 score = scorer(estimator, X_test)
1533 else:
-> 1534 score = scorer(estimator, X_test, y_test)
1535 if not isinstance(score, numbers.Number):
1536 raise ValueError("scoring must return a number, got %s (%s) instead."
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/metrics/scorer.pyc in __call__(self, estimator, X, y_true, sample_weight)
87 else:
88 return self._sign * self._score_func(y_true, y_pred,
---> 89 **self._kwargs)
90
91
<ipython-input-39-a8686eb798b2> in overall_average_score(actual, prediction)
3 recall = precision_recall_fscore_support(actual, prediction, average = 'binary')[1]
4 f1_score = precision_recall_fscore_support(actual, prediction, average = 'binary')[2]
----> 5 total_score = matthews_corrcoef(actual, prediction)+accuracy_score(actual, prediction)+precision+recall+f1_score
6 return total_score/5
7 def show_score(actual,prediction):
/shared/studies/nonregulated/neurostream/neurostream/local/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in matthews_corrcoef(y_true, y_pred)
395
396 if y_type != "binary":
--> 397 raise ValueError("%s is not supported" % y_type)
398
399 lb = LabelEncoder()
ValueError: multiclass is not supported
Answer: I reproduced your experiment but I do not get any error. The error indicates
one of your vectors `actual` or `prediction` **contains more than two discrete
values**.
It is indeed weird that you are able to score a random forest trained outside
`GridSearchCV`.
Could you provide the exact code you run to do this?
Here's the code I used to try to reproduce the error:
from sklearn.datasets import make_classification
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import precision_recall_fscore_support, accuracy_score, \
matthews_corrcoef, make_scorer
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
def overall_average_score(actual,prediction):
precision, recall, f1_score, _ = precision_recall_fscore_support(
actual, prediction, average='binary')
total_score = (matthews_corrcoef(actual, prediction) +
accuracy_score(actual, prediction) + precision + recall + f1_score)
return total_score / 5
grid_scorer = make_scorer(overall_average_score, greater_is_better=True)
print("Without GridSearchCV")
X, y = make_classification(n_samples=500, n_informative=10, n_classes=2)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.5, random_state=0)
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print("Overall average score: ", overall_average_score(y_test, y_pred))
print("-" * 30)
print("With GridSearchCV:")
parameters = {'n_estimators': [10,20,30],
'max_features': ['auto','sqrt','log2',0.5,0.3], }
gs_rf = GridSearchCV(rf, parameters, cv=5, scoring=grid_scorer)
gs_rf.fit(X_train,y_train)
print("Best score with grid search: ", gs_rf.best_score_)
Now I'd like to make a few comments on the code you provided:
* It's not a great practice to use variable names such as `random` (this is usually a module) or `f1_score` (this conflicts with the `sklearn.metrics.f1_score` method).
* You could unpack `precision`, `recall` and `f1_score` directly instead of calling 3 times `precision_recall_fscore_support`.
* It does not really make sense to grid search on `n_estimators`: **more trees is always better**. If you are worried about overfitting you can reduce the complexity of the individual models by using other parameters such as `max_depth` or `min_samples_split`, as in the sketch below.
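A hedged sketch of that last bullet, reusing the names from the code above (the
parameter values are illustrative, not tuned):
    # search complexity parameters instead of n_estimators
    parameters = {'max_depth': [None, 5, 10],
                  'min_samples_split': [2, 10, 50]}
    gs_rf = GridSearchCV(RandomForestClassifier(n_estimators=100),
                         parameters, cv=5, scoring=grid_scorer)
    gs_rf.fit(X_train, y_train)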
|
Django Error: cannot import name autodiscover_modules
Question: I get this error when I deploy my Django project on another VPS. The same
code runs successfully on my Macbook and a staging VPS.
My website is based on Django 1.4.20 and imports some third-party Python
libraries and Django apps, for example redis-py, requests, django-import-export,
django-kronos, django-cors-headers. I installed these with pip install, etc.
I'm really confused how this happened. Maybe it's a library dependency problem,
but I can't find a detailed error log or stack trace. Thanks for your time.
Answer: You should have a `requirements.txt` with your webapp. Then do `pip install -r
requirements.txt` when you deploy.
If you did not make such a file, you can create one later by running `pip
freeze > requirements.txt`. But beware that there might be some packages there
that are not needed, if you installed other stuff on the side, so be prepared
to manually screen the file.
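For example, a hypothetical requirements.txt covering the packages you mention
(pin the exact versions you actually run):
    Django==1.4.20
    redis
    requests
    django-import-export
    django-kronos
    django-cors-headers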
If you work with multiple webapps you may also need to containerize your
requirements ([here's why](https://en.wikipedia.org/wiki/Dependency_hell)).
Two options: [Docker](https://www.docker.com) or
[virtualenv](https://virtualenv.pypa.io/en/latest/). If you don't know what
Docker is and don't have some time on your hands I suggest you go with
Virtualenv for now.
|
Multiclass linear SVM in python that returns probability
Question: How can I implement a linear SVM for multi-class classification which returns
the probability matrix for the test samples? Train samples: m x n. Train
labels: m x c. Test labels: m x c, where each column has the probability of one
class.
The function in sklearn which does "one-vs-the-rest"
[LinearSVC](http://scikit-
learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC)
doesn't return probablity array for each sample like [SVC](http://scikit-
learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) which
has `predict_proba`
**Edit**
Code:
print X_train.shape,y.shape
svc = LinearSVC()
clf = CalibratedClassifierCV(svc, cv=10)
clf.fit(X_train, y)
Output:
(7112L, 32L) (7112L, 6L)
Traceback (most recent call last):
File "SVC_Calibirated_Probability.py", line 171, in <module>
clf.fit(X_train, y)
File "C:\Anaconda\lib\site-packages\sklearn\calibration.py", line 110, in fit
force_all_finite=False)
File "C:\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 449, in check_X_y
y = column_or_1d(y, warn=True)
File "C:\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 485, in column_or_1d
raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (7112L, 6L)
Answer: `LinearSVC` does not support probability estimates because it is based on
`liblinear` but `liblinear` supports probability estimates for logistic
regression only.
If you just need confidence scores, but these do not have to be probabilities,
you can use `decision_function` instead.
If it is not required to choose the penalties and loss functions of the linear SVM,
you can also use `SVC` by setting kernel to be `'linear'`, then you can have
`predict_proba`.
### Update #1:
You can use `SVC` with `OneVsRestClassifier` to support one-vs-rest scheme,
for example
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto'))
clf.fit(X, y)
proba = clf.predict_proba(X)
### Update #2:
There is another way to estimate probabilities with `LinearSVC` as classifier.
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
Y = iris.target
svc = LinearSVC()
clf = CalibratedClassifierCV(svc, cv=10)
clf.fit(X, Y)
proba = clf.predict_proba(X)
However for the other question ([Making SVM run faster in
python](http://stackoverflow.com/questions/31681373/making-svm-run-faster-in-
python)), this solution is not likely to enhance performance either as it
involves additional cross-validation and does not support parallelization.
### Update #3:
For the second solution, because `LinearSVC` does not support multilabel
classification, so you have to wrap it in `OneVsRestClassifier`, here is an
example:
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.datasets import make_multilabel_classification
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
allow_unlabeled=True,
return_indicator=True,
random_state=1)
clf0 = CalibratedClassifierCV(LinearSVC(), cv=10)
clf = OneVsRestClassifier(clf0)
clf.fit(X, Y)
proba = clf.predict_proba(X)
|
Python script run via cron job returning IOError [error no2]
Question: I'm running a Python feedparser script via cron job on a Centos6 remote server
(SSHing into the server).
In Crontab, this is my cron job:
MAILTO = [email protected]
*/10 * * * * /home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_CampusInsiders.py > /home/local/COMPANY/malvin/SilverChalice_CampusInsiders`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log | mailx -s "Feedparser Output" [email protected]
However, I'm seeing this message in the email that's being sent, which should
just contain the output of the script:
Null message body; hope that's ok
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Traceback (most recent call last):
File "/home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_CampusInsiders.py", line 70, in <module>
BC_01.createAndIngest(name, vUrl, tags, desc)
File "/home/local/COMPANY/malvin/SilverChalice_CampusInsiders/BC_01.py", line 69, in createAndIngest
creds = loadSecret()
File "/home/local/COMPANY/malvin/SilverChalice_CampusInsiders/BC_01.py", line 17, in loadSecret
credsFile=open('brightcove_oauth.json')
IOError: [Errno 2] No such file or directory: 'brightcove_oauth.json'
Normally, this would be a no-brainer issue: something must be wrong with my
code. Except, the script works perfectly fine when I run it on the command
line via `python SilverChalice_CampusInsiders.py`
What am I doing wrong here? Why doesn't the Python script "see" the json oauth
file when run via cron job?
Answer: Cron sets a minimal environment for the jobs (and I think it runs the job from
the home directory).
Inside your python script, when you do something like -
open('<filename>')
It checks for the `filename` in the current working directory, not the
directory in which your script exists.
That is true even when running from the command line: if you change to
some other directory (maybe your home directory) and then use the absolute path
to your script to run it, you should get the same error.
Instead of depending on the current working directory to be correct and have
the file you want to open, you can try either of the below options -
1. Use an absolute paths to the files you want to open , do not use relative path.
2. Or If the above is not an option for you, and the files you want to open are present relative to the script that is getting run (for example purpose lets say in the same directory) , then you can use `__file__` (this gives the script location) and [`os.path`](https://docs.python.org/2/library/os.path.html) , to create the absolute path to your file at runtime, Example -
import os.path
fdir = os.path.abspath(os.path.dirname(__file__)) #This would give the absolute path to the directory in which your script exists.
f = os.path.join(fdir, '<yourfile>')
At the end `f` would have the path to your file and you can use that to open
your file.
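Applied to your case, BC_01.py could open the oauth file like this (a sketch;
adjust if the json actually lives elsewhere):
    import os.path
    here = os.path.abspath(os.path.dirname(__file__))  # directory of BC_01.py
    credsFile = open(os.path.join(here, 'brightcove_oauth.json'))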
|
Converting ascii file to netcdf using python
Question: I would like to add all the data from an ascii file to a netcdf file. The
ascii file has data for every 0.25 degree cell on earth.
I am able to create all the lat/lon dimensions but not able to add the data.
The ascii file is here:
<https://www.dropbox.com/s/lybu6yvm4ph7pcr/tmp.txt?dl=0>
Can someone diagnose the code and see what is going wrong?
from netCDF4 import Dataset
import numpy, os, pdb, errno, sys
NUM_LATS = 180.0
NUM_LONS = 360.0
inp_dir = 'C:\\Input\\'
out_dir = 'C:\\Output\\nc\\'
def make_dir_if_missing(d):
try:
os.makedirs(d)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
make_dir_if_missing(out_dir)
# Read ASCII File
fl_name = inp_dir+'tmp.txt'
ascii_fl = numpy.loadtxt(fl_name, delimiter=' ')
# Compute dimensions of nc file based on # rows/cols in ascii file
fl_res = NUM_LATS/ascii_fl.shape[0]
if fl_res != NUM_LONS/ascii_fl.shape[1]:
print 'Incorrect dimensions in ascii file'
sys.exit(0)
# Initialize nc file
out_nc = out_dir+os.path.basename(inp_dir+'tmp.txt')[:-4]+'.nc'
nc_data = Dataset(out_nc, 'w', format='NETCDF4')
nc_data.description = 'Test'
# dimensions
nc_data.createDimension('lat', ascii_fl.shape[0])
nc_data.createDimension('lon', ascii_fl.shape[1])
nc_data.createDimension('data', 1)
# Populate and output nc file
# variables
latitudes = nc_data.createVariable('latitude', 'f4', ('lat',))
longitudes = nc_data.createVariable('longitude', 'f4', ('lon',))
glm_data = nc_data.createVariable('glm_data', 'f4', ('lat', 'lon', 'data'), fill_value=-9999.0)
glm_data.units = ''
# set the variables we know first
latitudes = numpy.arange(-90.5, 89.5, fl_res)
longitudes = numpy.arange(0.5, 360.5, fl_res)
glm_data = ascii_fl ### THIS LINE IS NOT WORKING!!!!!!!
nc_data.close()
Answer: You just need to explicitly write out the two dimensions for the variable
`glm_data`:
glm_data[:,:] = ascii_fl[:,:]
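Note that `glm_data` was declared with three dimensions (`'lat'`, `'lon'`,
`'data'`); if the two-dimensional assignment complains about shapes, index the
singleton dimension explicitly:
    glm_data[:,:,0] = ascii_fl[:,:]
For the same reason, assignments like `latitudes = numpy.arange(...)` rebind
the Python name instead of writing into the netCDF variable; use slice
assignment there too:
    latitudes[:] = numpy.arange(-90.5, 89.5, fl_res)
    longitudes[:] = numpy.arange(0.5, 360.5, fl_res)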
|
Python HTTPPasswordMgrWithDefaultRealm error
Question: I am new to programming and want to write a parser as a first project.
Code:
import urllib
import urllib.request
url = 'https://example.com'
username = 'login1'
password = 'pass'
p = urllib.request.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, url, username, password)
handler = urllib.request.HTTPPasswordMgrWithDefaultRealm(p)
opener = urllib.build_opener(handler)
urllib.install_opener(opener)
f = urllib.request.urlopen(url)
html = f.read()
print ("html")
I get error in IDLE:
Traceback (most recent call last):
File "D:\project\parser.py", line 8, in <module>
handler = urllib.request.HTTPPasswordMgrWithDefaultRealm(p)
TypeError: __init__() takes 1 positional argument but 2 were given
Please help. Thanks.
Answer: If we look at the documentation for
`urllib.request.HTTPPasswordMgrWithDefaultRealm`, the `__init__` method looks
like this:
__init__(self)
That `self` is provided implicitly. You're trying to pass it a second
positional parameter (`p`), hence the error message.
Take a look at the Basic Authentication example in [the
documentation](https://docs.python.org/3/howto/urllib2.html#id5), which offers
the following example usage:
# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/foo/"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)
# use the opener to fetch a URL
opener.open(a_url)
# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
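Note that your snippet also calls `urllib.build_opener` and
`urllib.install_opener`; in Python 3 those functions live in the
`urllib.request` module:
    opener = urllib.request.build_opener(handler)
    urllib.request.install_opener(opener)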
|
Beyond for-looping: high performance parsing of a large, well formatted data file
Question: I am looking to optimize the performance of a big data parsing problem I have
using `python`. In case anyone is interested: the data shown below is segments
of whole genome DNA sequence alignments for six primate species.
Currently, the best way I know how to proceed with this type of problem is to
open each of my ~250 (size 20-50MB) files, loop through line by line and
extract the data I want. The formatting (shown in examples) is fairly regular
although there are important changes at each 10-100 thousand line segment.
Looping works fine but it is slow.
I have been using `numpy` recently for processing massive (>10 GB) numerical
data sets and I am really impressed at how quickly I am able to perform
different computations on arrays. I wonder if there are some high-powered
solutions for processing formatted text that circumvents tedious for-looping?
My files contain multiple segments with the pattern
<MULTI-LINE HEADER> # number of header lines mirrors number of data columns
<DATA BEGIN FLAG> # the word 'DATA'
<DATA COLUMNS> # variable number of columns
<DATA END FLAG> # the pattern '//'
<EMPTY LINE>
Example:
# key to the header fields:
# header_flag chromosome segment_start segment_end quality_flag chromosome_data
SEQ homo_sapiens 1 11388669 11532963 1 (chr_length=249250621)
SEQ pan_troglodytes 1 11517444 11668750 1 (chr_length=229974691)
SEQ gorilla_gorilla 1 11607412 11751006 1 (chr_length=229966203)
SEQ pongo_pygmaeus 1 218866021 219020464 -1 (chr_length=229942017)
SEQ macaca_mulatta 1 14425463 14569832 1 (chr_length=228252215)
SEQ callithrix_jacchus 7 45949850 46115230 1 (chr_length=155834243)
DATA
GGGGGG
CCCCTC
...... # continue for 10-100 thousand lines
//
SEQ homo_sapiens 1 11345717 11361846 1 (chr_length=249250621)
SEQ pan_troglodytes 1 11474525 11490638 1 (chr_length=229974691)
SEQ gorilla_gorilla 1 11562256 11579393 1 (chr_length=229966203)
SEQ pongo_pygmaeus 1 219047970 219064053 -1 (chr_length=229942017)
DATA
CCCC
GGGG
.... # continue for 10-100 thousand lines
//
<ETC>
I will use segments where the species `homo_sapiens` and `macaca_mulatta` are
both present in the header, and field 6, which I called the quality flag in
the comments above, equals '1' for each species. Since `macaca_mulatta` does
not appear in the second example, I would ignore this segment completely.
I care about `segment_start` and `segment_end` coordinates for `homo_sapiens`
only, so in segments where `homo_sapiens` is present, I will record these
fields and use them as keys to a `dict()`. `segment_start` also tells me the
first positional coordinate for `homo_sapiens`, which increases strictly by 1
for each line of data in the current segment.
I want to compare the letters (DNA bases) for `homo_sapiens` and
`macaca_mulatta`. The header line where `homo_sapiens` and `macaca_mulatta`
appear (i.e. 1 and 5 in the first example) correspond to the column of data
representing their respective sequences.
**Importantly, these columns are not always the same, so I need to check the
header to get the correct indices for each segment, and to check that both
species are even in the current segment.**
Looking at the two lines of data in example 1, the relevant information for me
is
# homo_sapiens_coordinate homo_sapiens_base macaca_mulatta_base
11388669 G G
11388670 C T
For each segment containing info for `homo_sapiens` and `macaca_mulatta`, I
will record start and end for `homo_sapiens` from the header and each position
where the two DO NOT match into a list. Finally, some positions have "gaps" or
lower quality data, i.e.
aaa--A
I will only record from positions where `homo_sapiens` and `macaca_mulatta`
both have valid bases (must be in the set `ACGT`) so the last variable I
consider is a counter of valid bases per segment.
My final data structure for a given file is a dictionary which looks like
this:
{(segment_start=i, segment_end=j, valid_bases=N): list(mismatch positions),
(segment_start=k, segment_end=l, valid_bases=M): list(mismatch positions), ...}
Here is the function I have written to carry this out using a for-loop:
def human_macaque_divergence(chromosome):
"""
A function for finding the positions of human-macaque divergent sites within segments of species alignment tracts
:param chromosome: chromosome (integer:
:return div_dict: a dictionary with tuple(segment_start, segment_end, valid_bases_in_segment) for keys and list(divergent_sites) for values
"""
ch = str(chromosome)
div_dict = {}
with gz.open('{al}Compara.6_primates_EPO.chr{c}_1.emf.gz'.format(al=pd.align, c=ch), 'rb') as f:
# key to the header fields:
# header_flag chromosome segment_start segment_end quality_flag chromosome_info
# SEQ homo_sapiens 1 14163 24841 1 (chr_length=249250621)
# flags, containers, counters and indices:
species = []
starts = []
ends = []
mismatch = []
valid = 0
pos = -1
hom = None
mac = None
species_data = False # a flag signalling that the lines we are viewing are alignment columns
for line in f:
if 'SEQ' in line: # 'SEQ' signifies a segment info field
assert species_data is False
line = line.split()
if line[2] == ch and line[5] == '1': # make sure that the alignment is to the desired chromosome in humans quality_flag is '1'
species += [line[1]] # collect each species in the header
starts += [int(line[3])] # collect starts and ends
ends += [int(line[4])]
if 'DATA' in line and {'homo_sapiens', 'macaca_mulatta'}.issubset(species):
species_data = True
# get the indices to scan in data columns:
hom = species.index('homo_sapiens')
mac = species.index('macaca_mulatta')
pos = starts[hom] # first homo_sapiens positional coordinate
continue
if species_data and '//' not in line:
assert pos > 0
# record the relevant bases:
human = line[hom]
macaque = line[mac]
if {human, macaque}.issubset(bases):
valid += 1
if human != macaque and {human, macaque}.issubset(bases):
mismatch += [pos]
pos += 1
elif species_data and '//' in line: # '//' signifies segment boundary
# store segment results if a boundary has been reached and data has been collected for the last segment:
div_dict[(starts[hom], ends[hom], valid)] = mismatch
# reset flags, containers, counters and indices
species = []
starts = []
ends = []
mismatch = []
valid = 0
pos = -1
hom = None
mac = None
species_data = False
elif not species_data and '//' in line:
# reset flags, containers, counters and indices
species = []
starts = []
ends = []
pos = -1
hom = None
mac = None
return div_dict
This code works fine (perhaps it could use some tweaking), but my real
question is whether or not there might be a faster way to pull this data
without running the for-loop and examining each line? For example, loading the
whole file using `f.read()` takes less than a second although it creates a
pretty complicated string. (In principle, I assume that I could use regular
expressions to parse at least some of the data, such as the header info, but
I'm not sure if this would necessarily increase performance without some bulk
method to process each data column in each segment).
Does anyone have any suggestions as to how I circumvent looping through
billions of lines and parse this kind of text file in a more bulk manner?
Please let me know if anything is unclear in comments, happy to edit or
respond directly to improve the post!
Answer: Yes, you could use some regular expressions to extract the data in one go;
this is probably the best ratio of effort to performance.
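A rough sketch of the one-pass idea, assuming the file fits in memory (`fname`
is a placeholder, the quality-flag filtering from your loop version is omitted
for brevity, and this is untested):
    import re
    # split on segment boundaries, then pull all headers out of each
    # segment with a single multiline regex pass instead of a line loop
    with gz.open(fname, 'rb') as f:
        text = f.read()
    header_re = re.compile(r'^SEQ\s+(\S+)\s+(\S+)\s+(\d+)\s+(\d+)\s+(-?\d+)', re.M)
    for segment in text.split('//\n'):
        headers = header_re.findall(segment)
        species = [h[0] for h in headers]
        if 'homo_sapiens' not in species or 'macaca_mulatta' not in species:
            continue
        hom = species.index('homo_sapiens')
        mac = species.index('macaca_mulatta')
        data_lines = segment.split('DATA\n', 1)[1].splitlines()
        # compare data_lines[i][hom] to data_lines[i][mac] as in the loop version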
If you need more performances, you could use
[mx.TextTools](http://www.egenix.com/products/python/mxBase/mxTextTools/) to
build a finite state machine; I'm pretty confident this will be significantly
faster, but the effort needed to write the rules and the learning curve might
discourage you.
You could also split the data into chunks and parallelize the processing; this
could help.
|
NetworkX: When I add 'weight' to some node I can no longer generate adjacency_matrix()?
Question: The moment I add 'weight' to a node, I can no longer generate
adjacency_matrix(). Any ideas how to still be able to generate it?
In [73]: g2 = nx.Graph()
In [74]: g2.add_path([1,2,3,5,4,3,1,4,3,7,2])
In [75]: nx.adjacency_matrix(g2)
Out[75]:
matrix([[ 0., 1., 1., 1., 0., 0.],
[ 1., 0., 1., 0., 0., 1.],
[ 1., 1., 0., 1., 1., 1.],
[ 1., 0., 1., 0., 1., 0.],
[ 0., 0., 1., 1., 0., 0.],
[ 0., 1., 1., 0., 0., 0.]])
In [76]: g2[3]['weight'] = 5
In [77]: nx.adjacency_matrix(g2)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-77-532c786b4588> in <module>()
----> 1 nx.adjacency_matrix(g2)
/usr/lib/pymodules/python2.7/networkx/linalg/graphmatrix.pyc in adjacency_matrix(G, nodelist, weight)
144 to_dict_of_dicts
145 """
--> 146 return nx.to_numpy_matrix(G,nodelist=nodelist,weight=weight)
147
148 adj_matrix=adjacency_matrix
/usr/lib/pymodules/python2.7/networkx/convert.pyc in to_numpy_matrix(G, nodelist, dtype, order, multigraph_weight, weight)
522 for v,d in nbrdict.items():
523 try:
--> 524 M[index[u],index[v]]=d.get(weight,1)
525 except KeyError:
526 pass
AttributeError: 'int' object has no attribute 'get'
The same goes for:
In [79]: nx.adjacency_matrix(g2,weight='weight')
Answer: You are close - assign the node weight to `g2.node[2]['weight']` and it will
work.
Note though that the node weights do not appear in the adjacency matrix. It is
the edge weights that are assigned there. For example
In [1]: import networkx as nx
In [2]: g2 = nx.Graph()
In [3]: g2.add_path([1,2,3,5,4,3,1,4,3,7,2])
In [4]: g2.node[2]['weight']=7
In [5]: g2.node
Out[5]: {1: {}, 2: {'weight': 7}, 3: {}, 4: {}, 5: {}, 7: {}}
In [6]: nx.adjacency_matrix(g2).todense()
Out[6]:
matrix([[0, 1, 1, 1, 0, 0],
[1, 0, 1, 0, 0, 1],
[1, 1, 0, 1, 1, 1],
[1, 0, 1, 0, 1, 0],
[0, 0, 1, 1, 0, 0],
[0, 1, 1, 0, 0, 0]])
In [7]: g2.edge[1][2]['weight'] = 42
In [8]: nx.adjacency_matrix(g2).todense()
Out[8]:
matrix([[ 0, 42, 1, 1, 0, 0],
[42, 0, 1, 0, 0, 1],
[ 1, 1, 0, 1, 1, 1],
[ 1, 0, 1, 0, 1, 0],
[ 0, 0, 1, 1, 0, 0],
[ 0, 1, 1, 0, 0, 0]])
Also you will see I am using a newer version of networkx that generates sparse
matrices so I have added the .todense() method to get a dense (numpy) matrix.
|
Accessing an array with ctypes in Python
Question: I am writing a ode-solver in C, exported to a Windows DLL and a Python wrapper
for the DLL. I am very used to Python, but I'm a complete beginner with C and
ctypes too.
A modified solution inspired by the accepted answer
[here](http://stackoverflow.com/questions/18679264/how-to-use-malloc-and-free-
with-python-ctypes) looks like:
The C-code
/* my_clib.c */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
struct data {
int nr_steps;
double dt;
double* t;
double* x;
double t0, x0;
};
double fun_to_integrate(double t, double y){
return (y - t);
}
double rk4(double t, double y, double dt){
double k1 = dt * fun_to_integrate(t, y),
k2 = dt * fun_to_integrate(t + dt / 2, y + k1 / 2),
k3 = dt * fun_to_integrate(t + dt / 2, y + k2 / 2),
k4 = dt * fun_to_integrate(t + dt, y + k3);
return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6;
}
__declspec(dllexport) void my_fun(struct data* pointer){
int i;
double dt;
dt = pointer->dt;
pointer->t[0] = pointer->t0;
pointer->x[0] = pointer->x0;
for(i = 1; i < pointer->nr_steps; i++){
pointer->t[i] = dt*i + pointer->t0;
pointer->x[i] = rk4(pointer->t[i-1], pointer->x[i-1], dt);
}
}
With the corresponding Python file
# my_python.py
import ctypes
import numpy as np
class DATA(ctypes.Structure):
_fields_ = [
('nr_steps', ctypes.c_int),
('dt', ctypes.c_double),
('t', ctypes.POINTER(ctypes.c_double)),
('x', ctypes.POINTER(ctypes.c_double)),
('t0', ctypes.c_double),
('x0', ctypes.c_double)]
def __init__(self):
self.nr_steps = 1000
self.dt = 0.00001
self.t0 = 0.
self.x0 = 2./3
self.t = (ctypes.c_double * self.nr_steps)()
self.x = (ctypes.c_double * self.nr_steps)()
class SOLVER(object):
def __init__(self):
self.clib = ctypes.CDLL('rk4.dll')
self.clib.my_fun.argtypes = [ctypes.POINTER(DATA)]
self.clib.my_fun.restype = None
def func(self, data_struc):
self.clib.my_fun(ctypes.byref(data_struc))
solver = SOLVER()
data = DATA()
solver.func(data)
Compiled with `gcc -c -o my_clib.o my_clib.c` + `gcc -o rk4.dll -shared
my_clib.o` using MinGW on Windows 8.
Everything runs fine and after the final line `solver.func()` the time data
and solution data are stored in `data.t` and `data.x`. Now I need to access
the calculated data from the pointers. It seems it cannot be done directly. If
you do `type(data.x)` you get `<class '__main__.LP_c_double'>`, but if you try
to access `type(data.x[i])` you get a standard `double`.
Every time I tried to for example `plot(data.t, data.x)` or cast it to
`np.array(data.t)`, the Python file crashes and the cmd freezes. However i
figured that `x_python = [data.x[i] for x i in range(*number_of_elements*)]`
works, but it is very slow if the arrays are long.
My question is: what is the correct/best way of accessing the data calculated
in the C-solver?
Also, if this is not the best way of passing an array from C to Python, what
other alternatives are suitable for this kind of application? I.e. for every
time step, or maybe after reaching some final time, passing the solution `(t,
x)` (tuple or two arrays) from C to Python?
Answer: To access values from ctypes objects that act like int, long, etc., use:
x = ctypes.c_int(123)
print x.value
So, you can iterate over them and make your array.
Also, you can pass a plain C static array to Python instead of using
structures, and use numpy's ctypes support to get a numpy.ndarray, or use
Python's array module.
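For example, a minimal sketch using numpy's ctypes support, where `data` is the
filled-in `DATA` instance from your code:
    import numpy as np
    # view the ctypes pointers as numpy arrays without copying;
    # shape must be given because the fields are bare POINTER(c_double)
    t = np.ctypeslib.as_array(data.t, shape=(data.nr_steps,))
    x = np.ctypeslib.as_array(data.x, shape=(data.nr_steps,))
    # t and x can now go straight into plot() without the slow list comprehension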
But, I get your point using the structure. These are easy and clean ways to do
it.
|
import error for pyautogui
Question: I installed the pyautogui module and dependencies via pip-3.2 on my raspi
correctly. However, when I try to do
import pyautogui
I am getting an import error:
ImportError: No module named pyautogui
What am I doing wrong? Did the command change? Sorry I am a total python Noob,
any help is greatly appreciated :D
Answer: It might be because you're trying it from a Python 2.x shell. Since you
installed with pip-3.2, open a python3 shell instead and try the same import.
|
Getting the names of unicode pictograms
Question: I'm trying to analyse a text stream that includes unicode pictograms like
these:
💄 -> 128132 -> Lipstick
💅 -> 128133 -> Nail Polish
💋 -> 128139 -> Kiss Mark
I'd like to be able to look up the name of each of these characters, so for
example any time I encounter the 💄 glyph I'd like to replace it with an
alphanumeric symbol like "UNICODE_LIPSTICK".
Is there a data file somewhere that lists the numbers of all of the unicode
characters and their names? I'm after some kind of mapping (preferably in a
plain text format that's easily imported). Does such a data set exist?
I'm actually programming in Python 3, so if there's something built into the
language that would do this, even better!
Answer: You can use the [`unicodedata.name()`
function](https://docs.python.org/3/library/unicodedata.html#unicodedata.name)
to look up the official codepoint name:
>>> import unicodedata
>>> unicodedata.name(chr(128132))
'LIPSTICK'
>>> unicodedata.name(chr(128133))
'NAIL POLISH'
>>> unicodedata.name(chr(128139))
'KISS MARK'
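To build the alphanumeric tokens from your example, a small sketch on top of
this (`unicodedata.name()` raises `ValueError` for unnamed codepoints):
    def to_token(ch):
        # fall back to the original character when the codepoint has no name
        try:
            return 'UNICODE_' + unicodedata.name(ch).replace(' ', '_')
        except ValueError:
            return ch
    # ''.join(to_token(c) for c in stream) yields e.g. ...UNICODE_LIPSTICK...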
|
Facing issues while importing third party (pygoogle) library in main.py for making android apk via buildozer
Question: I am trying to import pygoogle with `import pygoogle` in my main.py, but when
I build an app via buildozer, logcat reports `no module named pygoogle`. I
installed pygoogle on my Kali Linux OS, went to /usr/local/lib/python2.7, and
copied the pygoogle folder to my home folder where main.py is, then tried again
but got the same error. I then ran `distribute.sh -m 'kivy pygoogle'`;
everything went fine but I still faced the same error. Finally, I added
pygoogle to the requirements in buildozer, but pygoogle does not show up among
the modules available to buildozer. It's not just pygoogle - I face the same
problem with every library. Does anyone know a solution? I am not a Linux
expert.
Answer: Since it's a pure python module, and it's on pypi, you can just add it to
requirements in buildozer, it doesn't need a specific recipe.
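i.e. in your buildozer.spec, something along these lines should be enough (the
exact list depends on your app):
    [app]
    requirements = kivy,pygoogle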
|
Python: Printing While Loop data set to a file
Question: I am trying to take sensor data and plot it into a graph via JSON. I would
like to read the first 100 sensor values and create a file; after the 100th, I
want to replace the 1st with the 101st, the 2nd with the 102nd, etc., so the
file constantly shows the latest 100 lines.
Say I have a random data stream coming in every 3 seconds to the standard
output such as this..
from random import randint
import time
def loop():
print(randint(100,200))
while True:
loop()
time.sleep(3)
How do I capture, say, 100 lines of the output and write this data to a file?
Ideally the file should be replaced with the new data on every update. Thank
you for your help!
Answer: A crude but working solution:
from random import randint
import time
def loop():
return randint(100,200)
def save_lst(lst):
with open("mon_fich", "w") as f:
f.write(", ".join(lst))
lst = list()
while True:
lst.append(str(loop()))
if len(lst)>=100:
save_lst(lst)
lst = list()
time.sleep(0.001)
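If you want a rolling window (always the latest 100 values) rather than fresh
batches of 100, `collections.deque` with `maxlen` handles the replacement for
you; a sketch reusing `loop` and `save_lst` from above:
    from collections import deque
    last100 = deque(maxlen=100)  # appending past 100 drops the oldest value
    while True:
        last100.append(str(loop()))
        save_lst(list(last100))
        time.sleep(3)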
|
Decode using ASN.1 where substrate contains some opaque data
Question: I would like to use `pyasn1` to decode some data, part of which is opaque.
That is, part of the data contained in the ASN.1-defined structure may or may
not be ASN.1 decode-able, and I need to parse the preamble to find out how to
decode it.
Based on what I understand from the [pyasn1 codec
documentation](http://pyasn1.sourceforge.net/codecs.html) on "Decoding
untagged types," I should be able to use the `pyasn.univ.Any` type to handle
this case.
Here is some example code to illustrate the problem I'm having.
#!/usr/bin/env python
from pyasn1.type import univ, namedtype
from pyasn1.codec.der import decoder, encoder
class Example(univ.Sequence):
componentType = namedtype.NamedTypes(
namedtype.NamedType('spam', univ.Integer()),
namedtype.NamedType('eggs', univ.Any())
)
example = Example()
example['spam'] = 42
example['eggs'] = univ.Any(b'\x01\x00abcde') # Some opaque data
substrate = encoder.encode(example)
"""
>>> import binascii
>>> print(binascii.hexlify(substrate).decode('ascii')))
300a02012a01006162636465
^^ ^
|| + Opaque data begins here
++ Note: the length field accounts for all remaining substrate
"""
data, tail = decoder.decode(substrate, asn1Spec=Example())
print(data)
The encoded example is consistent with my expectations. However, this program
fails inside the decoder with the following traceback.
Traceback (most recent call last):
File "./any.py", line 27, in <module>
data, tail = decoder.decode(substrate, asn1Spec=Example())
File "/Users/neirbowj/Library/Python/3.4/lib/python/site-packages/pyasn1-0.1.8-py3.4.egg/pyasn1/codec/ber/decoder.py", line 825, in __call__
File "/Users/neirbowj/Library/Python/3.4/lib/python/site-packages/pyasn1-0.1.8-py3.4.egg/pyasn1/codec/ber/decoder.py", line 342, in valueDecoder
File "/Users/neirbowj/Library/Python/3.4/lib/python/site-packages/pyasn1-0.1.8-py3.4.egg/pyasn1/codec/ber/decoder.py", line 706, in __call__
pyasn1.error.SubstrateUnderrunError: 95-octet short
I believe what's happening is that the decoder is trying to work on the
portion of the data I've tried to identify as `univ.Any` and failing, because
it's not a valid encoding, rather than returning it to me as some binary data
encapsulated in a `univ.Any` object as I expect.
How can I parse data of this form using `pyasn1`?
Incidentally, the actual data I am trying to decode is a SASL token using the
GSSAPI mechanism, as defined in section 4.1 of [RFC 4121: KRB5 GSSAPI
mechanism v2](https://tools.ietf.org/html/rfc4121#section-4.1), which I
excerpt here for convenience.
GSS-API DEFINITIONS ::=
BEGIN
MechType ::= OBJECT IDENTIFIER
-- representing Kerberos V5 mechanism
GSSAPI-Token ::=
-- option indication (delegation, etc.) indicated within
-- mechanism-specific token
[APPLICATION 0] IMPLICIT SEQUENCE {
thisMech MechType,
innerToken ANY DEFINED BY thisMech
-- contents mechanism-specific
-- ASN.1 structure not required
}
END
The innerToken field starts with a two-octet token-identifier
(TOK_ID) expressed in big-endian order, followed by a Kerberos
message.
Following are the TOK_ID values used in the context establishment
tokens:
Token TOK_ID Value in Hex
-----------------------------------------
KRB_AP_REQ 01 00
KRB_AP_REP 02 00
KRB_ERROR 03 00
**EDIT 1: Attach sample data**
Here is a sample GSSAPI-Token (lightly sanitized) that was serialized, I
believe, by cyrus-sasl and heimdal.
YIIChwYJKoZIhvcSAQICAQBuggJ2MIICcqADAgEFoQMCAQ6iBwMFACAAAACjggFm
YYIBYjCCAV6gAwIBBaELGwlBU04uMVRFU1SiNjA0oAMCAQGhLTArGwtzZXJ2aWNl
bmFtZRscc2VydmljZWhvc3QudGVzdC5leGFtcGxlLmNvbaOCARAwggEMoAMCARCh
AwIBBKKB/wSB/A81akUNsyvRCCKtERWg9suf96J3prMUQkabsYGpzijfEeCNe0ja
Eq6c87deBG+LeJqFIyu65cCMF/oXtyZNB9sUxpqFBcfkAYZXTxabNLpZAUmkdt6w
dYlV8JK/G3muuG/ziM14oCbh8hIY63oi7P/Pdyrs3s8B+wkNCpjVtREHABuF6Wjx
GYem65mPqCP9ZMSyD3Bc+dLemxhm7Kap8ExoVYFRwuFqvDf/E5MLCk2HThw46UCF
DqFnU46FJBNGAK+RN2EptsqtY48gb16klqJxU7bwHeYoCsdXyB6GElIDe1qrPU15
9mGxpdmSElcVxB/3Yzei48HzlkUcfqSB8jCB76ADAgEQooHnBIHkZUyd0fJO3Bau
msqz6ndF+kBxmrGS6Y7L20dSYDI2cB8HsJdGDnEODsAAcYQ0L5c2N/mb8QHh7iU9
gtjWHpfq/FqMF4/aox/BJ0Xzuy2gS4sCafs7PTYtSDh2nyLkNYuxKdmQ1ughbIq6
APAegqa7R1iv2oCaNijrpKc2YUfznnwT/CTSsGrJpMwz4KLuBtjI4f74bQty8uNn
LVxxV4J8wU1s7lSj4Ipbi+a1WdCVsLs8lIqFmKXte+1c+qHeadoAGmSTBT3qFZae
SRdT8dpYr6i6fkjRsoyEZs9ZqQtwQAYSdMBU
Answer: My impression is that ANY type serialization can only contain valid BER/DER
serialization. Think of ANY type as a CHOICE type with infinite number of
alternatives (see chapter on ANY type
[here](http://luca.ntop.org/Teaching/Appunti/asn1.html)).
My first instinct is to put innerToken into OCTET STRING like this:
class Example(univ.Sequence):
componentType = namedtype.NamedTypes(
namedtype.NamedType('spam', univ.Integer()),
namedtype.NamedType('eggs', univ.OctetString())
)
what would give you ready-made values upon decoding:
>>> example = Example()
>>> example['spam'] = 42
>>> example['eggs'] = b'\x01\x00abcde'
>>> print(example.prettyPrint())
Example:
spam=42
eggs=0x01006162636465
>>> substrate = encoder.encode(example)
>>> data, tail = decoder.decode(substrate, asn1Spec=Example())
>>> print(data.prettyPrint())
Example:
spam=42
eggs=0x01006162636465
On the other hand, if you would literally use the values from the spec:
KRB_AP_REQ 01 00
KRB_AP_REP 02 00
KRB_ERROR 03 00
they would look like valid DER serialization that could be decoded with your
original Example spec:
>>> KRB_AP_REQ = '\x01\x00'
>>> KRB_AP_REP = '\x02\x00'
>>> KRB_ERROR = '\x03\x00'
>>> class Example(univ.Sequence):
... componentType = namedtype.NamedTypes(
... namedtype.NamedType('spam', univ.Integer()),
... namedtype.NamedType('eggs', univ.Any()),
... namedtype.NamedType('ham', univ.Any()),
... )
...
>>> example = Example()
>>> example['spam'] = 42
>>> example['eggs'] = KRB_AP_REQ
# obtain DER serialization for ANY type that follows
>>> example['ham'] = encoder.encode(univ.Integer(24))
>>> print(example.prettyPrint())
Example:
spam=42
eggs=0x0100
ham=0x020118
>>> substrate = encoder.encode(example)
>>> data, tail = decoder.decode(substrate, asn1Spec=Example())
>>> print(data.prettyPrint())
Example:
spam=42
eggs=0x0100
ham=0x020118
>>> data['eggs'].asOctets()
'\x01\x00'
>>> data['eggs'].asNumbers()
(1, 0)
>>> example['eggs'] == KRB_AP_REQ
True
But that is a sort of cheating and may not work for arbitrary innerToken
values.
So what does GSSAPI-Token serialization produced by other tools look like?
|
Matplotlib error: 'Line2D' object is not iterable; error in Tkinter callback, nothing shows up
Question: The code is shown below. I am attempting to animate using vectors calculated
earlier. A figure window is opened, so I know it gets this far, and the vectors
are being calculated correctly. But matplotlib outputs nothing but the figure
window, and I have no idea why. Please help.
#finally animateing
fig = plt.figure()
ax = plt.axes(xlim = (-1000,1000) ,ylim = (-1000,1000))#limits were arbitrary
#line = ax.plot([],[])
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return line,
def animate(i):
x = time_vec[i]
y = complex_vec[i]
#y1 = real_vec[i]
#y2 = modulus_vec[i]
line.set_data(x,y)
#line.set_data(x,y1)
#line.set_data(x,y2)
return line,
animation_object = animation.FuncAnimation(fig, animate, init_func= init, frames = num_files,interval = 30, blit = True)
#turnn this line on to save as mp4
#anim.save("give it a name.mp4", fps = 30, extra-args = ['vcodec', 'libx264'])
plt.show()
**THE FULL ERROR MESSAGE IS SHOWN BELOW**
Traceback (most recent call last):
File "the_animation.py", line 71, in <module>
plt.show()
File "/usr/lib/pymodules/python2.7/matplotlib/pyplot.py", line 145, in show
_show(*args, **kw)
File "/usr/lib/pymodules/python2.7/matplotlib/backend_bases.py", line 117, in __call__
self.mainloop()
File "/usr/lib/pymodules/python2.7/matplotlib/backends/backend_tkagg.py", line 69, in mainloop
Tk.mainloop()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 366, in mainloop
_default_root.tk.mainloop(n)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1484, in __call__
def __call__(self, *args):
**MINIMAL EXAMPLE**
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
complex_vec = np.arange(5,6,.001)
real_vec = np.arange(7,8,.001)
time_vec = np.arange(0,1,.001)
num_files = np.size(time_vec)
#creating the modulus vector
modulus_vec = np.zeros(np.shape(complex_vec))
for k in range (0,complex_vec.size):
a = complex_vec[k]
b = real_vec[k]
calc_modulus = np.sqrt(a**2 + b**2)
modulus_vec[k] = calc_modulus
#finally animateing
fig = plt.figure()
ax = plt.axes(xlim = (-1000,1000) ,ylim = (-1000,1000))#limits were arbitrary
#line = ax.plot([],[])
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return line,
def animate(i):
x = time_vec[i]
y = complex_vec[i]
y1 = real_vec[i]
y2 = modulus_vec[i]
line.set_data(x,y)
line.set_data(x,y1)
line.set_data(x,y2)
return line,
animation_object = animation.FuncAnimation(fig, animate, init_func= init, frames = num_files,interval = 30, blit = True)
#turnn this line on to save as mp4
#anim.save("give it a name.mp4", fps = 30, extra-args = ['vcodec', 'libx264'])
plt.show()
Answer: The problem here is in your `animate` function, you're using `set_data`
multiple times which does not do what you think it does. You're using it like
an _append_, when it's a _set_. The arguments should be two arrays,
containing the respective x and y values for that line. This will animate your
minimal example:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
complex_vec = np.arange(5,6,.001)
real_vec = np.arange(7,8,.001)
time_vec = np.arange(0,1,.001)
num_files = np.size(time_vec)
#creating the modulus vector
modulus_vec = np.zeros(np.shape(complex_vec))
for k in range (0,complex_vec.size):
a = complex_vec[k]
b = real_vec[k]
calc_modulus = np.sqrt(a**2 + b**2)
modulus_vec[k] = calc_modulus
#finally animateing
fig = plt.figure()
ax = plt.axes(xlim = (-1,1) ,ylim = (-1,15))#limits were arbitrary
#line = ax.plot([],[])
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return line,
def animate(i):
x = time_vec[i]
y = complex_vec[i]
y1 = real_vec[i]
y2 = modulus_vec[i]
# notice we are only calling set_data once, and bundling the y values into an array
line.set_data(x,np.array([y, y1, y2]))
return line,
animation_object = animation.FuncAnimation(fig,
animate,
init_func= init,
frames = num_files,
interval = 30,
blit = True)
#turnn this line on to save as mp4
#anim.save("give it a name.mp4", fps = 30, extra-args = ['vcodec', 'libx264'])
plt.show()
Your previous attempt was setting the **x** and **y** values, then overwriting
the previous with a new **x** and **y**, then doing that once again.
|
Pyspark reduceByKey with (key, Dictionary) tuple
Question: I'm a bit stuck trying to do a map-reduce on Databricks with Spark. I want
to process log files and reduce them to a (key, dict()) tuple.
However, I'm always getting an error, and I'm not a hundred percent sure this
is the right way to do it. I'd be very glad about any advice. As a result I
want everything mapped to (key, dict(values)) pairs.
This are my Mapper and Reducer
from collections import defaultdict
a = {u'1058493694': {'age': {u'25': 1}, 'areacode': {'unknown': 1}, 'catg': {'unknown': 1}, 'city': {'unknown': 1}, 'country': {'unknown': 1}, 'ethnicity': {'unknown': 1}, 'gender': {u'u': 1}, 'geo_data': {'unknown': 1}, 'user_ip': {u'149.6.187.*': 1}}}
b = {u'1058493694': {'age': {u'25': 1}, 'areacode': {'unknown': 1}, 'catg': {'unknown': 1}, 'city': {'London': 1}, 'country': {'unknown': 1}, 'ethnicity': {'unknown': 1}, 'gender': {u'Male': 1}, 'geo_data': {'unknown': 1}, 'user_ip': {u'149.6.187.*': 1}}}
def getValueFromJson(json_obj, field):
try:
raw = dict(json_obj)
if field in raw:
if raw[field]:
return {raw[field]: 1}
except:
return {'unknown': 1}
return {'unknown': 1}
def mapper(line):
attr = dict(defaultdict())
user_id = line.get("user_id", "unknown")
user_attr = ["age", "gender"]
location_attr = ["city", "state", "post_code", "country", "areacode", "user_ip", "geo_data"]
# combine both lists
attr_list = user_attr + location_attr
for item in attr_list:
attr[item] = getValueFromJson(line, item)
return (user_id, attr)
def reducer(a, b):
results = dict()
for key in a.keys():
val = dict()
for k in a[key].keys() + b[key].keys():
val[k] = a[key].get(k, 0) + b[key].get(k, 0)
results[key] = val
return results
I'm not sure if I can use the reducer the way I'm doing it. Any help on best
practices to achieve my goal would be appreciated.
user_pairs = imps_data.map(extractUserData)
user_totals = user_pairs.reduceByKey(joinUserData)
user_totals.take(25)
I then get the following error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-461-e1b93b972cac> in <module>()
1 user_pairs = imps_data.map(extractUserData)
2 user_totals = user_pairs.reduceByKey(joinUserData)
----> 3 user_totals.take(25)
/home/ubuntu/databricks/spark/python/pyspark/rdd.pyc in take(self, num)
1263
1264 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1265 res = self.context.runJob(self, takeUpToNumLeft, p, True)
1266
1267 items += res
/home/ubuntu/databricks/spark/python/pyspark/context.pyc in runJob(self, rdd, partitionFunc, partitions, allowLocal)
895 mappedRDD = rdd.mapPartitions(partitionFunc)
896 port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions,
--> 897 allowLocal)
898 return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
899
/home/ubuntu/databricks/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
/home/ubuntu/databricks/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 688.0 failed 4 times, most recent failure: Lost task 6.3 in stage 688.0 (TID 8308, 10.179.246.224): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/ubuntu/databricks/spark/python/pyspark/worker.py", line 111, in main
process()
File "/home/ubuntu/databricks/spark/python/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/home/ubuntu/databricks/spark/python/pyspark/rdd.py", line 2318, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/ubuntu/databricks/spark/python/pyspark/rdd.py", line 2318, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/ubuntu/databricks/spark/python/pyspark/rdd.py", line 304, in func
return f(iterator)
File "/home/ubuntu/databricks/spark/python/pyspark/rdd.py", line 1746, in combineLocally
merger.mergeValues(iterator)
File "/home/ubuntu/databricks/spark/python/pyspark/shuffle.py", line 266, in mergeValues
for k, v in iterator:
File "<ipython-input-456-90f3cdb37d50>", line 5, in extractUserData
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 733 (char 732)
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:315)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Thank you very much C
Answer: Thanks for the advice. That's what it was.
I simply added a filter on the RDD, filtering out lines that contain invalid JSON:
import json

def isJson(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True

data = sc.textFile(files, 20).filter(lambda line: isJson(line.split("||")[0]))
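On the reducer question itself: because each attribute maps to a nested count
dict (e.g. `{'city': {'London': 1}}`), a commutative and associative merge for
`reduceByKey` has to sum counts at the inner level. A minimal sketch of such a
merge (plain Python, assuming the shapes of `a` and `b` shown above; `merge_attrs`
is a hypothetical helper name) could be:
from collections import Counter

def merge_attrs(a, b):
    # a and b map attribute name -> {value: count}; sum the inner counts
    merged = {}
    for attr in set(a) | set(b):
        merged[attr] = dict(Counter(a.get(attr, {})) + Counter(b.get(attr, {})))
    return merged

user_totals = user_pairs.reduceByKey(merge_attrs)
Counter addition sums counts per key, which is exactly the semantics an
associative reducer needs here.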
|
Is there a way to have atom editor automatically open docs related to a particular python module when it is imported?
Question: Basically, when I import a module, I want Atom to automatically show related
object methods and a little information about each method, to speed up
programming. Is that possible?
Answer: There is, for example, [autocomplete-
python](https://atom.io/packages/autocomplete-python) which relies on
[jedi](https://github.com/davidhalter/jedi) to provide _excellent_ information
for autocompletion.
The gif below is from the package's landing page and shows it in action:
[](http://i.stack.imgur.com/p1vOD.gif)
|
how to get week number on python?
Question: I want to get the week number, as in the picture below:
[](http://i.stack.imgur.com/zTOsQ.png)
If I insert 20150502, it should print "week 1".
If I insert 20150504, it should print "week 2".
If I insert 20150522, it should print "week 4".
How do I get the week number?
Answer: Here's a quick attempt; as with all date/time code, there are probably a lot of
edge cases that can cause strange results.
import datetime
# dateutil's parser is very good for converting from string to date
from dateutil.parser import parse
date_strs = ['20150502', '20150504', '20150522']
for date_str in date_strs:
d = parse(date_str)
month_start = datetime.datetime(d.year, d.month, 1)
week = d.isocalendar()[1] - month_start.isocalendar()[1] + 1
print(date_str + ':', week)
Output:
20150502: 1
20150504: 2
20150522: 4
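An alternative without dateutil, assuming Monday-start weeks as in the expected
outputs above, computes the week of the month arithmetically from the weekday of
the 1st (`week_of_month` is a hypothetical helper name):
import datetime

def week_of_month(d):
    # shift by the weekday of the 1st so that weeks begin on Monday
    first = d.replace(day=1)
    return (d.day - 1 + first.weekday()) // 7 + 1

print(week_of_month(datetime.date(2015, 5, 2)))   # 1
print(week_of_month(datetime.date(2015, 5, 4)))   # 2
print(week_of_month(datetime.date(2015, 5, 22)))  # 4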
|