How to exclude non-numeric variables from a data frame in Python
Question: I have a data frame which comprises datatypes integer, string, numeric etc.,
something like below. I want to exclude all the variables which are non-
numeric. Is there an automated way in Python?
'data.frame': 891 obs. of 12 variables:
$ PassengerId: int 1 2 3 4 5 6 7 8 9 10 ...
$ Survived : int 0 1 1 1 0 0 0 0 1 1 ...
$ Pclass : int 3 1 3 1 3 3 1 3 3 2 ...
$ Name : Factor w/ 891 levels "Abbing, Mr. Anthony",..: 109 191 358 277 16 559 520 629 417 581 ...
$ Sex : Factor w/ 2 levels "female","male": 2 1 1 1 2 2 2 2 1 1 ...
$ Age : num 22 38 26 35 35 NA 54 2 27 14 ...
$ SibSp : int 1 1 0 1 0 0 0 3 0 1 ...
$ Parch : int 0 0 0 0 0 0 0 1 2 0 ...
$ Ticket : Factor w/ 681 levels "110152","110413",..: 524 597 670 50 473 276 86 396 345 133 ...
$ Fare : num 7.25 71.28 7.92 53.1 8.05 ...
$ Cabin : Factor w/ 148 levels "","A10","A14",..: 1 83 1 57 1 1 131 1 1 1 ...
$ Embarked : Factor w/ 4 levels "","C","Q","S": 4 2 4 4 4 3 4 4 4 2 ...
After excluding the non-numeric variables, my dataframe should look like the below:
'data.frame': 891 obs. of 7 variables:
$ PassengerId: int 1 2 3 4 5 6 7 8 9 10 ...
$ Survived : int 0 1 1 1 0 0 0 0 1 1 ...
$ Pclass : int 3 1 3 1 3 3 1 3 3 2 ...
$ Age : num 22 38 26 35 35 NA 54 2 27 14 ...
$ SibSp : int 1 1 0 1 0 0 0 3 0 1 ...
$ Parch : int 0 0 0 0 0 0 0 1 2 0 ...
$ Fare : num 7.25 71.28 7.92 53.1 8.05 ...
Answer: We could use `._get_numeric_data()`
import pandas as pd #import the pandas library
#creating a small dataset for testing
df1 = pd.DataFrame({'PassengerId' : [1, 2, 3],
'Name' : ['Abbing, Mr. Anthony', 'Ann, C', 'John, H'],
'Fare' : [7.25, 71.28, 7.92]})
#extract only the numeric column types
df2 = df1._get_numeric_data()
print(df2)
Or another option is `select_dtypes()`
df3 = df1.select_dtypes(include = ['int64', 'float64'])
print(df3)
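Note that `_get_numeric_data()` is a private pandas method, so it can change
without notice; `select_dtypes()` is the public API. As a small sketch
(assuming a reasonably recent pandas), the explicit dtype list can also be
replaced with the `'number'` alias, which catches all numeric dtypes:
#select every numeric column regardless of its exact dtype
df4 = df1.select_dtypes(include = ['number'])
print(df4)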
|
Python SQLite3 is not working
Question: I am practicing using SQLite3 in Python. The Python version is 2.7 and I am
using Mac OS.
I cannot find any problem with my code... When I run it, I do not get any
syntax error. My code is supposed to print out every element of the database. It
does not print anything at all.
import sqlite3
createDb = sqlite3.connect('sample.db')
queryCurs = createDb.cursor()
def createTable():
queryCurs.execute('''CREATE TABLE customers
(id INTEGER PRIMARY KEY, name TEXT, street TEXT, city TEXT, state TEXT, balance REAL)''')
def addCust(name, street, city, state, balance):
queryCurs.execute('''INSERT INTO customers (name, street, city, state, balance))
VALUES (?,?,?,?,?) ''',(name, street, city, state, balance))
def main():
createTable()
addCust('test1','123 mel','vancouver','bc', 10000000.00)
addCust('test2','145 joy','ottawa','on', 10000000.00)
addCust('test3','521 tick','toronto','on', 10000000.00)
addCust('test4','5832 tock','burnaby','bc', 10000000.00)
createDb.commit()
queryCurs.execute('SELECT * FROM customers')
for i in queryCurs:
print "\n"
for j in i:
print j
Can you tell me what I am doing wrong?
Answer: You never _call_ your `main()` function. Python doesn't automatically call
functions, `main()` has no special meaning. You'd normally add a "script
test":
if __name__ == '__main__':
main()
to the end of your module to call a function when the file is run as a script;
when your file is imported as a module instead `__name__` contains the module
name instead of `'__main__'`.
When you do, you'll see you have a SQL syntax error:
Traceback (most recent call last):
File "test.py", line 33, in <module>
main()
File "test.py", line 18, in main
addCust('test1','123 mel','vancouver','bc', 10000000.00)
File "test.py", line 13, in addCust
VALUES (?,?,?,?,?) ''',(name, street, city, state, balance))
sqlite3.OperationalError: near ")": syntax error
You have one too many `)` in your SQL following the list of column names,
remove it:
def addCust(name, street, city, state, balance):
queryCurs.execute('''INSERT INTO customers (name, street, city, state, balance)
VALUES (?,?,?,?,?) ''',(name, street, city, state, balance))
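As an aside, a minimal sketch of using the connection as a context manager, so
the commit happens automatically when the block succeeds (and is rolled back on
an exception):
with createDb:
    addCust('test1', '123 mel', 'vancouver', 'bc', 10000000.00)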
|
How to read a folder structure and assign it to a data structure?
Question: I'm only starting with Python, and I'm trying to accomplish the following:
I have a folder structure (simplified):
-folder1
---file1
---file2
-folder2
---file3
-folder3
---file4
---file5
---file6
I'd like to read the filenames into some kind of data structure that is able to
distinguish which files are from the same folder. I've used glob in the one-
folder case, but is it possible to get, for example, the following data
structure using glob?
files = [{file1, folder1}, {file2, folder1}, {file3, folder2}...]
Answer: I assume you'd rather get this kind of structure:
files = {folder1: [file1, file2], folder2: [file3], ...}
The following code will do the trick:
import os
rootDir = '.'
files = {}
for dirName, subdirList, fileList in os.walk(rootDir):
files[dirName] = fileList
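For the sample tree above (with `rootDir` pointing at its parent), `files` would
end up looking something like this, with the root directory itself also present
as a key:
files = {'.': [], './folder1': ['file1', 'file2'], './folder2': ['file3'], './folder3': ['file4', 'file5', 'file6']}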
|
Optimizing an Euler-Maruyama implementation in python
Question: I was tasked to solve a specific Stochastic Differential Equation (SDE) and an
ordinary DE that both depend on each other. I've been using the [Euler-
Maruyama method](https://en.wikipedia.org/wiki/Euler%E2%80%93Maruyama_method)
to solve the equations and currently need to solve 100 000 of these (they
simulate particle paths and momentums). The code itself works fine, but the
problem is that as I need to increase the amount of time steps for my
algorithm, the calculation time naturally increases too. I originally chose
python for the task as I'm most accustomed to it, although I was very aware of
its reputation as not being optimal for HPC stuff (slow loops). I'll probably
have to change to using python as a "glue" with fortran subroutines at some
point, but I was wondering if there's still some way to milk a little
performance out of the current code.
I'm currently using the function EM to solve (simulate) the DE's, and then a
helper function that collects all the simulated times, paths and momenta and
appends them to their respective arrays. The code looks something like this:
def EM(tstart, tend, steps, x1, x2, x3, x4, x5):
dt = float((tend - tstart))/steps # Timestep
T = np.linspace(tstart, tend, steps)
y = [x3*T[0] - 0.01] # Table for the paths
p = [x3*x2*x4] # Table for the momentum
pos = 0.0
mom = 0.0
for i in range(1, steps):
pos = y[i-1] + dt*(((x1*y[i-1])/(x3*T[i-1])) + (3.0*p[i-1]*(T[0]/T[i-1])/x2)*((2*(x3*T[i-1]-y[i-1])/(y[i-1]+x5))-1)) + (np.sqrt(6*(p[i-1]*(T[0]/T[i-1])/x2)*(x3*T[i-1]-y[i-1])) * np.sqrt(dt) * np.random.normal(0,1))
#Boundary condition
if(pos > x3*T[i]):
v = (pos-y[i-1])/dt
tdot = (y[i-1]-v*T[i-1])/(x3-v)
pos = x3*tdot - v*(T[i-1]+dt-tdot)
mom = p[i-1] - (1.0/3.0)*p[i-1]*(x1/(x3*T[i-1]))*(1 + (2*y[i-1]/(y[i-1]+x5)))*dt
y.append(pos)
p.append(mom)
#Boundary condition
if(pos < 0):
break
return T[0:i+1], y, p
Where x1,...,x5 are some constants. As of now, I need to use 108 000 time
steps, and running the code with the built in %timeit test
%timeit EM(1.0, 10.0, 108000, 1.0, 1.0, 2.0, 3.0, 1.0)
gives me best-case results between 25 ms and 65 ms.
The helper function I use to collect all of these together is rather simple:
def helper(tstart, tend, steps, x1, x2, x3, x4, x5, no_of):
timespan = []
momentums = []
paths = []
for i in range(0, no_of):
t, y, p = EM(tstart, tend, steps, x1, x2, x3, x4, x5)
timespan.append(t)
paths.append(y)
momentums.append(p)
return timespan, paths, momentums
Running this through timeit with the following parameters
%timeit helper(1.0, 10.0, 108000, 1.0, 1.0, 2.0, 3.0, 1.0, 1000)
gives a best case result of 1 minute and 14 seconds (74 seconds), which with
100 000 particles would amount to 7400 seconds or roughly two hours. I can
still work with this, but it's probable that I'll have to add more simulations
or time steps in the future.
I was originally using numpy arrays, but changing to regular python lists
actually made the code faster. I'm guessing this is because you have to
declare the size of numpy arrays before using them (unless you want to use the
np.append method, but that's terribly slow in this situation) with the
np.zeros() method. So although the number of steps used is for example 108000,
only a fraction of the simulations end up being that long, so I ended up
needing to trim the zeros away from the arrays with np.trim_zeros().
I've been trying to use the [Numba library](http://numba.pydata.org/) and its
`@jit` decorator, but I can't get it to work. It gives me the following error:
NotImplementedError: Failed at nopython (nopython frontend)
(<class 'numba.ir.Expr'>, build_list(items=[Var($0.20, <ipython-input- 32-4529989cafc0> (5))]))
Could removing the helper function, and just running the code with the for-
loop that appends the simulated arrays, improve the run time? Is there a way to
run the code without using for-loops and instead use array operations? I've
heard that speeds things up quite a bit.
Any other thoughts? Your help is much appreciated.
Answer: Because of the iterative nature of this problem it's not possible to replace
the loop with array operations.
As an alternative I think Numba is indeed a good choice. Numba doesn't work
with Python lists (hence the exception you received), so then you're
restricted to Numpy arrays. To handle the a priori unknown array size, the
`ndarray.resize` instance method is nice to use, because it frees the unused
memory (as opposed to taking a slice, which keeps a reference to the whole
array). The code would look something like this:
import numpy as np
from numba import jit
@jit(nopython=True)
def EM_helper(T, x1, x2, x3, x5, y, p):
dt = (T[-1] - T[0]) / T.shape[0]
for i in range(1, T.shape[0]):
# ...big calculation
y[i] = pos
p[i] = mom
#Boundary condition
if(pos < 0):
break
return i+1
def EM(T, x1, x2, x3, x4, x5):
y = np.empty_like(T, dtype=float) # Table for the path
p = np.empty_like(T, dtype=float) # Table for the momentum
y[0] = x3*T[0] - 0.01
p[0] = x3*x2*x4
count = EM_helper(T, x1, x2, x3, x5, y, p)
y.resize(count)
p.resize(count)
return T[:count], y, p
Instead of the `EM_helper` function you could also try to rely on automatic
"loop lifting" but this is less robust in my experience.
The creation of the time array `T = np.linspace(tstart, tend, steps)` I moved
outside of the function, because in a quick test I found it would become a
performance bottleneck.
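For illustration, a hypothetical usage sketch with the time array created once
outside and reused for every particle:
T = np.linspace(1.0, 10.0, 108000)
t, y, p = EM(T, 1.0, 1.0, 2.0, 3.0, 1.0)
The first call pays Numba's one-time compilation cost; later calls in the
helper loop reuse the compiled function.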
|
Speeding up Gaussian EM algorithm
Question: My python code is as follows...it takes forever. There must be some numpy
tricks I can use? The picture I am analyzing is tiny and in grayscale...
def gaussian_probability(x,mean,standard_dev):
termA = 1.0 / (standard_dev*np.sqrt(2.0*np.pi))
termB = np.exp(-((x - mean)**2.0)/(2.0*(standard_dev**2.0)))
g = (termA*termB)
return g
def sum_of_gaussians(x):
return sum([self.mixing_coefficients[i] *
gaussian_probability(x, self.means[i], self.variances[i]**0.5)
for i in range(self.num_components)])
def expectation():
dim = self.image_matrix.shape
rows, cols = dim[0], dim[1]
responsibilities = []
for i in range(self.num_components):
gamma_k = np.zeros([rows, cols])
for j in range(rows):
for k in range(cols):
p = (self.mixing_coefficients[i] *
gaussian_probability(self.image_matrix[j,k],
self.means[i],
self.variances[i]**0.5))
gamma_k[j,k] = p / sum_of_gaussians(self.image_matrix[j,k])
responsibilities.append(gamma_k)
return responsibilities
I included only the expectation step, because, while the maximization step
loops through every element of the responsibility array of matrices, it seems
to go relatively quickly (so maybe the bottleneck is all of the
gaussian_probability calculations?)
Answer: You can greatly speed up your calculation by doing two things:
* don't compute the normalization within each loop! As currently written, you call `sum_of_gaussians` (itself `O[M]`) once for every pixel and component, so for an NxN image with M components the whole step is `O[N^2 M^2]`. Instead you should compute all the elements once, and then divide by the sum, which will be `O[N^2 M]`.
* use numpy vectorization rather than explicit loops. This can be done very straightforwardly the way you've set up the code.
Essentially, your `expectation` function should look something like this:
def expectation(self):
responsibilities = (self.mixing_coefficients[:, None, None] *
gaussian_probability(self.image_matrix,
self.means[:, None, None],
self.variances[:, None, None] ** 0.5))
return responsibilities / responsibilities.sum(0)
You didn't provide a complete example, so I had to improvise a bit to check
and benchmark this, but here's a quick take:
import numpy as np
def gaussian_probability(x,mean,standard_dev):
termA = 1.0 / (standard_dev*np.sqrt(2.0*np.pi))
termB = np.exp(-((x - mean)**2.0)/(2.0*(standard_dev**2.0)))
return termA * termB
class EM(object):
def __init__(self, N=5):
self.image_matrix = np.random.rand(20, 20)
self.num_components = N
self.mixing_coefficients = 1 + np.random.rand(N)
self.means = 10 * np.random.rand(N)
self.variances = np.ones(N)
def sum_of_gaussians(self, x):
return sum([self.mixing_coefficients[i] *
gaussian_probability(x, self.means[i], self.variances[i]**0.5)
for i in range(self.num_components)])
def expectation(self):
dim = self.image_matrix.shape
rows, cols = dim[0], dim[1]
responsibilities = []
for i in range(self.num_components):
gamma_k = np.zeros([rows, cols])
for j in range(rows):
for k in range(cols):
p = (self.mixing_coefficients[i] *
gaussian_probability(self.image_matrix[j,k],
self.means[i],
self.variances[i]**0.5))
gamma_k[j,k] = p / self.sum_of_gaussians(self.image_matrix[j,k])
responsibilities.append(gamma_k)
return responsibilities
def expectation_fast(self):
responsibilities = (self.mixing_coefficients[:, None, None] *
gaussian_probability(self.image_matrix,
self.means[:, None, None],
self.variances[:, None, None] ** 0.5))
return responsibilities / responsibilities.sum(0)
Now we can instantiate the object and compare the two implementations of the
expectation step:
em = EM(5)
np.allclose(em.expectation(),
em.expectation_fast())
# True
Looking at the timings, we're about a factor of 1000 faster for a 20x20 image
with 5 components:
%timeit em.expectation()
10 loops, best of 3: 65.9 ms per loop
%timeit em.expectation_fast()
10000 loops, best of 3: 74.5 µs per loop
This improvement will grow as the image size and number of components
increase. Best of luck!
|
Approximation by sin waves using DFT on python. What's wrong?
Question: I'm writing a program in Python that can approximate a time series by sine
waves. The program uses the DFT to find sine waves, and after that it chooses
the sine waves with the biggest amplitudes.
Here's my code:
__author__ = 'FATVVS'
import math
# Wave - (amplitude,frequency,phase)
# This class was created to sort sin waves:
# - by amplitude (set freq_sort=False)
# - by frequency (set freq_sort=True)
class Wave:
#flag for choosing sort mode:
# False-sort by amplitude
# True-by frequency
freq_sort = False
def __init__(self, amp, freq, phase):
self.freq = freq #frequency
self.amp = amp #amplitude
self.phase = phase
def __lt__(self, other):
if self.freq_sort:
return self.freq < other.freq
else:
return self.amp < other.amp
def __gt__(self, other):
if self.freq_sort:
return self.freq > other.freq
else:
return self.amp > other.amp
def __le__(self, other):
if self.freq_sort:
return self.freq <= other.freq
else:
return self.amp <= other.amp
def __ge__(self, other):
if self.freq_sort:
return self.freq >= other.freq
else:
return self.amp >= other.amp
def __str__(self):
s = "(amp=" + str(self.amp) + ",frq=" + str(self.freq) + ",phase=" + str(self.phase) + ")"
return s
def __repr__(self):
return self.__str__()
#Discrete Fourier Transform
def dft(series: list):
n = len(series)
m = int(n / 2)
real = [0 for _ in range(n)]
imag = [0 for _ in range(n)]
amplitude = []
phase = []
angle_const = 2 * math.pi / n
for w in range(m):
a = w * angle_const
for t in range(n):
real[w] += series[t] * math.cos(a * t)
imag[w] += series[t] * math.sin(a * t)
amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / n)
phase.append(math.atan(imag[w] / real[w]))
return amplitude, phase
#extract waves from time series
# series - time series
# num - number of waves
def get_waves(series: list, num):
amp, phase = dft(series)
m = len(amp)
waves = []
for i in range(m):
waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
waves.sort()
waves.reverse()
waves = waves[0:num]#extract best waves
print("the program found the next %s sin waves:"%(num))
print(waves)#print best waves
return waves
#approximation by sin waves
#series - time series
#num- number of sin waves
def sin_waves_appr(series: list, num):
n = len(series)
freq = get_waves(series, num)
m = len(freq)
model = []
for i in range(n):
summ = 0
for j in range(m): #sum by sin waves
summ += freq[j].amp * math.sin(freq[j].freq * i + freq[j].phase)
model.append(summ)
return model
if __name__ == '__main__':
import matplotlib.pyplot as plt
N = 500 # length of time series
num = 2 # number of sin wawes, that we want to find
#y - generate time series
y = [2 * math.sin(0.05 * t + 0.5) + 0.5 * math.sin(0.2 * t + 1.5) for t in range(N)]
model = sin_waves_appr(y, num) #generate approximation model
## ------------------plotting-----------------
plt.figure(1)
# plotting of time series and his approximation model
plt.subplot(211)
h_signal, = plt.plot(y, label='source timeseries')
h_model, = plt.plot(model, label='model', linestyle='--')
plt.legend(handles=[h_signal, h_model])
plt.grid()
# plotting of spectre
amp, _ = dft(y)
xaxis = [2*math.pi*i / N for i in range(len(amp))]
plt.subplot(212)
h_freq, = plt.plot(xaxis, amp, label='spectre')
plt.legend(handles=[h_freq])
plt.grid()
plt.show()
But I've got a strange result: [](http://i.stack.imgur.com/mIonA.png)
In the program I've created a time series from two sin waves:
y = [2 * math.sin(0.05 * t + 0.5) + 0.5 * math.sin(0.2 * t + 1.5) for t in range(N)]
And my program found the wrong parameters for the sin waves:
> the program found the next 2 sin waves:
> [(amp=0.9998029885151699,frq=0.10053096491487339,phase=1.1411803525843616),
> (amp=0.24800925225626422,frq=0.40212385965949354,phase=0.346757128184013)]
I suppose that my problem is wrong scaling of the wave parameters, but I'm not
sure. There are two places where the program does scaling. The first place is
the creation of the waves:
for i in range(m):
waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
And the second place is the scaling of the x-axis:
xaxis = [2*math.pi*i / N for i in range(len(amp))]
But my assumption may be wrong. I've tried changing the scaling many times, and
it hasn't solved my problem.
What may be wrong with the code?
Answer: So, these lines I believe are wrong:
for t in range(n):
real[w] += series[t] * math.cos(a * t)
imag[w] += series[t] * math.sin(a * t)
amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / n)
phase.append(math.atan(imag[w] / real[w]))
I believe it should be dividing by m instead of n, since you are only
computing half the points. That will fix the amplitude problem. Also, the
computation of `imag[w]` is missing a negative sign, and `math.atan` should be
`math.atan2` so the phase lands in the correct quadrant. With those fixes it
would look like:
for t in range(n):
real[w] += series[t] * math.cos(a * t)
imag[w] += -1 * series[t] * math.sin(a * t)
amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / m)
phase.append(math.atan2(imag[w], real[w]))
The next one is here:
for i in range(m):
waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
The divide by `m` is not right. `amp` has only half the points it should, so
using the length of amp isn't right here. It should be:
for i in range(m):
waves.append(Wave(amp[i], 2 * math.pi * i / (m * 2), phase[i]))
Finally, your model reconstruction has a problem:
for j in range(m): #sum by sin waves
summ += freq[j].amp * math.sin(freq[j].freq * i + freq[j].phase)
It should use cosine instead (sine introduces a phase shift):
for j in range(m): #sum by cos waves
summ += freq[j].amp * math.cos(freq[j].freq * i + freq[j].phase)
When I fix all of that, the model and the DFT both make sense:
[](http://i.stack.imgur.com/xXLsG.png)
|
Python select() not waiting for terminal input after forkpty()
Question: I am trying to write a python script that will automatically log in to a
remote host via ssh and update a users password. Since ssh demands that it
take its input from a terminal, I am using os.forkpty(), running ssh in the
child process and using the parent process to send command input through the
pseudo terminal. Here is what I have so far:
import os, sys, time, getpass, select, termios
# Time in seconds between commands sent to tty
SLEEP_TIME = 1
NETWORK_TIMEOUT = 15
#------------------------------Get Passwords------------------------------------
# get username
login = getpass.getuser()
# get current password
current_pass = getpass.getpass("Enter current password: ")
# get new password, retry if same as current password
new_pass = current_pass
first_try = True
while new_pass == current_pass:
if first_try:
first_try = False
else:
# New password equal to old password
print("New password must differ from current password.")
# Get new password
new_pass = getpass.getpass("Enter new password: ")
new_pass_confirm = getpass.getpass("Confirm new password: ")
while new_pass != new_pass_confirm:
# Passwords do not match
print("Passwords do not match")
new_pass = getpass.getpass("Enter new password: ")
new_pass_confirm = getpass.getpass("Confirm new password: ")
#------------------------------End Get Passwords--------------------------------
ssh = "/usr/bin/ssh" # ssh bin location
args = ["ssh", login + "@localhost", "-o StrictHostKeyChecking=no"]
#fork
pid, master = os.forkpty()
if pid == 0:
# Turn off echo so master does not need to read back its own input
attrs = termios.tcgetattr(sys.stdin.fileno())
attrs[3] = attrs[3] & ~termios.ECHO
termios.tcsetattr(sys.stdin.fileno(), termios.TCSADRAIN, attrs)
os.execv(ssh, args)
else:
select.select([master], [], [])
os.write(master, current_pass + "\n")
select.select([master], [], [])
os.write(master, "passwd\n")
select.select([master], [], [])
os.write(master, current_pass + "\n")
select.select([master], [], [])
os.write(master, new_pass + "\n")
select.select([master], [], [])
os.write(master, new_pass + "\n")
select.select([master], [], [])
os.write(master, "id\n")
select.select([master], [], [])
sys.stdout.write(os.read(master, 2048))
os.wait()
The script prompts the user for his/her current and new passwords, then forks
and sends appropriate responses to ssh login prompt and then passwd prompts.
The problem I am having is that the select syscalls are not behaving as I
would expect. They don't appear to be blocking at all. I'm thinking that I am
misunderstanding something about the way select works with the master end of a
pty.
If I replace them all with time.sleep(1), the script works fine, but I don't
want to have to rely on that solution because I can't always guarantee the
network will respond in a short time, and I don't want to make it something
ridiculous that will take forever (I intend to use this to programmatically log
into several servers to update passwords).
Is there a way to reliably poll the master side of a pty to wait for the
slave's output?
Note: I realize there are better solutions to this problem with things like
sshpass and chpasswd, but I am in an environment where this cannot be run as
root and very few utilities are available. Thankfully python is.
Answer: `select` doesn't read any data; it simply blocks until data is available to be
read.
Since you don't read any data after the first `select`, there will still be
data left in the buffer for you to read, so any subsequent `select` will not
block.
You need to read the data in the buffer before calling `select` again. Doing
this without blocking means that you will likely have to put the master fd into
non-blocking mode, for example with `fcntl`; a minimal sketch:
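import os, fcntl
flags = fcntl.fcntl(master, fcntl.F_GETFL)
fcntl.fcntl(master, fcntl.F_SETFL, flags | os.O_NONBLOCK)
# os.read(master, 2048) now raises OSError (EAGAIN) instead of blocking when no data is ready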
* * *
A better way of providing the password over SSH would be to [use the `--stdin`
flag if your `passwd` supports
it](http://stackoverflow.com/questions/714915/using-the-passwd-command-from-
within-a-shell-script), and to run the command directly over SSH instead of
through the created shell.
import subprocess
handle = subprocess.Popen(["ssh", login + "@localhost", "-o StrictHostKeyChecking=no", "passwd --stdin"],
stdin=subprocess.PIPE)
handle.communicate("\n".join([oldpass, newpass, newpass, ""]))
|
Cutting a list in Python based on a condition
Question: For example:
my_string='ONETWOTHREEFOURFIVESIXSEVENEIGHTNINETEN'
and how to make it become
['ONE','TWO','THREE','FOUR','FIVE','SIX','SEVEN','EIGHT','NINE','TEN']
How can I do this efficiently without importing anything?
Answer:
lengths = (3, 3, 5, 4, 4, 3, 5, 5, 4, 3) # the length of each word, in order
it = iter('ONETWOTHREEFOURFIVESIXSEVENEIGHTNINETEN')
words = (''.join(next(it) for _ in range(length)) for length in lengths)
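Note that `words` is a generator expression, so wrap it in `list(...)` to
materialize the result:
print(list(words))
# ['ONE', 'TWO', 'THREE', 'FOUR', 'FIVE', 'SIX', 'SEVEN', 'EIGHT', 'NINE', 'TEN']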
|
Python variable range
Question: I'm trying to have a range that I can change on the fly. What I have found
here didn't help me to fix my error.
I am currently using a fixed variable to test what I am doing.
I cannot figure out how to post the actual code; I can't get it to paste. So
this is the general idea (I made this run; it fails with variables but works
with numbers).
I am using Anaconda3 in Spyder (Python 3.5).
hi = 20
hf = 0
step = .1
H = np.arange(hi, hf, step) ####range(start,end,step)
U = []
for l in H:
U.append(l)
print(U)
>>> []
I tried using `float` and `int` both when defining the variables (in every
way) and inside the `range()`. I tried a few other things as well. But the
only way I can get anything but an empty `[]` being printed is by using
numbers (any type of number works).
Answer: Running exactly this code:
import numpy
U=[]
H=numpy.arange(0, 20, 0.1)
for l in H:
U.append(l)
print(U)
works exactly as you'd expect. The problem is that you have the bounds switched
around: with your constants the call is actually `arange(20, 0, 0.1)`, which
produces an empty array because the start is above the stop while the step is
positive.
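If a range counting down from 20 to 0 is what's actually wanted, a negative
step does it: `np.arange(20, 0, -0.1)`.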
|
wxPython automatically does event at startup when it shouldn't
Question: So, I am pretty new to programming. I am currently making a simple text editor
(claaaassic) in wxPython on Mac. I have the "Save" menu item bound to the
method saveFile(), which takes a custom class file() as a parameter for the
directory and file type of the designated file. Everything looks normal, but
for some reason the method is called automatically at startup, and doesn't
run when I actually click the Save menu item. Any help would be much
appreciated!
Also, the code below is just a sample of my program.
import wx
# Custom class file
class file():
def __init__(self, directory, file_type):
self.directory = directory #location of file
self.file_type = file_type #type of file (text, HTML, etc.)
# Main Frame
class Frame1(wx.Frame):
def __init__(self, *args, **kwargs):
wx.Frame.__init__(self, None, -1, 'Text Program', size=(500, 700))
self.Bind(wx.EVT_CLOSE, self.Exit)
self.f1 = file("/Users/Sasha/Desktop/File.txt", "Text File") # an example file
self.InitUI()
def InitUI(self):
self.text = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.CreateStatusBar()
# the reason we use "self.text" instead of just "text", is because if we want to use multiple windows, I believe
self.menu()
# Last. Centers and shows the window.
self.Centre()
self.Show()
# Save file, WHERE I HAVE PROBLEMS
def saveFile(self, a, file):
directory = file.directory
file_type = file.file_type
file = open(directory, "r+") #r+ is reading and writing, so if file is same, no need to write
print file.name + ",", file_type
l = file.read()
print l
file.close()
# Exit method, pops up with dialog to confirm quit
def Exit(self, a):
b = wx.MessageDialog(self, "Do you really want to close this application?", 'Confirm Exit', wx.CANCEL | wx.OK)
result = b.ShowModal()
if result == wx.ID_OK:
self.Destroy()
b.Destroy()
def menu(self):
# A shortcut method to bind a menu item to a method
def bindMethod(item, method):
self.Bind(wx.EVT_MENU, method, item)
# FILE
fileMenu = wx.Menu()
new = fileMenu.Append(wx.ID_NEW, "New")
save = fileMenu.Append(wx.ID_SAVE, "Save")
saveAs = fileMenu.Append(wx.ID_SAVEAS, "Save As")
bindMethod(new, self.newFile)
bindMethod(save, self.saveFile(self, self.f1)) # WHERE I MAY HAVE PROBLEMS
bindMethod(saveAs, self.saveAs)
menuBar = wx.MenuBar()
menuBar.Append(fileMenu, "&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
if __name__ == '__main__':
app = wx.App()
frame = Frame1()
app.MainLoop()
Answer: You call the method on the same line where you think you bind it. Instead of
passing the method, you pass the return value of the method called.
In this line, the second parameter is a method:
bindMethod(new, self.newFile)
In this line, the method is called and second parameter is whatever is
returned, e.g. None:
bindMethod(save, self.saveFile(self, self.f1))
When you bind methods, the parameters are not used, so you bind the method
name and do not call it at that place. wxPython will call the bound
method with the correct parameters when the event is fired.
That also probably means that you cannot use the parameters that you have,
since the callback for `wx.EVT_MENU` is given only the event object. You would
need to create some kind of file dialog to get those parameters inside the
`saveFile` method.
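If you do want to pass extra arguments at bind time, one common pattern (a
sketch, not the only fix) is to wrap the call in a lambda that receives the
event object:
bindMethod(save, lambda event: self.saveFile(event, self.f1))
The lambda itself is what gets bound, so `saveFile` only runs when the menu
event actually fires.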
|
pyodbc autocommit does not appear to work with sybase and sqlalchemy
Question: I am connecting to a sybase ASE 15 database from Python 3.4 using pyodbc and
executing a stored procedure.
All works as expected if I use native pyodbc:
import pd
import pyodbc
con = pyodbc.connect('DSN=dsn_name;UID=username;PWD=password', autocommit=True)
df = pd.read_sql("exec p_procecure @GroupName='GROUP'", con)
[Driver is Adaptive Server Enterprise].
I have to have **autocommit=True** and if I do not I get the following error:
> DatabaseError: Execution failed on sql 'exec ....': ('ZZZZZ', "[ZZZZZ]
> [SAP][ASE ODBC Driver][Adaptive Server Enterprise]Stored procedure
> 'p_procedure' may be run only in unchained transaction mode. The 'SET
> CHAINED OFF' command will cause the current session to use unchained
> transaction mode.\n (7713) (SQLExecDirectW)")
I attempt to achieve the same using SQLAlchemy (1.0.9):
from sqlalchemy import create_engine, engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import text
url = r'sybase+pyodbc://username:password@dsn'
engine = create_engine(url, echo=True)
sess = sessionmaker(bind=engine).Session()
df = pd.read_sql(text("exec p_procedure @GroupName='GROUP'"),conn.execution_options(autocommit=True))
The error message is the same despite the fact I have specified
**autocommit=True** on the connection. (I have also tested this at the session
level but should not be necessary and made no difference).
> DBAPIError: (pyodbc.Error) ('ZZZZZ', "[ZZZZZ] [SAP][ASE ODBC
> Driver][Adaptive Server Enterprise]....
Can you see anything wrong here?
As always, any help would be much appreciated.
Answer: Passing the autocommit=True argument as an item in the connect_args argument
dictionary does work:
connect_args = {'autocommit': True}
create_engine(url, connect_args=connect_args)
> connect_args – a dictionary of options which will be passed directly to the
> DBAPI’s connect() method as additional keyword arguments.
|
Python numerical integration with Simpson's rule
Question: I have started to work through this book ([Computational Physics Exercise
5.4](http://www-personal.umich.edu/~mejn/cp/chapters/int.pdf)) and its
exercises, and I got stuck on the following question:
[](http://i.stack.imgur.com/A2cLq.png)
Write a Python function J(m,x) that calculates the value of Jm(x) using
Simpson’s rule with N = 1000 points. Use your function in a program to make a
plot, on a single graph, of the Bessel functions J0, J1, and J2 as a function
of x from x = 0 to x = 20.
I have created the following code to evaluate the first part of the question,
but I'm not sure if even this is correct:
def f(x, t):
return 1 / pi * (math.cos(x - t * math.sin(x)))
def float_range(a, b, c):
while a < b:
yield a
a += c
N = 1000
a = 0.0
b = 20.0
h = (b - a) / N
c = 0.0
d = pi
h2 = (d - c) / N
s = 0.5 * f(a, 1) + 0.5 * f(b, 1)
s / 3
S1 = 0
S2 = 0
for k in range(1, N):
for j in range(0, N):
if k%2 == 0:
S1 += 2 / 3 * f(a + k * h, c + k * h2)
else:
S2 += 4 / 3 * f(a + k * h, c + k * h2)
s += S1 + S2
print(h * s)
Could anyone please help me to solve this question? I have never used Bessel
functions before.
Answer: Your code is a little messy. You've defined a generator function `float_range`
but you never use it, and the rest of your code is tacked onto that function,
but should be separated from it, and not indented at the same level.
You've also messed up the definition of the core function that needs to be
integrated to get the Bessel function. You left out the constant `m`, and you
mixed up the `x` and the `t` (theta). When you're evaluating the integral only
the parameter `t` should vary - `m` and `x` are fixed.
Anyway, here's some working code:
from math import sin, cos, pi
# The core function of the Bessel integral
def f(m, x, t):
return cos(m * t - x * sin(t))
#The number of steps used in the Simpson's integral approximation
N = 1000
def J(m, x):
''' Approximate Bessel function Jm(x) for integer m '''
# lower & upper limits of the integral
a = 0.0
b = pi
# step size
h = (b - a) / N
# Sum the values for Simpson's integration
s = f(m, x, a) + f(m, x, b)
for i in range(1, N):
t = a + i * h
if i % 2 == 1:
s += 4.0 * f(m, x, t)
else:
s += 2.0 * f(m, x, t)
# multiply by h/3 to get the integral
# and divide by pi to get the Bessel function.
return s * h / (3.0 * pi)
for x in range(21):
print(x, J(0, x), J(1, x), J(2, x))
**output**
0 1.0 3.59712259979e-17 5.16623780792e-17
1 0.765197686558 0.440050585745 0.114903484932
2 0.223890779141 0.576724807757 0.352834028616
3 -0.260051954902 0.339058958526 0.486091260586
4 -0.397149809864 -0.0660433280235 0.364128145852
5 -0.177596771314 -0.327579137591 0.0465651162778
6 0.150645257251 -0.276683858128 -0.24287320996
7 0.30007927052 -0.00468282348235 -0.301417220086
8 0.171650807138 0.234636346854 -0.112991720424
9 -0.0903336111829 0.245311786573 0.144847341533
10 -0.245935764451 0.0434727461689 0.254630313685
11 -0.171190300407 -0.176785298957 0.139047518779
12 0.0476893107968 -0.223447104491 -0.0849304948786
13 0.206926102377 -0.0703180521218 -0.217744264242
14 0.17107347611 0.133375154699 -0.152019882582
15 -0.0142244728268 0.205104038614 0.0415716779753
16 -0.174899073984 0.0903971756613 0.186198720941
17 -0.169854252151 -0.0976684927578 0.158363841239
18 -0.013355805722 -0.187994885488 -0.0075325148878
19 0.14662943966 -0.105701431142 -0.157755906096
20 0.167024664341 0.0668331241758 -0.160341351923
* * *
The accuracy of this approximation is surprisingly good. Here are some values
generated using the [mpmath](http://mpmath.org/) module, which supplies
various Bessel functions.
0 1.0 0.0 0.0
1 0.76519768655796655145 0.44005058574493351596 0.11490348493190048047
2 0.22389077914123566805 0.5767248077568733872 0.35283402861563771915
3 -0.26005195490193343762 0.33905895852593645893 0.48609126058589107691
4 -0.39714980986384737229 -0.066043328023549136143 0.36412814585207280421
5 -0.17759677131433830435 -0.32757913759146522204 0.046565116277752215532
6 0.15064525725099693166 -0.27668385812756560817 -0.24287320996018546772
7 0.30007927051955559665 -0.0046828234823458326991 -0.30141722008594012028
8 0.17165080713755390609 0.23463634685391462438 -0.11299172042407525
9 -0.090333611182876134336 0.24531178657332527232 0.14484734153250397263
10 -0.2459357644513483352 0.04347274616886143667 0.25463031368512062253
11 -0.17119030040719608835 -0.17678529895672150114 0.13904751877870126996
12 0.047689310796833536624 -0.22344710449062761237 -0.084930494878604805352
13 0.206926102377067811 -0.070318052121778371157 -0.21774426424195679117
14 0.17107347611045865906 0.13337515469879325311 -0.15201988258205962291
15 -0.014224472826780773234 0.20510403861352276115 0.04157167797525047472
16 -0.17489907398362918483 0.090397175661304186239 0.18619872094129220811
17 -0.16985425215118354791 -0.097668492757780650236 0.15836384123850347142
18 -0.013355805721984110885 -0.18799488548806959401 -0.0075325148878013995603
19 0.14662943965965120426 -0.1057014311424092668 -0.15775590609569428497
20 0.16702466434058315473 0.066833124175850045579 -0.16034135192299815017
I'll let you figure out the plotting part of your exercise. :)
* * *
Here's the mpmath code I used to generate those Bessel function values above,
which are accurate to 20 significant figures:
from mpmath import mp
# set precision
mp.dps = 20
for x in range(21):
print(x, mp.besselj(0, x), mp.besselj(1, x), mp.besselj(2, x))
|
Getting the name of a chosen file in applescript
Question: I need my AppleScript to know the name of a file chosen by a user in a `choose
file` command. It sounds like it should be really simple, but I can't figure
out the answer. The script extracts the frames from a gif file and puts the
individual images in a folder inside the application's contents. It then
changes the desktop background rapidly to the images inside the folder, hence
giving you a gif for a wallpaper. However, I can't do this without knowing the
name of the chosen gif file, as I don't know the names of the images in the
folder. If there's some other easy way around this, that would be great too.
This is what I have so far:
on delay duration
set endTime to (current date) + duration
repeat while (current date) is less than endTime
tell AppleScript to delay duration
end repeat
end delay
set gifFiles to choose file of type "com.compuserve.gif" with prompt "Select GIF"
set dest to quoted form of POSIX path of ((path to me as string) & "Contents:Resources:Gif")
set pScript to quoted form of "from AppKit import NSApplication, NSImage, NSImageCurrentFrame, NSGIFFileType; import sys, os
tName=os.path.basename(sys.argv[1])
dir=sys.argv[2]
app=NSApplication.sharedApplication()
img=NSImage.alloc().initWithContentsOfFile_(sys.argv[1])
if img:
gifRep=img.representations()[0]
frames=gifRep.valueForProperty_('NSImageFrameCount')
if frames:
for i in range(frames.intValue()):
gifRep.setProperty_withValue_(NSImageCurrentFrame, i)
gifRep.representationUsingType_properties_(NSGIFFileType, None).writeToFile_atomically_(dir + tName + ' ' + str(i + 1).zfill(2) + '.gif', True)
print (i + 1)"
repeat with f in gifFiles
set numberOfExtractedGIFs to (do shell script "/usr/bin/python -c " & pScript & " " & (quoted form of POSIX path of f) & " " & dest) as integer
end repeat
repeat
set desktop_image to (path to me as string) & "Contents:Resources:Gif:"
tell application "Finder" to set the desktop picture to desktop_image
delay 0.05
set desktop_image to (path to me as string) & "Contents:Resources:Gif:"
tell application "Finder" to set the desktop picture to desktop_image
delay 0.05
set desktop_image to (path to me as string) & "Contents:Resources:Gif:"
tell application "Finder" to set the desktop picture to desktop_image
delay 0.05
set desktop_image to (path to me as string) & "Contents:Resources:Gif:"
tell application "Finder" to set the desktop picture to desktop_image
delay 0.05
set desktop_image to (path to me as string) & "Contents:Resources:Gif:"
tell application "Finder" to set the desktop picture to desktop_image
Answer: You can get the name with `System Events`
set gifFiles to choose file of type "com.compuserve.gif" with prompt "Select GIF"
tell application "System Events" to set gifFileName to name of gifFiles
or with the `info for` command; although it has been deprecated for years, it
still works even in El Capitan
set gifFiles to choose file of type "com.compuserve.gif" with prompt "Select GIF"
set gifFileName to name of (info for gifFiles)
Of course it's possible using the Finder, but it's recommended to avoid
involving the Finder as much as possible.
|
python-requests post with unicode filenames
Question: I've read through several related questions here on SO but didn't manage to
find a working solution.
I have a Flask server with this simplified code:
app = Flask(__name__)
api = Api(app)
class SendMailAPI(Resource):
def post(self):
print request.files
return Response(status=200)
api.add_resource(SendMailAPI, '/')
if __name__ == '__main__':
app.run(host='0.0.0.0', debug=True)
Then in the client:
# coding:utf-8
import requests
eng_file_name = 'a.txt'
heb_file_name = u'א.txt'
requests.post('http://localhost:5000/', files={'file0': open(eng_file_name, 'rb')})
requests.post('http://localhost:5000/', files={'file0': open(heb_file_name, 'rb')})
When sending the first request with the non-utf-8 filename the server receives
the request with the file and prints `ImmutableMultiDict([('file0',
<FileStorage: u'a.txt' (None)>)])`, but when sending the file with the utf-8
filename the server doesn't seem to receive the file as it prints
`ImmutableMultiDict([])`.
I'm using requests `2.3.0` but the problem doesn't resolve with the latest
version as well (`2.8.1`), Flask version is `0.10.1` and Flask-RESTful version
is `0.3.4`.
I've done some digging in `requests` code and the request seems to be sent ok
(ie with the file), and I printed the request right before it is being sent
and see the file name was indeed encoded to RFC2231:
--6ea257530b254861b71626f10a801726
Content-Disposition: form-data; name="file0"; filename*=utf-8''%D7%90.txt
To sum things up, I'm not entirely sure if the problem lies within `requests`
that doesn't properly attach the file to the request or if `Flask` is having
issues with picking up files with file names that are encoded according to
RFC2231.
**UPDATE:** Came across this issue in `requests` GitHub:
<https://github.com/kennethreitz/requests/issues/2505>
Answer: I think maybe there's some confusion on encoding here -
eng_file_name = 'a.txt' # ASCII encoded, by default in Python 2
heb_file_name = u'א.txt' # NOT UTF-8 Encoded - just a unicode object
To send the second one to the server what you want to do is this:
requests.post('http://localhost:5000/', files={'file0': open(heb_file_name.encode('utf-8'), 'rb')})
I'm a little surprised that it doesn't throw an error on the client trying to
open the file though - you see nothing on the client end indicating an error?
EDIT: An easy way to confirm or deny my idea is of course to print out the
contents from inside the client to ensure it's being read properly.
|
WebBrowser Library Python 2.7
Question: The Python webbrowser library has a `new` parameter for its `open` function; by
default, it will open a url in the same browser window. Is there a way to open
the url in the same TAB? Like whatever the current page that is open in the
browser, redirect that page to the specified url.
current code is:
import webbrowser
url = "http://www.google.com"
webbrowser.open(url)
but this opens in a new tab, and I want it to open in my current tab. Thanks in
advance.
Answer: It should be as easy as adding new=0 like this: `webbrowser.open(url, new=0)`
> Display url using the default browser. If new is 0, the url is opened in the
> same browser window if possible.
according to: <https://docs.python.org/2/library/webbrowser.html>
|
Python Google Drive API discovery.build fails with exit code -1073740777 (0xC0000417)
Question: I am building a python application which uploads images to google drive.
However, after working for some time, my google drive upload suddenly
stopped working. Whenever I try to initialize the service, the program exits
with code -1073740777 (0xC0000417).
I have already tried to create a new client_secret.json file with the
developer console (also with a completely different Google account) and
deleting the drive-python-quickstart.json credential file.
My friends do not have this problem with the same code and as I said, this has
worked for me for some time, too, but suddenly stopped working.
I am running Windows 10 Pro x64 with Python 3.5 32 Bit.
The problem occurs when running this example program (taken from [the Google
quickstart guide](https://developers.google.com/drive/web/quickstart/python)):
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Drive API Python Quickstart'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'drive-python-quickstart.json')
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
"""Shows basic usage of the Google Drive API.
Creates a Google Drive API service object and outputs the names and IDs
for up to 10 files.
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('drive', 'v2', http=http)
results = service.files().list(maxResults=10).execute()
items = results.get('items', [])
if not items:
print('No files found.')
else:
print('Files:')
for item in items:
print('{0} ({1})'.format(item['title'], item['id']))
if __name__ == '__main__':
main()
Answer: Had the same problem while working with the Google YouTube API, on the same
OS (Win 10 Pro x64) and the same version of Python (3.5).
I have no idea how, but adding these lines in the main function helped:
import sys
sys.modules['win32file'] = None # reportedly stops oauth2client from importing the broken win32file locking module
|
heroku & django: server doesn't load staticfiles
Question: Last night I decided to put my django school project into production. For this
I used Heroku. Everything is going fine, but "the last touch" is driving me
crazy. It seems that it doesn't see my static files, like the bootstrap CSS, on
the website, and I don't know why, because for example the ball.jpg is loaded
and the admin CSS is loaded too. If someone could find the problem that I've
been dealing with for the last 10 hours, it would be a life saver :)
You can see the results [here](http://footballapp.herokuapp.com/)
My settings.py
from django.conf import settings
if not settings.DEBUG:
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*****'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ["*"]
# Honor the 'X-Forwarded-Proto' header for request.is_secure()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'crispy_forms',
'team',
'tournament',
'geoposition',
'emailing',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'football.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'football.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = settings.DATABASES
DATABASES['default'] = dj_database_url.config()
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static asset configuration
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "media", "static_root")
STATICFILES_DIRS = (
os.path.join(BASE_DIR, "media_in_pro", "static"),
)
# Simplified static file serving.
# https://warehouse.python.org/project/whitenoise/
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
# MEDIA SETTINGS
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "media", "media_root")
# CRISPY FORM SETTINGS
CRISPY_TEMPLATE_PACK = 'bootstrap3'
# Django-Geoposition
GEOPOSITION_MAP_OPTIONS = {
'minZoom': 8,
'maxZoom': 15,
'center': {'lat': 53.13248859999999,
'lng': 23.168840300000056 },
}
GEOPOSITION_MAP_WIDGET_HEIGHT = 240
GEOPOSITION_MARKER_OPTIONS = {
'cursor': 'move'
}
While Git pushing
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Using set buildpack heroku/python
remote: -----> Python app detected
remote: -----> Installing dependencies with pip
remote:
remote: -----> Preparing static assets
remote: Running collectstatic...
remote: Post-processed 'admin/js/admin/DateTimeShortcuts.js' as 'admin/js/admin/DateTimeShortcuts.140919a6a17e.js'
(...)
remote: 82 static files copied to '/app/media/static_root', 82 post-processed.
remote:
remote:
remote: -----> Discovering process types
remote: Procfile declares types -> web
remote:
remote: -----> Compressing... done, 48.6MB
remote: -----> Launching... done, v51
remote: https://footballapp.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy.... done.
My heroku logs --tail:
2015-11-15T20:37:12.426289+00:00 heroku[slug-compiler]: Slug compilation started
2015-11-15T20:37:12.426301+00:00 heroku[slug-compiler]: Slug compilation finished
2015-11-15T20:37:12.456467+00:00 heroku[web.1]: State changed from up to starting
2015-11-15T20:37:16.171136+00:00 heroku[web.1]: Starting process with command `gunicorn football.wsgi`
2015-11-15T20:37:16.794898+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2015-11-15T20:37:17.969513+00:00 app[web.1]: [2015-11-15 20:37:17 +0000] [3] [INFO] Shutting down: Master
2015-11-15T20:37:17.956039+00:00 app[web.1]: [2015-11-15 20:37:17 +0000] [9] [INFO] Worker exiting (pid: 9)
2015-11-15T20:37:17.956047+00:00 app[web.1]: [2015-11-15 20:37:17 +0000] [10] [INFO] Worker exiting (pid: 10)
2015-11-15T20:37:17.956935+00:00 app[web.1]: [2015-11-15 20:37:17 +0000] [3] [INFO] Handling signal: term
2015-11-15T20:37:18.765840+00:00 heroku[web.1]: Process exited with status 0
2015-11-15T20:37:18.579708+00:00 app[web.1]: [2015-11-15 20:37:18 +0000] [3] [INFO] Listening at: http://0.0.0.0:46020 (3)
2015-11-15T20:37:18.579713+00:00 app[web.1]: [2015-11-15 20:37:18 +0000] [3] [INFO] Using worker: sync
2015-11-15T20:37:18.588372+00:00 app[web.1]: [2015-11-15 20:37:18 +0000] [9] [INFO] Booting worker with pid: 9
2015-11-15T20:37:18.578828+00:00 app[web.1]: [2015-11-15 20:37:18 +0000] [3] [INFO] Starting gunicorn 19.3.0
2015-11-15T20:37:18.655018+00:00 app[web.1]: [2015-11-15 20:37:18 +0000] [10] [INFO] Booting worker with pid: 10
2015-11-15T20:37:19.993385+00:00 heroku[web.1]: State changed from starting to up
2015-11-15T20:37:26.757037+00:00 heroku[router]: at=info method=GET path="/" host=footballapp.herokuapp.com request_id=16bde5f5-98bd-48a0-bde4-269947950c15 fwd="178.42.56.94" dyno=web.1 connect=1ms service=114ms status=200 bytes=4597
2015-11-15T20:37:27.090478+00:00 heroku[router]: at=info method=GET path="/static_root/img/cartoon-soccer-6.3dda712041b4.jpg" host=footballapp.herokuapp.com request_id=726fdf17-a3bb-4edc-94ee-8e0291c4fb68 fwd="178.42.56.94" dyno=web.1 connect=0ms service=4ms status=200 bytes=24769
2015-11-15T20:37:27.314794+00:00 heroku[router]: at=info method=GET path="/static_root/bootstrap-3.3.5-dist/js/npm.ccb7f3909e30.js" host=footballapp.herokuapp.com request_id=d22d2938-6955-4c0a-9c30-71a2dc408106 fwd="178.42.56.94" dyno=web.1 connect=1ms service=5ms status=200 bytes=566
2015-11-15T20:37:27.299303+00:00 heroku[router]: at=info method=GET path="/static_root/bootstrap-3.3.5-dist/js/bootstrap.8015042d0b4a.js" host=footballapp.herokuapp.com request_id=afceaef7-b06c-4523-b855-f6966a2829fa fwd="178.42.56.94" dyno=web.1 connect=0ms service=3ms status=200 bytes=14380
2015-11-15T20:37:43.569729+00:00 heroku[router]: at=info method=GET path="/" host=footballapp.herokuapp.com request_id=ab185988-12f4-4b78-a7d7-53eae067e891 fwd="178.42.56.94" dyno=web.1 connect=0ms service=8ms status=200 bytes=4597
2015-11-15T20:37:43.769540+00:00 heroku[router]: at=info method=GET path="/static_root/img/cartoon-soccer-6.3dda712041b4.jpg" host=footballapp.herokuapp.com request_id=e01d8276-1220-45f8-b9b1-866cd5ab2e7d fwd="178.42.56.94" dyno=web.1 connect=0ms service=2ms status=200 bytes=24769
2015-11-15T20:37:44.086677+00:00 heroku[router]: at=info method=GET path="/static_root/bootstrap-3.3.5-dist/js/npm.ccb7f3909e30.js" host=footballapp.herokuapp.com request_id=3172e050-63d5-421d-b3b2-cca661739c34 fwd="178.42.56.94" dyno=web.1 connect=0ms service=1ms status=200 bytes=566
2015-11-15T20:37:44.064945+00:00 heroku[router]: at=info method=GET path="/static_root/bootstrap-3.3.5-dist/js/bootstrap.8015042d0b4a.js" host=footballapp.herokuapp.com request_id=06b09fae-36db-4ac2-9d8d-fe0a2b3cfb06 fwd="178.42.56.94" dyno=web.1 connect=0ms service=1ms status=200 bytes=14380
2015-11-15T20:38:54.229389+00:00 heroku[api]: Deploy b75838d by [email protected]
2015-11-15T20:38:54.229389+00:00 heroku[api]: Release v51 created by [email protected]
2015-11-15T20:38:54.375061+00:00 heroku[slug-compiler]: Slug compilation started
2015-11-15T20:38:54.375070+00:00 heroku[slug-compiler]: Slug compilation finished
2015-11-15T20:38:54.509049+00:00 heroku[web.1]: State changed from up to starting
2015-11-15T20:38:57.508059+00:00 heroku[web.1]: Starting process with command `gunicorn football.wsgi`
2015-11-15T20:38:59.282919+00:00 app[web.1]: [2015-11-15 20:38:59 +0000] [3] [INFO] Starting gunicorn 19.3.0
2015-11-15T20:38:59.349517+00:00 app[web.1]: [2015-11-15 20:38:59 +0000] [10] [INFO] Booting worker with pid: 10
2015-11-15T20:38:59.283351+00:00 app[web.1]: [2015-11-15 20:38:59 +0000] [3] [INFO] Listening at: http://0.0.0.0:57838 (3)
2015-11-15T20:38:59.283475+00:00 app[web.1]: [2015-11-15 20:38:59 +0000] [3] [INFO] Using worker: sync
2015-11-15T20:38:59.286946+00:00 app[web.1]: [2015-11-15 20:38:59 +0000] [9] [INFO] Booting worker with pid: 9
2015-11-15T20:38:59.734335+00:00 heroku[web.1]: State changed from starting to up
2015-11-15T20:39:00.743234+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2015-11-15T20:39:02.232759+00:00 app[web.1]: [2015-11-15 20:39:02 +0000] [9] [INFO] Worker exiting (pid: 9)
2015-11-15T20:39:02.232763+00:00 app[web.1]: [2015-11-15 20:39:02 +0000] [10] [INFO] Worker exiting (pid: 10)
2015-11-15T20:39:02.252311+00:00 app[web.1]: [2015-11-15 20:39:02 +0000] [3] [INFO] Handling signal: term
2015-11-15T20:39:02.297059+00:00 app[web.1]: [2015-11-15 20:39:02 +0000] [3] [INFO] Shutting down: Master
EDIT: I found out that it doesn't seem to load the HTML files I include into
base.html. Has anyone had this problem?
Answer: I had a similar issue on Heroku recently. It seems that Django cannot serve
static assets by itself in production. I was able to use
[WhiteNoise](http://whitenoise.evans.io/en/stable/) to serve the statics
instead. First install it:
`$ pip install whitenoise`
Make sure you add `whitenoise` to your `requirements.txt` as well.
Now add the following to your **wsgi.py** :
from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise
application = get_wsgi_application()
application = DjangoWhiteNoise(application)
Finally, configure whitenoise in your settings.py:
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
or depending on which versions of Django and WhiteNoise you are running:
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
Also, make sure you have your statics correctly configured in your
**settings.py**
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'staticfiles')
STATIC_URL = '/static/'
# Extra places for collectstatic to find static files.
STATICFILES_DIRS = (
os.path.join(PROJECT_ROOT, 'static'),
)
More info: <https://devcenter.heroku.com/articles/django-assets>
|
How to remove a repeated dictionary in a list of dictionaries?
Question: I have a list in Python:
[{u'key': u'Central District', u'doc_count': 21468},
{u'key': u'Central District', u'doc_count': 6190},
{u'key': u'Central District', u'doc_count': 2060},
{u'key': u'Mexico', u'doc_count': 1884}]
but I need to turn it into this:
[{u'key': u'Central District', u'doc_count': 29718},
{u'key': u'Mexico', u'doc_count': 1884}]
How can I eliminate one of the repeated elements (in this case "Central
District"), and get the sum of the `doc_count` values of each "Central
District"?
Answer: `itertools.groupby` and `reduce` can help sum the values grouped by key. Note
that `groupby` only groups _adjacent_ items, so the list is sorted by key first
below, in case equal keys are scattered through the list.
from itertools import groupby
original = [{u'key': u'Central District', u'doc_count': 21468},
{u'key': u'Central District', u'doc_count': 6190},
{u'key': u'Central District', u'doc_count': 2060},
{u'key': u'Mexico', u'doc_count': 1884}]
def sum_reduce(obj1, obj2):
return {'key': obj1['key'], 'doc_count': obj1['doc_count'] + obj2['doc_count']}
    combined = [reduce(sum_reduce, group) for _, group in
                groupby(sorted(original, key=lambda x: x['key']), lambda x: x['key'])]
print combined
# output:
# [{'key': u'Central District', 'doc_count': 29718}, {u'key': u'Mexico', u'doc_count': 1884}]
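If the keys can repeat anywhere in the list (not just adjacently), a plain
dict accumulation avoids the sorting step entirely. A minimal sketch using the
same `original` list:

    totals = {}
    for item in original:
        totals[item['key']] = totals.get(item['key'], 0) + item['doc_count']
    combined = [{'key': k, 'doc_count': v} for k, v in totals.items()]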
|
Re-arrange List of Tuple in Python
Question: Here, I have a list of tuple as in the following format:
mylist = [('t1', 0, 23),
('t1', 1, 24),
('t1', 2, 25),
('t2', 0, 22),
('t2', 1, 25),
('t2', 2, 26)]
Now, I would like to arrange the list in a way such that it could be save into
a CSV file as shown in the following fomart:
t1 t2
0 23 22
1 24 25
2 25 26
What should I do?
Answer: Pandas is the best way.
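A minimal pandas sketch (assuming pandas is available; the column names
`name`, `idx` and `value` are made up for illustration):

    import pandas as pd

    mylist = [('t1', 0, 23), ('t1', 1, 24), ('t1', 2, 25),
              ('t2', 0, 22), ('t2', 1, 25), ('t2', 2, 26)]

    df = pd.DataFrame(mylist, columns=['name', 'idx', 'value'])
    # Pivot: one column per name, rows indexed by the middle field
    df.pivot(index='idx', columns='name', values='value').to_csv('out.csv')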
If you cannot use Pandas, you can use the [csv
module](https://docs.python.org/2/library/csv.html):
mylist = [('t1', 0, 23),
('t1', 1, 24),
('t1', 2, 25),
('t2', 0, 22),
('t2', 1, 25),
('t2', 2, 26)]
import csv
di={}
for t in mylist:
di.setdefault(t[0], []).append(t[2])
    fn = 'out.csv' # output file name
    with open(fn, 'w') as cf:
        w = csv.writer(cf)
        w.writerow(['index'] + di.keys())
        for line, t in enumerate(zip(*di.values())):
            w.writerow([line] + list(t))
File is then:
index,t2,t1
0,22,23
1,25,24
2,26,25
If order in the list of `t1, t2` is relevant, use an
[OrderedDict](https://docs.python.org/2/library/collections.html#collections.OrderedDict)
instead of dict.
|
ValueError: dictionary update sequence element #0 has length 1; 2 is required
Question: I am using tkinter and trying to create a library of frames instead of having
my program open new windows every time. I have begun creating a welcome page
and I am trying to display what I have created only for it to give me this
error message. "ValueError: dictionary update sequence element #0 has length
1; 2 is required" Here is my code:
#!/usr/bin/python
from tkinter import *
import tkinter as tk
Large_Font = ("Verdana", 18)
class ATM(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.pack(side = "top", fill ="both", expand =True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for i in (WelcomePage, Checking):
frame = i(container, self)
self.frames[i] = frame
frame.grid(row= 0, column = 0, sticky= "nsew")
self.show_frame(WelcomePage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class WelcomePage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(self, "Welcome to the ATM Simulator", font = Large_Font)
label.pack(pady=100, padx=100)
checkButton = Button(self, text = "Checking Account",
command = lambda: controller.show_frame(Checking))
checkButton.pack()
class Checking(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent, controller)
self.controller = controller
label = tk.Label(self, "Welcome to the ATM Simulator", font = Large_Font)
label.pack(pady=100, padx=100)
homeButton = Button(self, text = "Back to Home Page",
command = lambda: controller.show_frame(WelcomePage))
homeButton.pack()
app = ATM()
app.mainloop()
The error message is occurring because I state that
`frame = i(container, self)`
but when I create the class I state
`class WelcomePage(tk.Frame):`
The dictionary element in my WelcomePage class only has 1 parameter but I need
two. I tried putting `self` as the second parameter but that did not work.
This worked in Python 3.4 but now that I am using Python 3.5 it gives me this
error. How would I fix this?
Answer:
class Checking(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent, controller)
`Frame`'s initializer can't accept that many arguments: its second positional
argument is the `cnf` option dictionary, so passing `controller` there only
works if `controller` is a dictionary. Try:
class Checking(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
You should also use the `text` named argument to specify the text for your
labels.
label = tk.Label(self, text="Welcome to the ATM Simulator", font = Large_Font)
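You can reproduce the underlying error directly: the stray positional string
ends up being treated as the `cnf` option dictionary, and building a dict from
a plain string raises exactly the message from the question (each element of
the string is a 1-character "sequence element"):

    >>> dict("Welcome")
    Traceback (most recent call last):
      ...
    ValueError: dictionary update sequence element #0 has length 1; 2 is required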
|
Python: Transform [str] type Json data into [dict] type
Question:
import json
data1 = {'b': 789, 'c': 456, 'a': 123}
encode_line = json.dumps(data1)
    decode_line = json.loads(encode_line)
print type(encode_line)
print type(decode_line)
The type(decode_line) is [dict], but when we save data1 into a txt file and
read it using the script below:
file = open('test.txt','r')
for line in file:
encode_line = json.dumps(line)
decode_line = json.loads(encode_line)
print type(encode_line)
print type(decode_line)
Now, the type(decode_line) is [unicode]. Why?
I want to read data from the txt file and then retrieve info from the dict.
How should I do this?
Thanks!
Answer: **Hi!**
This should do the trick:
    import json

    def readJSON(filename):
        "Reads a JSON file and returns the corresponding dict."
        with open(filename) as f:
            return json.loads(f.read())
**How it works:**
`json.dumps(.)` converts Python dict to JSON string.
`json.loads(.)` converts JSON string to Python dict.
In your code, you call `json.dumps(line)`, but `line` is a string!
For `json.dumps(.)` to work correctly, you must supply a Python dict, not a
string.
In this context, `dumps` is read as "dump-s", "s" for string. It takes a dict
and dumps (i.e. returns) its string representation. Similarly, `loads` is
read as "load-s". It loads (i.e. accepts) a JSON string and returns its
corresponding dict.
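For example, assuming `test.txt` contains the output of `json.dumps(data1)`
from your first snippet:

    data = readJSON('test.txt')
    print type(data) # <type 'dict'>
    print data['a'] # 123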
**Hope this helps.**
|
Passing variables onto a MongoDB Query
Question: My collections has the following documents
{
cust_id: "0044234",
Address: "1234 Dunn Hill",
city: "Pittsburg",
comments : "4"
},
{
cust_id: "0097314",
Address: "5678 Dunn Hill",
city: "San Diego",
comments : "99"
},
{
cust_id: "012345",
Address: "2929 Dunn Hill",
city: "Pittsburg",
comments : "41"
}
I want to write a block of code that extracts and stores all cust_id's from
the same city. I am able to get the answer by running the below query on
MongoDB :
    db.custData.find({"city": 'Pittsburg'}, {cust_id: 1})
However, I am unable to do the same using Python. Below is what I have tried :
ctgrp=[{"$group":{"_id":"$city","number of cust":{"$sum":1}}}]
myDict={}
for line in collection.aggregate(ctgrp) : #for grouping all the cities in the dataset
myDict[line['_id']]=line['number of cust']
for key in myDict:
k=db.collection.find({"city" : 'key'},{'cust_id:1'})
print k
client.close()
Also, I am unable to figure out how I can store this. The only thing that
comes to mind is a dictionary with a 'list of values' corresponding to a
particular 'key', but I could not come up with an implementation. I was
looking for an output like this:

    For Pittsburg, the values would be 0044234 and 012345.
Answer: You can use the
[`.distinct`](http://api.mongodb.org/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.distinct)
method which is the best way to do this.
import pymongo
client = pymongo.MongoClient()
db = client.test
collection = db.collection
then:
collection.distinct('cust_id', {'city': 'Pittsburg'})
Yields:
['0044234', '012345']
or do this client side which is not efficient:
>>> cust_ids = set()
>>> for element in collection.find({'city': 'Pittsburg'}):
... cust_ids.add(element['cust_id'])
...
>>> cust_ids
{'0044234', '012345'}
Now, if you want all "cust_id" values for a given city, here it is:
>>> list(collection.aggregate([{'$match': {'city': 'Pittsburg'} }, {'$group': {'_id': None, 'cust_ids': {'$push': '$cust_id'}}}]))[0]['cust_ids']
['0044234', '012345']
Now, if you want to group the documents by city and collect the distinct
"cust_id" values per city, here it is:
>>> from pprint import pprint
>>> pipeline = [{'$group': {'_id': '$city', 'cust_ids': {'$addToSet': '$cust_id'}, 'count': {'$sum': 1}}}]
>>> pprint(list(collection.aggregate(pipeline)))
[{'_id': 'San Diego', 'count': 1, 'cust_ids': ['0097314']},
{'_id': 'Pittsburg', 'count': 2, 'cust_ids': ['012345', '0044234']}]
|
Python program and regex
Question: Would you be so kind as to help me with some regex and Python code? I have
recently received a lot of help with a Python script to go through some
firewall logs. I am almost there, but still need a little help.
Here is some log output from the firewall:
Nov 11 00:00:09 firewall %ASA-3-710003: TCP access denied by ACL from 1.1.1.1/50624 to internet:2.2.2.2/80
Nov 6 12:42:23 firewall %ASA-4-106023: Deny tcp src inside:3.3.3.3/42059 dst internet:4.4.4.4/389 by access-group "access_out" [0x0, 0x0]
The logs can both have IPv4 and IPv6 addresses. I need to extract the protocol
(tcp|udp|icmp), source_ip, the destination_ip, the destination_port and a
counter for each unique line. But as you can see, the log format differs
between the two lines: in one line the protocol is written in uppercase, in
the other in lowercase, and the text between the IP addresses also differs. I
am able to get the correct output from one of the lines but not both.
prot source destination port hitcnt
------ ------------------ ------------------ ------ ------------------
tcp 1111:2222:0:abcd::12 2222:3333:0:efab::101 389 180
tcp 2222:3333:0:efab::50 1111:2222:0:abcd::12 389 29
ump 1111:4444:0:1111::2 1111:2222:0:abcd::12 123 4
tcp 1.1.1.1 3.3.3.3 23 4
imp 2.2.2.2 4.4.4.4 4
The code from python script looks like this (thanks to holroy)
import re #for regular expressions - to match ip's
import sys #for parsing command line opts
from collections import Counter
DENY_PATTERN = re.compile(r'Deny\s(?P<protocol>.+?)\ssrc.*?:(?P<src>[0-9a-f\.]*)/?.*?\s.*?dst.*?:(?P<dst>[0-9a-f\.]*)((/(?P<dst_port>[0-9]*)\s)|\s)')
LINE_FORMAT='{0:<6.6} {1:<25.25} {2:<25.25} {3:<6.6} {4}'
    def process_log_file(logfile):
"""Reads through the log_file, and returns a counter based on Deny-lines."""
# Process file line by line
with open(logfile, 'r') as data :
seen = Counter()
# find all Deny line and append them in a list
for line in data :
# If line has 'Deny ' in it, then check it some more
if 'Deny ' in line :
seen.update(DENY_PATTERN.findall(line))
return seen
def print_counter(counter):
"""Pretty print the result of the counter."""
print(LINE_FORMAT.format('prot', 'source', 'destination', 'port', 'hitcnt'))
print(LINE_FORMAT.format(*tuple(('------------------',) * 5)))
for (protocol, src, dst, _, _, dst_port), count in counter.most_common():
print(LINE_FORMAT.format(protocol, src, dst, dst_port, count))
if __name__ == '__main__':
# if file is specified on command line, parse, else ask for file
if sys.argv[1:] :
print "File: %s" % (sys.argv[1])
logfile = sys.argv[1]
else:
logfile = raw_input("Please enter a file to parse, e.g /var/log/secure: ")
denial_counter = process_log_file(logfile)
print_counter(denial_counter)
I've tried a lot of regex combinations (e.g.
<https://regex101.com/r/wC6gS7/2>), but am not able to get the output right.
Also, if I try an or (`|`) between two regexes, I am not able to use
`(?P<protocol>)` in both, since a group name can only be used once. If I use a
DENY_PATTERN and DENY_PATTERN2 with one regex each and call them like this, it
also doesn't work:
for line in data :
if 'Deny ' in line :
seen.update(DENY_PATTERN.findall(line))
for line in data :
if 'ASA-3 ' in line :
seen.update(DENY_PATTERN2.findall(line))
Thank you in advance.
Answer: ## Testing multiple patterns
To test for multiple complex pattern, you'll want to add those tests within
the one and only loop through your data file. That is you want the following
code:
for line in data :
# If line has 'Deny ' in it, then check it some more
if 'Deny ' in line:
seen.update(DENY_PATTERN.findall(line))
# If line has 'ASA-3' in it, then check for the other pattern
if 'ASA-3' in line:
seen.update(DENIED_PATTERN.findall(line))
This way you read through the file one time, and only do the heavier pattern
matching if the basic test against `Deny` or `ASA-3` has already occurred.
## Regexp comments
In your regexp you have a few extra groups, which are not needed, and you test
against `[A-Z].*?`, which is _one_ uppercase character, followed by a non-
greedy match of anything. Unless that is specifically what you are after, it
would be better to use `[A-Z]*?` (or `[A-Za-z]*?`) to match a sequence of
characters. I took the liberty of saving a few extra versions, ending with the
following (see <https://regex101.com/r/wC6gS7/8>):
DENIED_PATTERN = re.compile(r'''
:\s+? # Start with leading white space
(?P<protocol>[A-Z]*?) # Capture "<protocol>" start
\saccess\s+denied\s+by\s+ACL\s+from\s+ # Matches literal string
(?P<src>[0-9a-f\.:]*)/.*?: # Captures "<src>" plus extra text
(?P<dst>[0-9a-f\.:]*) # Captures "<dst>"
(?:/(?P<dst_port>[0-9]*?))?\s # Optional captures "/<dst_port>"
''', re.X)
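As a quick sanity check, running the pattern against the first sample line (a
sketch; the trailing newline stands in for the whitespace that each line read
from the file would carry, satisfying the final `\s`):

    line = ('Nov 11 00:00:09 firewall %ASA-3-710003: TCP access denied '
            'by ACL from 1.1.1.1/50624 to internet:2.2.2.2/80\n')
    print DENIED_PATTERN.findall(line)
    # [('TCP', '1.1.1.1', '2.2.2.2', '80')]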
Note also that using the `re.X` (or `re.VERBOSE`) can drastically increase the
understanding of your patterns. Using this flag you can include whitespace and
end of line comments using `#`.
Using `(?: ... )` is a nice construct to create a group, which can be made
optional by adding a `?`, like in `(?:/(?P<dst_port>[0-9]+))?`. This would
optionally capture stuff like "/80". Note however that since it is now made
optional, I added a `\s` to satisfy the greediness of earlier constructs, as
it otherwise could be left out.
Regarding alternate groups, they are often usefully built with the non-capturing group construct, like in `(?: ... | ... | ...)`. But be aware that if you combine them with named capture groups, the names need to be unique.
## Changes to `print_counter()`
As the number of captured groups varies in the two patterns, there is a need
to modify the print out, as well. To differentiate between the two patterns
you could either depend upon the count of captured groups, or you could
introduce an identifier into your match groups. I.e. you could extend the
regexps to include `Deny` and `ASA-3`.
I chose to use the number of capture groups. As now the number of capture
groups varies in the for loop this poses a slight issue with expanding the
tuple directly as you either have too many or too few elements to unpack. The
solution to this is to delay the expansion a little. This ends up with the
following code:
def print_counter(counter):
"""Pretty print the result of the counter."""
print(LINE_FORMAT.format('prot', 'source', 'destination', 'port', 'hitcnt'))
print(LINE_FORMAT.format(*tuple(('------------------',) * 5)))
for re_match, count in counter.most_common():
re_match_length = len(re_match)
# Pick elements based on match from DENY_PATTERN
if re_match_length == 8:
protocol, src, src_port, dst, dst_port, icmp_spec, icmp_type, icmp_code = re_match
if icmp_code or icmp_type:
dst_port = '{}, {}'.format(icmp_code, icmp_type)
# Pick elements based on match from DENIED_PATTERN
if re_match_length == 4:
protocol, src, dst, dst_port = re_match
print(LINE_FORMAT.format(protocol.lower(), src, dst, dst_port, count))
To beautify the output a bit, I added a `protocol.lower()` so that they are
always presented in lowercase. This could also have been done within the
corresponding `if` statement, along side with other needed tweaking before
outputting the line.
|
ImportError: DLL load failed: when importing statsmodels
Question: My Python version is 3.5 on win32. I successfully installed Numpy+MKL, Scipy
and Statsmodels from here <http://www.lfd.uci.edu/~gohlke/pythonlibs/>
However, when I run
import statsmodels as sm
I get the following error:
Traceback (most recent call last):
File "D:\Python\Innovation\try\Try_Reg.py", line 6, in <module>
import statsmodels as sm
File "C:\Python35\lib\site-packages\statsmodels\__init__.py", line 8, in <module>
from .tools.sm_exceptions import (ConvergenceWarning, CacheWriteWarning, File "C:\Python35\lib\site-packages\statsmodels\tools\__init__.py", line 1, in <module>
from .tools import add_constant, categorical File "C:\Python35\lib\site-packages\statsmodels\tools\tools.py", line 9, in <module>
from statsmodels.distributions import (ECDF, monotone_fn_inverter, File "C:\Python35\lib\site-packages\statsmodels\distributions\__init__.py", line 1, in <module>
from .empirical_distribution import ECDF, monotone_fn_inverter, StepFunction File "C:\Python35\lib\site-packages\statsmodels\distributions\empirical_distribution.py", line 5, in <module>
from scipy.interpolate import interp1d File "C:\Python35\lib\site-packages\scipy\interpolate\__init__.py", line 145, in <module>
from .interpolate import * File "C:\Python35\lib\site-packages\scipy\interpolate\interpolate.py", line 16, in <module>
import scipy.special as spec File "C:\Python35\lib\site-packages\scipy\special\__init__.py", line 601, in <module>
from ._ufuncs import * File "scipy\special\_ufuncs.pyx", line 1, in init scipy.special._ufuncs (scipy\special\_ufuncs.c:26071)
ImportError: DLL load failed: The specified module could not be found.
How can I fix this?
Answer: Try installing the Visual C++ Redistributable from <https://www.microsoft.com/en-us/download/details.aspx?id=48145>
|
Why doesn't Python auto escape '\' in __doc__?
Question: It seems that some escape chars still matter in docstrings. For example, if we
run `python foo.py` (`Python 2.7.10`), it will emit an error like `ValueError:
invalid \x escape`.
def f():
"""
do not deal with '\x0'
"""
pass
And in effect, it seems the correct docstring should be:

    """
    do not deal with '\\x0'
    """
Additionally it also affects `import`.
For `Python 3.4.3+`, the error message is:
File "foo.py", line 4
"""
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 24-25: truncated \xXX escape
I find it a bit strange, since I thought it would only affect `__doc__` and
have no side effect on the module itself.
Why is it designed this way? Is it a flaw/bug in Python?
### NOTE
I know the meaning of `"""` and raw literals; however, I think the Python
interpreter should be able to treat docstrings specially, at least in theory.
Answer: From [**PEP 257**](https://www.python.org/dev/peps/pep-0257/):
> For consistency, always use `"""triple double quotes"""` around docstrings.
> Use `r"""raw triple double quotes"""` if you use any backslashes in your
> docstrings. For Unicode docstrings, use`u"""Unicode triple-quoted
> strings"""` .
>
> There are two forms of docstrings: one-liners and multi-line docstrings.
* * *
Also from
[**here**](http://error.news/question/4831101/get-%60object__doc__%60-as-raw-
string/):
> There's no such **python** type as "raw string" -- there are raw string
> **_literals_** , which are just one **_syntax_** approach (out of many) to
> specify constants (i.e., literals) that are of string types.
>
> So "getting" something "as a raw string" just makes no sense. You can
> **write** docstrings as raw string literals (i.e., with the prefix `r` \--
> that's exactly what denotes a raw string literal, the specific syntax that
> identifies such a constant to the python compiler), or else double up any
> backslashes in them (an alternative way to specify constant strings
> including backslash characters), but that has nothing to do with "getting"
> them one way or another.
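Applied to the snippet from the question, a raw docstring literal compiles
cleanly:

    def f():
        r"""
        do not deal with '\x0'
        """
        pass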
|
Python Regex for extracting specific part from string
Question: I have the following string:
SOURCEFILE: file_name.dc : 1 : log: the logging area
I am trying to store anything inbetween the third and the fourth colon in a
variable and discard the rest.
I've tried to make a regular expression to grab this but so far i have this
which is wrong :
([^:]:[^:]*)
I would appreciate some help with this and an explanation of the valid regex
so i can learn from my mistake.
Answer:
>>> import re
>>> s = "SOURCEFILE: file_name.dc : 1 : log: the logging area"
>>> s1 = re.sub(r"[^\:]*\:[^\:]*\:[^\:]*\:([^\:]*)\:.*", r"\1", s)
>>> print s1
log
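To unpack the pattern: each `[^\:]*\:` consumes one field up to and including
its colon, so three of them skip past the first three fields; the group
captures the fourth field, and the replacement `\1` keeps only that capture.
If a regex is not required, a plain split is arguably simpler:

    >>> s.split(":")[3].strip()
    'log'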
|
sentry install error in step webpack -p --bail
Question: install sentry 8.0.0 with:
pip install -e .
got following error:
running [webpack]
Container#eachAtRule is deprecated. Use Container#walkAtRules instead.
Container#eachRule is deprecated. Use Container#walkRules instead.
Container#eachDecl is deprecated. Use Container#walkDecls instead.
Node#removeSelf is deprecated. Use Node#remove.
Traceback (most recent call last):
File "/www/sentry/sentry/setup.py", line 264, in run
self._build_static()
File "/www/sentry/sentry/setup.py", line 306, in _build_static
cwd=work_path, env=env)
File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['node_modules/.bin/webpack', '-p', '--bail']' returned non
-zero exit status -9
unable to build Sentry's static assets!
Hint: You might be running an invalid version of NPM.
----------------------------------------
Command "/www/sentry/bin/python -c "import setuptools, tokenize; __file__='/www/sentry/sentry/setup.py'; exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" develop --no-deps" failed with error code 1 in /www/sentry/sentry
I had tried [email protected] [email protected] [email protected] all did't work
Answer: What you might try is this to see if you get a better error:
* Pull down the Sentry source code
* Run "npm install" from the Sentry root directory
We require Node 0.10.x or newer, but it should run on the latest versions as
well. I believe we test on 0.10 and 0.12 locally. Note also that a negative
exit status from Python's `subprocess` (here `-9`) means the child process was
killed by that signal; SIGKILL often comes from the kernel OOM killer, so a
machine with more memory (or swap) may get webpack past this step.
|
PHP excute Python SFTP script
Question: in PHP I need to do some SFTP, but I am having issues because I am not allowed
to install the SSH extension, and phpseclib is not working how I need it to.
As such, I am going to execute a Python script to do the SFTP for me. What I
imagine is doing something like the following:
exec("SFTPUpload.py remoteFile serverLocation");
So I execute SFTPUpload.py passing it the location of the file on my server
which needs transferring, and the location on the server it needs transferring
too.
In terms of the Python script (I am not too familiar with Python), I imagine
it would be something like the following
    import paramiko

    hostname = 'sftp.example.com' # placeholder server address
    username = 'sftpUser'
    password = 'sftpPassword'
    port = 22

    # source and destination would come from the command-line arguments
    #SFTP
    print "hostname=%s \n username=%s \n password=%s \n" % (hostname, username, password)
    t = paramiko.Transport((hostname, port))
    t.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(t)
    sftp.put(source, destination)
    sftp.close()
    t.close()
However, the way I am calling it from PHP, I need the Python to be in a class
or something so I can pass it the variables.
How would I achieve something like this?
Thanks
Answer: I believe you can do it with the `exec()` function as you described by simply
parsing the command line parameters in Python.
Like:
import sys
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
Where you can access the elements in `sys.argv` like a list.
(You could also take a look at the [getopt
module](https://pymotw.com/2/getopt/) which offers even more (C-like)
parameter passing options, but I believe the above solution will do.)
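Putting the two pieces together, a minimal sketch of `SFTPUpload.py` (the
hostname and credentials are placeholders; assumes paramiko is installed):

    import sys
    import paramiko

    # Usage: SFTPUpload.py <localFile> <remotePath>
    source, destination = sys.argv[1], sys.argv[2]

    hostname = 'sftp.example.com' # placeholder
    username = 'sftpUser'
    password = 'sftpPassword'

    t = paramiko.Transport((hostname, 22))
    t.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(t)
    sftp.put(source, destination)
    sftp.close()
    t.close()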
If the `exec()` function does not work, I believe you could consider using
the [system function](http://php.net/manual/en/function.system.php) in PHP:
Something like this:
$mystring = system('SFTPUpload.py remoteFile serverLocation', $retval);
|
Simple Python inquiry - MD5 hashing
Question: I'm trying to write a little Python script to hash some words. I'm using
hashcat to verify my outputs; something is wrong, and it should be a very
simple process... but I can't identify what I have done incorrectly. Only the
last of my outputs is hashed correctly. When using "123456" as a
test for 5 lines in my sample file I get the following output:
f447b20a7fcbf53a5d5be013ea0b15af
f447b20a7fcbf53a5d5be013ea0b15af
f447b20a7fcbf53a5d5be013ea0b15af
f447b20a7fcbf53a5d5be013ea0b15af
e10adc3949ba59abbe56e057f20f883e
Can someone point out the error of my ways. Would be greatly appreciated.
import hashlib
my_file = open("sample.txt" , "r")
for line in my_file:
try:
hash_object = hashlib.md5(line)
print(hash_object.hexdigest())
except:
print "Error"
my_file.close()
Answer: You get a different hash for the first four lines because you include the
trailing newline in the MD5 hash calculation. The last line does not have
this newline and therefore returns a different value. Simply remove the
surrounding whitespace using `strip()` and you will get the same hash for all
five lines:
import hashlib
my_file = open("sample.txt" , "r")
for line in my_file:
try:
hash_object = hashlib.md5(line.strip())
print(hash_object.hexdigest())
except:
print "Error"
my_file.close()
This gives as output:
e10adc3949ba59abbe56e057f20f883e
e10adc3949ba59abbe56e057f20f883e
e10adc3949ba59abbe56e057f20f883e
e10adc3949ba59abbe56e057f20f883e
e10adc3949ba59abbe56e057f20f883e
|
Moving from PHP/Laravel to Python/Django
Question: I want some clarity. I want to learn more about Django and use it as a
replacement for PHP/Laravel. But the default structure and conventions of
Django confuse me a bit.
My PHP/Laravel project has 3 parts:
- Administration
- Core (Web app for regular users)
- API Service (REST-API for mobile apps)
However all of controllers, models and views are contained in a single Laravel
application. I separated Auth, Admin, Api controllers into their own
folders/namespaces.
One thing that confuses me is the default Django structure of one views file
and one models file per app. How should I go about reworking this application
in Django: should each of my controllers be a separate app in my Django
project, or should I take the same approach as in Laravel, i.e. 3 Django apps
in one project, one for admin, one for core and one for the API? Where should
I keep my models then, since in Laravel all models are used by all 3 parts?
My current structure:
./
./controllers/
./auth/
LoginController.php
RegistrationController.php
...
./admin/
ReportsController.php
UserController.php (Admins overview of all users)
...
./api/
HealthController.php (API CRUD for Health resource)
ExerciseController.php
HomeController.php
UserController.php (Regular users profile page CRUD)
...
./models/
User.php
Health.php
Exercise.php
...
Answer: One thing to remember about Django is that an app in Laravel doesn't
necessarily translate to an app in Django. In Django, there are projects, and each project
can have any number of apps. For example, I have a "Backup Admin" project
where I manage a lot of the day-to-day issues of managing a tape backup
environment. I have an app for media (that has 3 models, one for regular
media, one for cleaning media, and one for media that we want to exclude from
tape ejections). I have an app that represents the backup images, and another
for backup jobs (to check status codes). Each sub-piece of my project goes
into another app.
If I wanted to do another Django project that had nothing to do with backups,
I'd make that a completely separate project, which would have a separate
directory structure from my backup project. It'd have its own urls.py,
settings.py, etc.
Regarding the models piece, I put all of one app's models in the same file.
For example, in my media app, I have models.py, which contains all three
models that I mentioned above. This is completely optional, but I do it just
so while importing these models into other parts of the project, I don't have
to remember what the file names are, instead I can just do this:
from media.models import CleaningMedia,Media,EjectExclusions
Otherwise I'd have to have 3 different import statements if they were in
different files. It's completely possible, but based on your preferences.
Regarding the controller, Django lets you do it either way. You have a
project-wide urls.py file that you can use to control all of the traffic, or
you can have separate urls.py files in each app to control that app's traffic.
I prefer a single file, but that's just me. Personally if you have a lot of
controller entries, you should probably split them up into app-specific
urls.py files, just to keep it clean, but again, either method would work. I
think of maintainability (especially with respect to teammates having to
support it) when I make these types of decisions.
The admin interface is built-in, so there's not really an app for that, but
you can decide which models and which apps have entries on the admin interface
quite easily. Each app has an admin.py file that controls this.
A side note, for a RESTful API, you also might want to consider Django Rest
Framework. It's a great piece of software, and the documentation (and
tutorials) are very helpful.
Edit: The 1 view/1 model thing again is just preference. You can have as many
files as you want. The only trade off is when you import them into other
files, you have to specify the file you're importing it from. That's really
all there is to it. I know people who have a views/ directory, and inside
there, have separate files for each view, keeping each class/function
separate. Totally a matter of preference.
|
How can I print a jinja dict in a deterministic order?
Question: I have a configuration file where a particular setting takes a long ugly json-
style dictionary. I would like to define this dictionary in YAML and print it
out with Jinja. Currently this looks something like this:
{%- load_yaml as some_conf_val %}
val1: something
val2: completely
val3: different
{%- endload %}
option = {{ some_conf_val }}
# Results in:
option = {'val2': 'completely', 'val1': 'something', 'val3': 'different'}
By happy coincidence this is exactly the format expected by the program being
configured, and the yaml block is much easier to read and modify than the
inline version. However, the fact that the keys come out in neither
alphabetical order nor the order they were defined leads me to suspect their
output order is non-deterministic. On re-running the state a few times they
always come out in the same order, but that doesn't prove much.
This is a problem because if the output string is altered, it gets treated as
a change to the file and triggers a service restart, even if nothing
functional has been changed. I don't care what specific order the keys are in,
but I do need that order to be the same every time.
How can I accomplish this? Or is it already deterministic and just doesn't
look like it?
(If I understand correctly jinja dicts are really python dicts under the hood,
and python dicts are unordered, so this may be impossible without including
code that's messier than the line I'm trying to not have to write. But I'm
hoping not.)
Answer: You are right, Python's standard dict is unordered. There is an ordered
version in the `collections` module:
[`collections.OrderedDict`](https://docs.python.org/3.4/library/collections.html#collections.OrderedDict).
We can use it for this purpose with a custom Jinja2 filter.
The custom filter first converts the input to OrderedDict, then to JSON, and
finally replaces the double quotation marks:
from json import dumps
from collections import OrderedDict
def conffilter(value):
return dumps(OrderedDict(value)).replace('"', '\'')
Then we have to [add this new
filter](http://jinja.pocoo.org/docs/dev/api/#custom-filters) to Jinja2:
env.filters['conffilter'] = conffilter
where `env` is the Jinja2
[`Environment`](http://jinja.pocoo.org/docs/dev/api/#basics) object from your
app (e.g. in Flask it's the `app.jinja_env`).
Now we can use the new filter with Jinja2's
[`dictsort`](http://jinja.pocoo.org/docs/dev/templates/#dictsort) filter.
# Define a list for testing:
{%- set some_conf_val = {'val2': 'completely', 'val1': 'something', 'val3': 'different'} -%}
# Unordered dict output
option = {{ some_conf_val }}
# Result:
option = {'val1': 'something', 'val3': 'different', 'val2': 'completely'}
# Ordered list of tuples
option = {{ some_conf_val|dictsort }}
# Result:
option = [('val1', 'something'), ('val2', 'completely'), ('val3', 'different')]
# Ordered dict-like output:
option = {{ some_conf_val|dictsort|conffilter }}
# Result:
option = {'val1': 'something', 'val2': 'completely', 'val3': 'different'}
|
What does from __future__ import absolute_import actually do?
Question: I have [answered](http://stackoverflow.com/a/22679558/2588818) a question
regarding absolute imports in Python, which I thought I understood based on
reading [the Python 2.5
changelog](https://docs.python.org/2.5/whatsnew/pep-328.html) and accompanying
[PEP](https://www.python.org/dev/peps/pep-0328/). However, upon installing
Python 2.5 and attempting to craft an example of properly using `from
__future__ import absolute_import`, I realize things are not so clear.
Straight from the changelog linked above, this statement accurately summarized
my understanding of the absolute import change:
> Let's say you have a package directory like this:
>
>
> pkg/
> pkg/__init__.py
> pkg/main.py
> pkg/string.py
>
>
> This defines a package named `pkg` containing the `pkg.main` and
> `pkg.string` submodules.
>
> Consider the code in the main.py module. What happens if it executes the
> statement `import string`? In Python 2.4 and earlier, it will first look in
> the package's directory to perform a relative import, finds pkg/string.py,
> imports the contents of that file as the `pkg.string` module, and that
> module is bound to the name `"string"` in the `pkg.main` module's namespace.
So I created this exact directory structure:
$ ls -R
.:
pkg/
./pkg:
__init__.py main.py string.py
`__init__.py` and `string.py` are empty. `main.py` contains the following
code:
import string
print string.ascii_uppercase
As expected, running this with Python 2.5 fails with an `AttributeError`:
$ python2.5 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
However, further along in the 2.5 changelog, we find this (emphasis added):
> In Python 2.5, you can switch `import`'s behaviour to absolute imports using
> a `from __future__ import absolute_import` directive. This absolute-import
> behaviour will become the default in a future version (probably Python 2.7).
> **Once absolute imports are the default,`import string` will always find the
> standard library's version.**
I thus created `pkg/main2.py`, identical to `main.py` but with the additional
future import directive. It now looks like this:
from __future__ import absolute_import
import string
print string.ascii_uppercase
Running this with Python 2.5, however... fails with an `AttributeError`:
$ python2.5 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
This pretty flatly contradicts the statement that `import string` will
**always** find the std-lib version with absolute imports enabled. What's
more, despite the warning that absolute imports are scheduled to become the
"new default" behavior, I hit this same problem using both Python 2.7, with or
without the `__future__` directive:
$ python2.7 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
$ python2.7 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
as well as Python 3.5, with or without (assuming the `print` statement is
changed in both files):
$ python3.5 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print(string.ascii_uppercase)
AttributeError: module 'string' has no attribute 'ascii_uppercase'
$ python3.5 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print(string.ascii_uppercase)
AttributeError: module 'string' has no attribute 'ascii_uppercase'
* * *
I have tested other variations of this. Instead of `string.py`, I have created
an empty module -- a directory named `string` containing only an empty
`__init__.py` \-- and instead of issuing imports from `main.py`, I have `cd`'d
to `pkg` and run imports directly from the REPL. Neither of these variations
(nor a combination of them) changed the results above. I cannot reconcile this
with what I have read about the `__future__` directive and absolute imports.
It seems to me that this is easily explicable by [the
following](https://docs.python.org/2/library/sys.html#sys.path) (this is from
the Python 2 docs but this statement remains unchanged in the same docs for
Python 3):
> ### sys.path
>
> (...)
>
> As initialized upon program startup, the first item of this list, `path[0]`,
> is the directory containing the script that was used to invoke the Python
> interpreter. If the script directory is not available (e.g. if the
> interpreter is invoked interactively or if the script is read from standard
> input), `path[0]` is the empty string, **which directs Python to search
> modules in the current directory first.**
So what am I missing? Why does the `__future__` statement seemingly not do
what it says, and what is the resolution of this contradiction between these
two sections of documentation, as well as between described and actual
behavior?
Answer: The changelog is sloppily worded. `from __future__ import absolute_import`
does not care about whether something is part of the standard library, and
`import string` will not always give you the standard-library module with
absolute imports on.
`from __future__ import absolute_import` means that if you `import string`,
Python will always look for a top-level `string` module, rather than
`current_package.string`. However, it does not affect the logic Python uses to
decide what file is the `string` module. When you do
python pkg/script.py
`pkg/script.py` doesn't look like part of a package to Python. Following the
normal procedures, the `pkg` directory is added to the path, and all `.py`
files in the `pkg` directory look like top-level modules. `import string`
finds `pkg/string.py` not because it's doing a relative import, but because
`pkg/string.py` appears to be the top-level module `string`. The fact that
this isn't the standard-library `string` module doesn't come up.
To run the file as part of the `pkg` package, you could do
python -m pkg.script
In this case, the `pkg` directory will not be added to the path. However, the
current directory will be added to the path.
You can also add some boilerplate to `pkg/script.py` to make Python treat it
as part of the `pkg` package even when run as a file:
if __name__ == '__main__' and __package__ is None:
__package__ = 'pkg'
However, this won't affect `sys.path`. You'll need some additional handling to
remove the `pkg` directory from the path, and if `pkg`'s parent directory
isn't on the path, you'll need to stick that on the path too.
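A quick way to see the difference, run from the directory that contains `pkg/`
(using the package layout from the question):

    $ python2.7 -m pkg.main2
    ABCDEFGHIJKLMNOPQRSTUVWXYZ

Run as part of the package, the future import makes `import string` resolve to
the standard library, and `ascii_uppercase` prints as expected.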
|
Scrape 3 tables from multiple URLs with beautifulsoup
Question: I am fairly new to web scraping and also to Python. For my bachelor thesis I
need data from river cruise ships. I was able to write the following code,
which works on <http://www.cruiseshipschedule.com>.
There is also a second script which I wrote to get all the links for the ships
I am interested in. I kindly ask for help to combine those two scripts in
order to scrape the same 3 tables from all the links. This is my scraper,
which works without any errors.
from mechanize import Browser
from bs4 import BeautifulSoup
import lxml
import csv
url1 = 'http://www.cruiseshipschedule.com/ama-waterways/ms-amabella-cruises/'
mech = Browser()
page1 = mech.open(url1)
html = page1.read()
soup1 = BeautifulSoup(html, "lxml")
ship_in = soup1.h1
ship_in = ship_in.text
ship = u' '.join(ship_in.split())
u' '.join(ship_in.split())
ship = [ship]
h21 = soup1.h2
h22 = h21.findNext('h2')
h23 = h22.findNext('h2')
h24 = h23.findNext('h2')
h25 = h24.findNext('h2')
h_y1 = h22.text
h_y2 = h23.text
h_y3 = h24.text
itinerary1_header = u' '.join(h_y1.split())
u' '.join(h_y1.split())
itinerary2_header = u' '.join(h_y2.split())
u' '.join(h_y2.split())
itinerary3_header = u' '.join(h_y3.split())
u' '.join(h_y3.split())
table_1 = soup1.findAll('table')[0]
table_2 = soup1.findAll('table')[1]
table_3 = soup1.findAll('table')[2]
rows_1 = table_1.findAll("tr")
rows_2 = table_2.findAll("tr")
rows_3 = table_3.findAll("tr")
for row_1 in rows_1:
cells_1 = row_1.findAll('td')
list_1 = table_1.findAll('li')
decks = str(list_1[0].get_text()).split(':')
cabin = str(list_1[1].get_text()).split(':')
cabin_number = str(list_1[2].get_text()).split(':')
list_key = ''.join(list(decks[0] + '|' + cabin[0] + '|' + cabin_number[0]))
list_value = ''.join(list(decks[1] + '|' + cabin[1] + '|' + cabin_number[1]))
list_key = list_key.split('|')
list_value = list_value.split('|')
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
col1_1 = str(cells_1[0].get_text()).split('\n') # This structure isolate the item by its column in the table and converts it into a string.
col2_1 = str(cells_1[1].get_text()).split('\n')
col3_1 = str(cells_1[2].get_text()).split('\n')
col4_1 = str(cells_1[3].get_text()).split('\n')
except:
continue #This tells the computer to move on to the next item after it encounters an error
keys_1 = ['ship'] + col1_1 + col3_1 + list_key
values_1 = ship + col2_1 + col4_1 + list_value
dict_1 = dict(zip(keys_1, values_1))
with open('Z:/Cruiseshipschedule/details/details_'+ ship_in + '.csv', 'wb') as f: # Just use 'w' mode in 3.x
w = csv.DictWriter(f, dict_1.keys())
w.writeheader()
w.writerow(dict_1)
f.close()
list_of_rows_2=[]
for row_2 in rows_2:
cells_2 = row_2.find_all("td")
list_of_cells_2 = [itinerary1_header]
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
date1_2 = str(cells_2[0].get_text()) # This structure isolate the item by its column in the table and converts it into a string.
itinerary2_2 = str(cells_2[1].get_text()).split('\n')
price3_2 = str(cells_2[2].get_text()).split('\n')
list_of_cells_2.append(date1_2)
list_of_cells_2.append(itinerary2_2)
list_of_cells_2.append(price3_2)
except:
continue #This tells the computer to move on to the next item after it encounters an error
list_of_cells_2.append(ship)
list_of_rows_2.append(list_of_cells_2)
outfile_2 = open('Z:/Cruiseshipschedule/itinerary1/itinerary1_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_2, delimiter='|')
writer.writerows(list_of_rows_2)
outfile_2.close()
list_of_rows_3=[]
for row_3 in rows_3:
cells_3 = row_3.find_all("td")
list_of_cells_3 = [itinerary2_header]
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
date1_3 = str(cells_3[0].get_text()) # This structure isolate the item by its column in the table and converts it into a string.
itinerary2_3 = str(cells_3[1].get_text()).split('\n')
price3_3 = str(cells_3[2].get_text()).split('\n')
list_of_cells_3.append(date1_3)
list_of_cells_3.append(itinerary2_3)
list_of_cells_3.append(price3_3)
except:
continue #This tells the computer to move on to the next item after it encounters an error
list_of_cells_3.append(ship)
list_of_rows_3.append(list_of_cells_3)
outfile_3 = open('Z:/Cruiseshipschedule/itinerary2/itinerary2_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_3, delimiter='|')
writer.writerows(list_of_rows_3)
outfile_3.close()
print "check out the data!"
Here is the second code - which also works and gets all links and stores them
in the `page_array` variable.
from mechanize import Browser
from bs4 import BeautifulSoup
import lxml
from lxml import html
import csv
import requests
page_array = []
mech = Browser()
url = 'http://www.cruiseshipschedule.com/'
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html, "lxml")
table1 = soup.table #Ocean Cruise
table2 = table1.findNext('table')#River Cruise
pages = table2.findAll('a')
for page in pages:
page_array.append(page.get('href').replace('http://www.cruiseshipschedule.com/', ''))
What is a proper Python way to combine those scripts and get all the data I
need? Your help would be appreciated.
Edit:
from mechanize import Browser
from bs4 import BeautifulSoup
import lxml
from lxml import html
import csv
import requests
page_array = []
mech = Browser()
url = 'http://www.cruiseshipschedule.com/'
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html, "lxml")
table1 = soup.table #Ocean Cruise
table2 = table1.findNext('table')#River Cruise
pages = table2.findAll('a')
for page in pages:
page_array.append(page.get('href'))
for page in page_array:
mech = Browser()
page1 = mech.open(page)
html = page1.read()
soup1 = BeautifulSoup(html, "lxml")
ship_in = soup1.h1
ship_in = ship_in.text
ship = u' '.join(ship_in.split())
u' '.join(ship_in.split())
ship = [ship]
h21 = soup1.h2
h22 = h21.findNext('h2')
h23 = h22.findNext('h2')
h24 = h23.findNext('h2')
h25 = h24.findNext('h2')
h_y1 = h22.text
h_y2 = h23.text
h_y3 = h24.text
itinerary1_header = u' '.join(h_y1.split())
u' '.join(h_y1.split())
itinerary2_header = u' '.join(h_y2.split())
u' '.join(h_y2.split())
itinerary3_header = u' '.join(h_y3.split())
u' '.join(h_y3.split())
table_1 = soup1.findAll('table')[0]
table_2 = soup1.findAll('table')[1]
table_3 = soup1.findAll('table')[2]
rows_1 = table_1.findAll("tr")
rows_2 = table_2.findAll("tr")
rows_3 = table_3.findAll("tr")
for row_1 in rows_1:
cells_1 = row_1.findAll('td')
list_1 = table_1.findAll('li')
decks = str(list_1[0].get_text()).split(':')
cabin = str(list_1[1].get_text()).split(':')
cabin_number = str(list_1[2].get_text()).split(':')
list_key = ''.join(list(decks[0] + '|' + cabin[0] + '|' + cabin_number[0]))
list_value = ''.join(list(decks[1] + '|' + cabin[1] + '|' + cabin_number[1]))
list_key = list_key.split('|')
list_value = list_value.split('|')
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
col1_1 = str(cells_1[0].get_text()).split('\n') # This structure isolate the item by its column in the table and converts it into a string.
col2_1 = str(cells_1[1].get_text()).split('\n')
col3_1 = str(cells_1[2].get_text()).split('\n')
col4_1 = str(cells_1[3].get_text()).split('\n')
except:
continue #This tells the computer to move on to the next item after it encounters an error
keys_1 = ['ship'] + col1_1 + col3_1 + list_key
values_1 = ship + col2_1 + col4_1 + list_value
dict_1 = dict(zip(keys_1, values_1))
with open('Z:/Cruiseshipschedule/details/details_'+ ship_in + '.csv', 'wb') as f: # Just use 'w' mode in 3.x
w = csv.DictWriter(f, dict_1.keys())
w.writeheader()
w.writerow(dict_1)
f.close()
list_of_rows_2=[]
for row_2 in rows_2:
cells_2 = row_2.find_all("td")
list_of_cells_2 = [itinerary1_header]
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
date1_2 = str(cells_2[0].get_text()) # This structure isolate the item by its column in the table and converts it into a string.
itinerary2_2 = str(cells_2[1].get_text()).split('\n')
price3_2 = str(cells_2[2].get_text()).split('\n')
list_of_cells_2.append(date1_2)
list_of_cells_2.append(itinerary2_2)
list_of_cells_2.append(price3_2)
except:
continue #This tells the computer to move on to the next item after it encounters an error
list_of_cells_2.append(ship)
list_of_rows_2.append(list_of_cells_2)
outfile_2 = open('Z:/Cruiseshipschedule/itinerary1/itinerary1_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_2, delimiter='|')
writer.writerows(list_of_rows_2)
outfile_2.close()
list_of_rows_3=[]
for row_3 in rows_3:
cells_3 = row_3.find_all("td")
list_of_cells_3 = [itinerary2_header]
try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error.
date1_3 = str(cells_3[0].get_text()) # This structure isolate the item by its column in the table and converts it into a string.
itinerary2_3 = str(cells_3[1].get_text()).split('\n')
price3_3 = str(cells_3[2].get_text()).split('\n')
list_of_cells_3.append(date1_3)
list_of_cells_3.append(itinerary2_3)
list_of_cells_3.append(price3_3)
except:
continue #This tells the computer to move on to the next item after it encounters an error
list_of_cells_3.append(ship)
list_of_rows_3.append(list_of_cells_3)
outfile_3 = open('Z:/Cruiseshipschedule/itinerary2/itinerary2_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_3, delimiter='|')
writer.writerows(list_of_rows_3)
outfile_3.close()
print "check out the data!"
This is now my edited version. The output is 3 .csv files, and then it throws
an error:
check out the data!
    Traceback (most recent call last):
      File "C:/Python27/ship scraper editedt.py", line 55, in <module>
        table_3 = soup1.findAll('table')[2]
    IndexError: list index out of range
However, I do not get the error when running only a single URL (url1 =
'<http://www.cruiseshipschedule.com/ama-waterways/ms-amabella-cruises/>').
**Edit:**
from mechanize import Browser
from bs4 import BeautifulSoup
import lxml
from lxml import html
import csv
import requests
base_url = 'http://www.cruiseshipschedule.com/'
def get_links():
links_array = []
mech = Browser()
mech.set_handle_robots(False)
page = mech.open(base_url)
html = page.read()
soup = BeautifulSoup(html, "lxml")
tables = soup.findAll('table')
for table in tables:
links = tables[1].findAll('a')
for link in links:
links_array.append(link.get('href').replace('http://www.cruiseshipschedule.com/', ''))
return links_array
def get_headings(url):
mech = Browser()
mech.set_handle_robots(False)
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html, "lxml")
headings = soup.findAll('h2')
return headings
get_links()
urls = [base_url + link for link in get_links()]
for url in urls:
mech = Browser()
mech.set_handle_robots(False)
try:
page = mech.open(url)
except:
continue
html = page.read()
soup = BeautifulSoup(html, "lxml")
tables = soup.findAll('table')
ship_in = soup.h1
ship_in = ship_in.text
ship = u' '.join(ship_in.split())
u' '.join(ship_in.split())
ship = [ship]
try:
details = tables[0]
except:
continue
rows_1 = details.findAll("tr")
for row_1 in rows_1:
cells_1 = row_1.findAll('td')
try:
list_1 = details.findAll('li')
decks = list_1[0].text.encode('utf8').split(':')
cabin = list_1[1].text.encode('utf8').split(':')
cabin_number = list_1[2].text.encode('utf8').split(':')
list_key = ''.join(list(decks[0] + '|' + cabin[0] + '|' + cabin_number[0]))
list_value = ''.join(list(decks[1] + '|' + cabin[1] + '|' + cabin_number[1]))
list_key = list_key.split('|')
list_value = list_value.split('|')
try:
col1_1 = str(cells_1[0].get_text()).split('\n')
col2_1 = str(cells_1[1].get_text()).split('\n')
col3_1 = str(cells_1[2].get_text()).split('\n')
col4_1 = str(cells_1[3].get_text()).split('\n')
except:
continue
keys_1 = ['ship'] + col1_1 + col3_1 + list_key
values_1 = ship + col2_1 + col4_1 + list_value
dict_1 = dict(zip(keys_1, values_1))
with open('Z:/Cruiseshipschedule/details/details_'+ ship_in + '.csv', 'wb') as f:
w = csv.DictWriter(f, dict_1.keys())
w.writeheader()
w.writerow(dict_1)
f.close()
except:
if not list_1:
list_of_rows_1=[]
for row_1 in rows_1:
cells_1 = row_1.findAll('td')
try:
col1_1 = cells_1[0].text.encode('utf8').split(':')
col2_1 = cells_1[1].text.encode('utf8').split(':')
col3_1 = cells_1[2].text.encode('utf8').split(':')
col4_1 = cells_1[3].text.encode('utf8').split(':')
list_of_cells_1.append(col1_1)
list_of_cells_1.append(col2_1)
list_of_cells_1.append(col3_1)
list_of_cells_1.append(col4_1)
except:
continue
list_of_rows_1.append(list_of_cells_1)
outfile_1 = open('Z:/Cruiseshipschedule/details/details_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_1, delimiter='|')
writer.writerows(list_of_rows_1)
outfile_1.close()
else:
continue
try:
itineray1 = tables[1]
rows_2 = itineray1.findAll("tr")
list_of_rows_2=[]
for row_2 in rows_2:
cells_2 = row_2.find_all("td")
list_of_cells_2 = [get_headings(url)[2].text]
try:
date1_2 = str(cells_2[0].get_text())
itinerary2_2 = str(cells_2[1].get_text()).split('\n')
price3_2 = str(cells_2[2].get_text()).split('\n')
list_of_cells_2.append(date1_2)
list_of_cells_2.append(itinerary2_2)
list_of_cells_2.append(price3_2)
except:
continue
list_of_cells_2.append(ship)
list_of_rows_2.append(list_of_cells_2)
outfile_2 = open('Z:/Cruiseshipschedule/itinerary1/itinerary1_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_2, delimiter='|')
writer.writerows(list_of_rows_2)
outfile_2.close()
except:
continue
try:
itineray2 = tables[2]
list_of_rows_3=[]
for row_3 in rows_3:
cells_3 = row_3.find_all("td")
list_of_cells_3 = [get_headings(url)[3].text]
try:
date1_3 = str(cells_3[0].get_text())
itinerary2_3 = str(cells_3[1].get_text()).split('\n')
price3_3 = str(cells_3[2].get_text()).split('\n')
list_of_cells_3.append(date1_3)
list_of_cells_3.append(itinerary2_3)
list_of_cells_3.append(price3_3)
except:
continue
list_of_cells_3.append(ship)
list_of_rows_3.append(list_of_cells_3)
outfile_3 = open('Z:/Cruiseshipschedule/itinerary2/itinerary2_'+ ship_in + '.csv', "wb")
writer = csv.writer(outfile_3, delimiter='|')
writer.writerows(list_of_rows_3)
outfile_3.close()
except:
continue
print "check out the data!"
Thanks for all the support! The code works, which means I get data. But
strangely, for some ships it captures only a few itineraries. I can't find the
mistake in my code, and Python doesn't throw an error.
While scraping I saw that a few URLs do not have exactly the same table
structure; that is why I put in try/except to avoid stopping the script.
I would really appreciate some thoughts.
Answer:
#Second program here:
...
...
page_array = [....]
for page in page_array:
mech = Browser()
page1 = mech.open(page)
#...The rest of the 1st program here
Another option would be to convert your second program to a function:
from mechanize import Browser
from bs4 import BeautifulSoup
import lxml
from lxml import html
import csv
import requests
def get_links(url):
links_array = []
mech = Browser()
#url = 'http://www.cruiseshipschedule.com/'
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html, "lxml")
table1 = soup.table #Ocean Cruise
table2 = table1.findNext('table')#River Cruise
links = table2.findAll('a')
for link in links:
links_array.append(link.get('href').replace('http://www.cruiseshipschedule.com/', ''))
return links_array #<****HERE
Then in the first program, you would import the file that contains your second
program:
import second_prog
url = 'http://www.cruiseshipschedule.com/ama-waterways/ms-amabella-cruises/'
mech = Browser()
for link in second_prog.get_links(url):
page = mech.open(link)
#Continue with first program here
And, if you want to keep everything in the same file, then you could move the
function in your second program into your first program--which would mean you
wouldn't need the import statement anymore.
**Edit:**
Does any of the following code have anything to do with your error:
ship_in = soup1.h1
ship_in = ship_in.text
ship = u' '.join(ship_in.split())
u' '.join(ship_in.split())
ship = [ship]
h21 = soup1.h2
h22 = h21.findNext('h2')
h23 = h22.findNext('h2')
h24 = h23.findNext('h2')
h25 = h24.findNext('h2')
h_y1 = h22.text
h_y2 = h23.text
h_y3 = h24.text
itinerary1_header = u' '.join(h_y1.split())
u' '.join(h_y1.split())
itinerary2_header = u' '.join(h_y2.split())
u' '.join(h_y2.split())
itinerary3_header = u' '.join(h_y3.split())
u' '.join(h_y3.split())
Let's see:
soup1 = BeautifulSoup(html, "lxml")
#Code above here
table_1 = soup1.findAll('table')[0]
table_2 = soup1.findAll('table')[1]
table_3 = soup1.findAll('table')[2]
Calculating table_1 does not reference anything that was removed.
Calculating table_2 does not reference anything that was removed.
Calculating table_3 does not reference anything that was removed.
So, you make a copy of your program, and you delete that whole section. Then
you try to figure out what went wrong. The process of deleting code to isolate
a problem is called _debugging_ code.
Next, why should you make BS go to the trouble--and the time--of searching
through the whole html page 3 times to get all the tables?
table_1 = soup1.findAll('table')[0]
table_2 = soup1.findAll('table')[1]
table_3 = soup1.findAll('table')[2]
Every time you write `soup1.findAll('table')`, BS has to search through the
whole html page to find all the `<table>` tags.
Instead, you can search just once:
tables = soup1.findAll('table')
table1 = tables[0]
table2 = tables[1]
table3 = tables[2]
Making python retrieve an element from a list is very fast--much faster than
making BS search through an entire web page hunting for all the `<table>`
tags.
Next, whenever you find yourself writing variables with names:
table1
table2
table3
and they only differ by a number, you need to STOP what you are doing--and use
a list instead. In this case, you already have a list: `tables`, and the
elements inside `tables` already have the names `tables[0], tables[1],
tables[2]`, etc., so you don't need to create the variables `table1, table2,
table3`. In fact, you don't even have to refer to the elements of `tables` by
the names `tables[0], tables[1], tables[2]`\--instead you can use a `for loop`
to step through all the tables:
for table in tables:
#Do something with the table variable
That has two advantages:
1) You don't have to write all the names `tables[0], tables[1], tables[2]` in
your code. What if you had to examine 1,000 tables? Are you really going to
write:
tables[0] = ...
tables[1] = ...
...
...
<an hour later>
tables[999] = ...
2) The second advantage of using a for loop is that you only have to write the
code that processes a table ONCE, and the `for loop` will apply the code to
each table in the `tables` list.
If `tables` has more table tags than you want to examine, then you can write:
first_three = tables[:3]
But note that if tables only has two tables, then `first_three` will only
contain two tables. That _can_ be an advantage: a for loop doesn't need to
know how many table tags are in the tables list--a for loop will blindly
process all the elements in the list you give it, no matter how many tables
are in the list.
Finally, your error states:
table_3 = soup1.findAll('table')[2] IndexError: list index out of range
That is the same error you would get if you did this:
data = ['a', 'b']
print(data[2])
That means that findAll() found fewer than 3 tables on the page. How do you fix
that? If the page doesn't contain three tables, then it doesn't contain three
tables. You can process only the tables the page actually contains, capped at
three, like this:
target_tables = tables[:3]
for table in target_tables:
#Do stuff here
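Putting those pieces together, here is a minimal sketch of the robust pattern (a sketch only; the variable names follow the snippets above):
tables = soup1.findAll('table')
for table in tables[:3]: #process at most the first three tables
    rows = table.find_all('tr')
    for row in rows:
        cells = row.find_all('td')
        #...extract and collect the cell text here...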
|
print list of tuples without brackets python
Question: I have a list of tuples and I want to print flattened form of this list. I
don't want to transform the list, just print it without parenthesis and
brackets.
input: [ ("a", 1), ("b",2), ("c", 3)]
output: a 1 b 2 c 3
Here's what I do:
l = [ ("a", 1), ("b",2), ("c", 3)]
f = lambda x: " ".join(map(str,x))
print " ".join(f(x) for x in l)
I'm interested in whether anybody has a more elegant and possibly more
efficient solution, ideally without join, only print. Thanks in advance.
Answer:
from __future__ import print_function
l = [("a", 1), ("b",2), ("c", 3)]
print(*(i for j in l for i in j))
a 1 b 2 c 3
Or using itertools.chain to flatten:
from itertools import chain
print(*chain(*l))
|
Python mysql-connector multi insert not working?
Question: I just started using the Mysql Connector library for Python and I seem to be
running into an issue I can't get my head around.
I have Python dynamically generate INSERT statements to be executed on a MySQL
database.
I batch them together in one string and then execute every n of them together
in a single `cursor.execute()` statement; however, for some reason when I check
the DB and table, the new rows are nowhere to be found.
I tried it with even just two insert statements together and they still didn't
get persisted to the DB even though I call execute, call a commit and then
close the cursor and connection in that order.
If I have just one insert statement in the execute command and I remove the
multi flag then it works fine.
Does anyone know why this could be happening?
Sample code:
from mysql import connector
my_sql_connection = connector.connect(user='user', password='pass',
host='hostname',
database='db')
mysql_cursor = my_sql_connection.cursor()
statement = "Insert into table (col) values (1); Insert into table (col) values (2);"
mysql_cursor.execute(statement, multi=True)
my_sql_connection.commit()
mysql_cursor.close()
my_sql_connection.close()
After this code is called, there are no errors however no new rows show up in
the table.
Answer: You should try using [executemany](https://dev.mysql.com/doc/connector-
python/en/connector-python-api-mysqlcursor-executemany.html) instead of
`execute` with the multi flag, it's generally much simpler to implement.
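(With `multi=True`, `execute()` returns an iterator over the per-statement results, and the statements are generally only run as that iterator is consumed, which is likely why your inserts silently never happened.) A minimal sketch of the two inserts from the question rewritten for `executemany` (the table and column names are the placeholders from the question):

statement = "INSERT INTO `table` (col) VALUES (%s)"  # 'table'/'col' taken from the question
mysql_cursor.executemany(statement, [(1,), (2,)])
my_sql_connection.commit()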
Also, I would recommend you switch to
[PyMySQL](https://github.com/PyMySQL/PyMySQL), it has a very complete
documentation and from my experience works much better 'out of the box' than
the mysql connector library.
|
Working with SVG paths in BeautifulSoup & Python
Question: I'm writing a Python script that will color in various areas of my city's
Census Block Groups (of which there are 18) different colors according to
their respective median household incomes on a map that's in the SVG format.
Sounds simple enough, right? Well, I can't figure out how, though I'm making
slight progress. What I've tried so far is making a list of each of the block
group paths according to how the SVG references them, making a list of the
median household incomes, then passing in the code that colors them. However,
this just doesn't seem to be working, for whatever reason. Can any of you
wonderful people help figure out where I'm misfiring?
import csv
from bs4 import BeautifulSoup
icbg = []
reader = csv.reader(open('censusdata.csv'),delimiter=",")
#read and get income
for row in reader:
income = row[6]
income = int(income)
icbg.append(income)
svg = open('NM2.svg','r')
soup = BeautifulSoup(svg,"lxml")
#find CBGs and incomes
path1 = soup.find('path')
path2 = path1.find_next('path')
path3 = path2.find_next('path')
path4 = path3.find_next('path')
path5 = path4.find_next('path')
path6 = path5.find_next('path')
path7 = path6.find_next('path')
path8 = path7.find_next('path')
path9 = path8.find_next('path')
path10 = path9.find_next('path')
path11 = path10.find_next('path')
path12 = path11.find_next('path')
path13 = path12.find_next('path')
path14 = path13.find_next('path')
path15 = path14.find_next('path')
path16 = path15.find_next('path')
path17 = path16.find_next('path')
path18 = path17.find_next('path')
incomep1 = icbg[0]
incomep2 = icbg[1]
incomep3 = icbg[2]
incomep4 = icbg[3]
incomep5 = icbg[4]
incomep6 = icbg[5]
incomep7 = icbg[6]
incomep8 = icbg[7]
incomep9 = icbg[8]
incomep10 = icbg[9]
incomep11 = icbg[10]
incomep12 = icbg[11]
incomep13 = icbg[12]
incomep14 = icbg[13]
incomep15 = icbg[14]
incomep16 = icbg[15]
incomep17 = icbg[16]
incomep18 = icbg[17]
paths = (path1, path2, path3, path4, path5, path6, path7, path8, path9, path10,
path11, path12, path13, path14, path15, path16, path17, path18)
incomes = (incomep1,incomep2,incomep3,incomep4,incomep5,incomep6,incomep7,incomep8,
incomep9,incomep10,incomep11,incomep12,incomep13,incomep14,incomep15,incomep16,incomep17,incomep18)
#set colors
colors = ['fee5d9','fcae91','fb6a4a','de2d26','a50f15']
for p in paths:
for i in range(0,17):
it = incomes[i]
if it > 20000:
color_class = 2
elif it > 25000:
color_class = 1
elif it > 30000:
color_class = 3
elif it > 35000:
color_class = 4
color = colors[color_class]
path_style = "font-size:12px;fill:#%s;fill-rule:nonzero;stroke:#000000;stroke-opacity:1;stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;marker-start:none;stroke-linejoin:bevel" % color
p['style'] = path_style
print(soup.prettify())
Running this gives me an SVG file like so: `fill:#fb6a4a;fill-
rule:nonzero;stroke:#000000;stroke-opacity:1;stroke-width:0.1;stroke-
miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;marker-
start:none;stroke-linejoin:bevel">` comes up 18 times, meaning for every
available path, even though these paths have different incomes.
Could the problem be with the way I wrote my comparisons?
Answer: From my understanding of what you are trying to do, your problem is that you
have 2 `for` loops instead of one. You should loop through the paths and
incomes at the same time; the way you are doing it now, you loop through all
the incomes for each path. Your comparison chain is also a problem: because the
thresholds are tested in ascending order with `>`, any income above 20000
always lands in the first branch, so every path gets the same color (and an
income of 20000 or less leaves `color_class` unset). The following code moves
the paths into the same loop as the incomes, reorders the comparisons from the
highest threshold to the lowest, and adds a default.
for i in range(len(paths)):
    it = incomes[i]
    p = paths[i]
    #Test the highest threshold first; otherwise the first branch wins for
    #every income above 20000
    if it > 35000:
        color_class = 4
    elif it > 30000:
        color_class = 3
    elif it > 25000:
        color_class = 1
    elif it > 20000:
        color_class = 2
    else:
        color_class = 0
    color = colors[color_class]
    path_style = "font-size:12px;fill:#%s;fill-rule:nonzero;stroke:#000000;stroke-opacity:1;stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;marker-start:none;stroke-linejoin:bevel" % color
    p['style'] = path_style
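A slightly more idiomatic variant pairs each path with its income using `zip`, so there is no index bookkeeping at all (a sketch; it assumes `paths` and `incomes` line up one-to-one, and the style string is trimmed for brevity):

style_template = "font-size:12px;fill:#%s;fill-rule:nonzero;stroke:#000000;stroke-opacity:1;stroke-width:0.1"
for p, it in zip(paths, incomes):
    if it > 35000:
        color_class = 4
    elif it > 30000:
        color_class = 3
    elif it > 25000:
        color_class = 1
    elif it > 20000:
        color_class = 2
    else:
        color_class = 0
    p['style'] = style_template % colors[color_class]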
|
Geoip2's python library doesn't work in pySpark's map function
Question: I'm using geoip2's python library and pySpark to get the geographical address
of some IPs. My code is like:
geoDBpath = 'somePath/geoDB/GeoLite2-City.mmdb'
geoPath = os.path.join(geoDBpath)
sc.addFile(geoPath)
reader = geoip2.database.Reader(SparkFiles.get(geoPath))
def ip2city(ip):
try:
city = reader.city(ip).city.name
except:
city = 'not found'
return city
I tried
print ip2city("128.101.101.101")
It works. But when I tried to do this in rdd.map:
rdd = sc.parallelize([ip1, ip2, ip3, ip3, ...])
print rdd.map(lambda x: ip2city(x))
It reported
Traceback (most recent call last):
File "/home/worker/software/spark/python/pyspark/rdd.py", line 1299, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/home/worker/software/spark/python/pyspark/context.py", line 916, in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/home/worker/software/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/home/worker/software/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/home/worker/software/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
TypeError: Required argument 'fileno' (pos 1) not found
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Could any one tell my how to make the ip2city function work in rdd.map().
Thanks!
Answer: It looks like the problem with your code comes from the `reader` object. It
cannot be correctly serialized as part of a closure and sent to the workers. To
deal with this you have to instantiate it on the workers. One way you can
handle this is to use `mapPartitions`:
from pyspark import SparkFiles
geoDBpath = 'GeoLite2-City.mmdb'
sc.addFile(geoDBpath)
def partitionIp2city(iter):
from geoip2 import database
def ip2city(ip):
try:
city = reader.city(ip).city.name
except:
city = 'not found'
return city
reader = database.Reader(SparkFiles.get(geoDBpath))
return [ip2city(ip) for ip in iter]
rdd = sc.parallelize(['128.101.101.101', '85.25.43.84'])
rdd.mapPartitions(partitionIp2city).collect()
## ['Minneapolis', None]
|
django: sort by foreignkey in model manager
Question:
class Author(models.Model):
name = models.CharField()
class Article(models.Model):
author = models.ForeignKey(Author)
created_at = models.DateTimeField(auto_now_add=True)
In a custom model manager for `Author`, how could one proceed to sort all the
authors by the creation date of their most recent `Article`? Is this possible
without using raw sql or for loops?
To show research effort and clarify the question, here's the (pseudo)code of
how I'd do that in raw Python. It is untested and would probably not work
because of the `if not article.author in sorted_authors` condition.
from django.db import models
from .models import Article
class AuthorsManager(models.Manager):
def get_sorted_authors(self):
articles = Article.objects.all().order_by('creation_date')
sorted_authors = []
for article in articles:
# not sure if this if condition would work
if not article.author in sorted_authors:
sorted_authors.append(article.author)
return sorted_authors
* * *
Another possible solution: add a `last_article_datetime` field to `Author` and
sort by that. Widely more efficient and concise. Question still standing for
any other use-case though.
Answer: Maybe like this:
from django.db.models import Max

Author.objects.annotate(lc=Max('article__created_at')).order_by('lc')

Use `order_by('-lc')` instead if you want the most recently active authors first.
|
Invalid gopkg.in package and rev in
Question: When attempting to build with Pants, I am seeing the following error:
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 154, in _transitive_download_remote_libs
all_known_addresses)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 105, in _transitive_download_remote_libs
fetcher.fetch(go_remote_lib.import_path, dest=tmp_fetch_root, rev=go_remote_lib.rev)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 437, in fetch
github_root, github_rev = self._map_import_path(import_path, rev)
File "/Users/chad/.cache/pants/setup/bootstrap/pants.mbFDa8/install/lib/python2.7/site-packages/pants/util/memo.py", line 95, in memoize
result = func(*args, **kwargs)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 454, in _map_import_path
raise self.FetchError('Invalid gopkg.in package and rev in: {}'.format(import_path))
Exception message: Invalid gopkg.in package and rev in: gopkg.in/amz.v1/aws
Here is the contents of my BUILD file:
# Auto-generated by pants!
# To re-generate run: `pants buildgen.go --materialize --remote`
go_remote_library(rev='v1')
Looking into the code, I see that the error comes from a failure to match a
regex in fetchers.py, [on line
453](https://github.com/pantsbuild/pants/blob/ed5badda3d518c869aa22d6401f604aa864fcfc4/contrib/go/src/python/pants/contrib/go/subsystems/fetchers.py#L453).
I am running Pants version 0.0.59 on Mac OS X 10.10 (Yosemite)
Answer: Noting that [@Huckphin](http://stackoverflow.com/users/38009/huckphin)
stumbled on a bug here in `pantsbuild.pants<=0.0.59`. He filed an
[issue](https://github.com/pantsbuild/pants/issues/2581) and now things are
fixed up for handling `gopkg.in` remote import paths that point to sub-
packages in the remote repo. The fix will be released with the regular Friday
release on 11/20/2015 in `0.0.60`.
|
Binary search through nested list in python
Question: I have a homework question asking:
> Write a function called readCountries that reads a file and returns a list
> of countries. The countries should be read from this file (countries.txt),
> which contains an incomplete list of countries with their area and
> population. Each line in this file represents one country in the following
> format:
>
>
> name, area(in km2), population
>
>
> When opening the file your function should handle any exceptions that may
> occur. Your function should completely read in the file, and separate the
> data into a 2-dimensional list. You may need to split and strip the data as
> appropriate. Numbers should be converted to their correct types. Your
> function should return this list so that you can use it in the remaining
> questions.
I have a text file called "countries.txt" with a list of a bunch of countries,
their area, and their population.
Sample of "countries.txt":
Afghanistan, 647500.0, 25500100
Albania, 28748.0, 2821977
Algeria, 2381740.0, 38700000
This is the code I have and it works:
def readCountries(filename):
'''read a file and print it to the screen'''
countryList = []
for line in open(filename):
with open(filename) as aFile:
countries = aFile.read()
countryList.append(line.strip().split())
aFile.close()
return countryList
Sample of output when I ran the question:
>>> countryList = readCountries("countries.txt")
>>> countryList
[['Afghanistan,', '647500.0,', '25500100'], ['Albania,', '28748.0,', '2821977'], ['Algeria,', '2381740.0,', '38700000']
The next question asks:
> Write a function called printCountry that takes a string representing a
> country name as a parameter. First call your answer from question 1 to get
> the list of countries, then do a binary search through the list and print
> the country's information if found. And should print out:
>
>
> printCountry("Canada")
> Canada, Area: 9976140.0, Population: 35295770
> printCountry("Winterfell")
> I'm sorry, could not find Winterfell in the country list.
>
But I can't figure it out.
When I tried to do the coding for this question, I typed:
countryList = readCountries("countries.txt")
def printCountry(name):
lo, hi = 0, len(countryList) - 1
while lo <= hi:
mid = lo + (hi - lo) // 2
country = countryList[mid]
test_name = country[0]
if name > test_name:
lo = mid + 1
elif name < test_name:
hi = mid - 1
else:
return country[0] + ", Area: " + str(country[1]) + ", Population: " + str(country[2])
return "I'm sorry can not find " + str(name)
and the result was:
>>> printCountry("Canada")
'Sorry can not find Canada'
even though Canada is in the text. Where did I go wrong?
Answer: Your binary search code is (mostly) ok, but there are a couple of problems in
your code that reads in the list of countries.
Your file opening & reading code is strange. It's like you've combined two
different approaches to reading data, so you are opening the file multiple
times.
Fortunately, the effects of these lines:
with open(filename) as aFile:
countries = aFile.read()
don't affect the output of the `readCountries` function because you don't do
anything else with `countries`.
Also, in the description of your assignment it says to "strip the data as
appropriate. Numbers should be converted to their correct types", which your
code doesn't do. And as my hint above implied, that means the country names in
your list still had the commas attached to them, so the binary search couldn't
find them (unless you included the comma in the search name).
Anyway, here's a cleaned up version that's designed to run on Python 2.6 or
later.
from __future__ import print_function
def readCountries(filename):
countryList = []
with open(filename) as aFile:
for line in aFile:
line = line.strip().split()
#Remove any trailing commas from each field
line = [s.rstrip(',') for s in line]
#Convert area to float and population to int
line = [line[0], float(line[1]), int(line[2])]
#print line
countryList.append(line)
return countryList
countryList = readCountries("countries.txt")
def printCountry(name):
lo, hi = 0, len(countryList) - 1
while lo <= hi:
mid = lo + (hi - lo) // 2
country = countryList[mid]
test_name = country[0]
if name > test_name:
lo = mid + 1
elif name < test_name:
hi = mid - 1
else:
print(' {0}, Area: {1}, Population: {2}'.format(*country))
break
else:
print(" I'm sorry, could not find {0} in the country list.".format(name))
#tests
printCountry("Canada")
printCountry("Winterfell")
print('- ' * 20)
#make sure we can find the first & last countries.
printCountry("Afghanistan")
printCountry("Nowhere")
Here's the data file I ran it on:
**countries.txt**
Afghanistan, 647500.0, 25500100
Albania, 28748.0, 2821977
Algeria, 2381740.0, 38700000
Canada, 9976140.0, 35295770
Nowhere, 1000.0, 2345678
And this is the output it produced:
Canada, Area: 9976140.0, Population: 35295770
I'm sorry, could not find Winterfell in the country list.
- - - - - - - - - - - - - - - - - - - -
Afghanistan, Area: 647500.0, Population: 25500100
Nowhere, Area: 1000.0, Population: 2345678
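As an aside (the assignment asks for a hand-written binary search, so this is just for reference), the standard library's bisect module can do the lookup over the same sorted countryList; a minimal sketch:

from bisect import bisect_left

def printCountryBisect(name):
    names = [c[0] for c in countryList] #countryList must be sorted by name
    i = bisect_left(names, name)
    if i < len(names) and names[i] == name:
        print(' {0}, Area: {1}, Population: {2}'.format(*countryList[i]))
    else:
        print(" I'm sorry, could not find {0} in the country list.".format(name))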
|
CUDA ERROR: initialization error when using parallel in python
Question: I use CUDA for my code, but it still runs slowly. Therefore I changed it to
run in parallel using multiprocessing (pool.map) in Python. But I get `CUDA
ERROR: initialization error`.
This is the function:
def step_M(self, iter_training):
gpe, e_tuple_list = iter_training
g = gpe[0]
p = gpe[1]
em_iters = gpe[2]
e_tuple_list = sorted(e_tuple_list, key=lambda tup: tup[0])
data = self.X[e_tuple_list[0][0]:e_tuple_list[0][1]]
cluster_indices = np.array(range(e_tuple_list[0][0], e_tuple_list[0][1], 1), dtype=np.int32)
for i in range(1, len(e_tuple_list)):
d = e_tuple_list[i]
cluster_indices = np.concatenate((cluster_indices, np.array(range(d[0], d[1], 1), dtype=np.int32)))
data = np.concatenate((data, self.X[d[0]:d[1]]))
g.train_on_subset(self.X, cluster_indices, max_em_iters=em_iters)
return g, cluster_indices, data
And here is the calling code:
pool = Pool()
iter_bic_list = pool.map(self.step_M, iter_training.items())
The structure of iter_training is shown in this screenshot: http://i.stack.imgur.com/Tv5rE.png
And these are the errors: http://i.stack.imgur.com/fi6LM.png
Could you help me fix this? Thank you.
Answer: I realize this is a bit old but I ran into the same problem, while running
under celery in my case:
syncedmem.cpp:63] Check failed: error == cudaSuccess (3 vs. 0) initialization error
Switching from prefork to an eventlet based pool has resolved the issue. Your
code could be updated similarly to:
from eventlet import GreenPool
pool = GreenPool()
iter_bic_list = list(pool.imap(self.step_M, iter_training.items()))
|
Django unicode method not working
Question: I am following the app tutorial on the Django website and using **_Python
2.7.5_** with **_Django 1.8_**. It suggests Python 2.7 users to include a
**unicode** method in the models.py file to return readable output in the
python shell.
# I have added the unicode method into the Question and Choice classes as so:
from django.db import models
import datetime
from django.utils import timezone
from django.util.encoding import python_2_unicode_compatible
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def was_published_recently(self):
return self.pub_date >= timezone.now() - datetime.timedelta(days = 1)
def __unicode__(self):
return u"%i" % self.question_text
def __str__(self):
return question_text
class Choice(models.Model):
question = models.ForeignKey(Question)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
def __unicode__(self):
return u"%i" % self.choice_text
def __str__(self):
return choice_text
**_This is my output from the python shell:_**
from polls.models import Question, Choice
>>> Question.objects.all()
[<Question: Question object>]
>>>
**_When it really should be this:_**
>>> Question.objects.all()
[<Question: What's up?>]
I don't understand what I am doing wrong. Please help!
Answer: Neither the `was_published_recently` nor the `__unicode__` method are within
the Question class. Indentation is significant: make sure they are indented to
the same level as the fields.
|
error: not found: value sqlContext
Question: I would like to create a python application to analyze twitter streaming data
using Apache Spark.
Is there any way I can use the functionality of Apache Spark streaming without
setting up the Hadoop environment? How do I run Apache Spark in standalone mode?
I just downloaded the binaries and tried to run spark-shell, getting
NullPointerException. Can someone please help.
<console>:10: error: not found: value sqlContext
import sqlContext.implicits._
<console>:10: error: not found: value sqlContext
import sqlContext.sql
Answer: I installed Spark 1.5.2 using Homebrew, and when I started spark-shell I hit
the same error. Adding `export SPARK_LOCAL_IP=127.0.0.1` to .bashrc or
.bash_profile fixed it.
|
How to convert Unicode string to dictionary?
Question: I have a string with utf-8 encoding, like
>>> print s
"{u'name':u'Pradip Das'}"
Basically I load some data from a JSON file and it loads as above. Now, I want
to convert this string into a Python dictionary, so I can do:
>>> print d['name']
'Pradip Das'
Answer: You can use the built-in
[`ast.literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval):
>>> import ast
>>> a = ast.literal_eval("{'a' : '1', 'b' : '2'}")
>>> a
{'a': '1', 'b': '2'}
>>> type(a)
<type 'dict'>
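Applied to the string from the question, it also handles the u'' prefixes (on Python 2):

>>> import ast
>>> s = "{u'name': u'Pradip Das'}"
>>> d = ast.literal_eval(s)
>>> print d['name']
Pradip Das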
This is safer than using `eval`. As the docs itself recommends it:
>>> help(ast.literal_eval)
Help on function literal_eval in module ast:
literal_eval(node_or_string)
Safely evaluate an expression node or a string containing a Python
expression. The string or node provided may only consist of the following
Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
and None.
Additionally, you can read this
[article](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)
which will explain why you should avoid using `eval`.
|
Calculate logarithm in python
Question: I am wondering why the result of `log base 10 (1.5)` in python =
0.405465108108 while the real answer = 0.176091259.
This is the code that I wrote:
import math
print math.log(1.5)
Does anyone know how to solve this issue?
Answer: From [the
documentation](https://docs.python.org/2/library/math.html#math.log):
> With one argument, return the natural logarithm of _x_ (to base _e_).
>
> With two arguments, return the logarithm of _x_ to the given _base_ ,
> calculated as `log(x)/log(base)`.
But the base-10 logarithm is made available directly as `math.log10()`, which
avoids the log division where possible.
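Both of the following give the expected base-10 result:

import math
print math.log(1.5, 10) # pass the base explicitly -> 0.17609125905568124
print math.log10(1.5)   # dedicated base-10 function -> 0.17609125905568124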
|
Writing to a xml file using pythons element tree
Question: I'm using python's Element tree library to parse/write to an xml file:
<?xml version='1.0' encoding='utf-8'?>
<data>
<reminder id="9">
<enabled>true</enabled>
<programmename>Show Me Show Me</programmename>
<starttime>10:05</starttime>
<sdate>Tuesday, 17 November 2015</sdate>
<channel>NA</channel>
</reminder>
</data>
I'm using the following to write this out:
from xml.etree import ElementTree as ET
data = ET.Element("data")
reminder = ET.SubElement(data, "reminder")
reminder.set("id", "9")
enabled = ET.SubElement(reminder, "enabled")
enabled.text = "true"
programmename = ET.SubElement(reminder, "programmename")
programmename.text = new_programmename
starttime = ET.SubElement(reminder, "starttime")
starttime.text = new_programmetime
sdate = ET.SubElement(reminder, "sdate")
sdate.text = new_programmedate
channel = ET.SubElement(reminder, "channel")
channel.text = "NA"
indent(data, level=0) # custom pretty-printing helper (not shown here)
tree = ET.ElementTree(data)
tree.write(xmlfile, xml_declaration=True, encoding='utf-8', method="xml")
Although I'm struggling to understand how to add an additional "record"
element, for example so that the XML looks like this:
<?xml version='1.0' encoding='utf-8'?>
<data>
<reminder id="9">
<enabled>true</enabled>
<programmename>Show Me Show Me</programmename>
<starttime>10:05</starttime>
<sdate>Tuesday, 17 November 2015</sdate>
<channel>NA</channel>
</reminder>
<reminder id="10">
<enabled>true</enabled>
<programmename>foo</programmename>
<starttime>13:05</starttime>
<sdate>Wednesday, 18 November 2015</sdate>
<channel>NA</channel>
</reminder>
</data>
Any assistance would be greatly appreciated; I can't find any reference to
this, or at least not one that I can understand...
Answer: Try as below; note that this uses lxml, which provides getparent():
from lxml import etree as et
xml="""<?xml version='1.0' encoding='utf-8'?>
<data>
<reminder id="9">
<enabled>true</enabled>
<programmename>Show Me Show Me</programmename>
<starttime>10:05</starttime>
<sdate>Tuesday, 17 November 2015</sdate>
<channel>NA</channel>
</reminder>
</data>"""
tree = et.fromstring(xml)
element = tree.find(".//reminder[@id='9']")
element_parent = element.getparent()
element_parent.insert(element_parent.index(element)+1,et.XML("""
<reminder id="10">
<enabled>true</enabled>
<programmename>foo</programmename>
<starttime>13:05</starttime>
<sdate>Wednesday, 18 November 2015</sdate>
<channel>NA</channel>
</reminder>"""))
print(et.tostring(tree))
Prints-
<data>
<reminder id="9">
<enabled>true</enabled>
<programmename>Show Me Show Me</programmename>
<starttime>10:05</starttime>
<sdate>Tuesday, 17 November 2015</sdate>
<channel>NA</channel>
</reminder>
<reminder id="10">
<enabled>true</enabled>
<programmename>foo</programmename>
<starttime>13:05</starttime>
<sdate>Wednesday, 18 November 2015</sdate>
<channel>NA</channel>
</reminder>
</data>
acknowledge "unutbu"
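If you would rather stay with the standard library's xml.etree.ElementTree (which has no getparent()), you can simply append a second reminder to the data element before writing; a sketch reusing the objects and the xmlfile variable from the question:

reminder2 = ET.SubElement(data, "reminder")
reminder2.set("id", "10")
ET.SubElement(reminder2, "enabled").text = "true"
ET.SubElement(reminder2, "programmename").text = "foo"
ET.SubElement(reminder2, "starttime").text = "13:05"
ET.SubElement(reminder2, "sdate").text = "Wednesday, 18 November 2015"
ET.SubElement(reminder2, "channel").text = "NA"
tree = ET.ElementTree(data)
tree.write(xmlfile, xml_declaration=True, encoding='utf-8', method="xml")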
|
How to get value from date filter input field with Selenium
Question: There is a date filter with input fields and a drop-down datepicker, as shown
in this screenshot: http://i.stack.imgur.com/O6BYm.png
I want to get current value of, for example, "From" input field (expected
output = `"03/13/2013"`). So for following element's code
<div class="input-group date" id="inputValidFrom">
<input name="validFrom" class="form-control" id="inputValidFromValue" required="" type="text"> </input>
I use `Python's` lines:
>>>from selenium import webdriver
>>>driver = webdriver.Ie()
>>>input = driver.find_element_by_xpath('//input[@name="validFrom"]')
>>>input.text # returns empty string
''
>>>input.get_attribute('value') # also returns ''
''
>>>input.value_of_css_property('text') # returns again just empty string
''
Who knows the way how to get this input field box property?
Answer: You are using the IE-Driver with XPath. Try writing the attribute name in
lower case. <http://www.seleniumhq.org/docs/03_webdriver.jsp#by-xpath>
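If the attribute lookup still comes back empty, reading the DOM property through JavaScript is a common fallback (a sketch; execute_script is part of the standard WebDriver API):

element = driver.find_element_by_xpath('//input[@name="validFrom"]')
value = driver.execute_script("return arguments[0].value;", element)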
|
Copying over part of a table from SQL Server to Aurora DB (Based on MySQL by AWS)
Question: I have a legacy SQL Server DB and I need to copy part of a very very big table
on it over to a new Aurora DB cluster from AWS (RDS).
The old table in SQL server has 1.8 billion records and 43 columns, however in
the new DB I will only have 13 of those columns carried over and almost all
rows.
I was wondering if anyone has any ideas on the best way that I can move this
data across?
I wrote a simple Python script to query the SQL server and then execute insert
statements on the new DB but I estimate this would take about 30 hours to run
after I did some tests on smaller sets of data.
Any ideas?
P.S Aurora is based off of MySQL so I would imagine if it works for MySQL it
would work for Aurora.
Answer: Assuming you can get the data you want into something like a CSV file, `LOAD
DATA LOCAL INFILE` should be pretty performant.
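A minimal sketch of driving that from Python with PyMySQL (the endpoint, credentials, file path, and table layout are all placeholders; the connection must be created with local_infile enabled):

import pymysql

# Sketch: bulk-load a CSV export into Aurora; adjust field/line terminators to your export.
conn = pymysql.connect(host='my-aurora-endpoint', user='user', password='pass',
                       db='mydb', local_infile=True)
with conn.cursor() as cur:
    cur.execute("""
        LOAD DATA LOCAL INFILE '/tmp/export.csv'
        INTO TABLE new_table
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\\n'
    """)
conn.commit()
conn.close()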
I did wonder whether it would be allowed on RDS and discovered an AWS article
on [importing data into MySQL on
RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html).
I couldn't find an equivalent one for Aurora, only [migrating from an RDS
based MySQL
instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Migrate.html).
There's an [Amazon RDS for Aurora Export/Import Performance Best
Practices](https://d0.awsstatic.com/product-
marketing/Aurora/Aurora_Export_Import_Best_Practices_v1-3.pdf) document that
has one reference to `LOAD DATA LOCAL INFILE`, however.
|
Cannot run Google App Engine custom managed VM: --custom-entrypoint must be set error
Question: **PROBLEM DESCRIPTION**
I am trying to create a custom managed VM for Google App Engine that behaves
identically to the standard python27 managed VM provided by Google. (I'm doing
this as a first step to adding a C++ library to the runtime).
From google [documentation](https://cloud.google.com/appengine/docs/managed-
vms/tutorial/step2#dockerfile), the following Dockerfile specifies the
standard python27 runtime:
FROM gcr.io/google_appengine/python-compat
ADD . /app
I have verified that this is the right Dockerfile by examining the one
generated by `gcloud preview app run` when using the standard python27
runtime. It is identical to this.
But when I run my application with this Dockerfile using `dev_appserver.py` or
with `gcloud preview app run` I get an error saying:
The --custom_entrypoint flag must be set for custom runtimes
I am using the latest versions of gcloud (1.9.86, with app-engine-python
component version 1.9.28) and the standalone python app engine SDK (1.9.28). I
had the same problem with earlier versions, so I updated to the latest.
**THINGS I HAVE TRIED:**
`gcloud preview app run --help` has the following to say about `--custom-
entrypoint`:
--custom-entrypoint CUSTOM_ENTRYPOINT
Specify an entrypoint for custom runtime modules. This is required when
such modules are present. Include "{port}" in the string (without
quotes) to pass the port number in as an argument. For instance:
--custom_entrypoint="gunicorn -b localhost:{port} mymodule:application"
I am not sure what to make of this. Should the docker image not already
contain an ENTRYPOINT? Why am I being required to provide one in addition?
Also, what should the entrypoint be for the `gcr.io/google_appengine/python-
compat` image be? Google provides no documentation for this.
I have tried a meaningless `--custom-entrypoint="echo"`, which silences the
error, but the application does not respond to any HTTP requests.
The two other relevant stackoverflow questions I have found have not helped.
The accepted answers seem to suggest that this is a bug in the SDK that was
resolved. But I have tried it in two versions of the SDK, including the
latest, and I still have the problem.
* [How to fix “`The --custom_entrypoint flag must be set for custom runtimes`”?](http://stackoverflow.com/questions/31280849/how-to-fix-the-custom-entrypoint-flag-must-be-set-for-custom-runtimes)
* [Google Managed VM error - custom entry point](http://stackoverflow.com/questions/33255674/google-managed-vm-error-custom-entry-point)
**STEPS TO REPRODUCE:**
To highlight my problem, I have created a trivial application that generates
the error. It consists of just three files:
`app.yaml`:
module: default
version: 1
runtime: custom
api_version: 1
threadsafe: true
vm: true
handlers:
- url: /.*
script: wsgi.app
`Dockerfile`:
FROM gcr.io/google_appengine/python-compat
ADD . /app
This `Dockerfile` is the same one that is used for the python27 runtime (and
in fact literally copy-pasted from the Dockerfile generated by `gcloud preview
app run` when using the python27 runtime), so this should be identical to
setting `runtime: python27`.
`wsgi.py`:
import webapp2
class Hello(webapp2.RequestHandler):
def get(self):
self.response.write(u'Hello')
app = webapp2.WSGIApplication([('/Hello', Hello)], debug=True)
When I run `dev_appserver.py app.yaml` in the directory containing these three
files however, I get the following error:
Traceback (most recent call last):
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 83, in <module>
_run_file(__file__, globals())
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1033, in <module>
main()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1026, in main
dev_server.start(options)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 818, in start
self._dispatcher.start(options.api_host, apis.port, request_data)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 194, in start
_module.start()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1555, in start
self._add_instance()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1707, in _add_instance
expect_ready_request=True)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/custom_runtime.py", line 73, in new_instance
assert self._runtime_config_getter().custom_config.custom_entrypoint
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 383, in _get_runtime_config
raise ValueError('The --custom_entrypoint flag must be set for '
ValueError: The --custom_entrypoint flag must be set for custom runtimes
Answer: **UPDATE**
THIS MAY NO LONGER BE ACCURATE. SEE NICK'S ANSWER.
(Though I could not get that working. But I did not try very hard)
* * *
There is a completely undocumented but absolutely essential piece of
information w.r.t. custom managed VMs:
**THEY CANNOT BE RUN ON THE DEVELOPMENT SERVER!**
If you think this crucial fact would be mentioned anywhere sane, like say, the
documentation page for custom managed VMs, or for `dev_appserver.py`, or even
as an error message when running `dev_appserver.py`, then you are giving
Google far too much credit.
The only place where I can find any kind of statement about this is in the
Readme file of the [appengine-java-vm-guestbook-extras
demo](https://github.com/GoogleCloudPlatform/appengine-java-vm-guestbook-
extras) on github (seriously):
> The Cloud SDK does not support anymore running custom runtimes when a
> Dockerfile is provided. You'll have to deploy the application to App Engine
Google does not care to:
1. Implement this basic and important feature.
2. Document that the development server is missing such an important feature.
3. Give any reasonable error message when the user tries to perform the action.
I hope this answer saves some sorry developer from the days of torment I
suffered because of this.
|
Wand install: TypeError: bad argument type for built-in operation
Question: System: Windows 7 Professional 64-bit
Python 3.5.0 |Anaconda 2.4.0 (64-bit)
ImageMagick-6.9.2-Q16 installed and runs standalone.
pip install wand successful
[1]:import wand #imports OK
[2]:from wand.image import Image
. . . TypeError: bad argument type for built-in operation
MAGICK_HOME env is set correctly. I manually added the ImageMagick directories
to PATH
Any idea on how to resolve?
Answer: I'm not around Windows, but I would assume you could set `MAGICK_HOME` at the
system, user, session, or runtime level.
For runtime, it might be as simple as updating your python script
import os
os.environ["MAGICK_HOME"] = "C:\ImageMagick-X.Y.Z"
from wand.image import Image
For a [cmd](/questions/tagged/cmd "show questions tagged 'cmd'") session, run
the following before executing the python script.
setx MAGICK_HOME "C:\ImageMagick-X.Y.Z"
> Note: The path should be an absolute system path to the root directory of
> your ImageMagick installation.
|
Artifactory - Tomcat errors when making REST calls
Question: I am trying to perform certain actions against my Artifactory instance using
the REST API, but when I make certain calls I get the following Tomcat error:
HTTP Status 404
The requested resource is not available.
I'm doing this using a Python script and my Artifactory instance is v4.2.2 rev
40049 deployed using an RPM package via these
[instructions](https://www.jfrog.com/confluence/display/RTF/Installing+on+Linux+Solaris+or+Mac+OS#InstallingonLinuxSolarisorMacOS-
RPMorDebianInstallation).
The problem only seems to happen when I made certain REST calls that use the
`api/` URI vs the `artifactory/` URI. Here are some examples of what I mean:
I can successfully deploy an artifact using this command:
>>import requests
>>session = requests.session()
>>response = session.put('http://artifactory.domain.com/artifactory/repo/test.txt')
Response:
>>response.status_code
201
>>response.text
u'{\n "repo" : "repo",\n "path" : "/",\n "created" : "2015-11-17T12:10:12.679-07:00",\n "createdBy" : "anonymous",\n "downloadUri" : "http://artifactory.domain.com:8081/artifactory/repo/test.txt",\n "mimeType" : "application/octet-stream",\n "size" : "0",\n "checksums" : {\n "sha1" : "da39a3ee5e6b4b0d3255bfef95601890afd80709",\n "md5" : "d41d8cd98f00b204e9800998ecf8427e"\n },\n "originalChecksums" : {\n },\n "uri" : "http://artifactory.domain.com:8081/artifactory/repo/test.txt"\n}'
But if I try to do something like copy or move which uses the `api/` URI then
I get a Tomcat error message:
>>response = session.post('http://artifactory.domain.com/api/copy/repo/test.txt?to=/repo/folder/test.txt')
>>response.status_code
404
>>response.text
u'<!DOCTYPE html><html><head><title>Apache Tomcat/8.0.22 - Error report</title><style type="text/css">H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}.line {height: 1px; background-color: #525D76; border: none;}</style> </head><body><h1>HTTP Status 404 - /api/copy/repo/test.txt</h1><div class="line"></div><p><b>type</b> Status report</p><p><b>message</b> <u>/api/copy/repo/test.txt</u></p><p><b>description</b> <u>The requested resource is not available.</u></p><hr class="line"><h3>Apache Tomcat/8.0.22</h3></body></html>'
I've searched around online and found some people were having `resource not
available` issues when the application's folder had the wrong permissions, but
as far as I can tell from JFrog's website my folder permissions are correct.
I've also checked several log files on the system and they don't even contain
any references to the REST calls I've made which ended up in a Tomcat error:
Catalina log:
/var/opt/jfrog/artifactory/logs/catalina/catalina.out
2015-11-17 12:16:47,434 [http-nio-8081-exec-5] [INFO ] (o.a.e.UploadServiceImpl:453) - Deploy to 'repo:test.txt' Content-Length: 6
Main Artifactory log:
/var/opt/jfrog/artifactory/logs/artifactory.log
2015-11-17 12:16:47,434 [http-nio-8081-exec-5] [INFO ] (o.a.e.UploadServiceImpl:453) - Deploy to 'repo:test.txt' Content-Length: 6
Here are the contents of the `/var/opt/jfrog/artifactory` directory:
drwxrwxr-x 3 artifactory artifactory 4096 Sep 22 02:00 backup
drwxrwxr-x 5 artifactory artifactory 4096 Nov 6 08:06 data
lrwxrwxrwx 1 artifactory artifactory 26 Nov 6 08:06 etc -> /etc/opt/jfrog/artifactory
drwxrwxr-x 3 artifactory artifactory 4096 Sep 21 13:24 logs
lrwxrwxrwx 1 artifactory artifactory 27 Nov 6 08:06 misc -> /opt/jfrog/artifactory/misc
drwxrwxr-x 2 artifactory artifactory 4096 Nov 6 09:23 temp
lrwxrwxrwx 1 artifactory artifactory 29 Nov 6 08:06 tomcat -> /opt/jfrog/artifactory/tomcat
lrwxrwxrwx 1 artifactory artifactory 30 Nov 6 08:06 webapps -> /opt/jfrog/artifactory/webapps
drwxrwxr-x 3 artifactory artifactory 4096 Sep 21 13:24 work
Any ideas as to why certain REST calls are successful and others are not?
Answer: OK, so apparently the reason my API calls for copy, move, etc. weren't working
is that I need to include `artifactory/` before the `api/` prefix in the URI
string, like this:
>>response = session.post('http://artifactory.domain.com/artifactory/api/copy/repo/test.txt?to=/repo/folder/test.txt')
>>response.status_code
200
>>response.text
u'{\n "messages" : [ {\n "level" : "INFO",\n "message" : "copying repo:test.txt to repo:folder/test.txt completed successfully, 1 artifacts and 0 folders were copied"\n } ]\n}'
|
Code Eval Challenge #20
Question: I joined code eval to improve my Python 3.5 programming. Code Eval is a
website, which offers challenges to any programmer looking to improve their
skills. Currently, I'm stumped on challenge #20.
* * *
Challenge #20 indicates that it needs the following:
Given a string write a program to convert it into lowercase.
INPUT SAMPLE:
The first argument will be a path to a filename containing sentences, one per
line. You can assume all characters are from the english language. E.g.
HELLO CODEEVAL This is some text OUTPUT SAMPLE:
Print to stdout, the lowercase version of the sentence, each on a new line.
E.g.
hello codeeval this is some text
* * *
I wrote the program for Code Eval. The strangest thing happened is that I
received an error after submitting my file. Here's what I programmed in Python
3.5:
# Python 3.5 Code
# Code Eval Challenge #20
# t3xx3r

text1 = "HELLO THIS IS CODEEVAL"
text2 = "This is some text"

print(text1.lower() + '\n' + text2.lower())
I successfully submitted my third revision. It ended up with a "Failed" icon.
It seems that I did not understand the Code Eval Challenge. What was I missing
that did not fulfill their requirement(s)?
My first revision was only partially successful on Code Eval, but here it is:
__author__ = 't3xx3r'

text = 'HELLO CODEEVAL'
print(text.lower())
So, once again: how did I get a partial success with revision 1 but a failure
with the later revision?
I'm currently learning Programming Logic and will be moving on to Programming
Fundamentals I & II. The most difficult thing about Code Eval is that they're
asking for arguments (which are enclosed within functions, methods, etc.) but
I can't seem to find a way to successfully fulfill that challenge!
Answer: This is the basic code you could write:
import sys
test_cases = open(sys.argv[1], 'r')
for test in test_cases:
print(test.lower())
test_cases.close()
Explanation: to access the first argument passed to the program, we use
`sys.argv`; the index is `1` because position `0` holds the path of the
running script.
We iterate through each line in the file and print it in lower-case calling
the `lower()` method.
Finally we close the file to prevent wasting system resources by keeping the
file handle open.
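One small wrinkle: each line read from the file keeps its trailing newline, so `print` adds a blank line between outputs. Stripping the newline first avoids that (a sketch):

for test in test_cases:
    print(test.rstrip('\n').lower())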
|
Convert from tuple of tuples to nested tuples in Python
Question: Is there a simple way to convert
t = ((1,), (1, 2), (1, 2, 3), (1, 2, 3, 4), (1, 2, 3, 4, 5))
to the following recursive structure, where each following tuple is appended
as an element of the prior tuple
(1, (1, 2, (1, 2, 3, (1, 2, 3, 4, (1, 2, 3, 4, 5)))))
What is the limit to this nesting? Can I have a 1000 or 10000 such nested
tuples?
UPDATE: It seems `t` nesting is unlimited (tried with 10000 after setting
recursion limit to 100).
On Windows 7 (Python 3.5) the recursion limit is around 300 at first, but it
can be lifted ([reference](http://stackoverflow.com/questions/3323001/maximum-recursion-depth)).
This is not related to the structure `t` itself, but may be related to Python
routines accessing nested levels of the resulting structure.
sys.getrecursionlimit() # display the current recursion limit
sys.setrecursionlimit(10000) # raise the recursion limit to 10000
Answer: Using [`functools.reduce`](https://docs.python.org/3/library/functools.html):
>>> from functools import reduce
>>> t = ((1,), (1, 2), (1, 2, 3), (1, 2, 3, 4), (1, 2, 3, 4, 5))
>>> reduce(lambda a, b: b + (a,), reversed(t))
(1, (1, 2, (1, 2, 3, (1, 2, 3, 4, (1, 2, 3, 4, 5)))))

Note that no initializer is passed: with an empty-tuple initializer, an extra
`()` would end up at the innermost level.
|
How to obtain JSON value from Post in a web.py server app?
Question: Am using Python 2.7.6 along with web.py server to experiment with some simple
Rest calls...
Wish to send a JSON payload to my server and then print the value of the
payload...
Sample payload
{"name":"Joe"}
Here's my python script
#!/usr/bin/env python
import web
import json
urls = (
'/hello/', 'index'
)
class index:
def POST(self):
# How to obtain the name key and then print the value?
print "Hello " + value + "!"
if __name__ == '__main__':
app = web.application(urls, globals())
app.run()
Here's my cURL command:
curl -H "Content-Type: application/json" -X POST -d '{"name":"Joe"}' http://localhost:8080/hello
Am expecting this for the response (plain text):
Hello Joe!
Thank you for taking the time to read this...
Answer: You have to parse the json:
#!/usr/bin/env python
import web
import json
urls = (
'/hello/', 'index'
)
class index:
def POST(self):
# How to obtain the name key and then print the value?
data = json.loads(web.data())
value = data["name"]
return "Hello " + value + "!"
if __name__ == '__main__':
app = web.application(urls, globals())
app.run()
Also, make sure your URL is `http://localhost:8080/hello/` in your `cURL`
request; you have `http://localhost:8080/hello` in your example, which throws
an error.
|
How to group categorical values in Pandas?
Question: I'm trying to convert a column to a categorical type and group by it in
pandas. For example, I have tried the following:
import pandas as pd
df = pd.DataFrame()
df['A'] = ['C1', 'C1', 'C2', 'C2', 'C3', 'C3']
df['B'] = [1,2,3,4,5,6]
df['A'] = df.loc[:,'A'].astype('category')
df2 = df[0:3]
result = df2.groupby(by='A')['B'].nunique()
print(result)
Unfortunately, I get the exception
> File "C:\Python34\lib\site-packages\pandas\core\internals.py", line 86, in
> **init** len(self.values), len(self.mgr_locs)))
>
> ValueError: Wrong number of items passed 2, placement implies 3
**Edit** Unfortunately, the workaround suggested by @joris does not work for
my application. New counterexample:
import pandas as pd
df = pd.DataFrame()
df['A'] = ['C1', 'C1', 'C2', pd.np.nan, 'C3', 'C3']
df['B'] = [1,2,3,4,5,6]
df['A'] = df.loc[:,'A'].astype('category')
df2 = df[0:4]
df2['A'] = df2['A'].cat.remove_unused_categories()
result = df2.groupby(by='A')['B'].nunique()
print(result)
Answer: As mentioned in the comments, this is a regression in pandas 0.17.0, and
reported here: <https://github.com/pydata/pandas/issues/11635>
As a workaround for now, you can easily use the `nunique` Series method
through `apply` instead of calling it directly on the groupby object:
In [22]: df2.groupby(by='A')['B'].apply(lambda x: x.nunique())
Out[22]:
A
C1 2
C2 1
C3 0
Name: B, dtype: int64
The other problem you encountered with `remove_unused_categories()` is also a
bug, and this will be fixed in 0.17.1
(<https://github.com/pydata/pandas/pull/11639>)
|
Consume message from RabbitMQ as list using pika in Python
Question: I have a list like below in RabbitMq queue
[{'id':'10','url':'https://www.google.co.in/search?q=rabbitmq&oq=rabbitmq'},{'id':'11','url':'https://www.google.co.in/search?q=python&oq=python'}]
While consuming this message, I receive it like below, as a string rather than
a list:
"[{'id':'10','url':'https://www.google.co.in/search?q=rabbitmq&oq=rabbitmq'},{'id':'11','url':'https://www.google.co.in/search?q=python&oq=python'}]"
I tried to convert this string to a list using `ast.literal_eval(my_list)` but
got `SyntaxError: EOL while scanning string literal`.
How can I get/convert this RabbitMQ message as a list?
Answer: Here are the steps to do that:
1. Use double quotes `"` for the strings, as JSON requires
2. Use the json module

import json
text = "[{'id':'10','url':'https://www.google.co.in/search?q=rabbitmq&oq=rabbitmq'},{'id':'11','url':'https://www.google.co.in/search?q=python&oq=python'}]"
text2 = text.replace("'", '"')
print json.loads(text2)
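Note that replacing quotes breaks as soon as a value itself contains an apostrophe. The more robust fix is on the producing side: serialize with json.dumps before publishing, so the consumer can call json.loads directly (a sketch; the channel and queue names are placeholders):

import json

# channel: an already-open pika channel; 'my_queue' is a placeholder
payload = json.dumps([{'id': '10', 'url': 'https://www.google.co.in/search?q=rabbitmq'}])
channel.basic_publish(exchange='', routing_key='my_queue', body=payload)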
|
CKAN API calls via python using urllib2 gives HTTP Error 409: Conflict
Question: Hi I have been trying to import a dataset using ckan api call via python's
urllib2 following the documentation at <http://docs.ckan.org/en/latest/api/>
The code I am running is:
#!/usr/bin/env python
import urllib2
import urllib
import json
import pprint
dataset_dict = {
'name': 'my_dataset_name5',
'notes': 'A long description of my dataset',
}
data_string = urllib.quote(json.dumps(dataset_dict))
request = urllib2.Request(
'http://<ckan server ip>/api/action/package_create')
request.add_header('Authorization', 'my api key')
response = urllib2.urlopen(request, data_string)
assert response.code == 200
response_dict = json.loads(response.read())
assert response_dict['success'] is True
created_package = response_dict['result']
pprint.pprint(created_package)
However it gives the following error:
> Traceback (most recent call last):
> File "autodatv2.py", line 26, in <module>
> response = urllib2.urlopen(request, data_string)
> File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
> return opener.open(url, data, timeout)
> File "/usr/lib64/python2.7/urllib2.py", line 437, in open
> response = meth(req, response)
> File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
> 'http', request, response, code, msg, hdrs)
> File "/usr/lib64/python2.7/urllib2.py", line 475, in error
> return self._call_chain(*args)
> File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
> result = func(*args)
> File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
> raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
> urllib2.HTTPError: HTTP Error 409: Conflict
I am running CKAN version 2.4 with Python 2.7.10 on an Amazon EC2 instance,
and `echo $HTTP_PROXY` shows nothing, so I'm assuming it's not a proxy issue.
Could someone please help me resolve this?
Answer: CKAN is returning HTTP error 409, which could mean nearly anything: e.g. you
could have a missing field, or there may already be a dataset of that name in
CKAN.
There will be an error message explaining the problem in the response body and
also on in the CKAN log.
Frankly, using urllib2 is making life hard for yourself. To talk to the CKAN
API in python, at the very least use 'requests', but best practice is to use
<https://github.com/ckan/ckanapi> e.g.
import ckanapi
demo = ckanapi.RemoteCKAN('http://demo.ckan.org',
apikey='phony-key',
user_agent='ckanapiexample/1.0 (+http://example.com/my/website)')
pkg = demo.action.package_create(name='my-dataset', title='not going to work')
|
Python update dictionary based on key value pair
Question: This question is about how to compare two lists of dictionaries on a shared
key and update the main dictionaries with the key-value pairs from the
matching dictionaries in the other list.
# First dictionary
main = [
{'country': u'CYPRUS', 'naziv': 'AKEL', 'FCI': 2},
{'country': u'CYPRUS', 'naziv': 'DIKO', 'FCI': 4},
]
# Second
second = [
{'likes': '8625.00', 'talks': '1215.00', 'naziv': 'AKEL'},
{'likes': '2746.00', 'talks': 0, 'naziv': 'DIKO'},
]
# Final output - example
output = [
{'country': u'CYPRUS', 'naziv': 'AKEL', 'FCI': 2,'likes': '8625.00', 'talks': '1215.00'}
{'country': u'CYPRUS', 'naziv': 'DIKO', 'FCI': 4,'likes': '2746.00', 'talks': 0},
]
Is there any fast way to do this? I was trying to update one dictionary as
below, but it doesn't work.
for dt, k in itertools.groupby(sorted(second, key=itemgetter('naziv')), key=itemgetter('naziv')):
maindict = {'naziv': dt}
for d in k:
maindict.update(d)
main.append(maindict)
Python 2.7
Answer: Another approach may be:
import itertools
main = [
{'country': u'CYPRUS', 'naziv': 'AKEL', 'FCI': 2},
{'country': u'CYPRUS', 'naziv': 'DIKO', 'FCI': 4},
]
second = [
{'likes': '8625.00', 'talks': '1215.00', 'naziv': 'AKEL'},
{'likes': '2746.00', 'talks': 0, 'naziv': 'DIKO'},
]
lst = sorted(itertools.chain(main,second), key=lambda x:x['naziv'])
list_c = []
for k,v in itertools.groupby(lst, key=lambda x:x['naziv']):
d = {}
for dct in v:
d.update(dct)
list_c.append(d)
print list_c
"Shemeless copy of mgilson"
|
Issues trying to insert string to SQL Server
Question: I am trying to insert a string to a SQL DB starting with `0x` but it keeps
failing on the insert. The characters that come after `0x` are random
characters that range from A-Z, a-z and 0-9 with no set length. I tried to get
around it by adding a letter in front of the string and updating it
afterwards, but that does not work. I am using pymssql.
SQL statement I am trying to mimic:
insert into [TestDB].[dbo].[S3_Files] ([Key],[IsLatest],[LastModified],[MarkedForDelete],[VersionID]) values ('pmtg-dox/CCM/Trades/Buy/Seller_Provided_-_Raw_Data/C''Ds_v2/NID3153422.pdf','1','2015-10-11','Yes', '0xih91kjhdaoi23ojsdpf')
Python Code
import pymssql as mssql
...
cursor.execute("insert into [TestDB].[dbo].[S3_Files] ([Key],[IsLatest],[LastModified],[MarkedForDelete],[VersionID]) values (%s,%s,%s,%s,%s)",(deleteitems['Key'],deleteitems['IsLatest'],deleteitems['LastModified'],MarkedforDelete, deleteitems['VersionId']))
conn_db.commit()
> pymssql.ProgrammingError: (102, "Incorrect syntax near
> 'qb_QWQDrabGr7FTBREfhCLMZLw4ztx'.DB-Lib error message 20018, severity 15:
> General SQL Server error: Check messages from the SQL Server")
Is there a way to make Python, pymssql\mysql force insert the string? Is there
a string manipulation technique that I am not using? I have tried pypyodbc but
no luck.
Edit: My current patch is to alter the string and add a flag to the row so I
remember that the string starts with `0x`
Answer: This is the solution that I came up with.
Since running the insert command with the appropriate values worked, I created
a stored procedure in SQL to handle my request
USE [TestDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[stp_S3Files]
@Key_ varchar(4000),
@IsLatest_ varchar(200),
@Size_ varchar(200),
@LastModified_ varchar(200),
@MarkedForDelete_ varchar(200),
@VersionID_ varchar(200)
AS
insert into [TestDB].[dbo].[S3_Files] ([Key],[IsLatest],[Size(Bytes)],[LastModified],[MarkedForDelete],[VersionID]) values (@Key_, @IsLatest_, @Size_, @LastModified_, @MarkedForDelete_, @VersionID_)
Then I call it through Python
modkey = deleteitems['Key'].replace("'", "''")
cursor.execute("""exec TestDB.dbo.stp_S3Files
@Key_ = '%s'
,@IsLatest_ = %s
,@Size_ = '%s'
,@LastModified_ = '%s'
,@MarkedForDelete_ = '%s'
,@VersionID_ = '%s' """ %(modkey, deleteitems['IsLatest'],deleteitems['Size'],deleteitems['LastModified'],MarkedforDelete,deleteitems['VersionId']))
conn_db.commit()
Note: the string replace is to handle path names with ' to escape the
character. I hope this helps someone who has the same issue down the road.
|
How to make a complete NTP call using several servers?
Question: I need to independently check the current time (and, incidentally, cross check
with system provided time) and I wanted to use the NTP protocol for that.
There is an NTP module ([`ntplib`](https://pypi.python.org/pypi/ntplib/))
which allows to query **one** NTP server:
>>> import ntplib
>>> import time
>>> time.ctime(ntplib.NTPClient().request('europe.pool.ntp.org', version=3).tx_time)
'Wed Nov 18 17:17:16 2015'
I would like to use the NTP ability to combine the responses from several NTP
servers to get an [optimum
response](https://en.wikipedia.org/wiki/Network_Time_Protocol):
> [the NTP protocol] uses a modified version of Marzullo's algorithm to select
> accurate time servers and is designed to mitigate the effects of variable
> network latency.
What would be the right way in Python (ideally 3.x) to leverage that?
Answer: > What would be the right way in Python (ideally 3.x) to leverage that?
**Basically, you would have to implement Marzullo's algorithm in Python.**
Once you have queried two servers, how are you going to determine which clock
is closer to an atomic clock?
Determining the "optimum response"(a.k.a. most accurate time source) is tricky
business.
What you are looking for is an NTP `implementation`, unless you feel like
rolling your own in python.
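If a rough consensus is enough, here is a minimal pure-Python sketch (not
Marzullo's algorithm): it queries several pool servers with `ntplib` and takes
the median offset, which tolerates a single bad or unreachable server:

    import time
    import statistics
    import ntplib

    servers = ['0.pool.ntp.org', '1.pool.ntp.org', '2.pool.ntp.org']
    client = ntplib.NTPClient()
    offsets = []
    for host in servers:
        try:
            offsets.append(client.request(host, version=3).offset)
        except Exception:
            pass  # skip servers that time out or fail
    if offsets:
        # apply the median offset to the local clock reading
        print(time.ctime(time.time() + statistics.median(offsets)))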
The NTP Protocol discussed on the wiki page is talking about the NTP
[Reference
Implementation](https://en.wikipedia.org/wiki/Reference_implementation). This
consists of an ntp daemon running on all your servers. The ntp daemons keep
track of remote reference servers, and which ones are alive, and are most
accurate. Ntp daemon then slews or steps your clock to match the remote clock.
**Perhaps the easiest way** for you to achieve what you want, is to run an ntp
daemon(i.e. use the Reference Implementation) on your server in Client mode.
In your ntp.conf, list all of the time servers that you are trying to use to
find that "optimum response" from. When you have ntpd running, the daemon will
determine the lowest stratum, most accurate and available reference server for
you.
All you need to do is `ntpq -p`, and the remote server with the most accurate
time will be the server with the asterisk next to it.
Here's some sample output:
remote refid st t when poll reach delay offset jitter
==============================================================================
LOCAL(0) .LOCL. 10 l 96h 64 0 0.000 0.000 0.000
*ns2.example.com 10.193.2.20 2 u 936 1024 377 31.234 3.353 3.096
|
PEP 8 and deferred import
Question: I am working on a large Python program which makes use of a multitude of
modules depending on command-line options, in particular, `numpy`. We have
recently found a need to run this on a small embedded module which precludes
the use of numpy. From our perspective, this is easy enough (just don't use
the problematic command line options.)
However, following [PEP 8](https://www.python.org/dev/peps/pep-0008/), our
`import numpy` is at the beginning of each module that might need it, and the
program will crash due to `numpy` not being installed. The straightforward
solution is to move `import numpy` from the top of the file to the functions
that need it. The question is, "How bad is this"?
(An alternative solution is to wrap `import numpy` in a `try .. except`. Is
this better?)
Answer: Here is a best practice pattern to check if a module is installed and make
code branch depending on it.
# GOOD
import pkg_resources
try:
pkg_resources.get_distribution('numpy')
except pkg_resources.DistributionNotFound:
HAS_NUMPY = False
else:
HAS_NUMPY = True
# You can also import numpy here unless you want to import it inside the function
Do this in every module that has a soft dependency on numpy. [More
information in Plone CMS coding
conventions](http://docs.plone.org/develop/plone.api/docs/contribute/conventions.html).
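For example, a minimal sketch of a function that branches on the flag
(hypothetical function name):

    def compute_stats(values):
        if HAS_NUMPY:
            import numpy as np
            return np.mean(values), np.std(values)
        # pure-Python fallback for the embedded target
        mean = sum(values) / float(len(values))
        var = sum((v - mean) ** 2 for v in values) / len(values)
        return mean, var ** 0.5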
|
Python plot 3d surface drawing
Question: I have a function `F = 0.8*X+0.6*Y+0.9*Z`, but my problem is that X, Y and Z
are related to each other by `X+Y+Z = 1`. The values and intervals should be like:
X = np.arange(0,1,0.01)
Y = np.arange(0,(1-X),0.01)
Z = 1-(X+Y)
Could you please give a simple way of doing this in Python. Thanks in advance.
Answer: If you want to do a surface plot in matplotlib, your x, y and z arrays should
be 2D arrays. If the intervals of your variables depend on each other, you can
define your variables in terms of parameters whose intervals are independent of
each other, like u and v here.
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import Normalize
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
u = np.linspace(0, 1, 100)
v = np.linspace(1, 0, 100)
x = np.outer(np.ones_like(u), v)
y = np.outer(v, u)
z = 1 - x - y
f = 0.8*x + 0.6*y + 0.9*z
norm = Normalize(f.min(), f.max())
m = plt.cm.ScalarMappable()
m.set_array([])
fcolors = m.to_rgba(f)
ax.view_init(azim=45)
ax.plot_surface(x, y, z, norm=norm, rstride=2, cstride=2, facecolors=fcolors, shade=False)
plt.show()
[](http://i.stack.imgur.com/XQBC1.png)
|
Python regex to find characters inside delimiters
Question: I have a more challenging task, but first I am faced with this issue. Given a
string `s`, I want to extract all the groups of characters marked by some
delimiter, e.g. parentheses. How can I accomplish this using regular
expressions (or any Pythonic way)?
import re
>>> s = '(3,1)-[(7,2),1,(a,b)]-8a'
>>> pattern = r'(\(.+\))'
>>> re.findall(pattern, s) # EDITED: findall vs. search
['(3,1)-[(7,2),1,(a,b)']
# Desire result
['(3,1)', '(7,2)', '(a,b)']
Answer: Use `findall()` instead of `search()`. The former finds all occurences, the
latter only finds the first.
Use the non-greedy `?` operator. Otherwise, you'll find a match starting at
the first `(` and ending at the final `)`.
Note that regular expressions aren't a good tool for finding nested
expressions like: `((1,2),(3,4))`.
import re
s = '(3,1)-[(7,2),1,(a,b)]-8a'
pattern = r'(\(.+?\))'
print re.findall(pattern, s)
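For the nested case, a small non-regex sketch (hypothetical helper) that tracks
parenthesis depth with a counter and collects the top-level groups:

    def paren_groups(s):
        groups, depth, start = [], 0, 0
        for i, c in enumerate(s):
            if c == '(':
                if depth == 0:
                    start = i  # remember where a top-level group opens
                depth += 1
            elif c == ')' and depth:
                depth -= 1
                if depth == 0:
                    groups.append(s[start:i + 1])
        return groups

    print paren_groups('((1,2),(3,4))')  # ['((1,2),(3,4))']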
|
python numpy operation instead of for loops
Question: I wrote some lines in python which work fine but is very slow; I think due to
the for loops. I hope one can speed up the following operations using numpy
commands. Let me define the goal.
Let's assume I have (always) a 3d numpy array (first dim: row, second dim:
col, third dim: frame); for simplecity I take a 2d array (since the operation
must be applied for every 2d array in the third dimension). Moreover, I take a
6x11 array, also for simplicity reasons (see drawing below).
1. I want to calculate the mean for all rows, i.e. sum_j a_ij resulting in an array. This, of course can be easily done. (I call this calue CM_tilde)
2. Now, for **each row** I want to calculate a selected mean, i.e. adding all values below a certain threshold and dividing them by the number of all columns (=N). If the value is above this defined threshold, the CM_tilde value (mean of the entire row) is added. This value is called CM
3. Afterwards, the CM value is subtracted from each element in the row
In addition to this I want to have a numpy array or list where all those CM
values are listed.
The figure:
[](http://i.stack.imgur.com/w0V6p.png)
The following code is working but very slow (especially if the arrays getting
large)
CM_tilde = np.mean(data, axis=1)
N = data.shape[1]
data_cm = np.zeros(( data.shape[0], data.shape[1], data.shape[2] ))
all_CMs = np.zeros(( data.shape[0], data.shape[2]))
for frame in range(data.shape[2]):
for row in range(data.shape[0]):
CM=0
for col in range(data.shape[1]):
if data[row, col, frame] < (CM_tilde[row, frame]+threshold):
CM += data[row, col, frame]
else:
CM += CM_tilde[row, frame]
CM = CM/N
all_CMs[row, frame] = CM
# calculate CM corrected value
for col in range(data.shape[1]):
data_cm[row, col, frame] = data[row, col, frame] - CM
print "frame: ", frame
return data_cm, all_CMs
Any ideas?
Answer: It's quite easy to vectorize what you're doing:
import numpy as np
#generate dummy data
nrows=6
ncols=11
nframes=3
threshold=0.3
data=np.random.rand(nrows,ncols,nframes)
CM_tilde = np.mean(data, axis=1)
N = data.shape[1]
all_CMs2 = np.mean(np.where(data < (CM_tilde[:,None,:]+threshold),data,CM_tilde[:,None,:]),axis=1)
data_cm2 = data - all_CMs2[:,None,:]
Comparing this with your originals:
In [684]: (data_cm==data_cm2).all()
Out[684]: True
In [685]: (all_CMs==all_CMs2).all()
Out[685]: True
The logic is that we work with arrays of size `[nrows,ncols,nframes]`
simultaneously. The main trick is to make use of numpy's broadcasting, by
turning `CM_tilde` of size `[nrows,nframes]` into `CM_tilde[:,None,:]` of size
`[nrows,1,nframes]`. NumPy will then use the same values for each column,
since that is a singleton dimension of this modified `CM_tilde`.
By using `np.where` we choose (based on the `threshold`) whether we want to
get the corresponding value of `data`, or, again, the broadcast value of
`CM_tilde`. A new use of `np.mean` allows us to compute `all_CMs2`.
In the final step we made use of broadcasting by directly subtracting this new
`all_CMs2` from the corresponding elements of `data`.
It might help in vectorizing code this way by looking at the implicit indices
of your temporary variables. What I mean is that your temporary variable `CM`
lives inside a loop over `[nrows,nframes]`, and its value is reset with each
iteration. This means that `CM` is in effect a quantity `CM[row,frame]` (later
explicitly assigned to the 2d array `all_CMs`), and from here it's easy to see
that you can construct it by summing up an appropriate `CMtmp[row,col,frames]`
quantity along its column dimension. If it helps, you can name the
`np.where(...)` part as `CMtmp` for this purpose, and then compute
`np.mean(CMtmp,axis=1)` from that. Same result, obviously, but probably more
transparent.
|
generating random values based on condition in structured array python
Question: I have an array representing a position (x,y) and colors (r,g,b). Right now
the array turns all zeros (as it should). I need to make it so that the values
of x and y are random and <=10. I need to make the values of r,g, and b to
also be random but <=255. How do you specify generating values up to 10 for
'position' and likewise up to 255 for 'color'?
array = np.zeros(10, [ ('position', [ ('x', float, 1),
('y', float, 1)]),
('color', [ ('r', float, 1),
('g', float, 1),
('b', float, 1)])])
Answer: Try this:
import random
import numpy
array = numpy.zeros(10, [ ('position', [ ('x', float, 1),
('y', float, 1)]),
('color', [ ('r', float, 1),
('g', float, 1),
('b', float, 1)])])
ranPos = lambda: random.random() * 10
ranColor = lambda: random.random() * 255
for i in range(10):
array[i] = ((ranPos(), ranPos()),
(ranColor(), ranColor(), ranColor()))
If you want to generate integers use this instead:
ranPos = lambda: random.randint(0,10)
ranColor = lambda: random.randint(0,255)
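If you prefer to avoid the Python-level loop, a vectorized sketch that fills
each nested field in one shot (assuming the same `array` as above):

    # each assignment fills all 10 records of that field at once
    array['position']['x'] = numpy.random.uniform(0, 10, 10)
    array['position']['y'] = numpy.random.uniform(0, 10, 10)
    for channel in ('r', 'g', 'b'):
        array['color'][channel] = numpy.random.uniform(0, 255, 10)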
|
numpy genfromtxt date conversion error
Question: I am loading a file using `numpy.genfromtxt` and some of the fields are in a
date format, however, when I setup a converter to process these items I get an
error that I am not sure how to address (see below):
strptime() argument 0 must be str, not <class 'bytes'>
At this point, my objective is to find a simple way to load files (using
genfromtxt) and convert selected columns from date format to an integer
(epoch, for example) -- however, I am looking for any approach that will work
(not necessarily the one I have illustrated).
Any help is appreciated.
(details below)
* Running in virtualenv
* Using MacOS
* Using Python v3.5
The file (abbreviated) is a simple CSV format:
SomeField,SomeDate
1,2013-08-16
The code:
import numpy as np
import time
def main():
pathRaw = 'data/homesite-quote-conversion/temp.csv'
str2epoch = lambda x: time.mktime(time.strptime(x, '%Y-%m-%d'))
converters = {1:str2epoch}
dataset = np.genfromtxt(open(pathRaw,'rb'), converters=converters, dtype=None, delimiter=',', skip_header=1)
return
if __name__ == '__main__':
main()
The error:
Traceback (most recent call last):
File "test.py", line 12, in <module>
main()
File "test.py", line 8, in main
dataset = np.genfromtxt(open(pathRaw,'rb'), converters=converters, dtype=None, delimiter=',', skip_header=1)
File "{user-directory}/.virtualenvs/demo/lib/python3.5/site-packages/numpy/lib/npyio.py", line 1731, in genfromtxt
converter.iterupgrade(current_column)
File "{user-directory}/.virtualenvs/demo/lib/python3.5/site-packages/numpy/lib/_iotools.py", line 766, in iterupgrade
_ strict_call(_m)
File "/{user-directory}/.virtualenvs/demo/lib/python3.5/site-packages/numpy/lib/_iotools.py", line 688, in _strict_call
new_value = self.func(value)
File "test.py", line 6, in <lambda>
str2epoch = lambda x: time.mktime(time.strptime(x, '%Y-%m-%d'))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/_strptime.py", line 494, in _strptime_time
tt = _strptime(data_string, format)[0]
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/_strptime.py", line 306, in _strptime
raise TypeError(msg.format(index, type(arg)))
TypeError: strptime() argument 0 must be str, not <class 'bytes'>
Answer: The error is saying that `time.strptime` expects the first argument to be a
[`str`, not `bytes`](http://eli.thegreenplace.net/2012/01/30/the-bytesstr-
dichotomy-in-python-3):
>>> x = b'2013-08-16' # x is bytes
>>> import time
>>> time.strptime(x, '%Y-%m-%d')
TypeError: strptime() argument 0 must be str, not <class 'bytes'>
>>> time.strptime(x.decode('ascii'), '%Y-%m-%d') # x.decode('ascii') is a str
time.struct_time(tm_year=2013, tm_mon=8, tm_mday=16, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=4, tm_yday=228, tm_isdst=-1)
So you can avoid the error by using
def str2epoch(x):
return time.mktime(time.strptime(x.decode('ascii'), '%Y-%m-%d'))
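Plugging the fixed converter back into the original `genfromtxt` call from the
question:

    converters = {1: str2epoch}
    dataset = np.genfromtxt(open(pathRaw, 'rb'), converters=converters,
                            dtype=None, delimiter=',', skip_header=1)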
|
Subclass Django ModelBase (metaclass for Django models)
Question: I want some of my Django models to have an "owner" property. I may need to
change or augment the logic later, and the logic is reused across many
classes. So I'd like to just inherit from an `Owned` class that lets me store
the user who created the class. I'm not trying to populate the field yet, I
just need it to exist.
First I tried this:
from django.db import models
from django.contrib.auth.models import User
class Owned(models.Model):
owner = models.ForeignKey(User, related_name='owner')
class Meta:
abstract = True
But when I inherited from `Owned` in several subclasses, I got a Django
reverse accessor error: [Django Reverse Accessor
Clashes](http://stackoverflow.com/questions/26955319/django-reverse-accessor-
clashes)
It looks like this "owner" property needs to have a different "related_name"
in subclasses of the `Owned` class.
So I tried this:
from django.db import models
from django.db.models.base import ModelBase
from django.contrib.auth.models import User
class _OwnedMeta(ModelBase):
'''
Should makes "Owned" class below work.
Gets around problem with reverse accessor clashes:
'''
def __init__(cls, name, bases, dct):
related_name = '{}_owner'.format(name)
dct['owner'] = models.ForeignKey(User, related_name=related_name)
super(_OwnedMeta, cls).__init__(name, bases, dct)
class Owned(models.Model):
'''
Instances get an "owner" attribute
that is a foreign key to '<class_name>_owner'
'''
__metaclass__ = _OwnedMeta
owner = models.ForeignKey(User, related_name='owner')
class Meta:
abstract = True
The idea is that when I subclass `Owned` I'll get an `owner` property with
related name `*class_name*_owner`.
Like this:
class Subclass(Owned):
pass
instance = Subclass()
and now, if this worked, `instance.owner` would be a foreign key to the
Django `User` model and the `related_name` would be "Subclass_owner."
But it doesn't work. This is an excerpt of the error message:
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/base.py", line 297, in add_to_class
value.contribute_to_class(cls, name)
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 1588, in contribute_to_class
super(ForeignObject, self).contribute_to_class(cls, name, virtual_only=virtual_only)
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 272, in contribute_to_class
add_lazy_relation(cls, self, other, resolve_related_class)
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 84, in add_lazy_relation
operation(field, model, cls)
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 271, in resolve_related_class
field.do_related_class(model, cls)
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 307, in do_related_class
self.set_attributes_from_rel()
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 304, in set_attributes_from_rel
self.rel.set_field_name()
File "/Users/maxwellheiber/dev/dc/lib/python2.7/site-packages/django/db/models/fields/related.py", line 1259, in set_field_name
self.field_name = self.field_name or self.to._meta.pk.name
AttributeError: 'NoneType' object has no attribute 'name'
What am I doing wrong?
Answer: Actually, django solves _exactly_ your problem (of having a foreign key with a
related_name to an abstract class)! Please check the docs @
<https://docs.djangoproject.com/en/1.8/topics/db/models/#be-careful-with-
related-name>
Copying from there for answer completeness:
> If you are using the related_name attribute on a ForeignKey or
> ManyToManyField, you must always specify a unique reverse name for the
> field. This would normally cause a problem in abstract base classes, since
> the fields on this class are included into each of the child classes, with
> exactly the same values for the attributes (including related_name) each
> time.
>
> To work around this problem, when you are using related_name in an abstract
> base class (only), part of the name should contain `'%(app_label)s'` and
> `'%(class)s'`.
>
> `'%(class)s'` is replaced by the lower-cased name of the child class that
> the field is used in. `'%(app_label)s'` is replaced by the lower-cased name
> of the app the child class is contained within. Each installed application
> name must be unique and the model class names within each app must also be
> unique, therefore the resulting name will end up being different.
So, for example in your case, just change `Owned` to:
class Owned(models.Model):
owner = models.ForeignKey(User, related_name='%(app_label)s_%(class)s_owner')
class Meta:
abstract = True
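For illustration, a minimal hypothetical subclass (in an app named `myapp`,
the reverse accessor on `User` becomes `myapp_report_owner`):

    class Report(Owned):
        title = models.CharField(max_length=100)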
|
read a file(.txt,.csv..xls) and assign them to respective variable in python script
Question: i have a file(mail.txt) and that have content like
emailfrom = '[email protected]'
emailto = '[email protected]','[email protected]','[email protected]'
filepath = 'D:\A_2.csv'
subject = 'sells report for xyz'
body = 'hi axx,find the attached file of sells.'
And I am running a script to send mail, and in my script the message variables
emailfrom, emailto, filepath, subject and body are present.
So how can I read that text file, read the values from there, and assign them
to my script variables?
Please suggest.
Answer: Use a dictionary to hold the information, with `ast.literal_eval()` to
properly evaluate your strings and `tuple`s (or else you would end up with
extra `'` everywhere):
import ast
with open('mail.txt') as f:
d = {k:ast.literal_eval(v) for k,v in (line.strip().split(' = ') for line in f)}
You will then have a dictionary holding each value:
>>> print(*d.items(), sep='\n')
('emailfrom', '[email protected]')
('emailto', ('[email protected]', '[email protected]', '[email protected]'))
('filepath', 'D:\\A_2.csv')
('body', 'hi axx,find the attached file of sells.')
('subject', 'sells report for xyz')
Access the values as you would expect:
>>> d['subject']
'sells report for xyz'
|
Why can't I pickle a typing.NamedTuple while I can pickle a collections.namedtuple?
Question: Why can't I pickle a `typing.NamedTuple` while I can pickle a
`collections.namedtuple`? How can I manage to do pickle a `NamedTuple`?
This code shows what I have tried so far:
from collections import namedtuple
from typing import NamedTuple
PersonTyping = NamedTuple('PersonTyping', [('firstname',str),('lastname',str)])
PersonCollections = namedtuple('PersonCollections', ['firstname','lastname'])
pt = PersonTyping("John","Smith")
pc = PersonCollections("John","Smith")
import pickle
import traceback
try:
with open('personTyping.pkl', 'wb') as f:
pickle.dump(pt, f)
except:
traceback.print_exc()
try:
with open('personCollections.pkl', 'wb') as f:
pickle.dump(pc, f)
except:
traceback.print_exc()
Output on the shell:
$ python3 prova.py
Traceback (most recent call last):
File "prova.py", line 16, in <module>
pickle.dump(pt, f)
_pickle.PicklingError: Can't pickle <class 'typing.PersonTyping'>: attribute lookup PersonTyping on typing failed
$
Answer: It's a bug. I have opened a ticket on it: <http://bugs.python.org/issue25665>
The issue is that
[`namedtuple`](https://hg.python.org/cpython/file/3.5/Lib/collections/__init__.py#l340)
function while creating the class sets its `__module__` attribute by looking
up `__name__` attribute from the calling frame's globals. In this case the
caller is `typing.NamedTuple`.
result.__module__ = _sys._getframe(1).f_globals.get('__name__', '__main__')
So, it ends up setting it up as `'typing'` in this case.
>>> type(pt)
<class 'typing.PersonTyping'> # this should be __main__.PersonTyping
>>> type(pc)
<class '__main__.PersonCollections'>
>>> import typing
>>> typing.NamedTuple.__globals__['__name__']
'typing'
**Fix:**
Instead of this the
[`NamedTuple`](https://hg.python.org/cpython/file/3.5/Lib/typing.py#l1463)
function should set it itself:
def NamedTuple(typename, fields):
fields = [(n, t) for n, t in fields]
cls = collections.namedtuple(typename, [n for n, t in fields])
cls._field_types = dict(fields)
try:
cls.__module__ = sys._getframe(1).f_globals.get('__name__', '__main__')
except (AttributeError, ValueError):
pass
return cls
* * *
For now you can also do:
PersonTyping = NamedTuple('PersonTyping', [('firstname',str),('lastname',str)])
PersonTyping.__module__ = __name__
|
python error when import (using xlwings lib)
Question: Below is the error message when I import:
>>> from xlwings import workbook, range
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
from xlwings import workbook, range
File "C:\Users\user\AppData\Local\Programs\Python\Python35\lib\site-packages\xlwings\__init__.py", line 20, in <module>
from . import _xlwindows as xlplatform
File "C:\Users\user\AppData\Local\Programs\Python\Python35\lib\site-packages\xlwings\_xlwindows.py", line 10, in <module>
import win32api
ImportError: No module named 'win32api'
I have tried Python 3.5 and 2.7.10 but both have no luck.
Answer: Make sure you have the package that provides `win32api` installed. It is part
of the pywin32 distribution; install it using

    pip install pypiwin32

(on newer setups, `pip install pywin32` also works).
|
AWS Lambda: Does Language change affect RAM(Memory) Usage or cost of the service?
Question: AWS Lambda has 2 different programming language options and as you know while
we are speaking about the Lambda, RAM(Memory) usage is very important thing.
Is there any difference about Node.js's RAM usage and Python's RAM Usage?
If there is a something like when we use it, it will be more cost efficient,
I'm thinking about using only that language. Also maybe Node.js execution can
be much faster than Python and because of that RAM disadvantage could not be a
problem.
Answer: From the [AWS Lambda pricing page](https://aws.amazon.com/lambda/pricing/):
> Duration is calculated from the time your code begins executing until it
> returns or otherwise terminates, rounded up to the nearest 100ms. The price
> depends on the amount of memory you allocate to your function. You are
> charged $0.00001667 for every GB-second used.
RAM usage for the language runtime does not _directly_ affect the pricing.
You choose the RAM _statically_ when you define your Lambda function, and that
does have an effect on the pricing, but it is done _once_, and all invocations
get the same RAM amount, and cost accordingly.
However - you do pay more according to the **duration** of the invocation.
I would _expect_ Amazon not to charge you for the time it takes to bootstrap
the environment - the pricing page does say _"from the time **your code**
begins executing until it returns"_.
(this is speculative, of course, but it wouldn't look fair to charge for the
overhead)
Different languages and runtimes can have [different execution
times](https://bjpelc.wordpress.com/2015/01/10/yet-another-language-speed-
test-counting-primes-c-c-java-javascript-php-python-and-ruby-2/), even for
identical (or idiomatic) programs.
This difference in speed can, and will, affect the price.
|
Tkinter delay a copy command over ssh (Paramiko)
Question: This script in Tkinter zip a file that reside in a linux server and then copy
the zipped file in the local machine. To do so, Paramiko first execute the zip
command in the server, then a new function is called that copy the file in the
local machine. However, while the zip is being created, the new file gets a
temporary name and only at the end of the zip procedure it gets the expected
name (compressed_file.zip in the example). The file is large and the zipè
procedure takes as much as ten seconds. The copy function return therefore an
error of "IOError No such file" because the file is present but still with a
temporary name.
I need to add a pause somehow, in a way that the copy function is delayed and
allows the first to be completed. I tried with **after** with no luck. I can
see from debugging that the copy_zip() is being executed without pausing for
10 seconds as I expected.
(I cut all the logic that is not strictly needed to understand the problem)
from Tkinter import *
import ttk
import paramiko
from tkFileDialog import *
import os
root = Tk()
root.title("Extract file")
root.geometry('200x400+200+200')
root.resizable(width= FALSE, height= FALSE)
def copy_file():
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
copy_command = ("zip compressed_file /log_folder/log.xml")
client.connect(ip_server, username=user, password=psw)
(stdin, stdout, stderr) = client.exec_command(copy_command)
root.after(10000, copy_zip())
def copy_zip():
remotepath = "/home/user/compressed_file.zip"
localpath = "c:\python27\copied_file.zip"
t = paramiko.Transport(comboserver)
t.connect(username=user, password=psw)
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get(remotepath, localpath)
sftp.close
main_container = Frame(root, bg="bisque")
main_container.pack(side=TOP, fill="both", expand=False)
top_frame = LabelFrame(main_container, text =" Get File ")
top_frame.pack(side = TOP, fill = BOTH)
t_go = Button(top_frame, text= "Go!", width =8, command = copy_file)
t_go.pack(side = LEFT, padx = 15, pady = 20)
mainloop()
Answer: You are using `after` incorrectly. `after` must be given a _reference_ to the
function but you are _calling_ the function and passing the result to _after_.
change your use of `after` to this:
root.after(10000, copy_zip)
If you want to pass arguments to `copy_zip`, include them as arguments to
`after`:
root.after(10000, copy_zip, arg1, arg2, arg3)
|
for loop & if in a dictionary
Question: Note: I am new to Python.
I have a task to design a program that will output **foreign** number plates
out of a list of 10 number plates (a mix of UK and foreign ones), but only if
they're speeding. I have made a few mistakes along the way, and I'm not sure
how I can fix these problems. #UK and #F are just my notes so I can quickly
see which is a UK number plate and which is foreign.
import re
distance=750 #variable for the distance between the Camera A and B (in m)
speedlimit=60 # (mps)
NumberPlates=["DV61 GGB",#UK
"D3S11 EUBG 20",#F
"5T314 10A02",#F
"24TEG 5063",#F
"TR09 TRE",#UK
"524 WAL 75",#F
"TR44 VCZ",#UK
"FR52 SWD",#UK
"100 GBS 12",#F
"HG55 BPO"#UK
]
Enter=[7.12,7.17,7.20,7.45,7.23,7.33,7.18,7.25,7.33,7.38]
#A list for the times of cars passing Camera A
Leave=[7.56,7.24,7.48,7.52,7.45,7.57,7.22,7.31,7.37,7.47]
#A list for the times of cars passing Camera B
Timestaken=[]
Timestaken2=[]
Timestaken3={}
for enter_data, leave_data in zip(Enter, Leave):
Timestaken.append(leave_data-enter_data)
Timestaken=["%.2f" % (leave_data-enter_data) for enter_data, leave_data in zip(Enter, Leave)]
Timestaken2=[s.strip("0") for s in Timestaken]
Timestaken2=[s.strip('.') for s in Timestaken2]
for key,value in zip(NumberPlates,Timestaken2):
Timestaken3[key]=value
print(Timestaken3)
for item in NumberPlates:
UK_Numbers=list(filter(lambda x: re.match("[A-Z]{2}\d{2}\s+[A-Z]{3}$",x),NumberPlates))
for item in UK_Numbers:
if item in UK_Numbers:
NumberPlates.remove(item)
print(NumberPlates) #These are foreign number plates only.
Timestaken4={}
for key,value in zip(NumberPlates,Timestaken2):
Timestaken4[key]=value
print(Timestaken4) #NumberPlate:Time
print("10 cars have passed Camera A, then Camera B\n")
for key,value in Timestaken4.items():
speed=distance/int(value)
if speed>speedlimit:
print(key,"is speeding with",distance/int(value),"mps")
I get this output:
>>>
{'5T314 10A02': '28', '100 GBS 12': '04', '524 WAL 75': '24', 'D3S11 EUBG 20': '07', '24TEG 5063': '07', 'HG55 BPO': '09', 'TR44 VCZ': '04', 'TR09 TRE': '22', 'DV61 GGB': '44', 'FR52 SWD': '06'}
['D3S11 EUBG 20', '5T314 10A02', '24TEG 5063', '524 WAL 75', '100 GBS 12']
{'5T314 10A02': '07', '100 GBS 12': '22', '524 WAL 75': '07', '24TEG 5063': '28', 'D3S11 EUBG 20': '44'}
10 cars have passed Camera A, then Camera B
5T314 10A02 is speeding with 107.14285714285714 mps
524 WAL 75 is speeding with 107.14285714285714 mps
The last two lines were supposed to have different speeds. I realize that the
speeds resulting from enter & leave time are inhumane but this is not the
problem I have.
The third output line showed that times were assigned to different number
plates. I am looking for a way to fix that.
The last two output lines are to do with:
for key,value in Timestaken4.items():
speed=distance/int(value)
if speed>speedlimit:
print(key,"is speeding with",distance/int(value),"mps")
Aside from the times being assigned to different number plates, how can I
modify the code so that it displays the right speed?
Answer: You could do something like this:
import re
# DATA
distance = 750 # Distance between the Camera A and B (in m)
speed_limit = 60 # (mps)
number_plates = ["DV61 GGB", #UK
"D3S11 EUBG 20", #F
"5T314 10A02", #F
"24TEG 5063", #F
"TR09 TRE", #UK
"524 WAL 75", #F
"TR44 VCZ", #UK
"FR52 SWD", #UK
"100 GBS 12", #F
"HG55 BPO" #UK
]
enter = [7.12,7.17,7.20,7.45,7.23,7.33,7.18,7.25,7.33,7.38]
leave = [7.56,7.24,7.48,7.52,7.45,7.57,7.22,7.31,7.37,7.47]
# Find the non-UK plates
pattern = "(?![A-Z]{2}\d{2}\s+[A-Z]{3}$)"
foreign_numbers = list(filter(lambda x: re.match(pattern, x), number_plates))
# Compute speeds
elapsed = [l - e for l, e in zip(leave, enter)]
speed = [distance/t for t in elapsed]
# Conditional dictionary comprehension
foreign_speeders = {plate: speed
for plate, speed in zip(number_plates, speed)
if (plate in foreign_numbers) and (speed > speed_limit)}
foreign_speeders
This gives:
{'100 GBS 12': 18749.99999999998,
'24TEG 5063': 10714.285714285807,
'524 WAL 75': 3124.9999999999973,
'5T314 10A02': 2678.571428571426,
'D3S11 EUBG 20': 10714.28571428567}
Which you could format:
for plate, speed in foreign_speeders.items():
print("{0:>14s} was speeding at {1:8.1f} m/s".format(plate, speed))
The units seem screwy. My guess is that the speed limit is actually in miles
per hour. BTW if it was me and there's a lot of data, I'd probably be in
[pandas](http://pandas.pydata.org/) or at least NumPy... then you don't have
to be so careful about keeping all these lists in the right order and of the
right length. But these joys can wait till you've seen more Python.
|
Running Python code: subprocess.check_output(), getting error: [Winerror 2]
Question: I'm running some Python code that is meant to run an Apache Maven program on a
file and produce an output:
import os, subprocess
os.chdir("C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/")
logtoolDir = "C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/"
processEnv = {'JAVA_HOME': 'C:/Program Files/Java/jdk1.8.0_66/jre',
'mvn': 'C:/Program Files/apache-maven-3.3.3/bin/'}
#processEnv = "C:/Program Files/Java/jdk1.8.0_66"
args = 'org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput.data'
subprocess.check_output(['mvn', ' exec:exec',
' -Dexec.args=' + args],
env = processEnv,
cwd = logtoolDir)
However, it gives me this error:
File "C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/test.py", line 25, in <module>
cwd = logtoolDir)
File "C:\WinPython-64bit-3.4.3.6\python-3.4.3.amd64\lib\subprocess.py", line 607, in check_output
with Popen(*popenargs, stdout=PIPE, **kwargs) as process:
File "C:\WinPython-64bit-3.4.3.6\python-3.4.3.amd64\lib\subprocess.py", line 859, in __init__
restore_signals, start_new_session)
File "C:\WinPython-64bit-3.4.3.6\python-3.4.3.amd64\lib\subprocess.py", line 1112, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
I've investigated some things and I've narrowed it down to what I'm expecting
is _winapi.CreateProcess being unable to _find_ the Apache Maven installation
(and run the _mvn_ command). The installation is already in my path env
variables (the line runs just fine through CMD). It may also be true that I've
somehow defined the directories wrongly, but I fail to find a problem there...
Can anyone offer a suggestion on how to fix this issue?
Cheers.
Answer: You can try appending "shell = True" to the arguments passed to subprocess
like so:
subprocess.check_output(['mvn', ' exec:exec',
' -Dexec.args=' + args],
env = processEnv,
cwd = logtoolDir, shell = True)
Though as it says
[here](https://docs.python.org/2/library/subprocess.html#frequently-used-
arguments), this can be a security risk if the contents of the call can be
determined by an external source, but I can't tell whether that's the case
from the code you've given.
|
Using Tkinter to scraper Twitter. Receiving indent error or says master is not defined.
Question: I keep receiving an indent error in this code:
File "tkkkk2.py", line 153
search_button = Button(master, text = "Search", command = search)
^
IndentationError: unexpected indent
I'm trying to use Tkinter to build a GUI that takes values entered into text
boxes and a selection from a listbox and returns it to the runit method that
scrapes Twitter for this content. Everything works except for the search
button. I either get the above error or the following error when I unindent
it:
File "tkkkk2.py", line 153, in MyFirstGUI
search_button = Button(master, text = "Search", command = search)
NameError: name 'master' is not defined
I'm at a loss as to what I'm doing. I've been working on this code nonstop for
a day and a half and am very frustrated. This is the last bit that I need to
get a working project.
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
import tweepy
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import json
import random
from Tkinter import Tk, Label, Button, Entry, StringVar, DISABLED, NORMAL, END, W, E
from Tkinter import BOTH, END, LEFT
import Tkinter as tk
class MyFirstGUI:
def __init__(self, master):
self.master = master
master.title("Python Scraper")
self.label1 = Label(master, text="Please enter a hashtag (#hashtag) to search twitter for.")
self.label1.pack()
#Box to input hashtag
v = StringVar()
e = Entry( master, textvariable = v)
e.pack()
#gets the value from the box.
#We could do this on the button call and pass all values on
#to the real program.
self.label2 = Label(master, text = "How many results would you like per search? Recommended: 200-1000.")
self.label2.pack()
#Box to enter results per search
results = StringVar()
r = Entry( master, textvariable = results)
r.pack()
self.label3= Label(master, text = "How many times would you like Twitter scraped? Warning: do not exceed the data cap! Recommended 6 or less times")
self.label3.pack()
#box to enter number of times to scrape twitter
times = StringVar()
t = Entry( master, textvariable = times)
t.pack()
self.label4 = Label(master, text = "What would you like on your y-axis?")
self.label4.pack()
#drop down box with the 5 options
List1 = tk.Listbox(exportselection=0)
for item in ["created_at", "retweet_count", "favorite_count", "source", "user_id", "user_screen_name", "user_name", "user_created_at", "user_followers_count", "user_friends_count", "user_location"]:
List1.insert(END, item)
List1.pack()
self.label5 = Label(master, text = "The x axis will be the number of sources")
def runit(hashtag, other_variable, other_variable_times, yaxis):
#Credentials go here:
#Redacted because internet.
api = tweepy.API(auth)
l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)
#results
results = []
x = 0
for x in range(0,other_variable_times):
#geocode="39.999726,-83.015202,5mi"
for tweet in tweepy.Cursor(api.search,q = hashtag , return_type="recent").items(other_variable):
results.append(tweet)
#Begin making the data structure:
id_list = [tweet.id for tweet in results]
data_set = pd.DataFrame(id_list, columns=["id"])
# Processing Tweet Data
data_set["text"] = [tweet.text for tweet in results]
data_set["created_at"] = [tweet.created_at for tweet in results]
data_set["retweet_count"] = [tweet.retweet_count for tweet in results]
data_set["favorite_count"] = [tweet.favorite_count for tweet in results]
data_set["source"] = [tweet.source for tweet in results]
# Processing User Data
data_set["user_id"] = [tweet.author.id for tweet in results]
data_set["user_screen_name"] = [tweet.author.screen_name for tweet in results]
data_set["user_name"] = [tweet.author.name for tweet in results]
data_set["user_created_at"] = [tweet.author.created_at for tweet in results]
data_set["user_description"] = [tweet.author.description for tweet in results]
data_set["user_followers_count"] = [tweet.author.followers_count for tweet in results]
data_set["user_friends_count"] = [tweet.author.friends_count for tweet in results]
data_set["user_location"] = [tweet.author.location for tweet in results]
data_set["followers_count"] = [tweet.author.followers_count for tweet in results]
#data_set = process_results(results)
#sources = data_set["retweet_count"].value_counts()[:5][::-1]
sources = data_set[yaxis].value_counts()[:20][::-1]
plt.barh(xrange(len(sources)), sources.values)
plt.yticks(np.arange(len(sources)) + 0.4, sources.index)
if x == 0:
plt.show(block=False)
else:
plt.draw()
print x
x = x + 1
plt.show()
def search():
#This should take the inputs from the boxes and selections from the x and y axis and
#enter it into another program that will be pasted below. I don't want to share due to
#OAuth credentials etc..
#gets hashtag
hashtag = v.get()
#gets results per search
if r.get() != '':
other_variable = int(r.get())
#gets times to scrape twitter
if t.get() != '':
other_variable_times = int(t.get())
yaxis = List1.curselection()
runit(hashtag, other_variable, other_variable_times, yaxis)
search_button = Button(master, text = "Search", command = search)
search_button.pack()
class StdOutListener(StreamListener):
def on_data(self, data):
print data
return true
root = tk.Tk()
my_gui = MyFirstGUI(root)
root.mainloop()
Answer: You cannot access `master` from the `search()` namespace because it is local
to the `__init__` namespace; you should use `self.master`. For the indentation
issue, I propose using IDLE, which is useful for detecting indentation problems.
|
Python function subprocess.check_output returns CalledProcessError: command returns non-zero exit status
Question: As a follow-up to my previous question, which got solved quickly, I'm running
the Python code below in WinPython:
import os, subprocess
os.chdir("C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/")
logtoolDir="C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/ "
#processEnv = {'JAVA_HOME': 'C:/Program Files/Java/jdk1.8.0_66/'}
args = r'"org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput.data"'
subprocess.check_output(['mvn', 'exec:exec', '-Dexec.args=' + args],
shell = True, cwd = logtoolDir)
And get the following error:
CalledProcessError: Command '['mvn', 'exec:exec', '-Dexec.args="org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput.data"']' returned non-zero exit status 1
The Apache Maven executable does not seem to run. My guess is that the
arguments are being passed on to the program incorrectly. I couldn't find any
typos in the _args_ or the _logtoolDir_ arguments, but maybe I'm missing
something there? Any ideas?
UPDATE: The mvn exec:exec was not running because `check_output` has somehow
been unable to access Windows' environmental variables. I added the _path_
variable to `processEnv` and now `'mvn','--version'` in the `check_output`
args confirms Maven runs. The code still doesn't run but I imagine it's
probably an issue with how I've defined the directories.
Cheers.
Answer: Problem solved. Basically: a) `subprocess.check_output` could not read
Windows' environment variables (e.g. PATH, JAVA_HOME), so I had to redefine
the ones I was using in `processEnv` and pass them along in the function's
arguments. Also, b) The _args_ variable was defined incorrectly. I needed to
remove one set of quotation marks, and also make it raw using `r`.
The corrected code:
logtoolDir='C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/'
processEnv = {'JAVA_HOME': 'C:/Program Files/Java/jdk1.8.0_66/jre/',
'Path' : 'C:/Program Files/apache-maven-3.3.3/bin/'}
args = r"org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput2.data"
print(subprocess.check_output(['mvn', 'exec:exec', '-Dexec.args='+ args],
shell = True, env = processEnv, cwd = logtoolDir))
Unfortunately I can't find a way to go around using the `shell = True`
argument, which probably won't be a problem since this will only be used for
data analysis.
Cheers.
|
convert a list of repeating values to dictionary of their frequency count in python
Question: I have a list of strings which has repeating values, and I want to create a
dictionary of words where the key will be the word and its value will be the
frequency count, and then write these words and their values to a CSV.
The following has been my approach to do the same:
#!/usr/bin/env python
# encoding: utf-8
# -*- coding: utf8 -*-
import csv
from nltk.tokenize import TweetTokenizer
import numpy as np
tknzr = TweetTokenizer()
#print tknzr.tokenize(s0)
with open("dispn.csv","r") as file1,\
open("dispn_tokenized.csv","w") as file2,\
open("dispn_tokenized_count.csv","w") as file3:
mycsv = list(csv.reader(file1))
words = []
words_set = []
tokenize_count = {}
for row in mycsv:
lst = tknzr.tokenize(row[2])
for l in lst:
file2.write("\""+str(row[2])+"\""+","+"\""+str(l.encode('utf-8'))+"\""+"\n")
l = l.lower()
words.append(l)
words_set = list(set(words))
print "len of words_set : " + str(len(words_set))
for word in words_set:
tokenize_count[word] = 1
for word in words:
tokenize_count[word] = tokenize_count[word]+1
print "len of tokenized words_set : " + str(len(tokenize_count))
#print "Tokenized_words count : "
#print tokenize_count
#print "================================================================="
i = 0
for wrd in words_set:
#i = i+1
print "i : " +str(i)
file3.write("\""+str(i)+"\""+","+"\""+str(wrd.encode('utf-8'))+"\""+","+"\""+str(tokenize_count[wrd])+"\""+"\n")
but in the CSV I still found some repeated values like 1, 5, 4, 7, 9.
some info of the approach:
- dispn.csv = contains usernames of the users
which i am tokenizing with the help of nltk module
- after tokenizing them, i am storing these words in the list 'words'
and writing the words corresponding to the username to csv.
- creating set of it so as to get unique values out of list 'words'
and storing it in 'words_set'
- then creating dictionary 'tokenize_count' with key as word and
value as its frequency count and writing the same to csv.
Why am I getting only some of the numerical values repeated? Is there any
better way to do the same? Please help.
Answer: Use [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter).
`Counter` can be called on a list of strings and returns a dict-like object
whose keys are the words and whose values are their frequencies.
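A minimal sketch of the counting and writing step using `Counter`, reusing the
`words` list your loop already builds:

    from collections import Counter

    tokenize_count = Counter(words)  # word -> frequency, no off-by-one
    for i, (word, count) in enumerate(tokenize_count.items()):
        file3.write('"%d","%s","%d"\n' % (i, word.encode('utf-8'), count))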
|
Permission denied, while trying to launch a python script from inside a python script using OS
Question: I'm using this Python script to try to launch another. The intention is to do
this multiple times so that I can launch 4 similar scripts with just one
command:

    import os
    os.system("/path/to/file/.../script.py")

However when I run the script I get: Permission denied.
What am I doing wrong?
Answer: You should be importing scripts and .py files instead of doing an OS
workaround. You could collect your scripts as modules into scripts.py and do

    from scripts import something

Or you could wrap it all up in a main() function and run the scripts in your
current file.
Hope this helps. I'd reserve using os.system() to run exe files or things
outside of Python's immediate scope.
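A minimal sketch of that import-based approach (hypothetical module names,
assuming each script exposes a `main()` function):

    import script1
    import script2

    # run each script's entry point in the current process
    script1.main()
    script2.main()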
|
Date Time split in python
Question: I have to split a date-time, which I get from a piece of software in the
format below, into separate variables (year, month, day, hour, min, sec):
19 Nov 2015 18:45:00.000
Note : There is two spaces between the date and time. The whole date and time
is stored in a single string variable. Please help me in this regards.
Thanks in advance.
Answer: As an alternative to wim's
[answer](http://stackoverflow.com/a/33811125/189134), if you don't want to
install a package, you can do it like so:
import datetime
s = "19 Nov 2015 18:45:00.000"
d = datetime.datetime.strptime(s, "%d %b %Y %H:%M:%S.%f")
print d.year
print d.month
print d.day
print d.hour
print d.minute
print d.second
This outputs:
2015
11
19
18
45
0
* * *
This utilizes
[`strptime`](https://docs.python.org/2/library/datetime.html#strftime-and-
strptime-behavior) to parse the string.
|
How to feed a placeholder?
Question: I am trying to implement a simple feed forward network. However, I can't
figure out how to feed a `Placeholder`. This example:
import tensorflow as tf
num_input = 2
num_hidden = 3
num_output = 2
x = tf.placeholder("float", [num_input, 1])
W_hidden = tf.Variable(tf.zeros([num_hidden, num_input]))
W_out = tf.Variable(tf.zeros([num_output, num_hidden]))
b_hidden = tf.Variable(tf.zeros([num_hidden]))
b_out = tf.Variable(tf.zeros([num_output]))
h = tf.nn.softmax(tf.matmul(W_hidden,x) + b_hidden)
sess = tf.Session()
with sess.as_default():
print h.eval()
Gives me the following error:
...
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 419, in _do_run
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape dim { size: 2 } dim { size: 1 }
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[2,1], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder', defined at:
File "/home/sfalk/workspace/SemEval2016/java/semeval2016-python/slot1_tf.py", line 8, in <module>
x = tf.placeholder("float", [num_input, 1])
...
I have tried
tf.assign([tf.Variable(1.0), tf.Variable(1.0)], x)
tf.assign([1.0, 1.0], x)
but that does not work apparently.
Answer: To feed a placeholder, you use the `feed_dict` argument to `Session.run()` (or
`Tensor.eval()`). Let's say you have the following graph, with a placeholder:
x = tf.placeholder(tf.float32, shape=[2, 2])
y = tf.constant([[1.0, 1.0], [0.0, 1.0]])
z = tf.matmul(x, y)
If you want to evaluate `z`, you must feed a value for `x`. You can do this as
follows:
sess = tf.Session()
print sess.run(z, feed_dict={x: [[3.0, 4.0], [5.0, 6.0]]})
For more information, see the [documentation on
feeding](http://tensorflow.org/how_tos/reading_data/index.html#Feeding).
|
Can you pass weights for a text search to create_index in PyMongo?
Question: I have an automated process that creates a text index on various fields in a
Mongo instance (currently Mongo 2.6 / PyMongo 2.7.2).
from pymongo import MongoClient, TEXT
db = MongoClient()
collection = db.collection
collection.create_index([("foo",TEXT), ("bar",TEXT), ("baz",TEXT)])
I want to weight the collection according to the [Mongo
docs](https://docs.mongodb.org/manual/tutorial/control-results-of-text-
search/). In the Mongo shell, this would be:
db.collection.createIndex({foo: "text", bar: "text", baz: "text"}, {weights: {foo: 10, bar: 5, baz: 1}})
However, I'm doing this in Python and not the Mongo shell. Does PyMongo
support this?
Answer: In Python (PyMongo 2.4.2 and newer), the index declaration looks like:
collection.create_index(
[
('foo', TEXT),
('bar', TEXT),
('baz', TEXT)
],
weights={
'foo': 10,
'bar': 5,
'baz': 1
}
)
|
Best way to decode command line inputs to Unicode Python 2.7 scripts
Question: All my scripts use Unicode literals throughout, with
from __future__ import unicode_literals
but this creates a problem when there is the potential for functions being
called with bytestrings, and I'm wondering what the best approach is for
handling this and producing clear helpful errors.
I [gather](http://stackoverflow.com/a/33743668/656912) that one common
approach, which I've adopted, is to simply make this clear when it occurs,
with something like
def my_func(somearg):
"""The 'somearg' argument must be Unicode."""
if not isinstance(arg, unicode):
raise TypeError("Parameter 'somearg' should be a Unicode")
# ...
for all arguments that need to be Unicode (and might be bytestrings). However
even if I do this, I encounter problems with my `argparse` command line script
if supplied parameters correspond to such arguments, and I wonder what the
best approach here is. It seems that I can simply check the encoding of such
arguments, and decode them using that encoding, with, for example
if __name__ == '__main__':
parser = argparse.ArgumentParser(...)
parser.add_argument('somearg', ...)
# ...
args = parser.parse_args()
some_arg = args.somearg
if not isinstance(some_arg, unicode):
some_arg = some_arg.decode(sys.getfilesystemencoding())
#...
my_func(some_arg, ...)
Is this combination of approaches a common design pattern for Unicode modules
that may receive bytestring inputs? Specifically,
* can I reliable decode command line arguments in this way, and
* will `sys.getfilesystemencoding()` give me the correct encoding for command line arguments; or
* does `argparse` provide some builtin facility for accomplishing this that I've missed?
Answer: I don't think `getfilesystemencoding` will necessarily get the right encoding
for the shell, it depends on the shell (and can be customised by the shell,
independent of the filesystem). The file system encoding is only concerned
with how non-ascii filenames are stored.
Instead, you should probably be looking at `sys.stdin.encoding` which will
give you the encoding for standard input.
Additionally, you might consider using the
[`type`](https://docs.python.org/2/library/argparse.html#type) keyword
argument when you add an argument:
import sys
import argparse as ap
def foo(str_, encoding=sys.stdin.encoding):
return str_.decode(encoding)
parser = ap.ArgumentParser()
parser.add_argument('my_int', type=int)
parser.add_argument('my_arg', type=foo)
args = parser.parse_args()
print repr(args)
Demo:
$ python spam.py abc hello
usage: spam.py [-h] my_int my_arg
spam.py: error: argument my_int: invalid int value: 'abc'
$ python spam.py 123 hello
Namespace(my_arg=u'hello', my_int=123)
$ python spam.py 123 ollǝɥ
Namespace(my_arg=u'oll\u01dd\u0265', my_int=123)
If you have to work with non-ascii data a lot, I would highly recommend
upgrading to python3. Everything is a lot easier there, for example, parsed
arguments will already be unicode on python3.
* * *
Since there is conflicting information about the command line argument
encoding around, I decided to test it by changing my shell encoding to
[latin-1](https://en.wikipedia.org/wiki/ISO/IEC_8859-1) whilst leaving the
file system encoding as utf-8. For my tests I use the [c-cedilla
character](https://en.wikipedia.org/wiki/%C3%87) which has a different
encoding in these two:
>>> u'Ç'.encode('ISO8859-1')
'\xc7'
>>> u'Ç'.encode('utf-8')
'\xc3\x87'
Now I create an example script:
#!/usr/bin/python2.7
import argparse as ap
import sys
print 'sys.stdin.encoding is ', sys.stdin.encoding
print 'sys.getfilesystemencoding() is', sys.getfilesystemencoding()
def encoded(s):
print 'encoded', repr(s)
return s
def decoded_filesystemencoding(s):
try:
s = s.decode(sys.getfilesystemencoding())
except UnicodeDecodeError:
s = 'failed!'
return s
def decoded_stdinputencoding(s):
try:
s = s.decode(sys.stdin.encoding)
except UnicodeDecodeError:
s = 'failed!'
return s
parser = ap.ArgumentParser()
parser.add_argument('first', type=encoded)
parser.add_argument('second', type=decoded_filesystemencoding)
parser.add_argument('third', type=decoded_stdinputencoding)
args = parser.parse_args()
print repr(args)
Then I change my shell encoding to `ISO/IEC 8859-1`:
[](http://i.stack.imgur.com/gnEMQ.png)
And I call the script:
wim-macbook:tmp wim$ ./spam.py Ç Ç Ç
sys.stdin.encoding is ISO8859-1
sys.getfilesystemencoding() is utf-8
encoded '\xc7'
Namespace(first='\xc7', second='failed!', third=u'\xc7')
As you can see, the command line arguments were encoding in latin-1, and so
the second command line argument (using `sys.getfilesystemencoding`) fails to
decode. The third command line argument (using `sys.stdin.encoding`) decodes
correctly.
|
Python: folder creation when copying files
Question: I'm trying to create a shell script that will copy files from one computer
(employee's old computer) to another (employee's new computer). I have it to
the point where I can copy files over, thanks to the lovely people here, but
I'm running into a problem - if I'm going from, say, this directory that has 2
files:
    C:\Users\specificuser\Documents\Test Folder

....to this directory...

    C:\Users\specificuser\Desktop

...I see the files show up on the Desktop, but
the folder those files were in (Test Folder) isn't created.
Here is the copy function I'm using:
#copy function
def dir_copy(srcpath, dstpath):
#if the destination path doesn't exist, create it
if not os.path.exists(dstpath):
os.makedir(dstpath)
#tag each file to the source path to create the file path
for file in os.listdir(srcpath):
srcfile = os.path.join(srcpath, file)
dstfile = os.path.join(dstpath, file)
#if the source file path is a directory, copy the directory
if os.path.isdir(srcfile):
dir_copy(srcfile, dstfile)
else: #if the source file path is just a file, copy the file
shutil.copyfile(srcfile, dstfile)
I know I need to create the directory on the destination, I'm just not quite
sure how to do it.
Edit: I found that I had a typo (os.makedir instead of os.mkdir). I tested it,
and it creates directories like it's supposed to. HOWEVER I'd like it to
create the directory one level up from where it's starting. For example, in
Test Folder there is Sub Test Folder. It has created Sub Test Folder but won't
create Test Folder because Test Folder is not part of the dstpath. Does that
make sense?
Answer: You might want to look at
[shutil.copytree()](https://docs.python.org/3.5/library/shutil.html). It
performs the recursive copy functionality, including directories, that you're
looking for. So, for a basic recursive copy, you could just run:
shutil.copytree(srcpath, dstpath)
However, to accomplish your goal of copying the source directory to the
destination directory, creating the source directory inside of the destination
directory in the process, you could use something like this:
import os
import shutil
def dir_copy(srcpath, dstdir):
dirname = os.path.basename(srcpath)
dstpath = os.path.join(dstdir, dirname)
shutil.copytree(srcpath, dstpath)
Note that your `srcpath` must not contain a slash at the end for this to work.
Also, the result of joining the destination directory and the source directory
name must not already exist, or copytree will fail.
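For example, with the paths from the question (note the missing trailing slash
on the source):
dir_copy(r'C:\Users\specificuser\Documents\Test Folder',
         r'C:\Users\specificuser\Desktop')
This would create `C:\Users\specificuser\Desktop\Test Folder` with all of the
source folder's contents, subfolders included.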
|
Global Variable 'getInfo' is undefined
Question: I'm working on developing a GUI for a project, and once I put all of this
into a class, it returns the following error:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python33\lib\tkinter\__init__.py", line 1475, in __call__
return self.func(*args)
File "c:\users\ryan\documents\visual studio 2015\Projects\Group_3_Project\Group_3_Project\Group_3_Project.py", line 30, in <lambda>
b1 = Button(root, text = 'Submit', command = (lambda e = ents: getInfo(e)))
NameError: global name 'getInfo' is not defined
Here is my code so far:
from tkinter import*
class GUI:
fields = 'Last Name', 'First Name', 'Field', 'Phone Number', 'Office number'
def getInfo(entries):
for entry in entries:
field = entry[0]
text = entry[1].get()
print('%s: "%s"' % (field, text))
def makeForm(root, fields):
entries = []
for field in fields:
row = Frame(root)
lab = Label(row, width = 15, text = field, anchor = 'w')
ent = Entry(row)
row.pack(side = TOP, fill = X, padx = 5, pady = 5)
lab.pack(side = LEFT)
ent.pack(side = RIGHT, expand = YES, fill = X)
entries.append((field, ent))
return entries
if __name__ == '__main__':
root = Tk()
root.wm_title("HoursWizard")
ents = makeForm(root, fields)
root.bind('<Return>', (lambda event, e = ents: getInfo(e)))
b1 = Button(root, text = 'Submit', command = (lambda e = ents: getInfo(e)))
b2 = Button(root, text = 'Quit', command = root.quit)
b1.pack(side = LEFT, padx = 5, pady = 5)
b2.pack(side = LEFT, padx = 5, pady = 5)
root.mainloop()
I have no idea what is going on and why it isn't working correctly. I'm sure
it is an easy fix and I'm just missing something. Any help is appreciated.
Thanks!
Answer: You should check the [official Python
tutorial](https://docs.python.org/3.4/tutorial/index.html) and look at the
section on classes. Basically, your scoping and namespaces are not what you
think they are. Every class method (unless it's been designated as static) is
first passed the instance itself, usually denoted with `self`. You would then
refer to instance attributes with `self.myattribute`. In `getInfo`, for
example, what you call `entries` isn't entries at all, but rather the instance
of the `GUI` class that has been created.
I highly recommend you look up some tutorials for how to make an OO Tkinter
app. It generally goes like this:
class App:
def __init__(self, parent):
self.parent = parent
self.parent.after(5000, self.other_method) # just a demo
# create buttons, lay out geometry, etc.
def other_method(self):
self.do_print()
def do_print(self):
print('hello world')
root = Tk()
app = App(root)
root.mainloop()
|
matplotlib one data point with error bars
Question: How can I add asymmetric error bars to one data point using matplotlib in Python?
Right now I have something like
x = 1
y = 2
yerr = 0.5
pl.errorbar(x, y, yerr=[yerr, 2*yerr],fmt='o')
but I get
In safezip, `len(args[0])=1` but `len(args[1])=2`
Thanks!
(I tried the same plot line but with arrays for x and y and it works fine)
Answer: It is not very well documented, but `errorbar()` expects `yerr` to be a 2xN
array if you want asymmetrical errorbars. For example:
import matplotlib.pyplot as plt
fig = plt.figure()
pl = fig.add_subplot(1, 1, 1)
x = 1
y = 2
yerr = 0.5
pl.errorbar(x, y, yerr=[[yerr], [2*yerr]],fmt='o')
plt.savefig('t.png')
[](http://i.stack.imgur.com/2cT92.png)
|
Sending email from Python throwing SSLError 'unknown protocol'
Question: I was sending emails from flask-mail but since trying to use the mailservers
at namecheap or bluehost, I'm getting the following error:
SSLError: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
So now I'm trying to send the email without flask-mail but I'm still getting
the same error. Any fix? My code is as follows:
from smtplib import SMTP
import datetime
smtp = SMTP()
smtp.set_debuglevel(debuglevel)
smtp.connect('xxxxxx', 26)
smtp.login('[email protected]', 'xxxxxxx')
from_addr = "xxx <[email protected]>"
to_addr = "[email protected]"
subj = "hello"
date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )
message_text = "Hello\nThis is a mail from your server\n\nBye\n"
msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % ( from_addr, to_addr, subj, date, message_text )
smtp.sendmail(from_addr, to_addr, msg)
smtp.quit()
My application is running on Ubuntu 14.04 on Amazon EC2.
Thanks.
Answer: The reason that this is giving you this error is that your mail server is
not an SMTP server. Use Gmail or another SMTP mail service to send the mail.
Try sending it through a Gmail account with the server being `smtp.gmail.com`
and the port being `587`. First, though, you will need to
[configure](https://www.google.com/settings/security/lesssecureapps) your
account for it.
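A minimal sketch of that (the addresses and password are placeholders to
substitute; port 587 uses STARTTLS rather than implicit SSL):
import smtplib
smtp = smtplib.SMTP('smtp.gmail.com', 587)
smtp.ehlo()
smtp.starttls()
smtp.login('[email protected]', 'your_password')
smtp.sendmail('[email protected]', ['[email protected]'], 'Subject: hello\n\ntest body\n')
smtp.quit()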
|
Python: assign to sys.argv item, and use it further
Question: Script 'top.py' makes use of another script 'myScript.py' by passing an
argument to it. Script 'myScript.py' contains:
if __name__ == "__main__":
...
sys.argv[1] = myObject
sys.argv[1].functionSpecificToMyObject() # works as expected
The use is achieved in 'top.py' by simply calling 'myScript.py' with an
argument 'myHelperArgument'. However, in top.py I cannot make use of
'myHelperArgument'. Is this possible? If not, any suggestion on a possible
solution (given the intention above) is welcome. Of course, I prefer not to
write to a file and read later.
Answer: I would recommend [import](https://docs.python.org/3/reference/import.html).
This is designed to use objects defined in one Python source file (module) in
another.
Create two files in the same directory:
First file `my_lib.py`:
def my_func():
return 42
Second file `my_script.py`:
import my_lib
print(my_lib.my_func())
Now, `python my_script.py` in the same directory shows `42`.
|
Timeout issue while running python script Phantomjs and Selenium
Question: I am running a Python script with PhantomJS and Selenium. I am facing a
timeout issue: it stops after 20-50 minutes. I need a solution so that I can
run my script without this timeout issue. Where is the problem, and how can I
solve it?
The input file cannot be read or not in proper format.
Traceback (most recent call last):
File "links_crawler.py", line 147, in <module>
crawler.Run()
File "links_crawler.py", line 71, in Run
self.checkForNextPages()
File "links_crawler.py", line 104, in checkForNextPages
self.next.click()
File "/home/dev/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 75, in click
self._execute(Command.CLICK_ELEMENT)
File "/home/dev/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 454, in _execute
return self._parent.execute(command, params)
File "/home/dev/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 199, in execute
response = self.command_executor.execute(driver_command, params)
File "/home/dev/.local/lib/python2.7/site-packages/selenium/webdriver/remote/remote_connection.py", line 395, in execute
return self._request(command_info[0], url, body=data)
File "/home/dev/.local/lib/python2.7/site-packages/selenium/webdriver/remote/remote_connection.py", line 463, in _request
resp = opener.open(request, timeout=self._timeout)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1227, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1200, in do_open
r = h.getresponse(buffering=True)
File "/usr/lib/python2.7/httplib.py", line 1127, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 417, in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
_Code:_
class Crawler():
def __init__(self,where_to_save, verbose = 0):
self.link_to_explore = ''
self.TAG_RE = re.compile(r'<[^>]+>')
self.TAG_SCRIPT = re.compile(r'<(script).*?</\1>(?s)')
if verbose == 1:
self.driver = webdriver.Firefox()
else:
self.driver = webdriver.PhantomJS()
self.links = []
self.next = True
self.where_to_save = where_to_save
self.logs = self.where_to_save + "/logs"
self.outputs = self.where_to_save + "/outputs"
self.logfile = ''
self.rnd = 0
try:
os.stat(self.logs)
except:
os.makedirs(self.logs)
try:
os.stat(self.outputs)
except:
os.makedirs(self.outputs)
try:
fin = open(file_to_read,"r")
FileContent = fin.read()
fin.close()
crawler =Crawler(where_to_save)
data = FileContent.split("\n")
for info in data:
if info!="":
to_process = info.split("|")
link = to_process[0].strip()
category = to_process[1].strip().replace(' ','_')
print "Processing the link: " + link : " + info
crawler.Init(link,category)
crawler.Run()
crawler.End()
crawler.closeSpider()
except:
print "The input file cannot be read or no in proper format."
raise
Answer: If you don't want Timeout to stop your script you can catch the exception
`selenium.common.exceptions.TimeoutException` and pass it.
You can set the default page load timeout using the `set_page_load_timeout()`
method of `webdriver`.
Like this
`driver.set_page_load_timeout(10)`
This will throw a TimeoutException if your page didn't load in 10 seconds.
EDIT: Forgot to mention that you will have to put your code in a loop.
Add import
`from selenium.common.exceptions import TimeoutException`
while True:
try:
# Your code here
break # Loop will exit
except TimeoutException:
pass
|
Python global list not being updated
Question: I have a global list, and it seems that it doesn't update the list in the
file where it is declared. I have seen several questions similar to my issue
which I could use to fix it, but I was trying to understand why it does not
work in my case.
**HelloWorld.py**
import TestFile
store_val = ["10", "20"]
def main():
store_val.append("30")
print store_val
TestFile.list_val()
if __name__ == '__main__':
main()
**TestFile.py**
import HelloWorld
def list_val():
HelloWorld.store_val.append("40")
print "Store Value : ", HelloWorld.store_val
HelloWorld.store_val.append("60")
print "Updated Value : ", HelloWorld.store_val
The problem that I see is that I am able to append a value to the list in
TestFile.py but can't seem to add a value to the list in HelloWorld.py even
though it is declared there. What is the best way to rectify this issue so
that I can append the value from HelloWorld.py?
Result of running HelloWorld
['10', '20', '30']
Store Value : ['10', '20', '40']
Updated Value : ['10', '20', '40', '60']
Answer: It should be this way instead. When TestFile.py does `import HelloWorld`,
Python creates a second, separate module object (the copy you ran as a script
lives under the name `__main__`), so the imported copy has its own fresh
`store_val`, and its `main()` is not run there because in that copy `__name__`
is `'HelloWorld'` rather than `'__main__'`.
HelloWorld.py
import TestFile
store_val = ["10", "20"]
def main(n=1):
store_val.append("30")
print store_val
if n>0:
TestFile.list_val(n)
if __name__ == '__main__':
main()
TestFile.py
import HelloWorld
def list_val(n):
if (n>=0):
HelloWorld.main(n-1)
HelloWorld.store_val.append("40")
print "Store Value : ", HelloWorld.store_val
HelloWorld.store_val.append("60")
print "Updated Value : ", HelloWorld.store_val
if __name__ == '__main__':
    list_val(1)
Run Code:
python HelloWorld.py
['10', '20', '30']
['10', '20', '30']
Store Value : ['10', '20', '30', '40']
Updated Value : ['10', '20', '30', '40', '60']
|
Run python script on arduino
Question: I have a Python script to slice a wav file using a start and end time. I want
to execute this on an Arduino. Can I run this directly on an Arduino? If not,
can you suggest an idea for how I can do this?
import wave
import pygame
import time
import sys
def slice(infile, outfilename, start_ms, end_ms):
width = infile.getsampwidth() #Returns sample width in bytes
rate = infile.getframerate() #Returns sampling frequency
fpms = rate / 1000 # frames per ms
length = (end_ms - start_ms) * fpms
start_index = start_ms * fpms
out = wave.open(outfilename, "w")
out.setparams((infile.getnchannels(), width, rate, length, infile.getcomptype(), infile.getcompname()))
infile.rewind() #Rewind the file pointer to the beginning of the audio stream
anchor = infile.tell() #Return current file pointer position
infile.setpos(anchor + start_index) #Set the file pointer to the specified position
out.writeframes(infile.readframes(length)) #Write audio frames and make sure nframes is correct
if __name__ == "__main__":
slice(wave.open("song1.wav", "r"), "out.wav", int(sys.argv[1]), int(sys.argv[2]))
pygame.mixer.init()
pygame.mixer.music.load("out.wav")
pygame.mixer.music.play()
while pygame.mixer.music.get_busy() == True:
continue
Answer: If you are using an Arduino Mega you could take a look at
[PyMite](http://playground.arduino.cc/CommonTopics/PyMite).
For the Arduino Uno there was a post ([Is there a way to "compile" Python code
onto an Arduino (Uno)?](http://stackoverflow.com/questions/8114916/is-there-a-
way-to-compile-python-code-onto-an-arduino-uno)) about compiling Python for
Arduino, which mentioned two projects:
[pyMCU](http://www.pymcu.com/overview.html) and another one in development by
a SOer. You could give them a try.
But as they said, porting it to C++ is the easier way.
|
How to concatenate strings within a br in python XPath?
Question: I am trying to concatenate the strings split across a `<br>`, but it is not
working. Here is the code:
<li class="attr">
<span>
Size:L
<br>
Color:RED
</span>
</li>
I tried using this, but it is not working:
color_and_size = row.xpath('.//li[@class="attr"][1]/span[1]/text()')[0]
Answer: It seems your XML structure is invalid, since there is no closing `</br>` tag.
So if you use `lxml`, then try the [soupparser](http://lxml.de/elementsoup.html),
which uses BeautifulSoup. Or you can use standalone BeautifulSoup as below:
from bs4 import BeautifulSoup
s = """<li class="attr">
<span>
Size:L
<br>
Color:RED
</span>
</li>
"""
soup = BeautifulSoup(s)
print map(lambda x: x.text.strip().replace("\n",""),soup.find_all('span'))
Prints-
[u'Size:L Color:RED']
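If you would rather stay with `lxml` and XPath, the soupparser route mentioned
above looks roughly like this (a sketch; the text nodes keep their surrounding
whitespace, so they still need stripping):
from lxml.html import soupparser
root = soupparser.fromstring(s)
parts = root.xpath('//li[@class="attr"]/span/text()')
print [p.strip() for p in parts]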
N.B. BeautifulSoup reorganises the XML internally; e.g., if you want a valid
version of your malformed XML, then try:
print soup.prettify()
Prints-
<html>
<body>
<li class="attr">
<span>
Size:L
<br/>
Color:RED
</span>
</li>
</body>
</html>
If your `xml` were valid, the below `xpath` would work:
`//li[@class='attr']/span/text()[preceding-sibling::br or following-
sibling::br]`
`[Live
Demo](http://www.xpathtester.com/xpath/467bdd7d5a59eb29289d08ea3eca9951)`
**Just click the** `Test` **button**
|
How to log scrapy spiders running from script
Question: Hi all, I have multiple spiders running from a script. The script will be
scheduled once daily.
1. I want to log the info and error messages separately. The log filenames must be **spider_infolog_[date]** and **spider_errlog_[date]**. I am trying the following code:
In the spider `__init__` file:
from twisted.python import log
import logging
LOG_FILE = 'logs/spider.log'
ERR_FILE = 'logs/spider_error.log'
logging.basicConfig(level=logging.INFO, filemode='w+', filename=LOG_FILE)
logging.basicConfig(level=logging.ERROR, filemode='w+', filename=ERR_FILE)
observer = log.PythonLoggingObserver()
observer.start()
within spider:
import logging
.
.
.
logging.error(message)
2. If any exception happens in the spider code (e.g. I am fetching start URLs from MySQL; if the connection fails I need to close that specific spider, not the other spiders, because I am running all spiders from the script):
raise CloseSpider(message)
Is the above code sufficient to close that particular spider?
## EDIT @eLRuLL
import logging
from scrapy.utils.log import configure_logging
LOG_FILE = 'logs/spider.log'
ERR_FILE = 'logs/spider_error.log'
configure_logging()
logging.basicConfig(level=logging.INFO, filemode='w+', filename=LOG_FILE)
logging.basicConfig(level=logging.ERROR, filemode='w+', filename=ERR_FILE)
I have put the above code in the script that schedules the spiders. It is not
working: the log files are not created, but I do get the log messages in the
console.
## EDIT 2
I have added `install_root_handler=False` to `configure_logging()`; it sends
all the console output to the spider.log file, but errors are not
differentiated:
configure_logging(install_root_handler=False)
Answer: You can do this:
from scrapy import cmdline
cmdline.execute("scrapy crawl myspider --logfile mylog.log".split())
Put that script in the path where you put `scrapy.cfg`
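If you also need the separate info/error files from the question, note that
calling `logging.basicConfig` twice does nothing the second time (it is a
no-op once the root logger already has handlers). A sketch with explicit
handlers instead, with file names following the question's pattern:
import logging
import datetime
from scrapy.utils.log import configure_logging
configure_logging(install_root_handler=False)
date = datetime.date.today().isoformat()
info_handler = logging.FileHandler('logs/spider_infolog_%s.log' % date)
info_handler.setLevel(logging.INFO)
error_handler = logging.FileHandler('logs/spider_errlog_%s.log' % date)
error_handler.setLevel(logging.ERROR)
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(info_handler)
root.addHandler(error_handler)
Note that the info log will still contain error records too (ERROR is above
INFO); only the error file is errors-only.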
|
argparse: parse specific options first, while having all options shown in usage hint
Question: I have an application whose options handling partially depends on the value
given for other options, as in
python tool.py --limit=s:100 stuff property
A simple argument parser could look like this:
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('--limit')
parser.add_argument('stuff')
parser.add_argument('property')
args = parser.parse_args(['--limit', 's:100', 'something', '123'])
However, how the `--limit` argument must be handled depends on both `stuff`
and `property`. I could make `--limit` unknown to the parser, parse only known
arguments with
args, unknown = parser.parse_known_args(['--limit', 's:100', 'something', '123'])
and pass `unknown` to a second parser which can act according to the values of
`stuff` and `property`, but then I couldn't include the second parser's help
in the output of `python tool.py --help`, or could I?
Answer: You could give the first parser a custom `usage` parameter that includes all
the arguments. You could write that by hand, or fetch it from a (unused)
parser that includes all arguments.
astr = parser.format_usage()
fetches the usage string
parser = ArgumentParser(usage=astr,...)
passes it to a new parser. You may need to edit `astr` a bit.
Including the extra `help` lines will be trickier. You could add them to the
`description` or `epilog`.
* * *
The order in which arguments are parsed is determined by their order in the
`sys.argv`. The parser looks for positional arguments, then an optional, more
positionals, next optional, etc until the strings are consumed.
As matter of philosophy I prefer to think of `argparse` as a parser, not a
'handler'. It's primary purpose is to figure out what the user wants, and it
give useful feedback if they want something unexpected.
Interactions between arguments are best handled after parsing. It is possible
to implement interactions with custom Action classes, but that code typically
gets more complex.
In your example `--limit` occurs first, putting 's:100' in the namespace
Initial namespace with defaults:
namespace(limit=None, stuff=None, property=None)
namespace after parsing `--limit`:
namespace(limit='s:100', stuff=None, property=None)
namespace after parsing the 2 positionals:
namespace(limit='s:100', stuff='something', property=123)
The `limit` custom Action could look at the values for `stuff` and `property`,
but that doesn't help if those strings come after. Similarly, `stuff` action
could look at and modify the `limit` attribute, but a latter `--limit` could
overwrite that.
If you expand on how these arguments interact we could give you more concrete
ideas.
* * *
Another thought - use one parser to handle the help, and handle `--limit` in a
most general way. Then create or modify a new parser using the useful values
from the first, and parse the input again. The same `sys.argv` can be parsed
multiple times by multiple parsers. Nothing is destroyed in parsing.
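For the handle-interactions-after-parsing approach, a minimal sketch (the
`--limit` semantics shown here are purely illustrative, since they were not
given in the question):
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--limit', help='limit; format depends on stuff/property')
parser.add_argument('stuff')
parser.add_argument('property')
args = parser.parse_args()
# interpret --limit now that stuff and property are known
if args.limit is not None:
    kind, _, value = args.limit.partition(':')
    if args.stuff == 'something':
        args.limit = (kind, int(value))
    else:
        parser.error('--limit %r is not valid for stuff=%r' % (args.limit, args.stuff))
This keeps all arguments in the single parser, so `python tool.py --help`
shows everything, and `parser.error()` still prints the usual usage message
when the combination is rejected.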
|
Is There a Threading module in Python Equivalent in PHP
Question: Is there a PHP equivalent of Python's threading module? My code in Python
would look something like:
from threading import Thread
def somefunction():
print("function")
t = Thread(target = somefunction)
t.start()
Is there a way to convert code like this to PHP? If so, how? Thanks.
Answer: There is. PHP has [`pthreads`](http://php.net/pthreads). How similar it is, and
how easy it would be to convert your Python code using it, remains unclear.
|
Python checking users input
Question: So, basically, this is my code:
import random
import os
answer = input('What is the problem with your mobile phone? Please do not enter more than one sentence.')
print('The program has detected that you have entered a query regarding: ')
if 'wet' or 'water' or 'liquid' or 'mobile' in answer:
print('Put your mobile phone inside of a fridge, it sounds stupid but it should work!')
What I want to know is: if the user enters, for example, the keywords 'wet'
and 'mobile' in their input, how do I feed back to them that my program has
recognised their query?
So by saying something like 'The program has detected that you have entered a
query regarding:' how do I filter their keywords into this sentence, say, if
they entered 'My mobile phone has gotten wet recently', I want to pick out
'mobile' and 'wet' without saying:
print('The program has detected that you have entered wet')
Because that sounds stupid IMO.
Thanks
Answer: If I understand your question correctly, this should solve your problem. Just
put the print statement inside the if condition! One caveat: the condition
`'wet' or 'water' or 'liquid' or 'mobile' in answer` is always true, because
each bare string literal is truthy on its own and only the last one is
actually tested against `answer`; use `any()` to test each keyword. Very
simple, I guess :)
import random
import os
answer = input('What is the problem with your mobile phone? Please do not enter more than one sentence.')
if any(word in answer for word in ('wet', 'water', 'liquid', 'mobile')):
    print('The program has detected that you have entered a query regarding: water') # or anything else wet or whatever
    print('Put your mobile phone inside of a fridge, it sounds stupid but it should work!')
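To echo back the specific keywords the user typed (e.g. 'mobile' and 'wet'),
collect the matches first and splice them into the sentence; a small sketch
along those lines:
keywords = ['wet', 'water', 'liquid', 'mobile']
found = [word for word in keywords if word in answer.lower()]
if found:
    print('The program has detected that you have entered a query regarding: ' + ', '.join(found))
    print('Put your mobile phone inside of a fridge, it sounds stupid but it should work!')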
|
Python-How do I find if a variable is a particular class?
Question: I'm writing a script in python3. I have a variable named `r`, and my code is
like this:
if type(r) == 'twx.botapi.botapi.Error':
print('Error!')
else:
print('success!')
When i use `type(r)` i get this:
<class 'twx.botapi.botapi.Error'>
or
<class 'twx.botapi.botapi.Message'>
I tried using these too but i got no answer:
if type(r) == <class 'twx.botapi.botapi.Error'>:
print('Error!')
else:
print('success!')
How can I find if the class type is a kind of **error** or **message**?
Answer: Depending on whether the variable points to an instance of a class or the
class object itself you can use
[`isinstance`](https://docs.python.org/3.5/library/functions.html#isinstance)
or
[`issubclass`](https://docs.python.org/3.5/library/functions.html#issubclass),
e.g. (since your `type(r)` output shows that `r` is an instance, `isinstance`
is the right one here):
from twx.botapi.botapi import Error, Message
if isinstance(r, Error):
print('Error!')
else:
print('success!')
Both functions accept a tuple of objects for the second parameter.
|
Python 3- "local variable referenced before assignment" with imported code
Question: I am working on a school assignment that requires me to use the following
code:
#!/usr/bin/python3
import re
jfif = re.compile(b'\xff\xd8\xff\xe0..JFIF\x00',
flags = re.DOTALL)
exif = re.compile(b'\xff\xd8\xff\xe1..EXIF\x00',
flags = re.DOTALL)
spiff = re.compile(b'\xff\xd8\xff\xe8..SPIFF\x00',
flags = re.DOTALL)
f = open('manson.dd', 'rb')
sector = f.read(512)
count = 0
while len(sector) != 0:
if jfif.match(sector):
count += 1
print('found JFIF at offset {:X}, sector {}'.format(
f.tell()-512, (f.tell() - 512) // 512))
elif exif.match(sector):
count += 1
print('found EXIF at offset {:X}, sector {}'.format(
f.tell()-512, (f.tell() - 512) // 512))
elif pdf.match(sector):
pdfCount += 1
print('found PDF at offset {:X}, sector {}'.format(
f.tell()-512, (f.tell() - 512) // 512))
sector = f.read(512)
print("Found", count, "JPEG Files")
**The assignment requires me to make a new script that finds GIF files and
imports the actual search function into a script that prompts for the file and
prints the result**
**This is what I have so far**
_Main Script_
#!/usr/bin/python3
import search
fileName = input("Enter the file you would like to search: ")
f = open(fileName, 'rb')
sector = f.read(512)
search.crawl()
print("Found", count, "GIF Files")
_Imported Code_
#!/usr/bin/python3
import re
gif = re.compile(b'\x47\x49\x46\x38..GIF\x61',
flags = re.DOTALL)
def crawl():
count = 0
while len(sector) != 0:
if gif.match(sector):
count += 1
print('found GIF at offset {:X}, sector {}'.format(
f.tell()-512, (f.tell() - 512) // 512))
sector = f.read(512)
When I run the main script, I am told that "the local variable 'sector' is
referenced before assignment'. I haven't the slightest idea how to rectify
this.
Answer: Instead of `def crawl()`, use `def crawl(f, sector)`; the function reads from
`f` as well, so the file object must be passed in too. And then instead of
calling `search.crawl()`, call `search.crawl(f, sector)`.
Also, you will encounter a similar error with `count`. So you should modify
`crawl()` to return `count`, and then assign the result to a variable when you
call `search.crawl(f, sector)` in the main script.
In `main`:
count = search.crawl(f, sector)
In `search`:
def crawl(f, sector):
count = 0
while len(sector) != 0:
if gif.match(sector):
count += 1
print('found GIF at offset {:X}, sector {}'.format(
f.tell()-512, (f.tell() - 512) // 512))
sector = f.read(512)
return count
The reason for this is that when you import the module `search`, it has no
notion of the `sector` or `f` variables defined in the main script. The only
way for them to be visible inside `search.crawl()` is for them to be passed in
as function parameters, a la `search.crawl(f, sector)`.
|
Can't upgrade my version of pip. Something is wrong with my setup
Question: I was trying to install something using pip, and I keep getting this
UserWarning error (see below).
I tried to upgrade pip and you can see the error below.
I haven't used python on my system in a while and it seems to be broken. What
could it be?
pip install -U pip
/usr/local/bin/pip:5: UserWarning: Module _markerlib was already imported from /Library/Python/2.7/site-packages/distribute-0.6.49-py2.7.egg/_markerlib/__init__.pyc, but /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python is being added to sys.path
from pkg_resources import load_entry_point
Downloading/unpacking pip from https://pypi.python.org/packages/py2.py3/p/pip/pip-7.1.2-py2.py3-none-any.whl#md5=5ff9fec0be479e4e36df467556deed4d
Downloading pip-7.1.2-py2.py3-none-any.whl (1.1MB): 1.1MB downloaded
Installing collected packages: pip
Found existing installation: pip 1.5.6
Uninstalling pip:
Cleaning up...
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/Library/Python/2.7/site-packages/pip/req.py", line 1431, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req.py", line 598, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req.py", line 1836, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/util.py", line 295, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
os.unlink(src)
OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pip-1.5.6.dist-info/DESCRIPTION.rst'
Storing debug log for failure in /Users/blankman/Library/Logs/pip.log
Answer: I see you have
OSError: [Errno 13] Permission denied:
This means you need root access to install the package. Since you are not
installing into a virtualenv, it is installing into the global system
site-packages. Try with `sudo`.
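For example (assuming the `pip` on your PATH is the system one you are
upgrading):
sudo pip install -U pip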
|
How can I break down a large csv file into small files based on common records by python
Question: What I want to do:
I have a big .csv file. I want to break this big csv file down into many small
files, based on the common records in the BB column that also contain 1 in the
HH column, together with all the uncommon records that contain 0 in the HH
column.
As a result, each file will contain the common records in the BB column that
contain 1 in the HH column, plus all the uncommon records that have no value
in the BB column and contain 0 in the HH column. The file name should be based
on the common record of column 2 (BB). Please take a look below for the
scenario. Any suggestion or idea is highly appreciated.
bigFile.csv :
AA BB CC DD EE FF GG HH
12 53 115 10 3 3 186 1
12 53 01e 23 3 2 1
12 53 0ce 65 1 3 1
12 53 173 73 4 2 1
12 59 115 0 3 3 186 1
12 59 125 0 3 3 186 1
12 61 01e 23 3 2 1
12 61 b6f 0 1 1 1
12 61 b1b 0 6 5 960 1
12 68b 95 3 5 334 0
12 31a 31 2 2 0
12 221 0 4 5 0
12 12b 25 5 4 215 0
12 a10 36 5 1 0
My expected results files woud be as follows:
53.csv :
AA BB CC DD EE FF GG HH
12 53 115 10 3 3 186 1
12 53 01e 23 3 2 1
12 53 0ce 65 1 3 1
12 53 173 73 4 2 1
12 68b 95 3 5 334 0
12 31a 31 2 2 0
12 221 0 4 5 0
12 12b 25 5 4 215 0
12 a10 36 5 1 0
59.csv :
AA BB CC DD EE FF GG HH
12 59 115 0 3 3 186 1
12 59 125 0 3 3 186 1
12 68b 95 3 5 334 0
12 31a 31 2 2 0
12 221 0 4 5 0
12 12b 25 5 4 215 0
12 a10 36 5 1 0
61.csv :
AA BB CC DD EE FF GG HH
12 61 01e 23 3 2 1
12 61 b6f 0 1 1 1
12 61 b1b 0 6 5 960 1
12 68b 95 3 5 334 0
12 31a 31 2 2 0
12 221 0 4 5 0
12 12b 25 5 4 215 0
12 a10 36 5 1 0
Answer: For the data you have provided, the following script will produce your
requested output files. It will perform this operation on ALL CSV files found
in the folder:
from itertools import groupby
import glob
import csv
import os
def remove_unwanted(rows):
return [['' if col == 'NULL' else col for col in row[2:]] for row in rows]
output_folder = 'temp' # make sure this folder exists
# Search for ALL CSV files in the current folder
for csv_filename in glob.glob('*.csv'):
with open(csv_filename) as f_input:
basename = os.path.splitext(os.path.basename(csv_filename))[0] # e.g. bigfile
csv_input = csv.reader(f_input)
header = next(csv_input)
# Create a list of entries with '0' in last column
id_list = remove_unwanted(row for row in csv_input if row[7] == '0')
f_input.seek(0) # Go back to the start
header = remove_unwanted([next(csv_input)])
for k, g in groupby(csv_input, key=lambda x: x[1]):
if k == '':
break
# Format an output file name in the form 'bigfile_53.csv'
file_name = os.path.join(output_folder, '{}_{}.csv'.format(basename, k))
with open(file_name, 'wb') as f_output:
csv_output = csv.writer(f_output)
csv_output.writerows(header)
csv_output.writerows(remove_unwanted(g))
csv_output.writerows(id_list)
This will result in the files `bigfile_53.csv`, `bigfile_59.csv` and
`bigfile_61.csv` being created in an output folder called `temp`. For example
`bigfile_53.csv` will appear as follows:
[](http://i.stack.imgur.com/Zg4FT.png)
Entries containing the string 'NULL' will be converted to an empty string, and
the first two columns will be removed (as per OP's comment).
Tested in Python 2.7.9
|
Empirical complexity of my "library sort" implementation doesn't seem to match anything like O(n log n)
Question: I recently heard about the [Library
sort](https://en.wikipedia.org/wiki/Library_sort) and since I have to make my
students work on the [Insertion
sort](https://en.wikipedia.org/wiki/Insertion_sort) (from which the Library
sort is derived), I decided to build an exercise for them on this new topic.
The great thing is that this algorithm claims to have a O(n log n) complexity
(see the title [Insertion Sort is O(n log
n)](http://link.springer.com/article/10.1007%2Fs00224-005-1237-z) or the text
in the Wikipedia page from the link above).
I am aware that empirical measurements aren't always reliable but I tried to
do my best and I am a little disappointed by the following plot (blue is the
Library sort, green is the in-place quicksort from [Rosetta
Code](http://rosettacode.org/wiki/Sorting_algorithms/Quicksort#Python));
the vertical axis is the average time computed as the mean of many different
attempts; the horizontal axis is the size of the list. Random lists of size n
have integer elements between 0 and 2n. The shape of the curve doesn't really
look like something related to O(n log n).
[](http://i.stack.imgur.com/n7SD3.png)
Here is my code (including the test part); did I miss something?
# -*- coding: utf8 -*-
def library_sort(l):
# Initialization
d = len(l)
k = [None]*(d<<1)
m = d.bit_length() # floor(log2(n) + 1)
for i in range(d): k[2*i+1] = l[i]
# main loop
a,b = 1,2
for i in range(m):
# Because multiplication by 2 occurs at the beginning of the loop,
        # the first element will not be sorted at first pass, which is wanted
# (because a single element does not need to be sorted)
a <<= 1
b <<= 1
for j in range(a,min(b,d+1)):
p = 2*j-1
s = k[p]
# Binary search
x, y = 0, p
while y-x > 1:
c = (x+y)>>1
if k[c] != None:
if k[c] < s: x = c
else: y = c
else:
e,f = c-1,c+1
while k[e] == None: e -= 1
while k[f] == None: f += 1
if k[e] > s: y = e
elif k[f] < s: x = f
else:
x, y = e, f
break
# Insertion
if y-x > 1: k[ (x+y)>>1 ] = s
else:
if k[x] != None:
if k[x] > s: y = x # case may occur for [2,1,0]
while s != None:
k[y], s = s, k[y]
y += 1
else: k[x] = s
k[p] = None
# Rebalancing
if b > d: break
if i < m-1:
s = p
while s >= 0:
if k[s] != None:
# In the following line, the order is very important
# because s and p may be equal in which case we want
# k[s] (which is k[p]) to remain unchanged
k[s], k[p] = None, k[s]
p -= 2
s -= 1
return [ x for x in k if x != None ]
def quicksort(l):
array = list(l)
_quicksort(array, 0, len(array) - 1)
return array
def _quicksort(array, start, stop):
if stop - start > 0:
pivot, left, right = array[start], start, stop
while left <= right:
while array[left] < pivot:
left += 1
while array[right] > pivot:
right -= 1
if left <= right:
array[left], array[right] = array[right], array[left]
left += 1
right -= 1
_quicksort(array, start, right)
_quicksort(array, left, stop)
import random, time
def test(f):
print "Testing", f.__name__,"..."
random.seed(42)
x = []
y = []
for i in [ 25, 50, 100, 200, 300, 500, 1000, 5000,
10000, 15000, 25000, 40000, 50000, 100000 ]:
n = 100000//i + 1
m = 2*i
x.append(i)
t = time.time()
for j in range(n):
f( [ random.randrange(0,m) for _ in range(i) ] )
y.append( (time.time()-t)/n )
return x,y
import matplotlib.pyplot as plt
x1, y1 = test(library_sort)
x2, y2 = test(quicksort)
plt.plot(x1,y1)
plt.plot(x2,y2)
plt.show()
**Edit:** I computed more values, switched to the pypy interpreter in order to
compute a little more in the same time; here is another plot; I also added
reference curves. It is difficult to be sure of it, but it still looks a
little more like O(n²) than O(n log n)...
[](http://i.stack.imgur.com/t4tZt.png)
Answer: Link to a pdf file that uses a different algorithm for what it also describes
as a library sort on page 7:
<http://classes.engr.oregonstate.edu/eecs/winter2009/cs261/Textbook/Chapter14.pdf>
The pdf file describes a hybrid counting and insertion sort, which seems
significantly different from the wiki article, even though the pdf file
mentions the wiki article. Because of the counting phase, the so-called gaps
are exactly sized so that the destination array is the same size as the
original array, rather than having to be larger by some constant factor as
mentioned in the wiki article.
For example, one method to perform what the pdf file calls a library sort on
an array of integers, using the most significant byte of each integer, an
array of counts == array of bucket sizes is created. These are converted into
an array of starting indexes for each bucket by accumulative summing. Then the
array of starting indexes is copied to an array of ending indexes for each
bucket (which would mean all buckets are initially empty). The sort then
starts: for each integer from the source array, select the bucket via the
most significant byte, do an insertion sort into that bucket based on its
starting and ending indexes, then increment the ending index.
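A minimal sketch of that scheme (assuming non-negative 32-bit integers,
bucketed by their most significant byte; the names are illustrative, not taken
from the pdf):
def hybrid_library_sort(a):
    msb = lambda v: (v >> 24) & 0xFF  # bucket key: most significant byte
    counts = [0] * 256
    for v in a:  # counting pass gives exact bucket sizes
        counts[msb(v)] += 1
    start = [0] * 256  # accumulative sum -> bucket start indexes
    s = 0
    for i in range(256):
        start[i] = s
        s += counts[i]
    end = list(start)  # end == start means every bucket starts empty
    out = [0] * len(a)
    for v in a:  # insertion sort each value into its bucket
        b = msb(v)
        j = end[b]
        while j > start[b] and out[j - 1] > v:
            out[j] = out[j - 1]  # shift larger bucket elements right
            j -= 1
        out[j] = v
        end[b] += 1
    return out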
The pdf file mentions using some type of hash to select buckets for generic
objects. The hash would have to preserve the sort order.
My guess is that the time complexity would be the time to do the insertion
sorts, which is O(bucket_size^2) for each bucket, since the pass to create the
counts / bucket indexes is linear.
For integers, since most of the logic for counting-radix sort is already
there, might as well perform a multi-pass least significant to most
significant byte counting-radix sort and forget about doing the insertion
sort.
Getting back to the wiki article, there's no explanation on how to detect an
empty gap. Assuming that no sentinel value to represent an empty gap is
available, then a third array could be used to indicate if a location in the
second array contained data or was empty.
|