Parsing a string with spaces from command line in Python
Question: Is there a way to call my program in python and pass it a string I want it to
parse without declaring the string as `'String I want to parse'` but as
`String I want to parse`
import argparse
#Parse command line for input
parser = argparse.ArgumentParser(description='Parse input string')
#position input argument
parser.add_argument('string', help='Input String')
args = parser.parse_args()
arg_str = args.string
print(arg_str)
when I run `$ python test.py String I want to parse` I get the error:
`test.py: error: unrecognized arguments: I want to parse `
Is there any way to tell the script to account for spaces and take the input as
one string, until either the end of the input is reached or another parsed
argument such as `-s` is reached?
Answer: The problem is that you are passing five arguments instead of one. Put your
string with spaces between double quotes and it will work.
~ ❯❯❯ python test.py asd asd
usage: test.py [-h] string
test.py: error: unrecognized arguments: asd
~ ❯❯❯ python test.py "asd asd"
asd asd
~ ❯❯❯
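If quoting is not an option, argparse itself can collect the remaining words. A minimal sketch using `nargs='+'` (the `-s` flag is only illustrative):

import argparse

parser = argparse.ArgumentParser(description='Parse input string')
# nargs='+' gathers one or more positional words into a list;
# collection stops when a recognized optional argument such as -s appears
parser.add_argument('string', nargs='+', help='Input String')
parser.add_argument('-s', help='some other argument')
args = parser.parse_args()

arg_str = ' '.join(args.string)
print(arg_str)

Now `python test.py String I want to parse` prints the whole sentence without quotes.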
|
matplotlib: Don't compress plot in the horizontal direction
Question: I am very new to matplotlib and I am having some difficulty with this figure:
[](http://i.stack.imgur.com/CDuGd.png)
I have a text file with x y point groups that I should plot. However, the x
points overlap in each group, so I add an offset to each x axis point for each
group.
Usually, the single groups look like this:
[](http://i.stack.imgur.com/6vzDa.png)
[](http://i.stack.imgur.com/lJ9lb.png)
Note that the x axis in the first image ends where the x-axis in the second
image begins.
My problem is that the resulting image is squashed/compressed and not really
"readable".
I tried increasing the value that is added to the x-axis for each group/image,
but it just compresses each group even more.
I tried suggestions to use `rcParams` or set the `dpi` value of the resulting
image, but nothing does the job:
from pylab import rcParams
rcParams['figure.figsize'] = 50, 100
plt.savefig('result.png', dpi=200,pad_inches=5)
What am I doing wrong or looking for?
PS: The data and code are [here](https://github.com/ctfs/write-ups-2015/tree/master/volgactf-quals-2015/stego/strange-text). To see what my
problem is, call `python2.7 plotit.py text.txt`
Answer: If I understand your question, you don't like your image being too compressed
in the horizontal direction. That happens because by default `matplotlib`
chooses the aspect ratio necessary to fill the given figure size. You were on
the right track with changing `figsize`, but if you want to change it
in `rcParams`, you have to put that call somewhere **before you start
plotting.** The other approach is to use the object-oriented API, that is `fig =
plt.figure(figsize=(8,2)); s = fig.add_subplot(111); s.plot(...)`. This is what
you get:
from pylab import rcParams
rcParams['figure.figsize'] = 8, 2
[](http://i.stack.imgur.com/uRUQ7.png)
Note that I shrunk the circle sizes to make the lines more distinguishable:
plt.scatter(x,y,s=1)
if px!='':
    plt.plot([px,x],[py,y],'-o',markersize=1)
For a more accurate control you can actually set the aspect ratio directly:
plt.axes().set_aspect(1)
[](http://i.stack.imgur.com/tc2Ui.png)
or use some of the predefined modes, e.g.
plt.axis('equal')
plt.tight_layout()
[](http://i.stack.imgur.com/DbLJh.png)
_Edit:_ for reference, the full code for the final picture:
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import sys
from pylab import rcParams
rcParams['figure.figsize'] = 8, 2
def parsexydata(fdata):
    keys=[]
    xy=[]
    with open(fdata,'r') as f:
        pre=''
        for idx, i in enumerate(f.read().replace('\n','').split(',')[2:]):
            if idx%2==0:
                pre=i
                continue
            tmp = pre.split('.')
            if len(tmp)!=3: continue
            [before,key,after] = pre.split('.')
            pre = before+'.'+after
            if key not in keys: keys.append(key)
            xy.append([pre,i,key])
    return [xy,keys]

[xydata, keys] = parsexydata(sys.argv[1])

for idx, k in enumerate(keys):
    px=py=''
    for [x,y,key] in xydata:
        if key!=k: continue
        x=float(x)+float(k)
        if key=='01': print(x)
        plt.scatter(x,y,s=1)
        if px!='':
            plt.plot([px,x],[py,y],'-o',markersize=1)
        px,py=x,y

plt.axis('equal')
plt.tight_layout()
plt.savefig('result.png', dpi=200)
|
How to make columns and rows dynamically in PyQt?
Question: I am making a GUI desktop application using python3, PyQt5 in windows 7.
What I'm trying to do is:
1. when my application runs, it makes an empty table using QTableWidget. There're also four buttons.
2. When a user clicks the first button, the empty table gets 2 rows and 4 columns.
3. In this situation, if the user clicks another button, the previous columns and rows are removed, and the table gets 10 columns and 20 rows.
I made an empty table, but **I don't know how to make columns and rows
dynamically.**
Here is my code:
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QTableWidget, QPushButton
class mainClass(QMainWindow):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        self.tableWidget = tableManager()
        self.returnedTableWidget = self.tableWidget.makeTable(self)
        btnMaker = buttonManager()
        btnMaker.makeTestBtn(self)
        self.setGeometry(100, 100, 700, 600)
        self.show()

class buttonManager(QWidget):
    def __init__(self):
        super().__init__()

    def makeTestBtn(self, parent):
        testBtn01 = QPushButton("2 X 4", parent)
        testBtn02 = QPushButton("4 X 8", parent)
        testBtn03 = QPushButton("8 X 16", parent)
        testBtn04 = QPushButton("16 X 32", parent)
        testBtn01.move(50, 450)
        testBtn02.move(200, 450)
        testBtn03.move(350, 450)
        testBtn04.move(500, 450)

class tableManager(QWidget):
    def __init__(self):
        super().__init__()

    def makeTable(self, parent):
        self.tableMaker = QTableWidget(parent)
        self.tableMaker.setGeometry(50, 50, 600, 400)
        return self.tableMaker

if __name__ == '__main__':
    app = QApplication(sys.argv)
    mc = mainClass()
    sys.exit(app.exec_())
I know there are `setRowCount()` and `setColumnCount()` methods in
`QTableWidget` class. But I don't know how and where to use those methods in
my code.
Answer: To create the rows and columns:
1. fetch all your data from the db
2. use a for statement as follows:

all_data = db.fetch
tbl = QtGui.QTableWidget(len(all_data), X)  # X is the number of columns that you need
header_labels = ['Column 1', 'Column 2', 'Column 3', 'Column 4', ...]
tbl.setHorizontalHeaderLabels(header_labels)
# the constructor above already created the rows, so no insertRow() is needed
for inx, row in enumerate(all_data):
    tbl.setItem(inx, Y, QTableWidgetItem(your_data))  # Y is the column that you want to insert the data into
for example:

all_data = [[1,2,3,4],[5,6,7,8]]
tbl = QtGui.QTableWidget(len(all_data), 4)
header_labels = ['Column 1', 'Column 2', 'Column 3', 'Column 4']
tbl.setHorizontalHeaderLabels(header_labels)
for inx, row in enumerate(all_data):
    tbl.setItem(inx, 0, QTableWidgetItem(str(row[0])))
    tbl.setItem(inx, 1, QTableWidgetItem(str(row[1])))
    tbl.setItem(inx, 2, QTableWidgetItem(str(row[2])))
    tbl.setItem(inx, 3, QTableWidgetItem(str(row[3])))
I hope this was useful. Of course, if I did not understand your question, I apologize.
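For the asker's PyQt5 setup, here is a minimal sketch (class and method names are illustrative, not from the original code) of where `setRowCount()`/`setColumnCount()` belong: connect each button's `clicked` signal to a handler that resizes the one existing table in place, which also discards the previous cells:

import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QTableWidget, QPushButton

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.table = QTableWidget(self)
        self.table.setGeometry(50, 50, 600, 400)
        # one button per size; each click replaces the current grid
        for x, (rows, cols) in zip((50, 200, 350, 500),
                                   ((2, 4), (4, 8), (8, 16), (16, 32))):
            btn = QPushButton("%d X %d" % (rows, cols), self)
            btn.move(x, 450)
            btn.clicked.connect(lambda checked, r=rows, c=cols: self.resizeTable(r, c))
        self.setGeometry(100, 100, 700, 600)
        self.show()

    def resizeTable(self, rows, cols):
        self.table.clearContents()  # remove the previous items
        self.table.setRowCount(rows)
        self.table.setColumnCount(cols)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    mw = MainWindow()
    sys.exit(app.exec_())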
|
Grouping all tests Python
Question:
I have 2 modules: test1.py and test2.py.
**test1.py**
import unittest

class ArithTest (unittest.TestCase):
    def test_run (self):
        """ Test addition and succeed. """
        self.failUnless (1+1==2, 'one plus one fails!')
        self.failIf (1+1 != 2, 'one plus one fails again!')
        self.failUnlessEqual (1+1, 2, 'more trouble with one plus one!')

if __name__ == '__main__':
    unittest.main()
**test2.py**
import unittest

class AlgTest (unittest.TestCase):
    def test_alg (self):
        """ Test addition and succeed. """
        self.assertEqual(1+1, 2, '1+1 != 2? whaaat?')
        self.assertEqual(6-5, 1, '6-5 != 5 wft python?')

if __name__ == '__main__':
    unittest.main()
* * *
**-Now-** I want to create a new module test3.py that runs the tests in test1.py and test2.py.
I don't know how; I have read about test suites on the internet but I don't
understand them. I don't want to create one more method that calls the tests and
invoke it from test3.py. I want to group them, call them in test3.py, and have them run as unit tests.
Answer: test1.py

import unittest

class ArithTest (unittest.TestCase):
    def test_run (self):
        """ Test addition and succeed. """
        self.failUnless (1+1==2, 'one plus one fails!')
        self.failIf (1+1 != 2, 'one plus one fails again!')
        self.failUnlessEqual (1+1, 2, 'more trouble with one plus one!')

    def runTest(self):
        self.test_run()

if __name__ == '__main__':
    unittest.main()

test2.py

import unittest

class AlgTest (unittest.TestCase):
    def test_alg (self):
        """ Test addition and succeed. """
        self.assertEqual(1+1, 2, '1+1 != 2? whaaat?')
        self.assertEqual(6-5, 1, '6-5 != 5 wft python?')

    def runTest(self):
        self.test_alg()

if __name__ == '__main__':
    unittest.main()

test3.py

from .test1 import ArithTest
from .test2 import AlgTest
import unittest

def suite_2():
    suite = unittest.TestSuite()
    suite.addTest(ArithTest())
    suite.addTest(AlgTest())
    return suite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    test_suite = suite_2()
    runner.run(test_suite)
Also add a `__init__.py`
Run it with `python3 -m folder_name.test3`
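An alternative sketch that avoids adding a `runTest()` method to each class: let a `unittest.TestLoader` collect every `test_*` method automatically:

import unittest
from test1 import ArithTest
from test2 import AlgTest

def suite_2():
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(ArithTest))
    suite.addTests(loader.loadTestsFromTestCase(AlgTest))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite_2())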
|
Adding numbers from a file - Python
Question: I am trying to add numbers by reading a file.
The file has text and numbers, so I need to pick out only the numbers and add
them to display the total.
It is always showing the value as zero; I don't know where I'm going wrong.
Here's the code:
import re

file1 = open('Actual.txt')
line = file.read().rstrip()
numlist = list()
for line in file1:
    list = re.findall(r'[0-9.]+',line)
    if len(list)>0:
        num = int(list)
        numlist.append(num)
total = 0
for sum in numlist:
    total += sum
print (total)
Answer: This may help you:

import re

file1 = open('so.txt')
numlist = list()
for line in file1:
    line_list = re.findall(r'[0-9.]+',line)
    if len(line_list) > 0:
        numlist.extend(line_list)
print numlist

total = 0
for sum in numlist:
    total += int(sum)
print (total)
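The same idea in a more compact form (a sketch assuming the numbers in the file are integers; use `float` instead of `int` if they can contain decimal points):

import re

with open('so.txt') as f:
    total = sum(int(n) for n in re.findall(r'[0-9]+', f.read()))
print(total)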
|
Most efficient way to find neighbors in list
Question: I have a list of length 2016, but only 242 items contain data; the rest are set
to None. My aim is to interpolate between the values to fill all gaps with a
simple form of IDW (inverse distance weighting). So the task of my script is:
* Iterate over all items of myList
* If myList contains a value (that is **not** None), just copy it
* If you find a "None" in myList, get the position/value of the left and right neighbors by calculating the distance to all items in myList
* Calculate an interpolated value for the gap from both neighbors (the farther away they are, the less weight they get)
Assume we have a smaller list of only 14 items (5 valid ones):
myList = [26, None, None, None, 31, None, None, 58, None, 42, None, None, None, 79]
resultList = [None] * len(myList)

for i in range(len(myList)):
    if not myList[i] is None:
        resultList[i] = myList[i]
    else:
        distance = [i - j for j in range(len(myList)) if not myList[j] is None]
        neighbors = min([n for n in distance if n>0]), max([n for n in distance if n<0])
        # rest of the interpolation (not important for my question):
        neighbors_c = [(1/float(n))**2 for n in neighbors]
        c_sum = sum(neighbors_c)
        neighbors_c = [n/c_sum for n in neighbors_c]
        resultList[i] = myList[i-neighbors[0]]*neighbors_c[0] + myList[i-neighbors[1]]*neighbors_c[1]
I am doing that for many many data sets. I found out that this method takes
about 0.59sec per data set. What bothers me is the fact that my list is all
sorted, but I will only need 2 values from it. So 99% of the distances are
calculated for nothing. That led me to attempt two: stop the iteration after
i-j becomes negative, because then obviously it ran into the closest values:
So instead of the list comprehension:
distance = [i - j for j in range(len(myList)) if not myList[j] is None]
I do a proper for-loop which I quit after distances pass zero and thus become
larger again:
dist = []
for j in range(len(myList)):
    if not myList[j] is None:
        dist.append(i-j)
        if i-j < 0: break
With this method I was able to get down to 0.38sec per data set. When
iterating over all items in myList, this second method is quick at the
beginning (item is hit after 2nd, 3rd, 4th, ... loop and quit immediately),
but there is no improvement for the last items, since iteration always starts
at j=0.
I wonder if you can think of any quicker way to find the two neighbors of a
specific number in a data set, without having to check ALL distances and only
taking the largest negative and smallest positive one.
Also, I'm quite new to python, so please let me know if you find other un-
pythonic expressions in my script. Thank you guys very much!
Answer: **UPDATE:** Here's how to do it with numpy
[`interp`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html#numpy.interp):
import numpy as np
myList = [26, None, None, None, 31, None, None, 58, None, 42, None, None, None, 79]
values = [(i, val) for i, val in enumerate(myList) if val is not None]
xp, fp = zip(*values)
print(xp) # (0, 4, 7, 9, 13)
print(fp) # (26, 31, 58, 42, 79)
result = np.interp(np.arange(len(myList)), xp, fp)
print(result) # [ 26. 27.25 28.5 29.75 31. 40. 49. 58. 50. 42. 51.25 60.5 69.75 79. ]
**Original Post:**
As others have already suggested, you'd be best off using some
interpolation already implemented in numpy or pandas.
However, for completeness, here's a quick solution I came up with:
myList = [26, None, None, None, 31, None, None, 58, None, 42, None, None, None, 79]
resultList = []
# first lets split the list into sublists that group the numbers
# and the Nones into groups
for i, item in enumerate(myList):
    if i == 0:
        resultList.append([item])
    else:
        if type(resultList[-1][-1]) == type(item):
            resultList[-1].append(item)
        else:
            resultList.append([item])
print(resultList) # [[26], [None, None, None], [31], [None, None], [58], [None], [42], [None, None, None], [79]]
# now lets interpolate the sublists that contain Nones
for i, item in enumerate(resultList):
    if item[0] is not None:
        continue
    # this is a bit problematic, what do we do if we have a None at the beginning or at the end?
    if i == 0 or i + 1 == len(resultList):
        continue
    prev_item = resultList[i - 1][-1]
    next_item = resultList[i + 1][0]
    difference = next_item - prev_item
    item_length = len(item) + 1
    for j, none_item in enumerate(item):
        item[j] = prev_item + float(j + 1) / item_length * difference
# flatten the list back
resultList = [item for sublist in resultList for item in sublist]
print(resultList) # [26, 27.25, 28.5, 29.75, 31, 40.0, 49.0, 58, 50.0, 42, 51.25, 60.5, 69.75, 79]
I suggest you use this only for learning or for simple cases as it does not
handle cases where you have lists beginning or ending with `None`
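If numpy is not available, the neighbor search itself can be done in O(log n) per gap with the standard `bisect` module, since the indices of the non-None values are already sorted. A sketch:

import bisect

myList = [26, None, None, None, 31, None, None, 58, None, 42, None, None, None, 79]
known = [i for i, v in enumerate(myList) if v is not None]

def neighbors(i):
    # indices of the nearest non-None entries to the left and right of i
    pos = bisect.bisect_left(known, i)
    return known[pos - 1], known[pos]

print(neighbors(5))  # (4, 7): the values 31 and 58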
|
How to convert HEXEWKB to Latitude, Longitude (in python)?
Question: I downloaded some Point of Interest data from OpenStreetMap and as it turns
out the locations are encoded in HEXEWKB format:
CSV fields
==============================================
1 : Node type; N|W|R (in upper case), whether it is a Node, Way or Relation in the openstreetmap model
2 : id; The openstreetmap id
3 : name; The default name of the city
4 : countrycode; The iso3166-2 country code (2 letters)
5 : alternatenames; the names of the POI in other languages
6 : location; The middle location of the POI in HEXEWKB
7 : tags; the POI tags : amenity,aeroway,building,craft,historic,leisure,man_made,office,railway,tourism,shop,sport,landuse,highway separated by '___'
I need to transform these into longitude/latitude values. This very same
question was asked before for the Java language ([How to convert HEXEWKB to
Latitude, Longitude (in java)?](http://stackoverflow.com/questions/20346113/how-to-convert-hexewkb-to-latitude-longitude-in-java)), however I need a Python solution.
My attempts so far have been focused on trying to use GeoDjango's GEOS module
(<https://docs.djangoproject.com/en/1.8/ref/contrib/gis/geos/#creating-a-geometry>), but since I'm not using Django in my application this feels a bit
like overkill. Is there any simpler approach?
Answer: After trying different libraries I found the most practical solution in a
somewhat related question: [Why can shapely/geos parse this 'invalid' Well
Known Binary?](http://stackoverflow.com/questions/29500460/why-can-shapely-geos-parse-this-invalid-well-known-binary). This involves using **shapely**
(<https://pypi.python.org/pypi/Shapely>):
from shapely import wkb
hexlocation = "0101000020E6100000CB752BC86AC8ED3FF232E58BDA7E4440"
point = wkb.loads(hexlocation, hex=True)
longitude = point.x
latitude = point.y
That is, you just need to use **wkb.loads** to transform the HEXEWKB string to
a shapely Point object, then extract the long/lat coordinates from that point.
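For reference, a 2D HEXEWKB point can also be decoded with just the standard library (a Python 3 sketch): one byte of byte order, four bytes of geometry type (with the SRID flag set), four bytes of SRID, then the two coordinate doubles:

import struct

hexlocation = "0101000020E6100000CB752BC86AC8ED3FF232E58BDA7E4440"
raw = bytes.fromhex(hexlocation)
endian = '<' if raw[0] == 1 else '>'  # first byte: 1 = little-endian
gtype, srid = struct.unpack(endian + 'II', raw[1:9])
longitude, latitude = struct.unpack(endian + 'dd', raw[9:25])
print(srid, longitude, latitude)  # 4326 0.93... 40.99...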
|
django 1.8 does not work on CentOs 6.5 server
Question:
Installing collected packages: Django
Successfully installed Django-1.8
[root@manage ~]# PYTHON
-bash: PYTHON: command not found
[root@manage ~]# python
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/site-packages/django/__init__.py", line 1, in <module>
from django.utils.version import get_version
File "/usr/lib/python2.6/site-packages/django/utils/version.py", line 7, in <module>
from django.utils.lru_cache import lru_cache
File "/usr/lib/python2.6/site-packages/django/utils/lru_cache.py", line 28
fasttypes = {int, str, frozenset, type(None)},
^
SyntaxError: invalid syntax
>>>
Hello, I am new to Django and CentOS. I just installed django 1.8 by pip
successfully, but when I try to import django in the python shell, it shows the
error message above. Can anyone tell me what's happening? Thank you!
Answer: > Django 1.8 requires Python 2.7, 3.2, 3.3, 3.4, or 3.5.
<https://docs.djangoproject.com/fr/1.9/releases/1.8/#python-compatibility>
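If upgrading Python on the server is not an option, a workaround (assuming the 1.6 series, the last to support Python 2.6, is acceptable for your project) is to pin the Django version:

$ pip install "Django<1.7"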
|
Function to write lines in CSV file? Or parse list?
Question: I have a python script with no functions. At the moment, it takes in an input
`file1.txt` and outputs a string with results from the calculation, i.e.
file1.txt 12 16.0 True
I would like to run through dozens of text files and write these strings into
a CSV file, row by row.
Does one append each output into a list, and then parse the list? This feels
like you should write a function to take each input and output the
calculations directly into a CSV file.
How do I take the inputs from each time I run the script, e.g. filename, data
result, data result 2, Boolean result, and write this to the file using
`csv.writer.writerow()`?
import csv
import sys

f = open(sys.argv[1], 'wt')
try:
    writer = csv.writer(f)
    writer.writerow( ('Filename', 'data', 'data', 'Boolean') )
    writer.writerow( % (f, ..., ..., str(...) )
finally:
    f.close()
print open(sys.argv[1], 'rt').read()
Answer: You can obtain the list of your arguments from sys.argv:

arg_list = sys.argv[1:]

and then create a loop after creating the writer:

f = open(sys.argv[1], 'wt')
arg_list = sys.argv[1:]
try:
    writer = csv.writer(f)
    writer.writerow(('Filename', 'data', 'data', 'Boolean'))
    for arg in arg_list:
        writer.writerow((arg, ..., ..., str(...)))  # your row
finally:
    f.close()
print open(sys.argv[1], 'rt').read()

Meanwhile, if you only need to write a single row to the csv, the code would be:

f = open(sys.argv[1], 'wt')
try:
    writer = csv.writer(f)
    writer.writerow(('Filename', 'data', 'data', 'Boolean'))
    writer.writerow((sys.argv[2], sys.argv[3], sys.argv[4]))
finally:
    f.close()
print open(sys.argv[1], 'rt').read()
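To tie this back to the question, here is a sketch of the function-based approach: wrap the per-file computation in a function (`calculate()` below is a hypothetical stand-in for the existing script's logic) and append one CSV row per input file:

import csv

def calculate(filename):
    # placeholder for the real computation
    return (filename, 12, 16.0, True)

filenames = ['file1.txt', 'file2.txt']
with open('results.csv', 'w') as out:
    writer = csv.writer(out)
    writer.writerow(('Filename', 'data', 'data', 'Boolean'))
    for name in filenames:
        writer.writerow(calculate(name))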
|
Theano and Celery : Worker exited prematurely: signal 11 (SIGSEGV)
Question: I'm building a web app where I start the training of a neural network
implemented with theano on the server side by sending an ajax request from the
client. Obviously I don't want to wait for the server to train the network
entirely before sending an answer back to my client, because it would take way
too long. So I came across celery, which enables me to run asynchronous code on
the server side. I run the celery workers with the command `celery -A CBIR worker -l
info`. Unfortunately I get the following message each time a worker runs
my task (which is training my network using theano):
[2015-12-14 19:15:06,790: ERROR/MainProcess] Process 'Worker-3' pid:1610 exited with 'signal 11 (SIGSEGV)'
[2015-12-14 19:15:07,001: ERROR/MainProcess] Task fit[ac40d4d4-5b56-4278-b270-647ef76f3a49] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/Users/leo/anaconda/envs/ImgRet/lib/python3.5/site-packages/billiard/pool.py", line 1175, in mark_as_worker_losthuman_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
I have been searching for why this error might occur, and from what I understand
the code I'm running is suffering from a memory leak. What I don't understand is why
my code has no problem running without celery, but this error comes up
when using Celery.
And most importantly, I have no clue how to resolve this. I used lldb to look
at the dump file generated, and here is the backtrace I have:
thread #1: tid = 0x0000, 0x00007fff93b4a9b3 libdispatch.dylib`dispatch_group_async + 533, stop reason = signal SIGSTOP
* frame #0: 0x00007fff93b4a9b3 libdispatch.dylib`dispatch_group_async + 533
frame #1: 0x00007fff7c5b8d40 libdispatch.dylib`_dispatch_root_queues + 1280
frame #2: 0x00007fff9519b228 libBLAS.dylib`APL_dgemm + 1100
frame #3: 0x00007fff951d27aa libBLAS.dylib`cblas_dgemm + 1420
frame #4: 0x0000000104beeb18 multiarray.cpython-35m-darwin.so`gemm + 200
frame #5: 0x0000000104bee3b9 multiarray.cpython-35m-darwin.so`cblas_matrixproduct + 3097
frame #6: 0x0000000104bc01af multiarray.cpython-35m-darwin.so`PyArray_MatrixProduct2 + 207
frame #7: 0x0000000104bc4808 multiarray.cpython-35m-darwin.so`array_matrixproduct + 264
frame #8: 0x00000001000671a9 libpython3.5m.dylib`PyCFunction_Call + 281
frame #9: 0x00000001000f2fbd libpython3.5m.dylib`PyEval_EvalFrameEx + 32029
frame #10: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #11: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #12: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #13: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #14: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #15: 0x00000001000e95a7 libpython3.5m.dylib`PyEval_CallObjectWithKeywords + 87
frame #16: 0x00000001042fae3a lazylinker_ext.so`pycall(self=0x0000000108fad3d8, node_idx=13, verbose=0) + 442 at mod.cpp:510
frame #17: 0x00000001042fa869 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=24, one=0x000000010026cf60, zero=0x000000010026cf40) + 2089 at mod.cpp:704
frame #18: 0x00000001042fa789 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=28, one=0x000000010026cf60, zero=0x000000010026cf40) + 1865 at mod.cpp:690
frame #19: 0x00000001042fa16d lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=30, one=0x000000010026cf60, zero=0x000000010026cf40) + 301 at mod.cpp:576
frame #20: 0x00000001042fa789 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=33, one=0x000000010026cf60, zero=0x000000010026cf40) + 1865 at mod.cpp:690
frame #21: 0x00000001042fa789 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=36, one=0x000000010026cf60, zero=0x000000010026cf40) + 1865 at mod.cpp:690
frame #22: 0x00000001042fa789 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=41, one=0x000000010026cf60, zero=0x000000010026cf40) + 1865 at mod.cpp:690
frame #23: 0x00000001042fa789 lazylinker_ext.so`lazy_rec_eval(self=0x0000000108fad3d8, var_idx=42, one=0x000000010026cf60, zero=0x000000010026cf40) + 1865 at mod.cpp:690
frame #24: 0x00000001042f83db lazylinker_ext.so`CLazyLinker_call(_self=0x0000000108fad3d8, args=0x0000000100382048, kwds=0x0000000000000000) + 811 at mod.cpp:838
frame #25: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #26: 0x00000001000ed08c libpython3.5m.dylib`PyEval_EvalFrameEx + 7660
frame #27: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #28: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #29: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #30: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #31: 0x000000010002a79c libpython3.5m.dylib`method_call + 140
frame #32: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #33: 0x0000000100080743 libpython3.5m.dylib`slot_tp_call + 67
frame #34: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #35: 0x00000001000ed08c libpython3.5m.dylib`PyEval_EvalFrameEx + 7660
frame #36: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #37: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #38: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #39: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #40: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #41: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #42: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #43: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #44: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #45: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #46: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #47: 0x000000010002a79c libpython3.5m.dylib`method_call + 140
frame #48: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #49: 0x0000000100080743 libpython3.5m.dylib`slot_tp_call + 67
frame #50: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #51: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #52: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #53: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #54: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #55: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #56: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #57: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #58: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #59: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #60: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #61: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #62: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #63: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #64: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #65: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #66: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #67: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #68: 0x000000010002a79c libpython3.5m.dylib`method_call + 140
frame #69: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #70: 0x0000000100080471 libpython3.5m.dylib`slot_tp_init + 81
frame #71: 0x000000010007b114 libpython3.5m.dylib`type_call + 212
frame #72: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #73: 0x00000001000ed08c libpython3.5m.dylib`PyEval_EvalFrameEx + 7660
frame #74: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #75: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #76: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #77: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #78: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #79: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #80: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #81: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #82: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #83: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #84: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #85: 0x000000010002a79c libpython3.5m.dylib`method_call + 140
frame #86: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #87: 0x0000000100080471 libpython3.5m.dylib`slot_tp_init + 81
frame #88: 0x000000010007b114 libpython3.5m.dylib`type_call + 212
frame #89: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #90: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #91: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #92: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #93: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #94: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #95: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #96: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #97: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #98: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #99: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #100: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #101: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #102: 0x00000001000f4ef7 libpython3.5m.dylib`PyEval_EvalCodeEx + 71
frame #103: 0x0000000100041d2a libpython3.5m.dylib`function_call + 186
frame #104: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #105: 0x000000010002a79c libpython3.5m.dylib`method_call + 140
frame #106: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #107: 0x0000000100080743 libpython3.5m.dylib`slot_tp_call + 67
frame #108: 0x000000010000d783 libpython3.5m.dylib`PyObject_Call + 99
frame #109: 0x00000001000eff0b libpython3.5m.dylib`PyEval_EvalFrameEx + 19563
frame #110: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #111: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #112: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #113: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #114: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #115: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #116: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #117: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #118: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #119: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #120: 0x00000001000f3d26 libpython3.5m.dylib`PyEval_EvalFrameEx + 35462
frame #121: 0x00000001000f4053 libpython3.5m.dylib`PyEval_EvalFrameEx + 36275
frame #122: 0x00000001000f4df0 libpython3.5m.dylib`_PyEval_EvalCodeWithName + 2400
frame #123: 0x00000001000f4f51 libpython3.5m.dylib`PyEval_EvalCode + 81
frame #124: 0x0000000100123d4e libpython3.5m.dylib`PyRun_FileExFlags + 206
frame #125: 0x0000000100123fef libpython3.5m.dylib`PyRun_SimpleFileExFlags + 447
frame #126: 0x000000010013c7d7 libpython3.5m.dylib`Py_Main + 3479
frame #127: 0x0000000100000e92 python3`main + 418
frame #128: 0x0000000100000cc4 python3`start + 52
I don't really know how to interpret this backtrace. Thanks in advance for any
help !
Answer: If anyone runs into the same problem: it turns out a workaround is to import the
theano library inside the task, and not at the module level.
This way:

import baz
import bar

@app.task
def foo():
    import theano
    # do something with theano
Check [here](https://github.com/celery/celery/issues/2964#issuecomment-165669659)
for more explanation.
|
Find the same pattern in two files python
Question: I'm trying to take each word from a search file and look for it in two
different files:
- search for the word in the first file; if the word is found, print that line to a new file
- search for the same word in a second file; if the word is found, print the line found in the second file in the same output file, after the line found in the first file and separated by a _tabulation_.
This is the script that I wrote:

for line in first_file:
    for line2 in second_file:
        for word in keywords:
            if re.match (r"\b"+word+r"\b" , line):
                result = (line.strip()+'\t'+line2.strip())
                print (result, file = new_file)
But the result that I get is always an empty file.
This is an example of the first file :
contig-39000000 1211 11.3902
contig-44000000 6128 17.8944
contig-91000000 2180 14.2197
this is an example of the second file
contig-316000330 out.18
contig-39000000 out.25
...
and this is an example of the search file
contig-39000000
contig-289000379
contig-300000489
contig-310000168
contig-310000172
contig-320000463
contig-324000213
contig-327000055
contig-341000436
contig-342000384
....
The expected result would look like :
contig-39000000 1211 11.3902 out.25
Thank you
Answer: Yeah, pandas was much easier than SQLite.
import pandas as pd
# change sep to use '\t' if those files actually are tab-delimited
df1 = pd.read_csv('file1.txt', sep=r'\s+', header=None)
df2 = pd.read_csv('file2.txt', sep=r'\s+', header=None)
result = pd.merge(df1, df2, on=0)
result.to_csv('output.txt', sep='\t', header=False, index=False)
* * *
Contents of `file1.txt`
contig-39000000 1211 11.3902
contig-44000000 6128 17.8944
contig-91000000 2180 14.2197
Contents of `file2.txt`
contig-39000000 out.13
contig-316000330 out.18
contig-316000341 out.25
Contents of `output.txt`
contig-39000000 1211 11.3902 out.13
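Without pandas, the same join can be done with plain dictionaries in one pass over each file. A sketch (file1.txt/file2.txt as above; 'search.txt' is an illustrative name for the search file):

keywords = set(line.strip() for line in open('search.txt'))

lookup = {}
with open('file2.txt') as f2:
    for line in f2:
        key, value = line.split(None, 1)
        lookup[key] = value.strip()

with open('output.txt', 'w') as out, open('file1.txt') as f1:
    for line in f1:
        key = line.split(None, 1)[0]
        if key in keywords and key in lookup:
            out.write(line.rstrip() + '\t' + lookup[key] + '\n')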
|
AttributeError: 'module' object has no attribute 'MutableMapping'
Question: I followed the instructions for installing Google Tensorflow and its
dependencies on an Ubuntu 14.04 g2.8xlarge aws instance. While trying to run
the example problems, I'm running into the error posted below. Any help would
be greatly appreciated. Thanks.
Traceback (most recent call last):
File "convolutional.py", line 30, in <module>
import tensorflow.python.platform
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 37, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/core/framework/graph_pb2.py", line 6, in <module>
from google.protobuf import descriptor as _descriptor
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/descriptor.py", line 50, in <module>
from google.protobuf.pyext import _message
AttributeError: 'module' object has no attribute 'MutableMapping'
Answer: This sounds like an incompatibility between TensorFlow and the version of
Protocol Buffers that's installed on your machine. The two best options are:
1. Try to upgrade the Protobuf library in `/usr/local/lib/python2.7/dist-packages/google/protobuf/` to version 3.0.0a3 or higher.
2. Install TensorFlow in a virtualenv, by following the instructions [here](https://www.tensorflow.org/versions/master/get_started/os_setup.html#virtualenv_install). This should install the appropriate version of protobuf alongside TensorFlow.
|
draw matrix as a table in python
Question: I am trying to generate a data matrix with python and wish to draw it as a
table. From IPython, I found a display api and it can display the matrix as a
table, but I would still prefer to: 1. remove the row index column and the
column index row; 2. have the data grid split evenly.
I am thinking about matplotlib but am not sure how to do it. I wish to do it in the
same script so there is no need to paste it here and there!
%matplotlib inline
import numpy as np
import pandas as pd
from IPython.display import display, HTML
import matplotlib.pyplot as plt
print pd.__version__
row = 6
col = row
matrix = np.zeros((row, col))
for i in range(row):
    for j in range(col):
        if i == 0:
            matrix[i][j] = 1
        else:
            matrix[i][j] = matrix[i-1][j] + matrix[i][j-1]
df = pd.DataFrame(matrix)
display(df)
Current output is:
[](http://i.stack.imgur.com/BH63Xm.png)
### Updated
According to Brat's comments, I have updated my code for reference:
%matplotlib inline
import numpy as np
import pandas as pd
from IPython.display import display, HTML
import matplotlib.pyplot as plt
row = 7
col = row
matrix = np.zeros((row, col))
matrix = matrix.astype(int)
for i in range(row):
    for j in range(col):
        if i == 0:
            matrix[i][j] = 1
        else:
            matrix[i][j] = matrix[i-1][j] + matrix[i][j-1]
df = pd.DataFrame(matrix)
#display(df)
w = 5
h = 5
plt.figure(1, figsize=(w, h))
tb = plt.table(cellText=matrix, loc=(0,0), cellLoc='center')
tc = tb.properties()['child_artists']
for cell in tc:
    cell.set_height(1.0/row)
    cell.set_width(1.0/col)
ax = plt.gca()
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Output as bleow:
[](http://i.stack.imgur.com/3JVbr.png)
Answer: With matplotlib you might consider using `table`, for example:
import numpy as np
import matplotlib.pylab as pl
nx = 4
ny = 5
data = np.random.randint(0,10,size=(ny,nx))
pl.figure()
tb = pl.table(cellText=data, loc=(0,0), cellLoc='center')
tc = tb.properties()['child_artists']
for cell in tc:
    cell.set_height(1.0/ny)  # float division so this also works under Python 2
    cell.set_width(1.0/nx)
ax = pl.gca()
ax.set_xticks([])
ax.set_yticks([])
[](http://i.stack.imgur.com/SgkYh.png)
|
Python : Runtime Warning : Overflow encountered in square, add, multiply, subtract
Question: I'm beginning with Python so I might be asking a rather basic question, but
after quite a lot of research I couldn't resolve this error. I'm actually
trying to solve a physics problem using the Gray-Scott model, but I'm stuck at
the very end of the code: the results are not considered as numbers and
there's an overflow encountered in square, add, multiply and subtract.
Does anyone here have the slightest idea of where this comes from?
Thanks!
These are the initial conditions of the problem I'm trying to solve :
n = 192
Du, Dv, F, k = 0.00016, 0.00008, 0.035, 0.065
dh = 5/(n-1)
T = 8000
dt = .9 * dh**2 / (4*max(Du,Dv))
nt = int(T/dt)
uvinitial = numpy.load('./uvinitial.npz')
Uin = uvinitial['U']
Vin = uvinitial['V']
Now here are my functions :
def Nd1(U,V) :
    return - U*(V)**2 + F*(1-U)

def Nd2(U,V) :
    return U*(V)**2 -(F+k)*V

def gray_scott_solve(Du, Dv, dh, dt, nt, Uin, Vin, Nd1, Nd2):
    Uplus = Uin.copy()
    Vplus = Vin.copy()
    for n in range(nt):
        U = Uplus.copy()
        V = Vplus.copy()
        Uplus[1:-1,1:-1] = ( Nd1(U[1:-1,1:-1], V[1:-1,1:-1]) + Du/(dh**2) \
            *(U[2:,1:-1] + U[:-2,1:-1] - 4*U[1:-1,1:-1]) \
            + U[1:-1,2:] + U[1:-1,:-2] )*dt \
            + U[1:-1,1:-1]
        Uplus[:,-1] = Uplus[:,-2]
        Uplus[-1,:] = Uplus[-2,:]
        Uplus[:,0] = Uplus[:,1]
        Uplus[0,:] = Uplus[1,:]
        Vplus[1:-1,1:-1] = ( Nd2(U[1:-1,1:-1], V[1:-1,1:-1]) + Du/(dh**2) \
            *(V[2:,1:-1] + V[:-2,1:-1] - 4*V[1:-1,1:-1]) \
            + V[1:-1,2:] + V[1:-1,:-2] )*dt \
            + V[1:-1,1:-1]
        Vplus[:,-1] = Vplus[:,-2]
        Vplus[-1,:] = Vplus[-2,:]
        Vplus[:,0] = Vplus[:,1]
        Vplus[0,:] = Vplus[1,:]
    return U, V
I now want to print the results I'm looking for :
U, V = gray_scott_solve(Du, Dv, dh, dt, nt, Uin, Vin, Nd1, Nd2)
print(U[100,::40])
And I finally get this error :
[ nan nan nan nan nan]
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:2: RuntimeWarning: overflow encountered in square from ipykernel import kernelapp as app
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:2: RuntimeWarning: overflow encountered in multiply from ipykernel import kernelapp as app
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:5: RuntimeWarning: overflow encountered in square
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:5: RuntimeWarning: overflow encountered in multiply
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:11: RuntimeWarning: invalid value encountered in add
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:11: RuntimeWarning: overflow encountered in multiply
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:11: RuntimeWarning: invalid value encountered in subtract
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:18: RuntimeWarning: invalid value encountered in add
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:18: RuntimeWarning: overflow encountered in multiply
C:\Users\Anto\Anaconda3\lib\site-packages\ipykernel\__main__.py:18: RuntimeWarning: invalid value encountered in subtract
Answer: As you have it written, your spatial step, `dh`, will be equal to zero in
python 2:
n = 192
...
dh = 5 / ( n - 1 )
If you are using python 3 then dh will properly be treated as a float.
Otherwise, as was stated by @WarrenWeckesser, you are using the Method of
lines and integrating in time with the Forward Euler method, which with your
set of constraints (you said that your instructor specified your timestep and
other parameter values) is apparently unstable. However, using a Runge Kutta
Two method will work for your `dt` (I verified this), but your instructor may
have mentioned the time integration method that you should use.
Regardless, if Runge-Kutta Two seems daunting, use a second order scheme that is
centered in time (the leapfrog method):

u_{n+1} = u_{n-1} + 2 * dt * f(t_n, u_n)

where `f(t,u)` is the right hand side and `u_{n-1}` is the value of `u` at
time `t_{n-1}`; or use the backward Euler method.
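For reference, a generic midpoint (Runge-Kutta two) step for du/dt = f(u) looks like the sketch below; `rhs` is a hypothetical function returning the right hand side arrays for the current state:

def rk2_step(u, dt, rhs):
    k1 = rhs(u)                  # slope at the current state
    k2 = rhs(u + 0.5 * dt * k1)  # slope at the midpoint
    return u + dt * k2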
|
Efficiently filter windowed observations in a pandas dataframe if they contain a certain value
Question: I have a pandas dataframe that has windows/chains of string observations
indexed at the offset of their first observation. The window is of a variable
but known length. For this example we can say they're chains of 4 observations.
I want to know how to most efficiently eliminate certain rows if they have a
specific observation anywhere in their windows, knowing that if the 9th window
begins with the value I am looking for, I can get rid of it, and of the
three windows before it, because they will also contain the same value later in
their windows. It is possible for a window to contain multiple instances of
the value I want to filter for. Here's some sample data. Starting with a
simple series of events, ser:
import pandas as pd
ser = pd.Series(['a','b','c','d','e','f','g','h','i','j','k'])
>>> ser
0 a
1 b
2 c
3 d
4 e
5 f
6 g
7 h
8 i
9 j
10 k
Then I turn this into a dataframe where each row is a window of n
observations. Here n == 4.
df = pd.concat([ser.shift(-x) for x in range(4)], axis=1)
>>> df
0 1 2 3
0 a b c d
1 b c d e
2 c d e f
3 d e f g
4 e f g h
5 f g h i
6 g h i j
7 h i j k
8 i j k NaN
9 j k NaN NaN
10 k NaN NaN NaN
Now I want to get rid of all the rows that include the value 'f' anywhere,
i.e.:
desired_output
0 1 2 3
0 a b c d
1 b c d e
6 g h i j
7 h i j k
8 i j k NaN
9 j k NaN NaN
10 k NaN NaN NaN
I'd like to avoid searching the whole dataframe as it only contains repetition
of the first column, and my value for n can be somewhat long. In this example,
what would be the best way to drop the rows that start with 'c', 'd', 'e',
and 'f', knowing that they will all include an 'f' somewhere? Later I join all
the strings in each row into one value but it seems like it should be easier
to manipulate the data at this stage where everything is in different columns.
This is with pandas 0.16.0 and must work on python 2.76 and python 3.4. Thank
you!
Answer: You can do, without searching the whole dataframe:
import numpy as np
ind = -np.arange(0, df.shape[1])+pd.Index(ser).get_loc('f')
df.iloc[np.setdiff1d(ser.index, ind)]
#Out[48]:
# 0 1 2 3
#0 a b c d
#1 b c d e
#6 g h i j
#7 h i j k
#8 i j k NaN
#9 j k NaN NaN
#10 k NaN NaN NaN
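Equivalently (a sketch of the same logic): a window of length n starting at row i contains 'f' exactly when loc - n < i <= loc, where loc is the position of 'f' in the series, so those n row labels can be dropped directly:

loc = pd.Index(ser).get_loc('f')  # 5
n = df.shape[1]                   # window length, 4
print(df.drop(list(range(loc - n + 1, loc + 1))))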
|
Passing arguments to shell script from Python
Question: I'm writing a python program which passes arguments to a shell script.
Here's my python code:
import subprocess
Process=subprocess.Popen('./copyImage.sh %s' % str(myPic.jpg))
And my "copyImage.sh":
#!/bin/sh
cp /home/pi/project/$1 /home/pi/project/newImage.jpg
I can run the script on terminal without problems. But when executing the
python code, the terminal returned `"NameError: name 'myPic' is not defined"`.
If I change the syntax to
Process=subprocess.Popen('./copyImage.sh %s' % "myPic.jpg")
Then the terminal returned `"OSError: [Errno 2] No such file or directory"`.
I've followed this: [Python: executing shell script with arguments(variable),
but argument is not read in shell script](http://stackoverflow.com/questions/19325761/python-executing-shell-script-with-argumentsvariable-but-argument-is-not-rea), but it didn't help.
Answer: The `subprocess` module is expecting a list of arguments, not a space-
separated string. The way you tried caused python to look for a program called
`"copyImage.sh myPic.jpg"` and call it with no arguments, whereas you wanted
to look for a program called `copyImage.sh` and call it with one argument.
subprocess.check_call(['copyImage.sh', 'myPic.jpg'])
I also want to mention, since your script simply calls copy in a shell, you
should probably cut out the middleman and just use python's
[`shutil.copy`](https://docs.python.org/2/library/shutil.html#shutil.copy)
directly. It's a more appropriate tool than running a subprocess for this
task.
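For completeness, the `shutil` route suggested above would look like this (paths mirror the shell script):

import shutil

shutil.copy('/home/pi/project/myPic.jpg', '/home/pi/project/newImage.jpg')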
|
Making Fill Plot Matplotlib - Error: invalid type promotion
Question: I am trying to create a fill plot using Matplotlib in Python (I am using
spyder IDE) using the code below. However, I receive the error posted at the
bottom of the code. The dataframe I want to plot contains dates along the
x-axis and then a range of y-values (I created the sample dataframe to see if
the error was reproduced, in case there was a problem with my dataframe).
This seems to be snagging me quite a bit and I am trying to start from scratch
(and visited the matplotlib website, which is posted in the comments within
the codes).
I am very new to Python and Matplotlib and maybe I am just missing something
basic. I will be needing to add a lot of plots with a date-time x-axis for the
work I am performing, so if there are any additional features I should be
aware of, please let me know; it would be greatly appreciated!
Thanks in advance!
import pandas as pd
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
data1 = [{'a':dt.datetime(2015,1,1,12,0,0), 'b':1},{'a':dt.datetime(2015,1,1,13,0,0), 'b':2},{'a':dt.datetime(2015,1,1,14,0,0), 'b':3}]
df1 = pd.DataFrame(data1)
data2 = [{'a':dt.datetime(2015,1,1,12,0,0), 'c':9},{'a':dt.datetime(2015,1,1,13,0,0), 'c':8},{'a':dt.datetime(2015,1,1,14,0,0), 'c':7}]
df2 = pd.DataFrame(data2)
df3 = pd.merge(df1, df2, how = 'left', on = 'a')
plt.fill(df3['a'], df3['c'])
plt.show()
"""
used this example code from Fill Demo on page http://matplotlib.org/users/screenshots.html
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 1)
y = np.sin(4 * np.pi * x) * np.exp(-5 * x)
plt.fill(x, y, 'r')
plt.grid(True)
plt.show()
"""
File "/Applications/anaconda/lib/python3.5/site-packages/numpy/core/shape_base.py", line 280, in hstack
return _nx.concatenate(arrs, 1)
TypeError: invalid type promotion
Answer: Just use the values instead of the Series directly:
plt.fill(df3['a'].values, df3['c'].values)
You more likely want this type of plot:
plt.fill_between(df3['a'].values, df3['c'].values)
|
Insert Google Analytics API data to postgresql [python]
Question: I want to store bulk data in postgresql. The data I got is from the Google
Analytics API. The data is about pageviews, and here is my code:
data = '[["20151201","path","title",345], ["20151202","path","title",321], ["20151203","path","title",214]]'
import psycopg2

def storeJson( jsonFile, tableName ):
    conn = psycopg2.connect( host=hostname, user=username, password=password, dbname=database )
    try:
        cur = conn.cursor()
        # Here is the problem:
        cur.executemany( "INSERT INTO " + tableName + " VALUES(%s)", [jsonFile])
        conn.commit()
    except psycopg2.DatabaseError as e:
        if conn:
            conn.rollback()
        print("Error %s" %e)
        exit()
    finally:
        if conn:
            cur.close()
            conn.close()

def main():
    storeJson(data, "daily_pageviews")

if __name__ == '__main__':
    main()
with the code above, i got error message like this:
> json.decoder.JSONDecodeError: Expecting ':' delimiter: line 1 column 12
> (char 11)
Can someone enlighten me? Thanks guys!
Answer: _`jsonFile` is a string_ in your case. You need to load it with
`json.loads()`:
import json
data = json.loads(jsonFile)
cur.executemany("INSERT INTO " + tableName + " VALUES(%s, %s, %s, %s)", data)
Note that I have _4 placeholders_ in the query - each for every item in every
sublist.
|
Django error: No module named _mysql
Question: I am migrating my app to a prod mariadb database from django's default sqlite db.
The environment is python3.3 + centos 7 + django 1.8 + apache. On sqlite
everything works fine, but after the migration I get this error after
restarting apache:
Traceback (most recent call last):
File "/usr/local/mainproject/mysite/mysite/wsgi.py", line 23, in <module>
application = get_wsgi_application()
File "/usr/local/lib/python3.3/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
django.setup()
File "/usr/local/lib/python3.3/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.3/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/usr/local/lib/python3.3/site-packages/django/apps/config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python3.3/site-packages/django/contrib/auth/models.py", line 41, in <module>
class Permission(models.Model):
File "/usr/local/lib/python3.3/site-packages/django/db/models/base.py", line 139, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "/usr/local/lib/python3.3/site-packages/django/db/models/base.py", line 324, in add_to_class
value.contribute_to_class(cls, name)
File "/usr/local/lib/python3.3/site-packages/django/db/models/options.py", line 250, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/usr/local/lib/python3.3/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/lib/python3.3/site-packages/django/db/utils.py", line 240, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/lib/python3.3/site-packages/django/db/utils.py", line 111, in load_backend
return import_module('%s.base' % backend_name)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python3.3/site-packages/django/db/backends/mysql/base.py", line 27, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
ImproperlyConfigured: Error loading MySQLdb module: No module named _mysql
I have installed mysqlclient as described here:
<https://docs.djangoproject.com/en/1.8/ref/databases/#mysql-db-api-drivers>.
Then I ran manage.py syncdb and it finished successfully. I have
checked the database structure and it is ok - all data was migrated. But there is
still this error when my app starts.
The solution is to use the non-official lib <https://github.com/nakagami/django-cymysql/>. It is easy to install and configure with django:
$ pip install cymysql
$ pip install django-cymysql
Then just add this engine to django's settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'mysql_cymysql',
        'NAME': 'some_what_database',
        'HOST': ...,
        'USER': ...,
        'PASSWORD': ...,
    }
}
Answer: Try this; this solved my issue
pip install MySQL-python
|
hasMany not updating ember store
Question: I am using the ember-django-adapter, as my backend is designed in
python/django (<http://dustinfarris.com/ember-django-adapter/>).
feed.js (model)
import DS from 'ember-data';

const {attr} = DS

export default DS.Model.extend({
    gallery_name: attr('string'),
    thumbnail_url: attr('string'),
    is_following: attr('boolean'),
    time: attr('string'),
    description: attr('string'),
    feedPhotos: DS.hasMany('feedPhoto', {embedded: 'always'})
});
feedphoto.js(model)
import DS from 'ember-data';

export default DS.Model.extend({
    feed: DS.belongsTo('feed'),
    url: DS.attr(),
    profilePic: DS.attr(),
    userName: DS.attr(),
    userKarma: DS.attr(),
    caption: DS.attr(),
    numComments: DS.attr(),
    owner: DS.attr(),
    time: DS.attr(),
    photo_url: DS.attr(),
    comments_url: DS.attr(),
    numFives: DS.attr(),
    fivers_url: DS.attr(),
    fivers_pk: DS.attr(),
    fullphoto_url: DS.attr(),
    fullphoto_pk: DS.attr(),
    is_fived: DS.attr('boolean'),
    hiFiveKarma: DS.attr(),
    owner_pk: DS.attr(),
    userFirstName: DS.attr(),
    is_bookmarked: DS.attr('boolean')
});
feed.js(serializer)
import DRFSerializer from './drf';
import DS from 'ember-data';

export default DRFSerializer.extend(DS.EmbeddedRecordsMixin, {
    primaryKey: 'pk',
    attrs: {
        feedPhotos: { embedded: 'always' }
    }
});
feedphoto.js(serializer)

import DRFSerializer from './drf';
import DS from 'ember-data';

export default DRFSerializer.extend({
    primaryKey: 'pk',
});
response.json
[{
"pk": 127,
"url": "http://example.com/api/galleries/127/",
"gallery_name": "Faces",
"thumbnail_url": "https://dz.cloudfront.net/galleryThumbs/2656a05c-4ec7-3eea-8c5e-d8019454d443.jpg",
"time": "1 month ago",
"description": "Created by user",
"is_following": true,
"feedPhotos": [{
"pk": 574,
"url": "http://examle.com/api/photos/574/",
"profilePic": "https://d3.cloudfront.net/userDPs/b6f69e4e-980d-3cc3-8b3e-3eb1a7f21350.jpg",
"userName": "Rohini",
"userKarma": 194,
"caption": "Life @ Myanmar!",
"numComments": 0,
"owner": "http://example.cloud.net/api/users/45/",
"time": "2 months ago",
"photo_url": "https://example.cloud.net/photos/eeae72d5-d6af-391e-a218-b442c0c7e34e.jpg",
"comments_url": "http://example.cloud.net/api/photos/574/comments/",
"numFives": 2,
"fivers_url": "http://example.cloud.net/api/photogalleries/1303/fivers/",
"fivers_pk": 1303,
"fullphoto_url": "http://example.cloud.net/api/photogalleries/1303/photo/",
"fullphoto_pk": 1303,
"is_fived": false,
"hiFiveKarma": 0,
"owner_pk": 45,
"userFirstName": "Rohini",
"is_bookmarked": false
}, {
"pk": 446,
"url": "http://example.cloud.net/api/photos/446/",
"profilePic": "https://example.cloud.net/userDPs/b359fab0-211d-32b5-8f13-f5edbeb0fbf9.jpg",
"userName": "Shushma",
"userKarma": 224,
"caption": "",
"numComments": 0,
"owner": "http://example.cloud.net/api/users/34/",
"time": "2 months ago",
"photo_url": "https://example.cloud.net/photos/a415ed45-b6e5-33e0-a17e-6452ddb2f258.jpg",
"comments_url": "http://example.cloud.net/api/photos/446/comments/",
"numFives": 3,
"fivers_url": "http://example.cloud.net/api/photogalleries/1315/fivers/",
"fivers_pk": 1315,
"fullphoto_url": "http://example.cloud.net/api/photogalleries/1315/photo/",
"fullphoto_pk": 1315,
"is_fived": false,
"hiFiveKarma": 0,
"owner_pk": 34,
"userFirstName": "Shushma",
"is_bookmarked": false
}]
}]
The code is able to save the feed model details but not the feedphoto model. In
my Ember Inspector the feed model shows the correct data, but the feedphoto
model is not showing any data. I'm not sure where I am going wrong.
Answer: I think you have two problems:
1. the capital letter in `DS.hasMany('feedPhoto',{embedded:'always'})`. This would mean that your model file should be named 'feed-photo.js' or 'feed_photo.js'.
2. Also, I haven't worked with the ember-django-adapter yet, but such adapters usually interpret `feed_photos` in a response as a `feedPhoto` ember relation. So you should probably return `feed_photos` from your server instead of `feedPhotos`, or rename the attribute to `feedphotos`.
|
Which logger should I use to get my data in Cloud Logging
Question: I am running a PySpark job using Cloud Dataproc, and want to log info using
the `logging` module of Python. The goal is to then push these logs to Cloud
Logging.
From [this question](http://stackoverflow.com/questions/34186511/output-from-dataproc-spark-job-in-google-cloud-logging/), I learned that I can achieve
this by adding a logfile to the fluentd configuration, which is located at
`/etc/google-fluentd/google-fluentd.conf`.
However, when I look at the log files in `/var/log`, I cannot find the files
that contain my logs. I've tried using the default python logger and the
'py4j' logger.
logger = logging.getLogger()
logger = logging.getLogger('py4j')
Can anyone shed some light as to which logger I should use, and which file
should be added to the fluentd configuration?
Thanks
Answer: tl;dr
This is not natively supported now but will be natively supported in a future
version of Cloud Dataproc. That said, there is a manual workaround in the
interim.
Workaround
First, make sure you are sending the python logs to the correct log4j logger
from the spark context. To do this declare your logger as:
import pyspark
sc = pyspark.SparkContext()
logger = sc._jvm.org.apache.log4j.Logger.getLogger(__name__)
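Once declared this way, the logger exposes the usual log4j methods, for example:
logger.info("pipeline started")
logger.warn("something unexpected happened")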
The second part involves a workaround that isn't natively supported yet. If
you look at the spark properties file under
/etc/spark/conf/log4j.properties
on the master of your cluster, you can see how log4j is configured for spark.
Currently it looks like the following:
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
# Settings to quiet third party logs that are too verbose
...
Note that this means log4j logs are sent only to the console. The dataproc
agent will pick up this output and return it as the [job driver
output](https://cloud.google.com/dataproc/driver-output). However, in order for
fluentd to pick up the output and send it to Google Cloud Logging, you will
need log4j to write to a local file. Therefore you will need to modify the
log4j properties as follows:
# Set everything to be logged to the console and a file
log4j.rootCategory=INFO, console, file
# Set up console appender.
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
# Set up file appender.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/spark/spark-log4j.log
log4j.appender.file.MaxFileSize=512KB
log4j.appender.file.MaxBackupIndex=3
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
# Settings to quiet third party logs that are too verbose
...
If you set the file to /var/log/spark/spark-log4j.log as shown above, the
default fluentd configuration on your Dataproc cluster should pick it up. If
you want to set the file to something else you can follow the instructions in
[this question](http://stackoverflow.com/questions/34186511/output-from-
dataproc-spark-job-in-google-cloud-logging/) to get fluentd to pick up that
file.
|
Google API Build Service Object returns KeyError: 'rootUrl'
Question: I have a flask application where I can run a script (with the help of Flask-
script) that makes use of google api discovery using the code below:
app_script.py
import argparse
import csv
import httplib2
from apiclient import discovery
from oauth2client import client
from oauth2client.file import Storage
from oauth2client import tools
def get_auth_credentials():
flow = client.flow_from_clientsecrets(
        '/path/to/client_secrets.json', # file downloaded from Google Developers Console
scope='https://www.googleapis.com/auth/webmasters.readonly',
redirect_uri='urn:ietf:wg:oauth:2.0:oob')
storage = Storage('/path/to/storage_file.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
parser = argparse.ArgumentParser(parents=[tools.argparser])
flags = parser.parse_args(['--noauth_local_webserver'])
credentials = tools.run_flow(flow=flow, storage=storage, flags=flags)
return credentials
def main():
credentials = get_auth_credentials()
http_auth = credentials.authorize(httplib2.Http())
# build the service object
service = discovery.build('webmasters', 'v3', http_auth)
Now the problem is every time I shutdown my computer upon booting and running
the script again, I get the following error when trying to build the service
object:
terminal:
$ python app.py runscript
No handlers could be found for logger "oauth2client.util"
Traceback (most recent call last):
File "app.py", line 5, in <module>
testapp.manager.run()
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/user/development/testproject/testapp/__init__.py", line 16, in runscript
metrics_collector.main()
File "/home/user/development/testproject/testapp/metrics_collector.py", line 177, in main
service = discovery.build('webmasters', 'v3', http_auth)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 206, in build
credentials=credentials)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 306, in build_from_document
base = urljoin(service['rootUrl'], service['servicePath'])
KeyError: 'rootUrl'
Installed:
google-api-python-client==1.4.2
httplib2==0.9.2
Flask==0.10.1
Flask-Script==2.0.5
The script runs sometimes*, but that's the problem: I don't know why it runs
sometimes and other times doesn't.
*What I tried to make it work: delete all the cookies, download the client_secrets.json from the Google Developers Console again, remove the storage_file.dat, and remove all .pyc files from the project.
Can anyone help me see what's going on?
Answer: From a little bit of research [here](https://code.google.com/p/google-api-
python-client/issues/detail?id=225), it seems that the `No handlers could be
found for logger "oauth2client.util"` error can actually be masking a
different error. You need to use the
[logging](https://docs.python.org/3.5/library/logging.html) module and
configure it to output log messages.
**Solution**
Just add the following to configure logging:
import logging
logging.basicConfig()
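If the default level still doesn't reveal enough, you can also raise the verbosity (plain standard-library logging usage, nothing library-specific):
import logging
logging.basicConfig(level=logging.DEBUG)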
**Other helpful/related posts**
[Python - No handlers could be found for logger
"OpenGL.error"](http://stackoverflow.com/questions/345991/python-no-handlers-
could-be-found-for-logger-opengl-error)
[SOLVED: Error trying to access "google drive" with python (google
quickstart.py source
code)](http://stackoverflow.com/questions/25786218/solved-error-trying-to-
access-google-drive-with-python-google-quickstart-py)
|
Write Excel file from Outlook Emails
Question: I have written some code in Python that reads through emails in an Outlook
folder using win32com.client. I can read it in easily and have all of the logic
built out; now I want to write an Excel file as I iterate through all of the
messages. This is where I have a problem.
My latest attempt used xlwt but I am open to using anything. The problem I am
running into is that when I try to write a cell with the Sender or the Date
from the outlook email, I get the following error:
> Exception: Unexpected data type
Does anyone know how I can fix this/get around it?
Do I have to convert the .Sender, .Date instances to some other form?
Quick sample below:
import win32com.client
import xlwt
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)
book = xlwt.Workbook(encoding="utf-8")
sheet = book.add_sheet("New Sheet")
for folder in inbox.Folders:
    fold = folder.Items
    for messages in fold:
        date = messages.ReceivedTime
        sender = messages.Sender
        sheet.write(1,0,date)
        sheet.write(2,0,sender)
Answer: Replace `sheet.write(2,0,sender)` with `sheet.write(2,0,sender.Name)` or
`sheet.write(2,0,sender.Address)`.
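The `ReceivedTime` value is a COM time object that xlwt rejects in the same way; converting it to a string (or a native `datetime`) before writing is a common workaround. A sketch of the corrected inner loop (attribute names as in the question):
for messages in fold:
    date = messages.ReceivedTime
    sender = messages.Sender
    sheet.write(1, 0, str(date))    # stringify the COM time value first
    sheet.write(2, 0, sender.Name)  # write the sender's display name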
|
Issues with Folium Choropleth Mapping Example
Question: I am trying to create a choropleth map using folium, following the example
here: <https://pypi.python.org/pypi/folium>. The goal is to produce a
choropleth map of US unemployment rates, but when I open my map US states are
not shaded in. Any suggestions?
import folium
import pandas as pd
state_geo = r'data/us-states.json'
state_unemployment = r'data/US_Unemployment_Oct2012.csv'
state_data = pd.read_csv(state_unemployment)
#Let Folium determine the scale
map = folium.Map(location=[48, -102], zoom_start=3)
map.geo_json(geo_path=state_geo, data=state_data,
columns=['State', 'Unemployment'],
key_on='feature.id',
fill_color='YlGn', fill_opacity=0.7, line_opacity=0.2,
legend_name='Unemployment Rate (%)')
map.create_map(path='us_states.html')
Thanks,
Answer: I think the issue is a mismatch of column names:
1) `data=state_data` has the columns ['State', 'Unemployment']
2) if you open us-states.json, you will find that key_on='feature.id' corresponds to '01', '02' and so on.
In folium, key_on is supposed to match the first column of the data, in this case
'State'. But '01', '02', ... doesn't fit the 'State' column, which has 'AL', 'AK', 'AZ', ...
If you can come up with a key_on in us-states.json that matches the 'State'
column, I think it should solve your problem.
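One quick way to see what key_on='feature.id' will actually match is to print a few feature ids from the GeoJSON (assuming the file path from the question):
import json
with open('data/us-states.json') as f:
    geo = json.load(f)
print([feature['id'] for feature in geo['features'][:5]])
If that prints abbreviations like 'AL', 'AK', the join with the 'State' column will work; if it prints codes like '01', '02', you need to remap one side first.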
Note: I am assuming us-states.json is from
<https://raw.githubusercontent.com/alignedleft/d3-book/master/chapter_12/us-
states.json> and US_Unemployment_Oct2012.csv from
<https://raw.githubusercontent.com/python-
visualization/folium/master/examples/US_Unemployment_Oct2012.csv>
|
How do you run a python file from cmd or Python Shell or setup a workspace?
Question: I have looked at [this
explanation](http://stackoverflow.com/questions/1522564/how-do-i-run-a-python-
program) on how to put Python as a system variable for `Path`. This worked
successfully and from my `cmd` I can now run python which will result in the
terminal changing to the `>>>` notation.
However, I am not able to run the script `test.py` even when I `cd` into the
right directory. I get the error: `name test not defined`.
My first question is: how can I solve this and should I solve it?
Digging a bit deeper, I also found that there is the **IDLE Python shell**
that comes with the installation of Python. This interface provides
support for the `>>>` notation and you can run Python code in it. My guess is
that it is better to use this interface to run scripts. However, it is not
clear to me how to cd into the right directory and run the file `test.py`
(located in: `C:\dev`). How should I run it? Is there a way to `cd` into a
directory/workspace?
So basically, how do I run a python file and where should I run it? cmd or in
the shell?
Answer: The error sounds like you are correctly changing directories in cmd, then
running python, then entering the name of the file at the python prompt. So you
set a path in cmd, then ran python, and gave it a name (`test`) it doesn't
know.
Since you aren't importing or running this file from within python you need to
tell cmd to run this file in python by changing directory correctly and then
telling it to run in python like so:
cd documents/py_scripts
python test.py
On unix, I would also point you to executable scripts: use #!/usr/local/bin/python as
your first special line (with your own path to the python interpreter), and run the
chmod +x file.py command to make the file executable.
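If you do want to run the file from inside the interactive `>>>` prompt instead of cmd, one portable option is to execute it explicitly:
>>> exec(open('C:/dev/test.py').read())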
|
How do you get the time and date in python?
Question: I am trying to get the date and time in python for a program I am writing, but
am unsure how to get them. I tried doing:
import time
import datetime
This usually works but now it just prints:
<module 'time' (built-in)> <module 'datetime' from 'C:\\Python34\\lib\\datetime.py'>
How can I fix this?
Answer: Formatted time: `datetime.datetime.now().strftime('%d/%m/%Y %H:%M')`
Datetime object: `datetime.datetime.now()`
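As a complete, runnable snippet:
import datetime
now = datetime.datetime.now()
print(now)                             # datetime object
print(now.strftime('%d/%m/%Y %H:%M')) # formatted string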
|
TypeError: can only concatenate list (not "int") to list 4
Question: I'm required to take a Python module for my course and I get this error for my
script. It's plotting the trajectory of a projectile and calculating a few
other variables. I've typed the script exactly as in the booklet we are given.
Because I am an absolute beginner I can't understand other answers to this
error. I would appreciate it an awful lot if someone could give me a quick
fix, I don't have time at the moment to learn enough to fix it myself.
Code:
import matplotlib.pyplot as plt
import numpy as np
import math # need math module for trigonometric functions
g = 9.81 #gravitational constant
dt = 1e-3 #integration time step (delta t)
v0 = 40 # initial speed at t = 0
angle = math.pi/4 #math.pi = 3.14, launch angle in radians
time = np.arange(0,10,dt) #time axis
vx0 = math.cos(angle)*v0 # starting velocity along x axis
vy0 = math.sin(angle)*v0 # starting velocity along y axis
xa = vx0*time # compute x coordinates
ya = -0.5*g*time**2 + vy0*time # compute y coordinates
fig1 = plt.figure()
plt.plot(xa, ya) # plot y versus x
plt.xlabel ("x")
plt.ylabel ("y")
plt.ylim(0, 50)
plt.show()
def traj(angle, v0): # function for trajectory
vx0 = math.cos(angle) * v0 # for some launch angle and starting velocity
vy0 = math.sin(angle) * v0 # compute x and y component of starting velocity
x = np.zeros(len(time)) #initialise x and y arrays
y = np.zeros(len(time))
    x[0], y[0] = 0, 0 # projectile starts at 0,0
x[1], y[1] = x[0] + vx0 * dt, y[0] + vy0 * dt # second elements of x and
# y are determined by initial
# velocity
i = 1
    while y[i] >= 0: # conditional loop continues until
# projectile hits ground
x[i+1] = (2 * x[i] - x[i - 1]) # numerical integration to find x[i + 1]
y[i+1] = (2 * y[i] - y[i - 1]) - g * dt ** 2 # and y[i + 1]
i = [i + 1] # increment i for next loop
x = x[0:i+1] # truncate x and y arrays
y = y[0:i+1]
return x, y, (dt*i), x[i] # return x, y, flight time, range of projectile
x, y, duration, distance = traj(angle, v0)
print "Distance:" ,distance
print "Duration:" ,duration
n = 5
angles = np.linspace(0, math.pi/2, n)
maxrange = np.zeros(n)
for i in range(n):
x,y, duration, maxrange [i] = traj(angles[i], v0)
angles = angles/2/math.pi*360 #convert rad to degrees
print "Optimum angle:", angles[np.where(maxrange==np.max(maxrange))]
The error explicitly:
File "C:/Users/***** at *****", line 52, in traj
x = x[0:i+1] # truncate x and y arrays
TypeError: can only concatenate list (not "int") to list
Answer: As is pointed out in the comments, this is the offending line
i = [i + 1] # increment i for next loop
Here, `i` is not actually being incremented as the comment suggests. When `i`
is 1, it's being set to `[1 + 1]`, which evaluates to `[2]`, the list
containing only the number 2. Remove the brackets.
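That is, the line should simply read:
i = i + 1 # increment i for next loop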
|
elastic beanstalk, awsebcli, and blessed 1.9.5
Question: I was using the elastic beanstalk cli with AWS without any difficulty a few
months ago. I wanted to update my website and ran into this error:
me$ eb status
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/eb", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3095, in <module>
    @_call_aside
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3081, in _call_aside
    f(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3108, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 660, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 673, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/__init__.py", line 846, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'blessed==1.9.5' distribution was not found and is required by awsebcli
I haven't been able to find anything about this error, except for a
[question](http://stackoverflow.com/questions/33665282/how-can-i-configure-
blessed-1-9-5-with-python-in-ubuntu) about how to deal with a similar problem
on ubuntu (I'm on a Mac) that has gone unanswered for a month.
Does anyone have any ideas?
Answer: This is most likely caused by the fact that the `eb` script is using Apple's
Python interpreter instead of the one you installed yourself.
There are two workarounds:
## 1. Run the EB CLI in a virtual environment
1. Create a virtual environment for the EB CLI by running `virtualenv ~/eb_cli_env`.
2. Run `source ~/eb_cli_env/bin/activate` to activate the created virtual environment.
3. Run `pip install awsebcli`.
After that, you should be able to use the `eb` command just fine. You will
have to run `source ~/eb_cli_env/bin/activate` every time before you can use
the EB CLI.
**--OR--**
## 2. Edit the shebang line in the `eb` script
1. Run `vim /usr/local/bin/eb`.
2. Change the first line from `#!/usr/bin/python` to `#!/usr/bin/env python`.
This will ensure the `eb` command works globally without using a virtual
environment, however it is very likely that if you upgrade the `awsebcli`
package you will have to edit the shebang line again.
|
Trying to timeit my sqrt function in Python
Question: So I've tried to code a simple square root function. I did, and I wanted to
compare it with Python's original one. Here's my code:
from math import sqrt
import timeit
def sqrt2(number):
i=1
while i**2<number:
i+=1
left=float(i-1)
right=float(i)
temp = left+(right-left)/2
while str(temp**2)!=str(number):
if temp**2>number:
right=temp
temp = left+(right-left)/2
elif temp**2<number:
left=temp
temp = left+(right-left)/2
else:
return temp
return temp
print timeit.timeit("sqrt2(12)", number=10000, setup='from __main__ import sqrt2')
print timeit.timeit("sqrt(12)", number=10000, setup='from math import sqrt')
It simply takes the number and notices when an integer^2 is lower and when
(integer+1)^2 is higher than our number, then divides the gap between those 2
by 2, and tries the middle number, etc. It's OK for me, but when I try to
timeit.timeit it, it just stops; the cursor in the console shows that it's
still working, but nothing happens.
Answer: As mentioned in the comments, the issue is that you get into an infinite loop
here: `while str(temp**2)!=str(number):`
Your problem is that `temp**2` is a `float`, while `number` is an `int`. Since
you compare their string representations, the first one will be, for example,
`12.0`, and the second one, `12`.
You can either fix the condition in your `while` loop (using `str()` seems
very odd indeed), or pass a `float` to your function.
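For example, replacing the string comparison with a small numeric tolerance (the tolerance value here is an arbitrary choice):
while abs(temp**2 - number) > 1e-9: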
And by the way, your implementation is very slow. A quick run with a float
returns:
Yours: `0.461127996445`
python's: `0.000571012496948`
|
pyodbc error on OS X machine?
Question: I'm trying to access the remote MS-SQL database from my OS X machine, with
the help of pyodbc and python. When I try to execute, it shows an error like
this:
Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')
The code I've written is:
import pyodbc
def createdatabase():
driver = 'SQL Server'
server = '00000000'
db1 = 'Vodafone'
tcon = 'yes'
uname = 'user'
pword = 'pwd'
cnxn = pyodbc.connect(driver='{SQL Server}', host=server,database=db1,trusted_connection=tcon, user=uname, password=pword)
cursor = cnxn.cursor()
Answer: This may not be your only problem, but the pyodbc.connect() string needs to be
formed with semicolons like this:
DRIVER={FreeTDS};SERVER=yoursqlserver.com;PORT=1433;DATABASE=yourdb;UID=youruser;PWD=yourpass;TDS_Version=7.2;
You'll also have to read up on setting up FreeTDS and unixODBC from a Mac. If
you're specifying SQL Server instead of FreeTDS, I'm guessing you haven't set
up your configuration in freetds.conf, odbc.ini and odbcinst.ini, which are
required for connecting to SQL Server from non-Windows machines.
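In code, the connection string is passed as a single argument; a sketch with placeholder values (FreeTDS and unixODBC must already be configured):
import pyodbc
conn_str = (
    "DRIVER={FreeTDS};SERVER=yoursqlserver.com;PORT=1433;"
    "DATABASE=yourdb;UID=youruser;PWD=yourpass;TDS_Version=7.2;"
)
cnxn = pyodbc.connect(conn_str)
cursor = cnxn.cursor()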
|
How to save a 3D array in python and Import it in mathematica
Question: I want to save a 3D binary array to a .txt or .csv file in Python and
import it into Mathematica.
I googled and found a lot of answers; I tried this:
import numpy as np
import cPickle
a=np.zeros((2,3,4))
a[0,0,0]=10
cPickle.dump( a, open( "matrix.txt", "wb" ) )
In Mathematica I used Import["matrix.txt","Data"] and I did not get
what I expected:
IN[]:Import["matrix.txt", "Data"]
Out[]:{{"cnumpy.core.multiarray"}, {"_reconstruct"}, {"p1"}, {"(cnumpy"}, \
{"ndarray"}, {"p2"}, {"(I0"}, {"tS'b'"}, {"tRp3"}, {"(I1"}, {"(I2"}, \
{"I3"}, {"I4"}, {"tcnumpy"}, {"dtype"}, {"p4"}, {"(S'f8'"}, {"I0"}, \
{"I1"}, {"tRp5"}, {"(I3"}, {"S'<'"}, {"NNNI-1"}, {"I-1"}, {"I0"}, \
{"tbI00"}, \
{"S'\\x00\\x00\\x00\\x00\\x00\\x00$@\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\
x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'"}, {"tb."}}
Answer: The trick is to flatten the array into a 2D array:
import numpy as np
a=np.zeros((2,3,4))
a[0,0,0]=10
b=a.reshape(1,24)
np.savetxt("matrix.CSV",b,delimiter=',')
and then convert it back to 3D in Mathematica after importing; we can use:
file=Import["matrix.CSV","Data"]
matrix=ArrayReshape[file, {2, 3, 4}]
|
HTML templates using Jinja2
Question: I need help with my main.py file; I can't manage to make my webapp post
comments and save them with the user name.
This is my main code, main.py:
import os
import jinja2
import webapp2
from google.appengine.ext import ndb
template_dir = os.path.join(os.path.dirname(__file__), 'templates')
jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir), autoescape = True)
DEFAULT_WALL = 'Public'
def wall_key(wall_name = DEFAULT_WALL):
"""This will create a datastore key"""
return ndb.Key('Wall', wall_name)
class CommentContainer(ndb.Model):
"""This function contains the name, the content and date of the comments"""
name = ndb.StringProperty(indexed=False)
content = ndb.StringProperty(indexed=False)
date = ndb.DateTimeProperty(auto_now_add = True)
class Handler(webapp2.RequestHandler):
"""This handler process the request, manipulates data and define a response to be returned to the client"""
def write(self, *a, **kw):
self.response.out.write(*a, **kw)
def render_str(self, template, **params):
t = jinja_env.get_template(template)
return t.render(params)
def render(self, template, **kw):
self.write(self.render_str(template, **kw))
class MainPage(Handler):
def get(self):
wall_name = self.request.get('wall_name', DEFAULT_WALL)
if wall_name == DEFAULT_WALL.lower(): wall_name = DEFAULT_WALL
comments_query = CommentContainer.query(ancestor = wall_key(wall_name)).order(-CommentContainer.date)
#comments = comments_query.fetch()
self.render("content.html")
class Post(Handler):
def post(self):
wall_name = self.request.get('wall_name',DEFAULT_WALL)
comment_container = CommentContainer(parent = wall_key(wall_name))
comment_container.name = self.request.get('name')
comment_container.content = self.request.get('content')
if comment_container.content == '':
self.redirect("/error")
else:
comment_container.put()
self.redirect('/#comment_section')
class Error_Page(Handler):
"""This controls empty comments"""
def get(self):
self.render("error.html")
app = webapp2.WSGIApplication([
("/", MainPage),
("/comments", Post),
("/error", Error_Page)
],
debug = True)
This is my template content.html:
{% extends "index.html" %}
<html>
{% block content %}
<div class="container">
<div class="row header">
<div class="col-md-6">
<img class="title-photo" src="images/ed.png" alt="Photo of Ed">
</div>
<div class="col-md-6 text-right">
<h1>Eduardo González Robles.</h1>
<h2>Portfolio.</h2>
</div>
</div>
<div class="row">
<div class="col-md-12">
<hr>
</div>
</div>
<div class="row text-center">
<div class="col-md-4">
<h3 class="text-body"><u>Block vs Inline</u>
</h3>
<p class="p-text"><span>Block Elements</span> are those who take the complete line and full width of the page creating a "box".<br>
<span>Inline Elements</span> are those who doesn´t affect the layout, just the element inside the tag.
</p>
</div>
<div class="col-md-4">
<h3 class="text-body"><u>Selectors</u></h3>
<p class="p-text"><span>Class selectors</span> are used to target elements with specific attributes<br>On the other hand, <span>id selectors</span> are just for unique elements.</p>
</div>
<div class="col-md-4">
<h3 class="text-body"><u>Responsive Layout</u></h3>
<p class="p-text"><span>Responsive Layout</span> is the combination of html and css design to make the website look good in terms of enlargement, shrink and width in any screen (<em>computers, laptops, netbooks, tablets, phones</em>). </p>
</div>
</div>
<div class="row text-center">
<div class="col-md-6">
<article>
<h3><u>The Importance of Avoiding Repetition</u></h3>
<p class="p-text">For a better reading and understanding of the code, it is important not to repeat elements and group them with selectors. It makes it <span>easier to read</span> and understand, the code does not get longer and with that it´s easier to <span>find errors</span> in the syntax.</p>
</article>
</div>2
<div class="col-md-6">
<h3><u>Tree like Structure</u></h3>
<p class="p-text">All the elements in html have a <span><em>parent</em></span> element. Elements that are inside another element are called <span><em>child</em></span><br>For example, the tag <u>html</u> is the parent of all the structure, body and head are the children of html.</p>
</div>
</div>
<div class="row">
<div class="col-md-12 text-center">
<img class= "tree-image" src="http://www.w3schools.com/xml/nodetree.gif" alt="Tree Like Structure"><br>
<q class="img-quote">http://www.w3schools.com/html/html_quotation_elements.asp</q>
</div>
</div>
<div class="row">
<div class="col-md-12">
<hr>
</div>
</div>
<div class="row">
<div class="col-md-12 text-center">
<h2><span class="python"><u>Python</u></span></h2>
<p class="p-text">The reason why we cannot write in a common language in Python is beacuse computers are "stupid" and they have to take the orders from a language they understand so they can follow the exact command and execute the function correct.</p>
</div>
</div>
<div class ="row text-center">
<div class = "col-md-4">
<h3><span class="python"><u>Procedual Thinking</u></span></h3>
<p class="p-text"> It refers to be capable of understand and give clear commands to a computer so this one can understand and execute them.</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>Abstract Thinking</u></span></h3>
<p class="p-text"> It means you have the abstraction ability to avoid the unnecessary repetitions in the code.
</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>Technological Empathy</u></span></h3>
<p class="p-text">This term makes reference that you understand what is a computers, it's functions. A computer is a tool that we use to write programming languages.</p>
</div>
</div>
<div class="row text-center">
<div class="col-md-6">
<h3><span class="python"><u>System Thinking</u></span></h3>
<p class="p-text">This refers when you break a big problem into smaller problems, it makes easier to understand the big picture. Solving problems is a part of being able to system thinking. The most difficult part of solving problems is to know where to start. The very first thing you have to do is to try to understand the problem. All computer problems have two things in common, they have inputs and desired outputs. Udacity's Pythonist Guide goes like this:</p>
<ol class="ol-text" start = "0">
<li><span>Don't Panic</span></li>
<li>What are the inputs?</li>
<li>What are the outputs?</li>
<li>Solve the problem</li>
<li>Simple mechanical solution</li>
<li>Develop incrementally and test as you go</li>
</ol>
</div>
<div class="col-md-6">
<h3><span class="python"><u>Debbugging</u></span></h3>
<p class="p-text"> Action of identifying the causes that produces errors in the code. There are many strategies to identify the errors, here are 5:<br></p>
<ol class="ol-text">
<li>Analize the error messages when the programs crash.</li>
<li>Work with the example code and compare with yours.</li>
<li>Make sure that the example code works.</li>
<li>Check with the print statement that your code is working properly. Sometimes it doesn't crash but that doesn't mean that is working correct, so you have to make sure it's doing what you are commanding.</li>
<li>Save and compare your old code versions so you can back and analize them if needed.</li>
</ol>
</div>
</div>
<div class ="row text-center">
<div class="col-md-4">
<h3><span class="python"><u>Variables</u></span></h3>
<p class="p-text">Variable are used as a data storage that saves information like integers, decimals and characters. Variables give names to the values and you can change those values for new one if desire. Variables work because they increase the code reading when we use names that makes sense to humans.</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>Strings</u></span></h3>
<p class="p-text">The strings are one of the most common things in python, you have to put the word between quotes. If you start just with one quote the you have to finish with one too, same thing with 2 or 3 quotes.</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>Procedures/Functions</u></span></h3>
<p class="p-text">Are "commands" that takes data as input and transform them into an output. They help programmers to avoid repetition because once you define the procedure, you can use it forever and you don't have to write it again.
</div>
</div>
<div class="row text-center">
<div class="col-md-12">
<h3><span class="python"><u>Comparatives</u></span></h3>
<p class="p-text">They are usually Booleans so they can only mean True or False. They are used to check real vs non-real parameters.You can use them in numbers and strings. These signs are used for the comparatives: <,>,<=,>=,==,!=<br><br>If Statements are other comparatives that are involved with the conditional. The syntax in Python for an if statement is like this:<br>if [condition]:<br>[indentedStatementBlock]<br><br>There is another comparative form named IF-ELSE statement, it has two indented blocks, one for if and one for else. If activates itself when the condition is True, the else block it's activated when the condition is False. <br> if condition:<br>indentedStatementBlockForTrueCondition<br>else:<br>indentedStatementBlockForFalseCondition]<br><br>The <strong>"or"</strong> value analize one of the expressions, if the result is true, then the other expression is not evaluated.</p></p>
</div>
</div>
<div class="row text-center">
<div class="col-md-4">
<h3><span class="python"><u>Lists</u></span></h3>
<p class="p-text">Lists can contain many objects in certain order; you can access to that list and add, modify, remove objects. The difference between <strong>Strings</strong> and <strong>Lists</strong> is that lists can contain strings and supports mutation. Lists start with a bracket and end with a bracket [LIST].</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>Mutation</u></span></h3>
<p class="p-text">Mutation allows us to modify the list after we created it. There's another term named <u>Aliasing</u> and it is when two different names refer to the same object.</p>
</div>
<div class="col-md-4">
<h3><span class="python"><u>List Operations</u></span></h3>
<p class= "p-text">These are some examples of list operations</p>
<ol class="ol-text">
<li>Append.- adds new elements at the end of the list. [list].append([element]).</li>
<li>Plus (+).- Acts similar to concatenation of strings. This one produces new lists. [list1] + [list2] (result) [list1list2].</li>
<li>Length.- Indicates how many elements the list contain. <br>len ("Eduardo") = 7.</li>
</ol>
</div>
</div>
<div class= "row text-center">
<div class="col-md-6">
<h3><span class="python"><u>While Loop</u></span></h3>
<p class="p-text">The While Loops are used to repeat codes. It has a test expression followed by a block, when the expression is true then the block executes, when the expression is false, the the code skips the True block to go to the False one. On the other hand, loops can be infinite if the expression does not correspond to False.<br><u>while</u>[test expression]:<br>[True block]<br>if [test expression]:<br>[False block]</p>
</div>
<div class="col-md-6">
<h3><span class="python"><u>For Loop</u></span></h3>
<p class="p-text">For loops are easier to use because you need to write less code than the while loops. They are written like this: <br><u>for</u> [name] <u>in</u> [list]:<br>[block]<br>The Loop goes through each element of the list and evaluates the block. In other words, for loop is used to repeat an action/sentence certain number of times within a range. We can use For Loops on list to, syntax looks like this:<br>for [name] in [list]:<br>[block]
</div>
</div>
<div class="row text-center">
<div class="col-md-4">
<h3><span class="python"><u>Find</u></span></h3>
<p class ="p-text"> Find is a method not an operand because it's a process created by Python. It helps you to find strings in strings. Syntax:<br>str.find(str, beg=0 end=len(string))</p>
</div>
</div>
<div class="row">
<div class="col-md-12">
<hr>
</div>
</div>
<div class="row text-center">
<div class="col-md-12">
<h2><span class="stage-3"><u>Stage 3: Create a Movie Website</u></span></h2>
</div>
</div>
<div class = "row text-center">
<div class = "col-md-4">
<h3><span class="stage-3"><u>import System</u></span></h3>
<p class = "p-text"> The import System in Python is used to bring codes to a desired file, you can import time, webbrowser, turtle, even packages outside the Pyhton Standard Library like Twilio or frameworks like fresh tomatoes</p>
</div>
<div class = "col-md-4">
<h3><span class="stage-3"><u>Built-in Functions</u></span></h3>
<p class = "p-text"> In the stage 3 of IPND we saw different types of methods like de open method that opens files inside files or websites. <br>The rename method renames files and/or directories src indicates de name of the actual file or directory, dst indicates the new name of the file or directory. <br> The translate method allows you to put 2 arguments.</p>
</div>
<div class = "col-md-4">
<h3><span class="stage-3"><u>Python Class-Intances/Objects</u></span></h3>
<p class = "p-text"> Classes in Python are like building blueprints, they contain information that creates instances. Instances or Objects are examples of a Class. Another way to describe a Class would be as a box that contains sorted files that access to different files, once a Class enter to it's files, it let us use all the functions that those files contain.</p>
</div>
</div>
<div class = "row text-center">
<div class = "col-md-3">
<h3><span class="stage-3"><u>Constructor</u></span></h3>
<p class = "p-text"> When we create instances, we invoke the constructor method init inside the class, it is here where all the data asociated with the instance starts.</p>
</div>
<div class = "col-md-3">
<h3><span class="stage-3"><u>self</u></span></h3>
<p class = "p-text"> The constructor uses the keyword "self" to access to the instance attribute.</p>
</div>
<div class = "col-md-3">
<h3><span class="stage-3"><u> Instance Variable</u></span></h3>
<p class = "p-text"> All the variables asociated with an specific instance are called Instance Variables, they are unique to the object and you access to them using the kew word slef inside the class and the instance name outside the class.</p>
</div>
<div class = "col-md-3">
<h3><span class="stage-3"><u> Instance Method</u></span></h3>
<p class = "p-text"> The instance method are all the functions inside the class asociated with the with the instances and that have self as their first argument.</p>
</div>
</div>
<div class="row footer">
<div class="col-md-6 text-left">
<p>Walnut Creek Ca.
94596<br>
USA.</p>
</div>
<div class="col-md-6 text-right">
<a href="[email protected]">[email protected]</a>
</div>
</div>
<div class="lesson">
<div class="concept" id="comment_section">
<div class="concept-title">Write your comments below thanks!!
</div>
<br>
<form align="center" action="/comments" method="post">
<label>Your name:
<div>
<textarea name="name" rows="1" cols="50"></textarea>
</div>
</label>
<label>Your comment:
<div>
<textarea name="content" rows="5" cols="100"></textarea>
</div>
</label>
<div>
<input type="submit" value="Post Your Comment">
</div>
</form>
<div class="part-concept">
<div class="part-concept-title">
Previous Comments:
</div>
<br>
{% for comment in comments %}
{% if comment.content != '' %}
{% if comment.name != '' %}
<b>{{ comment.name }}</b> wrote:
{% else %}
<b>Anonymous</b> wrote:
{% endif %}
<blockquote>{{ comment.content }}</blockquote>
{% endif %}
<div>{{ error }}</div>
{% endfor %}
</div>
</div>
</div>
{% endblock %}
</html>
Here is my template error.html:
{% extends "index.html" %}
<html>
{% block content %}
<div>
<span>Please, add a comment.</span>
<button onclick="goBack()">Return to comments</button>
<script>
function goBack() {
window.history.back();
}
</script>
</div>
{% endblock %}
</html>
And finally my index.html:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Portfolio GonzandRobles</title>
<link href='http://fonts.googleapis.com/css?family=Amatic+SC' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<link rel="stylesheet" href="css/main.css">
</head>
<body>
{% block content %}
{% endblock %}
</body>
</html>
I'm also having problems deploying my app with Google App Engine; it says that
my app name doesn't exist.
Thank you very much.
Answer: You need to pass the comments query into your template like this:
class MainPage(Handler):
def get(self):
wall_name = self.request.get('wall_name', DEFAULT_WALL)
if wall_name == DEFAULT_WALL.lower(): wall_name = DEFAULT_WALL
comments_query = CommentContainer.query(ancestor = wall_key(wall_name)).order(-CommentContainer.date)
#comments = comments_query.fetch()
self.render("content.html", comments = comments_query)
This makes `comments` available to iterate through as you are in your
template.
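If you would rather hand the template a concrete list instead of the query object, fetch first (the commented-out line in your code was almost there):
comments = comments_query.fetch()
self.render("content.html", comments = comments)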
As for your deployment problem, open app.yaml and look at the `application:`
field of your project. It needs to be exactly the same as the one you created
online (`Project ID`) in the Google Developer Console. I'm going to guess that
the names don't match, or you didn't create the app online yet.
|
HTML Scraping with Python, document_fromstring is empty
Question: I am trying to extract some data from a website using python. I found a
[document](http://docs.python-guide.org/en/latest/scenarios/scrape/) that
exactly fits my problem.
But when I run the provided code
from lxml import html
import requests
page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
tree = html.fromstring(page.content)
#This will create a list of buyers:
buyers = tree.xpath('//div[@title="buyer-name"]/text()')
#This will create a list of prices
prices = tree.xpath('//span[@class="item-price"]/text()')
print 'Buyers: ', buyers
print 'Prices: ', prices
I get an error:
File "C:\Python27\lib\site-packages\lxml\html\__init__.py", line 617, in document_fromstring
"Document is empty")
ParserError: Document is empty
Anyone an idea what the problem could be?
Answer: Your script works fine for me. I got this output:
Buyers: ['Carson Busses', 'Earl E. Byrd', 'Patty Cakes', 'Derri Anne Connecticut', 'Moe Dess', 'Leda Doggslife', 'Dan Druff', 'Al Fresco', 'Ido Hoe', 'Howie Kisses', 'Len Lease', 'Phil Meup', 'Ira Pent', 'Ben D. Rules', 'Ave Sectomy', 'Gary Shattire', 'Bobbi Soks', 'Sheila Takya', 'Rose Tattoo', 'Moe Tell']
Prices: ['$29.95', '$8.37', '$15.26', '$19.25', '$19.25', '$13.99', '$31.57', '$8.49', '$14.47', '$15.86', '$11.11', '$15.98', '$16.27', '$7.50', '$50.85', '$14.26', '$5.68', '$15.00', '$114.07', '$10.09']
I recommend you try the [latest lxml
package](https://pypi.python.org/pypi/lxml/3.5.0) and check that the [desired
webpage](http://econpy.pythonanywhere.com/ex/001.html) is available to you at
this time.
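If it still fails for you, it is worth confirming that the response actually contains a document before handing it to lxml:
import requests
page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
print(page.status_code, len(page.content)) # expect 200 and a non-zero length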
|
2to3 conversion for strings
Question: I have written a small Qt-based text editor for Python code that I embed in my
application. Now I am going to switch to Python 3 and I want to help the users
of my app with converting their code. I know that `2to3` can do most of the
conversion for files. However, I need an on-the-fly conversion, without
touching the files on disk. I mean something like:
py3_code_str = convert2to3(py2_code_str)
Does anybody know how it can be achieved with `2to3` or `lib2to3`?
Answer: Unfortunately, `lib2to3` doesn't have a stable interface, meaning it might
change drastically.
Though, in case you don't really care about that, it might be possible (with
minor changes) to use `from lib2to3.refactor import RefactoringTool` and
continue on with its `refactor_string` method.
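A minimal sketch of that approach (these names exist in CPython's lib2to3 today, but given the unstable interface they may change):
from lib2to3.refactor import RefactoringTool, get_fixers_from_package
fixers = get_fixers_from_package('lib2to3.fixes')
tool = RefactoringTool(fixers)
def convert2to3(py2_code_str):
    # refactor_string expects the source to end with a newline
    tree = tool.refactor_string(py2_code_str + '\n', '<string>')
    return str(tree)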
|
How Do I Send Emails Using Python 3.4 With GMail?
Question: I've been trying bits of code such as:
import smtplib
def sendemail(from_addr, to_addr_list, cc_addr_list,
subject, message,
login, password,
smtpserver='smtp.gmail.com:587'):
header = 'From: %s\n' % from_addr
header += 'To: %s\n' % ','.join(to_addr_list)
header += 'Cc: %s\n' % ','.join(cc_addr_list)
header += 'Subject: %s\n\n' % subject
message = header + message
server = smtplib.SMTP(smtpserver)
server.starttls()
server.login(login,password)
problems = server.sendmail(from_addr, to_addr_list, message)
server.quit()
sendemail(from_addr = '[email protected]',
to_addr_list = ['[email protected]'],
cc_addr_list = ['[email protected]'],
subject = 'Howdy',
message = 'Hello!',
login = '[email protected]',
password = 'XXXX')
Except with an actual password and details.
It brings this error:
Traceback (most recent call last):
File "User/Python/Email.py", line 27, in <module>
password = 'lollol69')
File "User/Python/Email.py", line 15, in sendemail
server = smtplib.SMTP(smtpserver)
File "C:\Python34\lib\smtplib.py", line 242, in __init__
(code, msg) = self.connect(host, port)
File "C:\Python34\lib\smtplib.py", line 321, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "C:\Python34\lib\smtplib.py", line 292, in _get_socket
self.source_address)
File "C:\Python34\lib\socket.py", line 509, in create_connection
raise err
File "C:\Python34\lib\socket.py", line 500, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Ideally, I could know what's wrong with the code and how I can fix this. If
it's totally wrong then some working code with Gmail would be greatly
appreciated.
Thanks!
Answer: You have to split the smpt server address and the port and use them as
separate parameters for `smtplib.SMTP('smtp.gmail.com', 587)` See
<https://docs.python.org/3.4/library/smtplib.html>:
import smtplib
def sendemail(from_addr, to_addr_list, cc_addr_list,
subject, message,
login, password,
smtpserver='smtp.gmail.com', smtpport=587): # split smtpserver and -port
header = 'From: %s\n' % from_addr
header += 'To: %s\n' % ','.join(to_addr_list)
header += 'Cc: %s\n' % ','.join(cc_addr_list)
header += 'Subject: %s\n\n' % subject
message = header + message
server = smtplib.SMTP(smtpserver, smtpport) # use both smtpserver and -port
server.starttls()
server.login(login,password)
problems = server.sendmail(from_addr, to_addr_list, message)
server.quit()
sendemail(from_addr = '[email protected]',
to_addr_list = ['[email protected]'],
cc_addr_list = ['[email protected]'],
subject = 'Howdy',
message = 'Hello!',
login = '[email protected]',
password = 'XXXX')
By the way: change your gmail password as it was posted in the stacktrace
|
Linking a FuncAnimation() with a Button in Python 2.7
Question: I am trying to start a graph (which gets its data from a serial port) on
a button click. I tried the following code but was not successful. I am
new to python; please help guide me where I am going wrong.
I am using Python 2.7 with Tkinter.
Thanks in advance
import serial
import Tkinter as tk
import ttk
import matplotlib
matplotlib.use('TkAgg')
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
import tkFileDialog
x = []
adc_data = []
f = plt.Figure(figsize = (9,5), dpi = 100)
ax = f.add_subplot(111)
def select(self):
self.BaudRate = self.Baud.get()
self.COMPort = self.COM.get()
self.ser = serial.Serial(port = self.COMPort, baudrate = self.BaudRate,bytesize = serial.EIGHTBITS, stopbits = serial.STOPBITS_ONE, parity = serial.PARITY_NONE)
self.ser.close()
self.ser.open()
self.ser.flushInput();
self.ser.flushOutput();
def quit_(self):
self.ser.close()
def animate_(i):
self.ser.write(str(chr(250)))
data = self.ser.read(1)
data1 = self.ser.read(1)
LSB = ord(data)
MSB = ord(data1)
x.append(LSB)
adc_data.append(MSB) #adding data to list
plt.pause(.00001)
ax.clear()
ax.plot(x,adc_data)
def animate_button(self):
ani = animation.FuncAnimation(f, animate_,interval=1000)
class ADC_Ref_Data(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
tk.Tk.wm_geometry(self, '900x600+200+150')
tk.Tk.wm_title(self, "ADC Reference")
container = tk.Frame(self)
container.pack(side="top", fill="both", expand = True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
frame = StartPage(container, self)
self.frames[StartPage] = frame
frame.grid(row=0, column=0, sticky = "nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
self.button = ttk.Button(self, text="Stop", state = 'disable',
command=lambda: quit_(self))
self.button.place(relx = 0.97, rely = 0.95, height = 30 , width = 80, anchor = 'se')
button2 = ttk.Button(self, text="Select",
command = lambda:select(self))
button2.place(relx = 0.97, rely = 0.016, height = 30 , width = 80, anchor = 'ne')
button4 = ttk.Button(self, text="Start",
command=lambda: animate_button(self))
button4.place(relx = 0.03, rely = 0.95, height = 30 , width = 80 , anchor = 'sw')
canvas = FigureCanvasTkAgg(f, self)
canvas.show()
canvas.get_tk_widget().place(relx = 0.5, rely = 0.48, relwidth = 1, relheight = 0.8, anchor = 'center' )
app = ADC_Ref_Data()
app.mainloop()
Answer: I succeeded in getting my plot to start on button click. To get my code
working I just needed to add a simple line, as follows:
def animate_button(self):
ani = animation.FuncAnimation(f, animate_,frames = 10, interval=1000)
f.canvas.show()
|
How to get a preview from children in a notebook like in sublime or terminology
Question: For example, I have a window with a notebook widget, and in this notebook I have
two tabs which contain one vte widget each. The idea is to get a preview of
each vte widget (you know, like in Terminology). Here is the simple code I have
so far:
import gi
import os
gi.require_version('Gtk', '3.0')
gi.require_version('Vte', '2.91')
from gi.repository import Gtk
from gi.repository import Vte
from gi.repository import GLib
from gi.repository import Gdk
from gi.repository import GdkPixbuf
win = Gtk.Window()
win.connect('delete-event', Gtk.main_quit)
notebook = Gtk.Notebook()
win.add(notebook)
directory = os.environ['HOME']
env=[]
term1 = Vte.Terminal()
term1.spawn_sync( Vte.PtyFlags.DEFAULT, directory, ["/bin/bash"],
env, GLib.SpawnFlags.CHILD_INHERITS_STDIN | GLib.SpawnFlags.SEARCH_PATH,
None, None, )
term1.show()
term2 = Vte.Terminal()
term2.spawn_sync( Vte.PtyFlags.DEFAULT, directory, ["/bin/bash"],
env, GLib.SpawnFlags.CHILD_INHERITS_STDIN | GLib.SpawnFlags.SEARCH_PATH,
None, None, )
term2.show()
notebook.append_page(term1)
notebook.append_page(term2)
notebook.show()
notebook.next_page()
def manage_key_press_events_cb(widget, event):
keyval = event.keyval
if ((event.state & (Gdk.ModifierType.CONTROL_MASK|Gdk.ModifierType.SHIFT_MASK)) == (Gdk.ModifierType.CONTROL_MASK|Gdk.ModifierType.SHIFT_MASK)):
if keyval == Gdk.KEY_O:
for i in range(notebook.get_n_pages()):
child = notebook.get_nth_page(i)
child.get_window().show()
width = child.get_allocation().width
height = child.get_allocation().height
x = child.translate_coordinates(notebook,0,0)[0]
y = child.translate_coordinates(notebook,0,0)[1]
p = Gdk.pixbuf_get_from_window(child.get_window(), 0, 0, width, height)
p.savev(str(i) + ".png", "png", (), ())
return True
else:
return False
else:
return False
win.show_all()
win.connect('key-press-event', manage_key_press_events_cb)
Gtk.main()
In one tab I launch htop and in another I launch glances (the applications
don't matter, they just help me differentiate the previews).
The problem I cannot solve is that each preview represents the currently
focused tab. Whatever I do, I am not able to get a preview of the tabs that
are not the current one.
FYI, I use python in order to give you a simple working example, but initially I
had this problem with ruby from the ruby-gnome2 project. So I guess this is
not related to python or ruby.
Answer: The GtkNotebook implementation doesn't allow you to show/hide/draw its children
directly. In order to get a pixbuf of a child, it must be the currently
selected one.
So the idea is to cycle through all the children of the notebook and use the
"switch-page" event in order to generate the pixbuf: see here:
[http://www.gtkforums.com/viewtopic.php?f=3&t=179132](http://www.gtkforums.com/viewtopic.php?f=3&t=179132)
and here
<https://github.com/cedlemo/germinal>
<https://github.com/cedlemo/germinal/blob/master/bin/terminal_chooser.rb>
<https://github.com/cedlemo/germinal/blob/master/bin/notebook.rb>
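A rough sketch of the idea in the same PyGObject style as the question (in practice you may need to defer the snapshot, e.g. with GLib.idle_add, so that the newly shown page has actually been drawn):
def on_switch_page(notebook, page, page_num):
    alloc = page.get_allocation()
    pixbuf = Gdk.pixbuf_get_from_window(page.get_window(), 0, 0,
                                        alloc.width, alloc.height)
    pixbuf.savev(str(page_num) + ".png", "png", (), ())
notebook.connect("switch-page", on_switch_page)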
|
Tkinter Python GUI Issues
Question: Below is my code. It runs, but I'm not sure how to get the "Run Text" button to
prompt me to open a text file in a new window; currently a new window appears with
a "Quit" button and nothing else.
import tkFileDialog
import Tkinter as tk
from Tkinter import *
import logging
logging.basicConfig(filename= "log_file.txt", filemode = "w", level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%d/%m/%Y %I:%M:%S %p')
logging.warning('is when this event was logged.')
class HomeScreen:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.button1 = tk.Button(self.frame, text = 'Run Text', width = 25, command = self.new_window)
self.button1.pack()
self.frame.pack()
def openFile(self):
openfile = tkFileDialog.askopenfile().read()
text= open(openfile, 'r').read()
T.insert(1.0, openfile)
T = Text(height=10, width=100)
T.pack()
T.insert(END, "Select file to input")
B = Button(root, text="Open", command=openFile)
B.pack()
mainloop()
return
def new_window(self):
self.newWindow = tk.Toplevel(self.master)
self.app = Quit(self.newWindow)
class Quit:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
def main():
root = tk.Tk()
app = HomeScreen(root)
app = Quit(root)
root.mainloop()
if __name__ == '__main__':
main()
I'm sure my code is very messy as I'm just a beginner, some parts may not be
needed, any advice would be greatly appreciated.
Answer: If you want the `"Run Text"` button to open a file dialog, change the called method:
self.button1 = tk.Button(self.frame, text = 'Run Text', width = 25, command = self.openFile)
|
Python: how to get this elegantly?
Question: I'm new to Python and I wrote a function:
def f(x, y, z):
ret = []
for i in range(x):
for j in range(y):
for k in range(z):
ret.append((i, j, k))
return ret
print f(2, 3, 4)
Output:
[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 1, 3), (0, 2, 0), (0, 2, 1), (0, 2, 2), (0, 2, 3), (1, 0, 0), (1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 0), (1, 2, 1), (1, 2, 2), (1, 2, 3)]
But I'm not satisfied with that because I think there must be a shorter
implementation.
So could anyone give me some hint about that?
Answer: You can use
[`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product)
because that is essentially what you are after, the [Cartesian
product](https://en.wikipedia.org/wiki/Cartesian_product)
>>> from itertools import product
>>> list(product(range(2), range(3), range(4)))
[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 1, 3), (0, 2, 0), (0, 2, 1), (0, 2, 2), (0, 2, 3), (1, 0, 0), (1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 0), (1, 2, 1), (1, 2, 2), (1, 2, 3)]
So replacing your existing function you could do
def f(x, y, z):
return list(product(range(x), range(y), range(z)))
To reduce the number of times you have to type out `range`, you could accept a
single list argument then use a generator expression such as
def f(l):
return list(product(*(range(i) for i in l)))
So then you could call it as
>>> f([2,3,4])
[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 1, 3), (0, 2, 0), (0, 2, 1), (0, 2, 2), (0, 2, 3), (1, 0, 0), (1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 0), (1, 2, 1), (1, 2, 2), (1, 2, 3)]
|
In python, when using multiprocessing Pool, the main process cannot finish in some situations
Question: In the first situation, the main process cannot finish:
from multiprocessing import Pool, Queue
queue = Queue()
def handle(slogan):
for i in xrange(100000):
queue.put(slogan)
print 'put done'
def main():
pools = Pool(2)
for i in xrange(4):
pools.apply_async(handle, args=('test', ))
print 'waiting all done...'
pools.close()
pools.join()
print 'all done...'
if __name__ == '__main__':
main()
The result of this code looks like this:
waiting all done...
put done
put done
put done
put done
I have waited for over 1 hour. I cannot understand it. I thought the
multiprocessing module had some bug or something. So I changed this code. This
time I do not use the Queue of multiprocessing; I just compute some
numbers. The code is as follows:
from multiprocessing import Pool
def handle(slogan):
tmp = 0
for i in xrange(100000):
tmp += i
print 'put done'
def main():
pools = Pool(2)
for i in xrange(4):
pools.apply_async(handle, args=('test', ))
print 'waiting all done...'
pools.close()
pools.join()
print 'all done...'
if __name__ == '__main__':
main()
This code finished successfully, with the result:
waiting all done...
put done
put done
put done
put done
all done...
Is it just because I use Queue? I do not know why. Who can explain it for me?
Answer: You aren't capturing the result. You should capture the return values from
`apply_async()` and call `get()` on each one of them.
Also, try specifying a large `timeout` value in `join()` or `get()`. In some
versions of Python this is required to work around a bug.
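For example, keeping the structure of the question's `main()` (Python 2, as in the question):
results = [pools.apply_async(handle, args=('test', )) for i in xrange(4)]
pools.close()
for r in results:
    r.get(9999999)  # a large timeout; get() surfaces the result (or exception)
pools.join()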
See also: <http://stackoverflow.com/a/3571687/4323>
|
In Python's Seaborn, is there any way to do the opposite of `despine`?
Question: `seaborn` is a beautiful Python package that acts, for the most part, as an
additional layer on top of `matplotlib`. However, it changes, for instance,
things that would be `matplotlib` methods on a plot object to direct `seaborn`
functions.
* * *
`seaborn`'s `despine()` **removes** any spines (the outer edges of the plot)
from a plot. But I cannot do the opposite.
I cannot seem to recreate the spine in the standard way that I would / could
if I had used `matplotlib` entirely from the start. Is there a way to do that?
How would I?
* * *
Below is an example. Could I, for instance, add a spine on the bottom and the
left of the plot?
from sklearn import datasets
import pandas as pd
tmp = datasets.load_iris()
iris = pd.DataFrame(tmp.data, columns=tmp.feature_names)
iris['species'] = tmp.target_names[tmp.target]
iris.species = iris.species.astype('category')
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style('darkgrid')
    sns.boxplot(x='species', y='sepal length (cm)', data=iris)
plt.show()
[](http://i.stack.imgur.com/fK3vM.png)
Answer: Thanks for all the great comments! I knew some of what you wrote, but not that
**_both_** the `'axes.linewidth'` **_and_** `'axes.edgecolor'` needed to be
set.
I'm writing an answer here, since it is a compilation of a few comments.
* * *
That is, the following code generates the plot below:
sns.set_style('darkgrid', {'axes.linewidth': 2, 'axes.edgecolor':'black'})
    sns.boxplot(x='species', y='sepal length (cm)', data=iris)
plt.show()
[](http://i.stack.imgur.com/7C53N.png)
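An alternative sketch at the `matplotlib` level, assuming only the left and bottom spines are wanted: each axes object exposes its spines directly, so they can be switched back on after plotting.
    ax = sns.boxplot(x='species', y='sepal length (cm)', data=iris)
    for side in ('left', 'bottom'):
        # Re-enable the spine and make it visible against the dark grid.
        ax.spines[side].set_visible(True)
        ax.spines[side].set_linewidth(2)
        ax.spines[side].set_color('black')
    plt.show()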
|
Why does the OrderedDict keys view compare order-insensitive?
Question: Why does the `OrderedDict` keys view compare order-insensitive?
>>> from collections import OrderedDict
>>> xy = OrderedDict([('x', None), ('y', None)])
>>> yx = OrderedDict([('y', None), ('x', None)])
>>> xy == yx
False
>>> xy.keys() == yx.keys()
True
The OrderedDict keys view should arguably behave like an OrderedSet, but
instead it behaves the same as `dict.keys` (i.e. like a usual `set`).
Same "issue" in python2:
>>> xy.viewkeys() == yx.viewkeys()
True
They are different types, (`odict_keys` is a subclass of `dict_keys`)
>>> type(xy.keys())
odict_keys
>>> type({}.keys())
dict_keys
And there is already an [order-sensitive keys comparison
available](https://github.com/python/cpython/blob/master/Objects/odictobject.c#L794)
that they could have trivially used, but it's apparently only used as a [post-
check](https://github.com/python/cpython/blob/master/Objects/odictobject.c#L1604)
for the odict rich comparison.
Is this a design decision, or a bug? If it's a design decision, where could I
find a discussion of the justification?
Answer: Looks like `OrderedDict` delegates the implementation of the various view
objects to the common `dict` implementation; this remains the case even in
Python 3.5 where `OrderedDict` gained a C accelerated implementation ([it
delegates object construction to
`_PyDictView_New`](https://hg.python.org/cpython/file/e81189f75d04/Objects/odictobject.c#l2119)
and [provides no
override](https://hg.python.org/cpython/file/e81189f75d04/Objects/odictobject.c#l2176)
for the generic view's [rich comparison
function](https://hg.python.org/cpython/file/e81189f75d04/Objects/dictobject.c#l3332)).
Basically, `OrderedDict` views iterate with the same order their backing
`OrderedDict` would (because there is no cost to do so), but for `set`-like
operations, they act like `set`, using content equality, subset/superset
checks, etc.
This makes the choice to ignore ordering make sense to some extent; for some
`set` operations (e.g. `&`, `|`, `^`), the return value is a `set` without
order (because there is no `OrderedSet`, and even if there were, which
ordering do you use for something like `&` where the ordering may be different
in each view?), you'd get inconsistent behaviors if some of the `set`-like
operations were order sensitive and some weren't. And it would be even weirder
when two `OrderedDict` keys views were order sensitive, but comparing
`OrderedDict` views to `dict` views wasn't.
As I noted in the comments, you can get order sensitive `keys` comparison
pretty easily with:
from operator import eq
# Verify that keys are the same length and same set of values first for speed
# The `all` check then verifies that the known identical keys appear in the
# same order.
xy.keys() == yx.keys() and all(map(eq, xy, yx))
# If you expect equality to occur more often than not, you can save a little
# work in the "are equal" case in exchange for costing a little time in the
# "not even equal ignoring order case" by only checking length, not keys equality:
    len(xy) == len(yx) and all(map(eq, xy, yx))
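Wrapped up as a small helper (a sketch, assuming both arguments are mappings iterated in their own key order):
    from operator import eq
    def ordered_keys_equal(a, b):
        # Order-sensitive key comparison: same length, then pairwise-equal
        # keys in iteration order.
        return len(a) == len(b) and all(map(eq, a, b))
With the example above, `ordered_keys_equal(xy, yx)` returns `False` while `xy.keys() == yx.keys()` returns `True`.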
|
Ipython kernel dies unexpectedly reading large file
Question: I'm reading in a ~3Gb csv using pandas in an ipython notebook. While reading
the file, the notebook unexpectedly gives me an error message saying the
kernel appears to have died and will restart.
As per several "big data" workflows in python/pandas, I'm reading the files in
as follows:
import pandas as pd
    tp = pd.read_csv(file_name_cleaned, chunksize=100000, iterator=True, low_memory=False)
df = pd.concat(tp,ignore_index=True)
My workflow has involved some preprocessing to remove all but alphanumeric
characters and a few pieces of punctuation as follows:
with open(file_name,'r') as file1:
        with open(file_name_cleaned,'w') as file2:
for line in file1:
if len(line.split(sep_string)) == num_columns:
line = re.sub(r'[^A-Za-z0-9|._]+','',line)
file2.write(line+'\n')
The strange thing is that if I remove the line containing re.sub(), I get a
different error, "Expected 209 fields in line 22236, saw 329", even though
I've explicitly checked for the exact number of delimiters. Visual inspection
of the line and surrounding lines doesn't really show me much either.
This process has worked fine for several other files, including ones that are
larger so I don't think the size of the file is the issue although I suppose
it's possible that that's an oversimplification.
I included the preprocessing because I know from experience that sometimes the
data contains strange special characters, I've also gone back and forth
between using encoding='utf-8' and encoding='utf-8-sig' in the read_csv() and
open() statements to no real avail.
I have several questions - does including the encoding keyword argument cause
python to ignore characters outside of those character sets or does it maybe
invoke some kind of conversion for those characters? I'm not very familiar
with these types of issues. Is it possible that some kind of unexpected
character could have slipped through my preprocessing and caused this? Is
there another type of issue that I haven't found that could cause this? (I
have done research but nothing has been quite right.)
Any help would be much appreciated.
Also, I'm using Anaconda 2.4, with Python 3.5.1, Ipython 4.0.0, and pandas
0.17.0
Answer: I'm not sure that this totally answers my questions, but I did solve the issue:
while it is slower, using engine='python' in pd.read_csv() did the trick.
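For reference, a sketch of that workaround (the chunk size is an arbitrary choice, and note that the Python engine does not support the `low_memory` option):
    import pandas as pd
    tp = pd.read_csv(file_name_cleaned, chunksize=100000, iterator=True,
                     engine='python')
    df = pd.concat(tp, ignore_index=True)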
|
Merging two GEOJSON polygons in Python
Question: Is there a way to merge two overlapping GEOJSON polygons in python, returning
a single merged GEOJSON object?
Answer: This is how I was able to do it using the packages/modules json, geojson,
shapely, pyproj, and partial from functools:
import json
import geojson
from functools import partial
import pyproj
import shapely.geometry
import shapely.ops
# reading into two geojson objects, in a GCS (WGS84)
with open('file1.json') as geojson1:
poly1_geojson = json.load(geojson1)
with open('file2.json') as geojson2:
poly2_geojson = json.load(geojson2)
# pulling out the polygons
poly1 = shapely.geometry.asShape(poly1_geojson['features'][2]['geometry'])
poly2 = shapely.geometry.asShape(poly2_geojson['features'][2]['geometry'])
# checking to make sure they registered as polygons
print poly1.geom_type
print poly2.geom_type
# merging the polygons - they are feature collections, containing a point, a polyline, and a polygon - I extract the polygon
# for my purposes, they overlap, so merging produces a single polygon rather than a list of polygons
mergedPolygon = poly1.union(poly2)
# using geojson module to convert from WKT back into GeoJSON format
geojson_out = geojson.Feature(geometry=mergedPolygon, properties={})
# outputting the updated geojson file - for mapping/storage in its GCS format
with open('Merged_Polygon.json', 'w') as outfile:
json.dump(geojson_out.geometry, outfile, indent=3, encoding="utf-8")
outfile.close()
# reprojecting the merged polygon to determine the correct area
    # it is a polygon covering much of the US, and derived from USGS data, so using Albers Equal Area
project = partial(
pyproj.transform,
pyproj.Proj(init='epsg:4326'),
pyproj.Proj(init='epsg:5070'))
mergedPolygon_proj = shapely.ops.transform(project,mergedPolygon)
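Since `mergedPolygon_proj` is now in a metric projection, its area is directly meaningful; a quick check might look like:
    # area in square kilometres (EPSG:5070 coordinates are in metres)
    print mergedPolygon_proj.area / 1e6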
|
MultiPartParserError :- Invalid boundary
Question: I'm trying to send some data and a file to my Django REST application
using the Python requests module, but I get the error below.
raise MultiPartParserError('Invalid boundary in multipart: %s' % boundary)
MultiPartParserError: Invalid boundary in multipart: None
Code:-
import requests
payload={'admins':[
{'first_name':'john'
,'last_name':'white'
,'job_title':'CEO'
,'email':'[email protected]'
},
{'first_name':'lisa'
,'last_name':'markel'
,'job_title':'CEO'
,'email':'[email protected]'
}
],
'company-detail':{'description':'We are a renowned engineering company'
,'size':'1-10'
,'industry':'Engineering'
,'url':'http://try.com'
,'logo':''
,'addr1':'1280 wick ter'
,'addr2':'1600'
,'city':'rkville'
,'state':'md'
,'zip_cd':'12000'
,'phone_number_1':'408-393-254'
,'phone_number_2':'408-393-221'
,'company_name':'GOOGLE'}
}
files = {'upload_file':open('./test.py','rb')}
import json
headers = {'content-type' : 'application/json'}
headers = {'content-type' : 'multipart/form-data'}
#r = requests.post('http://127.0.0.1:8080/api/create-company-profile/',data=json.dumps(payload),headers=headers,files=files)
r = requests.post('http://127.0.0.1:8080/api/create-company-profile/',data=payload,headers=headers,files=files)
print r.status_code
print r.text
Django code:-
class CompanyCreateApiView(CreateAPIView):
parser_classes = (MultiPartParser, FormParser,)
def post(self, request, *args, **kwargs):
print 'request ==', request.data
Answer: Okay, I forgot about your headers. According to [the
spec](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html):
>
> Content-Type = "Content-Type" ":" media-type
>
>
> MIME provides for a number of "multipart" types -- encapsulations of one or
> more entities within a single message-body. All multipart types share a
> common syntax, ... and MUST include a boundary parameter as part of the
> media type value.
Here is what a request containing multipart/form-data looks like:
POST /myapp/company/ HTTP/1.1
Host: localhost:8000
Content-Length: 265
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.9.0
Connection: keep-alive
Content-Type: multipart/form-data; boundary=63c5979328c44e2c869349443a94200e
--63c5979328c44e2c869349443a94200e
Content-Disposition: form-data; name="hello"
world
--63c5979328c44e2c869349443a94200e
Content-Disposition: form-data; name="mydata"; filename="data.txt"
line 1
line 2
line 3
line 4
--63c5979328c44e2c869349443a94200e--
See how the sections of data are separated by the boundary:
--63c5979328c44e2c869349443a94200e--
The idea is to use something for a boundary that is unlikely to appear in the
data. Note that the boundary was included in the `Content-Type` header of the
request.
That request was produced by this code:
import requests
myfile = {'mydata': open('data.txt','rb')}
r = requests.post(url,
#headers = myheaders
data = {'hello': 'world'},
files = myfile
)
It looks like you were paying careful attention to the following note in the
django-rest-framework [docs](http://www.django-rest-framework.org/api-
guide/parsers/#multipartparser):
> Note: When developing client applications always remember to make sure
> you're setting the Content-Type header when sending data in an HTTP request.
>
> If you don't set the content type, most clients will default to using
> 'application/x-www-form-urlencoded', which may not be what you wanted.
But when you are using `requests`, if you specify the `Content-Type` header
yourself, then `requests` assumes that you know what you're doing, and it
doesn't overwrite your `Content-Type` header with the `Content-Type` header it
would have provided.
You didn't provide the boundary in your `Content-Type` header--as required.
How could you? You didn't assemble the body of the request and create a
boundary to separate the various pieces of data, so you couldn't possibly know
what the boundary is.
When the `django-rest-framework` note says that you should include a
`Content-Type` header in your request, what that really means is:
> You or any programs you use to create the request need to include a
> `Content-Type` header.
So @AChampion was exactly right in the comments: let `requests` provide the
`Content-Type` header; after all, the `requests` docs advertise:
> Requests takes all of the work out of Python HTTP/1.1
`requests` works like this: if you provide a `files` keyword arg, then
requests uses a `Content-Type` header of `multipart/form-data` and also
specifies a boundary in the header; then `requests` assembles the body of the
request using the boundary. If you provide a `data` keyword argument then
requests uses a `Content-Type` of `application/x-www-form-urlencoded`, which
just assembles all the keys and values in the dictionary into this format:
x=10&y=20
No boundary required.
And, if you provide both a `files` keyword arg and a `data` keyword arg, then
requests uses a `Content-Type` of `multipart/form-data`.
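Applied to the original question, one hedged sketch is to let `requests` build the multipart body and to serialize the nested structures into ordinary form fields as JSON strings (whether the Django view parses those fields back with `json.loads()` is an assumption about the server side):
    import json
    import requests
    files = {'upload_file': open('./test.py', 'rb')}
    data = {
        # Nested dicts/lists cannot be form-encoded directly, so ship
        # them as JSON strings for the server to decode.
        'admins': json.dumps(payload['admins']),
        'company-detail': json.dumps(payload['company-detail']),
    }
    # No Content-Type header here: requests generates multipart/form-data
    # with a boundary because of the files argument.
    r = requests.post('http://127.0.0.1:8080/api/create-company-profile/',
                      data=data, files=files)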
|
Scrapy cannot find spider
Question: I am following the scrapy tutorial in the [scrapy
documentation](http://doc.scrapy.org/en/1.0/intro/tutorial.html). This is what
my current directory looks like:
.
├── scrapy.cfg
└── tutorial
├── __init__.py
├── __init__.pyc
├── items.py
├── pipelines.py
├── settings.py
├── settings.pyc
└── spiders
├── __init__.py
├── __init__.pyc
└── dmoz_spider
The dmoz_spider.py is the same as described in scrapy tutorial page.
import scrapy
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
]
def parse(self, response):
filename = response.url.split("/")[-2] + '.html'
with open(filename, 'wb') as f:
f.write(response.body)
Then I run this command from the current directory
scrapy crawl dmoz
But I get the error message:
2015-12-17 12:23:22 [scrapy] INFO: Scrapy 1.0.3 started (bot: tutorial)
2015-12-17 12:23:22 [scrapy] INFO: Optional features available: ssl, http11
2015-12-17 12:23:22 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
...
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: dmoz'
Are there any suggestions about which part I did wrong? I have checked a
[similar question](http://stackoverflow.com/questions/18152473/scrapy-python-cannot-find-the-spider)
on Stack Overflow and followed the solution there, but I still get the error.
Answer: You have to add a `.py` extension to your `dmoz_spider` file. The file name
should be `dmoz_spider.py`.
|
Error with HTTP Post in python requests library
Question: I am not able to send an HTTP POST request via Python (Flask).
Here is my python code.
import json
from flask import Flask, render_template, request, jsonify
import requests
app = Flask(__name__)
@app.route("/",methods=['GET','POST'])
def home():
if request.method == 'POST':
#user inputs
value1 = request.form.get('first')
value2 = request.form.get('second')
value3 = request.form.get('third')
#api call
url = 'http://myapiurl.com/ws/spm/spm-general'
payload = {"perfid" : {0}, "section" : {"hostname" : {1}, "iteration" : {2}, "sectionname" : "sysstat_M"}.format(value1,value2,value3)}
r = requests.post(url, data=json.dumps(payload))
#print(r.status_code, r.headers['content-type'])
#print(r.text)
returnData["result"] = json.loads(r.text)
return jsonify(returnData)
return render_template('index.html')
if __name__ == '__main__':
app.run(debug=True)
I am using Flask. The error I am getting is:
payload = {"perfid" : {0}, "section" : {"hostname" : {1}, "iteration" : {2},
"sectionname" : "sysstat_M"}.format(value1,value2,value3)}
AttributeError: 'dict' object has no attribute 'format'
When I try it in plain Python (without Flask) it works, but I am not able to
do it with Flask.
Here is the code which works:
import json
import requests
url = 'http://myapiurl.com/ws/spm/spm-general'
payload = {"perfid" : 124, "section" : {"hostname" : "10.161.146.94/10.161.146.90", "iteration" : "1", "sectionname" : "sysstat_M"}}
r = requests.post(url, data=json.dumps(payload))
print(r.status_code, r.headers['content-type'])
print(r.text)
Answer: It seems like you've confused `dict` with `str` (which has a
[`format`](https://docs.python.org/3/library/stdtypes.html#str.format)
method).
Just put the variables to the dictionary literal:
payload = {
"perfid" : value1,
"section" : {
"hostname" : value2,
"iteration" : value3,
"sectionname" : "sysstat_M"
}
}
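For completeness, a sketch of the corrected call inside the view; note that `returnData` in the original snippet is never initialized, so it needs a `returnData = {}` first:
    r = requests.post(url, data=json.dumps(payload))
    returnData = {}  # must exist before the assignment below
    returnData["result"] = json.loads(r.text)
    return jsonify(returnData)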
|
How to automatically check a Python script's correct version when the shebang is not accurate?
Question: For example, I found that
[download_model_binary.py](https://github.com/BVLC/caffe/blob/master/scripts/download_model_binary.py)'s
shebang is wrong: the script contains the Python 2 library function
`urllib.urlretrieve`.
I tried using two Python interpreters to execute the file and watching the
return value in a script, but that approach has side effects.
Note: I am asking how to detect the correct version of an existing Python 2
script like `download_model_binary.py` which has the wrong shebang, not how to
rewrite it to be compatible.
Answer: You could do something like this:
import sys
if sys.version_info.major < 3:
from urllib import urlretrieve
else:
from urllib.request import urlretrieve
And later use:
urlretrieve(frontmatter['caffemodel_url'], model_filename, reporthook)
If you need this often consider using [Python Future](http://python-
future.org/index.html). It offers a [good solution](http://python-
future.org/quickstart.html#standard-library-reorganization) for this type of
problem.
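If the goal is detection rather than rewriting, one side-effect-free sketch is to try byte-compiling the script under each interpreter; a `SyntaxError` under Python 3 but not under Python 2 is a strong hint. This only catches syntax-level differences (e.g. `print` statements), not runtime ones such as `urllib.urlretrieve` having moved:
    import py_compile
    def compiles_here(path):
        # Byte-compile without executing the script; doraise=True turns
        # compilation problems into a catchable PyCompileError.
        try:
            py_compile.compile(path, doraise=True)
            return True
        except py_compile.PyCompileError:
            return False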
|
Python: Getting text from HTML using BeautifulSoup
Question: I am trying to extract the ranking text number from this link [link example:
kaggle user ranking no1](https://www.kaggle.com/titericz). More clear in an
image:
[](http://i.stack.imgur.com/sClUu.png)
I am using the following code:
def get_single_item_data(item_url):
sourceCode = requests.get(item_url)
plainText = sourceCode.text
soup = BeautifulSoup(plainText)
for item_name in soup.findAll('h4',{'data-bind':"text: rankingText"}):
print(item_name.string)
item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
The result is `None`. The problem is that `soup.findAll('h4',{'data-
bind':"text: rankingText"})` outputs:
`[<h4 data-bind="text: rankingText"></h4>]`
but when inspecting the HTML of the page, the element looks like:
`<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in the image:
[](http://i.stack.imgur.com/8i76M.png)
It's clear that the text is missing. How can I get around that?
Edit: Printing the `soup` variable in the terminal I can see that this value
exists: [](http://i.stack.imgur.com/BFyuz.png)
So there should be a way to access through `soup`.
Edit 2: I tried unsuccessfully to use the most voted answer from this
[stackoverflow question](http://stackoverflow.com/questions/24118337/fetch-
data-of-variables-inside-script-tag-in-python-or-content-added-from-js). Could
be a solution around there.
Answer: If you aren't going to try browser automation through `selenium` as @Ali
suggested, you would have to _parse the javascript containing the desired
information_. You can do this in different ways. Here is a working code that
locates the `script` by a [regular expression
pattern](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-
expression), then extracts the `profile` object, loads it with
[`json`](https://docs.python.org/2/library/json.html) into a Python dictionary
and prints out the desired ranking:
import re
import json
from bs4 import BeautifulSoup
import requests
response = requests.get("https://www.kaggle.com/titericz")
soup = BeautifulSoup(response.content, "html.parser")
pattern = re.compile(r"profile: ({.*}),", re.MULTILINE | re.DOTALL)
script = soup.find("script", text=pattern)
profile_text = pattern.search(script.text).group(1)
profile = json.loads(profile_text)
print profile["ranking"], profile["rankingText"]
Prints:
1 1st
|
can't terminate a sudo process created with python, in Ubuntu 15.10
Question: I just updated to Ubuntu 15.10 and suddenly in Python 2.7 I am not able to
**terminate** a process I created when being **root**. For example, this
doesn't terminate tcpdump:
import subprocess, shlex, time
tcpdump_command = "sudo tcpdump -w example.pcap -i eth0 -n icmp"
tcpdump_process = subprocess.Popen(
shlex.split(tcpdump_command),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
time.sleep(1)
tcpdump_process.terminate()
tcpdump_out, tcpdump_err = tcpdump_process.communicate()
What happened? It works on previous versions.
Answer: **TL;DR** : `sudo` does not forward signals sent by a process in the command's
process group [since 28 May 2014
commit](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0) released in `sudo
1.8.11` \-- the python process (sudo's parent) and the tcpdump process
(grandchild) are in the same process group by default and therefore `sudo`
does not forward `SIGTERM` signal sent by `.terminate()` to the `tcpdump`
process.
* * *
> It shows the same behaviour when running that code while being the root user
> and while being a regular user + sudo
Running as a regular user raises `OSError: [Errno 1] Operation not permitted`
exception on `.terminate()` (as expected).
Running as `root` reproduces the issue: `sudo` and `tcpdump` processes are not
killed on `.terminate()` and the code is stuck on `.communicate()` on Ubuntu
15.10.
The same code kills both processes on Ubuntu 12.04.
`tcpdump_process` name is misleading because the variable refers to the `sudo`
process (the child process), not `tcpdump` (grandchild):
python
└─ sudo tcpdump -w example.pcap -i eth0 -n icmp
└─ tcpdump -w example.pcap -i eth0 -n icmp
As [@Mr.E pointed out in the
comments](http://stackoverflow.com/questions/34337840/cant-terminate-a-
process-created-with-python-in-ubuntu-15-10#comment56417413_34337840), you
don't need `sudo` here: you're root already (though you shouldn't be -- you
can [sniff the network without root](http://askubuntu.com/q/74059/3712)). If
you drop `sudo`; `.terminate()` works.
In general, `.terminate()` does not kill the whole process tree recursively
and therefore it is expected that a grandchild process survives. Though `sudo`
is a special case, [from sudo(8) man
page](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html):
> When the command is run as a child of the `sudo` process, `sudo` will
> **_relay signals_** it receives to the command. (emphasis is mine)
i.e., `sudo` should relay `SIGTERM` to `tcpdump` and [`tcpdump` should stop
capturing packets on `SIGTERM`, from tcpdump(8) man
page](http://manpages.ubuntu.com/manpages/wily/man8/tcpdump.8.html):
> Tcpdump will, ..., continue capturing packets until it is interrupted by a
> SIGINT signal (generated, for example, by typing your interrupt character,
> typically control-C) or a SIGTERM signal (typically generated with the
> kill(1) command);
i.e., **the expected behavior is** : `tcpdump_process.terminate()` sends
SIGTERM to `sudo` which relays the signal to `tcpdump` which should stop
capturing and both processes exit and `.communicate()` returns `tcpdump`'s
stderr output to the python script.
Note: in principle the command may be run without creating a child process,
[from the same sudo(8) man
page](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html):
> As a special case, if the policy plugin does not define a close function and
> no pty is required, `sudo` will execute the command directly instead of
> calling fork(2) first
and therefore `.terminate()` may send SIGTERM to the `tcpdump` process
directly -- though it is not the explanation: `sudo tcpdump` creates two
processes on both Ubuntu 12.04 and 15.10 in my tests.
If I run `sudo tcpdump -w example.pcap -i eth0 -n icmp` in the shell then
`kill -SIGTERM` terminates both processes. It does not look like Python issue
(Python 2.7.3 (used on Ubuntu 12.04) behaves the same on Ubuntu 15.10. Python
3 also fails here).
It is related to process groups ([job control](http://www.openbsd.org/cgi-
bin/man.cgi/OpenBSD-current/man4/termios.4?&manpath=OpenBSD-
current&sec=4&query=termios)): passing `preexec_fn=os.setpgrp` to
`subprocess.Popen()` so that `sudo` will be in a new process group (job) where
it is the leader as in the shell makes `tcpdump_process.terminate()` work in
this case.
> What happened? It works on previous versions.
The explanation is in [the sudo's source
code](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0):
> **_Do not forward signals sent by a process in the command's process
> group_** , do not forward it as we don't want the child to indirectly kill
> itself. For example, this can happen with some versions of reboot that call
> kill(-1, SIGTERM) to kill all other processes. (emphasis is mine)
`preexec_fn=os.setpgrp` changes `sudo`'s process group. `sudo`'s descendants
such as `tcpdump` process inherit the group. `python` and `tcpdump` are no
longer in the same process group and therefore the signal sent by
`.terminate()` is relayed by `sudo` to `tcpdump` and it exits.
Ubuntu 15.04 uses `Sudo version 1.8.9p5` where the code from the question
works as is.
Ubuntu 15.10 uses `Sudo version 1.8.12` that contains [the
commit](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0).
[sudo(8) man page in wily
(15.10)](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html) still
talks only about the child process itself -- no mention of the process group:
> As a special case, sudo will not relay signals that were sent by the command
> it is running.
It should be instead:
> As a special case, sudo will not relay signals that were sent by a process
> in the process group of the command it is running.
You could open a documentation issue on [Ubuntu's bug
tracker](https://help.ubuntu.com/community/ReportingBugs) and/or on [the
upstream bug tracker](https://www.sudo.ws/).
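For reference, the fix described above applied to the original snippet:
    import os
    import subprocess, shlex, time
    tcpdump_command = "sudo tcpdump -w example.pcap -i eth0 -n icmp"
    tcpdump_process = subprocess.Popen(
        shlex.split(tcpdump_command),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        preexec_fn=os.setpgrp)  # sudo now leads its own process group
    time.sleep(1)
    tcpdump_process.terminate()  # relayed by sudo to tcpdump
    tcpdump_out, tcpdump_err = tcpdump_process.communicate()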
|
python 2.7 : create dictionary from list of sets
Question: After performing some operations I get a `list` of `set` objects, as follows:
from pyspark.mllib.fpm import FPGrowth
FreqItemset(items=[u'A_String_0'], freq=303)
FreqItemset(items=[u'A_String_0', u'Another_String_1'], freq=302)
FreqItemset(items=[u'B_String_1', u'A_String_0', u'A_OtherString_1'], freq=301)
I'd like to create the following from this list:
1. RDD
2. Dictionary , for example :
key: A_String_0 value: 303
key: A_String_0,Another_String_1 value: 302
key: B_String_1,A_String_0,A_OtherString_1 value: 301
I'd like to continue with calculations to produce Confidence and Lift
I tried to execute a `for` loop to get each item from the list.
The question is whether there is another, better way to create the RDD and/or
lists here?
Thank you in advance.
Answer: 1. If you want a `RDD` simply don't collect `freqItemsets`
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
freqItemsets = model.freqItemsets()
you can of course `parallelize`
    result = model.freqItemsets().collect()
    sc.parallelize(result)
2. I am not sure why you need this (it looks like an [XY problem](http://meta.stackexchange.com/questions/66377/what-is-the-xy-problem)) but you can use comprehensions on the collected data:
{tuple(x.items): x.freq for x in result}
or
{",".join(x.items): x.freq for x in result}
Generally speaking if you want to apply further transformations on your data
don't collect and process data directly in Spark.
Also you should take a look at the Scala API. It already implements
[association rules](https://spark.apache.org/docs/latest/mllib-frequent-
pattern-mining.html#association-rules).
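As a hedged sketch of the follow-up calculations mentioned in the question, assuming `result` holds the collected `FreqItemset` records and that the required itemsets are all present in it:
    freq = {frozenset(x.items): x.freq for x in result}
    def confidence(antecedent, consequent):
        # confidence(A -> B) = freq(A union B) / freq(A)
        a = frozenset(antecedent)
        return freq[a | frozenset(consequent)] / float(freq[a])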
|
TensorFlow no attribute 'make_template'
Question: I am trying to get familiar with tensorflow and NNs. I recently crashed into
this problem when I tried to use `tensorflow.make_template()`. I first noticed
that there were no auto-complete option in the IDE I use, and then I just
tried to call the function from the python cmd:
$ python
Python 2.7.10 (default, Oct 14 2015, 16:09:02)
[GCC 5.2.1 20151010] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.make_template()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'make_template'
>>>
I installed tensorflow from sources, and the protobuf version installed is:
$ git submodule status
55ad57a235c009d0414aed1781072adda0c89137 google/protobuf (v3.0.0-alpha-4-179-g55ad57a)
I haven't faced any similar behaviour with other tensorflow functions so far.
Any ideas about what's the issue causing this one?
Answer: As noted in the comments, this issue was due to a mismatch between the
installed version of TensorFlow (0.5.0) and the downloaded source (0.6.0).
To upgrade to the latest development version of TensorFlow, follow the
instructions to [install from
source](https://www.tensorflow.org/versions/master/get_started/os_setup.html#source),
then [build and install the PIP
package](https://www.tensorflow.org/versions/master/get_started/os_setup.html#create-
pip) based on that source.
|
How to not await in a loop with asyncio?
Question: Here is a toy example that downloads the home page from several websites using
asyncio and aiohttp:
import asyncio
import aiohttp
sites = [
"http://google.com",
"http://reddit.com",
"http://wikipedia.com",
"http://afpy.org",
"http://httpbin.org",
"http://stackoverflow.com",
"http://reddit.com"
]
async def main(sites):
for site in sites:
download(site)
async def download(site):
response = await client.get(site)
content = await response.read()
print(site, len(content))
loop = asyncio.get_event_loop()
client = aiohttp.ClientSession(loop=loop)
content = loop.run_until_complete(main(sites))
client.close()
If I run it, I get:
RuntimeWarning: coroutine 'download' was never awaited
But I don't want to await it.
In twisted I can do:
for site in sites:
download(site)
And If I don't explicitly "yield" or add a callback to the returned Deferred,
it just runs without blocking nor complaining. I can't access the result, but
in this case I don't need it.
In JS I can do:
    sites.forEach(function(site) {
        download(site);
    });
And again, it doesn't block, nor does it require anything on my part.
I found a way to do:
async def main(sites):
await asyncio.wait([download(site) for site in sites])
But:
- this is really not obvious to find out, and it's hard to remember.
- it's hard to understand what it does. "wait" seems to say "I block", but it does not convey clearly that it blocks until the entire list of coroutines has finished.
- you can't pass in a generator; it needs to be a real list, which feels really unnatural in Python.
- what if I have only ONE awaitable?
- what if I don't want to wait at all on my tasks, and just schedule them for execution then carry on with the rest of my code?
- it's way more verbose than the twisted and JS solutions.
Is there a better way?
Answer: In order to schedule a coroutine as a task, use
[asyncio.ensure_future](https://docs.python.org/3/library/asyncio-
task.html#asyncio.ensure_future):
for site in sites:
coro = download(site)
future = asyncio.ensure_future(coro)
It replaces the deprecated function
[asyncio.async](https://docs.python.org/3/library/asyncio-
task.html#asyncio.async) in version 3.4.4.
Then you can manage those futures using `await`,
[asyncio.wait](https://docs.python.org/3/library/asyncio-
task.html#asyncio.wait) or
[asyncio.gather](https://docs.python.org/3/library/asyncio-
task.html#asyncio.gather).
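For example, a sketch of the original `main` rewritten this way:
    async def main(sites):
        # Schedule every download immediately; nothing blocks here.
        tasks = [asyncio.ensure_future(download(site)) for site in sites]
        # Wait for all of them at a point of your choosing.
        await asyncio.gather(*tasks)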
|
Incomplete parsing of entire genbank file using python/biopython
Question: The main goal of my script is to convert a genbank file to a gtf file. My
problem pertains to extracting CDS information (gene, position (e.g., CDS
2598105..2598404), codon_start, protein_id, db_xref) from **_all_** CDS
entries. My script should open/parse a genbank file, extract information from
each CDS entry, and write the information to another file. The script produces
no errors, but only writes information from the first 1/2 of the genbank file
before terminating. Here is my code...
import Bio
from Bio import GenBank
from Bio import SeqIO
fileList = ['data_files/e_coli_ref_BA000007.2.gb']
qualies = ['gene', 'protein_id', 'db_xref']
#######################################################DEFINITIONS################################################################
def strip_it(string_name):
stripers = ['[', ']', '\'', '"']
for s in stripers:
string_name = string_name.replace(s, '')
string_name = string_name.lstrip()
return string_name
def strip_it_attributes(string_name):
stripers = ['[', ']', '\'', '"', '{', '}',',']
for s in stripers:
string_name = string_name.replace(s, '')
string_name = string_name.lstrip()
string_name = string_name.replace(': ', '=')
string_name = string_name.replace(' ', ';')
return string_name
#---------------------------------------------------------------------------------------------------------------------------------
#######################################################################################################################
for f in fileList:
nameOut = f.replace('gb', 'gtf')
with open(f, 'r') as inputFile:
with open(nameOut, 'w') as outputFile:
record = next(SeqIO.parse(f, 'genbank'))
seqid = record.id
typeName = 'Gene'
source = 'convert_gbToGFT.py'
start_codon = 'NA'
attribute = 'NA'
featureCount = 0
for f in record.features:
print(f.type)
string = ''
if f.type == 'CDS':
dic = {}
CDS = record.features[featureCount]
position = strip_it(str(CDS.location))
start = position.split(':')[0]
stop = position.split(':')[1].split('(')[0]
strand = position.split(':')[1].split('(')[1].replace(')', '')
score = '.'
for q in qualies:
if q in CDS.qualifiers:
if q not in dic:
dic[q] = ''
dic[q] = strip_it(str(CDS.qualifiers[q]))
attribute = strip_it_attributes(str(dic))
if 'codon_start' in CDS.qualifiers:
start_codon = str(int(str(CDS.qualifiers['codon_start'][0]))-1) #need string when finished so it can be added to variable 'string'
string = '\t'.join([seqid, source, typeName, start, stop, score, strand, start_codon, attribute])
if attribute.count(';') == 2:
outputFile.write(string + '\n')
featureCount+=1
#---------------------------------------------------------------------------------------------------------------------------------
The last line of the output file is:
    BA000007.2 convert_gbToGFT.py Gene 2598104 2598404 . + 0 protein_id=BAB36052.1;db_xref=GI:13362097;gene=ECs2629
The location of gene ECs2629 appears on line 36094 in the genbank file, but
the total number of lines in this file is 73498. I have re-downloaded the file
multiple times to see if there was a downloading issue and I have visually
inspected the file (I find no fault with it). I have also tried this script on
another equally large genbank file and was met with identical issues.
Can anyone offer some suggestions as to why the entire genbank file is not
parsed, how I could modify my code to remove this issue, or point me to
another possible solution?
(you can see the format of a genbank file from here:
<http://www.ncbi.nlm.nih.gov/Sitemap/samplerecord.html>), however, I am
working with an _E. coli_ genbank file (_Escherichia coli_ O157:H7 str. Sakai
DNA, complete genome) which can be found here:
<http://www.ncbi.nlm.nih.gov/nuccore/BA000007.2>
I am using the following: Centos 6.7, Python 3.4.3 :: Anaconda 2.3.0 (64-bit),
Biopython 1.66
[EDIT] @Gerrat's suggestion worked for the file in question, but not for other
files. Using <http://www.ncbi.nlm.nih.gov/nuccore/NC_000913.3> with the
suggested edit yields ~28 lines of output where my original code output 2084
lines (however, there should be 4332 lines of output).
Answer: Change this line:
CDS = record.features[featureCount]
to:
CDS = f
You're skipping records by accessing them via the `featureCount` index (since
there are probably half as many `featureCount` values as records).
**EDIT: To elaborate on your comment:**
Your original script is just wrong (w.r.t. the way you're using
`featureCount`). My correction is necessary. If you have further issues, there
is something else wrong. In this case, there appear to be 28 CDS records with
an attribute count of 2. (I know nothing about gene sequencing, I'm just going
by the variable names in the script). When you switch back to using
`featureCount`, you're now looking at records where the "type" is not "CDS".
It is "gene", or "repeat_region". You're checking the type of the record, `f`
to see if it is `CDS`, but then using a completely different record,
`record.features[featureCount]`. These don't refer to the same record (check
the CDS.type of this record - it's no longer "CDS" in most cases).
|
Import functions directly from Python 3 modules
Question: I have the following folder structure for a Python 3 project where
`vehicle.py` is the main script and the folder `stats` is treated as a package
containing several modules:
[](http://i.stack.imgur.com/ohzSm.png)
The `cars` module defines the following functions:
def neon():
print('Neon')
print('mpg = 32')
def mustang():
print('Mustang')
print('mpg = 27')
Using Python 3, I can access the functions in each module from within
`vehicle.py` as follows:
import stats.cars as c
c.mustang()
However, I would like to access the functions defined in each module directly,
but I receive an error when doing this:
import stats as st
st.mustang()
# AttributeError: 'module' object has no attribute 'mustang'
I also tried placing an `__init__.py` file in the `stats` folder with the
following code:
from cars import *
from trucks import *
but I still receive an error:
import stats as st
st.mustang()
# ImportError: No module named 'cars'
I'm trying to use the same approach as NumPy such as:
import numpy as np
np.arange(10)
# prints array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
How can I create a package like NumPy in Python 3 to access functions directly
in modules?
Answer: Put an `__init__.py` file in the `stats` folder (as others have said), and put
this in it:
from .cars import neon, mustang
from .trucks import truck_a, truck_b
Not so neat, but easier would be to use the `*` wildcard:
from .cars import *
from .trucks import *
This way, the `__init__.py` script does some importing for you, into its own
namespace.
Now you can use functions/classes like `neon`/`mustang` directly
after you import `stats`:
import stats as st
st.mustang()
|
Analysing Time Series in Python - pandas formatting error - statsmodels
Question: I am trying to analyse stars' data. I have light time series of the stars and
I want to predict to which class (among 4 different types) they belong. I have
light time series of those stars, and I want to analyse those time series by
doing deseasonalisation, frequencies analysis and other potentially relevant
studies.
The object time_series is a panda DataFrame, including 10 columns :
time_points_b, light_points_b (the b being for blue), etc...
I first want to study the blue light time series.
import statsmodels.api as sm;
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.mpl_style = 'default'
%matplotlib inline
def star_key(slab_id, star_id_b):
return str(slab_id) + '_' + str(star_id_b)
raw_time_series = pd.read_csv("data/public/train_varlength_features.csv.gz", index_col=0, compression='gzip')
time_series = raw_time_series.applymap(csv_array_to_float)
time_points = np.array(time_series.loc[star_key(patch_id, star_id_b)]['time_points_b'])
light_points = np.array(time_series.loc[star_key(patch_id, star_id_b)]['light_points_b'])
error_points = np.array(time_series.loc[star_key(patch_id, star_id_b)]['error_points_b'])
light_data = pd.DataFrame({'time':time_points[:], 'light':light_points[:]})
residuals = sm.tsa.seasonal_decompose(light_data);
light_plt = residuals.plot()
light_plt.set_size_inches(10, 5)
light_plt.tight_layout()
This code gives me an attribute error when I apply the seasonal_decompose
method: AttributeError: 'Int64Index' object has no attribute 'inferred_freq'
Answer: `seasonal_decompose()` expects a `DatetimeIndex` on your DataFrame. Here's an
example:
[](http://i.stack.imgur.com/lZoUt.png)
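A minimal sketch of that fix applied to the question's data, assuming `time_points` can be interpreted as timestamps in seconds and that 30 observations per seasonal cycle is appropriate (both assumptions need adjusting to the real data):
    import pandas as pd
    import statsmodels.api as sm
    light_data = pd.DataFrame(
        {'light': light_points},
        index=pd.to_datetime(time_points, unit='s'))  # DatetimeIndex
    residuals = sm.tsa.seasonal_decompose(light_data, freq=30)
    residuals.plot()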
|
How to incorporate code into a for loop using Python
Question: Edit: Here is the code I am trying to use:
from bs4 import BeautifulSoup
import re
import sys
m = re.compile("^\d\d:\d\d$")
readfile = open("C:\\Temp\\LearnPythonTheCompletePythonProgrammingCourse_Udemy.htm", 'r').read()
soup = BeautifulSoup(readfile, "html.parser")
ci_details = soup.findAll("span",{"class":"ci-details"})
timeList = []
for detail in ci_details:
for span in detail.findAll("span"):
if m.match(span.text):
timeList.append(span.text)
print (timeList)
for i in timeList:
time1=timeList[0]
print(time1)
_Edit:_ I realized, looking this over, that I am telling Python to print time1
for every item in timeList. How do I iterate over timeList?
I want to use dstubeda's code to take each entry in the list, convert it to
raw seconds, and add them up. Then once done, I will convert the total to h:m:s.
Where did I go wrong with my for loop?
Answer: It would be easier if you showed your code, but you should be able to figure
it out from this:
totalTime = 0
spans = soup.find_all('span',{"class":"ci-details"})
for span in spans:
rawTime = span.text
processedTime = DavesTimeFunction(rawTime)
totalTime += processedTime
print("The total time is: " + str(totalTime))
|
Pandas Bar Graph: Side By Side (No X or Y axises shared)
Question: [Two Bar Plots- Non side by side](http://i.stack.imgur.com/XFt3A.png)
Hi, I am relatively new to Pandas and Python. I have been trying to get my two
bar graphs to be displayed side by side instead of showing it one after
another as it is in the image above. They do not share a y or x axis as they
are separate independent variables. I would like to present it so that I can
emphasize how closely related they are and that they are duplicate variables.
So far, I have looked at <http://worksofscience.net/matplotlib/gridspec> where
GridSpec is used to create different grids to plot in. After creating two
grids of equal size ax1 and ax2, I tried to add in the my two plots appended
to their respective variables as with the code below
fig1 = plt.figure(figsize=[15,8])
gs = GridSpec(100,100)
ax1 = fig1.add_subplot(gs[:,0:50])
ax2 = fig1.add_subplot(gs[:,51:100])
#saving the plots in variables
waterpointtype=waterindep.waterpoint_type.value_counts().plot(kind='bar',title ='Waterpoint_Type')
#plt.show()
waterpointtypegroup = waterindep.waterpoint_type_group.value_counts().plot(kind='bar',title='Waterpoint_Type_Group')
#plt.show()
#plotting on those axes
ax1.plot(waterpointtype)
ax2.plot(waterpointtypegroup)
fig1.savefig('waterpointcomparison.png', format ='png', dpi =600)
plt.show()
However, I am ending up with **IndexError: index 51 is out of bounds for axis
0 with size 3**, which I cannot make sense of. If you have a solution to this
error or another way by which I can plot bar-graphs side by side, I would
really appreciate it. Thanks a lot!
Answer: Pass the axes to the plot calls:
waterindep['waterpoint_type'].value_counts().plot(
kind='bar',title='Waterpoint_Type', ax=ax1)
waterindep['waterpoint_type_group'].value_counts().plot(
kind='bar',title='Waterpoint_Type_Group', ax=ax2)
* * *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
np.random.seed(2015)
fig1 = plt.figure(figsize=[15,8])
gs = gridspec.GridSpec(100,100)
ax1 = fig1.add_subplot(gs[:,0:50])
ax2 = fig1.add_subplot(gs[:,51:100])
waterindep = pd.DataFrame(np.random.randint(10, size=(100,2)),
columns=['waterpoint_type', 'waterpoint_type_group'])
waterindep['waterpoint_type'].value_counts().plot(kind='bar',title='Waterpoint_Type', ax=ax1)
waterindep['waterpoint_type_group'].value_counts().plot(kind='bar',title='Waterpoint_Type_Group', ax=ax2)
plt.show()
## [](http://i.stack.imgur.com/DbLvW.png)
Note that you could also use
fig, axs = plt.subplots(figsize=[15,8], ncols=2)
ax1, ax2 = axs
instead of `gridspec.GridSpec` to create the axes. It will have better looking
spacing between the axes "out-of-the-box".
|
Generator Reading Log File While Name Changes Python
Question: The goal is to read a log file in real time line by line (standard generator
stuff) but the catch is, the file name changes at various intervals. The name
change can't be helped (application dictated appended with a time string) and
the name is changed when the log file size reaches ~2MB (guesstimate).
My approach was to create a file getter function that got the file (or new
file) and then passed that to the generator. I thought that when the file
changed names I would get a 'File not found' error, but what my test showed,
is that the file name change is prevented entirely as 'another program is
using this file'. The name change must be allowed, and this reader code cannot
interfere with the application logging process at all.
import os
import time
import fnmatch
directory = '\\foo\\'
def fileGenerator(logFile):
""" Run a line generator """
logFile.seek(0,2)
while True:
line = logFile.readline()
if not line:
time.sleep(0.1)
continue
yield line
def fileGetter():
""" Get the Logging File """
matchedFiles = []
for afile in os.listdir(directory):
if fnmatch.fnmatch(afile,'amc_*.txt'):
matchedFiles.append(afile)
if len(matchedFiles)==1:
#There was exactly one matching file found send it to the generator
return os.path.join(directory,matchedFiles[0])
else:
#There either wasn't a file found or many matching
#Error out and stop process... critical error
if __name__ == '__main__':
filePath = fileGetter()
try:
logFile = open(filePath,"r")
except Exception as e:
#Catch the file not found and go back to the file path getter
#Send the file back to the generator
print e
if logFile:
loglines = fileGenerator(logFile)
for line in loglines:
#handle the line
print line,
Answer: If you can't hold the file open while waiting for new content to be written to
it, I suggest saving the file position you were last at and closing the file
before you sleep, and then reopening the file and seeking to that point
afterwards. You could also investigate filesystem notification systems if you
care about spotting file additions or renames immediately.
    from time import sleep

    def log_reader():
filename = "does_not_exist"
filepos = 0
while True:
try:
file = open(filename)
except FileNotFoundError:
filename = fileGetter()
# if renamed files start empty, set filepos to zero here!
continue
file.seek(filepos)
while True:
line = file.readline()
if not line:
filepos = file.tell()
file.close()
sleep(0.1) # you may want to test different sleep lengths to avoid FS thrash
break
yield line
The opening and closing of the file may stress out your filesystem if you do
it too much, so I'd suggest sleeping longer than your previous code did (but
you may want to test to see how well your OS handles it if you care about how
responsive your log reader is).
|
Matplotlib error with colormap in version 1.5.0
Question: In a python script, I just upgraded my matplotlib to 1.5.0 and am now getting
this error:
from matplotlib import pyplot as plt
import matplotlib.ticker as tkr
from matplotlib import rcParams
from mpl_toolkits.basemap import Basemap, maskoceans
cs = m.contourf(x,y,mask_data,numpy.arange(min_range,max_range,step),cmap=PRGn_10.mpl_colormap)
NameError: global name 'PRGn_10' is not defined
How can I fix this?
Answer: It is not a `matplotlib` error. The error message says that the name `PRGn_10`
is not defined — because you never defined it. It is not present in any of
your imports, and it is not a built-in, so Python cannot find it.
I am guessing you wanted to use the `PRGn` colormap. In order to do so, you
need to import it, or the whole `colormap` module and reference it properly:
import matplotlib.cm as cm
cs = m.contourf(x,y,mask_data,numpy.arange(min_range,max_range,step),cmap=cm.PRGn)
or
from matplotlib.cm import PRGn
cs = m.contourf(x,y,mask_data,numpy.arange(min_range,max_range,step),cmap=PRGn)
Not sure what you meant by the `.mpl_colormap` bit; standard matplotlib
colormaps do not have such an attribute.
|
How to convert a tab-separated txt file to csv in Python
Question: I have a txt file of 7 columns and about 8000 rows. The name of the file is
"Azimuth".
The first row is: Id Xs Xe Ys Ye Length Azimuth
and each row under it is a unique measurement, so this is a txt file with 8000
measurements. The data is tab separated, or so it seems in the file itself.
When I open it with Excel it converts it to columns and rows with no problem,
but when I try to read it with Python, it reads the whole text as one column.
I tried to run this code that I found:
f=open("Azimuth.txt")
lines=f.readlines()
result=[]
for x in lines:
result.append(x.split(' ')[0])
f.close()
print result
When I run it, it prints the whole txt file as one long row (or column,
I'm not sure). I tried to convert it to a csv file, but that separated the
whole txt by commas, not as I expected.
Can anyone help? I need to build code that relies on that file being separated
into columns and rows.
I'm not a programmer, so I'll mention that I use the Sublime editor and run the
script with the terminal (it works properly, the script ends with .py and I
used the #!/usr/bin/env python command)
Thanks, everyone.
Answer: Use the [csv](https://docs.python.org/3.5/library/csv.html) module, using
`tab` as the delimiter:
import csv
with open("Azimuth.txt") as f:
rows = csv.reader(f, delimiter="\t")
for row in rows:
# do whatever
If you want a list of all the rows you can `rows = list(csv.reader(f,
delimiter="\t"))` but you can iterate a row at a time using the logic above.
You also never need to call `readlines()` unless you actually want a list, in
python you can iterate over the file object so if you were to parse manually,
you could:
with open("Azimuth.txt") as f:
rows = [line.split("\t") for line in f]
Or line by line:
with open("Azimuth.txt") as f:
for line in f:
row = line.split("\t")
# use row ...
|
Tensorflow: No shape function registered for standard op: ExtractGlimpse. Where do I add my code for the shape function?
Question: I am trying to build a tensorflow graph using `tf.image.extract_glimpse`.
Unfortunately I think there is a bug in the API itself. I am receiving the
error `No shape function registered for standard op: ExtractGlimpse`
There is actually the following code in `/usr/local/lib/python2.7/dist-
packages/tensorflow/python/ops/attentions_ops.py`:
@ops.RegisterShape("ExtractGlimpse")
def _ExtractGlimpseShape(op):
"""Shape function for ExtractGlimpse op."""
input_shape = op.inputs[0].get_shape().with_rank(4)
unused_size_shape = op.inputs[1].get_shape().merge_with(
tensor_shape.vector(2))
offsets_shape = op.inputs[2].get_shape().merge_with(
input_shape[:1].concatenate([2]))
offsets_shape = offsets_shape
size_value = tensor_util.ConstantValue(op.inputs[1])
if size_value is not None:
height = size_value[0]
width = size_value[1]
else:
height = None
width = None
return [tensor_shape.TensorShape(
[input_shape[0], height, width, input_shape[3]])]
For some reason this function isn't being used properly, however it's not
entirely clear from the
[documentation](https://www.tensorflow.org/versions/master/how_tos/adding_an_op/index.html#implement-
a-shape-function-in-python) where exactly this function is called.
In which python file is this function supposed to be called, and how does the
call need to be used?
Thanks in advance
Answer: This looks like a bug in TensorFlow: the shape function is defined in the
correct place, but the code in `attention_ops.py` is never executed, so the
shape function is never registered.
I will fix it upstream, but in the meantime you can fix it by adding the
following line to your program:
from tensorflow.python.ops import attention_ops
|
PDF Miner PDFEncryptionError
Question: I'm trying to extract text from pdf-files and later try to identify the
references. I'm using pdfminer 20140328. With unencrypted files it's running
well, but now I have a file where I get:
> File "C:\Tools\Python27\lib\site-packages\pdfminer\pdfdocument.py", line
> 348, in _initialize_password
>
> raise PDFEncryptionError('Unknown algorithm: param=%r' % param)
>
> pdfminer.pdfdocument.PDFEncryptionError: Unknown algorithm: param={'CF':
> {'StdCF': {'Length': 16, 'CFM': /AESV2, 'AuthEvent': /DocOpen}}, 'O':
> '}\xe2>\xf1\xf6\xc6\x8f\xab\x1f"O\x9bfc\xcd\x15\xe09~2\xc9\\\x87\x03\xaf\x17f>\x13\t^K\x99',
> 'Filter': /Standard, 'P': -1548, 'Length': 128, 'R': 4, 'U':
> 'Kk>\x14\xf7\xac\xe6\x97\xb35\xaby!\x04|\x18(\xbfN^Nu\x8aAd\x00NV\xff\xfa\x01\x08',
> 'V': 4, 'StmF': /StdCF, 'StrF': /StdCF}
I checked with pdfinfo that this file seems to be AES encrypted, but I can
open it without any problems. So I have two questions:
 * First: how is it possible that a document is encrypted but I can open it without a password?
 * Secondly: how do I make PDFMiner read that file properly? Somewhere I read that installing pycrypto provides additional algorithms, but it didn't fix my problem.
Many thanks.
Answer: I had the same problem with some documents. It looks like the document is
encrypted, but the password is blank. That's why we can easily open it without
a password.
I ended up fixing the problem with Ubuntu's _qpdf_ utility. It can decrypt the
file if you provide a password (blank in my case). I implemented a shell
command in Python script that would decrypt the document with an empty
password:
from subprocess import call
call('qpdf --password=%s --decrypt %s %s' %('', pdf_filename, pdf_filename_decr), shell=True)
where
`pdf_filename` - filename of encrypted pdf,
`pdf_filename_decr` - filename of a new decrypted copy.
_pdfminer_ should extract the text now.
|
Python - Input contains NaN, infinity or a value too large for dtype('float64')
Question: I am new to Python. I am trying to use sklearn.cluster. Here is my code:
from sklearn.cluster import MiniBatchKMeans
kmeans=MiniBatchKMeans(n_clusters=2)
kmeans.fit(df)
But I get the following error:
50 and not np.isfinite(X).all()):
51 raise ValueError("Input contains NaN, infinity"
---> 52 " or a value too large for %r." % X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float64')
I checked that there is no NaN or infinity value, so there is only one
option left. However, my data info tells me that all variables are float64, so
I don't understand where the problem comes from.
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 362358 entries, 135 to 4747145
Data columns (total 8 columns):
User 362358 non-null float64
Hour 362352 non-null float64
Minute 362352 non-null float64
Day 362352 non-null float64
Month 362352 non-null float64
Year 362352 non-null float64
Latitude 362352 non-null float64
Longitude 362352 non-null float64
dtypes: float64(8)
memory usage: 24.9 MB
Thanks a lot,
Answer: I think that [fit()](http://scikit-
learn.org/stable/modules/generated/sklearn.cluster.MiniBatchKMeans.html#sklearn.cluster.MiniBatchKMeans.fit)
accepts only "array-like, shape = [n_samples, n_features]", not pandas
dataframes. So try to pass the values of the dataframe into it as:
kmeans=MiniBatchKMeans(n_clusters=2)
kmeans.fit(df.values)
Or shape them in order to run the function correctly. Hope that helps.
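One more detail worth double-checking, based on the `df.info()` output in the question: the index has 362358 entries but most columns have only 362352 non-null values, so a few rows do contain NaN. Dropping them first may be the actual fix (a sketch):
    clean = df.dropna()  # removes the handful of rows with missing values
    kmeans.fit(clean.values)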
|
python something like assertFunctionNotCalled?
Question: Edit: This question refers to python 3.5
In python unit testing, is there a way to assert that a given block of code is
not called?
I have a function I am testing that can follow either of two branches: a
faster, more reliable branch if the user has logged in before, or a slower
less reliable branch if this is their first visit. Basically, the first time a
user logs in, the program has to request information about them from separate
server, which can be slow (and, occasionally, down). Once it has the
information, however, it caches it locally and should just use the local cache
for the data on subsequent logins.
The end result of the function is the same either way, so there is no way for
me to know which code branch it took to get the information by simply looking
at the output of the function. So ideally I'd like to be able to do
effectively an `assertFunctionNotCalled` to test that it is pulling from the
local cache on the second (and subsequent) login attempt. How should I best
test this?
Answer: Use the `called` attribute of the mock, like so:
from mock import patch
with patch('something') as mock_something:
self.assertFalse(mock_something.called)
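Applied to the login scenario, a hedged sketch (the module path and function names are illustrative, not from the question):
    from unittest import mock
    # myapp.auth.fetch_from_server and login() are placeholders for your code.
    with mock.patch('myapp.auth.fetch_from_server') as mock_fetch:
        login(user)                     # first login may hit the server
        mock_fetch.reset_mock()
        login(user)                     # second login should use the cache
        mock_fetch.assert_not_called()  # available in Python 3.5's unittest.mock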
|
How to consume JSON response in Python?
Question: I am creating a Django web app. There is a function which creates a JSON
response like this:
def rest_get(request, token):
details = Links.get_url(Links, token)
result={}
if len(details)>0:
result['status'] = 200
result['status_message'] = "OK"
result['url'] = details[0].url
else:
result['status'] = 404
result['status_message'] = "Not Found"
result['url'] = None
return JsonResponse(result)
And I get the response in the web browser like this:
{"status": 200, "url": "http://www.bing.com", "status_message": "OK"}
Now from another function I want to consume that response and extract the data
out of it. How do I do it?
Answer: You can use Python's built-in `json` module for this. For example:
json_string = '{"first_name": "tom", "last_name":"harry"}'
import json
parsed_json = json.loads(json_string)
print(parsed_json['first_name'])
"tom"
Since you have created a web app. I am assuming you have exposed a URL from
which you can get you JSON response, for example
<http://jsonplaceholder.typicode.com/posts/1>.
import urllib2
import json
data = urllib2.urlopen("http://jsonplaceholder.typicode.com/posts/1").read()
parsed_json = json.loads(data)
The `urlopen` function sends a `HTTP GET` request at the given URL.
`parsed_json` is a variable of the type map and you can extract the required
data from it.
print parsed_json['userId']
1
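If the third-party `requests` library is available, the same fetch is a bit shorter (a sketch against the same example URL):
import requests

response = requests.get("http://jsonplaceholder.typicode.com/posts/1")
parsed_json = response.json()  # decodes the JSON body into a dict
print parsed_json['userId']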
|
Rotate point about another point in degrees python
Question: If you had a point (in 2d), how could you rotate that point by degrees around
the other point (the origin) in python?
You might, for example, tilt the first point around the origin by 10 degrees.
Basically you have one point PointA and origin that it rotates around. The
code could look something like this:
PointA=(200,300)
origin=(100,100)
NewPointA=rotate(origin,PointA,10) #The rotate function rotates it by 10 degrees
Answer: The following `rotate` function performs a rotation of the point `point` by
the angle `angle` (counterclockwise, in radians) around `origin`, in the
Cartesian plane, with the usual axis conventions: x increasing from left to
right, y increasing vertically upwards. All points are represented as length-2
tuples of the form `(x_coord, y_coord)`.
import math
def rotate(origin, point, angle):
"""
Rotate a point counterclockwise by a given angle around a given origin.
The angle should be given in radians.
"""
ox, oy = origin
px, py = point
qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)
qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)
return qx, qy
If your angle is specified in degrees, you can convert it to radians first
using `math.radians`. For a clockwise rotation, negate the angle.
Example: rotating the point `(3, 4)` around an origin of `(2, 2)`
counterclockwise by an angle of 10 degrees:
>>> point = (3, 4)
>>> origin = (2, 2)
>>> rotate(origin, point, math.radians(10))
(2.6375113976783475, 4.143263683691346)
Note that there's some obvious repeated calculation in the `rotate` function:
`math.cos(angle)` and `math.sin(angle)` are each computed twice, as are `px -
ox` and `py - oy`. I leave it to you to factor that out if necessary.
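For reference, a possible factored variant that also takes the angle in degrees, matching the `rotate(origin, PointA, 10)` call from the question:
import math

def rotate_degrees(origin, point, angle_deg):
    """Rotate point counterclockwise around origin by an angle in degrees."""
    angle = math.radians(angle_deg)
    ox, oy = origin
    px, py = point
    cos_a, sin_a = math.cos(angle), math.sin(angle)  # each computed once
    dx, dy = px - ox, py - oy                        # offsets computed once
    return ox + cos_a * dx - sin_a * dy, oy + sin_a * dx + cos_a * dy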
|
Feasibility of automatic cycle breaker for `std::shared_ptr`
Question: C++11 introduced reference-counted smart pointers, `std::shared_ptr`. Being
reference counted, these pointers are unable to automatically reclaim cyclic
data structures. However, automatic collection of reference cycles was shown
to be possible, for example by [Python](http://arctrix.com/nas/python/gc/) and
[PHP](http://php.net/manual/en/features.gc.collecting-cycles.php). To
distinguish this technique from garbage collection, the rest of the question
will refer to it as _cycle breaking_.
Given that there seem to be no proposals to add equivalent functionality to
C++, is there a fundamental reason why a cycle breaker similar to the ones
already deployed in other languages wouldn't work for `std::shared_ptr`?
Note that this question doesn't boil down to "why isn't there a GC for C++",
which [has been asked before](http://stackoverflow.com/q/147130/1600898). A
C++ GC normally refers to a system that automatically manages _all_
dynamically allocated objects, typically implemented using some form of
Boehm's conservative collector. It has been [pointed
out](http://stackoverflow.com/a/19466425/1600898) that such a collector is not
a good match for RAII. Since a garbage collector primarily manages memory, and
might not even be called until there is a memory shortage, and C++ destructors
manage other resources, relying on the GC to run destructors would introduce
non-determinism at best and resource starvation at worst. It has also been
pointed out that a full-blown GC is largely unnecessary in the presence of the
more explicit and predictable smart pointers.
However, a library-based cycle breaker for smart pointers (analogous to the
one used by reference-counted interpreters) would have important differences
from a general-purpose GC:
* It only cares about objects managed through `shared_ptr`. Such objects _already_ participate in shared ownership, and thus have to handle delayed destructor invocation, whose exact timing depends on ownership structure.
* Due to its limited scope, a cycle breaker is unconcerned with patterns that break or slow down Boehm GC, such as pointer masking or huge opaque heap blocks that contain an occasional pointer.
* It can be opt-in, like `std::enable_shared_from_this`. Objects that don't use it don't have to pay for the additional space in the control block to hold the cycle breaker metadata.
* A cycle breaker doesn't require a comprehensive list of "root" objects, which is hard to obtain in C++. Unlike a mark-sweep GC which finds all live objects and discards the rest, a cycle breaker only traverses objects that can form cycles. In existing implementations, the type needs to provide help in the form of a function that enumerates references (direct or indirect) to other objects that can participate in a cycle.
* It relies on regular "destroy when reference count drops to zero" semantics to destroy cyclic garbage. Once a cycle is identified, the objects that participate in it are requested to clear their strongly-held references, for example by calling `reset()`. This is enough to break the cycle and would automatically destroy the objects. Asking the objects to provide and clear its strongly-held references (on request) makes sure that the cycle breaker does not break encapsulation.
Lack of proposals for automatic cycle breaking indicates that the idea was
rejected for practical or philosophical reasons. I am curious as what the
reasons are. For completeness, here are some possible objections:
* "It would introduce non-deterministic destruction of cyclic `shared_ptr` objects." If the programmer were in control of the cycle breaker's invocation, it would not be non-deterministic. Also, once invoked, the cycle breaker's behavior would be predictable - it would destroy all currently known cycles. This is akin to how `shared_ptr` destructor destroys the underlying object once its reference count drops to zero, despite the possibility of this causing a "non-deterministic" cascade of further destructions.
* "A cycle breaker, just like any other form of garbage collection, would introduce pauses in program execution." Experience with runtimes that implement this feature shows that the pauses are minimal because the GC only handles cyclic garbage, and all other objects are reclaimed by reference counting. If the cycle detector is never invoked automatically, the cycle breaker's "pause" could be a predictable consequence of running it, similar to how destroying a large `std::vector` might run a large number of destructors. (In Python, the cyclic gc is run automatically, but there is API to [disable it](https://docs.python.org/2/library/gc.html#gc.disable) temporarily in code sections where it is not needed. Re-enabling the GC later will pick up all cyclic garbage created in the meantime.)
* "A cycle breaker is unnecessary because cycles are not that frequent and they can be easily avoided using `std::weak_ptr`." Cycles in fact turn up easily in many simple data structures - e.g. a tree where children have a back-pointer to the parent, or a doubly-linked list. In some cases, cycles between heterogenous objects in complex systems are formed only occasionally with certain patterns of data and are hard to predict and avoid. In some cases it is far from obvious which pointer to replace with the weak variant.
Answer: There are a number of issues to be discussed here, so I've rewritten my post
to better condense this information.
## Automatic cycle detection
Your idea is to have a `circle_ptr` smart pointer (I know you want to add it
to `shared_ptr`, but it's easier to talk about a new type to compare the two).
The idea is that, if the type that the smart pointer is bound to derives from
some `cycle_detector_mixin`, this activates automatic cycle detection.
This mixin also requires that the type implement an interface. It must provide
the ability to enumerate all of the `circle_ptr` instances directly owned by
that instance. And it must provide the means to invalidate one of them.
I submit that this is a highly impractical solution to this problem. It is
excessively fragile and requires immense amounts of manual work from the user.
And therefore, it is not appropriate for inclusion in the standard library.
And here are some reasons why.
### Determinism and cost
> "It would introduce non-deterministic destruction of cyclic `shared_ptr`
> objects." Cycle detection only happens when a `shared_ptr`'s reference count
> drops to zero, so the programmer is in control of when it happens. It would
> therefore not be non-deterministic. Its behavior would be predictable - it
> would destroy all currently known cycles from that pointer. This is akin to
> how `shared_ptr` destructor destroys the underlying object once its
> reference count drops to zero, despite the possibility of this causing a
> "non-deterministic" cascade of further destructions.
This is true, but not in a helpful way.
There is a substantial difference between the determinism of regular
`shared_ptr` destruction and the determinism of what you suggest. Namely:
`shared_ptr` is cheap.
`shared_ptr`'s destructor does an atomic decrement, followed by a conditional
test to see if the value was decremented to zero. If so, a destructor is
called and memory is freed. That's it.
What you suggest makes this more complicated. Worst-case, every time a
`circle_ptr` is destroyed, the code will have to walk through data structures
to determine if there's a cycle. Most of the time, cycles won't exist. But it
still has to look for them, just to make sure. And it must do so _every single
time_ you destroy a `circle_ptr`.
Python et. al. get around this problem because they are built into the
language. They are able to see _everything_ that's going on. And therefore,
they can detect when a pointer is assigned at the time those assignments are
made. In this way, such systems are constantly doing small amounts of work to
build up cyclic chains. Once a reference goes away, it can look at its data
structures and take action if that creates a cyclical chain.
But what you're suggesting is a _library feature_ , not a language feature.
And library types can't really do that. Or rather, they can, but only with
help.
Remember: an instance of `circle_ptr` cannot know the subobject it is a member
of. It cannot _automatically_ transform a pointer to itself into a pointer to
its owning class. And without that ability, it cannot update the data
structures in the `cycle_detector_mixin` that owns it if it is reassigned.
Now, it could _manually_ do this, but only with help from its owning instance.
Which means that `circle_ptr` would need a set of constructors that are given
a pointer to its owning instance, which derives from `cycle_detector_mixin`.
And then, its `operator=` would be able to inform its owner that it has been
updated. Obviously, the copy/move assignment would _not_ copy/move the owning
instance pointer.
Of course, this requires the owning instance to give a pointer to itself to
_every`circle_ptr`_ that it creates. In every constructor&function that
creates `circle_ptr` instances. Within itself and any classes it owns which
are not also managed by `cycle_detection_mixin`. Without fail. This creates a
degree of fragility in the system; manual effort must be expended for each
`circle_ptr` instance owned by a type.
This also requires that `circle_ptr` contain 3 pointer types: a pointer to the
object you get from `operator*`, a pointer to the actual managed storage, and
a pointer to that instance's owner. The reason that the instance must contain
a pointer to its owner is that it is per-instance data, not information
associated with the block itself. It is the instance of `circle_ptr` that
needs to be able to tell its owner when it is rebound, so the instance needs
that data.
And this must be _static_ overhead. You can't know when a `circle_ptr`
instance is within another type and when it isn't. So every `circle_ptr`, even
those that don't use the cycle detection features, must bear this 3 pointer
cost.
So not only does this require a large degree of fragility, it's also
expensive, bloating the type's size by 50%. Replacing `shared_ptr` with this
type (or more to the point, augmenting `shared_ptr` with this functionality)
is just not viable.
On the plus side, you no longer need users who derive from
`cycle_detector_mixin` to implement a way to fetch the list of `circle_ptr`
instances. Instead, you have the class register itself with the `circle_ptr`
instances. This allows `circle_ptr` instances that could be cyclic to talk
directly to their owning `cycle_detector_mixin`.
So there's something.
### Encapsulation and invariants
The need to be able to tell a class to invalidate one of its `circle_ptr`
objects fundamentally changes the way the class can interact with any of its
`circle_ptr` members.
An invariant is some state that a piece of code assumes is true because it
should be logically impossible for it to be false. If you check that a `const
int` variable is > 0, then you have established an invariant for later code
that this value is positive.
Encapsulation exists to allow you to be able to build invariants within a
class. Constructors alone can't do it, because external code could modify any
values that the class stores. Encapsulation allows you to prevent external
code from making such modifications. And therefore, you can develop invariants
for various data stored by the class.
This is what encapsulation is _for_.
With a `shared_ptr`, it is possible to build an invariant around the existence
of such a pointer. You can design your class so that the pointer is never
null. And therefore, nobody has to check for it being null.
That's not the case with `circle_ptr`. If you implement the
`cycle_detector_mixin`, then your code _must_ be able to handle the case of
any of those `circle_ptr` instances becoming null. Your destructor therefore
cannot assume that they are valid, nor can any code that your destructor calls
make that assumption.
Your class therefore _cannot_ establish an invariant with the object pointed
to by `circle_ptr`. At least, not if it's part of a `cycle_detector_mixin`
with its associated registration and whatnot.
You can argue that your design does not _technically_ break encapsulation,
since the `circle_ptr` instances can still be private. But the class is
willingly _giving up_ encapsulation to the cycle detection system. And
therefore, the class can no longer ensure certain kinds of invariants.
That sounds like breaking encapsulation to me.
### Thread safety
In order to access a `weak_ptr`, the user must `lock` it. This returns a
`shared_ptr`, which ensures that the object will remain alive (if it still
was). Locking is an atomic operation, just like reference
incrementing/decrementing. So this is all thread-safe.
`circle_ptr`s may not be very thread safe. It may be possible for a
`circle_ptr` to become invalid from another thread, if the other thread
released the last non-circular reference to it.
I'm not entirely sure about this. It may be that such circumstances only
appear if you've already had a data race on the object's destruction, or are
using a non-owning reference. But I'm not sure that your design can be thread
safe.
### Virulence factors
This idea is incredibly viral. _Every other type_ where cyclic references can
happen must implement this interface. It's not something you can put on one
type. In order to get the benefits, every type that could participate in a
cyclical reference must use it. Consistently and correctly.
If you try to make `circle_ptr` require that the object it manages implement
`cycle_detector_mixin`, then you make it impossible to use such a pointer with
any other type. It wouldn't be a replacement of (or augmentation for)
`shared_ptr`. So there is no way for a compiler to help detect accidental
misuse.
Sure, there are accidental misuses of `enable_shared_from_this` that cannot be
detected by compilers. However, that is not a viral construct. It is therefore
only a problem for those who _need_ this feature. By contrast, the only way to
get a benefit from `cycle_detector_mixin` is to use it as comprehensively as
possible.
Equally importantly, because this idea is so viral, you will be using it a
lot. And therefore, you are far more likely to encounter the multiple-
inheritance problem than users of `enable_shared_from_this`. And that's not a
minor issue. Especially since `cycle_detector_mixin` will likely use
`static_cast` to access the derived class, so you won't be able to use virtual
inheritance.
### Summation
So here is what you must do, without fail, in order to detect cycles, none of
which the compiler will verify:
1. Every class participating in a cycle must be derived from `cycle_detector_mixin`.
2. Anytime a `cycle_detector_mixin`-derived class constructs a `circle_ptr` instance within itself (either directly or indirectly, but not within a class that itself derives from `cycle_detector_mixin`), pass a pointer to yourself to that `circle_ptr`.
3. Don't assume that any `circle_ptr` subobject of a class is valid. Possibly even to the extent of becoming invalid within a member function thanks to threading issues.
And here are the costs:
1. Cycle-detecting data structures within `cycle_detector_mixin`.
2. Every `circle_ptr` must be 50% bigger, even the ones that aren't used for cycle detection.
## Misconceptions about ownership
Ultimately, I think this whole idea comes down to a misconception about what
`shared_ptr` is actually for.
> "A cycle detector is unnecessary because cycles are not that frequent and
> they can be easily avoided using `std::weak_ptr`." Cycles in fact turn up
> easily in many simple data structures - e.g. a tree where children have a
> back-pointer to the parent, or a doubly-linked list. In some cases, cycles
> between heterogenous objects in complex systems are formed only occasionally
> with certain patterns of data and are hard to predict and avoid. In some
> cases it is far from obvious which pointer to replace with the weak variant.
This is a very common argument for general-purpose GC. The problem with this
argument is that it usually makes an assumption about the use of smart
pointers that just isn't valid.
To use a `shared_ptr` _means something_. If a class stores a `shared_ptr`,
that represents that the class has _ownership_ of that object.
So explain this: why does a node in a linked list need to _own_ both the next
and previous nodes? Why does a child node in a tree need to _own_ its parent
node? Oh, they need to be able to _reference_ the other nodes. But they do not
need to control the lifetime of them.
For example, I would implement a tree node as an array of `unique_ptr` to
their children, with a single pointer to the parent. A _regular_ pointer, not
a smart pointer. After all, if the tree is constructed correctly, the parent
will own its children. So if a child node exists, its parent node _must exist_; the child cannot exist without having a valid parent.
With a double linked list, I might have the left pointer be a `unique_ptr`,
with the right being a regular pointer. Or vice-versa; one way is no better
than the other.
Your mentality seems to be that we should always be using `shared_ptr` for
things, and just let the automatic system work out how to deal with the
problems. Whether it's circular references or whatever, just let the system
figure it out.
That's not what `shared_ptr` is for. The goal of smart pointers is _not_ that
you don't think about ownership anymore; it's that you can _express_ ownership
relationships directly in code.
## Overall
How is any of this an improvement over using `weak_ptr` to break cycles?
Instead of recognizing when cycles might happen and doing extra work, you now
do a bunch of extra work _everywhere_. Work that is exceedingly fragile; if
you do it wrong, you're no better off than if you missed a place where you
should have used `weak_ptr`. Only it's worse, because you probably think your
code is safe.
The illusion of safety is worse than no safety at all. At least the latter
makes you careful.
Could you implement something like this? Possibly. Is it an appropriate type
for the standard library? No. It's just too fragile. You must implement it
correctly, at all times, in all ways, everywhere that cycles might appear...
or you get nothing.
## Authoritative references
There can be no authoritative references for something that was [never
proposed, suggested, or even imagined](http://www.open-
std.org/JTC1/SC22/WG21/docs/papers/) for standardization. Boost has no such
type, and such constructs were [never even
considered](http://www.boost.org/doc/libs/1_60_0/libs/smart_ptr/smart_ptr.htm#History)
for `boost::shared_ptr`. Even the [very first smart pointer paper
(PDF)](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/1994/N0555.pdf)
never considered the possibility. The subject of expanding `shared_ptr` to
automatically be able to handle cycles through some manual effort has [never
been discussed even on the standard proposal
forums](https://groups.google.com/a/isocpp.org/forum/#!forumsearch/) where
**_far_** stupider ideas have been deliberated.
The closest to a reference I can provide is [this paper from 1994 about a
reference-counted smart
pointer](https://www.usenix.org/legacy/publications/library/proceedings/c++94/full_papers/ellis.a).
This paper basically talks about making the equivalent of `shared_ptr` and
`weak_ptr` part of the language (this was in the _early_ days; they didn't
even think it was possible to write a `shared_ptr` that allowed casting a
`shared_ptr<T>` to a `shared_ptr<U>` when `U` is a base of `T`). But even so,
it specifically says that cycles would not be collected. It doesn't spend much
time on why not, but it does state this:
> However, cycles of collected objects with clean-up functions are
> problematic. If A and B are reachable from each other, then destroying
> either one first will violate the ordering guarantee, leaving a dangling
> pointer. If the collector breaks the cycle arbitrarily, programmers would
> have no real ordering guarantee, and subtle, time-dependent bugs could
> result. To date, no one has devised a safe, general solution to this problem
> [Hayes 92].
This is essentially the encapsulation/invariant issue I pointed out: making a
pointer member of a type invalid breaks an invariant.
So basically, few people have even considered the possibility, and those few
who did quickly discarded it as being impractical. If you truly believe that
they're wrong, the single best way to prove it is by implementing it yourself.
Then propose it for standardization.
|
How do I play a .wav file using Tkinter on Mac?
Question: I am trying to get a laughing track to play whenever you die in my text
adventure game, I am using Python but since I have no clue how to use Tkinter
on Python. I never used TKinter so I need some help. If you are wondering
where the file is, It is in a folder I named **Sound Files**.
Answer: `winsound` is built in, but note that it is Windows-only, so it will not work on a Mac (and Tkinter itself has no sound support). On Windows it would be:
import winsound
winsound.PlaySound('laugh.wav', winsound.SND_FILENAME)
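On a Mac, one option is to call the built-in `afplay` command-line player (a sketch; the relative path assumes the script runs from the directory containing the **Sound Files** folder):
import subprocess

# afplay ships with OS X; this call blocks until the clip finishes
subprocess.call(['afplay', 'Sound Files/laugh.wav'])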
|
Python Subset dataframe with another dataframe
Question: I'm transitioning from R to Python and trying to subset a dataframe with a
field in another dataframe. What would be the Python Equivalent for this R
code:
final_solution <- subset(df1, item %in% df2$item)
Thanks
Answer: We can try
df1[df1['item'].isin(df2['item'])]
(bracket indexing is used rather than attribute access, since a column name can collide with a DataFrame attribute).
Using a reproducible example (with `pandas`)
import pandas as pd
df1 = pd.DataFrame({'item' : [1, 2, 3, 4],
'fruit' : ['mango', 'apple', 'banana', 'mango']})
df2 = pd.DataFrame({'item' : [1, 2]})
print(df1[df1['item'].isin(df2['item'])])
gives the output
# fruit item
#0 mango 1
#1 apple 2
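If only the matching rows are needed, an equivalent approach is an inner join (using the same frames):
# inner merge keeps only the rows of df1 whose 'item' appears in df2
print(df1.merge(df2, on='item'))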
|
Sending a constant rate data stream from Droid via TCP/IP
Question: in my quest to learn Java / Android development, I’m running into lots of
roadblocks. Mainly because I don’t really know much about threading and
communication between threads/processes. I’m trying to stream the IMU data
from an android device to a python application on a computer. Whenever the
sensor values change, a sensor listener saves the current values into a
variable for the network handler to access.
The network handler in turn is supposed to run on a timer, sending the values
and a current timestamp at a more or less fixed rate of 33Hz (perhaps a bit
fast? well, I’d be willing to accept as slow as 10Hz, but no slower than
that). Anyway, when I tested this, I could see on the computer interface that
the data isn’t nearly coming in at a steady pace of 30 per second, but rather
comes in surges, sometimes not coming at all for a second, and overall
accumulating quite the delay (ie. the later the values are, the more delayed
they come in). I understand there might be some variability in the network and
some lags, but I would at least like the overall pace to at least be correct,
ie that it doesn’t get worse and worse the longer I’m sending.
Considering the devices are both on a common wifi network, and I’m capable of
streaming 1080p video without any lags over wifi, I’m fairly confident that
the protocol should be able to handle a 64 Byte string every 30ms without
trouble. To eliminate the sensor reader as a problem source, I made a
minimum working example that simply sends a string every 30ms, without any
sensor reading. I basically got this code from various stackoverflow posts,
and modified it until it more or less did what I wanted. The problem is that
the network interface runs in an AsynchronousTask, for which I am uncertain
how to access it once it has been started. My theory is that it’s wasting
resources to open a new socket for every new data packet, but I’m unsure how
to open the socket once in the background and then pass the values to it on a
timer and tell it to send.
Here’s the basic activity I made to test this:
package com.jamesdoesntlikejava.motionlearning15;
import android.support.v7.app.ActionBarActivity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.Toast;
import java.util.Timer;
import java.util.TimerTask;
public class SendValuesActivity extends ActionBarActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_send_values);
// creating timer task, timer
final Timer timer = new Timer();
TimerTask taskNew = new TimerTask() {
@Override
public void run() {
int counter = 0;
int numsteps = 333;
String params[] = new String[2];
if (counter < numsteps) {
params[0] = "192.168.1.33";
long currentTime = System.currentTimeMillis();
params[1] = Long.toString(currentTime)+"blablabla";
new ServerCommunicationTask().execute(params);
counter++;
} else {
timer.cancel();
timer.purge();
}
}
};
// scheduling the task at fixed rate delay
Toast.makeText(this, "Sending Values in 1s...", Toast.LENGTH_SHORT).show();
timer.scheduleAtFixedRate(taskNew,1000,30);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.menu_send_values, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
//noinspection SimplifiableIfStatement
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
And this is the class for doing the networking:
package com.jamesdoesntlikejava.motionlearning15;
import android.os.AsyncTask;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.net.UnknownHostException;
public class ServerCommunicationTask extends AsyncTask<String, Void, String> {
public final static int TCP_SERVER_PORT = 13337;
// params are 0: the target IP and 1: the message to send.
@Override
protected String doInBackground(String[] params) {
String TCP_SERVER_IP = params[0];
try {
Socket s = new Socket(TCP_SERVER_IP, TCP_SERVER_PORT);
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(s.getOutputStream()));
//send output msg
String outMsg = params[1];
out.write(outMsg);
out.flush();
//close connection
s.close();
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return "success";
}
@Override
protected void onPostExecute(String response) {
}
}
Running on a Moto G LTE (updated 1st gen) with android 5.1. Any hints are
appreciated, thanks!
Answer: Instead of AsyncTask and always opening new connection, you can use Thread.
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.net.UnknownHostException;
import java.util.ArrayList;
public class ServerCommunicationThread extends Thread {
public final static int TCP_SERVER_PORT = 13337;
private ArrayList<String> mMessages = new ArrayList<>();
private String mServer;
private boolean mRun = true;
public ServerCommunicationThread(String server) {
this.mServer = server;
}
@Override
public void run() {
while (mRun) {
Socket s = null;
try {
s = new Socket(mServer, TCP_SERVER_PORT);
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(s.getOutputStream()));
while (mRun) {
String message;
// Wait for message
synchronized (mMessages) {
while (mMessages.isEmpty()) {
try {
mMessages.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
// Get message and remove from the list
message = mMessages.get(0);
mMessages.remove(0);
}
//send output msg
String outMsg = message;
out.write(outMsg);
out.flush();
}
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
//close connection
if (s != null) {
try {
s.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
public void send(String message) {
synchronized (mMessages) {
mMessages.add(message);
mMessages.notify();
}
}
public void close() {
mRun = false;
}
}
You can keep the thread running with connection opened and send message when
needed.
ServerCommunicationThread thread = new ServerCommunicationThread("192.168.1.33");
thread.start();
...
thread.send("blablabla");
...
thread.send("blablabla");
...
thread.close();
Please note that this code is not tested.
|
Using Try/Except statement in Python
Question: my code is as follows:
import mechanize
import urlparse
import util
url = "https://math.berkeley.edu/~strain/"
urls = [url]
visited = []
link_pdf = []
br = mechanize.Browser()
br.open(url)
while len(urls) > 0:
try:
br.open(urls[0])
urls.pop(0)
for link in br.links():
new_url = urlparse.urljoin(link.base_url, link.url)
print new_url
urls.append(new_url)
except:
print "error"
Below is how it runs: At some points, it will be like this:
https://math.berkeley.edu/~strain/Publications/tvm.pdf
error
https://math.berkeley.edu/~strain/Publications/vce.pdf
error
https://math.berkeley.edu/~strain/Publications/mclaughlin.strain.kdv.pdf
error
error
error
error
error
error
....
I only want it to print "error" four times. The script keeps running and printing "error" until, at some point, it prints nothing but "error" forever. I want my script to tolerate the error at most four times and then break. Can anyone show me how to do that? Thank you.
EDIT: I posted the code that causes the endless errors and how it runs.
Answer: If I am understanding your question correctly, you want to allow for at most four errors.
Count the number of errors you have had and let the loop end when you hit the limit. (The reason the errors repeat forever, by the way, is that `br.open(urls[0])` raises before `urls.pop(0)` runs, so the failing URL stays at the front of the list and is retried endlessly.)
errors = 0
while len(urls) > 0 and errors < 4:
try:
#doing something
except:
errors += 1
print "error"
|
Scapy OSError when importing scapy.all: 'wpcap.dll' does not exist
Question: I tried to import `scapy.all`, but got an error saying 'wpcap.dll' does not exist. Why does this happen? I am sure I downloaded everything correctly. I saved everything into a folder called `scapy`. The OSError comes from the `ctypes` module. Does Python expect me to have something pre-installed?
This is my error (I used scapy.scapy.all since I saved it into a folder)
>>> import scapy.scapy.all
WARNING: Windows support for scapy3k is currently in testing. Sniffing/sending/receiving packets should be working with WinPcap driver and Powershell. Create issues at https://github.com/phaethon/scapy
Traceback (most recent call last):
File "<pyshell#12>", line 1, in <module>
import scapy.scapy.all
File "C:\Python34\scapy\scapy\all.py", line 16, in <module>
from .arch import *
File "C:\Python34\scapy\scapy\arch\__init__.py", line 88, in <module>
from .windows import *
File "C:\Python34\scapy\scapy\arch\windows\__init__.py", line 23, in <module>
from scapy.scapy.arch import pcapdnet
File "C:\Python34\scapy\scapy\arch\pcapdnet.py", line 32, in <module>
from .winpcapy import *
File "C:\Python34\scapy\scapy\arch\winpcapy.py", line 26, in <module>
_lib=CDLL('wpcap.dll')
File "C:\Python34\lib\ctypes\__init__.py", line 351, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
>>>
Answer: Make sure you have [WinPcap](http://www.winpcap.org/) installed in order to
use scapy.
|
Why can't I compare these form variables using python in my mysql query?
Question: I am trying to check for a string that is being passed from a form in an html
page. So the form picks up the user name and then checks the database if it
already has been made. If it hasn't, it goes ahead and creates it. My errors
are in the part of the logic that looks up that user name.
Note, I have commented out some areas where various errors have popped up:
import mysql.connector
import web
from mysql.connector import Error
import cgi, cgitb
cgitb.enable()
conn = mysql.connector.connect(host='localhost', database='database', user='root', password='root')
cursor = conn.cursor()
form = cgi.FieldStorage()
username = form.getvalue('username')
password = form.getvalue('password')
# check_existence = """
# SELECT username FROM members WHERE username = '%s'
# """
check_existence = """
SELECT username FROM members WHERE username = %s
"""
# cursor.execute(check_existence, username)
# "Wrong number of arguments during string formatting")
cursor.execute(check_existence, (username))
# ^pushes down to con.commit
# cursor.execute(check_existence, (username,))
# ^wrpmg number of arguments during string formatting
# with comma, the error is in commit, with comma, its in execute
conn.commit()
matches = cursor.rowcount()
Now the error is pointing to conn.commit. Though this is depending on the
syntax, sometimes it points to the line above it. Error:
=> 203 conn.commit()
<class 'mysql.connector.errors.InternalError'>: Unread result found.
args = (-1, 'Unread result found.', None)
errno = -1
message = ''
msg = 'Unread result found.'
Answer: `commit()` is only needed for statements that modify the database (INSERT, UPDATE, DELETE); a SELECT does not require it. The "Unread result found" error means you executed a SELECT but never consumed its result set - call `cursor.fetchone()` or `cursor.fetchall()` before issuing anything else on the connection. Two further fixes: the query parameters must be a tuple, so write `cursor.execute(check_existence, (username,))` with the trailing comma, and `rowcount` is an attribute rather than a method, so `cursor.rowcount()` raises a TypeError - use `cursor.rowcount` or simply inspect the fetched rows.
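A sketch of the corrected lookup along those lines (same connection and form variables as above):
check_existence = "SELECT username FROM members WHERE username = %s"
cursor.execute(check_existence, (username,))  # parameters must be a tuple: note the comma

row = cursor.fetchone()  # consume the result to avoid "Unread result found"
if row is None:
    print "username is free, ok to create it"
else:
    print "username already exists"
# no conn.commit() needed: a SELECT does not modify the database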
|
How to parse text in multiple body tags with BeautifulSoup in Python?
Question: I want to parse this website: <https://www.flyingv.cc/project/3724>
and I want to fetch information from the HTML source code. For example, this value, 2830:
<span class="sharenumber" id="fb_share_span">2830</span>
But when I use BeautifulSoup to extract the text between the span tags, it fails completely.
My Python code is:
from bs4 import BeautifulSoup
res = requests.get("https://www.flyingv.cc/project/3724")
soup = BeautifulSoup(res.text, "html.parser")
test = soup.find('span', attrs={'id':'fb_share_span'})
print test
print test.txt
The output of python is:
<span class="sharenumber" id="fb_share_span"></span>
None
How can I extract the information in this scenario? Furthermore, this website has multiple body tags, so I suspect this is the reason why I failed, but I don't know how to handle this kind of problem. Thanks for your help, and sorry that my English is very poor.
Answer: The value you want is not present in the initial HTML served by <https://www.flyingv.cc>; it is filled in later by JavaScript from this XHR (AJAX) call:
[https://api.facebook.com/restserver.php?format=json&method=links.getStats&urls=https://www.flyingv.cc/project/3724](https://api.facebook.com/restserver.php?format=json&method=links.getStats&urls=https://www.flyingv.cc/project/3724)
Which outputs something like this:
[{
"url": "https:\/\/www.flyingv.cc\/project\/3724",
"normalized_url": "https:\/\/www.flyingv.cc\/project\/3724",
"share_count": 466,
"like_count": 1995,
"comment_count": 369,
"total_count": 2830,
"click_count": 0,
"comments_fbid": "673240602745001",
"commentsbox_count": 0
}]
So if you want to get every project's **`total_count`** value (which is the sum of **share_count** + **like_count** + **comment_count**), just change the **`urls`** parameter in the above link.
* * *
This code could help:
#coding:utf-8
import json
import requests
url_prefix = "https://api.facebook.com/restserver.php?format=json&method=links.getStats&urls="
project_url = "https://www.flyingv.cc/project/3724"
response = requests.get("{}{}".format(url_prefix, project_url))
data = json.loads(response.content)[0]
print data.get('total_count', None)
Output:
2830
|
Reload module from a folder?
Question: Since IPython Notebook does not reload a file after I modify my module, I am trying to reload it manually. There was a
[post](http://stackoverflow.com/questions/5516783/how-to-reload-python-module-
imported-using-from-module-import) on how to do similar things, but my problem
is when the .py file is in a subfolder, it does not work.
My original import works like this:
from myutils.MyClassFile import MyClass
while trying to reload:
reload(myutils.MyClassFile)
from myutils.MyClassFile import MyClass
I got the error:
name 'myutils' is not defined
Any suggestion on how to achive this.
Answer: In the notebook:
%load_ext autoreload
%autoreload 2
enables auto reload of all imported modules that changed for each execution of
a cell.
You can exclude modules with:
%aimport module_to_exclude
Alternatively use:
%autoreload 1
and whitelist what should be reloaded:
%aimport module_to_include
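As for the original `NameError`: `from myutils.MyClassFile import MyClass` binds only the name `MyClass`, not `myutils`, so `reload(myutils.MyClassFile)` has nothing to refer to. Importing the module by its full path first fixes that:
import myutils.MyClassFile             # binds the package name so reload can see it
reload(myutils.MyClassFile)            # Python 2; on Python 3 use importlib.reload
from myutils.MyClassFile import MyClass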
|
How to I create a ticking time series with Reactive Extensions for Python (RxPY)?
Question: ## My Setup:
I have a list of stock price data in (time, price) tuples:
from datetime import datetime
prices = [(datetime(2015, 1, 9), 101.9), (datetime(2015, 1, 12), 101.5), (datetime(2015, 1, 13), 101.7)]
which I want to make into an [RxPY](https://github.com/ReactiveX/RxPY)
`Observable` so that I may backtest a trading strategy tick-by-tick:
def myStrategy(date, price): # SELL if date is a Monday and price is above 101.6
strategy = 'SELL' if date.weekday() and price > 101.6 else 'BUY'
print 'date=%s price=%s strategy=%s' % (date, price, strategy)
I wish to start backtesting from 12 Jan 2015 so I assume I must use the
following scheduler:
from rx.concurrency import HistoricalScheduler
scheduler = HistoricalScheduler(datetime(2015, 1, 12))
To run my backtest I do:
from rx import Observable
observable = Observable.from_iterable(prices, scheduler=scheduler).timestamp()
observable.subscribe(lambda price: myStrategy(price.timestamp, price.value))
scheduler.start()
## Problem:
I expected to see:
date=2015-01-12 00:00:00 price=101.5 strategy=BUY
date=2015-01-13 00:00:00 price=101.7 strategy=SELL
but I got
date=2015-12-20 08:43:45.882000 price=(datetime.datetime(2015, 1, 9, 0, 0), 101.9) strategy=SELL
date=2015-12-20 08:43:45.882000 price=(datetime.datetime(2015, 1, 12, 0, 0), 101.5) strategy=SELL
date=2015-12-20 08:43:45.882000 price=(datetime.datetime(2015, 1, 13, 0, 0), 101.7) strategy=SELL
The problems are that:
* The timestamps are wrong: I get today's date of `2015-12-20 08:43:45.882000` instead of the historic dates (e.g. `datetime(2015, 1, 12)`)
* The price still contains the time component
* The scheduler did not start on 12 Jan 2015 as I requested, since I see that the data point from 9 Jan 2015 was still used.
I also tried using `scheduler.now()`:
observable.subscribe(lambda price: myStrategy(scheduler.now(), price.value))
but then the date was stuck on `date=2015-01-12 00:00:00` for some reason:
date=2015-01-12 00:00:00 price=(datetime.datetime(2015, 1, 9, 0, 0), 101.9) strategy=BUY
date=2015-01-12 00:00:00 price=(datetime.datetime(2015, 1, 12, 0, 0), 101.5) strategy=BUY
date=2015-01-12 00:00:00 price=(datetime.datetime(2015, 1, 13, 0, 0), 101.7) strategy=BUY
How do I fix the above and obtain the result I originally expected?
Answer: I'm also very new to Rx, but:
* I think the timestamp() needs the scheduler parameter. Otherwise it works on some default scheduler as per rx docs.
* You are passing the entire tuple (date, price) as `price` to myStrategy(); that's why it is printing the date.
"timestamp by default operates on the timeout Scheduler, but also has a
variant that allows you to specify the Scheduler by passing it in as a
parameter." <http://reactivex.io/documentation/operators/timestamp.html>
There is documentation only for RxJS, but the beauty of Rx is that everything carries over.
Please see if this can work for you.
observable = Observable.from_iterable(prices,scheduler=scheduler).timestamp(scheduler=scheduler)
observable.subscribe(lambda price: myStrategy(price.timestamp, price.value[1]))
scheduler.start()
|
How to write a Dockerfile for a custom python project?
Question: I'm pretty new to Docker, and I need to create the container to run Docker
container as an Apache Mesos task.
The problem is that I can't find any relevant examples. They all are centered
around Web development, which is not my case.
I have a pure Python project with large number of dependencies ( like Berkeley
Caffe or OpenCV ). How to write a Docker file to properly enroll all
dependecies ( and how to find them out?)
Answer: The docker hub registry contains a number of official language images, which
you can use as your base image.
* <https://hub.docker.com/_/python/>
The instructions tell you how you can build your python project, including the
importation of dependencies.
├── Dockerfile <-- Docker build file
├── requirements.txt <-- List of pip dependencies
└── your-daemon-or-script.py <-- Python script to run
Image supports both Python 2 and 3, you specify this in the Dockerfile:
FROM python:3-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
The base image uses special [ONBUILD instructions](https://docs.docker.com/engine/reference/builder/#onbuild) to do all the hard work for you: at build time it copies your project into the image and installs the dependencies listed in `requirements.txt` with pip.
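With the `Dockerfile` and `requirements.txt` in place, building and running the image comes down to (the image and container names here are placeholders):
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app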
|
run test cases in parallel python
Question: I would like to get some help for running multiple python test cases in
parallel using multiprocessing module. I have created a class FooTest with 10
test cases in it (testA, testB...). There is a test case test_foo outside the
class.
How can I run all the test cases in parallel using python multiprocessing
module? Thanks for the help.
import unittest
import time
def setUp():
print "module setup"
def test_foo():
pass
class FooTest(unittest.TestCase):
# preparing to test
def setUp(self):
""" Setting up for the test """
print "FooTest:setUp_:begin"
# ending the test
def tearDown(self):
"""Cleaning up after the test"""
print "FooTest:tearDown_:begin"
# test routine A
def testA(self):
"""Test routine A"""
print "FooTest:testA"
time.sleep(2)
# test routine B
def testB(self):
"""Test routine B"""
print "FooTest:testB"
# test routine C
def testC(self):
"""Test routine C"""
print "FooTest:testC"
# test routine D
def testD(self):
"""Test routine D"""
print "FooTest:testD"
# test routine E
def testE(self):
"""Test routine E"""
print "FooTest:testE"
time.sleep(2)
# test routine F
def testF(self):
"""Test routine F"""
print "FooTest:testF"
# test routine G
def testG(self):
"""Test routine G"""
print "FooTest:testG"
# test routine H
def testH(self):
"""Test routine H"""
print "FooTest:testH"
# test routine I
def testI(self):
"""Test routine I"""
print "FooTest:testI"
# test routine J
def testJ(self):
"""Test routine J"""
print "FooTest:testJ"
time.sleep(2)
Answer: According to [nose
documentation](http://nose.readthedocs.org/en/latest/doc_tests/test_multiprocess/multiprocess.html),
which you can also get by running `nosetests --help`:
> --processes=NUM Spread test run among this many processes. Set a number
> equal to the number of processors or cores in your machine for best results.
> Pass a negative number to have the number of processes automatically set to
> the number of cores. Passing 0 means to disable parallel testing. Default is
> 0 unless NOSE_PROCESSES is set. [NOSE_PROCESSES]
So just run `nosetests --processes=-1` to run your tests in parallel using all
the cores on your machine.
|
Saving a matplotlib animation in Linux
Question: I'm having a problem saving a matplotlib animation. When I execute the
following test script:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylim([0,11])
ax.set_xlim([0,100])
u, v, ims = [], [], []
u.append(0)
v.append(10)
for i in range(100):
u.append(i+1)
v.append(10)
ims.append(ax.plot(u, v, 'b-', linewidth=3.))
im_ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=3000,
blit=True)
im_ani.save('c.mp4')
I get the following error:
im_ani.save('c.mp4')
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 712, in save
with writer.saving(self._fig, filename, dpi):
AttributeError: 'str' object has no attribute 'saving'
Now according to this
[answer](http://stackoverflow.com/questions/24815717/trouble-saving-
matplotlib-animation), I need to install either ffmpeg or libav-tools. I tried
this and found ffmpeg was not available, however libav-tools did seem to
install properly. However, when I executed my script again, I still got the
same error as before.
I also (following the advice of
[this](http://stackoverflow.com/questions/23856990/cant-save-matplotlib-
animation) answer) tried doing
mywriter = animation.FFMpegWriter()
anim.save('mymovie.mp4',writer=mywriter)
but that didn't work either! It resulted in the following error:
File "anitest.py", line 22, in <module>
im_ani.save('mymovie.mp4',writer=mywriter)
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 712, in save
with writer.saving(self._fig, filename, dpi):
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 169, in saving
self.setup(*args)
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 159, in setup
self._run()
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 186, in _run
stdin=subprocess.PIPE)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Any help here would be much appreciated. I'm using Ubuntu 14.04. Thanks!
Answer: We arrived at a solution in the comments above. To summarise:
* The reason for that rather cryptic error message:
AttributeError: 'str' object has no attribute 'saving'
is [this bug in
matplotlib](https://github.com/matplotlib/matplotlib/issues/2651) which was
fixed in version 1.4.0 ([also mentioned
here](http://stackoverflow.com/q/20137792/1461210)).
* However, updating matplotlib to 1.4.0 or newer will _not_ address the root cause of the problem, which is simply that `ffmpeg` is not installed ([see here](http://stackoverflow.com/q/24815717/1461210)).
* OP was having difficulty installing `ffmpeg` because it was [dropped from the official Ubuntu repositories in version 14.04](http://askubuntu.com/a/432585/185188) (it was reinstated in Ubuntu 15.04). One work-around for those still using older versions of Ubuntu is to add [this unofficial PPA](https://launchpad.net/~mc3man/+archive/ubuntu/trusty-media):
$ sudo add-apt-repository ppa:mc3man/trusty-media
$ sudo apt-get update
$ sudo apt-get dist-upgrade # recommended on first use
$ sudo apt-get install ffmpeg
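Once ffmpeg is installed (and matplotlib is at 1.4.0 or newer), the original script's `im_ani.save('c.mp4')` should work as written; an explicit writer can also be passed, e.g. (a sketch, with arbitrary encoder settings):
import matplotlib.animation as animation

writer = animation.FFMpegWriter(fps=20, bitrate=1800)
im_ani.save('c.mp4', writer=writer)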
|
Python - How to get variable from netcdf file on a specefic location and at different level
Question: I am learning on how to plot various variable from WRF output netcdf file. My
requirement is to extract variables for a certain lat long (8.4875° N,
76.9525° E) so that to plot SkewT using matplotlib package.
Similar question I found on this SO page [netcdf4 extract for subset of lat
lon](http://stackoverflow.com/questions/29135885/netcdf4-extract-for-subset-
of-lat-lon). However, its location is a set of boundary.
Any help is appreciated.
Answer: Check out [xray](http://xray.readthedocs.org). You'll have to do some work to
make the SkewT chart but accessing and summarizing the netCDF Dataset and
variables will be pretty easy. Just a few examples below.
import xray
ds = xray.open_dataset('your_wrf_file.nc')
ds_point = ds.sel(lon=76.9525, lat=8.4875)
ds_point['Temperature'].plot() # plot profile at point assuming Temperature had dimensions of (level, lat, lon)
df = ds_point.to_dataframe() # export dataset to Pandas.DataFrame
temp_array = ds_point['Temperature'].values # access the underlying numpy array of the "Temperature" variable
|
How do I remove all characters besides digits and "," from a unicode string in Python?
Question: I'm writing a small crawler using Scrapy. One of the XPaths returns a price followed by "zł" (the Polish currency mark); the problem is that the value is cluttered with newline characters, spaces and non-breaking spaces. So when I do:
sel.xpath("div/div/span/span/text()[normalize-space(.)]").extract()
I get:
[u'\n 1\xa0740,00 z\u0142\n \n \n ']
Which I want to change to
[u'1740,00']
or simply into a float variable. What is the best/simplest/fastest way to do this?
Answer: You can use `re.findall` to extract the characters from the string:
>>> import re
>>> s = u'\n 1\xa0740,00 z\u0142\n \n \n '
>>> L = re.findall(r'[\d,]', s)
>>> "".join(L)
'1740,00'
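Since the question also asks for a float, note the comma here is a European decimal separator; converting it (continuing the session above):
>>> float("".join(L).replace(",", "."))
1740.0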
|
How to add/use libraries in Python (3.5.1)
Question: I've recently been playing around with python and have now expanded into doing
stuff like scraping through websites and other cool stuff and I need to import
new libraries for these things like lxml, pandas, urllib2 and such. So I have
Python 3.5.1 installed and is also using Wing IDE. I (think) also managed to
install pip using this [tutorial](http://python-packaging-user-
guide.readthedocs.org/en/latest/installing/), but then got lost after the
> Run python get-pip.py
part.
So how would I go about installing those libraries to try new projects?
Thanks!
Answer: Python 3.5 already comes with pip. To install something with it, open a command prompt and run `python -m pip install library`, substituting the name of the desired library (e.g. `python -m pip install lxml pandas`). For help on the command, run `python -m pip --help`.
|
TypeError: 'NoneType' object is not callable (Python: Scraping from HTML data)
Question: I am trying to scrape the numbers from HTML data so that I can get the sum of
them. However, I am running into the above error when I try to run it. It is
referring to the "data = " line. What is this error referring to in this line
of code? Have I set the "for" loop up correctly? Thank you for your thoughts.
import urllib
from bs4 import BeautifulSoup
url = "http://python-data.dr-chuck.net/comments_42.html"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")
tags = soup('span')
data = soup.findall("span", {"Comments":"Comments"})
numbers = [d.text for d in data]
summation = 0
for tag in tags:
print tags
y= tag.finall("span").text
summation = summation + int(y)
print summation
**This is what the HTML data looks like:**
<tr><td>Modu</td><td><span class="comments">90</span></td></tr>
<tr><td>Kenzie</td><td><span class="comments">88</span></td></tr>
<tr><td>Hubert</td><td><span class="comments">87</span></td></tr>
Answer: First of all, there is no `findall()` method in `BeautifulSoup` - there is `find_all()`. Also, you are searching for elements with a `Comments` attribute whose value is `Comments`, which matches nothing:
soup.findall("span", {"Comments":"Comments"})
And, _this is Python_ , you can sum up much easier with a [built-in
`sum()`](https://docs.python.org/2/library/functions.html#sum).
Fixed version:
data = soup.find_all("span", {"class": "comments"})
print sum(int(d.text) for d in data) # prints 2482
|
repeating program (python)
Question: Heyho
I want to repeat a program after runnning a task. At the start of the program
I ask some questions, than the code jumps into the task. If the task is done
the questions sholud ask again.. Each Question reads some infos at the serial
port. If i get the infos ten times ich will restart the programm.. but the
window closes and i must start the file..
What can i do?
import serial
import struct
import datetime
print("\n")
print("This tool reads the internal Bus ")
print("-----------------------------------------------------------------------")
COM=input("Check your COM Port an fill in with single semicolon (like: 'COM13' ): ")
ser = serial.Serial(
port=COM,
baudrate=19200,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS
)
print("\n")
print("Please choose an option: ")
print("Polling of measured values and the operating status (1)")
print("Reading Parameter memory (2)")
print("Reading Fault memory (3)")
print("EXIT (4)")
print("\n")
i=input("Reading: ")
while(i==1):
count=0
while (count<10):
print(file.name)
print(ser.isOpen())
print ("connected to: "+ ser.portstr)
data = "\xE1\x14\x75\x81"
ser.write(data)
a=(map(hex,map(ord,ser.read(46))))
with open("RS485_Reflex.txt",mode='a+') as file:
file.write(str(datetime.datetime.now()))
file.write(", Values: ,")
file.write(str(a))
file.write("\n")
print(a)
count=count+1
else:
i=0
loop=1
#-----------------------------------------------------------------------
while(i==2):
count=0
while (count<10):
print(file.name)
print(ser.isOpen())
print ("connected to: "+ ser.portstr)
data = "\xE1\x13\x00\x00\x74\x81"
ser.write(data)
a=(map(hex,map(ord,ser.read(11))))
with open("RS485_Reflex.txt",mode='a+') as file:
file.write(str(datetime.datetime.now()))
file.write(", Parameters: , ")
file.write(str(a))
file.write("\n")
print(a)
count=count+1
else:
i=0
#---------------------------------------------------------------------
while(i==3):
count=0
while (count<10):
print(file.name)
print(ser.isOpen())
print ("connected to: "+ ser.portstr)
data = "\xE1\x12\x00\x00\x73\x81"
ser.write(data)
a=(map(hex,map(ord,ser.read(11))))
with open("RS485_Reflex.txt",mode='a+') as file:
file.write(str(datetime.datetime.now()))
file.write(", Fault: , ")
file.write(str(a))
file.write("\n")
print(a)
count=count+1
else:
i=0
#----------------------------------------------------------------------
while(i==4):
file.close()
ser.close()
sys.exit(0)
file.close()
ser.close()
Answer: First, you should use **if/elif/else** statements instead of while loops.
while(i==1):
should be
if i == 1:
This is a simple program I wrote that you can follow:
# setup a simple run_Task method (prints number of times through the loop already)
def run_task(count):
print "Ran a certain 'task'", count,"times"
# count is a variable that will count the number of times we pass through a loop
count = 0
# loop through 10 times
while(count<10):
# ask questions
q1 = raw_input("What is your name?")
q2 = raw_input("What is your favorite color?")
# run a 'task'
run_task(count)
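Applied to the serial tool from the question, the whole menu can live inside one outer loop so the questions are asked again after each task completes (a sketch; the ten-read task bodies are elided):
while True:
    print("Please choose an option (1-4): ")
    i = input("Reading: ")
    if i == 1:
        for count in range(10):
            pass  # poll measured values, as in the original task
    elif i == 2:
        for count in range(10):
            pass  # read parameter memory
    elif i == 3:
        for count in range(10):
            pass  # read fault memory
    elif i == 4:
        break  # leave the loop, then close the port
ser.close()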
|
How to structure python project for use with coverage.py?
Question: I have a package with the following layout:
tiny-py-interpreter/
|-- setup.py
|-- .coveragerc
|-- tinypy/
| -- foo/
| -- bar/
| -- tests/
| -- tinypyapp.py
| -- run_tests.py
Here are some lines from the setup.py:
setup(
...
packages=find_packages(),
entry_points = {
'console_scripts' : [ 'tinypy = tinypy.tinypyapp:main']
},
test_suite = 'tinypy.run_tests.get_suite',
)
After installing the package a console script called `tinypy` is installed:
pip3 install .
Then I run coverage:
coverage run setup.py test
coverage combine
coverage report
All my tests are implemented in such a way that each test launches a
subprocess of `tinypy`, so I use `parallel = True` in `.coveragerc` to
capture the results of the coverage runs.
Essentially, I have the same layout as `coverage`
[itself](https://github.com/nedbat/coveragepy/blob/master/coverage/cmdline.py),
where `coverage` and `cmdline.py` are the same things as `tinypy` and
`tinypyapp.py` in my case.
**The problem**: when the console script `tinypy` (`tinypyapp.py`) is executed,
it uses the installed version of the `tinypy` package, which is located in
`/usr/local/lib/python3.5/site-packages/tinypy/`. Coverage ignores the sources in
`./tinypy` (as they're not used). If the `source` parameter is omitted, it is
possible to see coverage for the code from `site-packages/tinypy`.
**The question**: how to structure the project properly? When `tinypyapp.py`
is installed as a script, it is installed on the same level as the `tinypy` folder
(one level higher, not inside). I can't place `tinypyapp.py` outside the
`tinypy` folder, as then setup.py can't find it. Because of this I can't use
`tinypyapp.py` and have to use the name of the script (which is `tinypy`).
Answer: My current workaround is to create `test_entry_point.py` in the root folder:
tiny-py-interpreter/
|-- setup.py
|-- test_entry_point.py
|-- ...
With the following content:
import sys
from tinypy.tinypyapp import main
if __name__ == '__main__':
main()
And use the following filename when launching a subprocess in test:
tinypy_binary = sys.executable + ' ' + os.getcwd() + '/test_entry_point.py'
subprocess.run(tinypy_binary, ...)
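As an alternative to the shim script, coverage.py also documents built-in sub-process measurement: set the `COVERAGE_PROCESS_START` environment variable to the path of your config file and arrange for each child process to call `coverage.process_startup()`, e.g. from a `sitecustomize.py`. A minimal sketch:
# sitecustomize.py, placed somewhere on the subprocess's sys.path
import coverage
coverage.process_startup()  # starts measuring only if COVERAGE_PROCESS_START is set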
|
Learn Python the Hard Way - How to use +=
Question: Currently on exercise 20, study drill 5. The code is thus:
from sys import argv
script, input_file = argv
def print_all(f):
print f.read()
def rewind(f):
f.seek(0)
def print_a_line(line_count, f):
print line_count, f.readline()
current_file = open(input_file)
print 'first of all, lets read the whole file:\n'
print_all(current_file)
print 'now lets rewind, kind of like a tape'
rewind(current_file)
print 'lets print 3 lines:'
current_line = 1
print_a_line(current_line, current_file)
current_line = current_line + 1
print_a_line(current_line, current_file)
current_line = current_line + 1
print_a_line(current_line, current_file)
Study drill 5 asks me to: Research the shorthand notation += and rewrite the
script to use that.
I have looked up +=, but have no idea how to use it in the above code.
**PLEASE NOTE - I would appreciate it if you could give me just 1 example, and
not re-write the entire code for me...I don't mean to be rude, but I would
like to work out the rest if possible.**
Answer: The shorthand `a += b` is equivalent to `a = a + b`, so you can use
`+=` in place of `current_line = current_line + 1` by replacing it with
`current_line += 1`.
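For instance, one occurrence in the script would become:
current_line = 1
print_a_line(current_line, current_file)
current_line += 1  # same as current_line = current_line + 1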
|
Using Python Functions in Terminal
Question: I'm just starting to learn Python and I'm having a hard time testing things
out in the terminal. What I want to do is simply run a pre-written Python
method in the Python interpreter. (I know how to run it by doing python
file_name.py, but I want to run it in the interpreter itself).
So if I for example had the file "exampleModule.py":
def exampleFunc(data):
print(data)
Then in the terminal I run Python and do:
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import exampleModule
>>> exampleFunc('Hello')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'exampleFunc' is not defined
The thing that I don't get about this is that if I run the module in the
Python IDLE, I can access the exampleFunc, but not in the terminal
interpreter.
Thanks!
Answer: When you do
import exampleModule
you have to write the full name of its functions. According to the
[Docs](https://docs.python.org/2/tutorial/modules.html)*,
> This does not enter the names of the functions defined in **exampleModule**
> directly in the current symbol table; it only enters the module name
> **exampleModule** there. Using the module name you can access the functions
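For example, with the module name as a prefix the call works:
>>> import exampleModule
>>> exampleModule.exampleFunc('Hello')
Hello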
If you want to write only the function name, do
from exampleModule import *
As, according to the [Docs](https://docs.python.org/2/tutorial/modules.html)*
> This does not introduce the module name from which the imports are taken in
> the local symbol table (so in the example, **exampleModule** is not
> defined).
**changed the function name to yours for better understanding.*
|
Python: Printing dataframe to csv
Question: I am currently using this code:
import pandas as pd
import numpy as np
AllDays = ['a','b','c','d']
TempDay = pd.DataFrame( np.random.randn(4,2) )
TempDay['Dates'] = AllDays
TempDay.to_csv('H:\MyFile.csv', index = False, header = False)
But this writes the array before the dates, and with a header row. I want
to print the dates before the temperature array, with no header row.
Edit: The file currently has the temperature array followed by the dates: [
TemperatureArray, Date].
-0.27724356949570034,-0.3096554106726788,a
-0.10619546908708237,0.07430127684522048,b
-0.07619665345406437,0.8474460146082116,c
0.19668718143436803,-0.8072994364484335,d
I am looking to print: [ Date TemperatureArray]
a,-0.27724356949570034,-0.3096554106726788
b,-0.10619546908708237,0.07430127684522048
c,-0.07619665345406437,0.8474460146082116
d,0.19668718143436803,-0.8072994364484335
Answer: The [pandas.DataFrame.to_csv](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.to_csv.html) method has a
keyword argument `header` that can be set to `False` to disable the header row.
Using it in conjunction with `index=False` suppresses both the header and the
index; to get the dates first you also need to reorder the columns, as shown below.
For example, this call disables the header and index:
    TempDay.to_csv('H:\\MyFile.csv', index=False, header=False)
Here is a full example showing how it disables the header row:
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame(np.random.randn(6,4))
>>> df
0 1 2 3
0 1.295908 1.127376 -0.211655 0.406262
1 0.152243 0.175974 -0.777358 -1.369432
2 1.727280 -0.556463 -0.220311 0.474878
3 -1.163965 1.131644 -1.084495 0.334077
4 0.769649 0.589308 0.900430 -1.378006
5 -2.663476 1.010663 -0.839597 -1.195599
>>> # just assigns sequential letters to the column
>>> df[4] = [chr(i+ord('A')) for i in range(6)]
>>> df
0 1 2 3 4
0 1.295908 1.127376 -0.211655 0.406262 A
1 0.152243 0.175974 -0.777358 -1.369432 B
2 1.727280 -0.556463 -0.220311 0.474878 C
3 -1.163965 1.131644 -1.084495 0.334077 D
4 0.769649 0.589308 0.900430 -1.378006 E
5 -2.663476 1.010663 -0.839597 -1.195599 F
>>> # here we reindex the headers and return a copy
>>> # using this form of indexing just requires you to provide
>>> # a list with all the columns you desire and in the order desired
>>> df2 = df[[4, 1, 2, 3]]
>>> df2
4 1 2 3
0 A 1.127376 -0.211655 0.406262
1 B 0.175974 -0.777358 -1.369432
2 C -0.556463 -0.220311 0.474878
3 D 1.131644 -1.084495 0.334077
4 E 0.589308 0.900430 -1.378006
5 F 1.010663 -0.839597 -1.195599
>>> df2.to_csv('a.txt', index=False, header=False)
>>> with open('a.txt') as f:
... print(f.read())
...
A,1.1273756275298716,-0.21165535441591588,0.4062624848191157
B,0.17597366083826546,-0.7773584823122313,-1.3694320591723093
C,-0.556463084618883,-0.22031139982996412,0.4748783498361957
D,1.131643603259825,-1.084494967896866,0.334077296863368
E,0.5893080536600523,0.9004299653290818,-1.3780062860066293
F,1.0106633581546611,-0.839597332636998,-1.1955992812601897
If you need to dynamically adjust the columns, and move the last column to the
first, you can do as follows:
# this returns the columns as a list
columns = df.columns.tolist()
# removes the last column, the newest one you added
tofirst_column = columns.pop(-1)
# just move it to the start
new_columns = [tofirst_column] + columns
# then you can the rest
df2 = df[new_columns]
This simply takes the current column list, builds a Python list in the
desired order, and reindexes the columns without requiring any prior
knowledge of the headers.
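Applied to the `TempDay` frame from the question (whose numeric columns are named 0 and 1 by default), a minimal sketch:
# put the 'Dates' column first, then write without index or header
TempDay[['Dates', 0, 1]].to_csv('H:\\MyFile.csv', index=False, header=False)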
|
setting sys.path to have the parent module path doesn't import the sub module
Question: The project is in Python3:
Project
|
Mymodule
__init__.py
|
submodule1
__init__.py
some_script.py
|
submodule2
__init__.py
myclass.py -> implements MyCLass
Whenever I run a script, I start from my Project directory; I always execute
from this location.
In `Mymodule.submodule1.some_script.py`, I want to import `MyClass` inside
`some_script.py` which is located in `submodule2`
## Things that work in `some_script.py`
from ..submodule2.myclass import MyClass
from Mymodule.submodule2.myclass import MyClass
What I am trying to achieve is to avoid location-based relative references in
the script files in my submodules. Whenever I import a module, Python should
search the current directory first and, if the module is not found there,
fall back to `sys.path`.
So my solution, in the `__init__.py` of my `submodule1`, is:
import os
import sys
sys.path.append(os.path.abspath('./Mymodule')) -> cwd will be "Project"
Now inside my `some_script.py` if I import
from submodule2.myclass import MyClass
This should work, because I have a sys.path entry for the root/parent
module (Mymodule). I have verified this with print(sys.path) in `some_script.py`.
This always throws an error:
`ImportError: No module named 'submodule2.myclass'`
Why is it not considering `sys.path` when searching for the module?
Answer: I was able to figure out the issue. In my sys.path there was already a
module with the same name (submodule2) on a different path, well ahead of my
current project path. So Python resolved the import against the first matching
module it encountered, which lived in a different location:
print(sys.path)
\somepath: ... : .... : ... :\myproject_path
\somepath -> has a module also named submodule2
\myproject_path -> has my submodule2
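A quick way to detect this kind of shadowing is to ask Python where it actually found the module:
import submodule2
print(submodule2.__file__)  # shows which submodule2 on sys.path won the lookup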
|
python script runs on its own through terminal but not when it's executed in Java with Runtime.getRuntime().exec()
Question: The Python script runs on its own through the terminal, but not when it's
executed from Java with Runtime.getRuntime().exec().
Here's my script. I set my Python interpreter in Eclipse the proper way and I
don't know what to do.
#!/usr/bin/env python
import subprocess

def execute(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)

for command in ("command 1", "command 2"):  # add further commands as needed
    execute(command)
Answer: I can't do much without seeing the Java code (most likely the problem is
there).
I can still recommend that you not use "Runtime.getRuntime().exec()"; use this
instead (it might solve your problem):
String interpreter = "python";
String script = "ExampleScript.py";
ProcessBuilder pb = new ProcessBuilder(interpreter, script);
Process p = pb.start();
p.waitFor();
Note that ProcessBuilder takes the program and each argument as separate strings,
not one space-separated command line.
**Edit:** If you are using complete paths, remember to convert the backslashes:
(I had the same problem a couple weeks ago)
script.replaceAll("\\\\","/")
(I fixed a mistake in the code too)
|
Cannot import name random/multiarray in conda environment
Question: I'm trying to run tensorflow in a conda environment. I started off by creating
a python 2.7 environment with `conda create --name py27 python=2.7` and then
activated it. Within the environment, I ran `conda install -c
https://conda.anaconda.org/jjhelmus tensorflow`, which has tensorflow and
numpy in the package, so hypothetically there shouldn't be any issues running
numpy.
When I open up the python console within the environment, however, I'm
continually getting `ImportError: No module named multiarray` and
`ImportError: cannot import name Random` (I can import random with no issues,
but then I get the multiarray issue) no matter how many times I
uninstall/reinstall numpy/matplotlib (at one point I even
uninstalled/reinstalled python) and no matter what versions of these I try to
use, I keep on getting the same issue. What should I do?
Answer: There is an answer [here](http://stackoverflow.com/questions/33959028/trouble-creating-a-conda-env-with-working-numpy-importerror-cannot-import-nam).
In short: the issue has something to do with the version of numpy, which gets
upgraded by another package for whatever reason. Try specifying the version:
`conda create -n NAME numpy=1.9.3 other_package`.
If that doesn't work, check whether your working directory contains files whose
names match the names of some packages. For example, I had a similar problem
after renaming `numpy.py.txt` (a sort of handmade cheatsheet) to just
`numpy.py` and then trying to `import numpy` within a Python shell while in
that directory.
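A quick sanity check for both causes is to ask Python where it imports numpy from:
import numpy
print(numpy.__file__)  # should point into your conda env, not your working directory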
|
Python regular expression, remove repeated class
Question: I have a bunch of timestamp strings. Each string has three sections and
delimiters between sections. A delimiter may be any of these [.:,;] symbols.
Each section contains digits. The first one may contain one or two digits. Any
other section may contain two digits. I need to retrieve those digits and
fulfil some actions with them. I use Python3.
So I write this code:
import re
lines = ('1:24.15', '17.01.01', '05:07:28', '175.11.123', '4:35,07', '01;21;73', '00;1;1', '7;7.12')
pattern = re.compile(r'^(\d{1,2})[:.,;](\d{2})[:.,;](\d{2})$')
for i in lines:
    try:
        mm, ss, ff = pattern.search(i).groups()
    except AttributeError:
        print('{} is invalid'.format(i))
        continue  # skip lines that do not match the pattern
    print(int(mm) * 60 + int(ss) + round(int(ff) / 0.075 / 1000, 3))
My question is... How could I reduce the repetitions in this regular
expression?
r'^(\d{1,2})[:.,;](\d{2})[:.,;](\d{2})$'
Thank you in advance. I'll be thankful for any advice.
Answer: Why not use
[`re.split()`](https://docs.python.org/2/library/re.html#re.split):
pattern = re.compile(r"[.:;,]")
for line in lines:
mm, ss, ff = pattern.split(line)
This does require an additional length check of `mm`, `ss` and `ff`.
On one hand that makes things less attractive, but it leads to _more
precise and meaningful error messages_:
pattern = re.compile(r"[.:;,]")
for line in lines:
try:
mm, ss, ff = pattern.split(line)
except ValueError:
print('{} has not enough digit groups'.format(line))
continue
if len(mm) not in (1, 2) or len(ss) != 2 or len(ff) != 2:
print('{} has a digit group with invalid length'.format(line))
continue
print(int(mm) * 60 + int(ss) + round(int(ff) / 0.075 / 1000, 3))
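If you prefer to keep a single anchored regex, you can also factor the delimiter class out once and interpolate it; a small sketch:
import re

delim = r'[.:,;]'
pattern = re.compile(r'^(\d{{1,2}}){0}(\d{{2}}){0}(\d{{2}})$'.format(delim))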
|
I do not understand how this Python code works - Unknown programming concept
Question: My apologies for the crappy headline. If I were able to frame my problem
properly I would have used Google ;)
I found a piece of Python code able to parse ini files into a Python dict
called "store":
#!/usr/bin/env python
from ConfigParser import SafeConfigParser
def read(file, store):
def parse_maybe(section):
if not confp.has_section(section):
return False
if (section == "Main"):
for left, right in confp.items(section):
store[left] = right.format(**store)
return True
confp = SafeConfigParser()
confp.read(file)
parse_maybe("Main")
store = {}
store["basedir"] = "/path/to/somewhere"
read("foo.ini", store)
The ini files may include declarations with placeholders, for instance:
[Main]
output = {basedir}/somename.txt
When running the code, {basedir} gets replaced by "/path/to/somewhere" already
defined in store. I guess this magic comes from this line of code:
store[left] = right.format(**store)
I understand _what_ the code does. But I do not understand _how_ this works.
What is this ** operator doing with the dictionary? A pointer to a tutorial,
etc. would be highly appreciated.
Answer: The answer to my question is twofold:
1) I did not know that this is possible with `format`:
print "{a} is {b}".format(a="Python", b="great")
Python is great
2) Essentially the `**` operator unpacks a dictionary into keyword arguments
(avoid naming it `dict`, though, as that shadows the built-in):
params = {"a": "Python", "b": "great"}
print "{a} is {b}".format(**params)
Python is great
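The same `**` unpacking works for ordinary function calls, which is where it is most often seen (see the Python tutorial section on unpacking argument lists):
def greet(a, b):
    print "{} is {}".format(a, b)

params = {"a": "Python", "b": "great"}
greet(**params)  # equivalent to greet(a="Python", b="great")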
|
Finding a variable in the .txt and comparing the values within two variables?
Question:
----bDdefsYG3wqContent-Disposition: form-data;name="QueryResult"Content-Type: application/JSON;charset=utf-8-Context: efb3d3ce-ef50-4e83-8c31-063c3f5208aa{
"status_code": 0,
"result_type": "DRAGON_NLU_ASR_CMD",
"NMAS_PRFX_SESSION_ID": "f786f0be-d547-4fca-8d72-96429a30c9db",
"NMAS_PRFX_TRANSACTION_ID": "1",
"audio_transfer_info": {
"packages": [{
"time": "20151221085512579",
"bytes": 1633
},
{
"time": "20151221085512598",
"bytes": 3969
}],
"nss_server": "10.56.11.186:4503",
"end_time": "20151221085512596",
"audio_id": 1,
"start_time": "20151221085512303"
},
"cadence_regulatable_result": "completeRecognition",
"appserver_results": {
"status": "success",
"final_response": 0,
"payload": {
"actions": [{
"speaker": "user",
"type": "conversation",
"nbest_text": {
"confidences": [478,
0,
0],
"words": [[{
"stime": 0,
"etime": 1710,
"word": "ConnectedDrive\\*no-space-before",
"confidence": "0.241"
}],
[{
"stime": 0,
"etime": 1020,
"word": "Connected\\*no-space-before",
"confidence": "0.0"
},
{
"stime": 1020,
"etime": 1710,
"word": "drive",
"confidence": "0.0"
}],
[{
"stime": 0,
"etime": 900,
"word": "Connect\\*no-space-before",
"confidence": "0.0"
},
{
"stime": 900,
"etime": 980,
"word": "to",
"confidence": "0.0"
},
{
"stime": 980,
"etime": 1710,
"word": "drive",
"confidence": "0.0"
}]],
"transcriptions"= ["ConnectedDrive",
"Connected drive",
"Connect to drive"]
}
}]
}
},
"final_response": 0,
"prompt": "",
"result_format": "appserver_post_results"
}-Disposition: form-data;name="QueryResult"Content-Type: application/JSON;charset=utf-8-Context: efb3d3ce-ef50-4e83-8c31-063c3f5208aa{
"status_code": 0,
"result_type": "DRAGON_NLU_ASR_CMD",
"NMAS_PRFX_SESSION_ID": "f786f0be-d547-4fca-8d72-96429a30c9db",
"NMAS_PRFX_TRANSACTION_ID": "1",
"audio_transfer_info": {
"packages": [{
"time": "20151221085512579",
"bytes": 1633
},
{
"time": "20151221085512598",
"bytes": 3969
}],
"nss_server": "10.56.11.186:4503",
"end_time": "20151221085512596",
"audio_id": 1,
"start_time": "20151221085512303"
},
"cadence_regulatable_result": "completeRecognition",
"appserver_results": {
"status": "success",
"final_response": 1,
"payload": {
"diagnostic_info": {
"adk_dialog_manager_status": "undefined",
"nlu_version": "[NLU_PROJECT:NVCCP-eng-USA];[Datapack:Version: nlps-eng-USA-NVCCP-6.1.100.12-2-GMT20151130160932];[VL-Models:Version: vlmodels-NVCCP-eng-USA-6.1.100.12-2-GMT20151130160335]",
"nlps_host": "mt-dmz-nlps002..com:8636",
"nlps_ip": "10.56.10.51",
"application": "AUDI_2017",
"nlu_component_flow": "[Input:VoiceJSON] [FieldID|auto_main] [NLUlib|C-eckart-r$Rev$.f20151118.1250] [build|G-r72490M.f20151130.1055] [vlmodel|Version: vlmodels-NVCCP-eng-USA-6.1.100.12-2-GMT20151130160335] [Flow|+VlingoTokenized]",
"third_party_delay": "0",
"nmaid": "AUDI_SDS_2017_EXT_20151203",
"nlps_profile": "AUDI_2017",
"fieldId": "auto_main",
"nlps_profile_package_version": "r159218",
"nlu_annotator": "com.NVCCP.eng-GBR.ncs51.VlingoNLU-client-qNVCCP_NCS51",
"ext_map_time": "2",
"nlu_use_literal_annotator": "0",
"int_map_time": "2",
"nlps_nlu_type": "nlu_project",
"nlu_language": "eng-GBR",
"timing": {
"finalRespSentDelay": "188",
"intermediateRespSentDelay": "648"
},
"nlps_profile_package": "AUDI_2017"
},
"actions": [{
"Input": {
"Interpretations": ["ConnectedDrive"],
"Type": "asr"
},
"Instances": [{
"nlu_classification": {
"Domain": "UDE",
"Intention": "Unspecified"
},
"nlu_interpretation_index": 1,
"nlu_slot_details": {
"Name": {
"literal": "ConnectedDrive"
},
"Search-phrase": {
"literal": "connecteddrive"
}
},
"interpretation_confidence": 4549
}],
"type": "nlu_results",
"api_version": "1.0"
}],
"nlps_version": "nlps(z):6.1.100.12.2-B359;Version: nlps-base-Zeppelin-6.1.100-B124-GMT20151130193521;"
}
},
"final_response": 1,
"prompt": "",
"result_format": "appserver_post_results"
}----_NMSP_vutc5w1XobDdefsYG3wq--
* * *
CODE:
#!/usr/bin/env python
import os
import sys
import time
import webbrowser
from io import BytesIO
import uuid
import httplib
import StringIO
import re
import difflib
import mmap
import json
directory =os.path.join("C:\Users\Desktop\Working\pcm-audio\English")
for subdir, dirs, files in os.walk(directory):
for file in files:
if file.endswith(".txt"):
content=json.load(file)
if "status_code" in content:
if content["status_code"]==0:
print("valid")
I am reading `.txt` files from a specific path as above. First I check only
for `status_code` in the `.txt` file: if it is 0 I print "valid", otherwise
"invalid". Then I look for the `transcriptions` and `Interpretations` fields
in the `.txt` file (its contents are shown at the beginning), each of which
holds a list of values.
I tried to compare those values but I am not getting any output. I am making
a mistake with the regex here: `if line.find("transcriptions") ==
("Interpretations"):`
`status_code` works fine, but `transcriptions` and
`Interpretations` do not.
Can someone tell me what the mistake is here?
Answer: Without details on the implementation of `find` this is difficult to answer.
I would propose a completely different approach. The input you are reading
looks like valid JSON, so why not just load it with Python's json module?
This will give you a Python dict with a very nice interface.
import json
with open("filename.txt") as f:
    info = json.load(f)  # json.load expects a file object, not a filename
if "status" in info:
    print(info["status"])
JSON has full support for arrays. You can add validation quite easily
if "actions" in info:
value=info["actions"]
if not isinstance(value, list):
raise TypeException("expecting a list of actions")
I would avoid using regular expressions. Right now the regex you are using is
easy to read but if your input parsing becomes more sophisticated, this will
surely change. Regexes are good for validation but I wouldn't try to parse
JSON with them.
Mind the difference between `json.loads` and `json.load`. Loads will try to
parse a string, load will read a file.
**EDIT:** I've read up on find. `str.find` returns an integer (the index of
the match, or -1 if not found), not the matched text, so comparing its result
to a string will always be False in Python 3. This is a problem in your code.
**EDIT2:** Reading a list of files:
filenames=["a.txt","b.txt","c.json","d.asdadf"]
for filename in filenames:
content = json.load(filename)
if "status" in content:
print(filename+" contains a status entry")
status_entry=content["status"]
print("it is: "+str(status_entry))
else:
print(filename+" does not contain a status entry")
With your setup, this should probably look like this, but I haven't tested
that yet:
directory = "C:\\Users\\hemanth_venkatappa\\Desktop\\Working\\pcm-audio\\English"
for subdir, dirs, files in os.walk(directory):
    for filename in files:
        if filename.endswith(".txt"):
            with open(os.path.join(subdir, filename)) as f:
                content = json.load(f)
            if "status_code" in content:
                if content["status_code"] == 0:
                    print("valid")
You can use glob to get the filenames:
<https://docs.python.org/2/library/glob.html>
import glob
glob.glob("path_to_folder/*.txt")
will return a list of filenames, like this
["path_to_folder/a.txt", "path_to_folder/asdief.txt"]
You can use this list in the code above. Just be aware of supplying the
correct path to the folder. Python has a current working directory. The path
needs to be relative to this directory.
Take a look at os.getcwd() to get the current working directory and os.chdir()
to change it.
**Edit:**
The file is the result of some web query. It says
> Content-type : application/json
But of course the JSON parser will not be able to read that file. You didn't
include the output of your program, but I assume that json.load fails. The
error message should be something like "The file doesn't contain valid JSON".
That's because the headers of the (HTTP?) response are included in the
file. They need to be removed. You could do that manually, or you could use
Python, maybe with a regex. But the best way would be to not store
the headers in the first place. Where do you generate the file? I think you
should tweak that code first.
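If you do end up stripping the wrapper in Python, here is a minimal sketch (it assumes no braces occur inside the JSON strings, which holds for the sample above):
import json

def extract_json_objects(raw):
    # scan for balanced top-level {...} blocks and try to parse each one
    objects, depth, start = [], 0, None
    for i, ch in enumerate(raw):
        if ch == '{':
            if depth == 0:
                start = i
            depth += 1
        elif ch == '}' and depth:
            depth -= 1
            if depth == 0:
                try:
                    objects.append(json.loads(raw[start:i + 1]))
                except ValueError:
                    pass  # block was not valid JSON, skip it
    return objects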
Honestly, I don't think you're really working on this. You just pasted my code
and didn't reflect on what you're doing. Read up on JSON, use a debugger and
look at each line of your code, think about the error message.
|
Trade-off in Python dictionary key types
Question: Say, I'm going to construct a probably large dictionary in Python 3 for in-
memory operations. The dictionary keys are integers, but I'm going to read
them from a file as string at first.
As far as storage and retrieval are concerned, I wonder if it matters whether
I store the dictionary keys as integers themselves, or as strings.
In other words, would leaving them as integers help with hashing?
Answer: Dicts are fast but can be heavy on memory. Normally that shouldn't be a
problem, but you will only know once you test. I would advise testing with
1,000 lines first, then 10,000 and so on, while keeping an eye on the memory
footprint.
If you run out of memory and your data structure allows it, maybe try using
[named tuples](https://docs.python.org/3/library/collections.html#collections.namedtuple).
EmployeeRecord = namedtuple('EmployeeRecord', 'name, age, title, department, paygrade')
import csv
for emp in map(EmployeeRecord._make, csv.reader(open("employees.csv", "rb"))):
print(emp.name, emp.title)
(Example taken from the link)
If you have ascending integers you could also try to get more fancy by using
the [array module](https://docs.python.org/3/library/array.html).
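If you want to measure the retrieval side yourself, a quick micro-benchmark sketch:
import timeit

d_int = {i: None for i in range(100000)}
d_str = {str(i): None for i in range(100000)}

print(timeit.timeit(lambda: d_int[54321], number=10**6))
print(timeit.timeit(lambda: d_str['54321'], number=10**6))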
|
Gui for Particlefilter with Python
Question: I'm trying to implement a particle filter, and I chose Python for it because I
kinda like Python. So far I have written my GUI using tkinter and Python 3.4.
I use the tkinter.Canvas object to display a map (a PNG image loaded with PIL)
and then I create a dot for each particle like:
> dot = canvas.create_oval(x, y, x + 1, y + 1)
When the robot moves I calculate the new position of each particle with the
control command of the robot, the particles position and the particles
alignment. To move the particle tkinter.canvas has two methods:
> canvas.move()
> canvas.coords()
But both methods seem to update the GUI immediately, which is OK when there are
about 100 particles but not with 200-5000 (which is what I should actually
have at the beginning for global localization). So my problem is the
performance of the GUI.
So my actual question is: Is there a way in tkinter to stop the canvas from
updating the GUI, then make the changes, and then update the GUI again? Or can
you recommend a module that suits my use case better than tkinter?
Answer: Your observation is incorrect. The canvas is _not_ updated immediately. The
oval isn't redrawn until the event loop is able to process events. It is quite
possible to update thousands of objects before the canvas is redrawn. Though,
the canvas isn't a high performance tool so moving thousands of objects at a
high frame rate will be difficult.
If you are seeing the object being updated immediately it's likely because
somewhere in your code you are either calling `update`, `update_idletasks`, or
you are otherwise allowing the event loop to run.
The specific answer to your question, then, is to make sure that you don't
call `update` or `update_idletasks`, or let the event loop process events,
until you've changed the coordinates of all of your particles.
Following is a short example. When it runs, notice that all of the particles
move at once in one second intervals. This is because all of the calculations
are done before allowing the event loop to redraw the items on the canvas.
import tkinter as tk  # the module is named Tkinter on Python 2
import random
class Example(tk.Frame):
def __init__(self, parent):
tk.Frame.__init__(self, parent)
self.canvas = tk.Canvas(self, width=500, height=500, background="black")
self.canvas.pack(fill="both", expand=True)
self.particles = []
for i in range(1000):
x = random.randint(1, 499)
y = random.randint(1, 499)
particle = self.canvas.create_oval(x,y,x+4,y+4,
outline="white", fill="white")
self.particles.append(particle)
self.animate()
def animate(self):
for i, particle in enumerate(self.particles):
deltay = (2,4,8)[i%3]
deltax = random.randint(-2,2)
self.canvas.move(particle, deltax, deltay)
self.after(30, self.animate)
if __name__ == "__main__":
root = tk.Tk()
Example(root).pack(fill="both", expand=True)
root.mainloop()
|
Azure App Insights Authentication with Python Requests
Question: I'm trying to authenticate with Azure AD in order to access the Azure Insights
REST API, so that I can ultimately access Azure web apps. However, [the
authentication example in their documentation](https://msdn.microsoft.com/en-
us/library/azure/dn931949.aspx) is limited to C# and PowerShell. I am trying
to do the same thing, but with the Python requests library. This is what I
have so far, but I am getting a '404 not found' response back. Any ideas on
how I can authenticate to the Insights API using the Python requests library?
AUTH = 'https://login.windows.net/%s' % TENANT_ID
RESOURCE = 'https://management.azure.com/'
def auth():
s = requests.Session()
params = {
'grant_type': 'client_credentials',
'client_id': CLIENT_ID,
'client_secret': CLIENT_KEY,
'resource': RESOURCE
}
response = s.post(AUTH, params=params)
print response.url
print response.status_code
print response.reason
auth()
* * *
EDIT 1:
The updated auth URL fixed it. Thank you. However, I would still like to
exclusively use the Python requests library to get the web apps/resource
groups.
RESOURCE_VERSION = '2015-01-01'
RESOURCE_URI = 'https://management.azure.com/subscriptions/%s/resourcegroups' % (SUBSCRIPTION_ID)
s = requests.Session()
payload = {
'grant_type': 'client_credentials',
'client_id': CLIENT_ID,
'client_secret': CLIENT_KEY,
'resource': RESOURCE
}
response = s.post(AUTHENTICATION_CONTEXT, data=payload).json()
access_token = response['access_token']
s.headers = {
'Authorization': 'Bearer %s' % access_token,
'Content-Type': 'application/json'
}
s.params = {
'api-version': RESOURCE_VERSION
}
response2 = s.get(RESOURCE_URI).json()
print response2
This gives me the following output
{u'error': {u'message': u"The client 'CLIENT_ID' with object id 'OBJECT_ID' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/read' over scope '/subscriptions/SUBSCRIPTION_ID'.", u'code': u'AuthorizationFailed'}}
Based on the response, it seemed like it might be a permissions issue with my
Azure app, but I've given the app all the permissions I think it needs
and it still gives me the same error message.
[](http://i.stack.imgur.com/tbkaN.png)
Answer: The authentication endpoint is incomplete. In .NET it is wrapped by the
SDK; the complete endpoint for the authentication token looks like:
`https://login.microsoftonline.com/<tenant_id>/oauth2/token`
Here is the code snippet:
from azure.mgmt.common import SubscriptionCloudCredentials
from azure.mgmt.resource import ResourceManagementClient
import requests
def get_token_from_client_credentials(endpoint, client_id, client_secret):
payload = {
'grant_type': 'client_credentials',
'client_id': client_id,
'client_secret': client_secret,
'resource': 'https://management.core.windows.net/',
}
response = requests.post(endpoint, data=payload).json()
return response['access_token']
auth_token = get_token_from_client_credentials(
endpoint='https://login.microsoftonline.com/<tenant_id>/oauth2/token',
client_id='<client_id>',
client_secret='<client_secret>',
)
subscription_id = '<subscription_id>'
creds = SubscriptionCloudCredentials(subscription_id, auth_token)
resource_client = ResourceManagementClient(creds)
resource_group_list = resource_client.resource_groups.list(None)
rglist = resource_group_list.resource_groups
print rglist
You can refer to [Resource Management Authentication](http://azure-sdk-for-
python.readthedocs.org/en/latest/resourcemanagementauthentication.html) for
more information.
|
Python 3 Tkinter - Change state of multiple entries with the same name
Question: Earlier I was having trouble with changing the state of one entry. That was
solved by using `.grid()` on a new line but when I altered this in my program
it only changed one of the entries.
from tkinter import *
def changestate():
global entry1
entry1['state']='normal'
root=Tk()
entry1_list=[]
def newday():
global entry1
row=0
for i in range(0,5):
var=IntVar()
entry1=Entry(root,width=3,bd=4,textvariable=var,state='disabled')
entry1.grid(row=row,column=1)
entry1_list.append(var)
row=row+1
button1=Button(root,text='Change state',command=changestate).grid(row=row,column=1)
newday()
root.mainloop()
It is meant to change the state of all the entries to normal when the button
is clicked; however, it only changes the last one.
I am using the same name for the entries because I don't want to type them out
multiple times in my program, which would make it very long and would prevent
the user from choosing how many entries appear.
Is there a way I can refer to each `entry1` individually so that this will work?
Answer: You can store the entries in a list and update each one in your function:
def changestate():
for e in entry1_list:
e['state'] = 'normal'
root = Tk()
entry1_list = []
def newday():
    for i in range(5):
        var = IntVar()  # one IntVar per entry, otherwise all entries share one value
        entry1 = Entry(root, width=3, bd=4, textvariable=var, state='disabled')
        entry1.grid(row=i, column=1)
        entry1_list.append(entry1)
    Button(root, text='Change state', command=changestate).grid(row=i+1, column=2)
In your code `entry1` is referencing the last one you create in the loop so
only that gets updated.
|
Run system command with no output
Question: I'm running a `wget` in Python via `os.system`.
Is there any way to hide the output? I tried
> /dev/null
and tried running the command with a `$` in front of it.
Answer: Use the [subprocess](https://docs.python.org/2/library/subprocess.html) module
instead.
Using `subprocess.call` (a helper function around the more advanced
subprocess features), you can redirect stdout and stderr to file objects. If
you open `/dev/null` (the os module provides `os.devnull`, a platform-
independent path to the null device), you can hand it to
subprocess.call and suppress all output:
import os
import subprocess
devnull = open(os.devnull, 'w')
subprocess.call([...], stdout=devnull, stderr=devnull)
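On Python 3.3+ you can skip opening the file yourself and pass `subprocess.DEVNULL` directly (the URL below is just a placeholder):
import subprocess

url = 'http://example.com/file.tar.gz'  # placeholder URL
subprocess.call(['wget', url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)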
|