title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
How to use Anaconda Python to execute a .py file? | 39,995,380 | <p>I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?</p>
| 0 | 2016-10-12T09:40:36Z | 39,995,712 | <p>Anaconda should add itself to the PATH variable during installation, so you can start any .py file with "python yourpythonfile.py" and it should work from any folder. </p>
<p>Alternatively, download PyCharm Community Edition, open your Python file there and run it. Make sure to have python.exe from the Anaconda folder added as the interpreter in the settings.</p>
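<p>To verify which interpreter actually runs when you type <code>python</code>, a quick check (a minimal sketch of my own, not part of the original answer) is:</p>

```python
import sys

# Print the full path of the interpreter currently executing this script.
# If Anaconda is first on PATH, this should point inside the Anaconda folder.
print(sys.executable)
```

On Windows you can also run `where python` in the command prompt to list every python.exe found on PATH, in the order they are tried.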
| 0 | 2016-10-12T09:55:38Z | [
"python",
"windows",
"anaconda"
] |
how to call a method on the GUI thread? | 39,995,447 | <p>I am making a small program that gets the latest revenue from a webshop; if it's more than the previous amount, it plays a sound. I am using Pyglet, but I get errors because it's not being called from the main thread. I would like to know how to call a method on the main thread. See the error below:</p>
<blockquote>
<p>'thread that imports pyglet.app' RuntimeError: EventLoop.run() must
be called from the same thread that imports pyglet.app</p>
</blockquote>
<pre><code>def work():
    threading.Timer(5, work).start()
    file_Name = "save.txt"
    lastRevenue = 0
    data = json.load(urllib2.urlopen(''))
    newRevenue = data["revenue"]
    if (os.path.getsize(file_Name) <= 0):
        with open(file_Name, "wb") as f:
            f.write('%d' % newRevenue)
            f.flush()
    with open(file_Name, "rb") as f:
        lastRevenue = float(f.readline().strip())
        print lastRevenue
        print newRevenue
        f.close()
    if newRevenue > lastRevenue:
        with open(file_Name, "wb") as f:
            f.write('%f' % newRevenue)
            f.flush()
        playsound()

def playsound():
    music = pyglet.resource.media('cash.wav')
    music.play()
    pyglet.app.run()

work()
</code></pre>
| 0 | 2016-10-12T09:43:24Z | 39,999,168 | <p>It's not particularly strange. <code>work</code> is being executed as a separate thread from where <code>pyglet</code> was imported.</p>
<p><code>pyglet.app</code> when imported sets up a lot of context variables and what not. I say what not because I actually haven't bothered checking deeper into what it actually sets up.</p>
<p>And OpenGL can't execute things outside of its own context (the main thread where it resides). Therefore you're not allowed to poke around in OpenGL from a neighboring thread, if that makes sense.</p>
<p>However, if you create your own <code>.run()</code> function and use a class based method of activating Pyglet you can start the GUI from the thread.</p>
<p>This is a working example of how you could set it up:</p>
<pre><code>import pyglet
from pyglet.gl import *
from threading import *

# REQUIRES: AVBin
pyglet.options['audio'] = ('alsa', 'openal', 'silent')

class main(pyglet.window.Window):
    def __init__(self):
        super(main, self).__init__(300, 300, fullscreen=False)
        self.x, self.y = 0, 0

        self.bg = pyglet.sprite.Sprite(pyglet.image.load('background.jpg'))
        self.music = pyglet.resource.media('cash.wav')
        self.music.play()

        self.alive = 1

    def on_draw(self):
        self.render()

    def on_close(self):
        self.alive = 0

    def render(self):
        self.clear()
        self.bg.draw()
        self.flip()

    def run(self):
        while self.alive == 1:
            self.render()
            if not self.music.playing:
                self.alive = 0

            # -----------> This is key <----------
            # This is what replaces pyglet.app.run()
            # but is required for the GUI to not freeze
            #
            event = self.dispatch_events()

class ThreadExample(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.start()

    def run(self):
        x = main()
        x.run()

Test_One = ThreadExample()
</code></pre>
<p>Note that you still have to start the actual GUI code from within the thread.</p>
<h1>I STRONGLY RECOMMEND YOU DO THIS INSTEAD, THOUGH</h1>
<p>Seeing as mixing threads and GUI calls is a slippery slope, I would suggest you go with a more cautious path.</p>
<pre><code>from threading import *
from time import sleep

def is_main_alive():
    for t in enumerate():
        if t.name == 'MainThread':
            return t.isAlive()

class worker(Thread):
    def __init__(self, shared_dictionary):
        Thread.__init__(self)
        self.shared_dictionary = shared_dictionary
        self.start()

    def run(self):
        while is_main_alive():
            file_Name = "save.txt"
            lastRevenue = 0
            data = json.load(urllib2.urlopen(''))
            newRevenue = data["revenue"]
            if (os.path.getsize(file_Name) <= 0):
                with open(file_Name, "wb") as f:
                    f.write('%d' % newRevenue)
                    f.flush()
            with open(file_Name, "rb") as f:
                lastRevenue = float(f.readline().strip())
                print lastRevenue
                print newRevenue
                f.close()
            if newRevenue > lastRevenue:
                with open(file_Name, "wb") as f:
                    f.write('%f' % newRevenue)
                    f.flush()
                #playsound()
                # Instead of calling playsound() here,
                # set a flag in the shared dictionary.
                self.shared_dictionary['Play_Sound'] = True
            sleep(5)

def playsound():
    music = pyglet.resource.media('cash.wav')
    music.play()
    pyglet.app.run()

shared_dictionary = {'Play_Sound' : False}

work_handle = worker(shared_dictionary)
while 1:
    if shared_dictionary['Play_Sound']:
        playsound()
        shared_dictionary['Play_Sound'] = False
    sleep(0.025)
</code></pre>
<p>It's a rough draft of what you're looking for.<br>
Basically some sort of event/flag driven backend that the Thread and the GUI can use to communicate with each other.</p>
<p>Essentially you have a worker thread (just as you did before), it checks whatever file you want every 5 seconds and if it detects <code>newRevenue > lastRevenue</code> it will set a specific flag to <code>True</code>. Your main loop will detect this change, play a sound and revert the flag back to False.</p>
<p>I've by no means included any error handling here on purpose, we're here to help and not create entire solutions. I hope this helps you in the right direction.</p>
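<p>The same worker-flags-main-thread idea can also be expressed with the standard library's <code>threading.Event</code>. This is a minimal, self-contained sketch of my own (not part of the original answer) showing a worker signaling the main thread, which would then do the GUI/audio work:</p>

```python
import threading
import time

sound_requested = threading.Event()

def worker():
    # Simulate the revenue check; in the real program this would be the
    # newRevenue > lastRevenue comparison running every few seconds.
    time.sleep(0.05)
    sound_requested.set()  # flag the main thread instead of touching the GUI

t = threading.Thread(target=worker)
t.start()

# Main thread: block until the worker raises the flag (or time out),
# then play the sound here, on the thread that owns the GUI.
flagged = sound_requested.wait(timeout=2)
t.join()
print(flagged)  # True
```

Compared with polling a shared dictionary every 25 ms, `Event.wait()` blocks without spinning and wakes up as soon as the flag is set.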
| 0 | 2016-10-12T12:55:33Z | [
"python",
"multithreading",
"audio",
"pyglet"
] |
Set different color to specific items in QListWidget | 39,995,688 | <p>I have a <code>QListWidget</code> and I want to add a bottom border to each item of the list, set a background color on the items, and give specific items a different background color.
So I used <code>my_list.setStyleSheet("QListWidget::item { border-bottom: 1px solid red; background-color: blue;}")</code>, and to set the background color of specific items I used <code>item.setBackground(QtGui.QColor("#E3E3E3"))</code></p>
<p>The problem is that the specific items I set a different color on don't get that color.</p>
| 2 | 2016-10-12T09:54:42Z | 39,998,051 | <p>You can't use a combination of stylesheets and style settings on a widget: the stylesheet will override any settings on individual items. For example, the following code uses <code>setBackground</code> on each item to change the background colour.</p>
<pre><code>from PyQt5.QtGui import *
from PyQt5.QtWidgets import *

colors = ['#7fc97f', '#beaed4', '#fdc086', '#ffff99', '#386cb0', '#f0027f', '#bf5b17', '#666666']

class MainWindow(QMainWindow):
    def __init__(self, *args, **kwargs):
        super(MainWindow, self).__init__(*args, **kwargs)

        w = QListWidget()
        for n in range(8):
            i = QListWidgetItem('%s' % n)
            i.setBackground(QColor(colors[n]))
            w.addItem(i)

        self.setCentralWidget(w)
        self.show()

app = QApplication([])
w = MainWindow()
app.exec_()
</code></pre>
<p>The resulting output:</p>
<p><a href="https://i.stack.imgur.com/KaOSU.png" rel="nofollow"><img src="https://i.stack.imgur.com/KaOSU.png" alt="enter image description here"></a></p>
<p>However, if we add the stylesheet line back in, the result is as follows (the second image shows only the bottom border applied):</p>
<p><a href="https://i.stack.imgur.com/iMF70.png" rel="nofollow"><img src="https://i.stack.imgur.com/iMF70.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/3PdUO.png" rel="nofollow"><img src="https://i.stack.imgur.com/3PdUO.png" alt="enter image description here"></a></p>
<p>Unfortunately, there is no way to set the border and the colour for the items. However, what you <em>can</em> do is either insert a custom widget as the list item, or <a href="http://www.saltycrane.com/blog/2008/01/pyqt4-qitemdelegate-example-with/" rel="nofollow">use an item delegate</a> to draw the item. This gives you complete control over appearance, however you have to handle drawing yourself. Below is an example of doing this with a custom delegate:</p>
<pre><code>from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *

colors = ['#7fc97f', '#beaed4', '#fdc086', '#ffff99', '#386cb0', '#f0027f', '#bf5b17', '#666666']

class MyDelegate(QItemDelegate):

    def __init__(self, parent=None, *args):
        QItemDelegate.__init__(self, parent, *args)

    def paint(self, painter, option, index):
        painter.save()

        # set background color
        painter.setPen(QPen(Qt.NoPen))
        if option.state & QStyle.State_Selected:
            # If the item is selected, always draw background red
            painter.setBrush(QBrush(Qt.red))
        else:
            c = index.data(Qt.DisplayRole+1)  # Get the color
            painter.setBrush(QBrush(QColor(c)))

        # Draw the background rectangle
        painter.drawRect(option.rect)

        # Draw the bottom border
        # option.rect is the shape of the item; top left bottom right
        # e.g. 0, 0, 256, 16 in the parent listwidget
        painter.setPen(QPen(Qt.red))
        painter.drawLine(option.rect.bottomLeft(), option.rect.bottomRight())

        # Draw the text
        painter.setPen(QPen(Qt.black))
        text = index.data(Qt.DisplayRole)
        # Adjust the rect (to pad)
        option.rect.setLeft(5)
        option.rect.setRight(option.rect.right()-5)
        painter.drawText(option.rect, Qt.AlignLeft, text)

        painter.restore()

class MainWindow(QMainWindow):
    def __init__(self, *args, **kwargs):
        super(MainWindow, self).__init__(*args, **kwargs)

        de = MyDelegate(self)

        w = QListWidget()
        w.setItemDelegate(de)
        for n in range(8):
            s = '%s' % n
            i = QListWidgetItem()
            i.setData(Qt.DisplayRole, s)            # The label
            i.setData(Qt.DisplayRole+1, colors[n])  # The color
            w.addItem(i)

        self.setCentralWidget(w)
        self.show()

app = QApplication([])
w = MainWindow()
app.exec_()
</code></pre>
<p>Which gives the following output:</p>
<p><a href="https://i.stack.imgur.com/M9Y8R.png" rel="nofollow"><img src="https://i.stack.imgur.com/M9Y8R.png" alt="enter image description here"></a></p>
<p>The approach with delegates is to define a <code>paint</code> method, which accepts a <code>QPainter</code> object (with which you do the actual drawing), an <code>option</code> parameter containing the rectangle of the item (relative to the parent widget) and an <code>index</code> through which you can retrieve the item data. You then use the methods on the <a href="http://doc.qt.io/qt-4.8/qpainter.html" rel="nofollow">QPainter</a> to draw your item. </p>
<p>In the example above we use this to pass in both the item label (at position <code>Qt.DisplayRole</code>) and the color in hex (at position <code>Qt.DisplayRole+1</code>). The docs for <a href="http://doc.qt.io/qt-4.8/qt.html#ItemDataRole-enum" rel="nofollow">ItemDataRole</a> list the other defined data 'roles', but for most purposes you can ignore these.</p>
| 1 | 2016-10-12T11:59:07Z | [
"python",
"pyqt",
"pyqt5"
] |
Pandas find first nan value by rows and return column name | 39,995,707 | <p>I have a dataframe like this </p>
<pre><code>>>df1 = pd.DataFrame({'A': ['1', '2', '3', '4','5'],
                      'B': ['1', '1', '1', '1','1'],
                      'C': ['c', 'A1', None, 'c3',None],
                      'D': ['d0', 'B1', 'B2', None,'B4'],
                      'E': ['A', None, 'S', None,'S'],
                      'F': ['3', '4', '5', '6','7'],
                      'G': ['2', '2', None, '2','2']})
>>df1
A B C D E F G
0 1 1 c d0 A 3 2
1 2 1 A1 B1 None 4 2
2 3 1 None B2 S 5 None
3 4 1 c3 None None 6 2
4 5 1 None B4 S 7 2
</code></pre>
<p>and I drop the rows which contain nan values<code>df2 = df1.dropna()</code></p>
<pre><code> A B C D E F G
1 2 1 A1 B1 None 4 2
2 3 1 None B2 S 5 None
3 4 1 c3 None None 6 2
4 5 1 None B4 S 7 2
</code></pre>
<p>These are the rows that get dropped because they contain NaN values.
However, I want to know why each one was dropped: which column is the "first NaN value column" that made the row be dropped? I need a drop reason for a report.</p>
<p>the output should be </p>
<pre><code>['E','C','D','C']
</code></pre>
<p>I know I can do <code>dropna</code> column by column and then record that as the reason,
but it's really inefficient.</p>

<p>Is there a more efficient way to solve this problem?
Thank you</p>
| 1 | 2016-10-12T09:55:28Z | 39,996,080 | <p>I think you can create boolean dataframe by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html" rel="nofollow"><code>DataFrame.isnull</code></a>, then filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with mask where are at least one <code>True</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow"><code>any</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="nofollow"><code>idxmax</code></a> - you get column names of first <code>True</code> values of <code>DataFrame</code>:</p>
<pre><code>booldf = df1.isnull()
print (booldf)
A B C D E F G
0 False False False False False False False
1 False False False False True False False
2 False False True False False False True
3 False False False True True False False
4 False False True False False False False
print (booldf.any(axis=1))
0 False
1 True
2 True
3 True
4 True
dtype: bool
print (booldf[booldf.any(axis=1)].idxmax(axis=1))
1 E
2 C
3 D
4 C
dtype: object
</code></pre>
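<p>Putting the steps above together as one runnable snippet (reconstructing the question's dataframe; my own consolidation, not part of the original answer):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['1', '2', '3', '4', '5'],
                    'B': ['1', '1', '1', '1', '1'],
                    'C': ['c', 'A1', None, 'c3', None],
                    'D': ['d0', 'B1', 'B2', None, 'B4'],
                    'E': ['A', None, 'S', None, 'S'],
                    'F': ['3', '4', '5', '6', '7'],
                    'G': ['2', '2', None, '2', '2']})

booldf = df1.isnull()
# Keep only rows with at least one NaN, then take the first True column
# per row (idxmax returns the label of the first maximum, i.e. first True).
reasons = booldf[booldf.any(axis=1)].idxmax(axis=1)
print(list(reasons))  # ['E', 'C', 'D', 'C']
```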
| 2 | 2016-10-12T10:15:01Z | [
"python",
"pandas"
] |
Pandas find first nan value by rows and return column name | 39,995,707 | <p>I have a dataframe like this </p>
<pre><code>>>df1 = pd.DataFrame({'A': ['1', '2', '3', '4','5'],
                      'B': ['1', '1', '1', '1','1'],
                      'C': ['c', 'A1', None, 'c3',None],
                      'D': ['d0', 'B1', 'B2', None,'B4'],
                      'E': ['A', None, 'S', None,'S'],
                      'F': ['3', '4', '5', '6','7'],
                      'G': ['2', '2', None, '2','2']})
>>df1
A B C D E F G
0 1 1 c d0 A 3 2
1 2 1 A1 B1 None 4 2
2 3 1 None B2 S 5 None
3 4 1 c3 None None 6 2
4 5 1 None B4 S 7 2
</code></pre>
<p>and I drop the rows which contain nan values<code>df2 = df1.dropna()</code></p>
<pre><code> A B C D E F G
1 2 1 A1 B1 None 4 2
2 3 1 None B2 S 5 None
3 4 1 c3 None None 6 2
4 5 1 None B4 S 7 2
</code></pre>
<p>These are the rows that get dropped because they contain NaN values.
However, I want to know why each one was dropped: which column is the "first NaN value column" that made the row be dropped? I need a drop reason for a report.</p>
<p>the output should be </p>
<pre><code>['E','C','D','C']
</code></pre>
<p>I know I can do <code>dropna</code> column by column and then record that as the reason,
but it's really inefficient.</p>

<p>Is there a more efficient way to solve this problem?
Thank you</p>
| 1 | 2016-10-12T09:55:28Z | 39,996,092 | <p>I would use a combination of <code>itertools</code> and <code>numpy.where</code>, along with <code>pd.DataFrame.isnull</code>:</p>
<pre><code>>>> df1.isnull()
A B C D E F G
0 False False False False False False False
1 False False False False True False False
2 False False True False False False True
3 False False False True True False False
4 False False True False False False False
>>> from itertools import *
>>> r,c = np.where(df1.isnull().values)
>>> first_cols = [next(g)[1] for _, g in groupby(izip(r,c), lambda t:t[0])]
>>> df1.columns[first_cols]
Index([u'E', u'C', u'D', u'C'], dtype='object')
>>>
</code></pre>
<p>For Python 2, use <code>izip</code> from <code>itertools</code>, and in Python 3 simply use built-in <code>zip</code>. </p>
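<p>As a Python 3 adaptation (my own sketch, not the answerer's code), the same first-column-per-row grouping can be shown with plain lists standing in for the arrays that <code>np.where</code> would return:</p>

```python
from itertools import groupby

# (row, col) pairs of the NaN cells, hard-coded here to match what
# np.where(df1.isnull().values) yields for the question's dataframe.
pairs = [(1, 4), (2, 2), (2, 6), (3, 3), (3, 4), (4, 2)]
columns = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

# Group the pairs by row index and keep only the first column of each group.
first_cols = [next(g)[1] for _, g in groupby(pairs, lambda t: t[0])]
result = [columns[c] for c in first_cols]
print(result)  # ['E', 'C', 'D', 'C']
```

Because `np.where` returns the coordinates in row-major order, the pairs for each row are already adjacent, which is what makes `groupby` applicable without sorting.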
| 0 | 2016-10-12T10:15:36Z | [
"python",
"pandas"
] |
gis calculate distance between point and polygon / border | 39,995,839 | <p>I want to calculate the distance between a point and the border of a country using python / <code>shapely</code>. It should work just fine with <code>point.distance(poly)</code>, e.g. as demonstrated in <a href="http://stackoverflow.com/questions/33311616/find-coordinate-of-closest-point-on-polygon-shapely">Find Coordinate of Closest Point on Polygon Shapely</a>, but using <code>geopandas</code> I face the issue:
<code>'GeoSeries' object has no attribute '_geom'</code></p>
<p>What is wrong with my handling of the data?
My border dataset is from <a href="http://www.gadm.org/" rel="nofollow">http://www.gadm.org/</a></p>
<p><a href="https://i.stack.imgur.com/0VzUi.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/0VzUi.jpg" alt="'GeoSeries' object has no attribute '_geom'"></a></p>
| 0 | 2016-10-12T10:03:01Z | 39,999,017 | <p>According to <a href="http://geopandas.org/reference.html" rel="nofollow">geopandas</a> documentation, a GeoSeries is a vector of geometries (in your case, <code>0 (POLYGON...</code> tells that you have just one object, but it is still a vector). There should be a way of getting the first geometry element. GeoSeries class implements the <code>__getitem__</code> method, so <code>austriaBorders.geometry[0]</code> should give you the geometry you need. So, try with <code>point.distance(austriaBorders.geometry[0])</code>.</p>
<p>If you only need the distance to a point, the <code>GeoSeries</code> has the <code>distance</code> method implemented, but it will return a vector with the distance to each geometry it contains (according to documentation). </p>
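<p>For intuition about what <code>point.distance(...)</code> computes against a polygon boundary, here is a small pure-Python sketch of my own (shapely does all of this for you): the distance from a point to a polygon's border is the minimum distance to any of the boundary's segments.</p>

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    # Distance from point P to the segment A-B (project, clamp, measure).
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def point_border_distance(point, ring):
    # ring: list of (x, y) vertices; the closing edge is added implicitly.
    return min(
        point_segment_distance(point[0], point[1], *a, *b)
        for a, b in zip(ring, ring[1:] + ring[:1])
    )

# Unit square border; a point at (2, 0.5) is exactly 1.0 away from it.
d = point_border_distance((2, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)])
print(d)  # 1.0
```

In real code you would keep using `point.distance(austriaBorders.geometry[0].boundary)` rather than reimplementing this; the sketch only illustrates the geometry.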
| 2 | 2016-10-12T12:49:15Z | [
"python",
"gis",
"distance",
"shapely",
"geopandas"
] |
pyschools square root approximation | 39,995,912 | <p>I am new to Python and Stack Overflow. I have been trying to solve the PySchools 'while loop' example for square root approximation (Topic 5: Question 9). However, I am unable to get the desired output, and I am not sure if the issue is related to the loop or the formula. Here is the question:</p>
<blockquote>
<p>Create a function that takes in a positive number and return 2
integers such that the number is between the squares of the 2
integers. It returns the same integer twice if the number is a square
of an integer.</p>
<hr>
</blockquote>
<p>Examples:</p>
<pre><code>sqApprox(2)
(1, 2)
sqApprox(4)
(2, 2)
sqApprox(5.1)
(2, 3)
</code></pre>
<p>Here is my code:</p>
<hr>
<pre><code>def sqApprox(num):
    i = 0
    minsq = 1        # set lower bound
    maxsq = minsq    # set upper bound
    while i*i <= num:                  # set 'while' termination condition
        if i*i <= num and i >= minsq:  # complete inequality condition
            minsq = i
        if i*i <= num and i <= maxsq:  # complete inequality condition
            maxsq = i
        i = i+1      # update i so that 'while' will terminate
    return (minsq, maxsq)
</code></pre>
<hr>
<p>If I create this function <code>sqApprox(4)</code> and call it on IDE, I get output <code>(2, 0)</code>.</p>
<p>Could someone please let me know what I am doing wrong? Thanks in advance.</p>
| 1 | 2016-10-12T10:06:37Z | 39,996,916 | <p>This is why your code does what it does:</p>
<p>After the line <code>maxsq = minsq</code> is executed, both of those values are 1.</p>
<p>When we come to the loop</p>
<pre><code>while i*i <= num:                  # set 'while' termination condition
    if i*i <= num and i >= minsq:  # complete inequality condition
        minsq = i
    if i*i <= num and i <= maxsq:  # complete inequality condition
        maxsq = i
    i = i+1                        # update i so that 'while' will terminate
</code></pre>
<p>first note that inside the loop <code>i*i<=num</code>, so there is no need to retest it. Thus it is equivalent to:</p>
<pre><code> while i*i<=num:
     if i >= minsq:
         minsq = i
     if i <= maxsq:
         maxsq = i
     i = i+1
</code></pre>
<p>In the first pass through the loop <code>i == 0</code> but <code>maxsq == 1</code>, making the second condition true, hence setting <code>maxsq</code> equal to the current value of <code>i</code>, which is 0. In subsequent passes through the loop, <code>i <= maxsq</code> is false (since <code>maxsq == 0</code> but <code>i > 0</code>) hence <code>maxsq</code> is never moved beyond 0. On the other hand, the first condition in the while loop keeps updating <code>minsq</code> as intended.</p>
<p>I would recommend forgetting about both <code>minsq</code> and <code>maxsq</code> completely. Have the loop simply be:</p>
<pre><code>while i*i <= num:
    i += 1  # shortcut for i = i + 1
</code></pre>
<p>When the loop is done executing, a simple test involving <code>i-1</code> is enough to determine what to return.</p>
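<p>For completeness, here is a sketch of what that final version could look like (my own completion following the answer's hint, not code from the question), checked against the examples from the question:</p>

```python
def sq_approx(num):
    # Find the smallest i with i*i > num; then i - 1 is floor(sqrt(num)).
    i = 0
    while i * i <= num:
        i += 1
    lo = i - 1
    # Return the same integer twice when num is a perfect square,
    # otherwise the two integers whose squares bracket num.
    hi = lo if lo * lo == num else lo + 1
    return (lo, hi)

print(sq_approx(2))    # (1, 2)
print(sq_approx(4))    # (2, 2)
print(sq_approx(5.1))  # (2, 3)
```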
| 0 | 2016-10-12T10:59:48Z | [
"python"
] |
Draw a closed and filled contour | 39,996,028 | <p>I am trying to draw a closed contour an fill it up with (transparent or whatever) color with <code>folium</code>. The lacking docs are not helpful, any ideas how to do that ?</p>
<p>This is my current code</p>
<pre><code>m = folium.Map(location=[46, 2], zoom_start=5)

pts = [
    [43.601795137863135, 1.451673278566412],
    [43.61095574264419, 1.437239509310642],
    [43.60999839038903, 1.45630473303456],
    [43.60607351937904, 1.438762676051137],
    [43.59725521090158, 1.444569790831369],
    [43.6076281683173, 1.451991362348086]
]

p = folium.PolyLine(locations=pts, weight=5)
m.add_children(p)
</code></pre>
| 0 | 2016-10-12T10:12:17Z | 40,113,212 | <p>This is not documented (nothing is really documented ATM), but this works</p>
<pre><code>m = folium.Map(location=[46, 2], zoom_start=5)

pts = [
    [43.601795137863135, 1.451673278566412],
    [43.61095574264419, 1.437239509310642],
    [43.60999839038903, 1.45630473303456],
    [43.60607351937904, 1.438762676051137],
    [43.59725521090158, 1.444569790831369],
    [43.6076281683173, 1.451991362348086]
]

folium.features.PolygonMarker(locations=pts, color='#FF0000', fill_color='blue', weight=5).add_to(m)
</code></pre>
| 0 | 2016-10-18T16:13:01Z | [
"python",
"folium"
] |
How to run python + google app engine application locally but pointing to live google datastore? | 39,996,067 | <p>I am developing an application using Python and Google App Engine. It currently uses the local development datastore. I want to run my code against the live Google datastore instead. How can I configure my app to point to the live datastore rather than the local one, so that I can check my code against live data?</p>
| 1 | 2016-10-12T10:14:13Z | 40,010,133 | <p>There's a tool available called the Remote API that should provide the functionality you're asking about. The documentation there should provide you with what you need to interact with the actual datastore. It's available for other languages as well, but here is the link to the Python documentation:</p>
<p><a href="https://cloud.google.com/appengine/docs/python/tools/remoteapi" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/remoteapi</a></p>
<p>I have a small amount of experience with the Go interface.</p>
| 1 | 2016-10-12T23:51:18Z | [
"python",
"google-app-engine"
] |
Remove special characters (in list) in string | 39,996,097 | <p>I have a bunch of special characters which are in a list like:</p>
<pre><code>special=[r'''\\''', r'''+''', r'''-''', r'''&''', r'''|''', r'''!''', r'''(''', r''')''', r'''{''', r'''}''',\
r'''[''', r''']''', r'''^''', r'''~''', r'''*''', r'''?''', r''':''', r'''"''', r''';''', r''' ''']
</code></pre>
<p>And I have a string:</p>
<pre><code>stringer="Müller my [ string ! is cool^&"
</code></pre>
<p>How do I make this replacement? I am expecting:</p>
<pre><code>stringer = "Müller my string is cool"
</code></pre>
<p>Also, is there some builtin to replace these "special" chars in Python?</p>
| -1 | 2016-10-12T10:15:54Z | 39,996,232 | <p>This can be solved with a simple generator expression:</p>
<pre><code>>>> ''.join(ch for ch in stringer if ch not in special)
'M\xc3\xbcllermystringiscool'
</code></pre>
<p><strong>Note that this also removes the spaces</strong>, since they're in your <code>special</code> list (the last element). If you don't want them removed, either don't include the space in <code>special</code> or do modify the <code>if</code> check accordingly.</p>
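<p>In Python 3, another option (my own addition, not from the answer) is <code>str.translate</code>, which deletes every character in a translation table in a single pass; combined with a whitespace re-split it reproduces the question's expected output exactly:</p>

```python
# Characters to strip; the space is deliberately left out of the table
# so whitespace can be normalized separately afterwards.
table = str.maketrans('', '', '\\+-&|!(){}[]^~*?:";')

stringer = "Müller my [ string ! is cool^&"
# translate() removes the special chars, split()/join() collapses the
# double spaces that removal leaves behind.
cleaned = ' '.join(stringer.translate(table).split())
print(cleaned)  # Müller my string is cool
```

For large inputs, `str.translate` is typically faster than a per-character membership test because the lookup happens in C.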
| 1 | 2016-10-12T10:23:14Z | [
"python",
"regex"
] |
Remove special characters (in list) in string | 39,996,097 | <p>I have a bunch of special characters which are in a list like:</p>
<pre><code>special=[r'''\\''', r'''+''', r'''-''', r'''&''', r'''|''', r'''!''', r'''(''', r''')''', r'''{''', r'''}''',\
r'''[''', r''']''', r'''^''', r'''~''', r'''*''', r'''?''', r''':''', r'''"''', r''';''', r''' ''']
</code></pre>
<p>And I have a string:</p>
<pre><code>stringer="Müller my [ string ! is cool^&"
</code></pre>
<p>How do I make this replacement? I am expecting:</p>
<pre><code>stringer = "Müller my string is cool"
</code></pre>
<p>Also, is there some builtin to replace these "special" chars in Python?</p>
| -1 | 2016-10-12T10:15:54Z | 39,996,288 | <p>If you remove the space from your specials you can do it using <code>re.sub()</code> but note that first you need to escape the special regex characters.</p>
<pre><code>In [58]: special=[r'''\\''', r'''+''', r'''-''', r'''&''', r'''|''', r'''!''', r'''(''', r''')''', r'''{''', r'''}''',\
r'''[''', r''']''', r'''^''', r'''~''', r'''*''', r'''?''', r''':''', r'''"''', r''';''']
In [59]: print re.sub(r"[{}]".format(re.escape(''.join(special))), '', stringer, re.U)
Müller my string is cool
</code></pre>
| 0 | 2016-10-12T10:26:36Z | [
"python",
"regex"
] |
Share python source code files between Azure Worker Roles in the same project | 39,996,098 | <p>I have a single Azure Cloud Service as a project in Visual Studio 2015, which contains 2 Python Worker Roles.</p>
<p>They each have their own folder with source code files, and they are deployed to separate VMs. However, they both rely on some identical pieces of code. Right now my solution is to just include a copy of the code in each worker role, but then I have to remember to apply changes to both worker roles in case of a bug fix.</p>
<p>I have tried making a folder on the project level, containing the shared files, but when I add them to the worker role, VS just copies the files.</p>
<p>Is there a way to implement something like a shared folder, which only copies the files upon building the project?</p>
| 0 | 2016-10-12T10:15:54Z | 39,997,193 | <p>Likely many ways to solve your problem, but specifically from a worker role standpoint: Worker (and web) roles have definable startup tasks, allowing you to execute code/script during role startup. This allows you to do things like copying content from blob storage to local disk on your role instance. In this scenario, the blob where your code is stored acts like a shared disk.</p>
| 0 | 2016-10-12T11:15:00Z | [
"python",
"azure",
"azure-worker-roles",
"azure-cloud-services"
] |
Pass argument to flask from javascript | 39,996,133 | <p>When pressing a button, I call a JavaScript function in my HTML file which takes two strings as parameters (from input fields). When the function is called, I want to pass these parameters to my Flask file and call another function there. How would I accomplish this?</p>
<p>The javascript:</p>
<pre><code><script>
function ToPython(FreeSearch, LimitContent)
{
    alert(FreeSearch);
    alert(LimitContent);
}
</script>
</code></pre>
<p>The flask function that i want to call:</p>
<pre><code>@app.route('/list')
def alist(FreeSearch, LimitContent):
    new = FreeSearch + LimitContent
    return render_template('list.html', title="Projects - " + page_name, new=new)
</code></pre>
<p>I want to do something like <code>"filename.py".alist(FreeSearch,LimitContent)</code> in the JavaScript, but that's not possible...</p>
| 0 | 2016-10-12T10:17:34Z | 39,996,313 | <p>From JS code, call (using GET method) the URL to your flask route, passing parameters as query args:</p>
<pre><code>/list?freesearch=value1&limit_content=value2
</code></pre>
<p>Then in your function definition:</p>
<pre><code>@app.route('/list')
def alist():
    freesearch = request.args.get('freesearch')
    limitcontent = request.args.get('limit_content')
    new = freesearch + limitcontent
    return render_template('list.html', title="Projects - " + page_name, new=new)
</code></pre>
<p>Alternatively, you could use path variables:</p>
<pre><code>/list/value1/value2
</code></pre>
<p>and</p>
<pre><code>@app.route('/list/<freesearch>/<limit_content>')
def alist(freesearch, limit_content):
    new = freesearch + limit_content
    return render_template('list.html', title="Projects - " + page_name, new=new)
</code></pre>
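<p>On the JavaScript side you only need to build one of those URLs before calling it. The query-string construction can be sketched in Python (my own illustration, not part of the answer) to show why the parameters are joined with <code>&</code> rather than a comma:</p>

```python
from urllib.parse import urlencode

# Hypothetical parameter values standing in for the two input fields.
params = {'freesearch': 'value1', 'limit_content': 'value2'}
url = '/list?' + urlencode(params)  # percent-encodes values as needed
print(url)  # /list?freesearch=value1&limit_content=value2
```

`urlencode` also takes care of escaping characters like spaces and `&` inside the values, which hand-built strings usually get wrong.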
| 0 | 2016-10-12T10:27:49Z | [
"javascript",
"python",
"flask",
"jinja2"
] |
How to dynamically print list values? | 39,996,254 | <p>This Code Works Fine.....But it's like static.
I don't know what to do to make it work in dynamic way?
I want:-
When user inputs 3 number of cities it should give</p>
<blockquote>
<p>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city
2 and "+li[2]+" as city 3 on your trip"</p>
</blockquote>
<p>Similaraly when input is 5 cities it should go to 5 times</p>
<pre><code>li = []
global a
number_of_cities = int(raw_input("Enter Number of Cities -->"))
for city in range(number_of_cities):
    li.append(raw_input("Enter City Name -->"))
print li
a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
a = a.split(" ")
print "\nSplitted First Sentence looks like"
print a
print "\nJoined First Sentence and added 1"
index = 0
for word in a:
    if word.isdigit():
        a[index] = str(int(word)+1)
    index += 1
print " ".join(a)
</code></pre>
| 0 | 2016-10-12T10:24:25Z | 39,996,385 | <p>You should do something like this</p>
<pre><code>a = 'You would like to visit ' + ' and '.join('{0} as city {1}'.format(city, index) for index, city in enumerate(li, 1)) + ' on your trip'
</code></pre>
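<p>Filled in with concrete values (my own runnable illustration of the one-liner above; the city names are made up):</p>

```python
li = ['Paris', 'Rome', 'Tokyo']

# enumerate(li, 1) numbers the cities starting at 1, matching the
# "as city 1", "as city 2", ... phrasing from the question.
a = 'You would like to visit ' + ' and '.join(
    '{0} as city {1}'.format(city, index) for index, city in enumerate(li, 1)
) + ' on your trip'
print(a)
# You would like to visit Paris as city 1 and Rome as city 2 and Tokyo as city 3 on your trip
```

The same expression works unchanged for any number of cities, which is what makes it "dynamic" compared with indexing `li[0]`, `li[1]`, `li[2]` by hand.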
| 2 | 2016-10-12T10:30:51Z | [
"python"
] |
How to dynamically print list values? | 39,996,254 | <p>This Code Works Fine.....But it's like static.
I don't know what to do to make it work in dynamic way?
I want:-
When user inputs 3 number of cities it should give</p>
<blockquote>
<p>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city
2 and "+li[2]+" as city 3 on your trip"</p>
</blockquote>
<p>Similaraly when input is 5 cities it should go to 5 times</p>
<pre><code>li = []
global a
number_of_cities = int(raw_input("Enter Number of Cities -->"))
for city in range(number_of_cities):
    li.append(raw_input("Enter City Name -->"))
print li
a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
a = a.split(" ")
print "\nSplitted First Sentence looks like"
print a
print "\nJoined First Sentence and added 1"
index = 0
for word in a:
    if word.isdigit():
        a[index] = str(int(word)+1)
    index += 1
print " ".join(a)
</code></pre>
| 0 | 2016-10-12T10:24:25Z | 39,996,412 | <p>You can build the string with a combination of <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow">string formatting</a>, <a href="https://docs.python.org/2/library/stdtypes.html#str.join" rel="nofollow"><code>str.join</code></a> and <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a>:</p>
<pre><code>a = "You would like to visit {} on your trip".format(
" and ".join("{} as city {}".format(city, i)
for i, city in enumerate(li, 1)))
</code></pre>
<p><code>str.join()</code> is given a <a href="https://docs.python.org/2/reference/expressions.html#generator-expressions" rel="nofollow">generator expression</a> as the iterable argument. The curly braces (<code>{}</code>) in the strings are <a href="https://docs.python.org/2/library/string.html#formatstrings" rel="nofollow">replacement fields</a> (placeholders), which will be replaced by positional arguments when formatting. For example</p>
<pre><code>'{} {}'.format('a', 'b')
</code></pre>
<p>will produce the string <code>"a b"</code>, as will</p>
<pre><code># explicit positional argument specifiers
'{0} {1}'.format('a', 'b')
</code></pre>
| 0 | 2016-10-12T10:32:06Z | [
"python"
] |
How to dynamically print list values | 39,996,254 | <p>This code works fine, but it's static.
I don't know what to do to make it work in a dynamic way.
I want this:
when the user inputs 3 cities, it should give</p>
<blockquote>
<p>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city
2 and "+li[2]+" as city 3 on your trip"</p>
</blockquote>
<p>Similarly, when the input is 5 cities it should do this 5 times</p>
<pre><code>li = []
global a
number_of_cities = int(raw_input("Enter Number of Cities -->"))
for city in range(number_of_cities):
li.append(raw_input("Enter City Name -->"))
print li
a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
a = a.split(" ")
print "\nSplitted First Sentence looks like"
print a
print "\nJoined First Sentence and added 1"
index = 0
for word in a:
if word.isdigit():
a[index] = str(int(word)+1)
index += 1
print " ".join(a)
</code></pre>
| 0 | 2016-10-12T10:24:25Z | 39,996,508 | <p>Create another for loop and save your cities to an array. Afterwards, concat the array using "join" and put everything inside the string:</p>
<pre><code>cities = []
for i in range(len(li)):
    cities.append("%s as city %d" % (li[i], i + 1))  # i + 1 so the numbering starts at city 1
cities_str = " and ".join(cities)
a = "You would like to visit %s on your trip" % cities_str
</code></pre>
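<p>For instance, a standalone version of this approach with three hypothetical city names (using <code>i + 1</code> so the numbering starts at city 1):</p>

```python
li = ["Paris", "Rome", "Oslo"]  # hypothetical city names

cities = []
for i in range(len(li)):
    cities.append("%s as city %d" % (li[i], i + 1))  # i + 1: numbering starts at 1
a = "You would like to visit %s on your trip" % " and ".join(cities)
print(a)
```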
| 0 | 2016-10-12T10:37:43Z | [
"python"
] |
Correct way to get allowed arguments from ArgumentParser | 39,996,295 | <p><strong>Question:</strong> What is the intended / official way of accessing possible arguments from an existing <code>argparse.ArgumentParser</code> object?</p>
<p><strong>Example:</strong> Let's assume the following context:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', type=str)
</code></pre>
<p>Here I'd like to get the following list of allowed arguments:</p>
<pre><code>['-h', '--foo', '--help', '-f']
</code></pre>
<p>I found the following workaround which does the trick for me</p>
<pre><code>parser._option_string_actions.keys()
</code></pre>
<p>But I'm not happy with it, as it involves accessing a <code>_</code>-member that is not officially documented. What's the correct alternative for this task?</p>
| 7 | 2016-10-12T10:26:59Z | 39,997,213 | <p>I don't think there is a "better" way to achieve what you want.</p>
<hr>
<p>If you really don't want to use the <code>_option_string_actions</code> attribute, you could process the <code>parser.format_usage()</code> to retrieve the options, but doing this, you will get only the short options names.</p>
<p>If you want both short and long options names, you could process the <code>parser.format_help()</code> instead.</p>
<p>This process can be done with a very simple regular expression: <code>-+\w+</code></p>
<pre><code>import re
OPTION_RE = re.compile(r"-+\w+")
PARSER_HELP = """usage: test_args_2.py [-h] [--foo FOO] [--bar BAR]
optional arguments:
-h, --help show this help message and exit
--foo FOO, -f FOO a random options
--bar BAR, -b BAR a more random option
"""
options = set(OPTION_RE.findall(PARSER_HELP))
print(options)
# set(['-f', '-b', '--bar', '-h', '--help', '--foo'])
</code></pre>
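<p>The same regular expression also works directly against a live parser's generated help text, instead of a hard-coded string — a sketch:</p>

```python
import argparse
import re

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument('--foo', '-f')

# every token that starts with one or more dashes in the help output
options = set(re.findall(r"-+\w+", parser.format_help()))
print(sorted(options))  # ['--foo', '--help', '-f', '-h']
```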
<hr>
<p>Or you could first make a dictionnary which contains the argument parser configuration and then build the argmuent parser from it. Such a dictionnary could have the option names as key and the option configuration as value. Doing this, you can access the options list via the dictionnary keys flattened with <a href="https://docs.python.org/2/library/itertools.html?highlight=itertools#itertools.chain" rel="nofollow">itertools.chain</a>:</p>
<pre><code>import argparse
import itertools
parser_config = {
('--foo', '-f'): {"help": "a random options", "type": str},
('--bar', '-b'): {"help": "a more random option", "type": int, "default": 0}
}
parser = argparse.ArgumentParser()
for option, config in parser_config.items():
parser.add_argument(*option, **config)
print(parser.format_help())
# usage: test_args_2.py [-h] [--foo FOO] [--bar BAR]
#
# optional arguments:
# -h, --help show this help message and exit
# --foo FOO, -f FOO a random options
# --bar BAR, -b BAR a more random option
print(list(itertools.chain(*parser_config.keys())))
# ['--foo', '-f', '--bar', '-b']
</code></pre>
<p>This last way is what I would do, if I was reluctant to use <code>_option_string_actions</code>.</p>
| 3 | 2016-10-12T11:15:54Z | [
"python",
"argparse"
] |
Correct way to get allowed arguments from ArgumentParser | 39,996,295 | <p><strong>Question:</strong> What is the intended / official way of accessing possible arguments from an existing <code>argparse.ArgumentParser</code> object?</p>
<p><strong>Example:</strong> Let's assume the following context:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', type=str)
</code></pre>
<p>Here I'd like to get the following list of allowed arguments:</p>
<pre><code>['-h', '--foo', '--help', '-f']
</code></pre>
<p>I found the following workaround which does the trick for me</p>
<pre><code>parser._option_string_actions.keys()
</code></pre>
<p>But I'm not happy with it, as it involves accessing a <code>_</code>-member that is not officially documented. What's the correct alternative for this task?</p>
| 7 | 2016-10-12T10:26:59Z | 39,997,271 | <p>I have to agree with Tryph's answer.</p>
<p>Not pretty, but you can retrieve them from <code>parser.format_help()</code>:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', type=str)
goal = parser._option_string_actions.keys()
def get_allowed_arguments(parser):
lines = parser.format_help().split('\n')
line_index = 0
number_of_lines = len(lines)
found_optional_arguments = False
# skip the first lines until the section 'optional arguments'
while line_index < number_of_lines:
if lines[line_index] == 'optional arguments:':
found_optional_arguments = True
line_index += 1
break
line_index += 1
result_list = []
if found_optional_arguments:
while line_index < number_of_lines:
arg_list = get_arguments_from_line(lines[line_index])
if len(arg_list) == 0:
break
result_list += arg_list
line_index += 1
return result_list
def get_arguments_from_line(line):
if line[:2] != ' ':
return []
arg_list = []
i = 2
N = len(line)
inside_arg = False
arg_start = 2
while i < N:
if line[i] == '-' and not inside_arg:
arg_start = i
inside_arg = True
elif line[i] in [',',' '] and inside_arg:
arg_list.append(line[arg_start:i+1])
inside_arg = False
i += 1
return arg_list
answer = get_allowed_arguments(parser)
</code></pre>
<p>There's probably a regular expressions alternative to the above mess...</p>
| 0 | 2016-10-12T11:19:22Z | [
"python",
"argparse"
] |
Correct way to get allowed arguments from ArgumentParser | 39,996,295 | <p><strong>Question:</strong> What is the intended / official way of accessing possible arguments from an existing <code>argparse.ArgumentParser</code> object?</p>
<p><strong>Example:</strong> Let's assume the following context:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', type=str)
</code></pre>
<p>Here I'd like to get the following list of allowed arguments:</p>
<pre><code>['-h', '--foo', '--help', '-f']
</code></pre>
<p>I found the following workaround which does the trick for me</p>
<pre><code>parser._option_string_actions.keys()
</code></pre>
<p>But I'm not happy with it, as it involves accessing a <code>_</code>-member that is not officially documented. What's the correct alternative for this task?</p>
| 7 | 2016-10-12T10:26:59Z | 40,007,285 | <p>First a note on the <code>argparse</code> docs - it's basically a how-to-use document, not a formal API. The standard for what <code>argparse</code> does is the code itself, the unit tests (<code>test/test_argparse.py</code>), and a paralyzing concern for backward compatibility.</p>
<p>There's no 'official' way of accessing <code>allowed arguments</code>, because users usually don't need to know that (other than reading the <code>help/usage</code>).</p>
<p>But let me illustrate with a simple parser in an interactive session:</p>
<pre><code>In [247]: parser=argparse.ArgumentParser()
In [248]: a = parser.add_argument('pos')
In [249]: b = parser.add_argument('-f','--foo')
</code></pre>
<p><code>add_argument</code> returns the Action object that it created. This isn't documented, but obvious to anyone who has created a parser interactively.</p>
<p>The <code>parser</code> object has a <code>repr</code> method, that displays major parameters. But it has many more attributes, which you can see with <code>vars(parser)</code>, or <code>parser.<tab></code> in Ipython.</p>
<pre><code>In [250]: parser
Out[250]: ArgumentParser(prog='ipython3', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)
</code></pre>
<p>The Actions too have <code>repr</code>; the Action subclass is determined by the <code>action</code> parameter.</p>
<pre><code>In [251]: a
Out[251]: _StoreAction(option_strings=[], dest='pos', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None)
In [252]: b
Out[252]: _StoreAction(option_strings=['-f', '--foo'], dest='foo', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None)
</code></pre>
<p><code>vars(a)</code> etc can be used to see all attributes.</p>
<p>A key <code>parser</code> attribute is <code>_actions</code>, a list of all defined Actions. This is the basis for all parsing. Note it includes the <code>help</code> action that was created automatically. Look at <code>option_strings</code>; that determines whether the Action is positional or optional.</p>
<pre><code>In [253]: parser._actions
Out[253]:
[_HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None),
_StoreAction(option_strings=[], dest='pos',....),
_StoreAction(option_strings=['-f', '--foo'], dest='foo', ...)]
</code></pre>
<p><code>_option_string_actions</code> is a dictionary, mapping from <code>option_strings</code> to Actions (the same objects that appear in <code>_actions</code>). References to those Action objects appear all over the place in <code>argparse</code> code.</p>
<pre><code>In [255]: parser._option_string_actions
Out[255]:
{'--foo': _StoreAction(option_strings=['-f', '--foo'],....),
'--help': _HelpAction(option_strings=['-h', '--help'],...),
'-f': _StoreAction(option_strings=['-f', '--foo'], dest='foo',...),
'-h': _HelpAction(option_strings=['-h', '--help'], ....)}
In [256]: list(parser._option_string_actions.keys())
Out[256]: ['-f', '--help', '-h', '--foo']
</code></pre>
<p>Note that there is a key for each <code>-</code> string, long or short; but there's nothing for <code>pos</code>, the positional has an empty <code>option_strings</code> parameter.</p>
<p>If that list of keys is what you want, use it, and don't worry about the <code>_</code>. It does not have a 'public' alias.</p>
<p>I can understand parsing the <code>help</code> to discover the same; but that's a lot of work to just avoid using a 'private' attribute. If you worry about the undocumented attribute being changed, you should also worry about the help format being changed. That isn't part of the docs either.</p>
<p><code>help</code> layout is controlled by <code>parser.format_help</code>. The <code>usage</code> is created from information in <code>self._actions</code>. Help lines from information in</p>
<pre><code> for action_group in self._action_groups:
formatter.add_arguments(action_group._group_actions)
</code></pre>
<p>(you don't want to get into <code>action groups</code> do you?).</p>
<p>There is another way of getting the <code>option_strings</code> - collect them from the <code>_actions</code>:</p>
<pre><code>In [258]: [a.option_strings for a in parser._actions]
Out[258]: [['-h', '--help'], [], ['-f', '--foo']]
</code></pre>
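<p>In plain script form (outside the interactive session), that same collection is a one-liner — a sketch for the parser from the question:</p>

```python
import argparse
from itertools import chain

parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f')

# flatten every action's option_strings into one list of allowed options
allowed = list(chain.from_iterable(a.option_strings for a in parser._actions))
print(allowed)  # ['-h', '--help', '--foo', '-f']
```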
<p>===================</p>
<p>Delving in to code details a bit:</p>
<p><code>parser.add_argument</code> creates an Action, and then passes it to <code>parser._add_action</code>. This is the method that populates both <code>_actions</code> and <code>_option_string_actions</code>.</p>
<pre><code> self._actions.append(action)
for option_string in action.option_strings:
self._option_string_actions[option_string] = action
</code></pre>
| 0 | 2016-10-12T19:58:25Z | [
"python",
"argparse"
] |
Correct way to get allowed arguments from ArgumentParser | 39,996,295 | <p><strong>Question:</strong> What is the intended / official way of accessing possible arguments from an existing <code>argparse.ArgumentParser</code> object?</p>
<p><strong>Example:</strong> Let's assume the following context:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', type=str)
</code></pre>
<p>Here I'd like to get the following list of allowed arguments:</p>
<pre><code>['-h', '--foo', '--help', '-f']
</code></pre>
<p>I found the following workaround which does the trick for me</p>
<pre><code>parser._option_string_actions.keys()
</code></pre>
<p>But I'm not happy with it, as it involves accessing a <code>_</code>-member that is not officially documented. What's the correct alternative for this task?</p>
| 7 | 2016-10-12T10:26:59Z | 40,032,361 | <p>This started as a joke answer, but I've learned something since - so I'll post it.</p>
<p>Assume, we know the maximum length of an option allowed. Here is a nice answer to the question in this situation:</p>
<pre><code>from itertools import combinations
def parsable(option):
try:
return len(parser.parse_known_args(option.split())[1]) != 2
except:
return False
def test(tester, option):
return any([tester(str(option) + ' ' + str(v)) for v in ['0', '0.0']])
def allowed_options(parser, max_len=3, min_len=1):
acceptable = []
for l in range(min_len, max_len + 1):
for option in combinations([c for c in [chr(i) for i in range(33, 127)] if c != '-'], l):
option = ''.join(option)
acceptable += [p + option for p in ['-', '--'] if test(parsable, p + option)]
return acceptable
</code></pre>
<p>Of course this is very pedantic as the question doesn't require any specific runtime. So I'll ignore that here. I'll also disregard that the above version produces a mess of output because <a href="http://stackoverflow.com/a/2829036/2747160">one can get rid of it easily</a>.</p>
<p>But more importantly, this method detected the following interesting <code>argparse</code> "features":</p>
<ul>
<li>In the OP example, <code>argparse</code> would also allow <code>--fo</code>. This has to be a bug.</li>
<li>But further, in the OP example again, <code>argparse</code> would also allow <code>-fo</code> (ie. setting <code>foo</code> to <code>o</code> without space or anything). This is documented and intended, but I didn't know it.</li>
</ul>
<p>Because of this, a correct solution is a bit longer and would look something like this (only <code>parsable</code> changes, I'll omit the other methods):</p>
<pre><code>def parsable(option):
try:
default = vars(parser.parse_known_args(['--' + '0' * 200])[0])
parsed, remaining = parser.parse_known_args(option.split())
if len(remaining) == 2:
return False
parsed = vars(parsed)
for k in parsed.keys():
try:
if k in default and default[k] != parsed[k] and float(parsed[k]) != 0.0:
return False # Filter '-fx' cases where '-f' is the argument and 'x' the value.
except:
return False
return True
except:
return False
</code></pre>
<p><strong>Summary</strong>: Besides all the restrictions (runtime and fixed maximum option length), this is the only answer that correctly respects the real <code>parser</code> behavior - however buggy it may even be. So here you are, a perfect answer that is absolutely useless.</p>
| 1 | 2016-10-13T22:58:45Z | [
"python",
"argparse"
] |
Pykwalify: YAML Schema validation error | 39,996,464 | <p>I am writing a config file in <code>YAML</code> and its corresponding schema in <code>PyKwalify</code>.</p>
<p>when I compile with <code>pykwalify</code>, I get this error</p>
<pre><code>NotMappingError: error code 6: Value: None is not of a mapping type: Path: '/'
</code></pre>
<p>what does this error imply? </p>
| 0 | 2016-10-12T10:35:22Z | 40,005,476 | <p>It implies that instead of providing a mapping, which could have the form of a block style:</p>
<pre><code>a: 1
b: 2
</code></pre>
<p>or of a flow style:</p>
<pre><code>{a: 1, b: 2}
</code></pre>
<p>you provided the null scalar (<code>null</code>, <code>~</code>) or no scalar:</p>
<pre><code>x:
</code></pre>
<p>or </p>
<pre><code>x: null
</code></pre>
<p>would load <code>None</code> in Python as value for the key <code>x</code>, whereas</p>
<pre><code>x:
a: 1
b: 1
</code></pre>
<p>would load a dictionary/mapping as the value for key <code>x</code>. Please note that if you make mistakes with indentation or mix in TAB characters, you can get something that looks OK in your editor but doesn't parse as expected.</p>
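<p>In this case the failing path is <code>'/'</code>, i.e. the root of the document: an empty file, or one containing only comments, loads as <code>None</code> and raises exactly this error. A minimal sketch of a file whose root is a mapping (the keys here are made up):</p>

```yaml
# An empty file, or one holding only comments, loads as None and
# raises "error code 6 ... Path: '/'". Giving the root real keys fixes it:
name: example
version: 1
```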
| 0 | 2016-10-12T18:07:11Z | [
"python",
"yaml"
] |
Find a specific word from a list in python | 39,996,485 | <p>So I have a list like below-</p>
<pre><code>list = [scaler-1, scaler-2, scaler-3, backend-1, backend-2, backend-3]
</code></pre>
<p>I want to create another list from it with the words which start with 'backend'. How can I do that?</p>
<p>Please note the content of the list will change from system to system, I want my code to be dynamic, any help?</p>
| -3 | 2016-10-12T10:36:49Z | 39,996,548 | <p>First off, do not use the name <code>list</code> for assignment to your objects, you'll shadow the builtin list type.</p>
<p>Then, you can use a <em>list comprehension</em> with <a href="https://docs.python.org/2/library/stdtypes.html#str.startswith" rel="nofollow"><code>str.startswith</code></a> in a filter:</p>
<pre><code>new_lst = [x for x in lst if x.startswith('backend')]
</code></pre>
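<p>For example, with the sample values from the question written as strings:</p>

```python
lst = ['scaler-1', 'scaler-2', 'scaler-3', 'backend-1', 'backend-2', 'backend-3']

# keep only the entries whose name starts with 'backend'
new_lst = [x for x in lst if x.startswith('backend')]
print(new_lst)  # ['backend-1', 'backend-2', 'backend-3']
```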
| 3 | 2016-10-12T10:40:01Z | [
"python"
] |
handling a lot of parameters in luigi | 39,996,544 | <p>In a lot of my projects I use <a href="http://luigi.readthedocs.io/en/stable/" rel="nofollow">luigi</a> as a pipelining tool. This made me think of employing it to implement a parameter search. The standard <code>luigi.file.LocalTarget</code> has a very naive approach to deal with parameters, which is also shown in the doc's examples:</p>
<pre><code>def output(self):
return luigi.LocalTarget("data/artist_streams_%s.tsv" % self.date_interval)
</code></pre>
<p>Namely, the parameters get saved in the file name. This makes it easy to check if a certain parameter combination is already computed. This gets messy as soon as the parameters for a task are more complex.</p>
<p>Here is the very simple idea of a parameter search:</p>
<pre><code>import luigi
class Sum(luigi.Task):
long_ = luigi.Parameter()
list_ = luigi.Parameter()
of = luigi.Parameter()
parameters = luigi.Parameter()
def output(self):
return luigi.LocalTarget("task{}_{}_{}_{}.txt".format(self.long_,
self.list_,
self.of,
self.parameters))
def run(self):
sum_ = self.long_ + self.list_ + self.of + self.parameters
with self.output().open('w') as out_file:
out_file.write(str(sum_))
class ParameterSearch(luigi.Task):
def requires(self):
list_of_parameter_combinations = [
{
"long_" : 1,
"list_" : 2,
"of" : 3,
"parameters" : 4
},{
"long_" : 5,
"list_" : 6,
"of" : 7,
"parameters" : 8
}
]
for pc in list_of_parameter_combinations:
yield Sum(**pc)
</code></pre>
<p>Sure, in this example, all four parameters can be encoded in the file name, but it does not require a lot of phantasy, that this approach can reach boundaries. Think for example of array-like parameters.</p>
<p>My follow up idea was to store the parameters and the result in some kind of <strong>envelope</strong> object, which can then be saved as a target.
The file name could then be some kind of hash of the parameters for a first fuzzy search.</p>
<p>There is the envelope class</p>
<pre><code>class Envelope(object):
@classmethod
def hashify(cls, params):
return hash(frozenset(params.items()))
def __init__(self, result, **params):
self.params = {}
for k in params:
self.params[k] = params.get(k)
def hash(self):
return Envelope.hashify(self.params)
</code></pre>
<p>Then, there is the new Target, that enhances the LocalTarget and is able to check if all parameters inside the envelope are matching:</p>
<pre><code>class EnvelopedTarget(luigi.file.LocalTarget):
fs = luigi.file.LocalFileSystem()
def __init__(self, params, path=None, format=None, is_tmp=False):
self.path = path
self.params = params
if format is None:
format = luigi.file.get_default_format()
if not path:
if not is_tmp:
raise Exception('path or is_tmp must be set')
path = os.path.join(tempfile.gettempdir(), 'luigi-tmp-%09d' % random.randint(0, 999999999))
super(EnvelopedTarget, self).__init__(path)
self.format = format
self.is_tmp = is_tmp
def exists(self):
path = self.path
if '*' in path or '?' in path or '[' in path or '{' in path:
logger.warning("Using wildcards in path %s might lead to processing of an incomplete dataset; "
"override exists() to suppress the warning.", path)
if self.fs.exists(path):
with self.open() as fin:
envelope = pickle.load(fin)
try:
assert len(envelope.params) == len(self.params)
for param,paramval in self.params.items():
assert paramval == envelope.params.get(param)
except(AssertionError):
return False
return True
else:
return False
</code></pre>
<p>The problem here is, that using this target adds some boilerplate which originally luigi aims to minimize. I set up a new base task</p>
<pre><code>class BaseTask(luigi.Task):
def output(self, envelope):
path = '{}{}.txt'.format(type(self).__name__, envelope.hash())
params = envelope.params
return EnvelopedTarget(params, path=path)
def complete(self):
envelope = Envelope(None, **self.param_kwargs)
outputs = flatten(self.output(envelope))
if len(outputs) == 0:
warnings.warn(
"Task %r without outputs has no custom complete() method" % self,
stacklevel=2
)
return False
return all(map(lambda output: output.exists(), outputs))
def run(self):
result, outparams = self.my_run()
envelope = Envelope(result, **outparams)
with self.output(envelope).open('w') as fout:
pickle.dump(envelope, fout)
</code></pre>
<p>The resulting <code>EnvelopedSum</code> Task would then be pretty small:</p>
<pre><code>class EnvelopedSum(BaseTask):
long_ = luigi.Parameter()
list_ = luigi.Parameter()
of = luigi.Parameter()
parameters = luigi.Parameter()
def my_run(self):
return sum(self.param_kwargs.values()), self.param_kwargs
</code></pre>
<p>This task can be run in the same fashion as the <code>Sum</code> task in the beginning.</p>
<p>Note: this example implementation of how to envelope the luigi-task-results is far from stable and is more of an illustration of what I mean by enveloping the results and the parameters.</p>
<p><strong>My question is</strong>: Isn't there a simpler way to deal with a lot of complex parameters in luigi? </p>
<p><strong>Followup-question</strong>: Has anyone thought about keeping a record of the code version (and/or package versions of subtasks) on which the parameter search was performed?</p>
<p>Any comments on where to read up on this topic are also appreciated.</p>
<p><strong>Note:</strong></p>
<p>you probably need some imports to make this run:</p>
<pre><code>from luigi.task import flatten
import warnings
import pickle
</code></pre>
| 0 | 2016-10-12T10:39:48Z | 40,075,143 | <p>You might get a better response to this sort of proposal on the mailing list. The Luigi task code already generates an MD5 hash of the parameters to build a unique task identifier, which you could grab.</p>
<p><a href="https://github.com/spotify/luigi/blob/master/luigi/task.py#L79-L82" rel="nofollow">https://github.com/spotify/luigi/blob/master/luigi/task.py#L79-L82</a></p>
<pre><code># task_id is a concatenation of task family, the first values of the first 3 parameters
# sorted by parameter name and a md5hash of the family/parameters as a cananocalised json.
param_str = json.dumps(params, separators=(',', ':'), sort_keys=True)
param_hash = hashlib.md5(param_str.encode('utf-8')).hexdigest()
</code></pre>
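<p>That hashing can be reproduced with the standard library alone, which gives a stable identifier (e.g. a file name) for any parameter dict — the parameter values below are made up:</p>

```python
import hashlib
import json

# hypothetical parameter dict; Luigi canonicalises it as compact, key-sorted JSON
params = {"of": "3", "list_": "2", "long_": "1", "parameters": "4"}

param_str = json.dumps(params, separators=(',', ':'), sort_keys=True)
param_hash = hashlib.md5(param_str.encode('utf-8')).hexdigest()
print(param_hash)  # 32-character hex digest
```

<p>Because the JSON is key-sorted, the digest does not depend on the order in which the parameters were added.</p>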
| 0 | 2016-10-16T20:33:21Z | [
"python",
"parameter-passing",
"luigi"
] |
Is it possible to consume from every queue with pika? | 39,996,583 | <p>I'm trying to set up a program that will consume from every queue in RabbitMQ and, depending on certain messages, run certain scripts. Unfortunately, while adding consumers, if it runs into a single error (e.g. a timeout or a queue not found) the entire channel is dead. Additionally, queues come and go, so it has to refresh the queue list quite often. Is this even possible?
Here is my code so far.</p>
<pre><code>import pika
import requests
import sys
try:
host = sys.argv[1]
except:
host = "localhost"
def get_queues(host="localhost", port=15672, user="guest", passwd="guest", virtual_host=None):
url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
response = requests.get(url, auth=(user, passwd))
return response.json()
queues = get_queues(host)
def get_on_message(queue):
def on_message(channel, method_frame, header_frame, body):
print("message from", queue)
channel.basic_ack(delivery_tag=method_frame.delivery_tag)
return on_message
connection = pika.BlockingConnection(pika.ConnectionParameters(host))
channel = connection.channel()
for queue in queues:
print(channel.is_open)
try:
channel.basic_consume(get_on_message(queue["name"]), queue["name"])
print("queue added",queue["name"])
except Exception as e:
print("queue failed",queue["name"])
sys.exit()
try:
channel.start_consuming()
except KeyboardInterrupt:
channel.stop_consuming()
connection.close()
</code></pre>
<p>Is there a right way to do this or is doing this not right at all?</p>
| 0 | 2016-10-12T10:41:48Z | 40,007,113 | <p>It is possible to consume from every queue in any language. It's also wrong, and if this is something that is required, then the whole design/setup should be re-thought.</p>
<p>EDIT after comments:</p>
<p>Basically, you'd need to get the names of all existing queues which can be programmatically done via the <a href="http://hg.rabbitmq.com/rabbitmq-management/raw-file/rabbitmq_v3_3_4/priv/www/api/index.html" rel="nofollow">rest api</a> (potentially even by calling rabbitmqctl and parsing the output). Once you have the names of the queues, you can simply consume from them as it is explained in <a href="http://www.rabbitmq.com/tutorials/tutorial-one-python.html" rel="nofollow">the tutorial</a>.</p>
<p>Once again, I don't think that this is the right way to go, and perhaps you should consider using a topic exchange - I'm guessing this since you wrote <code>queues come and go</code>.</p>
| 4 | 2016-10-12T19:48:21Z | [
"python",
"rabbitmq",
"pika"
] |
Python Minimum Skew function | 39,996,632 | <p>I used the following code to find MinimumSkew:</p>
<pre><code>genome = "TAAAGACTGCCGAGAGGCCAACACGAGTGCTAGAACGAGGGGCGTAAACGCGGGTCCGAT"
def Skew(genome):
Skew = {}
Skew[0] = 0
for i in range(1, len(genome)+1):
if genome[i - 1] == "G":
Skew[i] = Skew[i - 1] + 1
elif genome[i - 1] == "C":
Skew[i] = Skew[i - 1] - 1
else:
Skew[i] = Skew[i-1]
return Skew
Skew(genome)
def MinimumSkew(genome):
positions = [] # output variable
s = Skew(genome)
m = min(s.values())
for (k,v) in s.items():
if v == m:
positions.append(k)
return positions
print(MinimumSkew(genome))
</code></pre>
<p>I keep getting the error:
Failed test #5. Your code did not find all minimum skew indices.</p>
<p>Test Dataset:
CCGGCCGG</p>
<p>Your output:
[11]
2</p>
<p>Correct output:
2 6
Can anyone tell me what I am doing wrong?</p>
| 0 | 2016-10-12T10:44:40Z | 39,998,275 | <p>There are much easier solutions to calculate skew. Here is one approach:</p>
<pre><code>def skew(genome):
    res = []
    cntr = 0
    res.append(cntr)
    for i in genome:
        if i == 'C':
            cntr -= 1
        if i == "G":
            cntr += 1
        res.append(cntr)
    return [str(i) for i, j in enumerate(res) if j == min(res)]

print(skew('CCGGCCGG'))  # returns ['2', '6']
</code></pre>
<p>And your solution is good as well; you just need to fix the indentation:</p>
<pre><code>genome = "CCGGCCGG"
def Skew(genome):
    Skew = {}
    Skew[0] = 0
    for i in range(1, len(genome)+1):
        if genome[i - 1] == "G":
            Skew[i] = Skew[i - 1] + 1
        elif genome[i - 1] == "C":
            Skew[i] = Skew[i - 1] - 1
        else:
            Skew[i] = Skew[i-1]
    return Skew

def MinimumSkew(genome):
    positions = []  # output variable
    s = Skew(genome)
    m = min(s.values())
    for (k, v) in s.items():
        if v == m:
            positions.append(k)
    return positions

print(MinimumSkew(genome))
</code></pre>
<p>This returns <code>[2, 6]</code></p>
| 0 | 2016-10-12T12:10:20Z | [
"python"
] |
Scraping HTML data from website in Python | 39,996,721 | <p>I'm trying to scrape certain pieces of HTML data from certain websites, but I can't seem to scrape the parts I want. For instance I set myself the challenge of scraping the number of followers from <a href="http://freelegalconsultancy.blogspot.co.uk/" rel="nofollow">this blog</a>, but I can't seem to do so. </p>
<p>I've tried using urllib, request, beautifulsoup as well as <a href="https://www.jamapi.xyz/" rel="nofollow">Jam API</a>. </p>
<p>Here's what my code looks like at the moment:</p>
<pre><code>from bs4 import BeautifulSoup
from urllib import urlopen
import json
import urllib2
html = urlopen('http://freelegalconsultancy.blogspot.co.uk/')
soup = BeautifulSoup(html, "lxml")
print soup
</code></pre>
<p>How would I go about pulling the number of followers in this instance?</p>
| 0 | 2016-10-12T10:49:32Z | 39,996,983 | <p>You can't grab the followers as it's a widget loaded by javascript. You need to grab parts of the html by css class or id or by the element.</p>
<p>E.g:</p>
<pre><code>from bs4 import BeautifulSoup
from urllib import urlopen
html = urlopen('http://freelegalconsultancy.blogspot.co.uk/')
soup = BeautifulSoup(html)
assert soup.h1.string == '\nLAW FOR ALL-M.MURALI MOHAN\n'
</code></pre>
| 1 | 2016-10-12T11:02:46Z | [
"python",
"html"
] |
How to get a list of all of a database's secondary indexes in RethinkDB | 39,996,756 | <p>In Python, how can I get a list of all of a database's secondary indexes?</p>
<p>For example, in the web UI under "Tables" you can see the list of "Secondary indexes" (in this case only <code>timestamp</code>); I would like to get this in the Python environment.</p>
<p><a href="https://i.stack.imgur.com/jky7D.png" rel="nofollow"><img src="https://i.stack.imgur.com/jky7D.png" alt="enter image description here"></a></p>
| 0 | 2016-10-12T10:51:34Z | 39,997,277 | <p>Check <a href="https://www.rethinkdb.com/docs/secondary-indexes/python/" rel="nofollow">this</a> doc about secondary indexes in RethinkDB.</p>
<p>You can get a list of all indexes on a table (e.g. "users") using this query:</p>
<pre><code>r.table("users").index_list()
</code></pre>
<p>If you want to get all secondary indexes for all tables, you can query the table list and then get the indexes for each. I don't know Python, but in JavaScript you can do it using this query:</p>
<pre><code>r.tableList().map(function(table){
return {table: table,
indexes: r.table(table).indexList()};
})
</code></pre>
<p>I think in Python it looks like this (note that the dict keys have to be quoted strings in Python):</p>
<pre><code>r.table_list().map(lambda name: {"table": name, "indexes": r.table(name).index_list()})
</code></pre>
| 1 | 2016-10-12T11:19:33Z | [
"python",
"rethinkdb"
] |
Opening two webservers via one ps1 script | 39,996,792 | <p>I've tried running the following, via PowerShell, in a single ps1 script in the hope of two localservers opening on different ports - only 8080 is opened:</p>
<pre><code>cd "\\blah\Statistics\Reporting\D3\walkthroughs\letsMakeABarChart"
python -m http.server 8080
cd "\\foo\data_science\20161002a_d3js\examples"
python -m http.server 8000
</code></pre>
<p>Can I adjust it so both get opened?</p>
| 1 | 2016-10-12T10:53:24Z | 39,996,864 | <p>The first python invocation <em>doesn't return</em> (the HTTP server blocks), so you can use the <code>Start-Job</code> cmdlet to run each server in a background job:</p>
<pre><code>$job1 = start-job -scriptblock {
cd "\\blah\Statistics\Reporting\D3\walkthroughs\letsMakeABarChart"
python -m http.server 8080
}
$job2 = start-job -scriptblock {
cd "\\foo\data_science\20161002a_d3js\examples"
python -m http.server 8000
}
# Wait until both web servers terminates:
Wait-Job -Job ($job1, $job2)
</code></pre>
| 1 | 2016-10-12T10:56:53Z | [
"python",
"powershell"
] |
Generate random numbers in range from INPUT (python) | 39,996,814 | <p>This is a question close to some others I have found but they don't help me so I'm asking specifically for me and my purpose this time. </p>
<p>I'm coding for a bot which is supposed to ask the user for max and min in a range, then generate ten random numbers within that range. When validating I'm told both <code>random</code> and <code>i</code> are unused variables. I don't really get why. I believed <code>random.randint</code> was supposed to be a built-in function, and as far as <code>i</code> is concerned I really don't know what to believe. This is what I've got so far. </p>
<pre><code>def RandomNumbers():
"""
Asking the user for a min and max, then printing ten random numbers between min and max.
"""
print("Give me two numbers, a min and a max")
a = input("Select min. ")
b = input("Select max. ")
for i in range(10):
number = random.randint(a, b)
print(str(number)+ str(","), end="")
</code></pre>
<p>I'll be very happy for every piece of advice I can get to complete my task. Thank you in advance!</p>
| 3 | 2016-10-12T10:54:41Z | 39,996,891 | <p>No. <code>random.randint</code> is not a builtin function. You'll have to <em>import</em> the <code>random</code> module to use the function.</p>
<p>On another note, <code>i</code> was evidently not used in the loop, so you'll conventionally use the underscore <code>_</code> in place of <code>i</code></p>
<pre><code>import random

a, b = int(a), int(b)  # input() returns strings, so convert to int first
numbers = []
for _ in range(10):
    numbers.append(random.randint(a, b))
</code></pre>
<p>You'll also notice I have used a list to store all the values from each iteration. In that way, you don't throw away the values from previous iterations.</p>
<p>In case you're not already familiar with lists, you can check out the docs: </p>
<p><a href="https://docs.python.org/2/tutorial/datastructures.html#data-structures" rel="nofollow">Data structures</a></p>
<p><em>Lists:</em></p>
<blockquote>
<p>The items of a list are arbitrary Python objects. Lists are formed by
placing a comma-separated list of expressions in square brackets</p>
</blockquote>
<hr>
<p>On a final note, to print the items from your list, you can use the <a href="https://docs.python.org/2/library/stdtypes.html#str.join" rel="nofollow"><code>str.join</code></a> method, but only after the items in your list have been converted from integers to strings:</p>
<pre><code>output = ', '.join([str(num) for num in numbers])
print(output)
</code></pre>
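<p>Putting the pieces together, here is a minimal non-interactive sketch (fixed values for <code>a</code> and <code>b</code> instead of <code>input()</code>, so it runs as-is; adjust to taste):</p>

```python
import random

def random_numbers(a, b, count=10):
    # Return `count` random integers between a and b, inclusive.
    return [random.randint(a, b) for _ in range(count)]

numbers = random_numbers(1, 6)
print(', '.join(str(num) for num in numbers))
```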
| 6 | 2016-10-12T10:58:40Z | [
"python"
] |
Generate random numbers in range from INPUT (python) | 39,996,814 | <p>This is a question close to some others I have found but they don't help me so I'm asking specifically for me and my purpose this time. </p>
<p>I'm coding for a bot which is supposed to ask the user for max and min in a range, then generate ten random numbers within that range. When validating I'm told both <code>random</code> and <code>i</code> are unused variables. I don't really get why. I believed <code>random.randint</code> was supposed to be a built-in function, and as far as <code>i</code> is concerned I really don't know what to believe. This is what I've got so far. </p>
<pre><code>def RandomNumbers():
"""
Asking the user for a min and max, then printing ten random numbers between min and max.
"""
print("Give me two numbers, a min and a max")
a = input("Select min. ")
b = input("Select max. ")
for i in range(10):
number = random.randint(a, b)
print(str(number)+ str(","), end="")
</code></pre>
<p>I'll be very happy for every piece of advice I can get to complete my task. Thank you in advance!</p>
| 3 | 2016-10-12T10:54:41Z | 39,997,027 | <p>A couple of points worth mentioning with your original function:</p>
<ul>
<li>the <code>random</code> module is not built-in, you have to explicitly import it.</li>
<li><code>input</code> always returns strings, so you have to convert them to integers before passing them to <code>random.randint</code>.</li>
<li><code>i</code> is indeed not used within your for loop. You may as well replace it with <code>_</code> (reinforcing the fact that you loop for the side effects, e.g. printing, and not for the variable itself).</li>
<li>More of a stylistic side note regarding function names: PEP 8 (the Python style guide) encourages lowercase with underscores to separate words rather than camel case (<code>random_numbers</code> vs <code>RandomNumbers</code>).</li>
</ul>
<p>Here's a working example:</p>
<pre><code>import random
def random_numbers():
"""
Asking the user for a min and max, then printing ten random numbers between min and max.
"""
print("Give me two numbers, a min and a max")
a = int(input("Select min. "))
b = int(input("Select max. "))
numbers = [random.randint(a, b) for i in range(10)]
print(','.join(str(n) for n in numbers))
random_numbers()
</code></pre>
| 2 | 2016-10-12T11:04:56Z | [
"python"
] |
How to create a mapping to execute python code from Vim? | 39,996,852 | <p>Here's the steps I follow to execute python code from Vim:</p>
<ol>
<li>I write a python source code using Vim</li>
<li>I execute it via <code>:w !python</code></li>
</ol>
<p>and I'd like a shorter way to execute the code, so I searched the Internet for a shortcut key such as this map:</p>
<pre><code>autocmd FileType python nnoremap <buffer> <F9> :exec '!python' shellescape(@%, 1)<cr>
</code></pre>
<p>and added it to the <code>_vimrc</code> file. But when I hit <kbd>F9</kbd>, the following error message appears:</p>
<p><a href="https://i.stack.imgur.com/GTfPC.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/GTfPC.jpg" alt="enter image description here"></a></p>
<p>Here is the relevant text in the image above:</p>
<pre><code>python: can't open file ''D:/py2.py'': [Errno 22] Invalid argument
shell returned 2.
</code></pre>
<p>I have already searched the Internet, but I have not got any result.</p>
<p>I use gVim 8.0 in Windows 10, if that helps.</p>
| 0 | 2016-10-12T10:56:21Z | 39,996,978 | <p>To run shell commands from Vim, the preferred way is the bang (<code>!</code>) command.<br>Instead of:</p>
<pre><code>:exec '!python' shellescape(@%, 1)<cr>
</code></pre>
<p>Try doing the easy way:</p>
<pre><code>:update<bar>!python %<CR>
</code></pre>
<p>here in the above code:<br>
<code>update</code> saves the file if it was modified (<em>preferred</em>); you can also use <code>w</code>.<br>
<code><bar></code> will pipe the commands properly.<br>
<code>!python %</code> will run the file as <code>python filename</code>.<br></p>
<p>So basically, put this in your <code>.vimrc</code> file, and that'll do it:</p>
<pre><code>autocmd FileType python nnoremap <buffer> <F9> :update<bar>!python %<CR>
</code></pre>
| 1 | 2016-10-12T11:02:36Z | [
"python",
"vim",
"compilation",
"shortcut"
] |
Need Help on Python | 39,997,053 | <p>I just wrote the code below, and the problem is when I write BANKISGADZARCVA, it still shows a print of WESIERIMUSHAOBA.</p>
<pre><code>print("Gamarjoba")
print("tqveni davalebaa ishovot fuli valis gadasaxdelad")
print("Fulis sashovnelad gaqvt ori gza, WESIERIMUSHAOBA da BANKISGADZARCVA")
input('Airchiet Fulis Shovnis Gza: ')
if "WESIERIMUSHAOBA":
print("Sadaa Samushao Am Mtavrobis Xelshi")
elif "BANKISGADZARCVA":
print("Axlobeltan tu ucxostanertad")
</code></pre>
| -2 | 2016-10-12T11:06:32Z | 39,997,082 | <p>Python needs indentation, and your <code>if</code> tests a non-empty string literal, which is always true; you need to test the user's input instead. Try this:</p>
<pre><code>print("Gamarjoba")
print("tqveni davalebaa ishovot fuli valis gadasaxdelad")
print("Fulis sashovnelad gaqvt ori gza, WESIERIMUSHAOBA da BANKISGADZARCVA")
x = input('Airchiet Fulis Shovnis Gza: ')
if "WESIERIMUSHAOBA" in x:
print("Sadaa Samushao Am Mtavrobis Xelshi")
elif "BANKISGADZARCVA":
print("Axlobeltan tu ucxostanertad")
</code></pre>
<p>Also, there's no need for so many prints; you could compress them as follows:</p>
<pre><code>print('Gamarjoba\n'
'tqveni davalebaa ishovot fuli valis gadasaxdelad\n'
'Fulis sashovnelad gaqvt ori gza, WESIERIMUSHAOBA da BANKISGADZARCVA\n')
x = input('Airchiet Fulis Shovnis Gza: ')
if "WESIERIMUSHAOBA" in x:
print("Sadaa Samushao Am Mtavrobis Xelshi")
elif "BANKISGADZARCVA":
print("Axlobeltan tu ucxostanertad")
</code></pre>
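<p>The reason the original always printed the first branch: a non-empty string literal is always truthy in Python, so <code>if "WESIERIMUSHAOBA":</code> succeeds regardless of what the user typed. A tiny demonstration:</p>

```python
# A non-empty string is truthy; only the empty string is falsy.
branch = "always" if "WESIERIMUSHAOBA" else "never"
print(branch)
print(bool(""), bool("BANKISGADZARCVA"))
```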
<p>In PEP8 (which is what every Python developer should read) it's written that:</p>
<blockquote>
<p>Use 4 spaces per indentation level.</p>
<p>Continuation lines should align wrapped elements either vertically
using Python's implicit line joining inside parentheses, brackets and
braces, or using a hanging indent. When using a hanging indent
the following should be considered; there should be no arguments on
the first line and further indentation should be used to clearly
distinguish itself as a continuation line.</p>
</blockquote>
| -1 | 2016-10-12T11:08:00Z | [
"python"
] |
How can I add rows for all dates between two columns? | 39,997,063 | <pre><code>import pandas as pd
mydata = [{'ID' : '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016'},
{'ID' : '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016'}]
mydata2 = [{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '10/10/2016'},
{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '11/10/2016'},
{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '12/10/2016'},
{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '13/10/2016'},
{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '14/10/2016'},
{'ID': '10', 'Entry Date': '10/10/2016', 'Exit Date': '15/10/2016', 'Date': '15/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '10/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '11/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '12/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '13/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '14/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '15/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '16/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '17/10/2016'},
{'ID': '20', 'Entry Date': '10/10/2016', 'Exit Date': '18/10/2016', 'Date': '18/10/2016'},]
df = pd.DataFrame(mydata)
df2 = pd.DataFrame(mydata2)
</code></pre>
<p>I can't find an answer on how to change 'df' into 'df2'. Maybe I'm not phrasing it right.</p>
<p>I want to take all dates between the dates in two columns 'Entry Date', 'Exit Date', and make a row for each, entering a corresponding date for each row in a new column, 'Date'.</p>
<p>Any help would be greatly appreciated.</p>
| 1 | 2016-10-12T11:06:57Z | 39,997,231 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow"><code>melt</code></a> for reshaping, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> and remove column <code>variable</code>:</p>
<pre><code>#convert columns to datetime
df['Entry Date'] = pd.to_datetime(df['Entry Date'])
df['Exit Date'] = pd.to_datetime(df['Exit Date'])
df2 = pd.melt(df, id_vars='ID', value_name='Date')
df2.Date = pd.to_datetime(df2.Date)
df2.set_index('Date', inplace=True)
df2.drop('variable', axis=1, inplace=True)
print (df2)
ID
Date
2016-10-10 10
2016-10-10 20
2016-10-15 10
2016-10-18 20
</code></pre>
<p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="nofollow"><code>resample</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.ffill.html" rel="nofollow"><code>ffill</code></a> missing values:</p>
<pre><code>df3 = df2.groupby('ID').resample('D').ffill().reset_index(level=0, drop=True).reset_index()
print (df3)
Date ID
0 2016-10-10 10
1 2016-10-11 10
2 2016-10-12 10
3 2016-10-13 10
4 2016-10-14 10
5 2016-10-15 10
6 2016-10-10 20
7 2016-10-11 20
8 2016-10-12 20
9 2016-10-13 20
10 2016-10-14 20
11 2016-10-15 20
12 2016-10-16 20
13 2016-10-17 20
14 2016-10-18 20
</code></pre>
<p>Last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> original <code>DataFrame</code>:</p>
<pre><code>print (pd.merge(df, df3))
Entry Date Exit Date ID Date
0 2016-10-10 2016-10-15 10 2016-10-10
1 2016-10-10 2016-10-15 10 2016-10-11
2 2016-10-10 2016-10-15 10 2016-10-12
3 2016-10-10 2016-10-15 10 2016-10-13
4 2016-10-10 2016-10-15 10 2016-10-14
5 2016-10-10 2016-10-15 10 2016-10-15
6 2016-10-10 2016-10-18 20 2016-10-10
7 2016-10-10 2016-10-18 20 2016-10-11
8 2016-10-10 2016-10-18 20 2016-10-12
9 2016-10-10 2016-10-18 20 2016-10-13
10 2016-10-10 2016-10-18 20 2016-10-14
11 2016-10-10 2016-10-18 20 2016-10-15
12 2016-10-10 2016-10-18 20 2016-10-16
13 2016-10-10 2016-10-18 20 2016-10-17
14 2016-10-10 2016-10-18 20 2016-10-18
</code></pre>
| 1 | 2016-10-12T11:16:35Z | [
"python",
"datetime",
"pandas",
"resampling",
"melt"
] |
multiple console windows for one Python script | 39,997,081 | <p>I've seen similar questions such as this one: <a href="http://stackoverflow.com/questions/12122535">keep multiple console windows open from batch</a>.</p>
<p>However, I have a different situation. I do not want to run a different script in a different console window. My idea is to have socket running as a server and accepting all connections. When a connection is accepted, a new console window is created, and all in-coming and out-going data is shown there. Is that even possible?</p>
| 0 | 2016-10-12T11:07:58Z | 40,033,234 | <p>A process can only be attached to one console (i.e. instance of conhost.exe) at a time, and a console with no attached processes automatically closes. You would need to spawn a child process with <code>creationflags=CREATE_NEW_CONSOLE</code>. </p>
<p>The following demo script requires Windows Python 3.3+. It spawns two worker processes and duplicates each socket connection into the worker via <code>socket.share</code> and <code>socket.fromshare</code>. The marshaled socket information is sent to the child's <code>stdin</code> over a pipe. After loading the socket connection, the pipe is closed and <code>CONIN$</code> is opened as <code>sys.stdin</code> to read standard input from the console.</p>
<pre><code>import sys
import time
import socket
import atexit
import threading
import subprocess
HOST = 'localhost'
PORT = 12345
def worker():
conn = socket.fromshare(sys.stdin.buffer.read())
sys.stdin = open('CONIN$', buffering=1)
while True:
msg = conn.recv(1024).decode('utf-8')
if not msg:
break
print(msg)
conn.sendall(b'ok')
input('press enter to quit')
return 0
def client(messages):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((HOST, PORT))
for msg in messages:
s.sendall(msg.encode('utf-8'))
response = s.recv(1024)
if response != b'ok':
break
time.sleep(1)
procs = []
def server():
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen(1)
while True:
conn, addr = s.accept()
with conn:
p = subprocess.Popen(
['python', sys.argv[0], '-worker'],
stdin=subprocess.PIPE, bufsize=0,
creationflags=subprocess.CREATE_NEW_CONSOLE)
p.stdin.write(conn.share(p.pid))
p.stdin.close()
procs.append(p)
def cleanup():
for p in procs:
if p.poll() is None:
p.terminate()
if __name__ == '__main__':
if '-worker' in sys.argv[1:]:
sys.exit(worker())
atexit.register(cleanup)
threading.Thread(target=server, daemon=True).start()
tcli = []
for msgs in (['spam', 'eggs'], ['foo', 'bar']):
t = threading.Thread(target=client, args=(msgs,))
t.start()
tcli.append(t)
for t in tcli:
t.join()
input('press enter to quit')
</code></pre>
| 1 | 2016-10-14T00:45:24Z | [
"python",
"windows",
"windows-console"
] |
Is there an anonymous call of a method in Python? | 39,997,132 | <p>This is my database manager class:</p>
<pre><code>class DatabaseManager(object):
def __init__(self):
super().__init__()
self.conn = sqlite3.connect('artist.db')
self.c = self.conn.cursor()
def create_table(self):
self.c.execute(
'CREATE TABLE IF NOT EXISTS artists(title TEXT, artist TEXT, album TEXT, year INTEGER, '
'genre TEXT, ext TEXT, path TEXT, art TEXT )')
self.c.close()
self.conn.close()
def insert(self, song):
self.c.execute(
'INSERT INTO artists(title, artist, album, year, genre, ext, path, art) VALUES(?, ?, ?, ?, ?, ?, ?, ?)',
(song.title, song.artist, song.album, song.year, song.genre, song.ext, song.path, song.art))
self.conn.commit()
self.c.close()
self.conn.close()
def retrieve_all(self):
self.c.execute('SELECT * FROM artists')
data = self.c.fetchall()
self.c.close()
self.conn.close()
return data
</code></pre>
<p>There are more methods but this is not important. When I start the app I call <code>create_table()</code> and when I need some database action I create another object (or remake the past one) to call <code>insert</code>, for example.</p>
<pre><code> db = DatabaseManager()
print(db.retrieve_artist('asdf'))
db = DatabaseManager()
print(db.retrieve_all())
</code></pre>
<p>Is there any way to avoid this? In Java there are anonymous calls:
<code>new DatabaseManager().retrieve_artist("asdf")</code>
Is anything similar possible in Python?</p>
| 0 | 2016-10-12T11:11:11Z | 39,997,201 | <p>As @Remcogerlich said, you are assigning the instance to a variable. If you simply don't do that, you can get what you want.</p>
<pre><code>DatabaseManager().retrieve_artist('asdf')
AnyOtherClass().callAnyMethod()
# both disappeared
</code></pre>
<p>In this way, Python executes the instantiation, calls the method you want to, and since you don't tell it to assign to any variable, it "<em>disappears</em>".</p>
<p>You may even do that with class functions, like this:</p>
<pre><code>ClassWithClassMethods.callAnyClassMethod()
</code></pre>
<p>However, I wouldn't recommend working that way. Normally a database connection is established once, used for as long as you need it, and closed <em>safely</em> when you no longer need it. I think you are not doing that here, and you may be wasting resources, I guess (anyone correct me if I'm wrong). It's cleaner to create one object for the time you want to use the database, and once you finish in that piece of code, close the connection and discard the object.</p>
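<p>Following that advice, a minimal sketch of opening one connection, using it, and closing it safely (illustrative only: it uses an in-memory SQLite database and a simplified schema, not your <code>DatabaseManager</code>):</p>

```python
import sqlite3
from contextlib import closing

# `closing` guarantees conn.close() is called even if an exception is raised.
with closing(sqlite3.connect(':memory:')) as conn:
    c = conn.cursor()
    c.execute('CREATE TABLE artists(title TEXT, artist TEXT)')
    c.execute('INSERT INTO artists VALUES (?, ?)', ('some song', 'asdf'))
    conn.commit()
    c.execute('SELECT * FROM artists WHERE artist = ?', ('asdf',))
    rows = c.fetchall()

print(rows)
```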
| 0 | 2016-10-12T11:15:26Z | [
"python",
"python-2.7",
"python-3.x"
] |
Using Google Appengine Python (Webapp2) I need to authenticate to Microsoft's new V2 endpoint using OpenID Connect | 39,997,134 | <p>There are built in decorators that easily allow me to access Google's own services but how can I overload these decorators to call other endpoints, specifically Microsofts V2 Azure endpoint (I need to authenticate Office 365 users).</p>
<p>Code snippet which I would like to override to call other end points such as Microsofts:</p>
<p><a href="https://login.microsoftonline.com/common/oauth2/v2.0/authorize" rel="nofollow">https://login.microsoftonline.com/common/oauth2/v2.0/authorize</a></p>
<pre><code>decorator = OAuth2Decorator(
client_id='d4ea6ab9-adf4-4aec-9b99-675cf46ad37',
redirect_uri='',
client_secret='sW8rJYvWtCBVpge54L8684w',
scope='')
class Authtest(BaseRequestHandler):
@decorator.oauth_required
</code></pre>
<p>Any ideas greatly appreciated.
Thanks,
Ian</p>
| 0 | 2016-10-12T11:11:12Z | 40,134,416 | <p>Having wasted a lot of time on this I can confirm that you CAN overload the decorator to direct to the Azure V2 endpoint using the code below:</p>
<pre><code>decorator = OAuth2Decorator(
client_id='d4ea6ab9-adf4-4aec-9b99-675cf46XXX',
auth_uri='https://login.microsoftonline.com/common/oauth2/v2.0/authorize',
response_type='id_token',
response_mode='form_post',
client_secret='sW8rJYvWtCBVpgXXXXX',
extraQueryParameter='nux=1',
state='12345',
nonce='678910',
scope=['openid','email','profile'])
</code></pre>
<p>The problem is that the decorators are coded purely to handle Google APIs and cannot decode the response from Microsoft. While it may be possible to implement this myself by modifying the code in appengine.py, it's too much work.</p>
<p>So if you are looking to authenticate to the Microsoft Azure V2 endpoint via Appengine it is not possible by using the built in OAuth2Decorator this only works with Google's own services.</p>
| 0 | 2016-10-19T14:33:11Z | [
"python",
"google-app-engine",
"azure",
"office365",
"webapp2"
] |
Argparse: Default values not shown for subparsers | 39,997,152 | <p>I have the problem that I am not seeing any default values for arguments when specifying them via add_argument for subparsers using the argparse Python package.</p>
<p>Some research said that you need non-empty help-parameters set for each add_argument step and you need ArgumentDefaultsHelpFormatter as formatter_class as described here:</p>
<p><a href="http://stackoverflow.com/questions/12151306/argparse-way-to-include-default-values-in-help">Argparse: Way to include default values in '--help'?</a></p>
<p>That's not working for me, however. I suspect that somehow the subparser defaults are suppressed.</p>
<p>Here's an example:</p>
<pre><code>from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
parser = ArgumentParser(description="Sample script", formatter_class=ArgumentDefaultsHelpFormatter, version="sample version")
# Initialize Subparsers
subparsers = parser.add_subparsers(help="", dest="command")
# foo command
fooparser = subparsers.add_parser('foo', help='Do foo')
fooparser.add_argument('files', action='store', help='Foo file(s)' , nargs="+")
fooparser.add_argument("-5", "--Do5", type=int, required=False, dest="do5", help="Do 5 subprocedure.")
fooparser.add_argument("-n", "--topn", type=int, required=False, dest="topn", default=1, help="Show topn")
# bar command
barparser = subparsers.add_parser('bar', help='Do bar')
barparser.add_argument('files', action='store', help='Bar file(s)' , nargs="+")
barparser.add_argument("-mq", "--min-mq", type=int, required=False, default=2, dest="mq", help="Minimum MQ")
barparser.add_argument("-mi", "--min-identity", type=float, required=False, default=0.95, dest="identity", help="Minimum identity")
args = parser.parse_args()
</code></pre>
| 1 | 2016-10-12T11:12:20Z | 39,997,364 | <p>Specify <code>formatter_class</code> when adding sub-parsers.</p>
<pre><code>subparsers = parser.add_subparsers(help="", dest="command")
fooparser = subparsers.add_parser('foo', help='Do foo',
formatter_class=ArgumentDefaultsHelpFormatter)
...
barparser = subparsers.add_parser('bar', help='Do bar',
formatter_class=ArgumentDefaultsHelpFormatter)
...
</code></pre>
<p>Output of <code>python argparse_test.py --help foo</code>:</p>
<pre><code>usage: argparse_test.py foo [-h] [-5 DO5] [-n TOPN] files [files ...]
positional arguments:
files Foo file(s)
optional arguments:
-h, --help show this help message and exit
-5 DO5, --Do5 DO5 Do 5 subprocedure. (default: None)
-n TOPN, --topn TOPN Show topn (default: 1)
</code></pre>
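<p>As a quick self-contained check (a reduced sketch with only the <code>foo</code> sub-command and names from the question), the default is picked up when the argument is omitted:</p>

```python
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter

parser = ArgumentParser(description="Sample script")
subparsers = parser.add_subparsers(dest="command")
fooparser = subparsers.add_parser('foo', help='Do foo',
                                  formatter_class=ArgumentDefaultsHelpFormatter)
fooparser.add_argument('files', nargs='+', help='Foo file(s)')
fooparser.add_argument('-n', '--topn', type=int, dest='topn', default=1,
                       help='Show topn')

# --topn is omitted, so the default of 1 is used.
args = parser.parse_args(['foo', 'a.txt', 'b.txt'])
print(args.command, args.files, args.topn)
```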
| 1 | 2016-10-12T11:24:17Z | [
"python",
"argparse"
] |
Robot Framework - check if element defined by xpath exists | 39,997,176 | <p>I'm wondering, I'd love to find or write a condition to check if some element exists. If it does, then I want to execute the body of the IF condition. If it doesn't exist, then execute the body of the ELSE.</p>
<p>Is there such a condition, or do I need to write it myself somehow?</p>
| -1 | 2016-10-12T11:14:01Z | 40,018,399 | <p>By locating the element using xpath, I assume that you're using <code>Selenium2Library</code>. In that library there is a keyword named: </p>
<p><code>Page Should Contain Element</code> which requires an argument, which is a <code>selector</code>, for example the xpath that defines your element. </p>
<p>The keyword fails if the page does not contain the specified element. </p>
<p>For the condition, use this: </p>
<p><code>${Result}=    Run Keyword And Return Status    Page Should Contain Element    ${Xpath}
Run Keyword If    ${Result}    Keyword    args*</code></p>
<p>You can also use another keyword: <code>Xpath Should Match X Times</code></p>
| 0 | 2016-10-13T10:25:25Z | [
"python",
"testing",
"xpath",
"automated-tests",
"robotframework"
] |
Python numpy array assignment to integer indexed flat slice | 39,997,202 | <p>While learning numpy, I wrote code that does LSB (steganography) encryption:</p>
<pre><code>def str2bits_nparray(s):
return np.array(map(int, (''.join(map('{:07b}'.format, bytearray(s))))), dtype=np.bool)
def LSB_encode(img, msg, channel):
msg_bits = str2bits_nparray(msg)
xor_mask = np.zeros_like(img, dtype=np.bool)
xor_mask[:, :, channel].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
img[xor_mask] = img[xor_mask] >> 1 << 1 | msg_bits
msg = 'A' * 1000
img_name = 'screenshot.png'
chnl = 2
img = imread(img_name)
LSB_encode(img, msg, chnl)
</code></pre>
<p>Code works fine, but when i'm trying to made <code>chnl = [2, 1]</code> this line:</p>
<pre><code>xor_mask[:, :, channel].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
</code></pre>
<p>doesnt assign value to <code>xor_mask</code> with </p>
<p><code>xor_mask[:, :,</code><strong>[2, 1]</strong><code>].flat[:len(msg_bits)]</code></p>
<p>Is there a way to fix this?</p>
<p>I tried a solution with a for-loop over the channels:</p>
<pre><code>for ch in channel:
xor_mask[:, :, ch].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
</code></pre>
<p>But this is doing not that i want from </p>
<p><code>xor_mask[:, :,</code><strong>[2, 1]</strong><code>].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)</code></p>
| 0 | 2016-10-12T11:15:33Z | 39,997,869 | <p>IIUC, here's an approach to get the linear indices, then slice to the length of no. of elems required to be set and then perform the setting -</p>
<pre><code>m,n,r = xor_mask.shape # Store shape info
# Create range arrays corresponding to those shapes
x,y,z = np.ix_(np.arange(m),np.arange(n),channel)
# Get the indices to be set and finaally perform the setting
idx = (x*n*r + y*r + z).ravel()[:len(msg_bits)]
xor_mask.ravel()[idx] = 1
</code></pre>
<p>Sample run -</p>
<pre><code>In [180]: xor_mask
Out[180]:
array([[[25, 84, 37, 96, 72, 84, 91],
[94, 56, 78, 71, 48, 65, 98]],
[[33, 56, 14, 92, 90, 64, 76],
[71, 71, 77, 31, 96, 36, 49]]])
In [181]: # Other inputs
...: channel = np.array([2,1])
...: msg_bits = np.array([2,3,6,1,4])
...:
In [182]: m,n,r = xor_mask.shape # Store shape info
...: x,y,z = np.ix_(np.arange(m),np.arange(n),channel)
...: idx = (x*n*r + y*r + z).ravel()[:len(msg_bits)]
...: xor_mask.ravel()[idx] = 1
...:
In [183]: xor_mask # First 5 elems from flattend version
# of xor_mask[:,:,channel] set as 1
# as len(msg_bits) = 5.
Out[183]:
array([[[25, 1, 1, 96, 72, 84, 91],
[94, 1, 1, 71, 48, 65, 98]],
[[33, 56, 1, 92, 90, 64, 76],
[71, 71, 77, 31, 96, 36, 49]]])
</code></pre>
<p>Instead, if you were trying to set for all elems across all dimensions in <code>3D</code> input array along the first of <code>channel</code> : <code>2</code> and then along the second one <code>1</code> and so on, we need to create <code>idx</code> differently, like so -</p>
<pre><code>idx = (x*n*r + y*r + z).transpose(2,0,1).ravel()[:len(msg_bits)]
</code></pre>
| 1 | 2016-10-12T11:49:12Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"slice"
] |
Flask python manage.py db upgrade raise error | 39,997,252 | <p>I am working on an flask project which contain a large amount of models and some of them make user of <code>from sqlalchemy.dialects.postgresql import JSONB</code>. Form management i created manage.py as per this <a href="http://flask-migrate.readthedocs.io/en/latest/" rel="nofollow">link</a>. <code>pyhton manage.py init & python manage.py migrate</code>
are working fine, but when I run <code>python manage.py upgrade</code> the following error occurs in the migrated file.</p>
<pre><code> sa.Column('images', postgresql.JSONB(astext_type=Text()), nullable=True),
NameError: global name 'Text' is not defined
</code></pre>
<p>Does anyone know how to fix it? Thanks.</p>
| 0 | 2016-10-12T11:18:11Z | 39,997,959 | <p>You need to import that <code>Text</code></p>
<p>As far as I can tell, it comes from <code>sqlalchemy.types</code>, so you need to import it at the top of the file:</p>
<pre><code>from sqlalchemy.types import Text
</code></pre>
<p>But you don't even need to supply <code>astext_type</code> as a parameter, because it defaults to <code>Text()</code>. From <a href="http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON.params.astext_type" rel="nofollow">docs</a> of <code>sqlalchemy.dialects.postgresql.JSON</code>:</p>
<blockquote>
<p><strong><code>astext_type</code></strong></p>
<p>the type to use for the <code>JSON.Comparator.astext</code> accessor on indexed attributes. Defaults to <code>types.Text</code>.</p>
</blockquote>
<p>And <code>sqlalchemy.dialects.postgresql.JSONB</code> is</p>
<blockquote>
<p>Bases: <code>sqlalchemy.dialects.postgresql.json.JSON</code></p>
</blockquote>
| 1 | 2016-10-12T11:54:00Z | [
"python",
"postgresql",
"flask",
"flask-sqlalchemy"
] |
editing and reordering tuples in a list | 39,997,293 | <p>I have a list of tuples:</p>
<pre><code>lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
</code></pre>
<p>The tuples have the structure <code>(int, string)</code> and start from 1 and the max value is 10. I would like to edit and reorder them to the following:</p>
<pre><code>lst2 = [(1, "text"), (2, "more"), (3, ""), (4, ""), (5, "more"), (6, ""), (7, ""), (8, ""),
(9, ""), (10, "more")]
</code></pre>
<p>As you can see I want to create a new list that is consecutively numbered up to 10. All int's from the tuples in the first list <code>lst</code> that doesn't occur in the range of 1 to 10 will produce an empty string in the new list <code>lst2</code>.</p>
<p>I came up with this code:</p>
<pre><code>lst2 = []
for tupl in lst:
for k in range(1,11):
if tupl[0] == k:
lst2.append((k, tupl[1]))
else:
lst2.append((k, ""))
print lst2
</code></pre>
<p>however the result is weird:</p>
<pre><code>[(1, 'text'), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''), (9, ''),
(10, ''), (1, ''), (2, 'more'), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''),
(9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, 'more'), (6, ''), (7, ''),
(8, ''), (9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''),
(8, ''), (9, ''), (10, 'more')]
</code></pre>
<p>Can anyone please help me or tell me what I am doing wrong? Thanks.</p>
| 1 | 2016-10-12T11:20:37Z | 39,997,382 | <pre><code>lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
d = dict(lst)
lst2 = [(i, d.get(i, "")) for i in range(1, 11)]
</code></pre>
<p><strong>EDIT</strong></p>
<p>Or, using <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a>:</p>
<pre><code>lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
from collections import defaultdict
d = defaultdict(str, lst)
lst2 = [(i, d[i]) for i in range(1, 11)]
</code></pre>
| 4 | 2016-10-12T11:25:08Z | [
"python"
] |
editing and reordering tuples in a list | 39,997,293 | <p>I have a list of tuples:</p>
<pre><code>lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
</code></pre>
<p>The tuples have the structure <code>(int, string)</code> and start from 1 and the max value is 10. I would like to edit and reorder them to the following:</p>
<pre><code>lst2 = [(1, "text"), (2, "more"), (3, ""), (4, ""), (5, "more"), (6, ""), (7, ""), (8, ""),
(9, ""), (10, "more")]
</code></pre>
<p>As you can see I want to create a new list that is consecutively numbered up to 10. All int's from the tuples in the first list <code>lst</code> that doesn't occur in the range of 1 to 10 will produce an empty string in the new list <code>lst2</code>.</p>
<p>I came up with this code:</p>
<pre><code>lst2 = []
for tupl in lst:
for k in range(1,11):
if tupl[0] == k:
lst2.append((k, tupl[1]))
else:
lst2.append((k, ""))
print lst2
</code></pre>
<p>however the result is weird:</p>
<pre><code>[(1, 'text'), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''), (9, ''),
(10, ''), (1, ''), (2, 'more'), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''),
(9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, 'more'), (6, ''), (7, ''),
(8, ''), (9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''),
(8, ''), (9, ''), (10, 'more')]
</code></pre>
<p>Can anyone please help me or tell me what I am doing wrong? Thanks.</p>
| 1 | 2016-10-12T11:20:37Z | 39,997,427 | <p>Your inner for is excessive here.
Instead of looping over <code>range(1,11)</code> inside it, test the value directly, for example with <code>if 1 <= tupl[0] <= 10</code>.</p>
| 1 | 2016-10-12T11:27:48Z | [
"python"
] |
editing and reordering tuples in a list | 39,997,293 | <p>I have a list of tuples:</p>
<pre><code>lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
</code></pre>
<p>The tuples have the structure <code>(int, string)</code> and start from 1 and the max value is 10. I would like to edit and reorder them to the following:</p>
<pre><code>lst2 = [(1, "text"), (2, "more"), (3, ""), (4, ""), (5, "more"), (6, ""), (7, ""), (8, ""),
(9, ""), (10, "more")]
</code></pre>
<p>As you can see I want to create a new list that is consecutively numbered up to 10. All int's from the tuples in the first list <code>lst</code> that doesn't occur in the range of 1 to 10 will produce an empty string in the new list <code>lst2</code>.</p>
<p>I came up with this code:</p>
<pre><code>lst2 = []
for tupl in lst:
for k in range(1,11):
if tupl[0] == k:
lst2.append((k, tupl[1]))
else:
lst2.append((k, ""))
print lst2
</code></pre>
<p>however the result is weird:</p>
<pre><code>[(1, 'text'), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''), (9, ''),
(10, ''), (1, ''), (2, 'more'), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''), (8, ''),
(9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, 'more'), (6, ''), (7, ''),
(8, ''), (9, ''), (10, ''), (1, ''), (2, ''), (3, ''), (4, ''), (5, ''), (6, ''), (7, ''),
(8, ''), (9, ''), (10, 'more')]
</code></pre>
<p>Can anyone please help me or tell me what I am doing wrong? Thanks.</p>
| 1 | 2016-10-12T11:20:37Z | 39,997,537 | <p>There are already better answers, but just to show you how your loop could be modified to yield the desired result:</p>
<pre><code>for k in range(1,11):
exists=False
for tupl in lst:
if tupl[0] == k:
lst2.append((k, tupl[1]))
exists = True
break
if not exists:
lst2.append((k, ""))
print lst2
</code></pre>
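<p>As an aside (not part of the original answer), the <code>exists</code> flag can be dropped entirely with Python's <code>for ... else</code> construct; the <code>else</code> branch runs only when the inner loop finishes without hitting <code>break</code>:</p>

```python
lst = [(1, "text"), (2, "more"), (5, "more"), (10, "more")]
lst2 = []
for k in range(1, 11):
    for tupl in lst:
        if tupl[0] == k:
            lst2.append((k, tupl[1]))
            break
    else:                        # no break occurred: k was not found in lst
        lst2.append((k, ""))
print(lst2)
```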
| 1 | 2016-10-12T11:32:41Z | [
"python"
] |
root user execution fails | 39,997,297 | <p>When I run <code>python abc.py</code> it runs fine</p>
<p>But when I run <code>sudo python abc.py</code>, it shows errors about missing packages. Of the several import errors, here's one:</p>
<pre class="lang-none prettyprint-override"><code>ImportError: No module named numpy
</code></pre>
<p>Why?</p>
<p>What I think is that those packages are installed with normal user(ubuntu) permissions and not root permissions. If this is the case, how should I get over with this? Do I have to install all the packages again with root access?</p>
<p>Note: everything I discussed here is w.r.t ec2 linux ubuntu machine</p>
| 3 | 2016-10-12T11:21:01Z | 39,997,375 | <p>The sudo environment may not contain your <code>PYTHONPATH</code></p>
<p><code>/etc/sudoers</code> contains <code>Defaults env_reset</code>.
Simply add <code>Defaults env_keep += "PYTHONPATH"</code> to <code>/etc/sudoers</code> and it will work just fine with <code>sudo</code>.</p>
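<p>To confirm that this is what is happening, a quick diagnostic (an illustrative snippet, not specific to Ubuntu) is to compare the interpreter and search path seen with and without <code>sudo</code>:</p>

```python
import sys

# Run this once as `python check.py` and once as `sudo python check.py`;
# a different executable or a shorter sys.path explains the missing modules.
print(sys.executable)
print('\n'.join(sys.path))
```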
| 1 | 2016-10-12T11:24:47Z | [
"python",
"linux",
"python-2.7"
] |
write times in NetCDF file | 39,997,314 | <p>I am writing times into a netCDF file using the netCDF4 package.</p>
<pre><code>dates = []
for iday in range(84):
    dates.append(datetime.datetime(2016, 10, 1) + datetime.timedelta(hours = iday))
times[:] = date2num(dates, units=times.units, calendar = times.calendar)
# print times[:]
for ii, i in enumerate(times[:]):
print i, num2date(i, units=times.units), dates[ii]
</code></pre>
<p>The times are right:</p>
<pre class="lang-none prettyprint-override"><code>17669815.0 2016-10-04 07:00:00 2016-10-04 07:00:00
17669816.0 2016-10-04 08:00:00.000006 2016-10-04 08:00:00
17669817.0 2016-10-04 09:00:00 2016-10-04 09:00:00
17669818.0 2016-10-04 10:00:00 2016-10-04 10:00:00
17669819.0 2016-10-04 11:00:00.000006 2016-10-04 11:00:00
</code></pre>
<p>But while reading the netcdf file:</p>
<pre><code>input_file = '/home/lovechang/test.nc'
data = Dataset(input_file)
times = data.variables['time']
# print times[:]
# print num2date(times[:], units=times.units)
for i in times[:]:
print i, num2date(i, units=times.units)
</code></pre>
<p>Results:</p>
<pre class="lang-none prettyprint-override"><code>17669813.0 2016-10-04 05:00:00.000006
17669814.0 2016-10-04 06:00:00
17669815.0 2016-10-04 07:00:00
17669816.0 2016-10-04 08:00:00.000006
17669817.0 2016-10-04 09:00:00
17669818.0 2016-10-04 10:00:00
17669819.0 2016-10-04 11:00:00.000006
</code></pre>
<p>Ncview shows the time is not the punctually hour.</p>
<p><img src="https://i.stack.imgur.com/8Y2Ze.png" alt="screenshot of Ncview program running"></p>
<p>So what happened with the times?
And how can I write exact, on-the-hour times in a netCDF file?</p>
| 1 | 2016-10-12T11:21:52Z | 39,998,698 | <p>Depending on the time units and datatype you choose, you may encounter floating point accuracy problems. For example, if you specify the time in <code>days since 1970-01-01 00:00</code>, 32 bit float is not sufficient and you should use a 64 bit float instead:</p>
<pre><code>import datetime
import netCDF4
times = [datetime.datetime(2016, 10, 1) + datetime.timedelta(hours=hour)
for hour in range(84)]
# Create netCDF file
calendar = 'standard'
units = 'days since 1970-01-01 00:00'
ds = netCDF4.Dataset('test.nc', 'w')
timedim = ds.createDimension(dimname='time', size=len(times))
# Write timestamps to netCDF file using 32bit float
timevar32 = ds.createVariable(varname='time32', dimensions=('time',),
datatype='float32')
timevar32[:] = netCDF4.date2num(times, units=units, calendar=calendar)
# Write timestamps to netCDF file using 64bit float
timevar64 = ds.createVariable(varname='time64', dimensions=('time',),
datatype='float64')
timevar64[:] = netCDF4.date2num(times, units=units, calendar=calendar)
# Read timestamps from netCDF file
times32 = netCDF4.num2date(timevar32[:], units=units, calendar=calendar)
times64 = netCDF4.num2date(timevar64[:], units=units, calendar=calendar)
for time, time32, time64 in zip(times, times32, times64):
print "original ", time
print " 32 bit ", time32
print " 64 bit ", time64
print
</code></pre>
<p>If you specified the time in <code>hours since 2016-10-01 00:00</code>, even an integer would suffice (in this example).</p>
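<p>The underlying effect can be reproduced without netCDF at all: rounding a large "days since 1970" timestamp through a 32 bit float loses the sub-minute detail. A minimal standard-library sketch (the date below is taken from the question, the units are the ones used in this answer):</p>

```python
import datetime
import struct

def as_float32(x):
    """Round-trip a Python float through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

epoch = datetime.datetime(1970, 1, 1)
t = datetime.datetime(2016, 10, 4, 7, 0)
days = (t - epoch).total_seconds() / 86400.0   # 'days since 1970-01-01 00:00'
error = (as_float32(days) - days) * 86400.0    # rounding error, in seconds
print(error)   # several tens of seconds lost in 32 bit
```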
| 1 | 2016-10-12T12:31:15Z | [
"python",
"netcdf"
] |
Error with .readlines()[n] | 39,997,324 | <p>I'm a beginner with Python.
I tried to solve the problem: "If we have a file containing <1000 lines, how to print only the odd-numbered lines? ". That's my code:</p>
<pre><code>with open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')as f:
n=1
num_lines=sum(1 for line in f)
while n<num_lines:
if n/2!=0:
a=f.readlines()[n]
print(a)
break
n=n+2
</code></pre>
<p>where <em>n</em> is a counter and <em>num_lines</em> calculates how many lines the file contains.
But when I try to execute the code, it says: </p>
<pre><code>"a=f.readlines()[n]
IndexError: list index out of range"
</code></pre>
<p>Why doesn't it recognize <em>n</em> as a counter?</p>
| 2 | 2016-10-12T11:22:13Z | 39,997,529 | <p>Well, I'd personally do it like this:</p>
<pre><code>def print_odd_lines(some_file):
with open(some_file) as my_file:
for index, each_line in enumerate(my_file): # keep track of the index of each line
            if index % 2 == 0: # enumerate starts at 0, so even indices are the odd-numbered lines
print(each_line) # if it does, print it
if __name__ == '__main__':
    print_odd_lines(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')  # raw string, so the backslashes are not escapes
</code></pre>
<p>Be aware that this will leave a blank line after each printed line (the line keeps its own newline and <code>print</code> adds another). I'm sure you'll figure out how to get rid of it.</p>
| 0 | 2016-10-12T11:32:08Z | [
"python",
"python-3.x",
"while-loop"
] |
Error with .readlines()[n] | 39,997,324 | <p>I'm a beginner with Python.
I tried to solve the problem: "If we have a file containing <1000 lines, how to print only the odd-numbered lines? ". That's my code:</p>
<pre><code>with open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')as f:
n=1
num_lines=sum(1 for line in f)
while n<num_lines:
if n/2!=0:
a=f.readlines()[n]
print(a)
break
n=n+2
</code></pre>
<p>where <em>n</em> is a counter and <em>num_lines</em> calculates how many lines the file contains.
But when I try to execute the code, it says: </p>
<pre><code>"a=f.readlines()[n]
IndexError: list index out of range"
</code></pre>
<p>Why doesn't it recognize <em>n</em> as a counter?</p>
| 2 | 2016-10-12T11:22:13Z | 39,997,559 | <p>This code will do exactly as you asked:</p>
<pre><code>with open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')as f:
for i, line in enumerate(f.readlines()): # Iterate over each line and add an index (i) to it.
if i % 2 == 0: # i starts at 0 in python, so if i is even, the line is odd
print(line)
</code></pre>
<p>To explain what happens in your code:</p>
<p>A file can only be read through once. After that it has to be closed and reopened again. </p>
<p>You first iterate over the entire file in <code>num_lines=sum(1 for line in f)</code>. Now the object <code>f</code> is empty. </p>
<p>If n is odd however, you call <code>f.readlines()</code>. This will go through all the lines again, but none are left in <code>f</code>. So every time n is odd, you go through the entire file. It is faster to go through it once (as in the solutions offered to your question).</p>
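<p>The "read through once" behaviour is easy to demonstrate with an in-memory stream (it works the same way for real files):</p>

```python
import io

f = io.StringIO("a\nb\nc\n")
print(sum(1 for line in f))   # 3 -- counting the lines consumes the stream
print(f.readlines())          # [] -- nothing is left to read
f.seek(0)                     # rewind to the start
print(f.readlines())          # ['a\n', 'b\n', 'c\n']
```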
| 0 | 2016-10-12T11:33:32Z | [
"python",
"python-3.x",
"while-loop"
] |
Error with .readlines()[n] | 39,997,324 | <p>I'm a beginner with Python.
I tried to solve the problem: "If we have a file containing <1000 lines, how to print only the odd-numbered lines? ". That's my code:</p>
<pre><code>with open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')as f:
n=1
num_lines=sum(1 for line in f)
while n<num_lines:
if n/2!=0:
a=f.readlines()[n]
print(a)
break
n=n+2
</code></pre>
<p>where <em>n</em> is a counter and <em>num_lines</em> calculates how many lines the file contains.
But when I try to execute the code, it says: </p>
<pre><code>"a=f.readlines()[n]
IndexError: list index out of range"
</code></pre>
<p>Why doesn't it recognize <em>n</em> as a counter?</p>
| 2 | 2016-10-12T11:22:13Z | 39,997,580 | <p>As a fix, you need to type</p>
<pre><code>f.close()
f = open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')
</code></pre>
<p>every time after you read through the file, in order to get back to the start.</p>
<p>As a side note, you should look up the modulus operator <code>%</code> for finding odd numbers.</p>
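<p>A quick illustration of the modulus operator:</p>

```python
for n in range(1, 6):
    print(n, n % 2)   # n % 2 == 1 exactly when n is odd
```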
| 0 | 2016-10-12T11:34:21Z | [
"python",
"python-3.x",
"while-loop"
] |
Error with .readlines()[n] | 39,997,324 | <p>I'm a beginner with Python.
I tried to solve the problem: "If we have a file containing <1000 lines, how to print only the odd-numbered lines? ". That's my code:</p>
<pre><code>with open(r'C:\Users\Savina\Desktop\rosalind_ini5.txt')as f:
n=1
num_lines=sum(1 for line in f)
while n<num_lines:
if n/2!=0:
a=f.readlines()[n]
print(a)
break
n=n+2
</code></pre>
<p>where <em>n</em> is a counter and <em>num_lines</em> calculates how many lines the file contains.
But when I try to execute the code, it says: </p>
<pre><code>"a=f.readlines()[n]
IndexError: list index out of range"
</code></pre>
<p>Why doesn't it recognize <em>n</em> as a counter?</p>
 | 2 | 2016-10-12T11:22:13Z | 39,998,187 | <p>You have put the call to <code>readlines</code> inside a loop, but that is not its intended use, because <code>readlines</code> ingests the whole file at once, returning a LIST of newline-terminated strings.</p>
<p>You may want to save such a list and operate on it</p>
<pre><code>list_of_lines = open(filename).readlines() # no need for closing, python will do it for you
odd = 1
for line in list_of_lines:
if odd : print(line, end='')
odd = 1-odd
</code></pre>
<p>Two remarks:</p>
<ol>
<li><code>odd</code> is alternating between <code>1</code> (hence true when argument of an <code>if</code>) or <code>0</code> (hence false when argument of an <code>if</code>),</li>
<li>the optional argument <code>end=''</code> to the <code>print</code> function is required because each line in <code>list_of_lines</code> is terminated by a new line character, if you omit the optional argument the <code>print</code> function will output a SECOND new line character at the end of each line.</li>
</ol>
<p>Coming back to your code, you can fix its behavior using a</p>
<pre><code>f.seek(0)
</code></pre>
<p>before the loop to rewind the file to its beginning position and using the
<code>f.readline()</code> (look, it's NOT <code>readline**S**</code>) method inside the loop,
but rest assured that proceeding like this is, let's say, a bit unconventional...</p>
<p>Eventually, it is possible to do everything you want with a one-liner</p>
<pre><code>print(''.join(open(filename).readlines()[::2]))
</code></pre>
<p>that uses the <a href="http://stackoverflow.com/q/509211/2749397">slice notation for list</a>s and the <a href="http://stackoverflow.com/q/12453580/2749397">string method <code>.join()</code></a></p>
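<p>As a further aside (not in the original answer): the one-liner materialises the whole file as a list first. If the file is large, <code>itertools.islice</code> gives the same every-other-line selection while streaming; sketched below on an in-memory file:</p>

```python
import io
from itertools import islice

f = io.StringIO("line1\nline2\nline3\nline4\nline5\n")
for line in islice(f, 0, None, 2):   # yields lines 1, 3, 5, ...
    print(line, end='')
```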
| 2 | 2016-10-12T12:05:30Z | [
"python",
"python-3.x",
"while-loop"
] |
is_max = s == s.max() | How should I read this? | 39,997,334 | <p>While studying <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, I got to the following:</p>
<pre><code>def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
</code></pre>
<p>How should I read <code>is_max = s == s.max()</code>?</p>
 | 4 | 2016-10-12T11:22:54Z | 39,997,371 | <p><code>s == s.max()</code> is evaluated first (the <code>==</code> comparison binds before the assignment <code>=</code>); since <code>s</code> is a pandas Series, the comparison yields a boolean Series. The next step is storing that value in <code>is_max</code>.</p>
| 2 | 2016-10-12T11:24:36Z | [
"python",
"python-2.7",
"pandas"
] |
is_max = s == s.max() | How should I read this? | 39,997,334 | <p>While studying <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, I got to the following:</p>
<pre><code>def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
</code></pre>
<p>How should I read <code>is_max = s == s.max()</code>?</p>
| 4 | 2016-10-12T11:22:54Z | 39,997,405 | <p>The code</p>
<p><code>is_max = s == s.max()</code></p>
<p>is evaluated as</p>
<p><code>is_max = (s == s.max())</code></p>
<p>The bit in parentheses is evaluated first, and that is either <code>True</code> or <code>False</code>. The result is assigned to <code>is_max</code>.</p>
| 2 | 2016-10-12T11:26:21Z | [
"python",
"python-2.7",
"pandas"
] |
is_max = s == s.max() | How should I read this? | 39,997,334 | <p>While studying <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, I got to the following:</p>
<pre><code>def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
</code></pre>
<p>How should I read <code>is_max = s == s.max()</code>?</p>
| 4 | 2016-10-12T11:22:54Z | 39,997,436 | <p>In pandas <code>s</code> is very often <code>Series</code> (column in <code>DataFrame</code>).</p>
<p>So you compare all values in the <code>Series</code> with the <code>max</code> value of the <code>Series</code> and get a boolean mask, stored in <code>is_max</code>. The style <code>'background-color: yellow'</code> is then applied only to the cells of the table where the mask is <code>True</code>, i.e. where the maximum value is.</p>
<p>Sample:</p>
<pre><code>s = pd.Series([1,2,3])
print (s)
0 1
1 2
2 3
dtype: int64
is_max = s == s.max()
print (is_max)
0 False
1 False
2 True
dtype: bool
</code></pre>
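<p>The second line of <code>highlight_max</code> then just maps this boolean mask to CSS strings with a list comprehension; the same idea works on a plain list of booleans:</p>

```python
is_max = [False, False, True]
styles = ['background-color: yellow' if v else '' for v in is_max]
print(styles)   # ['', '', 'background-color: yellow']
```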
| 2 | 2016-10-12T11:28:03Z | [
"python",
"python-2.7",
"pandas"
] |
is_max = s == s.max() | How should I read this? | 39,997,334 | <p>While studying <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, I got to the following:</p>
<pre><code>def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
</code></pre>
<p>How should I read <code>is_max = s == s.max()</code>?</p>
| 4 | 2016-10-12T11:22:54Z | 39,997,846 | <blockquote>
  <p>Read it as: <code>is_max</code> is assigned (<code>=</code>) the result of the comparison (<code>==</code>) between <code>s</code> and <code>s.max()</code>.</p>
</blockquote>
| 0 | 2016-10-12T11:48:33Z | [
"python",
"python-2.7",
"pandas"
] |
is_max = s == s.max() | How should I read this? | 39,997,334 | <p>While studying <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, I got to the following:</p>
<pre><code>def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
</code></pre>
<p>How should I read <code>is_max = s == s.max()</code>?</p>
| 4 | 2016-10-12T11:22:54Z | 39,998,175 | <p>According to the document, <a href="https://docs.python.org/3/reference/expressions.html#evaluation-order" rel="nofollow">Evaluation order</a>:</p>
<blockquote>
<p>Notice that while evaluating an assignment, the right-hand side is
evaluated before the left-hand side.</p>
</blockquote>
<p>This is quite reasonable, for you have to know the value of an expression before assigning it to a variable.</p>
<p>So Python first evaluates <code>s.max()</code>, then checks whether <code>s</code> is equal to the calculated value, resulting in a boolean result, and finally assigns this boolean to a variable called <code>is_max</code>.</p>
<p>See also: <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow">Assignment statements</a></p>
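<p>A minimal illustration of the same evaluation order with a plain list (no pandas needed):</p>

```python
s = [1, 5, 3]
is_max = 5 == max(s)   # max(s) is evaluated first, then the comparison, then the assignment
print(is_max)          # True
```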
| 0 | 2016-10-12T12:04:58Z | [
"python",
"python-2.7",
"pandas"
] |
TCP Client retrieving no data from the server | 39,997,379 | <p>I am trying to receive data from a TCP Server in python. I try to open a file at the server and after reading its content, try to send it to the TCP Client. The data is read correctly from the file as I try to print it first on the server side but nothing is received at the Client side.
PS. I am a beginner in network programming.</p>
<p>Server.py</p>
<pre><code>import socket
import os
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("", 5000))
server_socket.listen(5)
client_socket, address = server_socket.accept()
print ("Conencted to - ",address,"\n")
data = client_socket.recv(1024).decode()
print ("Filename : ",data)
fp = open(data,'r')
string = fp.read()
fp.close()
print(string)
size = os.path.getsize(data)
size = str(size)
client_socket.send(size.encode())
client_socket.send(string.encode())
client_socket.close()
</code></pre>
<p>Client.py</p>
<pre><code>import socket,os
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(("", 5000))
size = 1024
print ("Your filename : \n")
string = input()
client_socket.send(string.encode())
size = client_socket.recv(1024).decode()
print ("The file size is - ",size[0:2]," bytes")
size = int(size[0:2])
string = client_socket.recv(size).decode()
print ("\nFile contains : ")
print (string)
client_socket.close();
</code></pre>
 | 0 | 2016-10-12T11:24:58Z | 39,997,602 | <p>Add <code>accept()</code> in a <code>while</code> loop as below, so the server can handle more than one connection:</p>
<pre><code>while True:
    client_socket, address = server_socket.accept()
    print ("Conencted to - ",address,"\n")
    ......
</code></pre>
| 1 | 2016-10-12T11:35:23Z | [
"python",
"sockets",
"tcp"
] |
TCP Client retrieving no data from the server | 39,997,379 | <p>I am trying to receive data from a TCP Server in python. I try to open a file at the server and after reading its content, try to send it to the TCP Client. The data is read correctly from the file as I try to print it first on the server side but nothing is received at the Client side.
PS. I am a beginner in network programming.</p>
<p>Server.py</p>
<pre><code>import socket
import os
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("", 5000))
server_socket.listen(5)
client_socket, address = server_socket.accept()
print ("Conencted to - ",address,"\n")
data = client_socket.recv(1024).decode()
print ("Filename : ",data)
fp = open(data,'r')
string = fp.read()
fp.close()
print(string)
size = os.path.getsize(data)
size = str(size)
client_socket.send(size.encode())
client_socket.send(string.encode())
client_socket.close()
</code></pre>
<p>Client.py</p>
<pre><code>import socket,os
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(("", 5000))
size = 1024
print ("Your filename : \n")
string = input()
client_socket.send(string.encode())
size = client_socket.recv(1024).decode()
print ("The file size is - ",size[0:2]," bytes")
size = int(size[0:2])
string = client_socket.recv(size).decode()
print ("\nFile contains : ")
print (string)
client_socket.close();
</code></pre>
| 0 | 2016-10-12T11:24:58Z | 39,998,462 | <p>Try:</p>
<pre><code>#Get just the two bytes indicating the content length - client_socket.send(size.encode())
buffer = client_socket.recv(2)
size = len(buffer)
print size
print ("The file size is - ",buffer[0:2]," bytes")
#Now get the remaining. The actual content
print buffer.decode()
buffer = client_socket.recv(1024)
size = len(buffer)
print size
print buffer.decode()
</code></pre>
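<p>Beyond that, a robustness pattern worth knowing (not in the original code; <code>recv_exactly</code> is a hypothetical helper) is to loop until the full payload has arrived, because a single <code>recv()</code> may legally return fewer bytes than requested. A self-contained sketch using a local socket pair:</p>

```python
import socket

def recv_exactly(sock, n):
    """Keep calling recv() until exactly n bytes have been read."""
    chunks = []
    while n > 0:
        data = sock.recv(n)
        if not data:
            raise EOFError("socket closed before all data arrived")
        chunks.append(data)
        n -= len(data)
    return b''.join(chunks)

a, b = socket.socketpair()
a.sendall(b'hello world')
print(recv_exactly(b, 11))   # b'hello world'
a.close()
b.close()
```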
| 1 | 2016-10-12T12:19:53Z | [
"python",
"sockets",
"tcp"
] |
convert Unicode to cyrillic | 39,997,386 | <p>I have lists with Unicode:</p>
<pre><code>words
[u'\xd1', u'\xd0\xb0', u'\xd0\xb8', u'\u043e', u'\xd1\x81', u'-', u'\xd0\xb2', u'\u0438', u'\u0441', u'\xd0\xb8\xd1', u'\xd1\x83', u'\u0432', u'\u043a', u'\xd0\xba', u'\xd0\xbf\xd0\xbe', u'|', u'search', u'\xd0\xbd\xd0\xbe', u'25', u'in', u'\xd0\xbd\xd0\xb0', u'\u043d\u0430', u'\xd0\xbd\xd0\xb5', u'\xd0\xbe\xd0\xb1', u'\xd0\xbe\xd1\x82', u'\u043f\u043e', u'google', u'\xd0\x92', u'---', u'##']
[u'\u043e', u'\u0438', u'-', u'\u0441', u'\u0432', u'\u043a', u'\u0430', u'ebay', u'\u043d\u0430', u'\u0443', u'\u0442\u043e', u'"', u'33', u'**', u'ebay.', u'\u043f\u043e', u'jeans', u'at', u'\u0442\u043e\u0432\u0430\u0440', u'\u0434\u0436\u0438\u043d\u0441\u044b', u'\u0442\u043e\u0432\u0430\u0440\u043e\u0432', u'\u041a\u043e\u043b\u043b\u0435\u043a\u0446\u0438\u044f', u'\u043d\u0430\u0437\u0432\u0430\u043d\u0430', u'\u043e\u0442', u'tan', u'\u0432\u044b', u'altanbataev0', u'32', u'\u043d\u043e', u'&']
[u'\u043e', u'/', u'\u0430', u'-', u'\u0438', u'\u0441', u'\u0432', u'\u043a', u'\u0443', u'\u044f', u'\u043d\u043e', u'\u043f\u043e', u'\u0442\u043e', u'\u043d\u0430', u'\u043e\u0442', u'!', u'\u043d\u0435', u'"', u'\u043d\u0438', u'\u043a\u043e', u'\u0442\u0435\u0441\u0442', u'\u0437\u0430', u'\u043e\u043d']
</code></pre>
<p>I tried <code>[x.encode('latin-1') for x in lst]</code>
but it returns:</p>
<pre><code>UnicodeEncodeError: 'latin-1' codec can't encode character u'\u043e' in position 0: ordinal not in range(256)
</code></pre>
<p>I also tried <code>cp1252</code> and <code>utf8</code>, but they also return an error.</p>
| 0 | 2016-10-12T11:25:23Z | 40,012,784 | <p>You have Russian already (at least some of it), you just need to print the strings, not the list, on an IDE/terminal that supports Russian characters. Here's an excerpt, printed with Python 2.7 on a UTF-8 terminal:</p>
<pre><code>L = [u'\u0442\u043e\u0432\u0430\u0440', u'\u0434\u0436\u0438\u043d\u0441\u044b']
print L
for s in L:
print s
</code></pre>
<p>Output:</p>
<pre><code>[u'\u0442\u043e\u0432\u0430\u0440', u'\u0434\u0436\u0438\u043d\u0441\u044b']
товар
джинсы
</code></pre>
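<p>As a follow-up (not part of the original answer): items like <code>u'\xd0\xb0'</code> in your first list are UTF-8 bytes that were mis-decoded as Latin-1, which is also why <code>.encode('latin-1')</code> failed on the properly decoded items such as <code>u'\u043e'</code>. The mojibake can be undone by reversing that step:</p>

```python
s = u'\u0442\u043e\u0432\u0430\u0440'      # 'товар', properly decoded
print(s.encode('utf-8'))                   # b'\xd1\x82\xd0\xbe\xd0\xb2\xd0\xb0\xd1\x80'

broken = u'\xd0\xb0'                       # UTF-8 bytes mis-decoded as Latin-1
fixed = broken.encode('latin-1').decode('utf-8')
print(fixed == u'\u0430')                  # True -- the Cyrillic letter is recovered
```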
| 1 | 2016-10-13T05:14:37Z | [
"python",
"unicode",
"encoding",
"latin"
] |
need only link as an output | 39,997,410 | <p>I have multiple HTML tags and I want to extract only the content of the first href="..." attribute; for example, from this single line of data:</p>
<pre><code><a class="product-link" data-styleid="1424359" href="/tops/biba/biba-beige--pink-women-floral-print-top/1424359/buy?src=search"><img _src="http://assets.myntassets.com/h_240,q_95,w_180/v1/assets/images/1424359/2016/9/28/11475053941748-BIBA-Beige--Pink-Floral-Print-Kurti-7191475053941511-1_mini.jpg" _src2="http://assets.myntassets.com/h_307,q_95,w_230/v1/assets/images/1424359/2016/9/28/11475053941748-BIBA-Beige--Pink-Floral-Print-Kurti-7191475053941511-1_mini.jpg" alt="BIBA Beige &amp; Pink Women Floral Print Top" class="lazy loading thumb" onerror="this.className='thumb error'" onload="this.className='thumb'"/><div class="brand">Biba</div><div class="product">Beige &amp; Pink Women Floral Print Top</div><div class="price">Rs. 899</div><div class="sizes">Sizes: S, L, XL, XXL</div></a>
</code></pre>
<p>I want only <code>/tops/biba/biba-beige--pink-women-floral-print-top/1424359/buy?src=search</code> as output</p>
<p>The code is as follows:</p>
<pre><code>from bs4 import BeautifulSoup
import urllib
x=urllib.urlopen("http://www.myntra.com/tops-tees-menu/")
soup2 = BeautifulSoup(x, 'html.parser')
for i in soup2.find_all('a', attrs={'class': 'product-link'}):
print i
print i.find('a')['href']
</code></pre>
| 0 | 2016-10-12T11:26:33Z | 39,999,693 | <p>If you need a single "product link", just use <code>find()</code>:</p>
<pre><code>soup2.find('a', attrs={'class': 'product-link'})["href"]
</code></pre>
<p>Note that you can use a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> location technique as well:</p>
<pre><code>soup2.select_one('a.product-link')["href"]
</code></pre>
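<p>If BeautifulSoup is not available, the standard library can do the same extraction; a minimal sketch with <code>html.parser</code> (Python 3 module name, illustrative markup):</p>

```python
from html.parser import HTMLParser

class FirstProductLink(HTMLParser):
    """Record the href of the first <a class="product-link"> encountered."""
    def __init__(self):
        super().__init__()
        self.href = None

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if self.href is None and tag == 'a' and 'product-link' in d.get('class', ''):
            self.href = d.get('href')

parser = FirstProductLink()
parser.feed('<a class="product-link" href="/tops/biba/example/buy?src=search">x</a>')
print(parser.href)   # /tops/biba/example/buy?src=search
```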
| 0 | 2016-10-12T13:19:57Z | [
"python",
"python-2.7",
"beautifulsoup",
"data-cleansing"
] |
Python class to take care of MySQL connection keeps throwing an error | 39,997,429 | <p>I am trying to build a class using Python in order to handle all interactions with MySQL databases.
I have installed the MySQLdb module and tested the connection to the databases using a simple code.
When I run the following code:</p>
<pre><code>import MySQLdb
db = MySQLdb.connect ( host="localhost", user="root" , passwd="xxxxxxx" , db= "some_db")
cursor = db.cursor()
s = cursor.execute ("SELECT * FROM some_table")
#Retrive a record to test it all went well...
t = cursor.fetchall()
for x in t:
print x
db.commit()
db.close()
</code></pre>
<p>The above code worked fine , it connected to the "some_db" database retrieved the one record from the "some_table" table.</p>
<p>However, When I run the code listed below , which consists of a section of a class using the same logic , I get an error thrown . The idea behind the class is to open any database and as many connections as one might wish to open, perform queries etc . Please bear in mind that the following is just a section of class, more functions and what-have- you will be added. </p>
<p>The code is as follows:</p>
<pre><code>import MySQLdb
class d_con (object):
def __init__ (self, host, user , passwd , db ):
self.host = host
self.user = user
self.passwd = passwd
self.db = db
self.d_b = MySQLdb.connect( host = self.host , user = self.user , passwd = self.passwd , db = self.db )
#Test the connection...
self.cur = self.d_b.cursor()
self.disp = self.cur.execute ("SELECT VERSION()")
print self.disp
def close_connection(self):
print "...Initializing connection closure..."
self.d_b.close()
#More function will be added , I am just testing the constructor.
if __name__ == "__main__ " :
a = d_con()
else:
print ( " d_con class is imported , please instantiate an object in the fashion a = datacon.d_con() . ")
</code></pre>
<p>When I import the file containing the above code, it executes and prints the statement asking to instantiate the class. After I instantiate the class</p>
<pre><code>a=datacon.d_con("localhost" , "root" , "xxxxxxx" , "some_db" )
</code></pre>
<p>this error gets thrown:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datacon.py", line 8, in __init__
self.d_b = MySQLdb.connect( self.host , self.user , self.passwd , self.db )
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/MySQLdb/connections.py", line 193, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1049, "Unknown database 'some_table'")
</code></pre>
<p>Rest assured that I have, repeatedly, checked that the parameters of the MySQLdb.connect () are correct. Besides, I, even, have hardwired the parameters in the constructor to make sure that there are no typos when instantiating the class. Every time, I have the same error.</p>
<p>Since I am new to Python , I am not sure whether the class file has trouble importing the MySQLdb module or if there is a problem of variable scope . The MySQLdb module did import successfully when I executed the code at the top of this conversation using the same database and the same MySQL credentials!
Could you guys help with this?
Thanks in advance. </p>
<p>Ps: I am running all this on Mac Os 10.12</p>
| 0 | 2016-10-12T11:27:50Z | 39,998,336 | <p>Isn't your MySQLdb.connect parameter syntax different in your examples? </p>
<p>In the working one you use named parameters for connect(host="localhost") etc. but in the class model you omit parameter names host=, user= etc and location based parameters do not seem to work.</p>
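<p>The general point is easy to demonstrate with any function; positional arguments are bound strictly by order, keyword arguments by name (an illustrative stand-in, not the real <code>MySQLdb.connect</code> signature):</p>

```python
def connect(host, user, passwd, db):
    return {'host': host, 'user': user, 'passwd': passwd, 'db': db}

print(connect('localhost', 'root', 'xxx', 'some_db'))                      # bound by position
print(connect(db='some_db', host='localhost', user='root', passwd='xxx'))  # bound by name, any order
```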
<p>Hannu</p>
| 0 | 2016-10-12T12:13:51Z | [
"python",
"mysql",
"python-2.7",
"class",
"mysql-python"
] |
Python class to take care of MySQL connection keeps throwing an error | 39,997,429 | <p>I am trying to build a class using Python in order to handle all interactions with MySQL databases.
I have installed the MySQLdb module and tested the connection to the databases using a simple code.
When I run the following code:</p>
<pre><code>import MySQLdb
db = MySQLdb.connect ( host="localhost", user="root" , passwd="xxxxxxx" , db= "some_db")
cursor = db.cursor()
s = cursor.execute ("SELECT * FROM some_table")
#Retrive a record to test it all went well...
t = cursor.fetchall()
for x in t:
print x
db.commit()
db.close()
</code></pre>
<p>The above code worked fine , it connected to the "some_db" database retrieved the one record from the "some_table" table.</p>
<p>However, When I run the code listed below , which consists of a section of a class using the same logic , I get an error thrown . The idea behind the class is to open any database and as many connections as one might wish to open, perform queries etc . Please bear in mind that the following is just a section of class, more functions and what-have- you will be added. </p>
<p>The code is as follows:</p>
<pre><code>import MySQLdb
class d_con (object):
def __init__ (self, host, user , passwd , db ):
self.host = host
self.user = user
self.passwd = passwd
self.db = db
self.d_b = MySQLdb.connect( host = self.host , user = self.user , passwd = self.passwd , db = self.db )
#Test the connection...
self.cur = self.d_b.cursor()
self.disp = self.cur.execute ("SELECT VERSION()")
print self.disp
def close_connection(self):
print "...Initializing connection closure..."
self.d_b.close()
#More function will be added , I am just testing the constructor.
if __name__ == "__main__ " :
a = d_con()
else:
print ( " d_con class is imported , please instantiate an object in the fashion a = datacon.d_con() . ")
</code></pre>
<p>When I import the file containing the above code, it executes and prints the statement asking to instantiate the class. After I instantiate the class</p>
<pre><code>a=datacon.d_con("localhost" , "root" , "xxxxxxx" , "some_db" )
</code></pre>
<p>this error gets thrown:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datacon.py", line 8, in __init__
self.d_b = MySQLdb.connect( self.host , self.user , self.passwd , self.db )
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/MySQLdb/connections.py", line 193, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1049, "Unknown database 'some_table'")
</code></pre>
<p>Rest assured that I have repeatedly checked that the parameters of MySQLdb.connect() are correct. Besides, I have even hardwired the parameters in the constructor to make sure that there are no typos when instantiating the class. Every time, I get the same error.</p>
<p>Since I am new to Python , I am not sure whether the class file has trouble importing the MySQLdb module or if there is a problem of variable scope . The MySQLdb module did import successfully when I executed the code at the top of this conversation using the same database and the same MySQL credentials!
Could you guys help with this?
Thanks in advance. </p>
<p>Ps: I am running all this on Mac Os 10.12</p>
| 0 | 2016-10-12T11:27:50Z | 40,005,022 | <p>Sorry guys! The code is fine; I was systematically passing the name of a table instead of a database to the self.db variable.
The code works fine.</p>
| 0 | 2016-10-12T17:41:27Z | [
"python",
"mysql",
"python-2.7",
"class",
"mysql-python"
] |
How to retrieve PID from windows event log? | 39,997,471 | <p><a href="https://i.stack.imgur.com/dgTiR.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/dgTiR.jpg" alt="enter image description here"></a>I have Python code that uses the WMI module to get Windows Event Viewer logs, but I am unable to retrieve the PID of the process that generated the log.
My code:</p>
<pre><code>wmi_obj = wmi.WMI('.') #Initialize WMI object and query.
wmi_query = "SELECT * FROM Win32_NTLogEvent WHERE Logfile='System' AND EventType=1"
query_result = wmi_obj.query(wmi_query) # Query WMI object
</code></pre>
<p>query_result is a list of WMI objects. Each object in this list is a Windows system log, and I want the PID of the process that generated it.
I have gone through several MSDN docs but couldn't find anything useful there.</p>
<p>I want to retrieve the information marked in the above image.</p>
| 0 | 2016-10-12T11:29:24Z | 39,998,098 | <p>The Win32 API call to get event log items is <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa363674(v=vs.85).aspx" rel="nofollow">ReadEventLog</a> and this returns <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa363646(v=vs.85).aspx" rel="nofollow">EVENTLOGRECORD</a> structures. These do not have a field for a process identifier so unless your events have included this in the data of the event message it looks like this will not be available.</p>
| 0 | 2016-10-12T12:01:42Z | [
"python",
"wmi",
"pid",
"wmi-query",
"eventviewer"
] |
Finding the consecutive pairs of the values in a row and then operating on them in python | 39,997,575 | <p>I have a data frame in python like: <br></p>
<pre><code>item Value
abc 3
xyz 5
pqr 7
abc 3
pqr 7
abc 5
xyz 5
</code></pre>
<p>Now I want to add each occurrence of an item's value to the next occurrence of that same item's value, in consecutive pairs,
so the output should be:<br></p>
<pre><code>item Value
abc 6
abc 8
xyz 10
pqr 14
</code></pre>
| -1 | 2016-10-12T11:34:17Z | 40,000,836 | <p>This is more of a question for logic than pandas but since you asked in pandas, there is a quick way to do this:</p>
<pre><code>df['value_to_add'] = df.sort_values('item').groupby('item').shift(-1)
df.dropna(inplace=True)
df['value'] = df.value + df.value_to_add
df.drop('value_to_add', inplace=True, axis=1)
</code></pre>
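<p>The same consecutive-pair logic can be sketched without pandas (pure Python, with the question's data hard-coded, just for illustration):</p>

```python
from collections import defaultdict

rows = [("abc", 3), ("xyz", 5), ("pqr", 7),
        ("abc", 3), ("pqr", 7), ("abc", 5), ("xyz", 5)]

# Collect each item's values in order of appearance,
# then sum every consecutive (overlapping) pair.
values_by_item = defaultdict(list)
for item, value in rows:
    values_by_item[item].append(value)

pairs = [(item, a + b)
         for item, values in values_by_item.items()
         for a, b in zip(values, values[1:])]
# pairs -> [('abc', 6), ('abc', 8), ('xyz', 10), ('pqr', 14)]
```

<p>This mirrors what the shift(-1)-within-group trick does: each value is paired with the next value of the same item.</p>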
| 0 | 2016-10-12T14:10:04Z | [
"python",
"pandas"
] |
Can't use question mark (sqlite3, python) | 39,997,779 | <p>This code gets stuck:</p>
<p><code>c.execute("select * from table_name where num=?", a)</code></p>
<p>And this is not:</p>
<p><code>c.execute("select * from table_name where num={}".format(a))</code></p>
<p>So what is wrong? The column in the table is <code>int</code> and <code>a</code> is an <code>int</code> too</p>
| -4 | 2016-10-12T11:45:27Z | 39,998,825 | <p>I'm not sure if it works the same on sqlite, but in mysql, in order to use the "select from select" form, you must give each table an alias, like this:</p>
<pre><code>select count(*) from (select * from users as b where id=ID and lvl=LVL) as a
</code></pre>
<p>It might be the source of your problem.</p>
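<p>Separately, note that sqlite3's second argument to <code>execute</code> must be a sequence (or mapping) even for a single value, which is a likely reason the <code>?</code> form misbehaves with a bare int. A minimal sketch using an in-memory database:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("create table table_name (num int)")
c.execute("insert into table_name values (5)")

a = 5
# The parameters must be wrapped in a sequence, hence (a,) rather than a:
rows = c.execute("select * from table_name where num=?", (a,)).fetchall()
# rows -> [(5,)]
```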
| -2 | 2016-10-12T12:38:32Z | [
"python",
"sqlite3"
] |
C++ uses twice the memory when moving elements from one dequeue to another | 39,997,840 | <p>In my project, I use <a href="https://github.com/pybind/pybind11" rel="nofollow">pybind11</a> to bind C++ code to Python. Recently I have had to deal with very large data sets (70GB+) and encountered the need to split data from one <code>std::deque</code> across multiple <code>std::deque</code>s. Since my dataset is so large, I expect the split not to have much memory overhead. Therefore I went for a one-pop, one-push strategy, which in general should ensure that my requirements are met. </p>
<p>That is all in theory. In practice, my process got killed. So I struggled for the past two days and eventually came up with the following minimal example demonstrating the problem.</p>
<p>Generally, the minimal example creates a bunch of data in a <code>deque</code> (~11GB), returns it to Python, then calls back into <code>C++</code> to move the elements. Simple as that. The moving part is done in an executor.</p>
<p>The interesting thing is that if I don't use the executor, memory usage is as expected, and also when limits on virtual memory are imposed via ulimit, the program really respects these limits and doesn't crash.</p>
<p><strong>test.py</strong></p>
<pre><code>from test import _test
import asyncio
import concurrent
async def test_main(loop, executor):
numbers = _test.generate()
# moved_numbers = _test.move(numbers) # This works!
moved_numbers = await loop.run_in_executor(executor, _test.move, numbers) # This doesn't!
if __name__ == '__main__':
loop = asyncio.get_event_loop()
executor = concurrent.futures.ThreadPoolExecutor(1)
task = loop.create_task(test_main(loop, executor))
loop.run_until_complete(task)
executor.shutdown()
loop.close()
</code></pre>
<p><strong>test.cpp</strong></p>
<pre><code>#include <deque>
#include <iostream>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
namespace py = pybind11;
PYBIND11_MAKE_OPAQUE(std::deque<uint64_t>);
PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>);
template<class T>
void py_bind_opaque_deque(py::module& m, const char* type_name) {
py::class_<std::deque<T>, std::shared_ptr<std::deque<T>>>(m, type_name)
.def(py::init<>())
.def(py::init<size_t, T>());
}
PYBIND11_PLUGIN(_test) {
namespace py = pybind11;
pybind11::module m("_test");
py_bind_opaque_deque<uint64_t>(m, "NumbersDequeue");
// Generate ~11Gb of data.
m.def("generate", []() {
std::deque<uint64_t> numbers;
for (uint64_t i = 0; i < 1500 * 1000000; ++i) {
numbers.push_back(i);
}
return numbers;
});
// Move data from one dequeue to another.
m.def("move", [](std::deque<uint64_t>& numbers) {
std::deque<uint64_t> numbers_moved;
while (!numbers.empty()) {
numbers_moved.push_back(std::move(numbers.back()));
numbers.pop_back();
}
std::cout << "Done!\n";
return numbers_moved;
});
return m.ptr();
}
</code></pre>
<p><strong>test/__init__.py</strong></p>
<pre><code>import warnings
warnings.simplefilter("default")
</code></pre>
<p><strong>Compilation</strong>:</p>
<pre><code>g++ -std=c++14 -O2 -march=native -fPIC -Iextern/pybind11 `python3.5-config --includes` `python3.5-config --ldflags` `python3.5-config --libs` -shared -o test/_test.so test.cpp
</code></pre>
<p><strong>Observations:</strong></p>
<ul>
<li>When the moving part is not done by executor, so we just call <code>moved_numbers = _test.move(numbers)</code>, all works as expected, memory usage showed by htop stays around <code>11Gb</code>, great!.</li>
<li>When moving part is done in executor, the program takes double the memory and crashes.</li>
<li><p>When limits on virtual memory are introduced (~15Gb), all works fine, which is probably the most interesting part.</p>
<p><code>ulimit -Sv 15000000 && python3.5 test.py</code> >> <code>Done!</code>.</p></li>
<li><p>When we increase the limit the program crashes (150Gb > my RAM).</p>
<p><code>ulimit -Sv 150000000 && python3.5 test.py</code> >> <code>[1] 2573 killed python3.5 test.py</code></p></li>
<li><p>Usage of the deque method <code>shrink_to_fit</code> doesn't help (and nor should it)</p></li>
</ul>
<p><strong>Used software</strong></p>
<pre><code>Ubuntu 14.04
gcc version 5.4.1 20160904 (Ubuntu 5.4.1-2ubuntu1~14.04)
Python 3.5.2
pybind11 latest release - v1.8.1
</code></pre>
<p><strong>Note</strong></p>
<p>Please note that this example was made merely to demonstrate the problem. Usage of <code>asyncio</code> and <code>pybind</code> is necessary for the problem to occur. </p>
<p>Any ideas on what might be going on are most welcomed.</p>
| 3 | 2016-10-12T11:48:19Z | 40,068,900 | <p>The problem turned out to be caused by Data being created in one thread and then deallocated in another one. It is so because of malloc arenas in glibc <a href="https://siddhesh.in/posts/malloc-per-thread-arenas-in-glibc.html" rel="nofollow">(for reference see this)</a>. It can be nicely demonstrated by doing:</p>
<pre><code>executor1 = concurrent.futures.ThreadPoolExecutor(1)
executor2 = concurrent.futures.ThreadPoolExecutor(1)
numbers = await loop.run_in_executor(executor1, _test.generate)
moved_numbers = await loop.run_in_executor(executor2, _test.move, numbers)
</code></pre>
<p>which would take twice the memory allocated by <code>_test.generate</code> and</p>
<pre><code>executor = concurrent.futures.ThreadPoolExecutor(1)
numbers = await loop.run_in_executor(executor, _test.generate)
moved_numbers = await loop.run_in_executor(executor, _test.move, numbers)
</code></pre>
<p>which wouldn't.</p>
<p>This issue can be solved either by rewriting the code so it doesn't move the elements from one container to another (my case) or by setting the environment variable <code>export MALLOC_ARENA_MAX=1</code>, which will limit the number of malloc arenas to 1. This, however, might have performance implications (there is a good reason for having multiple arenas).</p>
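<p>Since <code>MALLOC_ARENA_MAX</code> has to be in the environment before the interpreter starts, one way to apply it without touching the shell is a small wrapper that relaunches the script in a child process. A sketch (the child command here is just a stand-in for the real <code>python3.5 test.py</code>):</p>

```python
import os
import subprocess
import sys

# Copy the current environment and add the arena limit for the child only.
env = dict(os.environ, MALLOC_ARENA_MAX="1")

# Stand-in child process: in real use this would be ["python3.5", "test.py"].
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['MALLOC_ARENA_MAX'])"],
    env=env,
)
```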
| 0 | 2016-10-16T09:41:51Z | [
"python",
"c++",
"out-of-memory",
"python-asyncio",
"pybind11"
] |
Wrong dimensions when building convolutional autoencoder | 39,997,894 | <p>I'm taking my first steps in Keras and struggling with the dimensions of my layers. I'm currently building a convolutional autoencoder that I would like to train using the MNIST dataset. Unfortunately, I cannot seem to get the dimensions right, and I'm having trouble understanding where my mistake is.</p>
<p>My model is built through:</p>
<pre><code>def build_model(nb_filters=32, nb_pool=2, nb_conv=3):
input_img = Input(shape=(1, 28, 28))
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
encoded = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x)
return Model(input_img, decoded)
</code></pre>
<p>and the data is retrieved using:</p>
<pre><code>def load_data():
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 1, 28, 28))
x_test = np.reshape(x_test, (len(x_test), 1, 28, 28))
return x_train, x_test
</code></pre>
<p>As you see, I'm trying to normalize the images to display them in black and white, and simply to train an autoencoder to be able to restore them.</p>
<p>Below you can see the error I'm getting:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:/Users//Documents/GitHub/main/research/research_framework/experiment.py",
line 46, in
callbacks=[EarlyStopping(patience=3)]) File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 1047, in fit
batch_size=batch_size) File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 978, in _standardize_user_data
exception_prefix='model target') File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 111, in standardize_input_data
str(array.shape)) Exception: Error when checking model target: expected convolution2d_7 to have shape (None, 8, 32, 1) but got array
with shape (60000L, 1L, 28L, 28L) Total params: 8273</p>
<p>Process finished with exit code 1</p>
</blockquote>
<p>Could you help me decipher this error? Are there any materials beyond the Keras website about building models and dealing with this kind of issue?</p>
<p>Cheers</p>
| 0 | 2016-10-12T11:50:24Z | 40,028,728 | <p>Looks like your input shape isn't correct. Try changing (1,28,28) to (28,28,1) and see if that works for you. For more details and other options to solve the problem, please refer to <a href="http://stackoverflow.com/questions/39848466/tensorflow-keras-convolution2d-valueerror-filter-must-not-be-larger-than-t/39882814#39882814">the answer to another question</a>.</p>
| 1 | 2016-10-13T18:48:29Z | [
"python",
"python-2.7",
"deep-learning",
"keras",
"autoencoder"
] |
Wrong dimensions when building convolutional autoencoder | 39,997,894 | <p>I'm taking my first steps in Keras and struggling with the dimensions of my layers. I'm currently building a convolutional autoencoder that I would like to train using the MNIST dataset. Unfortunately, I cannot seem to get the dimensions right, and I'm having trouble understanding where my mistake is.</p>
<p>My model is built through:</p>
<pre><code>def build_model(nb_filters=32, nb_pool=2, nb_conv=3):
input_img = Input(shape=(1, 28, 28))
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
encoded = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x)
return Model(input_img, decoded)
</code></pre>
<p>and the data is retrieved using:</p>
<pre><code>def load_data():
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 1, 28, 28))
x_test = np.reshape(x_test, (len(x_test), 1, 28, 28))
return x_train, x_test
</code></pre>
<p>As you see, I'm trying to normalize the images to display them in black and white, and simply to train an autoencoder to be able to restore them.</p>
<p>Below you can see the error I'm getting:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:/Users//Documents/GitHub/main/research/research_framework/experiment.py",
line 46, in
callbacks=[EarlyStopping(patience=3)]) File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 1047, in fit
batch_size=batch_size) File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 978, in _standardize_user_data
exception_prefix='model target') File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\site-packages\keras\engine\training.py",
line 111, in standardize_input_data
str(array.shape)) Exception: Error when checking model target: expected convolution2d_7 to have shape (None, 8, 32, 1) but got array
with shape (60000L, 1L, 28L, 28L) Total params: 8273</p>
<p>Process finished with exit code 1</p>
</blockquote>
<p>Could you help me decipher this error? Are there any materials beyond the Keras website about building models and dealing with this kind of issue?</p>
<p>Cheers</p>
| 0 | 2016-10-12T11:50:24Z | 40,039,172 | <p>The reason was that while I changed my backend configuration in keras.json, I didn't change the image dimension ordering, so it was still set to tensorflow.</p>
<p>Changing it to:</p>
<pre><code>{
"image_dim_ordering": "th",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}
</code></pre>
<p>did the trick.</p>
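<p>A quick, dependency-free way to sanity-check that the two settings agree (parsing the same JSON shown above):</p>

```python
import json

cfg = json.loads("""
{
  "image_dim_ordering": "th",
  "epsilon": 1e-07,
  "floatx": "float32",
  "backend": "theano"
}
""")

# "th" ordering pairs with the theano backend, "tf" with tensorflow.
consistent = (cfg["backend"], cfg["image_dim_ordering"]) in {
    ("theano", "th"),
    ("tensorflow", "tf"),
}
```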
| 0 | 2016-10-14T09:04:20Z | [
"python",
"python-2.7",
"deep-learning",
"keras",
"autoencoder"
] |
Creating a recursive function for calculating R = x - N * y, with conditions | 39,997,920 | <p>I wish to create a function for calculating R = x - N * y, where x and y are floats and N is the largest positive integer, so that x > N * y.</p>
<p>The function should only take the inputs of x and y.</p>
<p>I have previously created the function through a loop, but having trouble trying to convert it to recursion. My basic idea is something like:</p>
<pre><code>def florec(x, y):
if x > y:
R = x - N * y
florec(x, y_increased)
return R
</code></pre>
<p>My problem is that I can not figure out how to code "y_increased", meaning, how I can update N to N+1, and then call upon florec(x, (N+1)*y). Then update N+1 to N+2 and call upon florec(x, (N+2)*y) and so on.</p>
<p>Feeling quite stuck currently, so any help at all to move forward would be appreciated.</p>
| 0 | 2016-10-12T11:51:44Z | 39,998,123 | <p>Here's a recursive way of computing R:</p>
<pre><code>def florec(x, y):
if x > y:
return florec(x-y, y)
return x
</code></pre>
<p>(Note it only works for positive floats.)</p>
<p>I don't know if this addresses your recursion issues. Maybe this use case is not best suited to illustrate recursion.</p>
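<p>A quick sanity check that this really computes the modulus for positive inputs (the function is copied here so the snippet is self-contained):</p>

```python
def florec(x, y):
    # Subtract y until the remainder is smaller than y (positive inputs only).
    if x > y:
        return florec(x - y, y)
    return x

# Integer case matches % exactly; the float case matches up to rounding error
# accumulated by the repeated subtractions.
r_int = florec(17, 5)        # 2, same as 17 % 5
r_float = florec(16.6, 3.2)  # close to 16.6 % 3.2
```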
| 2 | 2016-10-12T12:02:30Z | [
"python",
"python-3.x"
] |
Creating a recursive function for calculating R = x - N * y, with conditions | 39,997,920 | <p>I wish to create a function for calculating R = x - N * y, where x and y are floats and N is the largest positive integer, so that x > N * y.</p>
<p>The function should only take the inputs of x and y.</p>
<p>I have previously created the function through a loop, but having trouble trying to convert it to recursion. My basic idea is something like:</p>
<pre><code>def florec(x, y):
if x > y:
R = x - N * y
florec(x, y_increased)
return R
</code></pre>
<p>My problem is that I can not figure out how to code "y_increased", meaning, how I can update N to N+1, and then call upon florec(x, (N+1)*y). Then update N+1 to N+2 and call upon florec(x, (N+2)*y) and so on.</p>
<p>Feeling quite stuck currently, so any help at all to move forward would be appreciated.</p>
| 0 | 2016-10-12T11:51:44Z | 39,998,546 | <p>If you're trying to return a value at each incrementation, you can use a generator function:</p>
<pre><code>def florec(x, y):
N = 30 # not sure what you want N to start with
while True:
if x > N * y:
yield x - N * y
else:
break
N += 1
for i in florec(332.432, 5.32):
print i
</code></pre>
<p>Result:</p>
<pre><code>172.832
167.512
162.192
156.872
151.552
146.232
140.912
135.592
130.272
124.952
119.632
114.312
108.992
103.672
98.352
93.032
87.712
82.392
77.072
71.752
66.432
61.112
55.792
50.472
45.152
39.832
34.512
29.192
23.872
18.552
13.232
7.912
2.592
</code></pre>
| 0 | 2016-10-12T12:23:52Z | [
"python",
"python-3.x"
] |
Creating a recursive function for calculating R = x - N * y, with conditions | 39,997,920 | <p>I wish to create a function for calculating R = x - N * y, where x and y are floats and N is the largest positive integer, so that x > N * y.</p>
<p>The function should only take the inputs of x and y.</p>
<p>I have previously created the function through a loop, but having trouble trying to convert it to recursion. My basic idea is something like:</p>
<pre><code>def florec(x, y):
if x > y:
R = x - N * y
florec(x, y_increased)
return R
</code></pre>
<p>My problem is that I can not figure out how to code "y_increased", meaning, how I can update N to N+1, and then call upon florec(x, (N+1)*y). Then update N+1 to N+2 and call upon florec(x, (N+2)*y) and so on.</p>
<p>Feeling quite stuck currently, so any help at all to move forward would be appreciated.</p>
| 0 | 2016-10-12T11:51:44Z | 39,998,591 | <p>Per <a href="http://stackoverflow.com/users/4653485/j%C3%A9r%C3%B4me">Jerome</a>'s original comment, the function you're describing is the definition of the modulus. If you absolutely need to use recursion, the following will get it done.</p>
<pre><code>def florec(x, y, N=1):
R = x - N * y
if R < y:
return R
return florec(x, y, N+1)
>>> florec(16.6, 3.2)
2.20000000000001
>>> 16.6 % 3.2
2.20000000000001
</code></pre>
<p>Note that the above will only work for positive x and y, and only when x is already greater than y.</p>
| 0 | 2016-10-12T12:25:19Z | [
"python",
"python-3.x"
] |
multiple connections to same spreadsheet (gspread) | 39,997,924 | <p>Good Morning, this is my first question so please bear with me! I created a system for doing a mock election at my high school that uses a raspberry pi and a touchscreen. The interface is handled through TKInter and the results are then appended to a google sheet using gspread. This allows me to then process the data in a variety of charts and analysis. </p>
<p>The issue I'm running into is that I am using 4 machines. If I take them one at a time they append the data fine. If I do multiple machines at once I SOMETIMES get one just waiting for the other and sometimes only 1 of the 4 is recorded. </p>
<p>Is there a way to better set things up to push multiple appends simultaneously from the different machines? Currently each machine is a mirror of the others. Would it work better if I did a different authorization setup and different JSON file for each machine? Or is there something else that I am missing? The relevant code for writing to the sheet is below:</p>
<pre><code> #while True:
# Login if necessary.
if worksheet is None:
worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)
# Append the data in the spreadsheet, including a timestamp
try:
worksheet.append_row((datetime.datetime.now(), gender, grade, party, vote))
except:
# Error appending data, most likely because credentials are stale.
# Null out the worksheet so a login is performed at the top of the loop.
print('Append error, logging in again')
worksheet = None
</code></pre>
<p>Thank you in advance for your assistance!!</p>
| 1 | 2016-10-12T11:52:05Z | 40,091,230 | <p>The issue here is that if you have all four of these machines starting at the same time they will have a relatively similar state for the sheet. However, once a single machine has made a change the state for all the others becomes different from the actual sheet.</p>
<p>Let's say the sheet starts with N rows.</p>
<p>If Machine1 appends 2 rows to the sheet, the number of rows then becomes N + 2.</p>
<p>BUT</p>
<p>Machines 2-4 still think the sheet has N rows!</p>
<p>When any of these N-row machines tries to append to the sheet they will overwrite the data that Machine1 wrote to the sheet, obviously not something you want to happen when you're tallying votes.</p>
<p>You'll either need to find a way to let each machine know when the state of the sheet has changed and refresh its state or have a single point of contact from which all the appends are made.</p>
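<p>A minimal sketch of the single-point-of-contact idea: every machine puts its row on one queue, and a single writer thread performs all the appends, so no client works from a stale row count. The list here is a stand-in for <code>worksheet.append_row</code>:</p>

```python
import queue
import threading

rows = queue.Queue()
sheet = []  # stand-in for the real worksheet

def writer():
    while True:
        row = rows.get()
        if row is None:          # sentinel: stop the writer
            break
        sheet.append(row)        # real code: worksheet.append_row(row)

t = threading.Thread(target=writer)
t.start()

for machine in range(4):         # four voting machines submitting concurrently
    rows.put(("2016-10-12", "F", 12, "partyA", "vote-machine-%d" % machine))
rows.put(None)
t.join()
```

<p>All four rows survive because only one thread ever touches the sheet.</p>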
| 0 | 2016-10-17T16:20:59Z | [
"python",
"gspread"
] |
How to run a zeppelin notebook using REST api and return results in python? | 39,998,000 | <p>I am running a zeppelin notebook using the following REST call from python:</p>
<pre><code>import requests
requests.post('http://x.y.z.x:8080/api/notebook/job/2BZ3VJZ4G').json()
</code></pre>
<p>The output is {u'status': u'OK'}</p>
<p>But I want to return some results/exception(if any) from few blocks in the zeppelin notebook to the python script. </p>
<p>I also tried to run only a paragraph in the notebook using </p>
<pre><code>requests.post('http://x.y.z.x:8080/api/notebook/job/2BZ3VJZ4G/20160922-140926_526498241').json()
</code></pre>
<p>and received the same output {u'status': u'OK'}. </p>
<p>Can somebody help me to retrieve the results from zeppelin in python?</p>
| 1 | 2016-10-12T11:55:48Z | 40,102,838 | <p>Zeppelin has introduced a synchronous API to run a paragraph in its latest, yet-to-be-released 0.7.0 version. You can clone the latest code from their repo and build a snapshot yourself. The URL for the API is <a href="http://[zeppelin-server]:[zeppelin-port]/api/notebook/run/[notebookId]/[paragraphId]" rel="nofollow">http://[zeppelin-server]:[zeppelin-port]/api/notebook/run/[notebookId]/[paragraphId]</a>. This will return the output of the paragraph after it has run completely.</p>
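<p>From Python, that would look something like the following; the helper just builds the synchronous URL (then POST it with <code>requests</code> as before), using the note and paragraph ids from the question:</p>

```python
def run_url(server, port, note_id, paragraph_id):
    # Synchronous run endpoint (Zeppelin 0.7.0+): returns the paragraph output.
    return "http://{}:{}/api/notebook/run/{}/{}".format(
        server, port, note_id, paragraph_id)

url = run_url("x.y.z.x", 8080, "2BZ3VJZ4G", "20160922-140926_526498241")
# requests.post(url).json() would then contain the paragraph's result
```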
| 0 | 2016-10-18T08:06:55Z | [
"python",
"rest",
"apache-zeppelin"
] |
Passing captured video frame (numpy array object) from webcam feed to caffe model | 39,998,038 | <p>I am a beginner in Caffe and I am trying to use the Imagenet model for object classification. My requirement is that I want to use it from a webcam feed and detect objects in the webcam feed. For this, I use the following code:</p>
<pre><code> cap = cv2.VideoCapture(0)
while(True):
ret, frame = cap.read() #frame is of type numpy array
#frame = caffe.io.array_to_datum(frame)
img = caffe.io.load_image(frame)
</code></pre>
<p>Obviously this does not work since <code>caffe.io.load_image</code> expects an image path.
As you can see, I also tried using <code>caffe io.py</code>'s <code>array_to_datum</code> function (got it from <a href="http://stackoverflow.com/questions/30815035/how-to-convert-mat-from-opencv-to-caffe-format">this stackoverflow question </a>) and passed the frame to caffe io load_image but this too does not work.
How can I pass the captured video frames from the webcam directly to <code>caffe io load_image</code>?
And if that is not possible, then what is the way to load the frame into <code>caffe io</code>? Please help. Thanks in advance.</p>
| 0 | 2016-10-12T11:58:37Z | 40,011,804 | <p>caffe.io.load_image does not do much. It only does the following :</p>
<ol>
<li>Read image from disk (given the path)</li>
<li>Make sure that the returned image has 3 dimensions (HxWx1 or HxWx3)</li>
</ol>
<p>(see source <a href="https://github.com/BVLC/caffe/blob/master/python/caffe/io.py#L279" rel="nofollow">caffe.io.load_image</a>)</p>
<p>So it does <strong>not</strong> load the image <strong>into your model</strong>, it's just a helper function that loads an image from disk. To load an image into memory, you can load it however you like (from disk, from webcam..etc). To load the network, feed the image into it and do inference, you can do something like the following :</p>
<pre><code># Load pre-trained network
net=caffe.Net(deployprototxt_path, caffemodel_path, caffe.TEST)
# Feed network with input
net.blobs['data'].data[0,...] = frame
# Do inference
net.forward()
# Access prediction
probabilities = net.blobs['prob'].data
</code></pre>
<p>Make sure the frame dimensions match the expected input dimensions as specified in the deploy.prototxt (see <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/deploy.prototxt#L6" rel="nofollow">example for CaffeNet</a>)</p>
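<p>Also note that OpenCV returns frames as (height, width, channels) while a Caffe blob is (channels, height, width); with NumPy that reorder is <code>frame.transpose(2, 0, 1)</code>. A dependency-free sketch of the axis move, with nested lists standing in for the array (real preprocessing such as resizing, BGR handling, and mean subtraction is omitted):</p>

```python
def chw_from_hwc(frame):
    # (height, width, channels) -> (channels, height, width)
    h, w, c = len(frame), len(frame[0]), len(frame[0][0])
    return [[[frame[row][col][ch] for col in range(w)]
             for row in range(h)]
            for ch in range(c)]

tiny = [[[1, 10], [2, 20]],
        [[3, 30], [4, 40]]]      # a 2x2 "frame" with 2 channels
blob = chw_from_hwc(tiny)        # channels first, ready to assign into the blob
```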
| 3 | 2016-10-13T03:26:22Z | [
"python",
"opencv",
"video",
"caffe",
"pycaffe"
] |
need to restart python while applying Celery config | 39,998,083 | <p>That's a small story...</p>
<p>I had this error:</p>
<blockquote>
<p>AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'</p>
</blockquote>
<p>When changed tasks.py, like Diederik said at <a href="http://stackoverflow.com/questions/23215311/celery-with-rabbitmq-attributeerror-disabledbackend-object-has-no-attribute/39997411#39997411">Celery with RabbitMQ: AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'</a></p>
<pre><code>app = Celery('tasks', backend='rpc://', broker='amqp://guest@localhost//')
</code></pre>
<p>ran it</p>
<pre><code>>>> from tasks import add
>>> result = add.delay(4,50)
>>> result.ready()
</code></pre>
<p>got DisabledBackend again ... hmm what was that..</p>
<p>put code to file run.py and it returned True...</p>
<pre><code>from tasks import add
try:
result = add.delay(1,4)
print (result.ready())
except:
print "exept"
</code></pre>
<p>I see that if I call >>> from tasks import add after tasks.py has changed, it doesn't get the updates... That behaviour is the same for ipython, so because I can't understand the reason, I advise people to DEBUG from scripts like ~runthis.py </p>
<p>I will be glad for an answer that smashes my idea...</p>
| 0 | 2016-10-12T12:01:07Z | 39,999,436 | <p>If using the interpreter, you need to </p>
<pre><code>reload(tasks)
</code></pre>
<p>This will force a re-import of the tasks module.</p>
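<p>Note that on Python 3, <code>reload</code> lives in <code>importlib</code>. The same idea, demonstrated on a stdlib module (in the question's setting it would be <code>tasks</code>):</p>

```python
import importlib
import json  # stand-in for the `tasks` module

# Re-executes the module's source and updates the module object in place.
fresh = importlib.reload(json)
```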
| 0 | 2016-10-12T13:08:45Z | [
"python",
"ipython",
"celery"
] |
Two Celery Processes Running | 39,998,138 | <p>I am debugging an issue where every scheduled task is run twice. I saw two processes named celery. Is it normal for two celery tasks to be running?</p>
<pre><code>$ ps -ef | grep celery
hgarg 303 32764 0 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
hgarg 307 21179 0 17:24 pts/1 00:00:00 grep celery
hgarg 32764 1 4 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
</code></pre>
| 0 | 2016-10-12T12:03:09Z | 40,053,897 | <p>There were two pairs of Celery processes, the older of which shouldn't have been running. Killing them all and restarting celery seems to have fixed it. With no other recent changes, it's unlikely that anything else could have caused it.</p>
| 0 | 2016-10-15T00:52:50Z | [
"python",
"django",
"celery"
] |
How to sum values grouped by a categorical column in pandas? | 39,998,184 | <p>I have data which has a categorical column that groups the data, and other columns, like this, in a dataframe <code>df</code>.</p>
<pre><code>id subid value
1 10 1.5
1 20 2.5
1 30 7.0
2 10 12.5
2 40 5
</code></pre>
<p>What I need is, for each row, the proportion of its value within its <code>id</code> group. For example, <code>df</code> would become: </p>
<pre><code>id subid value id_sum proportion
1 10 1.5 11.0 0.136
1 20 2.5 11.0 0.227
1 30 7.0 11.0 0.636
2 10 12.5 17.5 0.714
2 40 5 17.5 0.285
</code></pre>
<p>Now, I tried getting the id_sum column by doing</p>
<pre><code>df['id_sum'] = df.groupby('id')['value'].sum()
</code></pre>
<p>But this does not seem to work as hoped. My end goal is to get the <code>proportion</code> column. What is the correct way of getting that?</p>
| 0 | 2016-10-12T12:05:27Z | 39,998,460 | <p>Here we go. <code>transform('sum')</code> returns a result aligned with the original index, so it can be assigned straight back as a column:</p>
<pre><code>df['id_sum'] = df.groupby('id')['value'].transform('sum')
df['proportion'] = df['value'] / df['id_sum']
</code></pre>
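<p>A quick sanity check of the approach, run on the sample data from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2],
                   "subid": [10, 20, 30, 10, 40],
                   "value": [1.5, 2.5, 7.0, 12.5, 5.0]})

# transform('sum') broadcasts each group's sum back onto that group's rows,
# so the result aligns with df's index and can be assigned as a column
df["id_sum"] = df.groupby("id")["value"].transform("sum")
df["proportion"] = df["value"] / df["id_sum"]

print(df)  # id 1 rows get id_sum 11.0, id 2 rows get 17.5
```

<p>Within each <code>id</code> the <code>proportion</code> column sums to 1.</p>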
| 2 | 2016-10-12T12:19:42Z | [
"python",
"pandas",
"aggregate"
] |
I want to compile the __init__.py file and install into other folder in yocto build system? | 39,998,240 | <p>I want to compile the __init__.py file and install into other folder in yocto build system?</p>
<p>Scenario:</p>
<blockquote>
<p>This is basically in the Yocto build system. A third-party library is
available as a zipped file in the downloads folder of my Yocto build, but the
library does not ship an __init__.py file in its main folder. During the
build with the bitbake command it is unpacked into the working directory and
compiled, but no __init__.py or __init__.pyc file is produced.</p>
</blockquote>
<p>Does anyone have an idea how I can manually copy this __init__.py file and compile it using the .bb file in the Yocto build system? </p>
| 0 | 2016-10-12T12:08:31Z | 40,052,181 | <p>You can place an empty <strong>__init__.py</strong> file alongside the recipe and add it to <strong>SRC_URI</strong> in that recipe:</p>
<pre><code>SRC_URI = "http://www.aaa/bbb.tar.gz \
file://__init__.py"
</code></pre>
<p>The unpacker will just copy it into WORKDIR, where the archive is unpacked.</p>
| 0 | 2016-10-14T21:20:54Z | [
"python",
"system",
"yocto"
] |
Append an empty row in dataframe using pandas | 39,998,262 | <p>I am trying to append an empty row at the end of a dataframe but am unable to do so; I am even trying to understand how pandas works with the append function and still not getting it.</p>
<p>Here's the code:</p>
<pre><code>import pandas as pd
excel_names = ["ARMANI+EMPORIO+AR0143-book.xlsx"]
excels = [pd.ExcelFile(name) for name in excel_names]
frames = [x.parse(x.sheet_names[0], header=None,index_col=None).dropna(how='all') for x in excels]
for f in frames:
f.append(0, float('NaN'))
f.append(2, float('NaN'))
</code></pre>
<p>There are two columns and random number of row.</p>
<p>with "print f" in for loop i Get this:</p>
<pre><code> 0 1
0 Brand Name Emporio Armani
2 Model number AR0143
4 Part Number AR0143
6 Item Shape Rectangular
8 Dial Window Material Type Mineral
10 Display Type Analogue
12 Clasp Type Buckle
14 Case Material Stainless steel
16 Case Diameter 31 millimetres
18 Band Material Leather
20 Band Length Women's Standard
22 Band Colour Black
24 Dial Colour Black
26 Special Features second-hand
28 Movement Quartz
</code></pre>
| 0 | 2016-10-12T12:09:34Z | 39,998,297 | <p><code>append</code> is a method on the DataFrame itself (there is no <code>pd.append</code>), and it returns a new frame rather than modifying in place:</p>
<pre><code>f = f.append(pd.Series(dtype=float), ignore_index=True) # appends one all-NaN row
</code></pre>
| 0 | 2016-10-12T12:11:54Z | [
"python",
"python-2.7",
"pandas"
] |
Append an empty row in dataframe using pandas | 39,998,262 | <p>I am trying to append an empty row at the end of a dataframe but am unable to do so; I am even trying to understand how pandas works with the append function and still not getting it.</p>
<p>Here's the code:</p>
<pre><code>import pandas as pd
excel_names = ["ARMANI+EMPORIO+AR0143-book.xlsx"]
excels = [pd.ExcelFile(name) for name in excel_names]
frames = [x.parse(x.sheet_names[0], header=None,index_col=None).dropna(how='all') for x in excels]
for f in frames:
f.append(0, float('NaN'))
f.append(2, float('NaN'))
</code></pre>
<p>There are two columns and random number of row.</p>
<p>with "print f" in for loop i Get this:</p>
<pre><code> 0 1
0 Brand Name Emporio Armani
2 Model number AR0143
4 Part Number AR0143
6 Item Shape Rectangular
8 Dial Window Material Type Mineral
10 Display Type Analogue
12 Clasp Type Buckle
14 Case Material Stainless steel
16 Case Diameter 31 millimetres
18 Band Material Leather
20 Band Length Women's Standard
22 Band Colour Black
24 Dial Colour Black
26 Special Features second-hand
28 Movement Quartz
</code></pre>
| 0 | 2016-10-12T12:09:34Z | 39,998,624 | <p>You can add it by appending a Series to the dataframe as follows. I am assuming by blank you mean you want to add a row containing only NaN.
You can first create a Series object of NaN. Make sure you specify the columns while defining the Series object, via its <code>index</code> parameter.
Then you can append it to the DataFrame. Hope it helps!</p>
<pre><code>from numpy import nan as Nan
import pandas as pd
>>> df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
... 'B': ['B0', 'B1', 'B2', 'B3'],
... 'C': ['C0', 'C1', 'C2', 'C3'],
... 'D': ['D0', 'D1', 'D2', 'D3']},
... index=[0, 1, 2, 3])
>>> s2 = pd.Series([Nan,Nan,Nan,Nan], index=['A', 'B', 'C', 'D'])
>>> result = df1.append(s2)
>>> result
A B C D
0 A0 B0 C0 D0
1 A1 B1 C1 D1
2 A2 B2 C2 D2
3 A3 B3 C3 D3
4 NaN NaN NaN NaN
</code></pre>
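<p>A note for readers on newer pandas releases, where <code>DataFrame.append</code> has been deprecated and removed (pandas 2.0): the same blank row can be added with <code>pd.concat</code>. A sketch:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"A": ["A0", "A1"], "B": ["B0", "B1"]})

# build a one-row frame of NaNs with the same columns, then concatenate
blank = pd.DataFrame([[np.nan] * len(df1.columns)], columns=df1.columns)
result = pd.concat([df1, blank], ignore_index=True)

print(result)
```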
| 0 | 2016-10-12T12:26:51Z | [
"python",
"python-2.7",
"pandas"
] |
How could I train a model in SPARK with MLlib with a Dataframe of String values? | 39,998,393 | <p>I am new to Apache SPARK and MLlib and I am trying to build a model to perform a supervised classification over a dataset. The problem I have is that all the examples I found on the internet explain how to work with REGRESSION problems, and always with numerical values. But I have the following context:</p>
<p>I have installed HDP distribution (Hortonworks) and I am working from ZEPPELIN with a pyspark interpreter.</p>
<p>I have a dataframe with some attributes which are 'double' and 'string' types; and the label I want to predict is a string ('yes' or 'no'). I show you what I did by the moment:</p>
<pre><code>from pyspark.mllib.tree import RandomForest, RandomForestModel
from pyspark.mllib.util import MLUtils
from pyspark.sql.types import *
from pyspark.sql import Row
from pyspark.mllib.regression import LabeledPoint
# I get the data
sqlContext = sqlc
df = sqlContext.sql("SELECT string1, double1, string2, double2, label_to_predict FROM HIVE_TABLE")
temp = df.map(lambda line:LabeledPoint(line[0],[line[1:]]))
temp.take(5)
</code></pre>
<p>Here I get an error:</p>
<pre><code>Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 19, dlladatanaly02.orona.es): org.apache.spark.api.python. PythonException: Traceback (most recent call last):
File "/usr/hdp/current/spark-client/python/pyspark/worker.py", line 111, in main
process()
File "/usr/hdp/current/spark-client/python/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/hdp/current/spark-client/python/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/hdp/current/spark-client/python/pyspark/rdd.py", line 1293, in takeUpToNumLeft
yield next(iterator)
File "<string>", line 9, in <lambda>
File "/usr/hdp/current/spark-client/python/pyspark/mllib/regression.py", line 52, in __init__
self.features = _convert_to_vector(features)
File "/usr/hdp/current/spark-client/python/pyspark/mllib/linalg/__init__.py", line 71, in _convert_to_vector
return DenseVector(l)
File "/usr/hdp/current/spark-client/python/pyspark/mllib/linalg/__init__.py", line 274, in __init__
ar = np.array(ar, dtype=np.float64)
ValueError: setting an array element with a sequence.
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1856)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1882)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/hdp/current/spark-client/python/pyspark/worker.py", line 111, in main
process()
File "/usr/hdp/current/spark-client/python/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/hdp/current/spark-client/python/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/hdp/current/spark-client/python/pyspark/rdd.py", line 1293, in takeUpToNumLeft
yield next(iterator)
File "<string>", line 9, in <lambda>
File "/usr/hdp/current/spark-client/python/pyspark/mllib/regression.py", line 52, in __init__
self.features = _convert_to_vector(features)
File "/usr/hdp/current/spark-client/python/pyspark/mllib/linalg/__init__.py", line 71, in _convert_to_vector
return DenseVector(l)
File "/usr/hdp/current/spark-client/python/pyspark/mllib/linalg/__init__.py", line 274, in __init__
ar = np.array(ar, dtype=np.float64)
ValueError: setting an array element with a sequence.
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.\n', JavaObject id=o287), <traceback object at 0x13cc098>)
</code></pre>
<p>I supposed that the values must be numerical, and I tried this casting:</p>
<pre><code>temp = df.map(lambda line:LabeledPoint(float(line[0]),[float(line[1:])]))
</code></pre>
<p>But then I get this error:</p>
<pre><code>TypeError: float() argument must be a string or a number
</code></pre>
<p>So, my question is:</p>
<p>If I want to make a classification to predict a string value (or nominal value) with attributes that are numerical and string, how could I do it? (Suppose I want to use, for example, a RandomForest or SVM model).</p>
| 0 | 2016-10-12T12:16:20Z | 39,998,713 | <p>You cannot. All variables have to be indexed and/or encoded before they can be used with Spark ML/MLlib. </p>
<p>Check:</p>
<ul>
<li><a href="https://spark.apache.org/docs/latest/ml-features.html" rel="nofollow">Extracting, transforming and selecting features</a></li>
<li><a href="https://spark.apache.org/docs/latest/mllib-feature-extraction.html" rel="nofollow">Feature Extraction and Transformation - RDD-based API</a> </li>
</ul>
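<p>In Spark ML the indexing is done for you by <code>StringIndexer</code>. As a plain-Python sketch of the idea only (not the Spark API): each distinct string label gets a numeric index, ordered by descending frequency, which is how <code>StringIndexer</code> orders labels by default:</p>

```python
from collections import Counter

labels = ["yes", "yes", "no", "maybe"]

# order labels by frequency; ties keep first-seen order
order = [lab for lab, _ in Counter(labels).most_common()]
index = {lab: float(i) for i, lab in enumerate(order)}

encoded = [index[lab] for lab in labels]
print(encoded)  # [0.0, 0.0, 1.0, 2.0]
```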
| 0 | 2016-10-12T12:31:53Z | [
"python",
"apache-spark",
"machine-learning",
"classification",
"prediction"
] |
How to delete a file without an extension? | 39,998,424 | <p>I have made a function for deleting files:</p>
<pre><code>def deleteFile(deleteFile):
if os.path.isfile(deleteFile):
os.remove(deleteFile)
</code></pre>
<p>However, when passing a FIFO-filename (without file-extension), this is not accepted by the os-module.
Specifically I have a subprocess create a FIFO-file named 'Testpipe'.
When calling:</p>
<pre><code>os.path.isfile('Testpipe')
</code></pre>
<p>It results to <code>False</code>. The file is not in use/open or anything like that. Python runs under Linux.</p>
<p>How can you correctly delete a file like that?</p>
| 5 | 2016-10-12T12:17:51Z | 39,998,522 | <p><code>isfile</code> checks for a <em>regular</em> file.</p>
<p>You could workaround it like this by checking if it exists but not a directory or a symlink:</p>
<pre><code>def deleteFile(filename):
if os.path.exists(filename) and not os.path.isdir(filename) and not os.path.islink(filename):
os.remove(filename)
</code></pre>
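<p>The behaviour is easy to verify with a real FIFO (POSIX only). Note that <code>os.remove</code> itself has no problem deleting a FIFO; only the <code>isfile</code> guard rejects it:</p>

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "Testpipe")
os.mkfifo(path)

print(os.path.isfile(path))                  # False: a FIFO is not a regular file
print(stat.S_ISFIFO(os.stat(path).st_mode))  # True: it is a FIFO
print(os.path.exists(path))                  # True: but it does exist

os.remove(path)                              # deletes the FIFO just fine
print(os.path.exists(path))                  # False
```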
| 6 | 2016-10-12T12:22:19Z | [
"python"
] |
Flask-Login - User returns to Anonymous after successful login then redirect | 39,998,516 | <p>I'm trying just to setup a base login model with developed code mostly from <a href="https://flask-login.readthedocs.io/en/latest/" rel="nofollow" title="Flask-Login">Flask-Login</a>. </p>
<p>After my user successfully logs in and I issue a <code>redirect(url_for('index'))</code>, the user loses his authentication and returns to the value <code>flask_login.AnonymousUserMixin.</code></p>
<p>I realize there are some simple workarounds but I'm trying to understand why my code doesn't work like the examples.</p>
<p>I must be missing something simple or lack an understanding of Flask-Login. How can the user remain logged in after a redirect?</p>
<pre><code>__init__.py
...
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = "login"
...
models.py
class User(UserMixin, db.Model):
__tablename__ = 'users'
uid = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(100))
lastname = db.Column(db.String(100))
username = db.Column(db.String(100), unique=True)
email = db.Column(db.String(120), unique=True)
pwdhash = db.Column(db.String(54))
def __init__(self, firstname, lastname, email, username, password):
... self ... = ....
def get_id(self):
return(self.username)
def __repr__(self):
return '<User is:%r>' % (self.username)
routes.py
@login_manager.user_loader
def load_user(user_id):
try:
return User.query.get(user_id)
except:
return None
@app.route('/login', methods=['GET', 'POST'])
def login():
form = LoginForm()
if request.method == "POST":
if form.validate():
user = User.query.filter_by(username=form.username.data.lower()).first()
login_user(user, remember=False)
assert current_user.is_authenticated
return redirect(url_for('index'))
else:
return render_template('login.html', form=form)
return render_template('login.html', form=form)
@app.route('/index')
@login_required
def index():
return render_template("home.html")
</code></pre>
<p>I have reviewed <a href="http://stackoverflow.com/questions/30150626/flask-login-user-is-set-to-anonymous-after-login" title="flask-login user is set to anonymous after login">flask-login user is set to anonymous after login</a> but that login method is different than the one above.</p>
| 0 | 2016-10-12T12:21:56Z | 39,998,810 | <p>Add these functions to your <code>User</code> class.</p>
<pre><code>def is_authenticated(self):
return True
def is_active(self):
return self.active
def is_anonymous(self):
return False
</code></pre>
<p>If I remember correctly, <code>Flask-Login</code> requires them in your <code>User</code> class.</p>
<p><code>self.active</code> is a Boolean field. Trivially, it tells <code>Flask-Login</code> whether the user is active or not. You might want to declare it using <code>active = db.Column(db.Boolean, nullable=False)</code>.</p>
| 0 | 2016-10-12T12:37:29Z | [
"python",
"flask-login"
] |
Flask-Login - User returns to Anonymous after successful login then redirect | 39,998,516 | <p>I'm trying just to setup a base login model with developed code mostly from <a href="https://flask-login.readthedocs.io/en/latest/" rel="nofollow" title="Flask-Login">Flask-Login</a>. </p>
<p>After my user successfully logs in and I issue a <code>redirect(url_for('index'))</code>, the user loses his authentication and returns to the value <code>flask_login.AnonymousUserMixin.</code></p>
<p>I realize there are some simple workarounds but I'm trying to understand why my code doesn't work like the examples.</p>
<p>I must be missing something simple or lack an understanding of Flask-Login. How can the user remain logged in after a redirect?</p>
<pre><code>__init__.py
...
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = "login"
...
models.py
class User(UserMixin, db.Model):
__tablename__ = 'users'
uid = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(100))
lastname = db.Column(db.String(100))
username = db.Column(db.String(100), unique=True)
email = db.Column(db.String(120), unique=True)
pwdhash = db.Column(db.String(54))
def __init__(self, firstname, lastname, email, username, password):
... self ... = ....
def get_id(self):
return(self.username)
def __repr__(self):
return '<User is:%r>' % (self.username)
routes.py
@login_manager.user_loader
def load_user(user_id):
try:
return User.query.get(user_id)
except:
return None
@app.route('/login', methods=['GET', 'POST'])
def login():
form = LoginForm()
if request.method == "POST":
if form.validate():
user = User.query.filter_by(username=form.username.data.lower()).first()
login_user(user, remember=False)
assert current_user.is_authenticated
return redirect(url_for('index'))
else:
return render_template('login.html', form=form)
return render_template('login.html', form=form)
@app.route('/index')
@login_required
def index():
return render_template("home.html")
</code></pre>
<p>I have reviewed <a href="http://stackoverflow.com/questions/30150626/flask-login-user-is-set-to-anonymous-after-login" title="flask-login user is set to anonymous after login">flask-login user is set to anonymous after login</a> but that login method is different than the one above.</p>
| 0 | 2016-10-12T12:21:56Z | 40,010,704 | <p>Well, I need to answer my own question on a dumb oversight that I should have caught (bleary eyes?)</p>
<p>Simple fix in <code>def load_user(user_id):</code> where I needed to replace the line<br></p>
<p>Bad: <code>return User.query.get(user_id)</code></p>
<p>Good: <code>return User.query.filter_by(username=user_id).first()</code></p>
<p>I suppose the take-away is the importance of <code>def load_user()</code> in preserving session integrity.</p>
| 0 | 2016-10-13T01:05:51Z | [
"python",
"flask-login"
] |
Regular expression not working in pywinauto unless I give full text | 39,998,520 | <p>Here is the code snippet that I am using:</p>
<pre><code>browserWin = application.Application()
browserWin.Start(<FirefoxPath>)
# This starts the Firefox browser.
browserWin.Window_(title_re="\.* Firefox \.*")
</code></pre>
<p>If I use the expression: ".* Mozilla Firefox Start Page .*", it works. However, if I only use a partial text, it doesn't work. </p>
<p>What am I doing wrong here?</p>
| 0 | 2016-10-12T12:22:14Z | 39,998,728 | <p>See this excerpt from the pywinauto source code:</p>
<pre><code>title_regex = re.compile(title_re)
def _title_match(w):
t = handleprops.text(w)
if t is not None:
return title_regex.match(t)
return False
</code></pre>
<p>The <code>return title_regex.match(t)</code> line means that the regex is passed to the <a href="https://docs.python.org/2/library/re.html#re.match" rel="nofollow"><code>re.match</code></a> method, so, the regex pattern is anchored at the string start.</p>
<p>To allow a partial match, you need to start the pattern with <code>.*</code>: <code>".* Firefox "</code>.</p>
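<p>The anchoring difference is easy to demonstrate with plain <code>re</code>, using a made-up window title:</p>

```python
import re

title = "Mozilla Firefox Start Page - Mozilla Firefox"

print(bool(re.match(" Firefox ", title)))    # False: match() anchors at position 0
print(bool(re.search(" Firefox ", title)))   # True: search() scans the whole string
print(bool(re.match(".* Firefox ", title)))  # True: a leading .* un-anchors match()
```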
| 0 | 2016-10-12T12:32:47Z | [
"python",
"regex",
"pywinauto"
] |
Regular expression not working in pywinauto unless I give full text | 39,998,520 | <p>Here is the code snippet that I am using:</p>
<pre><code>browserWin = application.Application()
browserWin.Start(<FirefoxPath>)
# This starts the Firefox browser.
browserWin.Window_(title_re="\.* Firefox \.*")
</code></pre>
<p>If I use the expression: ".* Mozilla Firefox Start Page .*", it works. However, if I only use a partial text, it doesn't work. </p>
<p>What am I doing wrong here?</p>
| 0 | 2016-10-12T12:22:14Z | 40,001,284 | <p>Escaped <code>.</code> with "\" means real dot symbol should be at the start of the text. Just remove "\".</p>
| 2 | 2016-10-12T14:30:46Z | [
"python",
"regex",
"pywinauto"
] |
How to save all items to memcached without losing them? | 39,998,614 | <pre><code>In [11]: from django.core.cache import cache
In [12]: keys = []
In [13]: for i in range(1, 10000):
...: key = "Key%s" % i
...: value = ("Value%s" % i)*5000
...: cache.set(key, value, None)
...: keys.append(key)
...: # check lost keys
...: lost = 0
...: for k in keys:
...: if not cache.get(k):
...: lost += 1
...: if lost:
...: print "Lost %s in %s" % (lost, i)
</code></pre>
<p>I am using Django, memcached with python-memcached with below cache settings: </p>
<pre><code>CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
</code></pre>
<p>For the above program, I started losing caches from i=1437. Can you please tell me what to do so I can save all items to cache ?</p>
| 0 | 2016-10-12T12:26:29Z | 39,998,762 | <p>You can achieve it by increasing the memcached cache size. By default memcached caps its memory at 64 MB and evicts least-recently-used items once that limit is reached, which is why later keys push out earlier ones. Start it with a larger limit, e.g. <code>memcached -m 1024</code> for 1 GB. </p>
| 0 | 2016-10-12T12:34:52Z | [
"python",
"django",
"memcached"
] |
Response mess up when curl Django url | 39,998,758 | <pre><code>@csrf_exempt
def add_node(request, uid=None):
resp = {
'status': 0
}
return JsonResponse(resp)
</code></pre>
<p>Then I use <code>curl</code> to test it, which messed my terminal. But it works fine in browser.</p>
<p><img src="https://i.stack.imgur.com/ifQEM.png" alt="screenshot">
<img src="https://i.stack.imgur.com/vfFag.png" alt="screenshot"></p>
| 0 | 2016-10-12T12:34:38Z | 40,011,999 | <p>Turns out it is because of my proxy configuration. Whenever the <code>$http_proxy</code> variable is set, curl sends the request through the proxy and things go wrong; unsetting it (or calling curl with <code>--noproxy localhost</code>) fixes it.</p>
| 0 | 2016-10-13T03:50:06Z | [
"python",
"django",
"curl"
] |
regex - HTML tags with parameters | 39,998,776 | <pre><code>import re
text = "sometext <table var1=1 var2=2 var3=3> sometext"
tokenize = re.compile(r'(<\w+) ((\w+=\w+ )*) (\w+=\w+>)')
tokens = tokenize.search(text)
</code></pre>
<p>What I'm attempting to do here, in an exercise to better understand how to use regular expressions in Python, is write one to split HTML tags which have params into constituent parts so I can reformat these. tokens.groups() produces:</p>
<pre><code>('<table', 'var1=1 var2=2 ', 'var2=2 ', 'var3=3>')
</code></pre>
<p>but what I'm hoping to see is:</p>
<pre><code>('table' 'var1=1', 'var2=2', 'var3=3')
</code></pre>
<p>so I want the open, close '<', '>' to not feature, and first and second params (and additional ones up to the last) to be recognised as separate tokens. Where am I going wrong?</p>
<p>Thanks!</p>
| 2 | 2016-10-12T12:35:38Z | 39,999,157 | <p>I think a simpler regex plus a split is better for this case.</p>
<pre><code>import re
text = "sometext <table var1=1 var2=2 var3=3> sometext"
print(re.findall("<(.+)>", text)[0].split())
</code></pre>
<p>Returns <code>['table', 'var1=1', 'var2=2', 'var3=3']</code></p>
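<p>If you want the regex itself to isolate the tag name and the attribute pairs, note that a repeated group such as <code>(\w+=\w+ )*</code> only keeps its last repetition, which is why the original pattern loses parameters. A sketch that captures the whole attribute block and then runs <code>findall</code> over it:</p>

```python
import re

text = "sometext <table var1=1 var2=2 var3=3> sometext"

# group 1: tag name; group 2: everything between the name and '>'
m = re.search(r"<(\w+)((?:\s+\w+=\w+)*)>", text)
tokens = [m.group(1)] + re.findall(r"\w+=\w+", m.group(2))
print(tokens)  # ['table', 'var1=1', 'var2=2', 'var3=3']
```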
| 0 | 2016-10-12T12:55:17Z | [
"python",
"regex"
] |
Quickly build large dict in elegant manner | 39,998,790 | <p>I have a list with size about 30000: <code>['aa', 'bb', 'cc', 'dd', ...]</code>, from this list, I want to build a dict which maps element to index, so the result dict is <code>{'aa': 0, 'bb': 1, 'cc': 2, 'dd': 3, ...}</code>. Here comes my code:</p>
<pre><code>cnt = 0
mp = {}
for name in name_list:
mp[name] = cnt
cnt += 1
return mp
</code></pre>
<p>It seems my code is not concise and effective, so how to improve it?</p>
| 0 | 2016-10-12T12:36:30Z | 39,998,824 | <p>You can use <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a> to create the mapping between the list elements and their indices, then call <code>dict</code> on the reversed tuples:</p>
<pre><code>mp = dict(tup[::-1] for tup in enumerate(name_list))
</code></pre>
| 3 | 2016-10-12T12:38:31Z | [
"python"
] |
Quickly build large dict in elegant manner | 39,998,790 | <p>I have a list with size about 30000: <code>['aa', 'bb', 'cc', 'dd', ...]</code>, from this list, I want to build a dict which maps element to index, so the result dict is <code>{'aa': 0, 'bb': 1, 'cc': 2, 'dd': 3, ...}</code>. Here comes my code:</p>
<pre><code>cnt = 0
mp = {}
for name in name_list:
mp[name] = cnt
cnt += 1
return mp
</code></pre>
<p>It seems my code is not concise and effective, so how to improve it?</p>
| 0 | 2016-10-12T12:36:30Z | 39,998,835 | <p>The shortest is to use <a href="https://docs.python.org/2/library/functions.html#enumerate"><code>enumerate</code></a> and a dict comprehension, I guess:</p>
<pre><code>mp = {element: index for index, element in enumerate(name_list)}
</code></pre>
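<p>A quick check of the result; the comprehension makes a single O(n) pass over the list:</p>

```python
name_list = ["aa", "bb", "cc", "dd"]

mp = {element: index for index, element in enumerate(name_list)}
print(mp)  # {'aa': 0, 'bb': 1, 'cc': 2, 'dd': 3}
```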
| 5 | 2016-10-12T12:38:54Z | [
"python"
] |
Quickly build large dict in elegant manner | 39,998,790 | <p>I have a list with size about 30000: <code>['aa', 'bb', 'cc', 'dd', ...]</code>, from this list, I want to build a dict which maps element to index, so the result dict is <code>{'aa': 0, 'bb': 1, 'cc': 2, 'dd': 3, ...}</code>. Here comes my code:</p>
<pre><code>cnt = 0
mp = {}
for name in name_list:
mp[name] = cnt
cnt += 1
return mp
</code></pre>
<p>It seems my code is not concise and effective, so how to improve it?</p>
| 0 | 2016-10-12T12:36:30Z | 39,999,537 | <p>How about using the list index for each item? </p>
<pre><code>mp = {item: name_list.index(item) for item in name_list}
</code></pre>
| 1 | 2016-10-12T13:13:59Z | [
"python"
] |
pandas table subsets giving invalid type comparison error | 39,998,850 | <p>I am using pandas and want to select subsets of data and apply it to other columns.
e.g.</p>
<ul>
<li>if there is data in column A; & </li>
<li>if there is NO data in column B;</li>
<li>then, apply the data in column A to column D</li>
</ul>
<p>I have this working fine for now using <code>.isnull()</code> and <code>.notnull()</code>.
e.g. </p>
<pre><code>df = pd.DataFrame({'A' : pd.Series(np.random.randn(4)),
'B' : pd.Series(np.nan),
'C' : pd.Series(['yes','yes','no','maybe'])})
df['D']=''
df
Out[44]:
A B C D
0 0.516752 NaN yes
1 -0.513194 NaN yes
2 0.861617 NaN no
3 -0.026287 NaN maybe
# Now try the first conditional expression
df['D'][df['A'].notnull() & df['B'].isnull()] \
= df['A'][df['A'].notnull() & df['B'].isnull()]
df
Out[46]:
A B C D
0 0.516752 NaN yes 0.516752
1 -0.513194 NaN yes -0.513194
2 0.861617 NaN no 0.861617
3 -0.026287 NaN maybe -0.0262874
</code></pre>
<p>When one adds a third condition, to also check whether data in column C matches a particular string, we get the error:</p>
<pre><code>df['D'][df['A'].notnull() & df['B'].isnull() & df['C']=='yes'] \
= df['A'][df['A'].notnull() & df['B'].isnull() & df['C']=='yes']
File "C:\Anaconda2\Lib\site-packages\pandas\core\ops.py", line 763, in wrapper
res = na_op(values, other)
File "C:\Anaconda2\Lib\site-packages\pandas\core\ops.py", line 718, in na_op
raise TypeError("invalid type comparison")
TypeError: invalid type comparison
</code></pre>
<p>I have read that this occurs due to the different datatypes. And I can get it working if I change all the strings in column C for integers or booleans. We also know that string on its own would work, e.g. <code>df['A'][df['B']=='yes']</code> gives a boolean list.</p>
<p>So any ideas how/why this is not working when combining these datatypes in this conditional expression? What are the more pythonic ways to do what appears to be quite long-winded?</p>
<p>Thanks</p>
| 1 | 2016-10-12T12:39:55Z | 39,998,901 | <p>I think you need to add parentheses <code>()</code> around the conditions, since <code>&</code> binds more tightly than <code>==</code>; without them the expression is grouped as <code>(... & df['C']) == 'yes'</code>, which triggers the invalid type comparison. It is also better to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> for selecting with a boolean mask, which can be assigned to the variable <code>mask</code>:</p>
<pre><code>mask = (df['A'].notnull()) & (df['B'].isnull()) & (df['C']=='yes')
print (mask)
0 True
1 True
2 False
3 False
dtype: bool
df.ix[mask, 'D'] = df.ix[mask, 'A']
print (df)
A B C D
0 -0.681771 NaN yes -0.681771
1 -0.871787 NaN yes -0.871787
2 -0.805301 NaN no
3 1.264103 NaN maybe
</code></pre>
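<p>On current pandas, where <code>.ix</code> has been deprecated and removed, the same selection works with <code>.loc</code>. A sketch on data shaped like the question's (the <code>A</code> values are made up):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [0.5, -0.5, 0.9, -0.1],
                   "B": [np.nan] * 4,
                   "C": ["yes", "yes", "no", "maybe"]})
df["D"] = ""

# parentheses matter: & binds more tightly than ==
mask = df["A"].notnull() & df["B"].isnull() & (df["C"] == "yes")
df.loc[mask, "D"] = df.loc[mask, "A"]

print(df)  # rows 0 and 1 get D filled, rows 2 and 3 stay empty
```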
| 1 | 2016-10-12T12:42:31Z | [
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Read regexes from file and avoid or undo escaping | 39,998,980 | <p>I want to read regular expressions from a file, where each line contains a regex:</p>
<pre><code>lorem.*
dolor\S*
</code></pre>
<p>The following code is supposed to read each and append it to a list of regex strings:</p>
<pre><code>vocabulary=[]
with open(path, "r") as vocabularyFile:
for term in vocabularyFile:
term = term.rstrip()
vocabulary.append(term)
</code></pre>
<p>This code seems to escape the <code>\</code> special character in the file as <code>\\</code>. How can I either avoid escaping or unescape the string so that it can be worked with as if I wrote this?</p>
<pre><code>regex = r"dolor\S*"
</code></pre>
| -2 | 2016-10-12T12:47:06Z | 39,999,099 | <p>You are getting confused by <em>echoing the value</em>. The Python interpreter echoes values by printing the <code>repr()</code> function result, and this makes sure to escape any meta characters:</p>
<pre><code>>>> regex = r"dolor\S*"
>>> regex
'dolor\\S*'
</code></pre>
<p><code>regex</code> is still an 8 character string, not 9, and the single character at index 5 is a single backslash:</p>
<pre><code>>>> regex[4]
'r'
>>> regex[5]
'\\'
>>> regex[6]
'S'
</code></pre>
<p>Printing the string writes out all characters verbatim, so no escaping takes place:</p>
<pre><code>>>> print(regex)
dolor\S*
</code></pre>
<p>The same process is applied to the contents of containers, like a <code>list</code> or a <code>dict</code>:</p>
<pre><code>>>> container = [regex, 'foo\nbar']
>>> print(container)
['dolor\\S*', 'foo\nbar']
</code></pre>
<p>Note that I didn't echo there, I printed. <code>str(list_object)</code> produces the same output as <code>repr(list_object)</code> here.</p>
<p>If you were to print individual elements from the list, you get the same unescaped result again:</p>
<pre><code>>>> print(container[0])
dolor\S*
>>> print(container[1])
foo
bar
</code></pre>
<p>Note how the <code>\n</code> in the second element was written out as a newline now. It is for <em>that reason</em> that containers use <code>repr()</code> for contents; to make otherwise hard-to-detect or non-printable data visible.</p>
<p>In other words, your strings do <em>not</em> contain escaped strings here.</p>
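As a quick runnable check (plain stdlib `re`), a pattern read from the file behaves exactly like one typed as a raw-string literal; the file line is simulated here as an in-memory string:

```python
import re

# Simulate one line read from the vocabulary file; on disk it is: dolor\S*
line_from_file = "dolor\\S*\n"
pattern = line_from_file.rstrip()

# The same pattern written directly as a raw-string literal.
literal = r"dolor\S*"

print(pattern == literal)   # True: both are the same 8-character string
print(re.search(pattern, "lorem dolores").group())  # dolores
```

So no unescaping step is needed before passing the lines to `re.compile` or `re.search`.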
| 1 | 2016-10-12T12:53:09Z | [
"python",
"regex",
"python-3.x",
"escaping"
] |
Django abstract models - how to implement specific access in abstract view method? | 39,999,112 | <p>Let's say I have an abstract model in Django, with two models extending off that.</p>
<p>Inside a Django Rest Framework generic view, how can I control creation of one of the two implementing models?</p>
<p>My solution is below:</p>
<pre><code> from enum import Enum
from rest_framework.views import APIView
class BazType(Enum):
FOO = 1
BAR = 2
class AbstractView(APIView):
    def __init__(self):
#Init code here
def _internal_method(self, django_model, this_type = BazType.FOO):
if this_type == BazType.FOO:
record, created = ConcreteModelOne.objects.get_or_create(some_field = django_model)
        elif this_type == BazType.BAR:
record, created = ConcreteModelTwo.objects.get_or_create(some_field = django_model)
</code></pre>
<p>It works, but is there a way to get rid of the if/else block? In other words, is there a way, from a subclass of <code>AbstractView</code>, to pass in some identifier for which model is required for the <code>get_or_create</code> method call?</p>
| 0 | 2016-10-12T12:53:33Z | 39,999,424 | <p>You can create a mapping/dictionary that maps each <code>Enum</code> member to a model class, and use it in your <code>_internal_method</code> to fetch the model class for the given <code>Enum</code> member. Keying the dict by the members themselves (rather than their integer values) lets the lookup work directly with the <code>this_type</code> argument:</p>
<pre><code>class AbstractView(APIView):
    models_map = {BazType.FOO: ConcreteModelOne, BazType.BAR: ConcreteModelTwo}
def __init__(self):
#Init code here
def _internal_method(self, django_model, this_type=BazType.FOO):
record, created = self.models_map[this_type].objects.get_or_create(some_field = django_model)
</code></pre>
| 0 | 2016-10-12T13:08:15Z | [
"python",
"django"
] |
Referencing relation's relations in Django Serializer | 39,999,173 | <p>Let's say I have some models:</p>
<pre><code>class A(models.Model):
...
class B(models.Model):
my_reference_to_a = models.ForeignKey(A)
b_field_1 = ...
b_field_2 = ...
class C(models.Model):
my_reference_to_b = models.ForeignKey(B)
c_field_1 = ...
...
</code></pre>
<p>In my serializer for <code>C</code>, I want to include all of the fields in <code>C</code>, all the fields in <code>B</code>, as well as the reference to <code>A</code> in <code>B</code> (but not the reference to <code>B</code> in <code>C</code>), so the JSON API output would be something like this:</p>
<pre><code>{
"data": [{
"type": "C",
"id": "1",
"attributes": {
"b_field_1": "...",
"b_field_2": "...",
"c_field_1": "..."
},
"relationships": {
"a": {
"data": {
"type": "A",
"id": "1"
}
}
}
}],
...
}
</code></pre>
<p>How would I go about this? I've already tried doing something like this inside my serializer for <code>C</code>:</p>
<pre><code>A = ASerializer(source='my_reference_to_b.my_reference_to_a')
</code></pre>
<p>But that doesn't work, as DRF doesn't seem to support dotted paths for sources. I've also tried supplying a method that returns the proper model (the model is valid inside the method) as the source, but that outputs the reference in the JSON as:</p>
<pre><code>"a": {
"data": null
}
</code></pre>
<p>On my <code>A</code> model, I also have a reference to another model, <code>D</code>, that is not explicitly stated in <code>A</code>, but is instead defined in <code>D</code> as a OneToMany relationship (Many <code>D</code> models to one <code>A</code> model) with a resource_name on the ForeignKey declared in <code>D</code>, and trying to reference this in <code>C</code> to include that relationship in the JSON doesn't work, either. I get this error (trying to reference it by doing <code>D = DSerializer(source='B.D')</code>):</p>
<pre><code>'RelatedManager' object has no attribute 'B'
</code></pre>
<p>Any help would be greatly appreciated.</p>
| 0 | 2016-10-12T12:55:43Z | 40,027,539 | <p>I figured it out. Just answering my own question in case anyone lands on this page and they need help.</p>
<p>You need to use the SerializerMethodResourceRelatedField from the Django Rest Framework JSON API. I had tried the regular ResourceRelatedField without it working, looking through the source code showed me that ResourceRelatedField doesn't support dotted paths. Instead, use SerializerMethodResourceRelatedField with a source pointing to a method that returns the desired relation.</p>
| 0 | 2016-10-13T17:36:56Z | [
"python",
"django",
"ember.js",
"django-rest-framework",
"json-api"
] |
Surface plot with multiple polynomial fits | 39,999,239 | <p>what I'm asking may not be possible, but I'm hoping you guys can help me out.</p>
<p>So I have two 2D arrays, f1(x1) = y1, and f2(x2) = y2. I want to make a surface plot of the ratio of these, so the z dimension is (f2(x2)/f1(x1)). Unfortunately I'm coming up against a wall from whatever direction I approach the problem.</p>
<p>My main problem is that the ranges of each array is different, x1 goes from 300 to 50,000, and x2 goes from 300, 1200. Now I'm happy with assuming that f2(x2) = f2(1200) for all x2 > 1200. But this bound means it's impossible for me to fit a polynomial this data in any realistic way (my first set of data is nicely reproduced by a 5th order polynomial, and my second set of data is best fit with a 1st order polynomial). Is there an alternative way that I can fit a function to (x2,y2) so it takes the boundary values for all points outside of the boundaries?</p>
<p>My extremely wrong attempt looks like this,</p>
<pre><code># x1 is an array from 300 to 50,000 in steps of 50
# x2 is an array from 300 to 1,150 in steps of 50
f1_fit = np.poly1d(np.polyfit(x1, y1, 5))
f2_fit = np.poly1d(np.polyfit(x2, y2, 1))
X, Y = np.meshgrid(x1, x2)
Z = (f2_fit(x2) / f1_fit(x1))
</code></pre>
<p>Funny how seemingly innocuous problems can be a right pain in the a*se. :D</p>
<p>Edit : Here is amount of toy data,</p>
<pre><code>x1 = x2 = [ 300. 350. 400. 449. 499. 548. 598. 648. 698. 748.
798. 848. 897. 947. 997. 1047. 1097. 1147. 1196. 1246.
1296. 1346. 1396. 1446. 1496. 1546. 1595. 1645. 1695. 1745.]
y1 = [ 351. 413. 476. 561. 620. 678. 734. 789. 841. 891.
938. 982. 1023. 1062. 1099. 1133. 1165. 1195. 1223. 1250.
1274. 1298. 1320. 1340. 1360. 1378. 1395. 1411. 1426. 1441.]
y2 = [ 80. 75. 70. 65. 62. 58. 58. 52. 48. 46. 44. 41. 38. 35. 32.
32. 29. 30. 30. 30. 30. 30. 30. 30. 30. 30. 30. 30. 30. 30.]
</code></pre>
| 0 | 2016-10-12T12:58:48Z | 40,008,961 | <p>So I managed to solve my problem. I did some preprocessing on my data as explained above, setting x1 = x2 and extrapolating the edge values for f(x2).</p>
<pre><code>import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
data1 = np.loadtxt('data1.dat')
data2 = np.loadtxt('data2.dat')
X = []
Y = []
Z = []
for i in data1:
for j in data2:
X.append(i[0])
Y.append(j[0])
Z.append(i[1]/j[1])
x_mesh,y_mesh, = np.meshgrid(np.linspace(300,50000,200), np.linspace(300,50000,200))
z_mesh = interp.griddata((X,Y),Z,(x_mesh,y_mesh),method='linear')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(x_mesh,y_mesh,z_mesh,cstride=10,rstride=10,cmap='OrRd_r')
plt.show()
</code></pre>
| 0 | 2016-10-12T21:50:36Z | [
"python",
"matplotlib",
"curve-fitting",
"surface"
] |
Haystack with Whoosh not returning any results | 39,999,363 | <p>I've installed Django-Haystack and Whoosh and set it all up following the haystack documentation, but no matter what I search for I always get "No results found." on the search page, despite the index apparently being OK.</p>
<p>When running "manage.py rebuild_index" it correctly states:</p>
<pre><code>Indexing 12 assets
indexed 1 - 12 of 12 (worker PID: 1234).
</code></pre>
<p>And when running this in a Django shell:</p>
<pre><code>from whoosh.index import open_dir
ix = open_dir('mysite/whoosh_index')
from pprint import pprint
pprint(list(ix.searcher().documents()))
</code></pre>
<p>It correctly returns all details of the 12 indexed assets, so it looks like the index is fine, but no matter what I search for I cannot get any results, only "No results found"! </p>
<p>I have followed the advice in every other similar question on StackOverflow (and everywhere else that popped up on Google) to no avail.</p>
<p>Does anyone have any suggestions?</p>
<p>Files used (edited for brevity):</p>
<p>settings.py</p>
<pre><code>INSTALLED_APPS = [
....
'haystack',
'assetregister',
....
]
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.whoosh_backend.WhooshEngine',
'PATH': os.path.join(os.path.dirname(__file__), 'whoosh_index'),
},
}
</code></pre>
<p>models.py</p>
<pre><code>class Asset(models.Model):
asset_id = models.AutoField(primary_key=True)
asset_description = models.CharField(max_length=200)
asset_details = models.TextField(blank=True)
</code></pre>
<p>search_indexes.py</p>
<pre><code>from haystack import indexes
from .models import Asset
class AssetIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True)
asset_description = indexes.CharField(model_attr='asset_description')
def get_model(self):
return Asset
def no_query_found(self):
# The .exclude is a hack from another stackoverflow question that prevents it returning an empty queryset
return self.searchqueryset.exclude(content='foo')
def index_queryset(self, using=None):
return self.get_model().objects
</code></pre>
<p>/templates/search/indexes/assetregister/asset_text.txt</p>
<pre><code>{{ object.asset_description }}
{{ object.asset_details }}
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns = [
url(r'^search/', include('haystack.urls')),
]
</code></pre>
<p>search.html</p>
<pre><code><h2>Search</h2>
<form method="get" action=".">
<table>
{{ form.as_table }}
<tr>
<td>&nbsp;</td>
<td>
<input type="submit" value="Search">
</td>
</tr>
</table>
{% if query %}
<h3>Results</h3>
{% for result in page.object_list %}
<p>
<a href="{{ result.object.get_absolute_url }}">{{ result.object.asset_description }}</a>
</p>
{% empty %}
<p>No results found.</p>
{% endfor %}
{% else %}
{# Show some example queries to run, maybe query syntax, something else? #}
{% endif %}
</form>
</code></pre>
<p>And just in case it's useful, my "pip freeze":</p>
<pre><code>Django==1.9.3
django-cleanup==0.4.2
django-haystack==2.5.0
django-pyodbc-azure==1.9.3.0
Pillow==3.2.0
pyodbc==3.0.10
Whoosh==2.7.4
</code></pre>
| 0 | 2016-10-12T13:05:01Z | 40,018,850 | <p>For the benefit of any future people having this same problem, I found an unconventional solution...</p>
<p>So I double checked everything according to the haystack documentation and it all looked OK. I found out about a pip package called "django-haystackbrowser" that, once installed correctly, allows you to view your index through the django admin interface. </p>
<p>For some reason once I had viewed the index in the admin interface (and confirmed everything was already there as it should be) and then restarted the server using</p>
<pre><code>python manage.py runserver
</code></pre>
<p>I finally started getting search results back! </p>
<p>No idea what was causing the problem, and certainly no idea how just viewing the index using that package fixed it, but it now seems to be returning results as it should! </p>
| 0 | 2016-10-13T10:46:45Z | [
"python",
"django",
"python-3.x",
"django-haystack",
"whoosh"
] |
TypeError: 'in <string>' requires string as left operand, not QueryDict | 39,999,567 | <p>While updating the edited blog i am getting this error: in edit_article function
Here is my function,</p>
<pre><code>def edit_article(request, id):
session_start(request)
if Article.exists(id):
article = Article.getByName(id)
else:
article = Article(id)
if request.method == 'POST' and request.POST in "content":
if has_article_access(request, article):
article.body = request.POST['content']
article.save()
if 'counter_edit' in request.session:
request.session['counter_edit'] += 1
else:
request.session['counter_edit'] = 1
delete_article_locked(request, article)
return HttpResponseRedirect('/myblog/?edited')
else:
return HttpResponseRedirect('/myblog/?locked')
else:
if has_article_access(request, article):
start_article_locked(request, article)
else:
return HttpResponseRedirect('/myblog/?locked')
return render_to_response("edit.html",
{
'name': article.title,
'content': article.body,
'id':article.id
},
context_instance=RequestContext(request)
)
</code></pre>
<p>The error occurs on the 7th line.</p>
| 0 | 2016-10-12T13:14:58Z | 39,999,731 | <p>If you want to check if <code>request.POST</code> has the key <code>"content"</code>:</p>
<pre><code>if request.method == 'POST' and "content" in request.POST:
</code></pre>
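The direction matters because `in` applied to a string requires a string on the left-hand side, while `in` applied to a dict-like object (such as Django's `QueryDict`) performs a key lookup. A quick sketch with a plain dict standing in for `request.POST`:

```python
post = {"content": "hello"}  # stand-in for request.POST

print("content" in post)  # True: key membership test on a mapping

try:
    post in "content"  # a mapping is not a valid left operand for 'in <string>'
except TypeError as e:
    print(e)
```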
| 1 | 2016-10-12T13:21:13Z | [
"python",
"python-3.x"
] |
Extracting multiple lists out of a nested dictionary | 39,999,581 | <p>I am new to Python. I want to extract multiple lists out of the nested dictionary. I did follow this link (<a href="http://stackoverflow.com/questions/28131446/get-nested-arrays-out-of-a-dictionary">Get nested arrays out of a dictionary</a>) didn't help me much.</p>
<p>If I have a dictionary, say:</p>
<pre><code>{"alpha": {2: {1: 1.1, 2: 4.1}, 3: {1: 9.1, 3: 4.1, 6: 5.1},
5: {1: 9.2, 3: 4.4, 6: 5.4}, 9: {1: 9.0, 3: 4.0, 6: 5.5}},
"beta": {2: {1: 4.0, 2: 7.9}, 3: {1: 24, 3: 89, 6: 98} ,
5: {1: 9, 3: 4, 6: 5}, 9: {1: 9.2, 3: 4.9, 6: 5.0}}}
</code></pre>
<p>How do I extract all these as individual lists say <code>(alpha,beta,..), (2,3,5,9), (1,2,4,9), (1.1,4.1)</code> etc.</p>
<p>When I tried I could get only the list of (alpha,beta,..) and list of values associated with alpha and beta. The list of values is again a dictionary as it is inside a list. I can't further do dict.values() as the previous operation gave me a list. Therefore, Python throws an error. How do I make list of all these values and keys individually? Like I want to make a list of decimals and the keys associated with that.</p>
| -2 | 2016-10-12T13:15:30Z | 39,999,787 | <p>You can get each 'layer' by accessing <code>dict.keys()</code> or get the layer below by accessing <code>dict.values()</code>. If you want to dive one level deeper you just iterate over <code>parent.values()</code> and get <code>dict.keys()</code> on each element. The last layer finally is just <code>dict.values()</code>.</p>
<pre><code>print data.keys() # top level keys
>>> ['alpha', 'beta']
print [x.keys() for x in data.values()] # second level keys
>>> [[9, 2, 3, 5], [9, 2, 3, 5]]
print [y.keys() for x in data.values() for y in x.values()] # third level keys
>>> [[1, 3, 6], [1, 2], [1, 3, 6], [1, 3, 6], [1, 3, 6], [1, 2], [1, 3, 6], [1, 3, 6]]
print [y.values() for x in data.values() for y in x.values()] # third level values
>>> [[9.0, 4.0, 5.5], [1.1, 4.1], [9.1, 4.1, 5.1], [9.2, 4.4, 5.4], [9.2, 4.9, 5.0], [4.0, 7.9], [24, 89, 98], [9, 4, 5]]
</code></pre>
<p>Note that dicts are unordered by nature.</p>
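Note the snippets above are Python 2; in Python 3, `print` is a function and `dict.keys()`/`dict.values()` return view objects rather than lists, so wrap them in `list()` (or `sorted()`) if you need actual lists. The same layer-by-layer idea in Python 3, using a small hypothetical subset of the question's dictionary:

```python
data = {"alpha": {2: {1: 1.1, 2: 4.1}},
        "beta": {2: {1: 4.0, 2: 7.9}}}

top = sorted(data.keys())  # top-level keys, sorted since dict order isn't guaranteed here
second = [sorted(x.keys()) for x in data.values()]  # second-level keys
leaves = sorted(y for x in data.values()           # third-level (decimal) values
                for inner in x.values()
                for y in inner.values())

print(top)     # ['alpha', 'beta']
print(leaves)  # [1.1, 4.0, 4.1, 7.9]
```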
| 0 | 2016-10-12T13:23:50Z | [
"python",
"list",
"dictionary"
] |
Create 4D upper diagonal array from 3D | 39,999,590 | <p>Let's say that I have a <code>(x, y, z)</code> sized matrix. Now, I wish to create a new matrix of dimension <code>(x, y, i, i)</code>, where the <code>(i, i)</code> matrix is upper diagonal and constructed from the values on the <code>z</code>-dimension. Is there some easy way of doing this in <code>numpy</code> without using more than 1 for-loop (looping over x)? Thanks.</p>
<p><strong>EDIT</strong></p>
<pre><code>original = np.array([
[
[0, 1, 3],
[4, 5, 6]
],
[
[7, 8, 9],
[3, 2, 1]
],
])
new = np.array([
[
[
[0, 1],
[0, 3]
],
[
[4, 5],
[0, 6]
]
],
[
[
[7, 8],
[0, 9]
],
[
[3, 2],
[0, 1]
]
]
])
</code></pre>
<p>So, using the above we see that</p>
<pre><code>original[0, 0, :] = [0 1 3]
new[0, 0, :, :] = [[0 1]
[0 3]]
</code></pre>
| 1 | 2016-10-12T13:15:37Z | 39,999,997 | <p>Here's an approach using <code>boolean-indexing</code> -</p>
<pre><code>n = 2 # This would depend on a.shape[-1]
out = np.zeros(a.shape[:2] + (n,n,),dtype=a.dtype)
out[:,:,np.arange(n)[:,None] <= np.arange(n)] = a
</code></pre>
<p>Sample run -</p>
<pre><code>In [247]: a
Out[247]:
array([[[0, 1, 3],
[4, 5, 6]],
[[7, 8, 9],
[3, 2, 1]]])
In [248]: out
Out[248]:
array([[[[0, 1],
[0, 3]],
[[4, 5],
[0, 6]]],
[[[7, 8],
[0, 9]],
[[3, 2],
[0, 1]]]])
</code></pre>
<p>Another approach could be suggested using <code>subscripted-indexing</code> to replace the last step -</p>
<pre><code>r,c = np.triu_indices(n)
out[:,:,r,c] = a
</code></pre>
<p><strong>Note :</strong> As stated earlier, <code>n</code> would depend on <code>a.shape[-1]</code>. Here, we had <code>a.shape[-1]</code> as <code>3</code>, so <code>n</code> was <code>2</code>. If <code>a.shape[-1]</code> were <code>6</code>, <code>n</code> would be <code>3</code> and so on. The relationship is : <code>(n*(n+1))//2 == a.shape[-1]</code>.</p>
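The placement logic itself is independent of NumPy. As an illustration of the same idea, here is a minimal pure-Python sketch that unpacks one flat row of length n(n+1)/2 into an n×n upper-triangular nested list:

```python
def to_upper_triangular(flat):
    # Recover n from len(flat) == n*(n+1)//2
    n = 1
    while n * (n + 1) // 2 < len(flat):
        n += 1

    out = [[0] * n for _ in range(n)]
    it = iter(flat)
    for r in range(n):
        for c in range(r, n):  # upper triangle: column index >= row index
            out[r][c] = next(it)
    return out

print(to_upper_triangular([0, 1, 3]))  # [[0, 1], [0, 3]]
```

The boolean-indexing solution above does exactly this placement, but vectorized across the first two axes at once.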
| 1 | 2016-10-12T13:32:25Z | [
"python",
"numpy",
"vectorization"
] |
Approximate pattern matching? | 39,999,637 | <p>I am trying to write code for Approximate Pattern Matching which is as below:</p>
<pre><code>def HammingDistance(p, q):
d = 0
for p, q in zip(p, q): # your code here
if p!= q:
d += 1
return d
Pattern = "ATTCTGGA"
Text = "CGCCCGAATCCAGAACGCATTCCCATATTTCGGGACCACTGGCCTCCACGGTACGGACGTCAATCAAAT"
d = 3
def ApproximatePatternMatching(Pattern, Text, d):
positions = [] # initializing list of positions
for i in range(len(Text) - len(Pattern)+1):
if Pattern == Text[i:i+len(Pattern)]:
positions.append(i)# your code here
return positions
print (ApproximatePatternMatching(Pattern, Text, d))
</code></pre>
<p>I keep getting the following error:
Failed test #3. You may be failing to account for patterns starting at the first index of text.</p>
<p>Test Dataset:</p>
<pre><code>GAGCGCTGG
GAGCGCTGGGTTAACTCGCTACTTCCCGACGAGCGCTGTGGCGCAAATTGGCGATGAAACTGCAGAGAGAACTGGTCATCCAACTGAATTCTCCCCGCTATCGCATTTTGATGCGCGCCGCGTCGATT
2
</code></pre>
<p>Your output:</p>
<pre><code>['[]', '0']
</code></pre>
<p>Correct output:</p>
<pre><code>['0', '30', '66']
</code></pre>
<p>I cannot figure out what I am doing wrong; I am just learning Python and don't have much programming experience. Can anyone help?</p>
| 1 | 2016-10-12T13:17:50Z | 40,000,120 | <p>I'm unsure why you're getting an empty list as one of your outputs - when I run your code above I only get [0] as the print out.</p>
<p>Specifically, your code at present only checks for an exact character substring match, without using the hamming distance definition you also included. </p>
<p>The following should return the result you expect:</p>
<pre><code>Pattern = "GAGCGCTGG"
Text = "GAGCGCTGGGTTAACTCGCTACTTCCCGACGAGCGCTGTGGCGCAAATTGGCGATGAAACTGCAGAGAGAACTGGTCATCCAACTGAATTCTCCCCGCTATCGCATTTTGATGCGCGCCGCGTCGATT"
d = 3
def HammingDistance(p, q):
d = 0
for p, q in zip(p, q): # your code here
if p!= q:
d += 1
return d
def ApproximatePatternMatching(Pattern, Text, d):
positions = [] # initializing list of positions
for i in range(len(Text) - len(Pattern)+1):
# and using distance < d, rather than exact matching
if HammingDistance(Pattern, Text[i:i+len(Pattern)]) < d:
positions.append(i)
return positions
print (ApproximatePatternMatching(Pattern, Text, d))
</code></pre>
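One caveat: the usual convention for this problem is Hamming distance *at most* d, i.e. `<= d`. The code above only produces the expected result because it hard-codes `d = 3` with a strict `< d`, which is equivalent to `<= 2` for the test dataset's `d = 2`. A self-contained sketch using `<= d` with the dataset's own d reproduces the expected positions:

```python
def hamming(p, q):
    # Number of positions at which p and q differ.
    return sum(a != b for a, b in zip(p, q))

def approximate_matches(pattern, text, d):
    k = len(pattern)
    return [i for i in range(len(text) - k + 1)
            if hamming(pattern, text[i:i + k]) <= d]

pattern = "GAGCGCTGG"
text = "GAGCGCTGGGTTAACTCGCTACTTCCCGACGAGCGCTGTGGCGCAAATTGGCGATGAAACTGCAGAGAGAACTGGTCATCCAACTGAATTCTCCCCGCTATCGCATTTTGATGCGCGCCGCGTCGATT"

print(approximate_matches(pattern, text, 2))  # [0, 30, 66]
```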
| 1 | 2016-10-12T13:37:28Z | [
"python"
] |