title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
I want to split a sentence into words and get it to display vertically | 40,027,728 | <p>This is what I have so far.</p>
<pre><code>sentence = input("Enter a sentence".lower())
sentence = sentence.split()
print (sentence)
</code></pre>
<p>Current output:</p>
<pre><code>enter a sentence hi my name is bob
['hi', 'my', 'name', 'is', 'bob']
</code></pre>
<p>Desired output (without those big spaces)</p>
<pre><code>hi
my
name
is
bob
</code></pre>
<p>I think it involves <strong>for word in sentence</strong>, but I'm not sure.</p>
<p>Thanks!</p>
| -4 | 2016-10-13T17:48:47Z | 40,027,744 | <p>You can just do:</p>
<pre><code>for word in sentence:
print(word)
</code></pre>
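For completeness (this example is my addition, not part of the original answer), here is the whole flow as a small function. Note that in the question's code the <code>.lower()</code> is applied to the prompt string, not to the user's input:

```python
def words_vertical(text):
    # Lowercase, split on whitespace, and join the words with newlines
    return "\n".join(text.lower().split())

print(words_vertical("Hi my name is Bob"))
```

Hooking it up to user input would then look like <code>print(words_vertical(input('Enter a sentence: ')))</code>.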
| 6 | 2016-10-13T17:49:55Z | [
"python",
"python-3.x"
] |
I want to split a sentence into words and get it to display vertically | 40,027,728 | <p>This is what I have so far.</p>
<pre><code>sentence = input("Enter a sentence".lower())
sentence = sentence.split()
print (sentence)
</code></pre>
<p>Current output:</p>
<pre><code>enter a sentence hi my name is bob
['hi', 'my', 'name', 'is', 'bob']
</code></pre>
<p>Desired output (without those big spaces)</p>
<pre><code>hi
my
name
is
bob
</code></pre>
<p>I think it involves <strong>for word in sentence</strong>, but I'm not sure.</p>
<p>Thanks!</p>
| -4 | 2016-10-13T17:48:47Z | 40,027,779 | <p>You can specify the <code>end</code> argument for <code>print</code> to get the spacing you desire from a single print statement. The default is a single newline: <code>'\n'</code></p>
<pre><code>for word in sentence:
print(word, end="\n\n")
</code></pre>
<p>Edit: oops, missed the <code>out</code> in <code>without</code></p>
<blockquote>
<p>Desired output (<strong>without</strong> those big spaces)</p>
</blockquote>
<p>All the same, you can specify any end you want in Python3's print function. If you wanted to print it out as if it were a sentence, you could make the end character a space. </p>
<p><strong>Alternatively, you can use the <code>join</code> method of a string to join elements of an iterable to a string.</strong></p>
<pre><code>print("\n".join(sentence))
#or more explicitly:
print("\n".join(word for word in sentence))
</code></pre>
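A further alternative (my addition, not in the original answer): Python 3's <code>print</code> can unpack the list directly, with <code>sep</code> controlling what goes between the words:

```python
words = ["hi", "my", "name", "is", "bob"]
# Equivalent to print("\n".join(words)): each word on its own line
print(*words, sep="\n")
```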
| 2 | 2016-10-13T17:51:56Z | [
"python",
"python-3.x"
] |
I want to split a sentence into words and get it to display vertically | 40,027,728 | <p>This is what I have so far.</p>
<pre><code>sentence = input("Enter a sentence".lower())
sentence = sentence.split()
print (sentence)
</code></pre>
<p>Current output:</p>
<pre><code>enter a sentence hi my name is bob
['hi', 'my', 'name', 'is', 'bob']
</code></pre>
<p>Desired output (without those big spaces)</p>
<pre><code>hi
my
name
is
bob
</code></pre>
<p>I think it involves <strong>for word in sentence</strong>, but I'm not sure.</p>
<p>Thanks!</p>
| -4 | 2016-10-13T17:48:47Z | 40,028,075 | <pre><code>for i=0;i < sentence.length;i++
print sentence[i];
if i != sentence.length - 1
println;
</code></pre>
| -2 | 2016-10-13T18:09:38Z | [
"python",
"python-3.x"
] |
Concatenate two 2 dimensional lists into a new list | 40,027,837 | <p>DISCLAIMER: I am new to Python</p>
<p>I would like to create a concatenated 2-D list in Python by combining 2 existing 2-D lists. I start with 2 lists:</p>
<pre><code>listA = [[a, b, c], [1, 2, 3]]
listB = [[d, e, f], [4, 5, 6]]
</code></pre>
<p>and I want to make a new list (while preserving listA and listB):</p>
<pre><code>listC = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>If I try to add them as with 1-dimensional lists, I get:</p>
<pre><code>listA + listB
result = [[a, b, c], [1, 2, 3], [d, e, f], [4, 5, 6]]
</code></pre>
<p>I have also tried:</p>
<pre><code>listC = listA
listC[0] += listB[0]
listC[1] += listB[1]
# This may be giving me the result I want, but it corrupts listA:
Before: listA = [[a, b, c], [1, 2, 3]]
After: listA = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>What is the right way to make a new list of the data I want?</p>
<p>I could also work with a tuple:</p>
<pre><code>listC = [(a, 1), (b, 2), (c, 3), (d, 4), (e, 5), (f, 6)]
</code></pre>
<p>But I don't know the method for that either.</p>
<p>I am currently using Python 2.7 (raspberry pi running raspbian Jessie), but Python 3.4 is available if necessary.</p>
 | 0 | 2016-10-13T17:56:12Z | 40,027,879 | <p>Create a new list, e.g. with a list comprehension:</p>
<pre><code>listC = [a+b for a,b in zip(listA, listB)]
</code></pre>
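The tuple form mentioned at the end of the question (pairing corresponding letters and numbers) also falls out of <code>zip</code>. A sketch added for illustration — string elements stand in for the bare names <code>a</code>..<code>f</code>, which would otherwise be undefined:

```python
listA = [["a", "b", "c"], [1, 2, 3]]
listB = [["d", "e", "f"], [4, 5, 6]]

# Concatenate the corresponding inner lists, then pair them element-wise
listC = [a + b for a, b in zip(listA, listB)]
pairs = list(zip(listC[0], listC[1]))
print(pairs)  # [('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5), ('f', 6)]
```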
| 1 | 2016-10-13T17:58:16Z | [
"python",
"list"
] |
Concatenate two 2 dimensional lists into a new list | 40,027,837 | <p>DISCLAIMER: I am new to Python</p>
<p>I would like to create a concatenated 2-D list in Python by combining 2 existing 2-D lists. I start with 2 lists:</p>
<pre><code>listA = [[a, b, c], [1, 2, 3]]
listB = [[d, e, f], [4, 5, 6]]
</code></pre>
<p>and I want to make a new list (while preserving listA and listB):</p>
<pre><code>listC = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>If I try to add them as with 1-dimensional lists, I get:</p>
<pre><code>listA + listB
result = [[a, b, c], [1, 2, 3], [d, e, f], [4, 5, 6]]
</code></pre>
<p>I have also tried:</p>
<pre><code>listC = listA
listC[0] += listB[0]
listC[1] += listB[1]
# This may be giving me the result I want, but it corrupts listA:
Before: listA = [[a, b, c], [1, 2, 3]]
After: listA = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>What is the right way to make a new list of the data I want?</p>
<p>I could also work with a tuple:</p>
<pre><code>listC = [(a, 1), (b, 2), (c, 3), (d, 4), (e, 5), (f, 6)]
</code></pre>
<p>But I don't know the method for that either.</p>
<p>I am currently using Python 2.7 (raspberry pi running raspbian Jessie), but Python 3.4 is available if necessary.</p>
| 0 | 2016-10-13T17:56:12Z | 40,027,924 | <p>There are a couple of ways:</p>
<pre><code>listC = [listA[0] + listB[0], listA[1] + listB[1]]
listC = [x + y for x, y in zip(listA, listB)]
</code></pre>
<p>Are probably the two simplest</p>
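As to why the question's <code>listC = listA</code> attempt "corrupts" <code>listA</code>: assignment only binds a second name to the same list object, so mutating <code>listC</code> mutates <code>listA</code> too. An explicit deep copy avoids that (my addition, again with string elements for illustration):

```python
import copy

listA = [["a", "b", "c"], [1, 2, 3]]
listC = copy.deepcopy(listA)   # independent copy, inner lists included
listC[0] += ["d", "e", "f"]
listC[1] += [4, 5, 6]
print(listA)  # [['a', 'b', 'c'], [1, 2, 3]] -- unchanged
```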
| 2 | 2016-10-13T18:00:39Z | [
"python",
"list"
] |
Concatenate two 2 dimensional lists into a new list | 40,027,837 | <p>DISCLAIMER: I am new to Python</p>
<p>I would like to create a concatenated 2-D list in Python by combining 2 existing 2-D lists. I start with 2 lists:</p>
<pre><code>listA = [[a, b, c], [1, 2, 3]]
listB = [[d, e, f], [4, 5, 6]]
</code></pre>
<p>and I want to make a new list (while preserving listA and listB):</p>
<pre><code>listC = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>If I try to add them as with 1-dimensional lists, I get:</p>
<pre><code>listA + listB
result = [[a, b, c], [1, 2, 3], [d, e, f], [4, 5, 6]]
</code></pre>
<p>I have also tried:</p>
<pre><code>listC = listA
listC[0] += listB[0]
listC[1] += listB[1]
# This may be giving me the result I want, but it corrupts listA:
Before: listA = [[a, b, c], [1, 2, 3]]
After: listA = [[a, b, c, d, e, f], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>What is the right way to make a new list of the data I want?</p>
<p>I could also work with a tuple:</p>
<pre><code>listC = [(a, 1), (b, 2), (c, 3), (d, 4), (e, 5), (f, 6)]
</code></pre>
<p>But I don't know the method for that either.</p>
<p>I am currently using Python 2.7 (raspberry pi running raspbian Jessie), but Python 3.4 is available if necessary.</p>
 | 0 | 2016-10-13T17:56:12Z | 40,027,929 | <p>Here is a functional approach, if you want to learn more:</p>
<pre><code>In [13]: from operator import add
In [14]: from itertools import starmap
In [15]: list(starmap(add, zip(listA, listB)))
Out[15]: [['a', 'b', 'c', 'd', 'e', 'f'], [1, 2, 3, 4, 5, 6]]
</code></pre>
<p>Note that <code>starmap</code> returns an iterator, so if you don't need the result as a list (e.g. you just want to iterate over it), you shouldn't use <code>list()</code> here.</p>
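For copy-paste outside IPython, the same approach as a plain script (with string elements standing in for the undefined names):

```python
from operator import add
from itertools import starmap

listA = [["a", "b", "c"], [1, 2, 3]]
listB = [["d", "e", "f"], [4, 5, 6]]

# zip pairs the inner lists; starmap applies operator.add to each pair
result = list(starmap(add, zip(listA, listB)))
print(result)  # [['a', 'b', 'c', 'd', 'e', 'f'], [1, 2, 3, 4, 5, 6]]
```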
| 1 | 2016-10-13T18:01:39Z | [
"python",
"list"
] |
Calling Python Django File in Android Java Code | 40,027,851 | <p>I'm developing an Android app in Java. I was using a PHP file to interact with the database, but I want to use Python/Django instead of PHP. Could I call the Python file that interacts with the database in the same way that I called the PHP file?</p>
<pre><code>URL url = new URL("http://10.0.3.2/MYCODE/app/login.php");
String urlParams = "name="+name+"&password="+password;
HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
httpURLConnection.setDoOutput(true);
OutputStream os = httpURLConnection.getOutputStream();
os.write(urlParams.getBytes());
os.flush();
os.close();
</code></pre>
<p>A more general question: how do I build an API between Java and Python/Django?</p>
| 0 | 2016-10-13T17:56:55Z | 40,028,918 | <p>You could run a python socket server and interface with that using your app. There are many ways to do this though. I created a very small python socket server <a href="https://github.com/rstims/lightweight-python-socket-server" rel="nofollow">here</a>, you're welcome to use it, of course.</p>
<p><a href="http://www.oracle.com/technetwork/java/socket-140484.html" rel="nofollow">Java Socket Documentation</a></p>
| 1 | 2016-10-13T19:00:45Z | [
"php",
"android",
"python",
"django"
] |
Python Twitter API trying to retrieve tweet but error: AttributeError: 'int' object has no attribute 'encode' | 40,027,856 | <p>Why am I getting an AttributeError: 'int' object has no attribute 'encode'?
I am trying to retrieve a tweet using the Twitter API in Python. Full traceback here:</p>
<pre><code>Traceback (most recent call last):
File "C:/Python27/lol.py", line 34, in <module>
headers = req.to_header()
File "build\bdist.win-amd64\egg\oauth2\__init__.py", line 398, in to_header
params_header = ', '.join(header_params)
File "build\bdist.win-amd64\egg\oauth2\__init__.py", line 397, in <genexpr>
header_params = ('%s="%s"' % (k, v) for k, v in stringy_params)
File "build\bdist.win-amd64\egg\oauth2\__init__.py", line 396, in <genexpr>
stringy_params = ((k, escape(v)) for k, v in oauth_params)
File "build\bdist.win-amd64\egg\oauth2\__init__.py", line 163, in escape
s = s.encode('utf-8')
AttributeError: 'int' object has no attribute 'encode'
</code></pre>
<p>Below is the code I'm using.</p>
<pre><code>import oauth2
import time
import urllib2
import json
url1="https://api.twitter.com/1.1/search/tweets.json"
params = {
"oauth_version": "1.9.0",
"oauth_nonce": oauth2.generate_nonce(),
"oauth_timestamp": int(time.time())
}
consumer = oauth2.Consumer(key="*********", secret="*********")
token = oauth2.Token(key="*********", secret="*********")
params["oauth_consumer_key"] = consumer.key
params["oauth_token"] = token.key
for i in range(1):
url = url1
req = oauth2.Request(method="GET", url=url, parameters=params)
signature_method = oauth2.SignatureMethod_HMAC_SHA1()
req.sign_request(signature_method, consumer, token)
headers = req.to_url()
print headers
print url
for i in range(1):
url = url1
params["q"] = "pictorial"
params["count"] = 2
req = oauth2.Request(method="GET", url=url, parameters=params)
signature_method = oauth2.SignatureMethod_HMAC_SHA1()
req.sign_request(signature_method, consumer, token)
headers = req.to_header()
url = req.to_url()
response = urllib2.Request(url)
data = json.load(urllib2.urlopen(response))
if data["statuses"] == []:
print "end of data"
break
else:
print data
</code></pre>
<p>And if I change int(time.time()) into str(time.time())
I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Python27/lol.py", line 37, in <module>
data = json.load(urllib2.urlopen(response))
File "C:\Python27\lib\urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 437, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 475, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
</code></pre>
| 1 | 2016-10-13T17:57:18Z | 40,030,527 | <pre><code>"oauth_timestamp": int(time.time())
</code></pre>
<p>Here you use an <code>int</code>, but that field <em>must</em> be a string.</p>
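A sketch of the fix (my addition): compute the time numerically, then store it as a string of digits. This is also likely why the asker's <code>str(time.time())</code> attempt got a 400 — it keeps the fractional part (e.g. <code>'1476381115.23'</code>), while OAuth expects integer seconds, so truncate with <code>int()</code> first:

```python
import time

# OAuth wants oauth_timestamp as a string of digits (seconds since epoch)
timestamp = str(int(time.time()))

params = {"oauth_timestamp": timestamp}
print(params["oauth_timestamp"].isdigit())  # True
```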
| 0 | 2016-10-13T20:37:28Z | [
"python",
"api",
"twitter"
] |
Child calling parent's method without calling parent's __init__ in python | 40,027,861 | <p>I have a program that has a GUI in PyQt in the main thread. It communicates with a photo-detector and gets power readings in another thread, which sends a signal to the main thread to update the GUI's power value.
Now I want to use a motor to automatically align my optical fiber, getting feedback from the photo-detector. </p>
<p>So I created a class that controls the motors, but I have to somehow pass the photo-detector readings to that class. First, I tried to access the parent's power variable, but it didn't work.
Then I created a method in my GUI to return the variable's value and tried to access it from the motor class. I got an error saying that I couldn't use the parent's method without calling its <code>__init__</code> first. Is there a way to bypass it? I can't call the GUI's <code>__init__</code> again; I just want to use one of its methods from within the child class.</p>
<p>If there is an alternative way to do this, I'd be happy as well.</p>
<p>PS: I guess I can't give the child class the photo-detector object because it is in another thread, right? </p>
<p>--Edit--
The gui code is:</p>
<pre><code>class MyApp(QtGui.QMainWindow, Ui_MainWindow):
self.PDvalue = 0 #initial PD value
self.PDState = 0 #control the PD state (on-off)
self.PDport = self.dialog.pm100d.itemText(self.dialog.pm100d.currentIndex()) #gets pot info
def __init__(self):
... #a lot of other stuff
self.nano = AlgoNanoMax.NanoMax('COM12') #creates the motor object
self.nano_maxX.clicked.connect(self.NanoMaximizeX) #connect its fun to a buttom
self.actionConnect_PM100D.triggered.connect(self.ActionConnect_PM100D) #PD buttom
def NanoMaximizeX(self):
self.nano.maximize_nano_x() #uses motor object function
def ActionConnect_PM100D(self):
if self.PDState == 0: #check if PD is on
self.PD = PDThread(self.PDState, self.PDport) #creates thread
self.PD.valueupdate.connect(self.PDHandler) #signal connect
self.PD.dialogSignal.connect(self.PDdialog) #create error dialog
self.threads = []
self.threads.append(self.PD)
self.PD.start() #start thread
else:
self.PDState = 0
self.PD.state = 0 #stop thread
self.startpd.setText('Start PD') #change buttom name
def PDHandler(self, value):
self.PDvalue = value #slot to get pow from thread
def ReturnPow(self):
return self.PDvalue #return pow (I tried to use this to pass to the motor class)
def PDdialog(self):
self.dialog.set_instrument('PM100D') #I have a dialog that says error and asks you to type the right port
if self.dialog.exec_() == QtGui.QDialog.Accepted: #if Ok buttom try again
ret = self.dialog.pm100d.itemText(self.dialog.pm100d.currentIndex()) #new port
self.PD.port = str(ret)
self.PD.flagWhile = False #change PD stop loop condition to try again
else: #pressed cancel, so it gives up
self.PD.photodetector.__del__() #delete objects
self.PD.terminate() #stop thread
self.PD.quit()
</code></pre>
<p>Now the PD class, which is in another thread but in the same file as gui:</p>
<pre><code>class PDThread(QtCore.QThread):
valueupdate = QtCore.pyqtSignal(float) #creating signals
dialogSignal = QtCore.pyqtSignal() #signal in case of error
state = 1 #used to stop thread
def __init__(self, state, port):
QtCore.QThread.__init__(self)
self.photodetector = PM100D() #creates the PD object
self.port = port
def run(self):
while True:
self.flagWhile = True #used to leave while
try:
self.photodetector.connect(self.port) #try to connect
except:
self.dialogSignal.emit() #emit error signal
while self.flagWhile == True:
time.sleep(0.5) #wait here until user press something in the dialog, which is in another thread
else:
break #leave loop when connected
window.PDState = 1 #change state of main gui buttom (change functionality to turn off if pressed again)
window.startpd.setText('Stop PD') #change buttom label
while self.state == 1:
time.sleep(0.016)
value = self.photodetector.get_pow() #get PD pow
self.valueupdate.emit(value) #emit it
</code></pre>
<p>The AlgoNanoMax file:</p>
<pre><code>import gui
from NanoMax import Nano
class NanoMax(gui.MyApp): #inheriting parent
def __init__(self, mcontroller_port):
self.mcontroller = Nano(mcontroller_port) #mcontroller is the communication to the motor
def maximize_nano_x(self, step=0.001, spiral_number=3):
''' Alignment procedure with the nano motor X'''
print 'Optimizing X'
power = super(NanoMax, self).ReturnPow() #here I try to read from the photodetector
xpos = self.mcontroller.initial_position_x
position = []
position = [[power, xpos]]
xsign = 1
self.mcontroller.move_relative(self.mcontroller.xaxis, (-1) * spiral_number * step)
print 'X nano move: '+ str((-1) * spiral_number * step * 1000) + ' micrometers'
time.sleep(4)
power = super(NanoMax, self).ReturnPow()
xpos += (-1) * spiral_number * step
position.append([power, xpos])
for _ in xrange(2*spiral_number):
self.mcontroller.move_relative(self.mcontroller.xaxis, xsign * step)
print 'X nano move: '+ str(xsign * step * 1000) + ' micrometers'
time.sleep(5)
power = super(NanoMax, self).ReturnPow()
xpos += xsign * step
position.append([power, xpos])
pospower = [position[i][0] for i in xrange(len(position))]
optimalpoint = pospower.index(max(pospower))
x_shift = (-1) * (xpos - position[optimalpoint][1])
print 'Maximum power: ' + str(max(pospower)) + ' dBm'
print 'Current power: ' + str(super(NanoMax, self).ReturnPow()) + ' dBm'
self.mcontroller.move_relative(self.mcontroller.xaxis, x_shift)
</code></pre>
| 1 | 2016-10-13T17:57:29Z | 40,030,455 | <p>The <code>__init__</code> for <code>NanoMax</code> and <code>MyApp</code> should call <code>super().__init__()</code> to ensure initialization is done for all levels (if this is Python 2, you can't use no-arg <code>super</code>, so it would be <code>super(NanoMax, self).__init__()</code> and <code>super(MyApp, self).__init__()</code> respectively). This assumes the <code>PyQT</code> was properly written with new-style classes, and correct use of <code>super</code> itself; you're using <code>super</code> in other places, so presumably at least the former is true. Using <code>super</code> appropriately in all classes will ensure all levels are <code>__init__</code>-ed once, while manually listing super classes won't work in certain inheritance patterns, or might call some <code>__init__</code>s multiple times or not at all.</p>
<p>If there is a possibility that many levels might take arguments, you should also accept <code>*args</code>/<code>**kwargs</code> and forward them to the <code>super().__init__</code> call so the arguments are forwarded where then need to go.</p>
<p>Combining the two, your code should look like:</p>
<pre><code>class MyApp(QtGui.QMainWindow, Ui_MainWindow):
def __init__(self, *args, **kwargs):
super(MyApp, self).__init__(*args, **kwargs)
... rest of __init__ ...
class PDThread(QtCore.QThread):
def __init__(self, state, port, *args, **kwargs):
super(PDThread, self).__init__(*args, **kwargs)
...
class NanoMax(gui.MyApp): #inheriting parent
def __init__(self, mcontroller_port, *args, **kwargs):
super(NanoMax, self).__init__(*args, **kwargs)
self.mcontroller = Nano(mcontroller_port) #mcontroller is the communication to the motor
</code></pre>
<p>Note: If you've overloaded methods that the super class might call in its <code>__init__</code> and your overloads depend on state set in your own <code>__init__</code>, you'll need to set up that state before, rather than after the <code>super().__init__(...)</code> call. Cooperative multiple inheritance can be a pain that way. Also note that using positional arguments for anything but the lowest level class can be ugly with multiple inheritance, so it may make sense to pass all arguments by keyword, and only accept and forward <code>**kwargs</code>, not <code>*args</code>, so people don't pass positional arguments in ways that break if the inheritance hierarchy changes slightly.</p>
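A minimal, standalone demonstration of the cooperative pattern described above — toy classes, not the asker's PyQt ones — showing that every level's <code>__init__</code> runs exactly once, in MRO order:

```python
class Base(object):
    def __init__(self, **kwargs):
        super(Base, self).__init__(**kwargs)  # continue along the MRO
        self.base_ready = True

class Mixin(object):
    def __init__(self, **kwargs):
        super(Mixin, self).__init__(**kwargs)
        self.mixin_ready = True

class App(Base, Mixin):
    def __init__(self, **kwargs):
        super(App, self).__init__(**kwargs)  # triggers Base, then Mixin

a = App()
print(a.base_ready, a.mixin_ready)  # True True -- both parents initialized
```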
| 1 | 2016-10-13T20:33:04Z | [
"python",
"multithreading",
"inheritance",
"pyqt",
"parent-child"
] |
Child calling parent's method without calling parent's __init__ in python | 40,027,861 | <p>I have a program that has a GUI in PyQt in the main thread. It communicates with a photo-detector and gets power readings in another thread, which sends a signal to the main thread to update the GUI's power value.
Now I want to use a motor to automatically align my optical fiber, getting feedback from the photo-detector. </p>
<p>So I created a class that controls the motors, but I have to somehow pass the photo-detector readings to that class. First, I tried to access the parent's power variable, but it didn't work.
Then I created a method in my GUI to return the variable's value and tried to access it from the motor class. I got an error saying that I couldn't use the parent's method without calling its <code>__init__</code> first. Is there a way to bypass it? I can't call the GUI's <code>__init__</code> again; I just want to use one of its methods from within the child class.</p>
<p>If there is an alternative way to do this, I'd be happy as well.</p>
<p>PS: I guess I can't give the child class the photo-detector object because it is in another thread, right? </p>
<p>--Edit--
The gui code is:</p>
<pre><code>class MyApp(QtGui.QMainWindow, Ui_MainWindow):
self.PDvalue = 0 #initial PD value
self.PDState = 0 #control the PD state (on-off)
self.PDport = self.dialog.pm100d.itemText(self.dialog.pm100d.currentIndex()) #gets pot info
def __init__(self):
... #a lot of other stuff
self.nano = AlgoNanoMax.NanoMax('COM12') #creates the motor object
self.nano_maxX.clicked.connect(self.NanoMaximizeX) #connect its fun to a buttom
self.actionConnect_PM100D.triggered.connect(self.ActionConnect_PM100D) #PD buttom
def NanoMaximizeX(self):
self.nano.maximize_nano_x() #uses motor object function
def ActionConnect_PM100D(self):
if self.PDState == 0: #check if PD is on
self.PD = PDThread(self.PDState, self.PDport) #creates thread
self.PD.valueupdate.connect(self.PDHandler) #signal connect
self.PD.dialogSignal.connect(self.PDdialog) #create error dialog
self.threads = []
self.threads.append(self.PD)
self.PD.start() #start thread
else:
self.PDState = 0
self.PD.state = 0 #stop thread
self.startpd.setText('Start PD') #change buttom name
def PDHandler(self, value):
self.PDvalue = value #slot to get pow from thread
def ReturnPow(self):
return self.PDvalue #return pow (I tried to use this to pass to the motor class)
def PDdialog(self):
self.dialog.set_instrument('PM100D') #I have a dialog that says error and asks you to type the right port
if self.dialog.exec_() == QtGui.QDialog.Accepted: #if Ok buttom try again
ret = self.dialog.pm100d.itemText(self.dialog.pm100d.currentIndex()) #new port
self.PD.port = str(ret)
self.PD.flagWhile = False #change PD stop loop condition to try again
else: #pressed cancel, so it gives up
self.PD.photodetector.__del__() #delete objects
self.PD.terminate() #stop thread
self.PD.quit()
</code></pre>
<p>Now the PD class, which is in another thread but in the same file as gui:</p>
<pre><code>class PDThread(QtCore.QThread):
valueupdate = QtCore.pyqtSignal(float) #creating signals
dialogSignal = QtCore.pyqtSignal() #signal in case of error
state = 1 #used to stop thread
def __init__(self, state, port):
QtCore.QThread.__init__(self)
self.photodetector = PM100D() #creates the PD object
self.port = port
def run(self):
while True:
self.flagWhile = True #used to leave while
try:
self.photodetector.connect(self.port) #try to connect
except:
self.dialogSignal.emit() #emit error signal
while self.flagWhile == True:
time.sleep(0.5) #wait here until user press something in the dialog, which is in another thread
else:
break #leave loop when connected
window.PDState = 1 #change state of main gui buttom (change functionality to turn off if pressed again)
window.startpd.setText('Stop PD') #change buttom label
while self.state == 1:
time.sleep(0.016)
value = self.photodetector.get_pow() #get PD pow
self.valueupdate.emit(value) #emit it
</code></pre>
<p>The AlgoNanoMax file:</p>
<pre><code>import gui
from NanoMax import Nano
class NanoMax(gui.MyApp): #inheriting parent
def __init__(self, mcontroller_port):
self.mcontroller = Nano(mcontroller_port) #mcontroller is the communication to the motor
def maximize_nano_x(self, step=0.001, spiral_number=3):
''' Alignment procedure with the nano motor X'''
print 'Optimizing X'
power = super(NanoMax, self).ReturnPow() #here I try to read from the photodetector
xpos = self.mcontroller.initial_position_x
position = []
position = [[power, xpos]]
xsign = 1
self.mcontroller.move_relative(self.mcontroller.xaxis, (-1) * spiral_number * step)
print 'X nano move: '+ str((-1) * spiral_number * step * 1000) + ' micrometers'
time.sleep(4)
power = super(NanoMax, self).ReturnPow()
xpos += (-1) * spiral_number * step
position.append([power, xpos])
for _ in xrange(2*spiral_number):
self.mcontroller.move_relative(self.mcontroller.xaxis, xsign * step)
print 'X nano move: '+ str(xsign * step * 1000) + ' micrometers'
time.sleep(5)
power = super(NanoMax, self).ReturnPow()
xpos += xsign * step
position.append([power, xpos])
pospower = [position[i][0] for i in xrange(len(position))]
optimalpoint = pospower.index(max(pospower))
x_shift = (-1) * (xpos - position[optimalpoint][1])
print 'Maximum power: ' + str(max(pospower)) + ' dBm'
print 'Current power: ' + str(super(NanoMax, self).ReturnPow()) + ' dBm'
self.mcontroller.move_relative(self.mcontroller.xaxis, x_shift)
</code></pre>
| 1 | 2016-10-13T17:57:29Z | 40,030,584 | <pre><code>class MyApp(QtGui.QMainWindow, Ui_MainWindow):
self.PDvalue = 0 #initial PD value
self.PDState = 0 #control the PD state (on-off)
</code></pre>
<p>The code above sets variables at class level, outside of any method, but still uses <code>self</code>, which isn't defined at class scope. To declare class-level attributes, drop the <code>self.</code> prefix, so the class definition becomes:</p>
<pre><code>class MyApp(QtGui.QMainWindow, Ui_MainWindow):
PDvalue = 0 #initial PD value
PDState = 0 #control the PD state (on-off)
</code></pre>
<p>and in the super line</p>
<pre><code>power = super(NanoMax, self).PDvalue
</code></pre>
<p>For example:</p>
<pre><code>>>> class Hi:
H = 5
def __init__(self):
self.g = 6
>>> class Bye(Hi):
def H(self):
print(super(Bye, self).H)
>>> e = Bye()
>>> e.H()
5
>>>
</code></pre>
| 0 | 2016-10-13T20:41:33Z | [
"python",
"multithreading",
"inheritance",
"pyqt",
"parent-child"
] |
Google API Authorization (Service Account) Error: HttpAccessTokenRefreshError: unauthorized_client: Unauthorized client or scope in request | 40,027,878 | <p>I'm attempting to connect to the YouTube Analytics API using Python. When I go to Google's Developer Console in the Credentials tab I click on the <code>Create credentials</code> drop-down menu and select <code>Help me choose</code>. I click on the API I want to use (<code>YouTube Analytics API</code>), where I will be calling from (<code>Other non-UI (e.g. cron job, daemon)</code>), what data I will be accessing (<code>Application Data</code>), and then whether I'm using Google App Engine (<code>no</code>). I click on the button to see which credentials I need and it tells me <code>You alread have credentials that are suitable for this purpose</code>.</p>
<p>I have a <code>Service account</code> that I use to connect to the Google Search Console API to access data for multiple sites my company owns. Because we have multiple sites I use delegated credentials based on my email address. This is the code I use to authenticate to the Search Console API:</p>
<pre><code>from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
scopes = ['https://www.googleapis.com/auth/webmasters.readonly']
credentials = ServiceAccountCredentials.from_json_keyfile_name('keyfile.json', scopes=scopes)
delegated_credentials = credentials.create_delegated('<my-email>')
http_auth = delegated_credentials.authorize(Http())
webmasters_service = build('webmasters', 'v3', http=http_auth)
</code></pre>
<p>Now, I'm trying to use a similar approach with the YouTube Analytics API but I'm getting this error: <code>HttpAccessTokenRefreshError: unauthorized_client: Unauthorized client or scope in request.</code>. Here's my code:</p>
<pre><code>from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
START_DATE = "2016-09-01"
END_DATE = "2016-09-30"
scopes = ['https://www.googleapis.com/auth/yt-analytics.readonly']
credentials = ServiceAccountCredentials.from_json_keyfile_name('keyfile.json', scopes=scopes)
delegated_credentials = credentials.create_delegated('<my-email>')
http_auth = delegated_credentials.authorize(Http())
youtube_service = build('youtubeAnalytics', 'v1', http=http_auth)
analytics_query_response = youtube_service.reports().query(
ids="channel==<my-youtube-channel-id>",
metrics="views",
start_date=START_DATE,
end_date=END_DATE
).execute()
</code></pre>
<p>The <code>keyfile.json</code> is the same file (containing the Service account credentials) that I use to connect to the Search Console API. I even tried created a new Service account and used those credentials but I didn't have any luck. And yes, I've enabled the all of the YouTube APIs in the developer console.</p>
<p><strong>Do you have any idea why I'm getting the <code>HttpAccessTokenRefreshError: unauthorized_client: ...</code> error?</strong></p>
<p>edit: Previously I used an OAuth ID as opposed to a Service account; when I ran my script, a browser tab was presented and I had to choose an account. I was presented with two options: one was my email, the other was the YouTube account to which I was added as a manager. Do you think that I'm getting that error because I'm using my email address to generate credentials (and not the YouTube account)?</p>
<p>edit2: It appears that the YouTube account may not fall under the umbrella of our Google Apps domain. So, this may be why I can't authorize using my email address, even though I've been made a Manager of that YouTube account with that email address.</p>
 | 1 | 2016-10-13T17:57:18Z | 40,044,589 | <p>I found in this <a href="https://developers.google.com/youtube/analytics/authentication" rel="nofollow">documentation</a> that you cannot use a Service account with the YouTube Analytics API.</p>
<p>It is stated here that:</p>
<blockquote>
<p>The service account flow supports server-to-server interactions that
do not access user information. However, the YouTube Analytics API
does not support this flow. Since there is no way to link a Service
Account to a YouTube account, attempts to authorize requests with this
flow will generate an error.</p>
</blockquote>
<p>I also found it in this <a href="http://stackoverflow.com/questions/13586153/youtube-analytics-google-service-account">thread</a> answered by a Googler.</p>
<p>For more information, check this <a href="http://stackoverflow.com/questions/15554106/using-the-youtube-analytics-api-with-a-cms-account">SO question</a>.</p>
| 1 | 2016-10-14T13:38:04Z | [
"python",
"youtube-api",
"google-oauth",
"google-api-client"
] |
Error with if/elif statements and turtle commands | 40,027,925 | <p>I have been having an odd error with if/else statements. For some reason, when I use this code (intro project to turtle graphics)</p>
<pre><code>from turtle import *
print('Hello. I am Terry the Turtle. Enter a command to make me draw a line, or ask for help.')
on = True
shape('turtle')
if on == True:
c1 = str(input('Enter command: ')
if c1 == 'forward':
c2 = eval(input('How many degrees? ' ))
right(c2)
elif c1 == 'right':
c2 = eval(input('How many degrees? ' ))
right(c2)
elif c1 == 'left':
c2 = eval(input('How many degrees? ' ))
left(c2)
elif c1 == 'goodbye':
print('Bye!')
on = False
else:
print('Possible commands: forward, right, left, goodbye.')
</code></pre>
<p>For some odd reason, the <code>if</code> and <code>elif</code> statements keep returning syntax errors, yet they do not seem to have any visible errors. I tried doing similar things, but it kept on returning syntax errors. Is there any way to fix this?</p>
<p>Sorry if this is a dumb question, this is my first time here and I am just really confused.</p>
| 1 | 2016-10-13T18:00:41Z | 40,028,187 | <p>I think this is what you want</p>
<pre><code>from turtle import *
print('Hello. I am Terry the Turtle. Enter a command to make me draw a line, or ask for help.')
on = True
shape('turtle')
# Get commands until user enters goodbye
while on:
c1 = str(input('Enter command: '))
if c1 == 'forward':
# Ask how far when moving
c2 = int(input('How far? '))
# Use forward to move
forward(c2)
elif c1 == 'right':
c2 = int(input('How many degrees? '))
right(c2)
elif c1 == 'left':
c2 = int(input('How many degrees? '))
left(c2)
elif c1 == 'goodbye':
print('Bye!')
on = False
else:
print('Possible commands: forward, right, left, goodbye.')
</code></pre>
<p>This should be run with Python3 (e.g. <code>python3 myrtle.py</code>).</p>
| 1 | 2016-10-13T18:17:29Z | [
"python",
"turtle-graphics"
] |
Error with if/elif statements and turtle commands | 40,027,925 | <p>I have been having an odd error with if/else statements. For some reason, when I use this code (intro project to turtle graphics)</p>
<pre><code>from turtle import *
print('Hello. I am Terry the Turtle. Enter a command to make me draw a line, or ask for help.')
on = True
shape('turtle')
if on == True:
c1 = str(input('Enter command: ')
if c1 == 'forward':
c2 = eval(input('How many degrees? ' ))
right(c2)
elif c1 == 'right':
c2 = eval(input('How many degrees? ' ))
right(c2)
elif c1 == 'left':
c2 = eval(input('How many degrees? ' ))
left(c2)
elif c1 == 'goodbye':
print('Bye!')
on = False
else:
print('Possible commands: forward, right, left, goodbye.')
</code></pre>
<p>For some odd reason, the <code>if</code> and <code>elif</code> statements keep returning syntax errors, yet they do not seem to have any visible errors. I tried doing similar things, but it kept on returning syntax errors. Is there any way to fix this?</p>
<p>Sorry if this is a dumb question, this is my first time here and I am just really confused.</p>
| 1 | 2016-10-13T18:00:41Z | 40,028,254 | <p>For c1, you don't have to convert the input to a string, since input() already returns a string in Python 3. There is also no need for "eval".</p>
<p>Try:</p>
<pre><code>from turtle import *
print('Hello. I am Terry the Turtle. Enter a command to make me draw a line, or ask for help.')
on = True
shape('turtle')
while on == True:
c1 = input('Enter command: ')
if c1 == 'forward':
c2 = int(input('How far forward? ' ))
forward(c2)
elif c1 == 'right':
c2 = int(input('How many degrees? ' ))
right(c2)
elif c1 == 'left':
c2 = int(input('How many degrees? ' ))
left(c2)
elif c1 == 'goodbye':
print('Bye!')
on = False
else:
print('Possible commands: forward, right, left, goodbye.')
</code></pre>
| 0 | 2016-10-13T18:21:29Z | [
"python",
"turtle-graphics"
] |
How to hide hiding secret keys in commits to bitbucket private repo? | 40,027,965 | <p>I'm using python. Currently, I'm doing this. I have a file named <code>keys.py</code> where I store in my secret keys such as AWS_SECRET and etc.</p>
<p>Inside my <code>.gitignore</code> I have keys.py so that it doesn't get committed to bitbucket.</p>
<p>My <code>keys.py</code> looks like this.</p>
<pre><code>#!/usr/bin/env python
AWS_KEY = "1231231231"
AWS_SECRET = "23123123123"
PHONE_NUMBER = "12312312312"
</code></pre>
<p>Inside the python file that needs the keys, I do the following.</p>
<pre><code>import keys
print keys.AWS_KEY
</code></pre>
<p>The problem I'm having now is that now that bitbucket supports pipelines, I am able to do the testing and stuff and deploy to server straight. However, since <code>keys.py</code> isn't in my repo, bitbucket fails to test the code.</p>
<p>There are environment variable settings in Bitbucket, but using them will require me to change my codebase to accept environment variables.</p>
<p>What should I do to use bitbucket pipelines? Should I change my code to use environment variables? Are there any better approaches?</p>
<p>Thanks all.</p>
| 0 | 2016-10-13T18:03:16Z | 40,028,250 | <p>try this:</p>
<pre><code>import os
AWS_KEY = os.environ.get('AWS_KEY', 'XXXXXX')
AWS_SECRET = os.environ.get('AWS_SECRET', 'YYYYYY')
PHONE_NUMBER = os.environ.get('PHONE_NUMBER', 'ZZZZZ')
</code></pre>
<p>But first you need set AWS_KEY, AWS_SECRET, PHONE_NUMBER in the environment variables in <a href="https://confluence.atlassian.com/bitbucket/environment-variables-in-bitbucket-pipelines-794502608.html" rel="nofollow">bitbucket</a> </p>
| 1 | 2016-10-13T18:21:20Z | [
"python",
"git"
] |
How to hide hiding secret keys in commits to bitbucket private repo? | 40,027,965 | <p>I'm using python. Currently, I'm doing this. I have a file named <code>keys.py</code> where I store in my secret keys such as AWS_SECRET and etc.</p>
<p>Inside my <code>.gitignore</code> I have keys.py so that it doesn't get committed to bitbucket.</p>
<p>My <code>keys.py</code> looks like this.</p>
<pre><code>#!/usr/bin/env python
AWS_KEY = "1231231231"
AWS_SECRET = "23123123123"
PHONE_NUMBER = "12312312312"
</code></pre>
<p>Inside the python file that needs the keys, I do the following.</p>
<pre><code>import keys
print keys.AWS_KEY
</code></pre>
<p>The problem I'm having now is that now that bitbucket supports pipelines, I am able to do the testing and stuff and deploy to server straight. However, since <code>keys.py</code> isn't in my repo, bitbucket fails to test the code.</p>
<p>There is environment variables settings in bitbucket. But that will require me to change my codebase to accept environment variables.</p>
<p>What should I do to use bitbucket pipelines? Should I change my code to use environment variables? Is there any better approaches?</p>
<p>Thanks all.</p>
| 0 | 2016-10-13T18:03:16Z | 40,028,278 | <p>You should use environment variables. That is the norm. You will have to adapt your code for this change. </p>
<p>A simple fix may be to have your <code>keys.py</code> file look for the environment variables. You can retrieve environment variables as follows: <code>os.environ.get("VARIABLE_NAME")</code></p>
<p>As an aside: depending on your use case, you may want to write your tests such that they don't depend on <em>actually</em> calling the services; such as by writing mock classes specifically for testing. This is normal practice, and it will speed up your tests as well. Of course, that's not always easy or practical. So maybe look into mocks and other testing techniques.</p>
| 1 | 2016-10-13T18:23:24Z | [
"python",
"git"
] |
Django: Filtering query to a specific id | 40,028,150 | <p>I have a podcast management website where a user is able to setup his account and after that will be able to create multiple episode from that specific user. After an episode is done, a button will appear where he can see some links that is created automatically for the user to use. The problem I am having is that for every episode, I am trying to show the links for that specific one but it always keeps showing the links from the one I recently created and other episodes that were previously created.</p>
<p>This is the button where the user click when the episode has created the links:</p>
<pre><code><a class="btn btn-info box-shadow--6dp" href="{% url 'pf:episodereview' production_id=instance.id %}" role="button"><i class="fa fa-link" aria-hidden="true"></i>&nbsp Review Links</a>
</code></pre>
<p>The URL pattern in <code>urls.py</code>:</p>
<pre><code>url(r'^episodereview/(?P<production_id>[0-9]+)/$', views.EpisodeReview.as_view(), name="episodereview"),
</code></pre>
<p>This is what happens in <code>views.py</code>:</p>
<pre><code>class EpisodeReview(LoginRequiredMixin, ProductionRequiredMixin, ListView):
template_name = 'pf/forms_episode_review.html'
podcast = None
def get(self, request, *args, **kwargs):
production_id = kwargs.get('production_id', None)
if production_id:
production = Production.objects.filter(id=production_id).first()
if not production:
return self.handle_no_permission()
return super(EpisodeReview, self).get(request, *args, **kwargs)
def get_queryset(self):
return Production.objects.filter(podcast=self.podcast)
def get_success_url(self):
return reverse('pf:dashboard')
</code></pre>
<p>And the template where everything is displayed:</p>
<pre><code>{% extends "pf/base.html" %}
{% load crispy_forms_tags %}
{% block content %}
<br>
<br>
<div class="panel panel-default box-shadow--16dp col-sm-6 col-sm-offset-3">
<div class="panel-body">
<div class='row'>
<div class='col-sm-12'>
<h3><i class="fa fa-wpforms pull-right" aria-hidden="true"></i>Episode Review&nbsp</h3>
<h5>Following links are generated automatically with your accounts and can be used immediately.</h5>
<hr/>
{% if object_list %}
<table class='table'>
<tbody>
{% for instance in object_list %}
<ul>
<li><b>Wordpress URL:</b> {{ instance.wordpress_url }}</li>
<li><b>Wordpress Short URL:</b> {{ instance.wordpress_short_url }}</li>
<li><b>Soundcloud Result URL:</b>{{ instance.soundcloud_result_url }}</li>
<li><b>Youtube Result URL:</b>{{ instance.youtube_result_url }}</li>
<li><b>Libsyn Result URL:</b>{{ instance.libsyn_result_url }}</li>
</ul>
{% endfor %}
</tbody>
</table>
{% endif %}
<hr/>
<button type="submit" class="btn btn-info box-shadow--6dp"><i class="fa fa-floppy-o" aria-hidden="true"></i> &nbspSave
</button>
</div>
</div>
</div>
</div>
{% endblock %}
</code></pre>
<p>Welcome any suggestion!</p>
| 0 | 2016-10-13T18:14:46Z | 40,030,108 | <p>You filter by the id in the get method, but then don't do anything with the result. When it comes to construct the template context, Django calls get_queryset, which only filters by self.podcast - which is None.</p>
<p>You should move that filter logic into get_queryset. And if you also want to filter by podcast, you should find a way to define that parameter too.</p>
| 0 | 2016-10-13T20:12:43Z | [
"python",
"django",
"django-templates",
"django-views",
"django-urls"
] |
How do you get the name of the tensorflow output nodes in a Keras Model? | 40,028,175 | <p>I'm trying to create a pb file from my Keras (tensorflow backend) model so I can build it on iOS. I'm using freeze.py and I need to pass the output nodes. How do i get the names of the output nodes of my Keras model?</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py</a></p>
| 1 | 2016-10-13T18:16:42Z | 40,052,942 | <p>The <code>output_node_names</code> should contain the names of the graph nodes you intend to use for inference(e.g. softmax). It is used to extract the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/graph_util.py#L234" rel="nofollow">sub-graph</a> that will be needed for inference.
It may be useful to look at <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph_test.py" rel="nofollow">freeze_graph_test</a>.</p>
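<p>With Keras on the TensorFlow backend, one way to read those names off a model is via the underlying tensors (a sketch; <code>model</code> is assumed to be an already-built Keras model, and this relies on the TF 1.x graph API):</p>

```python
# `model` is assumed to exist; each Keras input/output tensor wraps a TF op.
output_names = [out.op.name for out in model.outputs]
input_names = [inp.op.name for inp in model.inputs]
print(output_names)  # candidates for freeze_graph's output_node_names
```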
| 1 | 2016-10-14T22:37:10Z | [
"python",
"tensorflow",
"keras"
] |
aws boto - how to create instance and return instance_id | 40,028,223 | <p>I want to create a python script where I can pass arguments/inputs to specify instance type and later attach an extra EBS (if needed).</p>
<pre><code>ec2 = boto3.resource('ec2','us-east-1')
hddSize = input('Enter HDD Size if you want extra space ')
instType = input('Enter the instance type ')
def createInstance():
ec2.create_instances(
ImageId=AMI,
InstanceType = instType,
SubnetId='subnet-31d3ad3',
DisableApiTermination=True,
SecurityGroupIds=['sg-sa4q36fc'],
KeyName='key'
)
return instanceID; ## I know this does nothing
def createEBS():
ebsVol = ec2.Volume(
id = instanceID,
volume_type = 'gp2',
size = hddSize
)
</code></pre>
<p>Now, can ec2.create_instances() return ID or do I have to do an iteration of reservations?</p>
<p>or do I do an ec2.create(instance_id) / return instance_id? The documentation isn't specifically clear here.</p>
| 0 | 2016-10-13T18:19:13Z | 40,030,586 | <p>The docs state that the call to create_instances()</p>
<p><a href="https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances" rel="nofollow">https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances</a></p>
<p>Returns list(ec2.Instance). So you should be able to get the instance ID(s) from the 'id' property of the object(s) in the list.</p>
| 1 | 2016-10-13T20:41:35Z | [
"python",
"amazon-ec2",
"boto",
"boto3"
] |
aws boto - how to create instance and return instance_id | 40,028,223 | <p>I want to create a python script where I can pass arguments/inputs to specify instance type and later attach an extra EBS (if needed).</p>
<pre><code>ec2 = boto3.resource('ec2','us-east-1')
hddSize = input('Enter HDD Size if you want extra space ')
instType = input('Enter the instance type ')
def createInstance():
ec2.create_instances(
ImageId=AMI,
InstanceType = instType,
SubnetId='subnet-31d3ad3',
DisableApiTermination=True,
SecurityGroupIds=['sg-sa4q36fc'],
KeyName='key'
)
return instanceID; ## I know this does nothing
def createEBS():
ebsVol = ec2.Volume(
id = instanceID,
volume_type = 'gp2',
size = hddSize
)
</code></pre>
<p>Now, can ec2.create_instances() return ID or do I have to do an iteration of reservations?</p>
<p>or do I do an ec2.create(instance_id) / return instance_id? The documentation isn't specifically clear here.</p>
| 0 | 2016-10-13T18:19:13Z | 40,030,610 | <p>You can do the following:</p>
<pre><code>def createInstance():
instance = ec2.create_instances(
ImageId=AMI,
InstanceType = instType,
SubnetId='subnet-31d3ad3',
DisableApiTermination=True,
SecurityGroupIds=['sg-sa4q36fc'],
KeyName='key'
)
    # create_instances returns a list of Instance objects
    return instance[0].id
</code></pre>
<p>actually <a href="http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances" rel="nofollow"><code>create_instances</code></a> returns an <a href="http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#instance" rel="nofollow"><code>ec2.instance</code></a> instance</p>
| 0 | 2016-10-13T20:42:47Z | [
"python",
"amazon-ec2",
"boto",
"boto3"
] |
How can I sort lines by the len of a field of each line? Python | 40,028,281 | <p>I have one code that should print out 10 lines sorted in ascending order from shortest to longest. I have a test text and I want to order the output lines using the len of the field in the position line[4] but I don´t know how to do it because I think I need to read the entire text and after to order the lines in function of the lenght of the 5th field. </p>
<pre><code>#!/usr/bin/python
import sys
import csv
def mapper():
reader = csv.reader(sys.stdin, delimiter='\t')
writer = csv.writer(sys.stdout, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)
for line in reader:
line.sort(key=len)
writer.writerow(line)
test_text = """\"\"\t\"\"\t\"\"\t\"\"\t\"333\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"88888888\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"1\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"11111111111\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"1000000000\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"22\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"4444\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"666666\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"55555\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"999999999\"\t\"\"
\"\"\t\"\"\t\"\"\t\"\"\t\"7777777\"\t\"\"
"""
# This function allows you to test the mapper with the provided test string
def main():
import StringIO
sys.stdin = StringIO.StringIO(test_text)
mapper()
sys.stdin = sys.__stdin__
main()
</code></pre>
<p>I want that the final result is:</p>
<pre><code>"" "" "" "" "22" ""
"" "" "" "" "333" ""
"" "" "" "" "4444" ""
"" "" "" "" "55555" ""
"" "" "" "" "666666" ""
"" "" "" "" "7777777" ""
"" "" "" "" "88888888" ""
"" "" "" "" "999999999" ""
"" "" "" "" "1000000000" ""
"" "" "" "" "11111111111" ""
</code></pre>
<p>How can I do this?</p>
| 0 | 2016-10-13T18:23:34Z | 40,028,415 | <p>Change your mapper method to this</p>
<pre><code>def mapper():
reader = csv.reader(sys.stdin, delimiter='\t')
writer = csv.writer(sys.stdout, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)
for line in sorted(list(reader), key=lambda x: len(x[-2])):
writer.writerow(line)
</code></pre>
| 0 | 2016-10-13T18:30:12Z | [
"python"
] |
How to get the x and y intercept in matplotlib? | 40,028,330 | <p>I have scoured the internet and can't find a python command to find the x and y intercepts of a curve on matplotlib. Is there a command that exists? or is there a much easier way that is going over my head? Any help would be appreciated. Thanks, </p>
<p>Nimrodian.</p>
| 1 | 2016-10-13T18:25:56Z | 40,028,408 | <p>Use this. Much faster:</p>
<pre><code>slope, intercept = np.polyfit(x, y, 1)
</code></pre>
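<p>Note this gives the slope and y-intercept of a straight-line fit; the x-intercept then follows from setting y to zero. A self-contained example (the data points here are made up for illustration):</p>

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x - 4.0                   # a line with slope 2, y-intercept -4

slope, intercept = np.polyfit(x, y, 1)
x_intercept = -intercept / slope    # where the fitted line crosses y = 0

print(slope, intercept, x_intercept)
```

<p>For a curve rather than a line, the same idea applies to the roots of the fitted polynomial (e.g. <code>np.roots</code> on the coefficients returned by <code>np.polyfit</code>).</p>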
| 1 | 2016-10-13T18:29:56Z | [
"python",
"matplotlib"
] |
How to get the x and y intercept in matplotlib? | 40,028,330 | <p>I have scoured the internet and can't find a python command to find the x and y intercepts of a curve on matplotlib. Is there a command that exists? or is there a much easier way that is going over my head? Any help would be appreciated. Thanks, </p>
<p>Nimrodian.</p>
| 1 | 2016-10-13T18:25:56Z | 40,028,537 | <pre><code>for x, y in zip(x_values, y_values):
if x == 0 or y == 0:
print(x, y)
</code></pre>
| 0 | 2016-10-13T18:38:22Z | [
"python",
"matplotlib"
] |
Random comma inserted at character 8192 in python "json" result called from node.js | 40,028,380 | <p>I'm a JS developer just learning python. This is my first time trying to use node (v6.7.0) and python (v2.7.1) together. I'm using restify with python-runner as a bridge to my python virtualenv. My python script uses a RAKE NLP keyword-extraction package.</p>
<p>I can't figure out for the life of me why my return data in <strong>server.js</strong> inserts a random comma at character 8192 and roughly multiples of. There's no pattern except the location; Sometimes it's in the middle of the object key string other times in the value, othertimes after the comma separating the object pairs. This completely breaks the JSON.parse() on the return data. Example outputs below. When I run the script from a python shell, this doesn't happen.</p>
<p>I seriously can't figure out why this is happening, any experienced devs have any ideas?</p>
<p><em>Sample output in browser</em></p>
<pre><code>[..., {...ate': 1.0, 'intended recipient': 4.,0, 'correc...}, ...]
</code></pre>
<p><em>Sample output in python shell</em></p>
<pre><code>[..., {...ate': 1.0, 'intended recipient': 4.0, 'correc...}, ...]
</code></pre>
<p><strong>DISREGARD ANY DISCREPANCIES REGARDING OBJECT CONVERSION AND HANDLING IN THE FILES BELOW. THE CODE HAS BEEN SIMPLIFIED TO SHOWCASE THE ISSUE</strong></p>
<p><strong>server.js</strong></p>
<pre><code>var restify = require('restify');
var py = require('python-runner');
var server = restify.createServer({...});
server.get('/keyword-extraction', function( req, res, next ) {
py.execScript(__dirname + '/keyword-extraction.py', {
bin: '.py/bin/python'
})
.then( function( data ) {
fData = JSON.parse(data); <---- ERROR
res.json(fData);
})
.catch( function( err ) {...});
return next();
});
server.listen(8001, 'localhost', function() {...});
</code></pre>
<p><strong>keyword-extraction.py</strong></p>
<pre><code>import csv
import json
import RAKE
f = open( 'emails.csv', 'rb' )
f.readline() # skip line containing col names
outputData = []
try:
reader = csv.reader(f)
for row in reader:
email = {}
emailBody = row[7]
Rake = RAKE.Rake('SmartStoplist.txt')
rakeOutput = Rake.run(emailBody)
for tuple in rakeOutput:
email[tuple[0]] = tuple[1]
outputData.append(email)
finally:
    f.close()
print( json.dumps(outputData))
</code></pre>
| 2 | 2016-10-13T18:28:22Z | 40,029,368 | <p>This looks suspiciously like a bug related to size of some buffer, since 8192 is a power of two.</p>
<p>The main thing here is to isolate exactly where the failure is occurring. If I were debugging this, I would </p>
<ol>
<li><p>Take a closer look at the output from <code>json.dumps</code>, by printing several characters on either side of position 8191, ideally the integer character code (unicode, ASCII, or whatever). </p></li>
<li><p>If that looks OK, I would try capturing the output from the python script as a file and read that directly in the node server (i.e. don't run a python script). </p></li>
<li><p>If that works, then create a python script that takes that file and outputs it without manipulation and have your node server execute that python script instead of the one it is using now.</p></li>
</ol>
<p>That should help you figure out where the problem is occurring. From comments, I suspect that this is essentially a bug that you cannot control, unless you can increase the python buffer size enough to guarantee your data will never blow the buffer. 8K is pretty small, so that might be a realistic solution.</p>
<p>If that is inadequate, then you might consider processing the data on the the node server, to remove every character at <code>n * 8192</code>, if you can consistently rely on that. Good luck. </p>
| 0 | 2016-10-13T19:28:13Z | [
"javascript",
"python",
"json",
"node.js",
"restify"
] |
How to identify the name of the file which calls the function in python? | 40,028,401 | <p>I have a server.py which contains a function and other files like requestor1.py requestor2.py .... requestorN.py</p>
<p>Server.py contains a function :</p>
<pre><code>def callmeforhelp():
    return "I am here to help you out!"
</code></pre>
<p>and requestor1.py file calls the function callmeforhelp() and it has the imports needed to call the function from server.py</p>
<p>Is there a way my server.py knows which file is calling it?</p>
<p>Something similar like below :</p>
<p>When requestor1.py calls the function, then :</p>
<pre><code>def callmeforhelp():
    print "Now I am being called by : " + caller  # caller must contain the value requestor1.py, or even the full path of requestor1.py
    return "I am here to help you out!"
</code></pre>
| 0 | 2016-10-13T18:29:44Z | 40,028,579 | <p>Try it in your <code>server</code> file:</p>
<pre><code>import inspect
def callmeforhelp():
result = inspect.getouterframes(inspect.currentframe(), 2)
print("Caller is: " + str(result[1][1]))
</code></pre>
| 1 | 2016-10-13T18:40:41Z | [
"python"
] |
How to identify the name of the file which calls the function in python? | 40,028,401 | <p>I have a server.py which contains a function and other files like requestor1.py requestor2.py .... requestorN.py</p>
<p>Server.py contains a function :</p>
<pre><code>def callmeforhelp():
    return "I am here to help you out!"
</code></pre>
<p>and requestor1.py file calls the function callmeforhelp() and it has the imports needed to call the function from server.py</p>
<p>Is there a way my server.py knows which file is calling it?</p>
<p>Something similar like below :</p>
<p>When requestor1.py calls the function, then :</p>
<pre><code>def callmeforhelp():
    print "Now I am being called by : " + caller  # caller must contain the value requestor1.py, or even the full path of requestor1.py
    return "I am here to help you out!"
</code></pre>
| 0 | 2016-10-13T18:29:44Z | 40,028,603 | <p>Here is a way to get at the caller's local attributes:</p>
<pre><code>import sys
def callmeforhelp():
print("Called from", sys._getframe(1).f_locals['__file__'])
</code></pre>
<p>This is a feature of CPython and is not guaranteed to be present in other language implementations.</p>
| 2 | 2016-10-13T18:41:48Z | [
"python"
] |
adding data from different rows in a csv belonging to a common variable | 40,028,405 | <p>this is my csv excel file information:</p>
<pre><code> Receipt merchant Address Date Time Total price
25007 A ABC pte ltd 3/7/2016 10:40 12.30
25008 A ABC ptd ltd 3/7/2016 11.30 6.70
25009 B CCC ptd ltd 4/7/2016 07.35 23.40
25010 A ABC pte ltd 4/7/2016 12:40 9.90
</code></pre>
<p>How is it possible to add the 'Total Price' of each line together only if they belong to the same 'merchant', 'date' and 'time', then group them together in a list or dict, example: {['A','3/7/2016', '19.0'], ['A',4/7/2016, '9.90'],..}
My previous code does what I wanted except that I lack the code to count the total price for each same date and merchant. </p>
<pre><code>from collections import defaultdict
from csv import reader
with open("assignment_info.csv") as f:
next(f)
group_dict = defaultdict(list)
for rec, name, _, dte, time, price in reader(f):
group_dict[name, dte].extend(time)
for v in group_dict.values():v.sort()
from pprint import pprint as pp
print 'Sales tracker:'
pp(dict(group_dict))
</code></pre>
| 0 | 2016-10-13T18:29:48Z | 40,028,489 | <pre><code>import pandas as pd
df = pd.read_csv('assignment_info.csv')
df = df.groupby(['merchant', 'Date', 'Time']).sum().reset_index()
df
</code></pre>
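<p>One caveat: grouping on <code>Time</code> as well keeps rows with different times separate, so to get one total per merchant per date (as in the desired output) group on merchant and date only. A self-contained version with the sample rows inlined:</p>

```python
import pandas as pd

# The sample rows from the question, inlined instead of read from CSV.
df = pd.DataFrame({
    'merchant': ['A', 'A', 'B', 'A'],
    'Date': ['3/7/2016', '3/7/2016', '4/7/2016', '4/7/2016'],
    'Total price': [12.30, 6.70, 23.40, 9.90],
})

totals = df.groupby(['merchant', 'Date'])['Total price'].sum().reset_index()
print(totals)
```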
| 1 | 2016-10-13T18:34:53Z | [
"python",
"csv",
"pandas",
"design-patterns",
"group"
] |
adding data from different rows in a csv belonging to a common variable | 40,028,405 | <p>this is my csv excel file information:</p>
<pre><code> Receipt merchant Address Date Time Total price
25007 A ABC pte ltd 3/7/2016 10:40 12.30
25008 A ABC ptd ltd 3/7/2016 11.30 6.70
25009 B CCC ptd ltd 4/7/2016 07.35 23.40
25010 A ABC pte ltd 4/7/2016 12:40 9.90
</code></pre>
<p>How is it possible to add the 'Total Price' of each line together only if they belong to the same 'merchant', 'date' and 'time', then group them together in a list or dict, example: {['A','3/7/2016', '19.0'], ['A',4/7/2016, '9.90'],..}
My previous code does what I wanted except that I lack the code to count the total price for each same date and merchant. </p>
<pre><code>from collections import defaultdict
from csv import reader
with open("assignment_info.csv") as f:
next(f)
group_dict = defaultdict(list)
for rec, name, _, dte, time, price in reader(f):
group_dict[name, dte].extend(time)
for v in group_dict.values():v.sort()
from pprint import pprint as pp
print 'Sales tracker:'
pp(dict(group_dict))
</code></pre>
| 0 | 2016-10-13T18:29:48Z | 40,028,771 | <p>As the other answer points out, <code>pandas</code> is an excellent library for this kind of data manipulation. My answer won't use <code>pandas</code> though.</p>
<p>A few issues:</p>
<ul>
<li>In your problem description, you state that you want to group by <em>three</em> columns, but in your example cases you are only grouping by two. Since the former makes more sense, I am only grouping by <code>name</code> and <code>date</code></li>
<li>You are looping and sorting each value, but for the life of me I can't figure out why.</li>
<li>You declare the default type of the <code>defaultdict</code> a list and then <code>extend</code> with a string, which ends up giving you a (sorted!) list of characters. You don't really want to do this.</li>
<li>Your example uses the syntax of a set: <code>{ [a,b,c], [d,e,f] }</code> but the syntax of a dict makes more sense: <code>{ (a, b): c, }</code>. I have changed the output to the latter.</li>
</ul>
<p>Here is a working example:</p>
<pre><code>from collections import defaultdict
from csv import reader
with open("assignment_info.csv") as f:
next(f)
group_dict = defaultdict(float)
for rec, name, _, dte, time, price in reader(f):
group_dict[name, dte] += float(price)
</code></pre>
<p><code>group_dict</code> is now:</p>
<pre><code>{('A', '3/7/2016'): 19.0, ('A', '4/7/2016'): 9.9, ('B', '4/7/2016'): 23.4}
</code></pre>
<p>I removed extra columns which aren't in your example: here's the file I worked with:</p>
<pre>Receipt,merchant,Address,Date,Time,Total price
25007,A,ABC pte ltd,3/7/2016,10:40,12.30
25008,A,ABC ptd ltd,3/7/2016,11.30,6.70
25009,B,CCC ptd ltd,4/7/2016,07.35,23.40
25010,A,ABC pte ltd,4/7/2016,12:40,9.90</pre>
| 0 | 2016-10-13T18:50:57Z | [
"python",
"csv",
"pandas",
"design-patterns",
"group"
] |
Best way to dynamically add multiple abstract class instances to inheriting class instance | 40,028,433 | <p>I spent quite a bit of time looking for an answer to this question but am unsure of even what it is exactly that I'm looking for. I may even be approaching this entirely wrong by using abstract classes so clarification in any way will be helpful.</p>
<p>I want to allow users to add multiple symptoms and treatments to a single disease from within the form template. With my limited knowledge, the only way I can imagine making this work is by having the maximum expected number of symptom and treatment model fields already defined i.e.:</p>
<pre><code>class Symptoms(models.Model):
    symptom_one = models.CharField(max_length=20)
    symptom_one_severity = models.PositiveIntegerField()
    symptom_two = models.CharField(max_length=20, blank=True)
    symptom_two_severity = models.PositiveIntegerField(blank=True, null=True)
    etc.
</code></pre>
<p>This is what I currently have:</p>
<p>Models.py</p>
<pre><code>class Symptoms(models.Model):
    symptom = models.CharField(max_length=20)
    symptom_severity = models.PositiveIntegerField()

    class Meta:
        abstract = True

class Treatments(models.Model):
    treatment = models.CharField(max_length=20)

    class Meta:
        abstract = True

class Diseases(Symptoms, Treatments):
    disease = models.CharField(max_length=20)
</code></pre>
<p>Forms.py</p>
<pre><code>class DiseaseForm(ModelForm):
model = Diseases
fields = (
'symptom',
'symptom_severity',
'treatment',
'disease',
)
</code></pre>
<p>My proposed method isn't very DRY so I'm wondering <strong>what is the best way to dynamically add multiple abstract models to an inheriting class?</strong></p>
| 0 | 2016-10-13T18:31:09Z | 40,030,487 | <p>Based on your requirement, I would propose have a Disease model with many to many to fields to Symptoms and Treatments model. <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#relationships" rel="nofollow">Read more about django model relationships here</a>. So your models should look like,</p>
<pre><code>class Symptoms(models.Model):
    symptom = models.CharField(max_length=20)
    symptom_severity = models.PositiveIntegerField()

class Treatments(models.Model):
    treatment = models.CharField(max_length=20)

class Diseases(models.Model):
    disease = models.CharField(max_length=20)
    symptoms = models.ManyToManyField(Symptoms)
    treatments = models.ManyToManyField(Treatments)
</code></pre>
| 1 | 2016-10-13T20:34:53Z | [
"python",
"django",
"inheritance",
"django-models"
] |
Import Pandas on apache server causes timeout error | 40,028,497 | <p>I've got a Django project working on an Apache server.</p>
<p>I installed pandas and want to use it to start manipulating data - however something odd is happening.</p>
<p>Anytime I use the <code>import pandas</code> on the production environment, the server will hang up and (after a while) throw a 408 timeout error.</p>
<p>I can comment out the <code>pandas</code> portion and the server responds normally without issue. I can't recreate it in the development environment or command line interface with django.</p>
<p>Here are the <code>httpd-app.conf</code> file:</p>
<pre><code>Alias /tooltrack/static "C:/Users/myfolder/Bitnami Django Stack Projects/tooltrack/static/"
<Directory "C:/Users/myfolder/Bitnami Django Stack Projects/tooltrack/static/">
Options +MultiViews
AllowOverride All
<IfVersion < 2.3 >
Order allow,deny
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
<IfVersion < 2.3 >
Order allow,deny
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
</Directory>
WSGIScriptAlias / 'C:/Users/myfolder/Bitnami Django Stack projects/tooltrack/tooltrack/wsgi.py'
<Directory "C:/Users/myfolder/Bitnami Django Stack projects/tooltrack/tooltrack">
Options +MultiViews
AllowOverride All
<IfVersion < 2.3 >
Order allow,deny
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
<IfVersion < 2.3 >
Order allow,deny
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
</Directory>
<Directory "C:/Users/myfolder/Bitnami Django Stack projects/tooltrack">
Options +MultiViews
AllowOverride All
<IfVersion < 2.3 >
Order allow,deny
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
</Directory>
</code></pre>
<p>I know its hanging up on the import of pandas due to this:</p>
<pre><code>def panda_dataframe_r():
print 'importing pandas ' + str(timezone.now())
import pandas
print 'import done ' + str(timezone.now())
</code></pre>
<p>I can see the <code>importing pandas</code> in the log, however no following <code>import done</code></p>
<p>Any help is greatly appreciated!!</p>
| 0 | 2016-10-13T18:35:44Z | 40,031,407 | <p>Try adding:</p>
<pre><code>WSGIApplicationGroup %{GLOBAL}
</code></pre>
<p>Various of the scientific packages that it is going to need will not work in Python sub interpreters. That directive will force the use of the main interpreter context.</p>
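<p>For reference, a hedged sketch of where that directive could sit in the <code>httpd-app.conf</code> shown above (the path is the one from the question; adjust it to your layout):</p>

```apache
# Force mod_wsgi to run the app in the main (first) interpreter context,
# since NumPy/pandas C extensions do not work in Python sub interpreters.
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias / 'C:/Users/myfolder/Bitnami Django Stack projects/tooltrack/tooltrack/wsgi.py'
```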
| 2 | 2016-10-13T21:35:29Z | [
"python",
"django",
"apache",
"pandas",
"mod-wsgi"
] |
Pythonic way around Enums | 40,028,498 | <p>What is the <em>pythonic</em> way to tell the caller of a function what values a given parameter supports?</p>
<p>Here is an example for PyQt (a GUI toolkit). Say I have a checkbox: </p>
<pre><code>class checkbox(object):
....
def setCheckState(self, value):
....
</code></pre>
<p>Here, <code>setCheckState()</code> should only expect <strong>checked</strong> or <strong>unchecked</strong>.</p>
<p>PyQt uses a built-in enumeration (i.e. <code>Qt.Checked</code> or <code>Qt.Unchecked</code>), but this is awful. I am constantly in the documentation looking for the enum for the object I am working with.</p>
<p>Obviously PyQt is written in an <em>unpythonic</em> C++ style. How <em>should</em> this or a similar problem be handled in Python? According to <a href="https://www.python.org/dev/peps/pep-0435/" rel="nofollow">PEP 435</a>, enums seem to be a recent addition to the language and for very specific applications, so I would assume there is/was a better way to handle this?</p>
<p>I want to make the code I write easy to use when my functions require specific parameter values--almost like a combobox for functions.</p>
| 2 | 2016-10-13T18:35:46Z | 40,029,336 | <p>The <em>One Obvious Way</em> is function annotations.</p>
<pre><code>class CheckBox(enum.Enum):
Off = 0
On = 1
def setCheckState(self, value: CheckBox):
...
</code></pre>
<p>This says quite clearly that <code>value</code> should be an instance of <code>CheckBox</code>. Having <code>Enum</code> just makes that a bit easier.</p>
<p>Annotations themselves aren't directly supported in 2.7, though. Common workarounds include putting that information in the function doc string (where various tools can find it) or in comments (as we already knew).</p>
<p>If looking for a method for your own code: use an annotating decorator. This has the advantage of continuing to work in 3+:</p>
<pre><code>class annotate(object):
def __init__(self, **kwds):
self.kwds = kwds
    def __call__(self, func):
        func.__annotations__ = self.kwds
        return func
@annotate(value=CheckBox)
def setCheckState(self, value):
...
</code></pre>
<p>To be a robust decorator it should check that the contents of <code>kwds</code> matches the function parameter names.</p>
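<p>A small self-contained sketch of how the decorator could be used and introspected (the decorator is restated here so the block runs on its own, and <code>set_check_state</code> is a stand-in function, not PyQt's API):</p>

```python
import enum

class CheckBox(enum.Enum):
    Off = 0
    On = 1

class annotate(object):
    def __init__(self, **kwds):
        self.kwds = kwds
    def __call__(self, func):
        func.__annotations__ = self.kwds
        return func  # return the function so the decorated name stays callable

@annotate(value=CheckBox)
def set_check_state(value):
    return value

# Callers (and tools) can discover the expected parameter type:
print(set_check_state.__annotations__['value'])  # <enum 'CheckBox'>
```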
| 1 | 2016-10-13T19:25:52Z | [
"python",
"python-2.7",
"function",
"parameters",
"enums"
] |
Pythonic way around Enums | 40,028,498 | <p>What is the <em>pythonic</em> way to tell the caller of a function what values a given parameter supports?</p>
<p>Here is an example for PyQt (a GUI toolkit). Say I have a checkbox: </p>
<pre><code>class checkbox(object):
....
def setCheckState(self, value):
....
</code></pre>
<p>Here, <code>setCheckState()</code> should only expect <strong>checked</strong> or <strong>unchecked</strong>.</p>
<p>PyQt uses a built-in enumeration (i.e. <code>Qt.Checked</code> or <code>Qt.Unchecked</code>), but this is awful. I am constantly in the documentation looking for the enum for the object I am working with.</p>
<p>Obviously PyQt is written in an <em>unpythonic</em> C++ style. How <em>should</em> this or a similar problem be handled in Python? According to <a href="https://www.python.org/dev/peps/pep-0435/" rel="nofollow">PEP 435</a>, enums seem to be a recent addition to the language and for very specific applications, so I would assume there is/was a better way to handle this?</p>
<p>I want to make the code I write easy to use when my functions require specific parameter values--almost like a combobox for functions.</p>
| 2 | 2016-10-13T18:35:46Z | 40,030,794 | <p>That will do the trick</p>
<pre><code>import collections
def create_enum(container, start_num, *enum_words):
return collections.namedtuple(container, enum_words)(*range(start_num, start_num + len(enum_words)))
Switch = create_enum('enums', 1, 'On', 'Off')
</code></pre>
<p><em>Switch</em> is your enum:</p>
<pre><code>In [20]: Switch.On
Out[20]: 1
In [21]: Switch.Off
Out[21]: 2
</code></pre>
<p>OK, I got the error of my ways - I mixed up representation with value.</p>
<p>Nevertheless, if you want to enumerate a larger range - in my <strong>fake</strong> approach you don't have to add values manually. Of course, if you have sequential numbers.</p>
<p>And I hate extra typing :-)</p>
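<p>A hedged sketch of validating a parameter against such a pseudo-enum (plain Python, no PyQt; the <code>set_check_state</code> helper is an illustration, not part of any library):</p>

```python
import collections

def create_enum(container, start_num, *enum_words):
    return collections.namedtuple(container, enum_words)(
        *range(start_num, start_num + len(enum_words)))

Switch = create_enum('enums', 1, 'On', 'Off')

def set_check_state(value):
    # namedtuple instances are tuples, so membership tests just work
    if value not in Switch:
        raise ValueError('expected one of %r' % (tuple(Switch),))
    return value

print(set_check_state(Switch.Off))  # 2
```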
| 0 | 2016-10-13T20:53:56Z | [
"python",
"python-2.7",
"function",
"parameters",
"enums"
] |
Filtering syntax for pandas dataframe groupby with logic condition | 40,028,500 | <p>I have a pandas dataframe containing indices that have a one-to-many relationship. A very simplified and shortened example of my data is shown in the <a href="https://i.stack.imgur.com/7z1ih.jpg" rel="nofollow">DataFrame Example</a> link. I want to get a list or Series or ndarray of the unique namIdx values in which nCldLayers <= 1. The final result should show indices of 601 and 603.</p>
<ol>
<li><p>I am able to accomplish this with the 3 statements below, but I am wondering if there is a much better, more succinct way with perhaps 'filter', 'select', or 'where'.</p>
<pre><code>grouped=(namToViirs['nCldLayers']<=1).groupby(namToViirs.index).all(axis=0)
grouped = grouped[grouped==True]
filterIndex = grouped.index
</code></pre></li>
<li><p>Is there a better approach in accomplishing this result by applying the logical condition (namToViirs['nCldLayers >= 1) in a subsequent part of the chain, i.e., first group then apply logical condition, and then retrieve only the namIdx where the logical result is true for each member of the group?</p></li>
</ol>
| 1 | 2016-10-13T18:35:54Z | 40,028,734 | <p>I think your code works nicely; only a couple of small changes are needed:</p>
<p>In <code>all</code> you can omit <code>axis=0</code>,<br>
and in <code>grouped[grouped==True]</code> you can omit <code>==True</code>: </p>
<pre><code>grouped=(namToViirs['nCldLayers']<=1).groupby(level='namldx').all()
grouped = grouped[grouped]
filterIndex = grouped.index
print (filterIndex)
Int64Index([601, 603], dtype='int64', name='namldx')
</code></pre>
<p>I think it is better to filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> first and then <code>groupby</code>, because fewer loops mean better performance.</p>
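<p>For completeness, a runnable sketch with made-up data shaped like the question's (index named <code>namIdx</code>, values chosen so that 601 and 603 survive; the real data obviously differs):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {'nCldLayers': [1, 0, 2, 1, 0, 1]},
    index=pd.Index([601, 601, 602, 602, 603, 603], name='namIdx'))

# group the boolean condition and keep only groups where it holds everywhere
grouped = (df['nCldLayers'] <= 1).groupby(level='namIdx').all()
filterIndex = grouped[grouped].index
print(list(filterIndex))  # [601, 603]
```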
| 1 | 2016-10-13T18:49:08Z | [
"python",
"pandas",
"dataframe"
] |
Filtering syntax for pandas dataframe groupby with logic condition | 40,028,500 | <p>I have a pandas dataframe containing indices that have a one-to-many relationship. A very simplified and shortened example of my data is shown in the <a href="https://i.stack.imgur.com/7z1ih.jpg" rel="nofollow">DataFrame Example</a> link. I want to get a list or Series or ndarray of the unique namIdx values in which nCldLayers <= 1. The final result should show indices of 601 and 603.</p>
<ol>
<li><p>I am able to accomplish this with the 3 statements below, but I am wondering if there is a much better, more succinct way with perhaps 'filter', 'select', or 'where'.</p>
<pre><code>grouped=(namToViirs['nCldLayers']<=1).groupby(namToViirs.index).all(axis=0)
grouped = grouped[grouped==True]
filterIndex = grouped.index
</code></pre></li>
<li><p>Is there a better approach in accomplishing this result by applying the logical condition (namToViirs['nCldLayers >= 1) in a subsequent part of the chain, i.e., first group then apply logical condition, and then retrieve only the namIdx where the logical result is true for each member of the group?</p></li>
</ol>
| 1 | 2016-10-13T18:35:54Z | 40,028,939 | <p>For question 1, see jezrael answer. For question 2, you could play with indexes as sets:</p>
<pre><code>namToViirs.index[namToViirs.nCldLayers <= 1] \
.difference(namToViirs.index[namToViirs.nCldLayers > 1])
</code></pre>
| 0 | 2016-10-13T19:01:44Z | [
"python",
"pandas",
"dataframe"
] |
Comparing two large text files column by column in Python | 40,028,560 | <p>I have two large tab separated text files with dimensions : 36000 rows x 3000 columns. The structure of the columns is same in both files but they may not be sorted.</p>
<p>I need to <em>compare only the numeric columns</em> between these two files (approx. 2970 columns) and export out those rows where there is a difference in the value between any two respective columns.</p>
<p>Problem: Memory issue</p>
<p>Things I tried:</p>
<p>1) Transposing data: Making the data from wide to long and reading the data chunk by chunk.
Problem: Data bloats to a more than few million rows and python throws me a memory error</p>
<p>2) Difflib: Difflib along with generators and without transposing did provide me an output which was efficient but it compares the two files row by row. It doesn't differentiate the columns in the tab separated file.(I need them to be differentiated into columns since I will be performing some column operations between the difference rows.</p>
<p>3) Chunk and join: This is third approach I am trying wherein I will divide one file into chunks and merge it on the common keys with the other file repeatedly and find the difference in those chunks. This is going to be a shitty approach and its going to take a lot of time but I am unable to think of any thing else.</p>
<p>Also:
These types of questions have been answered in the past, but they only dealt with processing a single huge file. </p>
<p>Any suggestions for a better approach in <strong>Python</strong> will be greatly appreciated. Thank you.</p>
| 0 | 2016-10-13T18:39:26Z | 40,028,745 | <p>First of all, if files are that big, they should be read row by row.</p>
<p>Reading one file row by row is simple:</p>
<pre><code>with open(...) as f:
for row in f:
...
</code></pre>
<p>To iterate two files row by row, zip them:</p>
<pre><code>import itertools

with open(...) as f1, open(...) as f2:
for row1, row2 in itertools.izip(f1, f2):
# compare rows, decide what to do with them
</code></pre>
<p>I used <code>izip</code>, as it won't zip everything at once, like <code>zip</code> would in Python 2.
In Python 3, use <code>zip</code>. It does the right thing there.
It will go row by row and yield the pairs.</p>
<p>The next question is comparing by column. Separate the columns:</p>
<pre><code>columns = row.split('\t') # they are separated by tabs, therefore \t
</code></pre>
<p>Now pick the relevant columns and compare them. Then discard irrelevant rows and write the relevant ones to the output.</p>
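<p>Putting the pieces together, a minimal sketch (Python 3 <code>zip</code>; on Python 2 substitute <code>itertools.izip</code>). The <code>diff_rows</code> name and the choice of column indices to compare are mine, not from the question:</p>

```python
def diff_rows(path1, path2, compare_cols):
    """Stream both files in parallel and yield pairs of split rows
    that differ in any of the given column indices."""
    with open(path1) as f1, open(path2) as f2:
        for row1, row2 in zip(f1, f2):  # itertools.izip on Python 2
            cols1 = row1.rstrip('\n').split('\t')
            cols2 = row2.rstrip('\n').split('\t')
            if any(cols1[i] != cols2[i] for i in compare_cols):
                yield cols1, cols2
```

<p>Because it is a generator, memory use stays flat no matter how large the files are.</p>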
| 1 | 2016-10-13T18:49:46Z | [
"python",
"compare",
"large-files"
] |
possible to compile python.exe without Visual Studio, with MinGw | 40,028,671 | <p>I am trying to achieve things laid out on this page:
<a href="https://blogs.msdn.microsoft.com/pythonengineering/2016/04/26/cpython-embeddable-zip-file/" rel="nofollow">https://blogs.msdn.microsoft.com/pythonengineering/2016/04/26/cpython-embeddable-zip-file/</a></p>
<p>The code I am trying to compile is just this:</p>
<pre><code>#include "Python.h"
int
wmain(int argc, wchar_t **argv)
{
return Py_Main(argc, argv);
}
</code></pre>
<p>In VisualStudio 15 I have to add the python/include and link to the python libs directories in the project and also add:</p>
<pre><code>#include "stdafx.h"
</code></pre>
<p>and then it compiles and works fine. I'm just curious, is it possible to do this with mingw, or another open source C/C++ compiler?</p>
<p>If I place a file <code>my_python.cpp</code> which contains the following: </p>
<pre><code>#include "include/Python.h"
int
wmain(int argc, wchar_t **argv)
{
return Py_Main(argc, argv);
}
</code></pre>
<p>in the root directory of a fresh 32-bit python 3.5 install (windows 7 x64), cd to that directory and try to run:</p>
<p><code>gcc my_python.cpp -Llibs -lpython35 -o my_python.exe</code>
I get this error:</p>
<pre><code>c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../libmingw32.a(main.o):(.text.startup+0xa0): undefined reference to `WinMain@16'
</code></pre>
<p>Any way to fix this, get it running without Visual Studio?</p>
| -1 | 2016-10-13T18:46:03Z | 40,065,866 | <p>The error is unrelated to Python. <code>wmain</code> is Visual Studio specific. GCC does not treat <code>wmain</code> as entry point, it just sits there as a function which never gets called.</p>
<p>GCC requires <code>main</code> or <code>WinMain</code> as entry point. If neither of those entry points is found, then the compiler will complain. So let's just use <code>main</code> as entry point.</p>
<p><code>Py_Main</code> presumably expects wide character string input. <code>CommandLineToArgvW</code> will always provide that. Example:</p>
<pre><code>#include <Windows.h>
#include "include/Python.h"
int main()
{
int argc;
wchar_t** argv = CommandLineToArgvW( GetCommandLineW(), &argc );
return Py_Main(argc, argv);
}
</code></pre>
<p>If you still get the same error, just provide <code>WinMain</code> entry point to make it happy</p>
<pre><code>#include <Windows.h>
#include "include/Python.h"
int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
int argc;
wchar_t** argv = CommandLineToArgvW( GetCommandLineW(), &argc );
return Py_Main(argc, argv);
}
</code></pre>
<p>Also note, <code>*.lib</code> files are usually for Visual Studio. GCC version expects library names with <code>*.a</code> extension.</p>
| 0 | 2016-10-16T01:16:24Z | [
"python",
"c++",
"c",
"visual-studio",
"mingw"
] |
Python string assignment error occurs on second loop, but not first | 40,028,690 | <p>The first run-through of the while loop goes fine:</p>
<pre><code>hour_count = list('00/')
hours = 0
while hours < 24: #loop while hours < 24
hour_count[1] = hours #<- error occurs on this line
hour_count = ''.join(hour_count) #convert to string
...
hours += 1
</code></pre>
<p>However, upon the second loop, it gives a TypeError: 'str' object does not support item assignment. The purpose is to set a file path.</p>
| -3 | 2016-10-13T18:46:50Z | 40,028,743 | <p>When you run this line <code>hour_count = ''.join(hour_count)</code>, you're changing the data type of <code>hour_count</code> from a list to a string.</p>
<p>Because strings are immutable, you can't modify one character via the index notation (the line before this line attempts to do that).</p>
<p>I'm not totally sure what your goal is, but perhaps you're looking to append to the list. These docs will help with that.</p>
<p><a href="https://docs.python.org/3.4/tutorial/datastructures.html" rel="nofollow">https://docs.python.org/3.4/tutorial/datastructures.html</a></p>
| 1 | 2016-10-13T18:49:35Z | [
"python",
"python-3.x",
"typeerror"
] |
Python string assignment error occurs on second loop, but not first | 40,028,690 | <p>The first run-through of the while loop goes fine:</p>
<pre><code>hour_count = list('00/')
hours = 0
while hours < 24: #loop while hours < 24
hour_count[1] = hours #<- error occurs on this line
hour_count = ''.join(hour_count) #convert to string
...
hours += 1
</code></pre>
<p>However, upon the second loop, it gives a TypeError: 'str' object does not support item assignment. The purpose is to set a file path.</p>
| -3 | 2016-10-13T18:46:50Z | 40,028,761 | <p>You changed the type;</p>
<pre><code># hour_count at this point is an array
hour_count[1] = hours
# The result of join is a string object
hour_count = ''.join(hour_count)
</code></pre>
<p>Next time through <code>hour_count</code> is a string and you can't do "string[1] = ..."</p>
| 0 | 2016-10-13T18:50:33Z | [
"python",
"python-3.x",
"typeerror"
] |
Suppressing Pandas dataframe plot output | 40,028,727 | <p>I am plotting a dataframe:</p>
<pre><code> ax = df.plot()
fig = ax.get_figure()
fig.savefig("{}/{}ts.png".format(IMGPATH, series[pfxlen:]))
</code></pre>
<p>It works fine. But, on the console, I get:</p>
<pre><code>/usr/lib64/python2.7/site-packages/matplotlib/axes.py:2542: UserWarning: Attempting to set identical left==right results in singular transformations; automatically expanding. left=736249.924955, right=736249.924955 + 'left=%s, right=%s') % (left, right))
</code></pre>
<p>Basic searching hasn't showed me how to solve this error. So, I want to suppress these errors, since they garbage up the console. How can I do this?</p>
| 0 | 2016-10-13T18:48:25Z | 40,029,101 | <p>Those aren't errors, but warnings. If you aren't concerned by those and just want to silence them, it's as simple as:</p>
<pre><code>import warnings
warnings.filterwarnings('ignore')
</code></pre>
<p>Additionally, pandas and other libraries may trigger NumPY floating-point errors. If you encounter those, you have to silence them as well:</p>
<pre><code>import numpy as np
np.seterr('ignore')
</code></pre>
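<p>If silencing everything globally feels too blunt, <code>warnings.catch_warnings</code> scopes the suppression to just the plotting call (a sketch; the plotting line is a placeholder):</p>

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('ignore')  # silenced only inside this block
    # ax = df.plot(); fig.savefig(...) would go here
    warnings.warn('stand-in for the matplotlib warning')
# outside the block, warnings behave normally again
```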
| 2 | 2016-10-13T19:12:31Z | [
"python",
"pandas"
] |
Format certain JSON objects on one line | 40,028,755 | <p>Consider the following code:</p>
<pre><code>>>> import json
>>> data = {
... 'x': [1, {'$special': 'a'}, 2],
... 'y': {'$special': 'b'},
... 'z': {'p': True, 'q': False}
... }
>>> print(json.dumps(data, indent=2))
{
"y": {
"$special": "b"
},
"z": {
"q": false,
"p": true
},
"x": [
1,
{
"$special": "a"
},
2
]
}
</code></pre>
<p>What I want is to format the JSON so that JSON objects that have only a single property <code>'$special'</code> are rendered on a single line, as follows.</p>
<pre><code>{
"y": {"$special": "b"},
"z": {
"q": false,
"p": true
},
"x": [
1,
{"$special": "a"},
2
]
}
</code></pre>
<p>I have played around with implementing a custom <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder" rel="nofollow"><code>JSONEncoder</code></a> and passing that in to <code>json.dumps</code> as the <code>cls</code> argument, but the two methods on <code>JSONEncoder</code> each have a problem:</p>
<ul>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.default" rel="nofollow"><code>default</code></a> method is called for each part of <code>data</code>, but the return value is not a raw JSON string, so there doesn't appear to be any way to adjust its formatting.</p></li>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.encode" rel="nofollow"><code>encode</code></a> method does return a raw JSON string, but it is only called once for the <code>data</code> as a whole.</p></li>
</ul>
<p>Is there any way I can get <code>JSONEncoder</code> to do what I want?</p>
| 2 | 2016-10-13T18:50:10Z | 40,029,630 | <p>You can do it, but you'd basically have to copy/modify a lot of the code out of <code>json.encoder</code> because the encoding functions aren't really designed to be partially overridden.</p>
<p>Basically, copy the entirety of <code>_make_iterencode</code> from <code>json.encoder</code> and make the changes so that your special dictionary gets printed without newline indents. Then monkeypatch the json package to use your modified version, run the json dump, then undo the monkeypatch (if you want).</p>
<p>The <code>_make_iterencode</code> function is pretty long, so I've only posted the portions that need to be changed.</p>
<pre><code>import json
import json.encoder
def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
...
def _iterencode_dict(dct, _current_indent_level):
...
if _indent is not None:
_current_indent_level += 1
if '$special' in dct:
newline_indent = ''
item_separator = _item_separator
else:
newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
item_separator = _item_separator + newline_indent
yield newline_indent
...
if newline_indent is not None:
_current_indent_level -= 1
if '$special' not in dct:
yield '\n' + (' ' * (_indent * _current_indent_level))
def main():
data = {
'x': [1, {'$special': 'a'}, 2],
'y': {'$special': 'b'},
'z': {'p': True, 'q': False},
}
orig_make_iterencoder = json.encoder._make_iterencode
json.encoder._make_iterencode = _make_iterencode
print(json.dumps(data, indent=2))
json.encoder._make_iterencode = orig_make_iterencoder
</code></pre>
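<p>An alternative sketch that avoids the monkeypatch entirely: swap each single-key <code>$special</code> dict for a unique placeholder string, dump normally, then splice the compact rendering back in. It's a hack, and the <code>dumps_inline_special</code> helper name is mine, but it is short and does not depend on <code>json</code> internals:</p>

```python
import json
import uuid

def dumps_inline_special(data, indent=2):
    stash = {}

    def swap(obj):
        # replace {"$special": ...} dicts with unique placeholder strings
        if isinstance(obj, dict):
            if set(obj) == {'$special'}:
                key = uuid.uuid4().hex
                stash['"%s"' % key] = json.dumps(obj)  # compact one-line form
                return key
            return {k: swap(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [swap(v) for v in obj]
        return obj

    text = json.dumps(swap(data), indent=indent)
    for placeholder, compact in stash.items():
        text = text.replace(placeholder, compact)
    return text

data = {'x': [1, {'$special': 'a'}, 2], 'y': {'$special': 'b'}}
print(dumps_inline_special(data))
```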
| 0 | 2016-10-13T19:44:05Z | [
"python",
"json",
"python-3.x",
"formatting"
] |
Format certain JSON objects on one line | 40,028,755 | <p>Consider the following code:</p>
<pre><code>>>> import json
>>> data = {
... 'x': [1, {'$special': 'a'}, 2],
... 'y': {'$special': 'b'},
... 'z': {'p': True, 'q': False}
... }
>>> print(json.dumps(data, indent=2))
{
"y": {
"$special": "b"
},
"z": {
"q": false,
"p": true
},
"x": [
1,
{
"$special": "a"
},
2
]
}
</code></pre>
<p>What I want is to format the JSON so that JSON objects that have only a single property <code>'$special'</code> are rendered on a single line, as follows.</p>
<pre><code>{
"y": {"$special": "b"},
"z": {
"q": false,
"p": true
},
"x": [
1,
{"$special": "a"},
2
]
}
</code></pre>
<p>I have played around with implementing a custom <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder" rel="nofollow"><code>JSONEncoder</code></a> and passing that in to <code>json.dumps</code> as the <code>cls</code> argument, but the two methods on <code>JSONEncoder</code> each have a problem:</p>
<ul>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.default" rel="nofollow"><code>default</code></a> method is called for each part of <code>data</code>, but the return value is not a raw JSON string, so there doesn't appear to be any way to adjust its formatting.</p></li>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.encode" rel="nofollow"><code>encode</code></a> method does return a raw JSON string, but it is only called once for the <code>data</code> as a whole.</p></li>
</ul>
<p>Is there any way I can get <code>JSONEncoder</code> to do what I want?</p>
| 2 | 2016-10-13T18:50:10Z | 40,030,695 | <p>The <code>json</code> module is not really designed to give you that much control over the output; indentation is mostly meant to aid readability while debugging.</p>
<p>Instead of making <code>json</code> produce the output, you could <em>transform</em> the output using the standard library <a href="https://docs.python.org/3/library/tokenize.html" rel="nofollow"><code>tokenize</code> module</a>:</p>
<pre><code>import tokenize
from io import BytesIO
def inline_special(json_data):
def adjust(t, ld,):
"""Adjust token line number by offset"""
(sl, sc), (el, ec) = t.start, t.end
return t._replace(start=(sl + ld, sc), end=(el + ld, ec))
def transform():
with BytesIO(json_data.encode('utf8')) as b:
held = [] # to defer newline tokens
lastend = None # to track the end pos of the prev token
loffset = 0 # line offset to adjust tokens by
tokens = tokenize.tokenize(b.readline)
for tok in tokens:
if tok.type == tokenize.NL:
# hold newlines until we know there's no special key coming
held.append(adjust(tok, loffset))
elif (tok.type == tokenize.STRING and
tok.string == '"$special"'):
# special string, collate tokens until the next rbrace
# held newlines are discarded, adjust the line offset
loffset -= len(held)
held = []
text = [tok.string]
while tok.exact_type != tokenize.RBRACE:
tok = next(tokens)
if tok.type != tokenize.NL:
text.append(tok.string)
if tok.string in ':,':
text.append(' ')
else:
loffset -= 1 # following lines all shift
line, col = lastend
text = ''.join(text)
endcol = col + len(text)
yield tokenize.TokenInfo(
tokenize.STRING, text, (line, col), (line, endcol),
'')
# adjust any remaining tokens on this line
while tok.type != tokenize.NL:
tok = next(tokens)
yield tok._replace(
start=(line, endcol),
end=(line, endcol + len(tok.string)))
endcol += len(tok.string)
else:
# uninteresting token, yield any held newlines
if held:
yield from held
held = []
# adjust and remember last position
tok = adjust(tok, loffset)
lastend = tok.end
yield tok
return tokenize.untokenize(transform()).decode('utf8')
</code></pre>
<p>This reformats your sample successfully:</p>
<pre><code>import json
data = {
'x': [1, {'$special': 'a'}, 2],
'y': {'$special': 'b'},
'z': {'p': True, 'q': False}
}
>>> print(inline_special(json.dumps(data, indent=2)))
{
"x": [
1,
{"$special": "a"},
2
],
"y": {"$special": "b"},
"z": {
"p": true,
"q": false
}
}
</code></pre>
| 2 | 2016-10-13T20:48:30Z | [
"python",
"json",
"python-3.x",
"formatting"
] |
Format certain JSON objects on one line | 40,028,755 | <p>Consider the following code:</p>
<pre><code>>>> import json
>>> data = {
... 'x': [1, {'$special': 'a'}, 2],
... 'y': {'$special': 'b'},
... 'z': {'p': True, 'q': False}
... }
>>> print(json.dumps(data, indent=2))
{
"y": {
"$special": "b"
},
"z": {
"q": false,
"p": true
},
"x": [
1,
{
"$special": "a"
},
2
]
}
</code></pre>
<p>What I want is to format the JSON so that JSON objects that have only a single property <code>'$special'</code> are rendered on a single line, as follows.</p>
<pre><code>{
"y": {"$special": "b"},
"z": {
"q": false,
"p": true
},
"x": [
1,
{"$special": "a"},
2
]
}
</code></pre>
<p>I have played around with implementing a custom <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder" rel="nofollow"><code>JSONEncoder</code></a> and passing that in to <code>json.dumps</code> as the <code>cls</code> argument, but the two methods on <code>JSONEncoder</code> each have a problem:</p>
<ul>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.default" rel="nofollow"><code>default</code></a> method is called for each part of <code>data</code>, but the return value is not a raw JSON string, so there doesn't appear to be any way to adjust its formatting.</p></li>
<li><p>The <code>JSONEncoder</code> <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder.encode" rel="nofollow"><code>encode</code></a> method does return a raw JSON string, but it is only called once for the <code>data</code> as a whole.</p></li>
</ul>
<p>Is there any way I can get <code>JSONEncoder</code> to do what I want?</p>
| 2 | 2016-10-13T18:50:10Z | 40,114,245 | <p>I found the following regex-based solution to be simplest, albeit … <em>regex-based</em>.</p>
<pre><code>import json
import re
data = {
'x': [1, {'$special': 'a'}, 2],
'y': {'$special': 'b'},
'z': {'p': True, 'q': False}
}
text = json.dumps(data, indent=2)
pattern = re.compile(r"""
{
\s*
"\$special"
\s*
:
\s*
"
    ((?:\\"|[^"])*)          # Captures zero or more EscapedQuote or NotQuote chars
"
\s*
}
""", re.VERBOSE)
print(pattern.sub(r'{"$special": "\1"}', text))
</code></pre>
<p>The output follows.</p>
<pre><code>{
"x": [
1,
{"$special": "a"},
2
],
"y": {"$special": "b"},
"z": {
"q": false,
"p": true
}
}
</code></pre>
| 0 | 2016-10-18T17:11:36Z | [
"python",
"json",
"python-3.x",
"formatting"
] |
django - subclassing a model just to add new methods | 40,028,828 | <p>I need to subclass a model from a third-party app (<code>django-oscar</code>).</p>
<p>If i do this</p>
<pre><code>from oscar.apps.catalogue.models import Category
class NewCategory(Category):
@property
def product_count(self):
return self.product_set.all().count()
class Meta:
db_table = 'catalogue_category'
</code></pre>
<p>Django will think that it is a multi-table inheritance, and <code>NewCategory</code> is a child model for <code>Category</code>. This will result in errors such as </p>
<pre><code>OperationalError at /api/categories/
no such column: catalogue_category.category_ptr_id
</code></pre>
<p>I can get away with this</p>
<pre><code>def product_count(self):
return self.product_set.all().count()
Category.product_count = product_count
</code></pre>
<p>but this doesn't seem nice, plus I am unable to add a <code>@property</code> decorator this way.</p>
<p>Is there a cleaner way to do this?</p>
| 0 | 2016-10-13T18:54:17Z | 40,029,027 | <p>You need a <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#proxy-models" rel="nofollow">proxy model</a>.</p>
<pre><code>class NewCategory(Category):
class Meta:
proxy = True
...
</code></pre>
| 1 | 2016-10-13T19:08:05Z | [
"python",
"django"
] |
Unable to import matplotlib.pyplot with latest conda | 40,028,842 | <p>I am running Conda version 4.2.9 with Python 2.7.12, I have confirmed this bug with a fresh conda environment and only the matplotlib package</p>
<p>My problem occurs when I try to import matplotlib.pyplot:</p>
<pre><code>>>> import matplotlib.pyplot
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/backend_qt5agg.py", line 16, in <module>
from .backend_qt5 import QtCore
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/backend_qt5.py", line 31, in <module>
from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/qt_compat.py", line 137, in <module>
from PyQt4 import QtCore, QtGui
ImportError: No module named PyQt4
</code></pre>
<p>I have googled this issue extensively and see that there are some temporary solutions, however, I don't know how to use one of them and the other one doesn't work for me.</p>
<p>One solution was to set the backend for matplotlib manually like so:</p>
<pre><code>>>> import matplotlib
>>> matplotlib.use('Qt5Agg')
>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/backend_qt5agg.py", line 16, in <module>
from .backend_qt5 import QtCore
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/backend_qt5.py", line 31, in <module>
from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
File "/home/me/anaconda2/envs/snowflake/lib/python2.7/site-packages/matplotlib/backends/qt_compat.py", line 137, in <module>
from PyQt4 import QtCore, QtGui
ImportError: No module named PyQt4
</code></pre>
<p>As you can see this failed for me. The other solution was to use a patch (<a href="https://github.com/ContinuumIO/anaconda-issues/issues/1068" rel="nofollow">sourced from this thread</a>). Unfortunately I do not know how to use this patch or how to "pin pyqt to 4.11". Can anyone help?</p>
| 0 | 2016-10-13T18:55:25Z | 40,043,798 | <p>Pinning packages is explained <a href="http://conda.pydata.org/docs/faq.html#pinning-packages" rel="nofollow">here</a>.</p>
<p><a href="https://github.com/ContinuumIO/anaconda-recipes/blob/master/matplotlib/rctmp_pyside.patch" rel="nofollow">The patch</a> is so small you can apply it by hand. Look for your <a href="http://matplotlib.org/users/customizing.html#the-matplotlibrc-file" rel="nofollow">matplotlibrc</a> file (probably <code>.config/matplotlib/matplotlibrc</code>), and set the default backend to Qt5Agg by editing the following lines:</p>
<pre><code>backend : Qt5Agg
backend.qt5 : PyQt5
</code></pre>
<p>However, since you already have <code>use("Qt5Agg")</code> in your program, which overrides the rcfile, I don't think this will help.</p>
<p>I think your issue is caused by having a <code>QT_API</code> environment variable that still is set to <code>PyQt4</code> (or <code>PySide</code>). Check this, for instance, by adding <code>import os; print(os.environ.get('QT_API'))</code> to your program. If this is the case, add the line <code>export QT_API=PyQt5</code> to your <code>.bashrc</code> (assuming you run Linux).</p>
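<p>For instance, a quick sketch of checking and overriding the variable from inside the script itself (the override has to happen before matplotlib is imported):</p>

```python
import os

# matplotlib's qt_compat reads QT_API at import time, so a stale value such as
# "PyQt4" can force the Qt4 binding even though use("Qt5Agg") was requested
print(os.environ.get("QT_API"))   # e.g. None, "PyQt4", "PyQt5"

# override it *before* any matplotlib import
os.environ["QT_API"] = "PyQt5"
```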
| 0 | 2016-10-14T12:58:38Z | [
"python",
"matplotlib",
"pyqt",
"conda"
] |
Automatic Image Inpainting with Keras and GpuElemwise Errors | 40,028,847 | <p>I have a Keras Model => </p>
<blockquote>
<p>Input : Gray Image : (1, 224, 224) </p>
<p>Output : RGB Image: (3, 224, 224)</p>
</blockquote>
<p>and I want to predict pixel colors by giving it Grayscale images and getting RGB ones.
I tried to make a network in Keras which mostly resembles <a href="http://tinyclouds.org/colorize" rel="nofollow">this one (which has been made in Tensorflow)</a>.</p>
<p>Here's the Model code :</p>
<pre><code>first_input = Input(batch_shape=(None, 1, 224, 224))
conv0_1_3 = Convolution2D(3, 3, 3, activation='relu', name='conv0_1_3', border_mode='same')(first_input)
conv1_1_64 = Convolution2D(64, 3, 3, activation='relu', name='conv1_1', border_mode='same')(conv0_1_3)
conv1_2_64 = Convolution2D(64, 3, 3, activation='relu', name='conv1_2', border_mode='same')(conv1_1_64)
conv1_2_64 = MaxPooling2D((2, 2))(conv1_2_64)
conv2_1_128 = Convolution2D(128, 3, 3, activation='relu', name='conv2_1', border_mode='same')(conv1_2_64)
conv2_2_128 = Convolution2D(128, 3, 3, activation='relu', name='conv2_2', border_mode='same')(conv2_1_128)
conv2_2_128 = MaxPooling2D((2, 2))(conv2_2_128)
conv3_1_256 = Convolution2D(256, 3, 3, activation='relu', name='conv3_1', border_mode='same')(conv2_2_128)
conv3_2_256 = Convolution2D(256, 3, 3, activation='relu', name='conv3_2', border_mode='same')(conv3_1_256)
conv3_3_256 = Convolution2D(256, 3, 3, activation='relu', name='conv3_3', border_mode='same')(conv3_2_256)
conv3_3_256 = MaxPooling2D((2, 2))(conv3_3_256)
conv4_1_512 = Convolution2D(512, 3, 3, activation='relu', name='conv4_1', border_mode='same')(conv3_3_256)
conv4_2_512 = Convolution2D(512, 3, 3, activation='relu', name='conv4_2', border_mode='same')(conv4_1_512)
conv4_3_512 = Convolution2D(512, 3, 3, activation='relu', name='conv4_3', border_mode='same')(conv4_2_512)
conv4_3_512 = MaxPooling2D((2, 2))(conv4_3_512)
residual1 = BatchNormalization(axis=1, name='batch1')(conv4_3_512)
residual1 = Convolution2D(256, 3, 3, activation='relu', name='residual1', border_mode='same')(residual1)
residual1 = UpSampling2D(name='upsample1')(residual1)
conv3_3_256_batch_norm = BatchNormalization(axis=1, name='batch2')(conv3_3_256)
merge1 = merge((conv3_3_256_batch_norm, residual1), mode='concat', name='merge1', concat_axis=0)
residual2 = Convolution2D(128, 3, 3, activation='relu', name='residual2', border_mode='same')(merge1)
residual2 = UpSampling2D(name='upsample2')(residual2)
conv2_2_128_batch_norm = BatchNormalization(axis=1, name='batch3')(conv2_2_128)
merge2 = merge((conv2_2_128_batch_norm, residual2), mode='concat', name='merge2', concat_axis=0)
residual3 = Convolution2D(64, 3, 3, activation='relu', name='residual3', border_mode='same')(merge2)
residual3 = UpSampling2D(name='upsample3')(residual3)
conv1_2_64_batch_norm = BatchNormalization(axis=1, name='batch4')(conv1_2_64)
merge3 = merge((conv1_2_64_batch_norm, residual3), mode='concat', name='merge3', concat_axis=0)
residual4 = Convolution2D(3, 3, 3, activation='relu', name='residual4', border_mode='same')(merge3)
residual4 = UpSampling2D(name='upsample4')(residual4)
conv0_1_3_batch_norm = BatchNormalization(axis=1, name='batch5')(conv0_1_3)
merge4 = merge((conv0_1_3_batch_norm, residual4), mode='concat', name='merge4', concat_axis=0)
residual5 = Convolution2D(3, 1, 1, activation='relu', name='residual5', border_mode='same')(merge4)
model = Model(input=first_input, output=residual5)
</code></pre>
<p>and here's the Model Summary:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 224, 224) 0
____________________________________________________________________________________________________
conv0_1_3 (Convolution2D) (None, 3, 224, 224) 30 input_1[0][0]
____________________________________________________________________________________________________
conv1_1 (Convolution2D) (None, 64, 224, 224) 1792 conv0_1_3[0][0]
____________________________________________________________________________________________________
conv1_2 (Convolution2D) (None, 64, 224, 224) 36928 conv1_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 64, 112, 112) 0 conv1_2[0][0]
____________________________________________________________________________________________________
conv2_1 (Convolution2D) (None, 128, 112, 112) 73856 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
conv2_2 (Convolution2D) (None, 128, 112, 112) 147584 conv2_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 128, 56, 56) 0 conv2_2[0][0]
____________________________________________________________________________________________________
conv3_1 (Convolution2D) (None, 256, 56, 56) 295168 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
conv3_2 (Convolution2D) (None, 256, 56, 56) 590080 conv3_1[0][0]
____________________________________________________________________________________________________
conv3_3 (Convolution2D) (None, 256, 56, 56) 590080 conv3_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D) (None, 256, 28, 28) 0 conv3_3[0][0]
____________________________________________________________________________________________________
conv4_1 (Convolution2D) (None, 512, 28, 28) 1180160 maxpooling2d_3[0][0]
____________________________________________________________________________________________________
conv4_2 (Convolution2D) (None, 512, 28, 28) 2359808 conv4_1[0][0]
____________________________________________________________________________________________________
conv4_3 (Convolution2D) (None, 512, 28, 28) 2359808 conv4_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_4 (MaxPooling2D) (None, 512, 14, 14) 0 conv4_3[0][0]
____________________________________________________________________________________________________
batch1 (BatchNormalization) (None, 512, 14, 14) 1024 maxpooling2d_4[0][0]
____________________________________________________________________________________________________
residual1 (Convolution2D) (None, 256, 14, 14) 1179904 batch1[0][0]
____________________________________________________________________________________________________
batch2 (BatchNormalization) (None, 256, 28, 28) 512 maxpooling2d_3[0][0]
____________________________________________________________________________________________________
upsample1 (UpSampling2D) (None, 256, 28, 28) 0 residual1[0][0]
____________________________________________________________________________________________________
merge1 (Merge) (None, 256, 28, 28) 0 batch2[0][0]
upsample1[0][0]
____________________________________________________________________________________________________
residual2 (Convolution2D) (None, 128, 28, 28) 295040 merge1[0][0]
____________________________________________________________________________________________________
batch3 (BatchNormalization) (None, 128, 56, 56) 256 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
upsample2 (UpSampling2D) (None, 128, 56, 56) 0 residual2[0][0]
____________________________________________________________________________________________________
merge2 (Merge) (None, 128, 56, 56) 0 batch3[0][0]
upsample2[0][0]
____________________________________________________________________________________________________
residual3 (Convolution2D) (None, 64, 56, 56) 73792 merge2[0][0]
____________________________________________________________________________________________________
batch4 (BatchNormalization) (None, 64, 112, 112) 128 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
upsample3 (UpSampling2D) (None, 64, 112, 112) 0 residual3[0][0]
____________________________________________________________________________________________________
merge3 (Merge) (None, 64, 112, 112) 0 batch4[0][0]
upsample3[0][0]
____________________________________________________________________________________________________
residual4 (Convolution2D) (None, 3, 112, 112) 1731 merge3[0][0]
____________________________________________________________________________________________________
batch5 (BatchNormalization) (None, 3, 224, 224) 6 conv0_1_3[0][0]
____________________________________________________________________________________________________
upsample4 (UpSampling2D) (None, 3, 224, 224) 0 residual4[0][0]
____________________________________________________________________________________________________
merge4 (Merge) (None, 3, 224, 224) 0 batch5[0][0]
upsample4[0][0]
____________________________________________________________________________________________________
residual5 (Convolution2D) (None, 3, 224, 224) 12 merge4[0][0]
====================================================================================================
Total params: 9187699
</code></pre>
<p>I don't know what I'm doing wrong since the summary fits exactly with what I have in mind but no matter what, I keep getting this error :</p>
<blockquote>
<p>ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[0] == 1, but the output's size on that axis is 5.
Apply node that caused the error: GpuElemwise{Composite{((i0 * (i1 + Abs(i1))) - i2)},no_inplace}(CudaNdarrayConstant{[[[[ 0.5]]]]}, GpuElemwise{Add}[(0, 0)].0, GpuFromHost.0)
Toposort index: 916
Inputs types: [CudaNdarrayType(float32, (True, True, True, True)), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D)]
Inputs shapes: [(1, 1, 1, 1), (5, 3, 224, 224), (1, 3, 224, 224)]
Inputs strides: [(0, 0, 0, 0), (150528, 50176, 224, 1), (0, 50176, 224, 1)]
Inputs values: [CudaNdarray([[[[ 0.5]]]]), 'not shown', 'not shown']
Inputs type_num: ['', '', '']</p>
</blockquote>
<p>I've included a Graph of the Model with this question too:
<img src="https://cloud.githubusercontent.com/assets/22426131/19362112/3368c77c-9192-11e6-99d9-643600b77c4b.png" alt="model"></p>
<p>Debugging this is a nightmare... most of the other errors are pretty easy to understand and fix but these errors are really hard to understand... and unfortunately this isn't the first time I've had these errors with Keras.</p>
<p>Please! What is wrong with this model?! Am I doing something completely wrong, or perhaps this model shouldn't be designed this way?</p>
<p>Thanks so much...</p>
| 0 | 2016-10-13T18:55:55Z | 40,032,259 | <p>You are merging on the wrong axis. <code>axis = 0</code> is actually the axis with different samples. You can see from your model:</p>
<pre><code>batch2 (BatchNormalization) (None, 256, 28, 28) 512 maxpooling2d_3[0][0]
upsample1 (UpSampling2D) (None, 256, 28, 28) 0 residual1[0][0]
merge1 (Merge) (None, 256, 28, 28) 0 batch2[0][0]
</code></pre>
<p>The number of feature maps is not changing at all after the merge. Set <code>axis = 1</code> to fix this.</p>
| 2 | 2016-10-13T22:48:52Z | [
"python",
"machine-learning",
"theano",
"keras",
"conv-neural-network"
] |
SQLAlchemy Delete Persistency Error | 40,028,868 | <p>For some reason the code below is generating a persistency error:</p>
<pre><code>'<Data at 0x1041db8d0>' is not persisted
</code></pre>
<p>Does the object need to be initialized before it's deleted, what's wrong here?</p>
<pre><code>if request.method == 'POST' and form3.validate():
data_entered = Data(notes=form3.dbDelete.data)
try:
db.session.delete(data_entered)
db.session.commit()
db.session.close()
return render_template('deleted.html', notes=form3.dbDelete.data)
</code></pre>
<p>Much appreciated.</p>
| -1 | 2016-10-13T18:57:15Z | 40,029,915 | <p><code>data_entered = Data(notes=form3.dbDelete.data)</code></p>
<p>This is creating a new instance (or row) of <code>Data</code>. Then you are running a <code>session.delete</code> procedure to remove it from the database, but it never was inserted (via <code>session.commit()</code> for example).</p>
<p>I'm guessing this is what you are trying to do:</p>
<pre><code>from sqlalchemy.orm.exc import NoResultFound

try:
    data_entered = db.session.query(Data).filter(Data.notes == form3.dbDelete.data).one()
except NoResultFound:
    pass
else:
    db.session.delete(data_entered)
    db.session.commit()
    db.session.close()
    return render_template('deleted.html', notes=form3.dbDelete.data)
</code></pre>
| 0 | 2016-10-13T20:01:12Z | [
"python",
"sqlalchemy"
] |
Multiprocessing in Flask | 40,028,869 | <p>Probably, this was repeated in a different way.
May I know where I am going wrong?
Following is the code and the error</p>
<p><strong>CODE:</strong></p>
<pre><code>from flask import Flask, render_template
import thread
from multiprocessing import Process
app = Flask(__name__)
def print_time():
i = 0
while 1:
i += 1
def server():
@app.route('/')
def index():
return 'Index Page'
@app.route('/hello/')
def hello(name=None):
return render_template('index.html', name=i)
if __name__ == '__main__':
Process(target=server).start()
Process(target=print_time).start()
</code></pre>
<p><strong>ERROR(PREV):</strong></p>
<pre><code>File "C:\Program Files\Anaconda2\envs\hvc\dashboard.py", line 15
return 'Index Page'
^
IndentationError: expected an indented block
</code></pre>
<p><strong>ERROR(NOW):</strong></p>
<pre><code>Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
</code></pre>
<p>Thank you.</p>
| -5 | 2016-10-13T18:57:17Z | 40,029,118 | <p>The code mixes tabs and spaces.</p>
<p>These two lines:</p>
<pre><code> def index():
return 'Index Page'
</code></pre>
<p>Are actually:</p>
<pre><code>[tab]def index():
[tab]return 'Index Page'
</code></pre>
<p>When tabs are used in Python source code, they are first replaced with spaces until the first column that is a multiple of 8. That makes the above equivalent to:</p>
<pre><code> def index():
return 'Index Page'
</code></pre>
<p>So, the <code>return</code> line is not indented.</p>
<p>The moral of the story is: do not use tabs. Configure the editor to replace tabs with 4 spaces.</p>
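<p>You can reproduce the 8-column expansion with <code>str.expandtabs()</code>, which follows the same rule:</p>

```python
# a tab advances to the next column that is a multiple of 8
print(repr("\tdef index():".expandtabs()))
print(repr("\treturn 'Index Page'".expandtabs()))
# both lines end up starting at column 8, so 'return' is *not*
# indented relative to 'def' - hence the IndentationError
```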
| 1 | 2016-10-13T19:13:18Z | [
"python",
"flask",
"multiprocessing"
] |
Pushing to Diego: Cannot write: No space left on device | 40,028,883 | <p>Our application is one of the few left running on DEA. On DEA we were able to use a specific custom buildpack:</p>
<p><a href="https://github.com/ihuston/python-conda-buildpack" rel="nofollow">https://github.com/ihuston/python-conda-buildpack</a></p>
<p>Now that we have to move to the Diego runtime, we run out of space while pushing the app. I believe the disk space is only required during staging, because quite a few libraries are coming with the buildpack and have to be built (we need the whole scientific python stack, which is all included in the above buildpack).</p>
<p>The build script outputs everything fine, except that the app cannot start. The logs then show:</p>
<pre><code>2016-10-13T19:10:42.29+0200 [CELL/0] ERR Copying into the container failed: stream-in: nstar: error streaming in: exit status 2. Output: tar: ./app/.conda/pkgs/cache/db552c1e.json: Wrote only 8704 of 10240 bytes
</code></pre>
<p>and further many files:</p>
<pre><code>2016-10-13T19:10:42.29+0200 [CELL/0] ERR tar: ./app/.conda/pkgs/cache/9779607c273dc0786bd972b4cb308b58.png: Cannot write: No space left on device
</code></pre>
<p>and then</p>
<pre><code>2016-10-13T20:16:48.30+0200 [API/0] OUT App instance exited with guid b2f4a1be-aeda-44fa-87bc-9871f432062d payload: {"instance"=>"", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"Copying into the container failed", "crash_count"=>14, "crash_timestamp"=>1476382608296511944, "version"=>"ca10412e-717a-413b-875a-535f8c3f7be4"}
</code></pre>
<p>When trying to add more disk quota (above 1G) there is an error:</p>
<pre><code>Server error, status code: 400, error code: 100001, message: The app is invalid: disk_quota too much disk requested (must be less than 1024)
</code></pre>
<p>Is there a way to give a bit more space? At least for the build process?</p>
| 0 | 2016-10-13T18:58:14Z | 40,037,302 | <p>You can use a <code>.cfignore</code> file just like a <code>.gitignore</code> file to exclude any unneeded files from being <code>cf push</code>ed. Maybe if you really only push what is necessary, the disk space could be sufficient.</p>
<p><a href="https://docs.developer.swisscom.com/devguide/deploy-apps/prepare-to-deploy.html#exclude" rel="nofollow">https://docs.developer.swisscom.com/devguide/deploy-apps/prepare-to-deploy.html#exclude</a></p>
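<p>For illustration, a hypothetical <code>.cfignore</code> (it uses the same syntax as <code>.gitignore</code>) that keeps typical local artifacts out of the push:</p>

```
.git/
*.pyc
__pycache__/
tests/
docs/
```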
| 0 | 2016-10-14T07:24:08Z | [
"python",
"swisscomdev"
] |
Pushing to Diego: Cannot write: No space left on device | 40,028,883 | <p>Our application is one of the few left running on DEA. On DEA we were able to use a specific custom buildpack:</p>
<p><a href="https://github.com/ihuston/python-conda-buildpack" rel="nofollow">https://github.com/ihuston/python-conda-buildpack</a></p>
<p>Now that we have to move to the Diego runtime, we run out of space while pushing the app. I believe the disk space is only required during staging, because quite a few libraries are coming with the buildpack and have to be built (we need the whole scientific python stack, which is all included in the above buildpack).</p>
<p>The build script outputs everything fine, except that the app cannot start. The logs then show:</p>
<pre><code>2016-10-13T19:10:42.29+0200 [CELL/0] ERR Copying into the container failed: stream-in: nstar: error streaming in: exit status 2. Output: tar: ./app/.conda/pkgs/cache/db552c1e.json: Wrote only 8704 of 10240 bytes
</code></pre>
<p>and further many files:</p>
<pre><code>2016-10-13T19:10:42.29+0200 [CELL/0] ERR tar: ./app/.conda/pkgs/cache/9779607c273dc0786bd972b4cb308b58.png: Cannot write: No space left on device
</code></pre>
<p>and then</p>
<pre><code>2016-10-13T20:16:48.30+0200 [API/0] OUT App instance exited with guid b2f4a1be-aeda-44fa-87bc-9871f432062d payload: {"instance"=>"", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"Copying into the container failed", "crash_count"=>14, "crash_timestamp"=>1476382608296511944, "version"=>"ca10412e-717a-413b-875a-535f8c3f7be4"}
</code></pre>
<p>When trying to add more disk quota (above 1G) there is an error:</p>
<pre><code>Server error, status code: 400, error code: 100001, message: The app is invalid: disk_quota too much disk requested (must be less than 1024)
</code></pre>
<p>Is there a way to give a bit more space? At least for the build process?</p>
| 0 | 2016-10-13T18:58:14Z | 40,038,542 | <p>The conda installer from <a href="https://github.com/ihuston/python-conda-buildpack" rel="nofollow">https://github.com/ihuston/python-conda-buildpack</a> installs by default with the Intel MKL library. Now this is usually a good thing, but seemingly uses too much space and thus cannot be deployed. </p>
<p>I adapted the buildpack and added the <code>nomkl</code> flag to the line</p>
<pre><code> $CONDA_BIN/conda install --yes --quiet --file "$BUILD_DIR/conda_requirements.txt"
</code></pre>
<p>so that it becomes</p>
<pre><code> $CONDA_BIN/conda install nomkl --yes --quiet --file "$BUILD_DIR/conda_requirements.txt"
</code></pre>
<p>As described in Continuum's blog post here:</p>
<p><a href="https://www.continuum.io/blog/developer-blog/anaconda-25-release-now-mkl-optimizations" rel="nofollow">https://www.continuum.io/blog/developer-blog/anaconda-25-release-now-mkl-optimizations</a></p>
<p>This will then use OpenBLAS instead and results in a much smaller droplet (175 MB instead of 330 MB), and the deployment can successfully finish.</p>
| 1 | 2016-10-14T08:30:56Z | [
"python",
"swisscomdev"
] |
How to run a script in PySpark | 40,028,919 | <p>I'm trying to run a script in the pyspark environment but so far I haven't been able to. How can I run a script like python script.py but in pyspark? Thanks</p>
| 0 | 2016-10-13T19:00:45Z | 40,029,121 | <p>You can do: <code>./bin/spark-submit mypythonfile.py</code></p>
<p>Running python applications through 'pyspark' is not supported as of Spark 2.0.</p>
| 2 | 2016-10-13T19:13:28Z | [
"python",
"apache-spark",
"pyspark"
] |
port scanning an IP range in python | 40,028,975 | <p>So I'm working on a simple port scanner in python for a class (not allowed to use the python-nmap library), and while I can get it to work when passing a single IP address, I can't get it to work using a range of IPs. </p>
<p>This is what I have:</p>
<pre><code>#!/usr/bin/env python
from socket import *
from netaddr import *
# port scanner
def port_scan(port, host):
s = socket(AF_INET, SOCK_STREAM)
try:
s = s.connect((host, port))
print "Port ", port, " is open"
except Exception, e:
pass
# get user input for range in form xxx.xxx.xxx.xxx-xxx.xxx.xxx.xxx and xx-xx
ipStart, ipEnd = raw_input ("Enter IP-IP: ").split("-")
portStart, portEnd = raw_input ("Enter port-port: ").split("-")
# cast port string to int
portStart, portEnd = [int(portStart), int(portEnd)]
# define IP range
iprange = IPRange(ipStart, ipEnd)
# this is where my problem is
for ip in iprange:
host = ip
    for port in range(portStart, portEnd + 1):
port_scan(port, host)
</code></pre>
<p>So when I run the code, after adding print statements below</p>
<pre><code>host = ip
print host # added
</code></pre>
<p>and then again after</p>
<pre><code>port_scan(port, host)
print port # added
</code></pre>
<p>I end up with the following output:</p>
<pre><code>root@kali:~/Desktop/python# python what.py
Enter IP-IP: 172.16.250.100-172.16.250.104
Enter port-port: 20-22
172.16.250.100
20
21
22
172.16.250.101
20
21
22
...and so on
</code></pre>
<p>Thanks in advance everyone!
I appreciate any help that I can get!</p>
<p><a href="https://i.stack.imgur.com/yDApd.png" rel="nofollow">code picture for reference, slightly different</a></p>
<p><a href="https://i.stack.imgur.com/HJF6d.png" rel="nofollow">output picture for reference</a></p>
| 0 | 2016-10-13T19:04:34Z | 40,047,252 | <p>The problem turned out to be an issue with using the netaddr.IPRange, as suggested by @bravosierra99. </p>
<p>Thanks again everyone!</p>
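<p>For anyone hitting the same thing: <code>IPRange</code> yields address objects, not strings, and <code>socket.connect</code> wants a plain string host. A dependency-free sketch of the same iteration, using the Python 3 stdlib <code>ipaddress</code> module instead of <code>netaddr</code>, casting each address with <code>str()</code>:</p>

```python
import ipaddress

start = ipaddress.ip_address("172.16.250.100")
end = ipaddress.ip_address("172.16.250.104")

hosts = []
ip = start
while ip <= end:
    hosts.append(str(ip))  # socket.connect needs a plain string, not an address object
    ip += 1

print(len(hosts), hosts[0], hosts[-1])  # 5 172.16.250.100 172.16.250.104
```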
| 0 | 2016-10-14T15:48:23Z | [
"python",
"python-2.7",
"port-scanning"
] |
Python2: Using .decode with errors='replace' still returns errors | 40,029,017 | <p>So I have a <code>message</code> which is read from a file of unknown encoding. I want to send to a webpage for display. I've grappled a lot with UnicodeErrors and have gone through many Q&As on StackOverflow and think I have decent understand of how Unicode and encoding works. My current code looks like this</p>
<pre><code>try:
    return message.decode(encoding='utf-8')
except:
    try:
        return message.decode(encoding='latin-1')
    except:
        print("Unable to entirely decode in latin or utf-8, will replace error characters with '?'")
        return message.decode(encoding='utf-8', errors="replace")
</code></pre>
<p>The returned message is then dumped into a JSON and send to the front end.</p>
<p>I assumed that because I'm using <code>errors="replace"</code>on the last <code>try except</code> that I was going to avoid exceptions at the expense of having a few '?' characters in my display. An acceptable cost.</p>
<p>However, it seems that I was too hopeful, and for some files I still get a <code>UnicodeDecodeException</code> saying "ascii codecs cannot decode" for some character. Why doesn't <code>errors="replace"</code> just take care of this? </p>
<p>(also as a bonus question, what does ascii have to do with any of this?.. I'm specifying UTF-8)</p>
| 0 | 2016-10-13T19:07:18Z | 40,029,494 | <p><code>decode</code> with <code>errors='replace'</code> uses the 'replace' error handler (for <strong>text encodings</strong> only): it substitutes '?' for encoding errors and '\ufffd' (the Unicode replacement character) for decoding errors.</p>
<p>A text encoding here means a "codec which encodes Unicode strings to bytes."</p>
<p>Maybe your data is malformed - you should try the 'ignore' error handler, where malformed data is ignored and encoding or decoding continues without further notice:</p>
<pre><code>message.decode(encoding='utf-8', errors="ignore")
</code></pre>
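<p>A small sketch of the difference between the two handlers (Python 3 syntax):</p>

```python
data = b"caf\xc3\xa9 \xff"  # valid UTF-8 followed by one malformed byte

print(data.decode("utf-8", errors="replace"))  # the bad byte becomes '\ufffd'
print(data.decode("utf-8", errors="ignore"))   # the bad byte is silently dropped
```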
| 0 | 2016-10-13T19:36:07Z | [
"python",
"python-2.7",
"unicode",
"character-encoding"
] |
Python2: Using .decode with errors='replace' still returns errors | 40,029,017 | <p>So I have a <code>message</code> which is read from a file of unknown encoding. I want to send to a webpage for display. I've grappled a lot with UnicodeErrors and have gone through many Q&As on StackOverflow and think I have decent understand of how Unicode and encoding works. My current code looks like this</p>
<pre><code>try:
    return message.decode(encoding='utf-8')
except:
    try:
        return message.decode(encoding='latin-1')
    except:
        print("Unable to entirely decode in latin or utf-8, will replace error characters with '?'")
        return message.decode(encoding='utf-8', errors="replace")
</code></pre>
<p>The returned message is then dumped into a JSON and sent to the front end.</p>
<p>I assumed that because I'm using <code>errors="replace"</code> on the last <code>try except</code> that I was going to avoid exceptions at the expense of having a few '?' characters in my display. An acceptable cost.</p>
<p>However, it seems that I was too hopeful, and for some files I still get a <code>UnicodeDecodeError</code> saying "ascii codecs cannot decode" for some character. Why doesn't <code>errors="replace"</code> just take care of this? </p>
<p>(also as a bonus question, what does ascii have to do with any of this?.. I'm specifying UTF-8)</p>
| 0 | 2016-10-13T19:07:18Z | 40,038,844 | <p>You should not get a <code>UnicodeDecodeError</code> with <code>errors='replace'</code>. Also <code>str.decode('latin-1')</code> should never fail, because ISO-8859-1 has a valid character mapping for every possible byte sequence.</p>
<p>My suspicion is that <code>message</code> is already a <code>unicode</code> string, not bytes. Unicode text has already been "decoded" from bytes and can't be decoded any more.</p>
<p>When you call <code>.decode()</code> on a <code>unicode</code> string, Python 2 tries to be helpful and decides to <em>encode</em> the Unicode string back to bytes (using the default encoding), so that you have something that you can really decode. This implicit encoding step <em>doesn't</em> use <code>errors='replace'</code>, so if there are any characters in the Unicode string that aren't in the default encoding (probably ASCII) you'll get a <code>Unicode<strong>En</strong>codeError</code>.</p>
<p>(Python 3 no longer does this as it is terribly confusing.)</p>
<p>Check the type of <code>message</code> and assuming it is indeed <code>Unicode</code>, work back from there to find where it was decoded (possibly implicitly) to replace that with the correct decoding.</p>
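<p>Once you know whether <code>message</code> is bytes or text, a small helper along these lines (a sketch written in Python 3 syntax) avoids the double-decode entirely:</p>

```python
def to_text(message, encodings=("utf-8", "latin-1")):
    """Return `message` as text, decoding only if it is still bytes."""
    if not isinstance(message, bytes):
        return message              # already text - decoding again is the bug
    for enc in encodings:
        try:
            return message.decode(enc)
        except UnicodeDecodeError:
            continue
    # note: latin-1 never fails, so this fallback is only a safety net
    return message.decode("utf-8", errors="replace")

print(to_text(b"caf\xc3\xa9"))   # decoded from UTF-8
print(to_text("already text"))   # passed through untouched
```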
| 0 | 2016-10-14T08:46:26Z | [
"python",
"python-2.7",
"unicode",
"character-encoding"
] |
Setting Series as index | 40,029,071 | <p>I'm using python 2.7 to take a numerical column of my dataframe <code>data</code> and make it an individual object (series) with an index of dates that is another column from <code>data</code>. </p>
<pre><code>new_series = pd.Series(data['numerical_column'] , index=data['dates'])
</code></pre>
<p>However, when I do this, I get a bunch of <code>NaN</code> values in the Series: </p>
<pre><code>dates
1980-01-31 NaN
1980-02-29 NaN
1980-03-31 NaN
1980-04-30 NaN
1980-05-31 NaN
1980-06-30 NaN
...
</code></pre>
<p>Why does my <code>numerical_data</code> values just disappear? </p>
<p>I realize that I can apparently achieve this goal by doing the following, although I'm curious why my initial approach failed. </p>
<pre><code>new_series = data.set_index('dates')['numerical_column']
</code></pre>
| 2 | 2016-10-13T19:11:12Z | 40,029,176 | <p>I think the problem is that the index of the column <code>data['numerical_column']</code> does not align with <code>data['dates']</code>, so every value becomes <code>NaN</code>.</p>
<p>So convert it to a <code>numpy</code> array with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a>:</p>
<pre><code>new_series = pd.Series(data['numerical_column'].values , index=data['dates'])
</code></pre>
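<p>A minimal demonstration of the alignment - this is what happens under the hood when a <code>Series</code> is passed to the constructor together with a new <code>index</code>:</p>

```python
import pandas as pd

s = pd.Series([1, 4, 5])                       # default index 0, 1, 2
new_idx = ["1980-01-31", "1980-02-29", "1980-03-31"]

# passing the Series itself reindexes it against new_idx -> nothing matches
aligned = pd.Series(s, index=new_idx)
print(aligned.isna().all())                    # True

# passing the raw ndarray skips alignment and just attaches the new index
fixed = pd.Series(s.values, index=new_idx)
print(fixed.tolist())                          # [1, 4, 5]
```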
<p>Sample:</p>
<pre><code>import pandas as pd
import datetime
data = pd.DataFrame({
'dates': {0: datetime.date(1980, 1, 31), 1: datetime.date(1980, 2, 29),
2: datetime.date(1980, 3, 31), 3: datetime.date(1980, 4, 30),
4: datetime.date(1980, 5, 31), 5: datetime.date(1980, 6, 30)},
'numerical_column': {0: 1, 1: 4, 2: 5, 3: 3, 4: 1, 5: 0}})
print (data)
dates numerical_column
0 1980-01-31 1
1 1980-02-29 4
2 1980-03-31 5
3 1980-04-30 3
4 1980-05-31 1
5 1980-06-30 0
new_series = pd.Series(data['numerical_column'].values , index=data['dates'])
print (new_series)
dates
1980-01-31 1
1980-02-29 4
1980-03-31 5
1980-04-30 3
1980-05-31 1
1980-06-30 0
dtype: int64
</code></pre>
<p>But the method with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> is nicer, though slower:</p>
<pre><code>#[60000 rows x 2 columns]
data = pd.concat([data]*10000).reset_index(drop=True)
In [65]: %timeit pd.Series(data['numerical_column'].values , index=data['dates'])
1000 loops, best of 3: 308 µs per loop
In [66]: %timeit data.set_index('dates')['numerical_column']
1000 loops, best of 3: 1.28 ms per loop
</code></pre>
<p><strong>Verification</strong>:</p>
<p>If index of column has same index, it works nice:</p>
<pre><code>s = data.set_index('dates')['numerical_column']
df = s.to_frame()
print (df)
numerical_column
dates
1980-01-31 1
1980-02-29 4
1980-03-31 5
1980-04-30 3
1980-05-31 1
1980-06-30 0
new_series = pd.Series(df['numerical_column'] , index=data['dates'])
print (new_series)
dates
1980-01-31 1
1980-02-29 4
1980-03-31 5
1980-04-30 3
1980-05-31 1
1980-06-30 0
Name: numerical_column, dtype: int64
</code></pre>
| 2 | 2016-10-13T19:16:36Z | [
"python",
"python-2.7",
"pandas",
"dataframe",
"series"
] |
Python string formatting trim float precision to no more than needed | 40,029,245 | <p>What's the best way to convert a float to a string with no more precision than needed to represent the number, and never more than a specified upper limit precision such that the following would happen?</p>
<pre><code>>>> values = [1, 1., 1.01, 1.00000000001, 1.123456789, 100.123456789]
>>> print([ my_func(value,precision=6) for value in values])
['1', '1', '1.01', '1', '1.123457', '100.123457']
</code></pre>
<p>I've looked through <code>str.format</code>, but can only find methods for a specific precision that could include too many zeros:</p>
<pre><code>>>> '{:0.6f}'.format(1.0)
'1.000000'
</code></pre>
<p>The <code>g</code> option comes close, but you have to do funny things based on the length of the integer part:</p>
<pre><code>def my_func(value,precision=6):
return '{v:0.{p}g}'.format(v=value,p=precision+len('{:d}'.format(int(value))))
</code></pre>
<p>Is there a better way?</p>
| 0 | 2016-10-13T19:20:16Z | 40,029,293 | <p>I don't believe there is a built-in option, but you could just use <code>string.rstrip</code> to remove the trailing zeros from the normal string representation of the float.</p>
<p>EDIT: You would also need to format it to the fixed maximum precision using the %f formatting token before using <code>string.rstrip</code>.</p>
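<p>A minimal sketch of that approach (the function name <code>trim_float</code> is illustrative):</p>

```python
def trim_float(value, precision=6):
    """Format to the fixed maximum precision, then strip trailing zeros
    and any dangling decimal point."""
    return ('%.*f' % (precision, value)).rstrip('0').rstrip('.')

values = [1, 1., 1.01, 1.00000000001, 1.123456789, 100.123456789]
print([trim_float(v) for v in values])
# ['1', '1', '1.01', '1', '1.123457', '100.123457']
```

This matches the desired output from the question.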
| 1 | 2016-10-13T19:23:07Z | [
"python",
"string",
"string-formatting"
] |
Python string formatting trim float precision to no more than needed | 40,029,245 | <p>What's the best way to convert a float to a string with no more precision than needed to represent the number, and never more than a specified upper limit precision such that the following would happen?</p>
<pre><code>>>> values = [1, 1., 1.01, 1.00000000001, 1.123456789, 100.123456789]
>>> print([ my_func(value,precision=6) for value in values])
['1', '1', '1.01', '1', '1.123457', '100.123457']
</code></pre>
<p>I've looked through <code>str.format</code>, but can only find methods for a specific precision that could include too many zeros:</p>
<pre><code>>>> '{:0.6f}'.format(1.0)
'1.000000'
</code></pre>
<p>The <code>g</code> option comes close, but you have to do funny things based on the length of the integer part:</p>
<pre><code>def my_func(value,precision=6):
return '{v:0.{p}g}'.format(v=value,p=precision+len('{:d}'.format(int(value))))
</code></pre>
<p>Is there a better way?</p>
| 0 | 2016-10-13T19:20:16Z | 40,029,303 | <p>Does <code>rstrip</code>-ing the <code>str.format</code> version do what you want? E.g.,</p>
<pre><code>'{:0.6f}'.format(num).rstrip('0').rstrip('.')
</code></pre>
| 2 | 2016-10-13T19:23:46Z | [
"python",
"string",
"string-formatting"
] |
Split and keep deliminator, preferably with regex | 40,029,387 | <p>Let's say I have this text:</p>
<pre><code>1.1 This is the 2,1 first 1.2 This is the 2,2 second 1.3 This is the 2,3 third
</code></pre>
<p>and I want:</p>
<pre><code>["1.1 This is the 2,1 first","1.2 This is the 2,2 second","1.3 This is the 2,3 third"]
</code></pre>
<p>Note that:</p>
<ul>
<li><p>I can't use <code>re.findall</code>, since I can't think of a way to properly terminate the match. The best I could think of was <code>'[0-9]+\.[0-9]+^([0-9]+\.[0-9]+)*'</code>, which didn't work.</p></li>
<li><p>I can't just store the delimiter as a global variable, since it changes with each match.</p></li>
<li><p>I could not use a regular <code>re.split</code> because I want to keep the delimiter. I can't use a lookbehind because it has to be fixed width, and this isn't.</p></li>
</ul>
<p>I have read <a href="http://stackoverflow.com/questions/18323764/regexp-split-and-keep-the-seperator">regexp split and keep the seperator</a>, <a href="http://stackoverflow.com/questions/7866128/python-split-without-removing-the-delimiter">Python split() without removing the delimiter</a>, and <a href="http://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators">In Python, how do I split a string and keep the separators?</a>, and still don't have an answer.</p>
| 0 | 2016-10-13T19:29:09Z | 40,029,469 | <p>Yes, you can:</p>
<pre><code>\b\d+\.\d+
.+?(?=\d+\.\d+|$)
</code></pre>
<p>See it <a href="https://regex101.com/r/ABNxgQ/1" rel="nofollow"><strong>working on regex101.com</strong></a>. To be used in combination with <code>re.findall()</code>:</p>
<pre><code>import re
rx = re.compile(r'\b\d+\.\d+.+?(?=\d+\.\d+|$)')
string = "1.1 This is the 2,1 first 1.2 This is the 2,2 second 1.3 This is the 2,3 third "
matches = rx.findall(string)
print(matches)
# ['1.1 This is the 2,1 first ', '1.2 This is the 2,2 second ', '1.3 This is the 2,3 third ']
</code></pre>
<p>If the string spans across multiple lines, use either the <strong>dotall mode</strong> or <code>[\s\S]*?</code>.<br>
See <a href="http://ideone.com/HXe4Tb" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
| 2 | 2016-10-13T19:34:47Z | [
"python",
"regex"
] |
Split and keep deliminator, preferably with regex | 40,029,387 | <p>Let's say I have this text:</p>
<pre><code>1.1 This is the 2,1 first 1.2 This is the 2,2 second 1.3 This is the 2,3 third
</code></pre>
<p>and I want:</p>
<pre><code>["1.1 This is the 2,1 first","1.2 This is the 2,2 second","1.3 This is the 2,3 third"]
</code></pre>
<p>Note that:</p>
<ul>
<li><p>I can't use <code>re.findall</code>, since I can't think of a way to properly terminate the match. The best I could think of was <code>'[0-9]+\.[0-9]+^([0-9]+\.[0-9]+)*'</code>, which didn't work.</p></li>
<li><p>I can't just store the delimiter as a global variable, since it changes with each match.</p></li>
<li><p>I could not use a regular <code>re.split</code> because I want to keep the delimiter. I can't use a lookbehind because it has to be fixed width, and this isn't.</p></li>
</ul>
<p>I have read <a href="http://stackoverflow.com/questions/18323764/regexp-split-and-keep-the-seperator">regexp split and keep the seperator</a>, <a href="http://stackoverflow.com/questions/7866128/python-split-without-removing-the-delimiter">Python split() without removing the delimiter</a>, and <a href="http://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators">In Python, how do I split a string and keep the separators?</a>, and still don't have an answer.</p>
| 0 | 2016-10-13T19:29:09Z | 40,036,092 | <p>Split on a blank whose right side is one of the section numbers (1.2, 1.3, ...), using a lookahead (the dot is escaped so <code>2,1</code> is not treated as a split point):</p>
<pre><code>re.split(r' (?=\d+\.\d+)', s)
</code></pre>
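<p>For reference, a runnable sketch using an escaped, multi-digit version of that lookahead on the question's input:</p>

```python
import re

s = '1.1 This is the 2,1 first 1.2 This is the 2,2 second 1.3 This is the 2,3 third'
# Split on a space only when it is followed by a section number like 1.2
print(re.split(r' (?=\d+\.\d+)', s))
# ['1.1 This is the 2,1 first', '1.2 This is the 2,2 second', '1.3 This is the 2,3 third']
```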
| 0 | 2016-10-14T06:10:23Z | [
"python",
"regex"
] |
JSON.Dump doesn't capture the whole stream | 40,029,399 | <p>So I have a simple crawler that crawls 3 store location pages and parses the locations of the stores to json. I print(app_data['stores']) and it prints all three pages of stores. However, when I try to write it out I only get one of the three pages, at random, written to my json file. I'd like everything that streams to be written to the file. Any help would be great. Here's the code:</p>
<pre><code>import scrapy
import json
import js2xml
from pprint import pprint
class StlocSpider(scrapy.Spider):
name = "stloc"
allowed_domains = ["bestbuy.com"]
start_urls = (
'http://www.bestbuy.com/site/store-locator/11356',
'http://www.bestbuy.com/site/store-locator/46617',
'http://www.bestbuy.com/site/store-locator/77521'
)
def parse(self, response):
js = response.xpath('//script[contains(.,"window.appData")]/text()').extract_first()
jstree = js2xml.parse(js)
# print(js2xml.pretty_print(jstree))
app_data_node = jstree.xpath('//assign[left//identifier[@name="appData"]]/right/*')[0]
app_data = js2xml.make_dict(app_data_node)
print(app_data['stores'])
for store in app_data['stores']:
yield store
with open('stores.json', 'w') as f:
json.dump(app_data['stores'], f, indent=4)
</code></pre>
| 0 | 2016-10-13T19:29:45Z | 40,029,558 | <p>You are opening the file for writing every time, but you want to append. Try changing the last part to this:</p>
<pre><code>with open('stores.json', 'a') as f:
json.dump(app_data['stores'], f, indent=4)
</code></pre>
<p>Where <code>'a'</code> opens the file for appending.</p>
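<p>A self-contained illustration of the difference — the store pages below are stand-ins for <code>app_data['stores']</code>, and one JSON document is written per line:</p>

```python
import json
import os
import tempfile

pages = [[{"id": 1}], [{"id": 2}], [{"id": 3}]]  # stand-ins for three parsed pages

path = os.path.join(tempfile.mkdtemp(), "stores.json")
for page in pages:
    with open(path, "a") as f:   # "a": each page is appended to the file
        json.dump(page, f)
        f.write("\n")

with open(path) as f:
    lines = f.read().splitlines()
print(len(lines))  # 3 -- with "w", only the last page would survive
```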
| 0 | 2016-10-13T19:39:49Z | [
"python",
"json",
"scrapy",
"js2xml"
] |
How to use multiple token using Regex Expression | 40,029,483 | <p>To extract first three letters 'abc' and three sets of three-digits numbers in <code>000_111_222</code> I am using the following expression:</p>
<pre><code>text = 'abc_000_111_222'
print re.findall('^[a-z]{3}_[0-9]{3}_[0-9]{3}_[0-9]{3}', text)
</code></pre>
<p>But the expression returns empty list when instead of underscores there are minuses or periods used instead: <code>abc.000.111.222</code> or <code>abc-000-111-222</code> or any combination of it like: <code>abc_000.111-222</code></p>
<p>Sure I could use a simple replace method to unify the text variable <code>text=text.replace('-','_').replace('.','_')</code></p>
<p>But I wonder if instead of replacing I could modify regex expression that would recognize the underscores, minuses and periods.</p>
| 0 | 2016-10-13T19:35:15Z | 40,029,572 | <p>You can use regex character classes with <code>[</code>...<code>]</code>. For your case, it can be <code>[_.-]</code> (note the hyphen at the end; if it isn't at the end or escaped, it is treated as part of a range like <code>[a-z]</code>).</p>
<p>You can use a regex like this:</p>
<pre><code>print re.findall('^[a-z]{3}[_.-][0-9]{3}[_.-][0-9]{3}[_.-][0-9]{3}', text)
</code></pre>
<p><a href="https://i.stack.imgur.com/9oYZz.png" rel="nofollow"><img src="https://i.stack.imgur.com/9oYZz.png" alt="enter image description here"></a></p>
<p>Btw, you can shorten your regex to have something like this:</p>
<pre><code>print re.findall('^[a-z]{3}[_.-](?:\d{3}[_.-]){2}\d{3}', text)
</code></pre>
<p>Just as a comment, in case you want to match the same separator, then you can use capture groups and reference its content like this:</p>
<pre><code>^[a-z]{3}([_.-])[0-9]{3}\1[0-9]{3}\1[0-9]{3}
</code></pre>
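<p>A quick runnable check of the character-class version against the mixed-separator input from the question (shown with Python 3's <code>print</code>):</p>

```python
import re

pattern = r'^[a-z]{3}[_.-]\d{3}[_.-]\d{3}[_.-]\d{3}'
print(re.findall(pattern, 'abc_000.111-222'))  # mixed separators
# ['abc_000.111-222']
print(re.findall(pattern, 'abc-000-111-222'))
# ['abc-000-111-222']
```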
| 3 | 2016-10-13T19:40:32Z | [
"python",
"regex"
] |
How to use multiple token using Regex Expression | 40,029,483 | <p>To extract first three letters 'abc' and three sets of three-digits numbers in <code>000_111_222</code> I am using the following expression:</p>
<pre><code>text = 'abc_000_111_222'
print re.findall('^[a-z]{3}_[0-9]{3}_[0-9]{3}_[0-9]{3}', text)
</code></pre>
<p>But the expression returns empty list when instead of underscores there are minuses or periods used instead: <code>abc.000.111.222</code> or <code>abc-000-111-222</code> or any combination of it like: <code>abc_000.111-222</code></p>
<p>Sure I could use a simple replace method to unify the text variable <code>text=text.replace('-','_').replace('.','_')</code></p>
<p>But I wonder if instead of replacing I could modify regex expression that would recognize the underscores, minuses and periods.</p>
| 0 | 2016-10-13T19:35:15Z | 40,051,125 | <p>Why not abandon <code>regexes</code> altogether, and use a clearer and simpler solution?</p>
<pre><code>$ cat /tmp/tmp.py
SEP = '_.,;-=+'
def split_str(text):
    for s in SEP:
        res = text.split(s)
        if len(res) > 1:
            return res
print(split_str('abc_000_111_222'))
print(split_str('abc;000;111;222'))
print(split_str('abc.000.111.222'))
print(split_str('abc-000-111-222'))
</code></pre>
<p>Which gives:</p>
<pre><code>$ python3 /tmp/tmp.py
['abc', '000', '111', '222']
['abc', '000', '111', '222']
['abc', '000', '111', '222']
['abc', '000', '111', '222']
$
</code></pre>
| -1 | 2016-10-14T20:00:56Z | [
"python",
"regex"
] |
Upgrading pip You are using pip version 7.1.2 to version 8.1.2 | 40,029,526 | <p>Hi I am trying to upgrade my pip version to 8.1.2
this command does not work:
pip install --upgrade pip </p>
<p>And I also tried to get it directly from the github directory, like explained here <a href="https://github.com/pypa/pip/archive/8.1.1.zip" rel="nofollow">https://github.com/pypa/pip/archive/8.1.1.zip</a> but then i get the following error:</p>
<pre><code>Checking .pth file support in /Library/Python/2.7/site-packages/
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-21442.pth'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/Library/Python/2.7/site-packages/
Perhaps your account does not have write access to this directory? If the
installation directory is a system-owned directory, you may need to sign in
as the administrator or "root" account. If you do not have administrative
access to this machine, you may wish to choose a different installation
directory, preferably one that is listed in your PYTHONPATH environment
variable.
For information on other options, you may wish to consult the
documentation at:
https://pythonhosted.org/setuptools/easy_install.html
Please make the appropriate changes for your system and try again.
</code></pre>
<p>I'm working on a Mac btw</p>
| 0 | 2016-10-13T19:38:09Z | 40,029,600 | <p>Try opening the command prompt as an administrator, then try the upgrade again; it should work.</p>
| 0 | 2016-10-13T19:42:01Z | [
"python",
"terminal",
"install",
"pip"
] |
Upgrading pip You are using pip version 7.1.2 to version 8.1.2 | 40,029,526 | <p>Hi I am trying to upgrade my pip version to 8.1.2
this command does not work:
pip install --upgrade pip </p>
<p>And I also tried to get it directly from the github directory, like explained here <a href="https://github.com/pypa/pip/archive/8.1.1.zip" rel="nofollow">https://github.com/pypa/pip/archive/8.1.1.zip</a> but then i get the following error:</p>
<pre><code>Checking .pth file support in /Library/Python/2.7/site-packages/
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-21442.pth'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/Library/Python/2.7/site-packages/
Perhaps your account does not have write access to this directory? If the
installation directory is a system-owned directory, you may need to sign in
as the administrator or "root" account. If you do not have administrative
access to this machine, you may wish to choose a different installation
directory, preferably one that is listed in your PYTHONPATH environment
variable.
For information on other options, you may wish to consult the
documentation at:
https://pythonhosted.org/setuptools/easy_install.html
Please make the appropriate changes for your system and try again.
</code></pre>
<p>I'm working on a Mac btw</p>
| 0 | 2016-10-13T19:38:09Z | 40,029,634 | <p>Your problem is that your user doesn't have write privileges to the <code>/Library/Python/2.7/site-packages</code> directory. That is what it means by <code>[Errno 13] Permission denied</code></p>
<p>Try running the command with <code>sudo</code> prefixing the original command. Sudo allows you to safely run programs as root. You will need to type your password at the prompt.</p>
| 0 | 2016-10-13T19:44:30Z | [
"python",
"terminal",
"install",
"pip"
] |
Upgrading pip You are using pip version 7.1.2 to version 8.1.2 | 40,029,526 | <p>Hi I am trying to upgrade my pip version to 8.1.2
this command does not work:
pip install --upgrade pip </p>
<p>And I also tried to get it directly from the github directory, like explained here <a href="https://github.com/pypa/pip/archive/8.1.1.zip" rel="nofollow">https://github.com/pypa/pip/archive/8.1.1.zip</a> but then i get the following error:</p>
<pre><code>Checking .pth file support in /Library/Python/2.7/site-packages/
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-21442.pth'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/Library/Python/2.7/site-packages/
Perhaps your account does not have write access to this directory? If the
installation directory is a system-owned directory, you may need to sign in
as the administrator or "root" account. If you do not have administrative
access to this machine, you may wish to choose a different installation
directory, preferably one that is listed in your PYTHONPATH environment
variable.
For information on other options, you may wish to consult the
documentation at:
https://pythonhosted.org/setuptools/easy_install.html
Please make the appropriate changes for your system and try again.
</code></pre>
<p>I'm working on a Mac btw</p>
| 0 | 2016-10-13T19:38:09Z | 40,029,669 | <p>You can follow it up with:</p>
<pre><code>pip install -U pip
sudo !!
</code></pre>
<p>This has the same effect as:</p>
<pre><code>sudo pip install -U pip
</code></pre>
<p>Or even better, move away from OS X's Python install, and instead install Python / pip via <a href="http://brew.sh/" rel="nofollow">Homebrew</a>:</p>
<pre><code>brew install python
brew install python3
</code></pre>
| 0 | 2016-10-13T19:46:52Z | [
"python",
"terminal",
"install",
"pip"
] |
How to avoid StaleElementReferenceException in Selenium - Python | 40,029,549 | <p>I am stuck in writing a Python Selenium script and can't seem to satisfactorily resolve this StaleElementReferenceException I am getting. </p>
<p>I have my page loaded and click a button which opens a form that allows the user to add a new credit card to the order. At this point I do a WebDriverWait to pause the script until the Save button on this form becomes visible. At that point, recreate the page object since it has changed, and my intent is to populate the fields and save the card. </p>
<p>The problem is that after refreshing the page object the script fails with the StaleElementReferenceException. My understanding is that the WebDriverWait will pause the the execution giving the page time to load all the elements that need to load, but that doesn't appear to be happening. Instead something in that refresh of the page object is stale and causes the error (different part of the object creation each time). </p>
<p>If I just uncomment the line 'time.sleep(2)' then this script runs fine and it will pass. So I know I just need to give the page time to reload correctly before I refresh the object. The WebDriverWait just doesn't seem to be doing that effectively for me.</p>
<p>Is there a more correct way I can do this without the sleep command?</p>
<pre><code>checkout = CheckoutProcess(self.driver)
# Add Credit Card
checkout.add_credit_card()
# Wait for form to display
WebDriverWait(self.driver,30).until(expected_conditions.presence_of_element_located((By.CLASS_NAME, 'total')))
# time.sleep(2)
# Refresh the page object so form can be filled in
checkout = CheckoutProcess(self.driver) # Script Fails Here
checkout.populate_credit_card_data(credit_card_name, credit_card_number, credit_card_expiration_date, credit_card_cvc)
checkout.click_credit_card_save_button()
</code></pre>
| 1 | 2016-10-13T19:39:29Z | 40,070,927 | <p>A StaleElementReferenceException is thrown when the element you were interacting is destroyed and then recreated. Most complex web pages these days will move things about on the fly as the user interacts with it and this requires elements in the DOM to be destroyed and recreated.</p>
<p>Try doing</p>
<pre><code>wait.until(expected_conditions.staleness_of(whatever_element))
</code></pre>
<p>or</p>
<pre><code>wait.until(expected_conditions.presence_of_element_located((By.ID, "whatever_element_id")))
</code></pre>
<p>Hope this helps.</p>
| 1 | 2016-10-16T13:41:02Z | [
"python",
"selenium"
] |
how do I reset a input in python | 40,029,550 | <p>so i have this code that basically consists of you asking questions, but i have it so the input answers the question, so you can only ask one question, then you have to reset the whole thing again and again, and i have it to ask you your name first so i want a loop that ignores that.</p>
<pre><code> print("hello,what is your name?")
name = input()
print("hello",name)
while True:
question = input("ask me anything:")
if question == ("what is love"):
print("love is a emotion that makes me uneasy, i'm a inteligence not a human",name)
break
if question == ("i want a dog"):
print("ask your mother, she knows what to do",name)
break
if question == ("what is my name"):
print("your name is",name)
break
</code></pre>
| -3 | 2016-10-13T19:39:31Z | 40,029,607 | <p>Take out the <code>break</code>s. Then have one of the options be "quit" with a <code>break</code>.</p>
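<p>A testable sketch of that change — the per-answer <code>break</code>s are gone and a hypothetical "quit" option is the only thing that ends the loop (questions come from a list here instead of <code>input()</code>):</p>

```python
def chat(name, questions):
    """Answer each question in turn; only "quit" ends the loop."""
    replies = []
    for question in questions:      # stands in for: while True: input(...)
        if question == "quit":
            break                   # the one remaining break
        if question == "what is my name":
            replies.append("your name is " + name)
        if question == "i want a dog":
            replies.append("ask your mother, she knows what to do " + name)
    return replies

print(chat("bob", ["what is my name", "what is my name", "quit"]))
# ['your name is bob', 'your name is bob']
```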
| -1 | 2016-10-13T19:42:21Z | [
"python"
] |
how do I reset a input in python | 40,029,550 | <p>so i have this code that basically consists of you asking questions, but i have it so the input answers the question, so you can only ask one question, then you have to reset the whole thing again and again, and i have it to ask you your name first so i want a loop that ignores that.</p>
<pre><code> print("hello,what is your name?")
name = input()
print("hello",name)
while True:
question = input("ask me anything:")
if question == ("what is love"):
print("love is a emotion that makes me uneasy, i'm a inteligence not a human",name)
break
if question == ("i want a dog"):
print("ask your mother, she knows what to do",name)
break
if question == ("what is my name"):
print("your name is",name)
break
</code></pre>
| -3 | 2016-10-13T19:39:31Z | 40,029,619 | <pre><code>print("hello,what is your name?")
name = input()
print("hello",name)
while True:
question = input("ask me anything:")
if question == ("what is love"):
print("love is a emotion that makes me uneasy, i'm a inteligence not a human",name)
elif question == ("i want a dog"):
print("ask your mother, she knows what to do",name)
elif question == ("what is my name"):
print("your name is",name)
</code></pre>
| 0 | 2016-10-13T19:43:13Z | [
"python"
] |
how do I reset a input in python | 40,029,550 | <p>so i have this code that basically consists of you asking questions, but i have it so the input answers the question, so you can only ask one question, then you have to reset the whole thing again and again, and i have it to ask you your name first so i want a loop that ignores that.</p>
<pre><code> print("hello,what is your name?")
name = input()
print("hello",name)
while True:
question = input("ask me anything:")
if question == ("what is love"):
print("love is a emotion that makes me uneasy, i'm a inteligence not a human",name)
break
if question == ("i want a dog"):
print("ask your mother, she knows what to do",name)
break
if question == ("what is my name"):
print("your name is",name)
break
</code></pre>
| -3 | 2016-10-13T19:39:31Z | 40,029,625 | <p>Get rid of the <code>break</code>s, so the loop keeps prompting for new questions. For performance, change the subsequent <code>if</code> tests to <code>elif</code> tests (not strictly necessary, but it avoids rechecking the string if you get a hit early on):</p>
<pre><code>while True:
question = input("ask me anything:")
if question == "what is love":
print("love is a emotion that makes me uneasy, i'm a inteligence not a human",name)
elif question == "i want a dog":
print("ask your mother, she knows what to do",name)
elif question == "what is my name":
print("your name is",name)
</code></pre>
<p>Of course, in this specific case, you could avoid the repeated tests by using a <code>dict</code> to perform a lookup, making an arbitrary number of prompts possible without repeated tests:</p>
<pre><code># Defined once up front
question_map = {
'what is love': "love is a emotion that makes me uneasy, i'm a inteligence not a human",
'i want a dog': 'ask your mother, she knows what to do',
'what is my name': 'your name is',
# Put as many more mappings as you want, code below doesn't change
# and performance remains similar even for a million+ mappings
}
print("hello,what is your name?")
name = input()
print("hello",name)
while True:
question = input("ask me anything:")
try:
print(question_map[question], name)
except KeyError:
# Or check for "quit"/break the loop/alert to unrecognized question/etc.
pass
</code></pre>
| 3 | 2016-10-13T19:43:47Z | [
"python"
] |
How to update `xarray.DataArray` using `.sel()` indexer? | 40,029,618 | <p>I found the easiest way to visualize creating a <code>N-dimensional</code> <code>DataArray</code> was to make a <code>np.ndarray</code> and then fill in the values by the coordinates I've created. When I tried to actually do it, I couldn't figure out how to update the <code>xr.DataArray</code>.</p>
<p><strong>How can I update the <code>xr.DataArray</code> I've initialized using the labels I've created?</strong> My actual data is a much more complicated dataset but this sums up what I'm trying to do. I can use <code>.loc</code> but sometimes my <code>ndarrays</code> get huge and complicated where I don't really know the order of the dims. </p>
<pre><code># Construct DataArray
DA = xr.DataArray(np.ndarray((3,3,5)), dims=["axis_A","axis_B","axis_C"], coords={"axis_A":["A_%d"%_ for _ in range(3)],
"axis_B":["B_%d"%_ for _ in range(3)],
"axis_C":["C_%d"%_ for _ in range(5)]})
# <xarray.DataArray (axis_A: 3, axis_B: 3, axis_C: 5)>
# array([[[ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.]],
# [[ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.]],
# [[ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0.]]])
# Coordinates:
# * axis_B (axis_B) <U3 'B_0' 'B_1' 'B_2'
# * axis_A (axis_A) <U3 'A_0' 'A_1' 'A_2'
# * axis_C (axis_C) <U3 'C_0' 'C_1' 'C_2' 'C_3' 'C_4'
# # Update?
DA.sel(axis_A="A_1", axis_B="B_1", axis_C="C_1").values = 1
DA.max()
# # <xarray.DataArray ()>
# # array(0.0)
DA.sel(axis_A="A_1", axis_B="B_1", axis_C="C_1") = 1
# # File "<ipython-input-17-8feb7332633f>", line 4
# # DA.sel(axis_A="A_1", axis_B="B_1", axis_C="C_1") = 1
# # ^
# # SyntaxError: can't assign to function call
</code></pre>
| 1 | 2016-10-13T19:43:04Z | 40,030,606 | <p>This is really awkward, due to the unfortunate limitation of Python syntax that keyword arguments are not supported inside square brackets.</p>
<p>So instead, you need to put the arguments to <code>.sel</code> as a dictionary in <code>.loc</code> instead:</p>
<pre><code>DA.loc[dict(axis_A="A_1", axis_B="B_1", axis_C="C_1")] = 1
</code></pre>
| 2 | 2016-10-13T20:42:41Z | [
"python",
"database",
"numpy",
"dataframe",
"python-xarray"
] |
Function to find the factors of a negative number | 40,029,623 | <p>I need to write a function that will find the factors for a negative number and output them to a list. How would I do that? I can get my function to do positive numbers (see below) but not negative ones.</p>
<pre><code>#Finds factors for A and C
def factorspos(x):
factorspos = [1,-6];
print("The factors of",x,"are:")
for i in range(1, x + 1):
if x % i == 0:
factorspos.append(i)
print(i)
</code></pre>
<p>I tried changing the values that the loop counts from so it would count from the number chosen to 1 (Code below) but still yielded no results :(</p>
<pre><code>#Finds factors for A and C
def factorspos(x):
factorspos = [int(-6),1];
print("The factors of",x,"are:")
for i in range(int(-6), x + 1):
if x % i == 0:
factorspos.append(i)
print(i)
</code></pre>
<p>I have changed Cco to a fixed number.</p>
<pre><code>#Finds factors for A and C
def factorspos(x):
Cco = -6
factorspos = [int(Cco),1];
print("The factors of",x,"are:")
for i in range(int(Cco), x + 1):
if x % i == 0:
factorspos.append(i)
print(i)
return factorspos
</code></pre>
| 1 | 2016-10-13T19:43:34Z | 40,030,772 | <pre><code>def factorspos(x):
x = int(x)
factorspos = []
print("The factors of",x,"are:")
    if x > 0: # if input is positive
for i in range(1,x+1):
if x % i == 0:
factorspos.append(i)
print(i)
return factorspos
elif x < 0: #if input is negative
for i in range(x,0):
if x % i == 0:
factorspos.append(i)
print(i)
return factorspos
print(factorspos(12)) #outputs [1, 2, 3, 4, 6, 12]
print(factorspos(-12)) #outputs [-12, -6, -4, -3, -2, -1]
</code></pre>
<p>You were actually really close to fixing your issue. I took the liberty of cleaning up what you had: I added a condition check to see whether the input <code>x</code> is positive or negative, and the function then does two different things. Each branch does what you provided, just tidied up.</p>
<p>Things to note: <code>range()</code> starts from the first number, inclusive, and ends one number short of the second parameter, so <code>range(1,10)</code> will give you 1 to 9. That's why, in the negative section, the range goes from x to 0, which yields x to -1. In the positive section it goes from 1 to x+1, since the +1 ensures we include the input itself. The rest you know about since, well, you wrote it; if not, feel free to ask questions.</p>
| 1 | 2016-10-13T20:52:29Z | [
"python",
"python-3.x"
] |
Use eval with dictionary without losing imported modules in Python2 | 40,029,746 | <p>I have a string to be executed inside my python program and I want to change some variables in the string like x[1], x[2] to something else.
I had previously used eval with 2 arguments (the second being a dict with replaced_word: new_word) but now I noticed I can't use previously imported modules like this. So if I do this</p>
<pre><code>from math import log
eval(log(x[1], {x[1]: 1})
</code></pre>
<p>it will say it doesn't recognize the name log.
How can I use eval like this without losing the global variables?
I can't really make sense of the documentation:
<a href="https://docs.python.org/2/library/functions.html#eval" rel="nofollow">https://docs.python.org/2/library/functions.html#eval</a>
so an explanation would be useful too.</p>
| 0 | 2016-10-13T19:51:05Z | 40,029,834 | <p>Build your globals <code>dict</code> with <a href="https://docs.python.org/2/library/functions.html#globals" rel="nofollow"><code>globals()</code></a> as a base:</p>
<pre><code>from math import log
# Copy the globals() dict so changes don't affect real globals
eval_globals = globals().copy()
# Tweak the copy to add desired new global
eval_globals[x[1]] = 1
# eval using the updated copy
eval('log(x[1])', eval_globals)
</code></pre>
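<p>A fully self-contained variant of the first approach (a plain name <code>y</code> stands in for whatever <code>x[1]</code> is in your program):</p>

```python
from math import log

eval_globals = globals().copy()   # copy so the real globals stay untouched
eval_globals['y'] = 100           # stands in for eval_globals[x[1]] = ...
print(eval('log(y, 10)', eval_globals))
# 2.0
```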
<p>Alternatively, you can use <a href="https://docs.python.org/2/library/functions.html#eval" rel="nofollow">three-arg <code>eval</code></a> to use <code>globals()</code> unmodified, but also supply a <code>local</code>s <code>dict</code> that will be checked (and modified) first, in preference to global values:</p>
<pre><code>eval('log(x[1])', globals(), {x[1]: 1})
</code></pre>
<p>In theory, the latter approach could allow the expression to mutate the original globals, so adding <code>.copy()</code> to make it <code>eval('log(x[1])', globals().copy(), {x[1]: 1})</code> minimizes the risk of that happening accidentally. But pathological/malicious code could work around that; <code>eval</code> is dangerous after all, don't trust it for arbitrary inputs no matter how sandboxed you make it.</p>
| 1 | 2016-10-13T19:56:48Z | [
"python",
"eval",
"python-2.x"
] |
Freeing memory in python parallel process loops | 40,029,768 | <p>I'm using a master-slaves structure to implement a parallel computation. A single master process (<code>0</code>) loads data, and distributes relevant chunks and instructions to slave processes (<code>1</code>-<code>N</code>) which do the heavy lifting, using large objects... blah blah blah. The issue is memory usage, which I'm monitoring using <code>resource.getrusage(resource.RUSAGE_SELF).ru_maxrss</code> on each slave process.</p>
<p>The first task uses about 6GB of memory, as expected, but when the slave receives the second task, it balloons up to just over 10GB --- as if the previous memory wasn't being collected. My understanding was that as soon as a variable looses its references (in the below code, when the <code>_gwb</code> variable is reset) garbage collection should clean house. Why isn't this happening?</p>
<p>Would throwing in a <code>del _gwb</code> at the end of each loop help?<br>
What about a manual call to <code>gc.collect()</code>?<br>
Or do I need to spawn <code>subprocess</code>es as <a href="http://stackoverflow.com/a/24126616/230468">described in this answer</a>?</p>
<p>I'm using <code>mpi4py</code> on a SLURM managed cluster.</p>
<p>The <strong>master</strong> process looks something like:</p>
<pre><code>for jj, tt in enumerate(times):
for ii, sim in enumerate(sims):
search = True
# Find a slave to give this task to
while search:
# Repackage HDF5 data into dictionary to work with MPI
sim_dat = ... # load some data
# Look for available slave process
data = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
src = stat.Get_source()
# Store Results
if tag == TAGS.DONE:
_store_slave_results(data, ...)
num_done += 1
elif tag == TAGS.READY:
# Distribute tasks
comm.send(sim_data, dest=src, tag=TAGS.START)
# Stop searching, move to next task
search = False
cycles += 1
</code></pre>
<p>And the <strong>slaves</strong>:</p>
<pre><code>while True:
# Tell Master this process is ready
comm.send(None, dest=0, tag=TAGS.READY)
# Receive ``task`` ([number, gravPot, ndensStars])
task = comm.recv(source=0, tag=MPI.ANY_TAG, status=stat)
tag = stat.Get_tag()
if tag == TAGS.START:
_gwb = Large_Data_Structure(task)
data = _gwb.do_heavy_lifting(task)
comm.send(data, dest=0, tag=TAGS.DONE)
elif tag == TAGS.EXIT:
break
cycles += 1
</code></pre>
<hr>
<p>Edit: Some other strange subtleties (in case they might be relevant):<br>
1) only some processes show the memory growing, others stay roughly the same;<br>
2) The specific amount of memory active is <em>different</em> on the different slave processes (differing by <code>100s of MB</code>) ... even though they should all be running the same code!</p>
| 0 | 2016-10-13T19:52:30Z | 40,030,013 | <p><code>del _gwb</code> should make a big difference. With <code>_gwb = Large_Data_Structure(task)</code> the new data is generated and then assigned to _gwd. Only then is the old data released. A specific <code>del</code> will get rid of the object early. You may still see a memory increase for the second loop - python releases the object into its heap but there's nothing to say that the next allocation will get exactly the same bunch of memory.</p>
<p>The garbage collector only comes into play in cases where regular reference counting isn't sufficient to trigger freeing of the memory. Assuming <code>do_heavy_lifting</code> isn't doing anything funky, it won't make a difference.</p>
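<p>A quick CPython illustration of that point, using <code>weakref</code> to observe when objects actually get freed (the <code>Big</code> class is a hypothetical stand-in for the large data structure):</p>

```python
import gc
import weakref

class Big:
    pass

obj = Big()
ref = weakref.ref(obj)
del obj                      # refcount hits zero: freed immediately (CPython)
print(ref() is None)         # True, no collector involved

# A reference cycle is the case that *does* need gc.collect():
a, b = Big(), Big()
a.other, b.other = b, a
cycle_ref = weakref.ref(a)
del a, b                     # the cycle keeps both alive despite the dels
gc.collect()                 # cycle detector reclaims them
print(cycle_ref() is None)   # True
```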
<p>You mention <code>subprocess</code>... another option on linux systems is <code>os.fork</code>. The child process gets a copy-on-write view of the parent address space. The big object is generated in the child memory and that goes away on exit. I can't guarantee this will work but would be an interesting experiment.</p>
<pre><code>while True:
# Tell Master this process is ready
comm.send(None, dest=0, tag=TAGS.READY)
# Receive ``task`` ([number, gravPot, ndensStars])
task = comm.recv(source=0, tag=MPI.ANY_TAG, status=stat)
tag = stat.Get_tag()
if tag == TAGS.START:
pid = os.fork()
if pid:
# parent waits for child
            os.waitpid(pid, 0)
else:
# child does work, sends results and exits
_gwb = Large_Data_Structure(task)
data = _gwb.do_heavy_lifting(task)
comm.send(data, dest=0, tag=TAGS.DONE)
            os._exit(0)
elif tag == TAGS.EXIT:
break
cycles += 1
</code></pre>
| 1 | 2016-10-13T20:06:57Z | [
"python",
"performance",
"memory",
"parallel-processing",
"numerical-methods"
] |
Change Array of multi integer vectors to single value for each vector in Numpy | 40,029,777 | <p>I have an array below, which I want to make each row randomly have a single 1 or all zeros, but only the current 1 values can be converted to 0. I have a check below that was going to do this by seeing if there is a 1 in the row and if the summed value is greater or = to 0. I am hoping there is a simple approach to do this that is just escaping me at present.</p>
<pre><code>A = np.array([
[ 0 , 1 , 1 , 0 , 1 ] ,
[ 1 , 0 , 0 , 1 , 1 ] ,
[ 0 , 0 , 1 , 0 , 0 ] ,
[ 0 , 1 , 0 , 0 , 1 ] ,
[ 1 , 0 , 0 , 0 , 0 ] ,
[ 0 , 0 , 1 , 1 , 0 ] ,
[ 0 , 0 , 0 , 0 , 0 ] ,
[ 1 , 0 , 1 , 0 , 0 ] ,
[ 1 , 0 , 1 , 1 , 1 ] ,
[ 0 , 0 , 1 , 1 , 0 ] ,
[ 0 , 1 , 0 , 1 , 0 ] ,
[ 0 , 1 , 0 , 0 , 1 ] ,
[ 0 , 0 , 1 , 1 , 0 ] ,
[ 0 , 1 , 1 , 0 , 1 ] ,
[ 0 , 0 , 1 , 0 , 0 ] ])
if np.any(A[0] == 1)==True and np.sum(A[0])>=0:
change row to all 0's randomly or keep one of the existing 1 values randomly. Ideally, if it could do it to the whole array, it would be very useful, but row by row is fine.
</code></pre>
| 1 | 2016-10-13T19:52:58Z | 40,032,617 | <p>You can randomly select per row the column you want to keep:</p>
<pre><code>m, n = A.shape
J = np.random.randint(n, size=m)
</code></pre>
<p>You can use these to create a new array:</p>
<pre><code>I = np.arange(m)
B = np.zeros_like(A)
B[I,J] = A[I,J]
</code></pre>
<p>Or if you want to modify <code>A</code>, e.g. use bit shifting:</p>
<pre><code>I = np.arange(m)
A[I,J] <<= 1
A >>= 1
</code></pre>
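<p>A quick demo of the first variant on a few rows of the question's array (seeded only so the demo is reproducible); note the column is chosen uniformly per row, so a row whose chosen entry in <code>A</code> is 0 ends up all zeros:</p>

```python
import numpy as np

np.random.seed(0)  # only for reproducibility of this demo
A = np.array([[0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 0, 1, 0, 0]])

m, n = A.shape
J = np.random.randint(n, size=m)  # one random column index per row
I = np.arange(m)

B = np.zeros_like(A)
B[I, J] = A[I, J]  # keep at most one of each row's 1s
print(B.sum(axis=1))  # each entry is 0 or 1
```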
| 0 | 2016-10-13T23:25:09Z | [
"python",
"numpy"
] |
Why does the class definition's metaclass keyword argument accept a callable? | 40,029,807 | <h2>Background</h2>
<p>The Python 3 <a href="https://docs.python.org/3.6/reference/datamodel.html#determining-the-appropriate-metaclass">documentation</a> clearly describes how the metaclass of a class is determined:</p>
<blockquote>
<ul>
<li>if no bases and no explicit metaclass are given, then type() is used</li>
<li>if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass</li>
<li>if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used</li>
</ul>
</blockquote>
<p>Therefore, according to the second rule, it is possible to specify a metaclass using a callable. E.g.,</p>
<pre><code>class MyMetaclass(type):
pass
def metaclass_callable(name, bases, namespace):
print("Called with", name)
return MyMetaclass(name, bases, namespace)
class MyClass(metaclass=metaclass_callable):
pass
class MyDerived(MyClass):
pass
print(type(MyClass), type(MyDerived))
</code></pre>
<h2>Question 1</h2>
<p>Is the metaclass of <code>MyClass</code>: <code>metaclass_callable</code> or <code>MyMetaclass</code>? The second rule in the documentation says that the provided callable "is used directly as the metaclass". However, it seems to make more sense to say that the metaclass is <code>MyMetaclass</code> since</p>
<ul>
<li><code>MyClass</code> and <code>MyDerived</code> have type <code>MyMetaclass</code>,</li>
<li><code>metaclass_callable</code> is called once and then appears to be unrecoverable,</li>
<li>derived classes do not use (as far as I can tell) <code>metaclass_callable</code> in any way (they use <code>MyMetaclass</code>).</li>
</ul>
<h2>Question 2</h2>
<p>Is there anything you can do with a callable that you can't do with an instance of <code>type</code>? What is the purpose of accepting an arbitrary callable?</p>
| 8 | 2016-10-13T19:54:43Z | 40,030,066 | <p>Well, the <code>type</code> is of course <code>MyMetaclass</code>. <code>metaclass_callable</code> is initially 'selected' as the metaclass since <a href="https://github.com/python/cpython/blob/master/Python/bltinmodule.c#L100" rel="nofollow">it's been specified in the <code>metaclass</code> kwarg</a> and as such, its <code>__call__</code> (a simple function call) is going to be performed. </p>
<p>It just so happens that calling it will <code>print</code> and then invoke <code>MyMetaclass.__call__</code> (which calls <code>type.__call__</code> since <code>__call__</code> hasn't been overridden for <code>MyMetaclass</code>). <a href="https://github.com/python/cpython/blob/master/Objects/typeobject.c#L2693" rel="nofollow"><em>There</em> the assignment of <code>cls.__class__</code> is made</a> to <code>MyMetaclass</code>. </p>
<blockquote>
<p><code>metaclass_callable</code> is called once and then appears to be unrecoverable</p>
</blockquote>
<p>Yes, it is only initially invoked and then hands control over to <code>MyMetaclass</code>. I'm not aware of any class attribute that keeps that information around. </p>
<blockquote>
<p>derived classes do not use (as far as I can tell) <code>metaclass_callable</code> in any way.</p>
</blockquote>
<p>Nope, if no <code>metaclass</code> is explicitly defined, <a href="https://github.com/python/cpython/blob/master/Python/bltinmodule.c#L130" rel="nofollow">the best match for the metaclasses of <code>bases</code></a> (here <code>MyClass</code>) will be used (resulting in <code>MyMetaclass</code>).</p>
<hr>
<p>As for question <code>2</code>, pretty sure everything you can do with a callable is also possible by using an instance of type with <code>__call__</code> overridden accordingly. As to <em>why</em>, you might not want to go full blown class-creation if you simply want to make minor changes when actually creating a class.</p>
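<p>For instance, here is a sketch (reusing the question's <code>MyMetaclass</code>; the <code>Factory</code> name is made up for the demo) of a non-<code>type</code> object whose overridden <code>__call__</code> does exactly what the plain function did; because the instance is not an instance of <code>type</code>, rule 2 applies and it is simply called with <code>(name, bases, namespace)</code>:</p>

```python
class MyMetaclass(type):
    pass

class Factory:
    # Any object implementing __call__ works where the plain
    # function worked.
    def __call__(self, name, bases, namespace):
        return MyMetaclass(name, bases, namespace)

class MyClass(metaclass=Factory()):
    pass

class MyDerived(MyClass):
    pass

# The Factory instance leaves no trace; both classes are typed
# by MyMetaclass, and MyDerived reuses it via its base class.
print(type(MyClass), type(MyDerived))
```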
| 2 | 2016-10-13T20:10:23Z | [
"python",
"class",
"python-3.x",
"metaclass"
] |
Why does the class definition's metaclass keyword argument accept a callable? | 40,029,807 | <h2>Background</h2>
<p>The Python 3 <a href="https://docs.python.org/3.6/reference/datamodel.html#determining-the-appropriate-metaclass">documentation</a> clearly describes how the metaclass of a class is determined:</p>
<blockquote>
<ul>
<li>if no bases and no explicit metaclass are given, then type() is used</li>
<li>if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass</li>
<li>if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used</li>
</ul>
</blockquote>
<p>Therefore, according to the second rule, it is possible to specify a metaclass using a callable. E.g.,</p>
<pre><code>class MyMetaclass(type):
pass
def metaclass_callable(name, bases, namespace):
print("Called with", name)
return MyMetaclass(name, bases, namespace)
class MyClass(metaclass=metaclass_callable):
pass
class MyDerived(MyClass):
pass
print(type(MyClass), type(MyDerived))
</code></pre>
<h2>Question 1</h2>
<p>Is the metaclass of <code>MyClass</code>: <code>metaclass_callable</code> or <code>MyMetaclass</code>? The second rule in the documentation says that the provided callable "is used directly as the metaclass". However, it seems to make more sense to say that the metaclass is <code>MyMetaclass</code> since</p>
<ul>
<li><code>MyClass</code> and <code>MyDerived</code> have type <code>MyMetaclass</code>,</li>
<li><code>metaclass_callable</code> is called once and then appears to be unrecoverable,</li>
<li>derived classes do not use (as far as I can tell) <code>metaclass_callable</code> in any way (they use <code>MyMetaclass</code>).</li>
</ul>
<h2>Question 2</h2>
<p>Is there anything you can do with a callable that you can't do with an instance of <code>type</code>? What is the purpose of accepting an arbitrary callable?</p>
| 8 | 2016-10-13T19:54:43Z | 40,030,142 | <p>Regarding your first question the metaclass should be <code>MyMetaclass</code> (which it's so):</p>
<pre><code>In [7]: print(type(MyClass), type(MyDerived))
<class '__main__.MyMetaclass'> <class '__main__.MyMetaclass'>
</code></pre>
<p>The reason is that if the metaclass is not an instance of <code>type</code>, Python calls the metaclass, passing it the arguments <code>name, bases, ns, **kwds</code> (see <code>new_class</code>); since you are returning your real metaclass from that function, the class gets the correct metaclass.</p>
<p>And about the second question:</p>
<blockquote>
<p>What is the purpose of accepting an arbitrary callable?</p>
</blockquote>
<p>There is no special purpose; <strong>it's actually the nature of metaclasses</strong>: making an instance from a class always calls the metaclass by calling its <code>__call__</code> method:</p>
<pre><code>Metaclass.__call__()
</code></pre>
<p>Which means that you can pass any callable as your metaclass. So for example if you test it with a nested function the result will still be the same:</p>
<pre><code>In [21]: def metaclass_callable(name, bases, namespace):
def inner():
return MyMetaclass(name, bases, namespace)
return inner()
....:
In [22]: class MyClass(metaclass=metaclass_callable):
pass
....:
In [23]: print(type(MyClass), type(MyDerived))
<class '__main__.MyMetaclass'> <class '__main__.MyMetaclass'>
</code></pre>
<hr>
<p>For more info, here is how Python creates a class:</p>
<p>It calls the <code>new_class</code> function, which calls <code>prepare_class</code> internally; as you can see inside <code>prepare_class</code>, Python calls the <code>__prepare__</code> method of the appropriate metaclass, besides finding the proper metaclass (using the <code>_calculate_meta</code> function) and creating the appropriate namespace for the class.</p>
<p>So all in one here is the hierarchy of executing a metacalss's methods:</p>
<ol>
<li><code>__prepare__</code> <sup>1</sup></li>
<li><code>__call__</code></li>
<li><code>__new__</code></li>
<li><code>__init__</code></li>
</ol>
<p>And here is the source code:</p>
<pre><code># Provide a PEP 3115 compliant mechanism for class creation
def new_class(name, bases=(), kwds=None, exec_body=None):
"""Create a class object dynamically using the appropriate metaclass."""
meta, ns, kwds = prepare_class(name, bases, kwds)
if exec_body is not None:
exec_body(ns)
return meta(name, bases, ns, **kwds)
def prepare_class(name, bases=(), kwds=None):
"""Call the __prepare__ method of the appropriate metaclass.
Returns (metaclass, namespace, kwds) as a 3-tuple
*metaclass* is the appropriate metaclass
*namespace* is the prepared class namespace
*kwds* is an updated copy of the passed in kwds argument with any
'metaclass' entry removed. If no kwds argument is passed in, this will
be an empty dict.
"""
if kwds is None:
kwds = {}
else:
kwds = dict(kwds) # Don't alter the provided mapping
if 'metaclass' in kwds:
meta = kwds.pop('metaclass')
else:
if bases:
meta = type(bases[0])
else:
meta = type
if isinstance(meta, type):
# when meta is a type, we first determine the most-derived metaclass
# instead of invoking the initial candidate directly
meta = _calculate_meta(meta, bases)
if hasattr(meta, '__prepare__'):
ns = meta.__prepare__(name, bases, **kwds)
else:
ns = {}
return meta, ns, kwds
def _calculate_meta(meta, bases):
"""Calculate the most derived metaclass."""
winner = meta
for base in bases:
base_meta = type(base)
if issubclass(winner, base_meta):
continue
if issubclass(base_meta, winner):
winner = base_meta
continue
# else:
raise TypeError("metaclass conflict: "
"the metaclass of a derived class "
"must be a (non-strict) subclass "
"of the metaclasses of all its bases")
return winner
</code></pre>
<hr>
<p><sub>
1. Note that it get called implicitly inside the <em>new_class</em> function and before the return.
</sub></p>
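<p>The steps above are easy to verify with a tracing metaclass (a sketch; note that creating the class itself goes through <code>type.__call__</code>, while the metaclass's own <code>__call__</code> fires when the finished class is later instantiated):</p>

```python
calls = []

class Tracing(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        calls.append('__prepare__')
        return {}

    def __new__(mcls, name, bases, ns, **kwds):
        calls.append('__new__')
        return super().__new__(mcls, name, bases, ns)

    def __init__(cls, name, bases, ns, **kwds):
        calls.append('__init__')
        super().__init__(name, bases, ns)

    def __call__(cls, *args, **kwds):
        calls.append('__call__')
        return super().__call__(*args, **kwds)

class C(metaclass=Tracing):
    pass

C()  # instantiating C is what triggers Tracing.__call__
print(calls)  # ['__prepare__', '__new__', '__init__', '__call__']
```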
| 7 | 2016-10-13T20:14:46Z | [
"python",
"class",
"python-3.x",
"metaclass"
] |
Why does the class definition's metaclass keyword argument accept a callable? | 40,029,807 | <h2>Background</h2>
<p>The Python 3 <a href="https://docs.python.org/3.6/reference/datamodel.html#determining-the-appropriate-metaclass">documentation</a> clearly describes how the metaclass of a class is determined:</p>
<blockquote>
<ul>
<li>if no bases and no explicit metaclass are given, then type() is used</li>
<li>if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass</li>
<li>if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used</li>
</ul>
</blockquote>
<p>Therefore, according to the second rule, it is possible to specify a metaclass using a callable. E.g.,</p>
<pre><code>class MyMetaclass(type):
pass
def metaclass_callable(name, bases, namespace):
print("Called with", name)
return MyMetaclass(name, bases, namespace)
class MyClass(metaclass=metaclass_callable):
pass
class MyDerived(MyClass):
pass
print(type(MyClass), type(MyDerived))
</code></pre>
<h2>Question 1</h2>
<p>Is the metaclass of <code>MyClass</code>: <code>metaclass_callable</code> or <code>MyMetaclass</code>? The second rule in the documentation says that the provided callable "is used directly as the metaclass". However, it seems to make more sense to say that the metaclass is <code>MyMetaclass</code> since</p>
<ul>
<li><code>MyClass</code> and <code>MyDerived</code> have type <code>MyMetaclass</code>,</li>
<li><code>metaclass_callable</code> is called once and then appears to be unrecoverable,</li>
<li>derived classes do not use (as far as I can tell) <code>metaclass_callable</code> in any way (they use <code>MyMetaclass</code>).</li>
</ul>
<h2>Question 2</h2>
<p>Is there anything you can do with a callable that you can't do with an instance of <code>type</code>? What is the purpose of accepting an arbitrary callable?</p>
| 8 | 2016-10-13T19:54:43Z | 40,031,127 | <p>Concerning question 1, I think the "metaclass" of a class <code>cls</code> should be understood as <code>type(cls)</code>. That way of understanding is compatible with Python's error message in the following example:</p>
<pre><code>>>> class Meta1(type): pass
...
>>> class Meta2(type): pass
...
>>> def metafunc(name, bases, methods):
... if methods.get('version') == 1:
... return Meta1(name, bases, methods)
... return Meta2(name, bases, methods)
...
>>> class C1:
... __metaclass__ = metafunc
... version = 1
...
>>> class C2:
... __metaclass__ = metafunc
... version = 2
...
>>> type(C1)
<class '__main__.Meta1'>
>>> type(C2)
<class '__main__.Meta2'>
>>> class C3(C1,C2): pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>I.e., according to the error message, the metaclass of a class is a class, even though the callable used to construct the class can be just anything.</p>
<p>Concerning the second question, indeed with a subclass of type used as a metaclass, you can do the same as with any other callable. In particular, it is possible that it yields something that is not its instance:</p>
<pre><code>>>> class Mockup(type):
... def __new__(cls, name, bases, methods):
... return Meta1(name, bases, methods)
...
>>> class Foo:
... __metaclass__ = Mockup
...
>>> type(Foo)
<class '__main__.Meta1'>
>>> isinstance(Foo, Mockup)
False
>>> Foo.__metaclass__
<class '__main__.Mockup'>
</code></pre>
<p>As to why Python gives the freedom of using any callable: The previous example shows that it is actually irrelevant whether the callable is a type or not.</p>
<p>BTW, here is a fun example: It is possible to code metaclasses that, themselves, have a metaclass different from <code>type</code>---let's call it a metametaclass. The metametaclass implements what happens when a metaclass is called. In that way, it is possible to create a class with two bases whose metaclasses are <em>not</em> subclass of each other (compare with Python's error message in the example above!). Indeed, only the metaclass of the resulting class is subclass of the metaclass of the bases, and this metaclass is created on the fly:</p>
<pre><code>>>> class MetaMeta(type):
... def __call__(mcls, name, bases, methods):
... metabases = set(type(X) for X in bases)
... metabases.add(mcls)
... if len(metabases) > 1:
... mcls = type(''.join([X.__name__ for X in metabases]), tuple(metabases), {})
... return mcls.__new__(mcls, name, bases, methods)
...
>>> class Meta1(type):
... __metaclass__ = MetaMeta
...
>>> class Meta2(type):
... __metaclass__ = MetaMeta
...
>>> class C1:
... __metaclass__ = Meta1
...
>>> class C2:
... __metaclass__ = Meta2
...
>>> type(C1)
<class '__main__.Meta1'>
>>> type(C2)
<class '__main__.Meta2'>
>>> class C3(C1,C2): pass
...
>>> type(C3)
<class '__main__.Meta1Meta2'>
</code></pre>
<p>What is less fun: The preceding example won't work in Python 3. If I understand correctly, Python 2 creates the class and checks whether its metaclass is a subclass of all its bases, whereas Python 3 <em>first</em> checks whether there is one base whose metaclass is superclass of the metaclasses of all other bases, and only <em>then</em> creates the new class. That's a regression, from my point of view. But that shall be the topic of a new question that I am about to post...</p>
<p><strong>Edit</strong>: The new question is <a href="http://stackoverflow.com/questions/40031906/is-it-possible-to-dynamically-create-a-metaclass-for-a-class-with-several-bases">here</a></p>
| 2 | 2016-10-13T21:15:49Z | [
"python",
"class",
"python-3.x",
"metaclass"
] |
How to properly import sub-modules in a Python package? | 40,029,862 | <p>I am a bit lost about how I should import and organise my sub-modules and I need some literature and some conventions.</p>
<h2>The problem</h2>
<p>We want to write a new package written in Python that is composed of several components: </p>
<ul>
<li>Classes and functions useful for the end user</li>
<li>Classes and functions rarely used </li>
<li>Utility classes and functions only needed by the package itself</li>
<li>External modules</li>
</ul>
<p>We consider this architecture:</p>
<pre><code>pizzafactory
├── __init__.py
├── salt.py
├── water.py
├── vegetables
│   ├── __init__.py
│   └── tomatoes.py
└── dough
    ├── __init__.py
    └── flour.py
</code></pre>
<h2>Some considerations</h2>
<ul>
<li>The end user doesn't need to use raw ingredients such as dough or water</li>
<li>The customer only needs pizzas described in <code>pizzafactory/__init__.py</code></li>
<li>The dough factory requires salt and water</li>
<li>The customer may want to add some tomatoes to their pizza. </li>
</ul>
<p>The file <code>pizzafactory/__init__.py</code> needs almost all the modules, but we don't want to pollute the end user's namespace with useless things, so I would propose importing the ingredients quietly, except for those that may be used by the customer:</p>
<pre><code># pizzafactory/__init__.py
import salt as _salt
import dough as _dough
import vegetables.tomatoes
import oven as _oven # External package
</code></pre>
<p>The dough factory will require some water, but a user who needs to use that sub-module (to make bread) may not want to see the <code>water</code>. </p>
<pre><code># pizzafactory/dough/__init__.py
import pizzafactory.water as _water
</code></pre>
<h2>Discussion</h2>
<p>First, I feel it's always easier to directly import everything either the full package:</p>
<pre><code>import pizzafactory
def grab_tomato():
return pizzafactory.vegetables.tomatoes.BeefsteakTomato()
</code></pre>
<p>or only the required elements:</p>
<pre><code>from pizzafactory.vegetables.tomatoes import BeefsteakTomato
def grab_tomato():
return BeefsteakTomato()
</code></pre>
<p>Both of these methods are common, but it may pollute the <code>pizzafactory</code> namespace, so it may be preferable to mangle the import names. I realized that nobody does that and I don't know why. </p>
<h2>Question</h2>
<p>In this generic example, I would like to know how to properly import modules, sub-modules and external packages in order to: </p>
<ul>
<li>Minimize the namespace footprint</li>
<li>Help the end user to clearly see only what he is intended to used</li>
</ul>
| 0 | 2016-10-13T19:58:09Z | 40,030,121 | <blockquote>
<p>Both of these methods are common, but it may pollute the pizzafactory namespace, so it may be preferable to mangle the import names. I relalized that nobody does that and I don't know why.</p>
</blockquote>
<p>Python is a consenting adult language, we leave the doors unlocked and everything out in the open, for the most part. </p>
<p>If your concern is just crowding the namespaces, you should define <code>__all__</code> as well as use single-leading underscores. -- PEP8 suggests that name mangling should only be used to avoid naming clashes, so that's probably why nobody does that. </p>
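<p>A self-contained sketch of the effect (the <code>pizzafactory</code> module is faked in-memory purely so this demo runs standalone): with <code>__all__</code> defined, a star import exposes only the listed names, and the underscore-prefixed ingredient stays out of the user's namespace.</p>

```python
import sys
import types

# Fake a pizzafactory module; in real code this body would live
# in pizzafactory/__init__.py.
mod = types.ModuleType('pizzafactory')
exec(
    "_salt = 'internal ingredient'\n"
    "class Margherita: pass\n"
    "__all__ = ['Margherita']\n",
    mod.__dict__,
)
sys.modules['pizzafactory'] = mod

ns = {}
exec('from pizzafactory import *', ns)
print('Margherita' in ns, '_salt' in ns)  # True False

del sys.modules['pizzafactory']  # tidy up the fake module
```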
<p>See the <a href="https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces" rel="nofollow">Public and internal interfaces</a> section of PEP8 as well as <a href="https://www.python.org/dev/peps/pep-0008/#naming-conventions" rel="nofollow">Naming Conventions.</a> </p>
<p>PEP8 is the guide for the "proper" way to do these kinds of things. Though, it is a <em>guide</em> not necessarily <em>law</em>. You have the flexibility to do what you feel is appropriate for your package, which leads to my favorite section of PEP8 - <a href="https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds" rel="nofollow">A Foolish Consistency is the Hobgoblin of Little Minds</a></p>
<p>Without being sufficiently intimate with the code in a package, one probably could not offer much advice beyond PEP8 on how it <em>should</em> be done. If you have the time, Raymond Hettinger's talk <a href="https://www.youtube.com/watch?v=wf-BqAjZb8M" rel="nofollow">Beyond PEP 8</a> is a worthwhile watch.</p>
| 1 | 2016-10-13T20:13:40Z | [
"python",
"module",
"package",
"packages",
"python-import"
] |
Understanding why tensorflow RNN is not learning toy data | 40,029,904 | <p>I am trying to train a Recurrent Neural Network using Tensorflow (r0.10, python 3.5) on a toy classification problem, but I am getting confusing results.</p>
<p>I want to feed in a sequence of zeros and ones into an RNN, and have the target class for a given element of the sequence to be the number represented by the current and previous values of the sequence, treated as a binary number. For example:</p>
<pre><code>input sequence: [0, 0, 1, 0, 1, 1]
binary digits : [-, [0,0], [0,1], [1,0], [0,1], [1,1]]
target class : [-, 0, 1, 2, 1, 3]
</code></pre>
<p>It seems like this is something an RNN should be able to learn quite easily, but instead my model is only able to distinguish classes [0,2] from [1,3]. In other words, it is able to distinguish the classes whose current digit is 0 from those whose current digit is 1. This is leading me to believe that the RNN model is not correctly learning to look at the previous value(s) of the sequence.</p>
<p>There are several tutorials and examples ([<a href="https://www.tensorflow.org/versions/r0.9/tutorials/recurrent/index.html" rel="nofollow">1</a>], [<a href="https://github.com/sherjilozair/char-rnn-tensorflow" rel="nofollow">2</a>], [<a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/recurrent_network.ipynb" rel="nofollow">3</a>]) that demonstrate how to build and use Recurrent Neural Networks (RNNs) in tensorflow, but after studying them I still do not see my problem (it does not help that all the examples use text as their source data).</p>
<p>I am inputting my data to <code>tf.nn.rnn()</code> as a list of length <code>T</code>, whose elements are <code>[batch_size x input_size]</code> sequences. Since my sequence is one dimensional, <code>input_size</code> is equal to one, so essentially I believe I am inputting a list of sequences of length <code>batch_size</code> (the <a href="https://www.tensorflow.org/versions/r0.9/tutorials/recurrent/index.html" rel="nofollow">documentation</a> is unclear to me about which dimension is being treated as the time dimension). <strong>Is that understanding correct?</strong> If that is the case, then I don't understand why the RNN model is not learning correctly.</p>
<p>It's hard to get a small set of code that can run through my full RNN, this is the best I could do (it is mostly adapted from <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/rnn/ptb" rel="nofollow">the PTB model here</a> and <a href="https://github.com/sherjilozair/char-rnn-tensorflow" rel="nofollow">the char-rnn model here</a>):</p>
<pre><code>import tensorflow as tf
import numpy as np
input_size = 1
batch_size = 50
T = 2
lstm_size = 5
lstm_layers = 2
num_classes = 4
learning_rate = 0.1
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size, state_is_tuple=True)
lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * lstm_layers, state_is_tuple=True)
x = tf.placeholder(tf.float32, [T, batch_size, input_size])
y = tf.placeholder(tf.int32, [T * batch_size * input_size])
init_state = lstm.zero_state(batch_size, tf.float32)
inputs = [tf.squeeze(input_, [0]) for input_ in tf.split(0,T,x)]
outputs, final_state = tf.nn.rnn(lstm, inputs, initial_state=init_state)
w = tf.Variable(tf.truncated_normal([lstm_size, num_classes]), name='softmax_w')
b = tf.Variable(tf.truncated_normal([num_classes]), name='softmax_b')
output = tf.concat(0, outputs)
logits = tf.matmul(output, w) + b
probs = tf.nn.softmax(logits)
cost = tf.reduce_mean(tf.nn.seq2seq.sequence_loss_by_example(
[logits], [y], [tf.ones_like(y, dtype=tf.float32)]
))
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
10.0)
train_op = optimizer.apply_gradients(zip(grads, tvars))
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
curr_state = sess.run(init_state)
for i in range(3000):
# Create toy data where the true class is the value represented
# by the current and previous value treated as binary, i.e.
train_x = np.random.randint(0,2,(T * batch_size * input_size))
train_y = train_x + np.concatenate(([0], (train_x[:-1] * 2)))
# Reshape into T x batch_size x input_size
train_x = np.reshape(train_x, (T, batch_size, input_size))
feed_dict = {
x: train_x, y: train_y
}
for j, (c, h) in enumerate(init_state):
feed_dict[c] = curr_state[j].c
feed_dict[h] = curr_state[j].h
fetch_dict = {
'cost': cost, 'final_state': final_state, 'train_op': train_op
}
# Evaluate the graph
fetches = sess.run(fetch_dict, feed_dict=feed_dict)
curr_state = fetches['final_state']
if i % 300 == 0:
print('step {}, train cost: {}'.format(i, fetches['cost']))
# Test
test_x = np.array([[0],[0],[1],[0],[1],[1]]*(T*batch_size*input_size))
test_x = test_x[:(T*batch_size*input_size),:]
probs_out = sess.run(probs, feed_dict={
x: np.reshape(test_x, [T, batch_size, input_size]),
init_state: curr_state
})
# Get the softmax outputs for the points in the sequence
# that have [0, 0], [0, 1], [1, 0], [1, 1] as their
# last two values.
for i in [1, 2, 3, 5]:
print('{}: [{:.4f} {:.4f} {:.4f} {:.4f}]'.format(
[1, 2, 3, 5].index(i), *list(probs_out[i,:]))
)
</code></pre>
<p>The final output here is</p>
<pre><code>0: [0.4899 0.0007 0.5080 0.0014]
1: [0.0003 0.5155 0.0009 0.4833]
2: [0.5078 0.0011 0.4889 0.0021]
3: [0.0003 0.5052 0.0009 0.4936]
</code></pre>
<p>which indicates that it is only learning to distinguish [0,2] from [1,3]. <strong>Why isn't this model learning to use the previous value in the sequence?</strong></p>
| 0 | 2016-10-13T20:00:36Z | 40,031,736 | <p>Figured it out, with the help of <a href="http://killianlevacher.github.io/blog/posts/post-2016-03-01/post.html" rel="nofollow">this blog post</a> (it has wonderful diagrams of the input tensors). It turns out that I was not understanding the shape of the inputs to <code>tf.nn.rnn()</code> correctly:</p>
<p>Let's say you've got <code>batch_size</code> number of sequences. Each sequence has <code>input_size</code> dimensions and has length <code>T</code> (these names were chosen to match the documentation of <code>tf.nn.rnn()</code> <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/nn.html#rnn" rel="nofollow">here</a>). Then you need to split your input into a <code>T</code>-length list where each element has shape <code>batch_size x input_size</code>. <em>This means that your contiguous sequence will be spread out across the elements of the <strong>list</strong></em>. I thought that contiguous sequences would be kept together so that each element of the list <code>inputs</code> would be an example of one sequence.</p>
<p>This makes sense in retrospect, since we wish to parallelize each step through the sequence, so we want to run do the first step of each sequence (first element in list), then second step of each sequence (second element in list), etc.</p>
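<p>The reshaping can be illustrated with plain NumPy (same shapes as in the question, no TensorFlow needed):</p>

```python
import numpy as np

# A batch of sequences: [batch_size, sequence_size, input_size]
batch_size, T, input_size = 7, 50, 1
x = np.arange(batch_size * T * input_size).reshape(batch_size, T, input_size)

# Mirror of tf.split(1, T, x) + tf.squeeze: a T-length list whose
# element t holds step t of *every* sequence, shape [batch_size, input_size].
inputs = [x[:, t, :] for t in range(T)]
print(len(inputs), inputs[0].shape)  # 50 (7, 1)
```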
<p>Working version of the code:</p>
<pre><code>import tensorflow as tf
import numpy as np
sequence_size = 50
batch_size = 7
num_features = 1
lstm_size = 5
lstm_layers = 2
num_classes = 4
learning_rate = 0.1
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size, state_is_tuple=True)
lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * lstm_layers, state_is_tuple=True)
x = tf.placeholder(tf.float32, [batch_size, sequence_size, num_features])
y = tf.placeholder(tf.int32, [batch_size * sequence_size * num_features])
init_state = lstm.zero_state(batch_size, tf.float32)
inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(1,sequence_size,x)]
outputs, final_state = tf.nn.rnn(lstm, inputs, initial_state=init_state)
w = tf.Variable(tf.truncated_normal([lstm_size, num_classes]), name='softmax_w')
b = tf.Variable(tf.truncated_normal([num_classes]), name='softmax_b')
output = tf.reshape(tf.concat(1, outputs), [-1, lstm_size])
logits = tf.matmul(output, w) + b
probs = tf.nn.softmax(logits)
cost = tf.reduce_mean(tf.nn.seq2seq.sequence_loss_by_example(
[logits], [y], [tf.ones_like(y, dtype=tf.float32)]
))
# Now optimize on that cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
10.0)
train_op = optimizer.apply_gradients(zip(grads, tvars))
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
curr_state = sess.run(init_state)
for i in range(3000):
# Create toy data where the true class is the value represented
# by the current and previous value treated as binary, i.e.
train_x = np.random.randint(0,2,(batch_size * sequence_size * num_features))
train_y = train_x + np.concatenate(([0], (train_x[:-1] * 2)))
        # Reshape into batch_size x sequence_size x num_features
train_x = np.reshape(train_x, [batch_size, sequence_size, num_features])
feed_dict = {
x: train_x, y: train_y
}
for j, (c, h) in enumerate(init_state):
feed_dict[c] = curr_state[j].c
feed_dict[h] = curr_state[j].h
fetch_dict = {
'cost': cost, 'final_state': final_state, 'train_op': train_op
}
# Evaluate the graph
fetches = sess.run(fetch_dict, feed_dict=feed_dict)
curr_state = fetches['final_state']
if i % 300 == 0:
print('step {}, train cost: {}'.format(i, fetches['cost']))
# Test
test_x = np.array([[0],[0],[1],[0],[1],[1]]*(batch_size * sequence_size * num_features))
test_x = test_x[:(batch_size * sequence_size * num_features),:]
probs_out = sess.run(probs, feed_dict={
x: np.reshape(test_x, [batch_size, sequence_size, num_features]),
init_state: curr_state
})
# Get the softmax outputs for the points in the sequence
# that have [0, 0], [0, 1], [1, 0], [1, 1] as their
# last two values.
for i in [1, 2, 3, 5]:
print('{}: [{:.4f} {:.4f} {:.4f} {:.4f}]'.format(
[1, 2, 3, 5].index(i), *list(probs_out[i,:]))
)
</code></pre>
| 0 | 2016-10-13T22:00:25Z | [
"python",
"tensorflow",
"recurrent-neural-network"
] |
redirect output of ipython script into a csv or text file like sqlplus spool | 40,029,938 | <p>I try to redirect the output of my script to a file.
I don't want to do something like</p>
<pre><code>python myscript.py > xy.out
</code></pre>
<p>as a lot of the variables are stored in my ipython environment and I'd like to carry them over.</p>
<p>I try to follow this link </p>
<p><a href="http://stackoverflow.com/questions/14571090/ipython-redirecting-output-of-a-python-script-to-a-file-like-bash">IPython: redirecting output of a Python script to a file (like bash >)</a></p>
<p>however, when I try to do </p>
<pre><code>with redirect_output("my_output.txt"):
%run my_script.py
</code></pre>
<p>It gives the error </p>
<pre><code>---> 10 self.sys_stdout = sys.stdout
NameError: global name 'sys' is not defined
</code></pre>
<p>There is a similar solution to copy the output of an IPython shell to a file, but it says my cell is not defined</p>
<p><a href="https://www.quora.com/How-do-I-export-the-output-of-the-IPython-command-to-a-text-file-or-a-CSV-file" rel="nofollow">https://www.quora.com/How-do-I-export-the-output-of-the-IPython-command-to-a-text-file-or-a-CSV-file</a></p>
<p>Is there an easier way or a built-in feature of ipython that does that?
e.g.</p>
<p>In oracle sqlplus there is a spool command
e.g.</p>
<pre><code>spool /tmp/abc
select * from table_x;
spool off
</code></pre>
<p>now the sql statement output is in /tmp/abc</p>
<p>Is there such an equivalent for ipython?</p>
| 0 | 2016-10-13T20:02:20Z | 40,048,677 | <p>Greetings. I ended up using this solution:</p>
<pre><code>%%capture var2
%run -I script2
import sys
orig_stdout = sys.stdout
f = open('out5.txt', 'w')
sys.stdout = f            # send prints to the file
print var2                # write the captured output
sys.stdout = orig_stdout  # restore stdout before closing the file
f.close()
</code></pre>
<p>It is not very handy but it works!</p>
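<p>As an aside, in plain Python 3 the same "spool to a file" effect can be had with <code>contextlib.redirect_stdout</code>, which saves and restores <code>sys.stdout</code> for you (a hedged sketch; the filename is just an example):</p>

```python
import contextlib

with open('out5.txt', 'w') as f, contextlib.redirect_stdout(f):
    print('this line goes to the file, not the screen')
```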
| 0 | 2016-10-14T17:17:20Z | [
"python",
"ipython"
] |
regular expression does not replace hyphens with dots | 40,029,957 | <p>For the following text:</p>
<pre><code>comment = """ I took the pill - I realized only side-effect after I went off it how it affected my eating habits - I put on weight - around 10 lbs - in the 2.5 months on it - no control and syndrome - this was counterproductive !"""
</code></pre>
<p>I wrote regular expression to <code>replace hyphen (-) with dot (.)</code></p>
<pre><code>comment = re.sub (r'(w+\s+)(-)(\s+\w+)', r'\1\. \3 ', comment )
</code></pre>
<p>But it does not work.</p>
<p>I do not want the hyphen between two words such as side-effect replaced with a dot.
That's why I cannot use <code>comment.replace ('-', '.')</code></p>
<p>Any Suggestion ?</p>
| 0 | 2016-10-13T20:03:41Z | 40,029,987 | <p>You could also use the <code>str.replace</code> method</p>
<pre><code>comment.replace('-', '...')
</code></pre>
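<p>Since the question notes that a plain <code>replace</code> would also break compound words like <code>side-effect</code>, a hedged regex sketch that touches only standalone hyphens (those with whitespace on both sides) could look like this:</p>

```python
import re

comment = "I took the pill - only side-effect after - no control"
# lookarounds match a hyphen surrounded by whitespace without consuming the spaces
result = re.sub(r'(?<=\s)-(?=\s)', '.', comment)
print(result)  # I took the pill . only side-effect after . no control
```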
| 4 | 2016-10-13T20:05:32Z | [
"python",
"regex"
] |
Compare two files for differences in python | 40,029,985 | <p>I want to compare two files (take each line from the first file and look it up in the whole second file) to see the differences between them, and write the missing lines from fileA.txt to the end of fileB.txt. I am new to Python, so at first I thought about a simple program like this:</p>
<pre><code>import difflib
file1 = "fileA.txt"
file2 = "fileB.txt"
diff = difflib.ndiff(open(file1).readlines(),open(file2).readlines())
print ''.join(diff),
</code></pre>
<p>but as a result I got a combination of the two files with a tag on each line. I know that I can look for lines starting with the "-" tag and then write them to the end of fileB.txt, but with a huge file (~100 MB) this method will be inefficient. Can somebody help me improve the program?</p>
<p>File structure will be like this:</p>
<p>input:</p>
<p>fileA.txt</p>
<pre><code>Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:46:58 user sshd[12844]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:46:58 user sshd[12844]: pam_unix(sshd:session): session closed for user root
Oct 9 15:47:58 user sshd[12868]: pam_unix(sshd:session): session closed for user root
Oct 11 22:17:31 user sshd[2655]: Accepted password for root from 17X.XXX.XXX.X19 port 5567 ssh2
Oct 11 22:17:31 user sshd[2655]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
<p>fileB.txt</p>
<pre><code> Oct 9 12:19:16 user sshd[12744]: Accepted password for root from 213.XXX.XXX.XX7 port 60554 ssh2
Oct 9 12:19:16 user sshd[12744]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:24:42 user sshd[12744]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:24:42 user sshd[12744]: pam_unix(sshd:session): session closed for user root
Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
<p>Output:</p>
<p>fileB_after.txt</p>
<pre><code>Oct 9 12:19:16 user sshd[12744]: Accepted password for root from 213.XXX.XXX.XX7 port 60554 ssh2
Oct 9 12:19:16 user sshd[12744]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:24:42 user sshd[12744]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:24:42 user sshd[12744]: pam_unix(sshd:session): session closed for user root
Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:46:58 user sshd[12844]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:46:58 user sshd[12844]: pam_unix(sshd:session): session closed for user root
Oct 9 15:47:58 user sshd[12868]: pam_unix(sshd:session): session closed for user root
Oct 11 22:17:31 user sshd[2655]: Accepted password for root from 17X.XXX.XXX.X19 port 5567 ssh2
Oct 11 22:17:31 user sshd[2655]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
| 0 | 2016-10-13T20:05:24Z | 40,030,529 | <p>Try with this in the <code>bash</code>:</p>
<pre><code>cat fileA.txt fileB.txt | sort -M | uniq > new_file.txt
</code></pre>
<p><strong><a href="http://ss64.com/bash/sort.html" rel="nofollow">sort -M</a>:</strong>
sorts on an initial string consisting of any amount of whitespace followed
by a month-name abbreviation; names are folded to upper case and compared
in the order 'JAN' < 'FEB' < ... < 'DEC'. Invalid names compare
lower than valid names. The `LC_TIME' locale determines the month
spellings.</p>
<p><strong>uniq:</strong> filters out repeated lines in a file.</p>
<p><strong>|:</strong> passes the output of one command to another for further processing.</p>
<p>What this will do is take the two files, sort them in the way described above, keep the unique items and store them in <code>new_file.txt</code></p>
<p><strong>Note:</strong> This is not a python solution but you have tagged the question with <code>linux</code> so I thought it might interest you. Also you can find more detailed info about the commands used, <a href="http://www.computerhope.com/unix.htm" rel="nofollow">here</a>. </p>
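<p>Since the question is tagged <code>python</code>: if both logs are already in chronological order, a memory-friendly sketch can merge them lazily with <code>heapq.merge</code> (Python 3.5+ for the <code>key</code> argument). <code>merge_logs</code> and the file names are hypothetical, and the timestamp format is assumed to be syslog-style <code>Mon DD HH:MM:SS</code>:</p>

```python
import heapq
from datetime import datetime

def merge_logs(path_a, path_b, path_out):
    """Merge two time-sorted syslog-style files into one, dropping duplicate lines."""
    def ts(line):
        # parse the leading "Mon DD HH:MM:SS" timestamp (the year defaults to 1900)
        return datetime.strptime(' '.join(line.split()[:3]), '%b %d %H:%M:%S')

    seen = set()  # holds every unique line; fine for a sketch, heavy for very large logs
    with open(path_a) as fa, open(path_b) as fb, open(path_out, 'w') as out:
        for line in heapq.merge(fa, fb, key=ts):
            if line not in seen:
                seen.add(line)
                out.write(line)
```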
| 1 | 2016-10-13T20:37:35Z | [
"python",
"linux",
"diff"
] |
Compare two files for differences in python | 40,029,985 | <p>I want to compare two files (take each line from the first file and look it up in the whole second file) to see the differences between them, and write the missing lines from fileA.txt to the end of fileB.txt. I am new to Python, so at first I thought about a simple program like this:</p>
<pre><code>import difflib
file1 = "fileA.txt"
file2 = "fileB.txt"
diff = difflib.ndiff(open(file1).readlines(),open(file2).readlines())
print ''.join(diff),
</code></pre>
<p>but as a result I got a combination of the two files with a tag on each line. I know that I can look for lines starting with the "-" tag and then write them to the end of fileB.txt, but with a huge file (~100 MB) this method will be inefficient. Can somebody help me improve the program?</p>
<p>File structure will be like this:</p>
<p>input:</p>
<p>fileA.txt</p>
<pre><code>Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:46:58 user sshd[12844]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:46:58 user sshd[12844]: pam_unix(sshd:session): session closed for user root
Oct 9 15:47:58 user sshd[12868]: pam_unix(sshd:session): session closed for user root
Oct 11 22:17:31 user sshd[2655]: Accepted password for root from 17X.XXX.XXX.X19 port 5567 ssh2
Oct 11 22:17:31 user sshd[2655]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
<p>fileB.txt</p>
<pre><code> Oct 9 12:19:16 user sshd[12744]: Accepted password for root from 213.XXX.XXX.XX7 port 60554 ssh2
Oct 9 12:19:16 user sshd[12744]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:24:42 user sshd[12744]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:24:42 user sshd[12744]: pam_unix(sshd:session): session closed for user root
Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
<p>Output:</p>
<p>fileB_after.txt</p>
<pre><code>Oct 9 12:19:16 user sshd[12744]: Accepted password for root from 213.XXX.XXX.XX7 port 60554 ssh2
Oct 9 12:19:16 user sshd[12744]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:24:42 user sshd[12744]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:24:42 user sshd[12744]: pam_unix(sshd:session): session closed for user root
Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:46:58 user sshd[12844]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:46:58 user sshd[12844]: pam_unix(sshd:session): session closed for user root
Oct 9 15:47:58 user sshd[12868]: pam_unix(sshd:session): session closed for user root
Oct 11 22:17:31 user sshd[2655]: Accepted password for root from 17X.XXX.XXX.X19 port 5567 ssh2
Oct 11 22:17:31 user sshd[2655]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
| 0 | 2016-10-13T20:05:24Z | 40,030,592 | <p>Read in the two files and convert each to a set,<br>
find the union of the two sets,<br>
sort the union based on time,<br>
and join the result into a single newline-separated string:</p>
<pre><code>import datetime
file1 = "fileA.txt"
file2 = "fileB.txt"
with open(file1, 'rb') as f:
    sa = set(line.rstrip() for line in f)
with open(file2, 'rb') as f:
    sb = set(line.rstrip() for line in f)
print '\n'.join( sorted( sa.union(sb), key = lambda x: datetime.datetime.strptime( ' '.join( x.split()[:3]), '%b %d %H:%M:%S' )) )
Oct 9 12:19:16 user sshd[12744]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 12:19:16 user sshd[12744]: Accepted password for root from 213.XXX.XXX.XX7 port 60554 ssh2
Oct 9 13:24:42 user sshd[12744]: pam_unix(sshd:session): session closed for user root
Oct 9 13:24:42 user sshd[12744]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 13:25:31 user sshd[12844]: Accepted password for root from 213.XXX.XXX.XX7 port 33254 ssh2
Oct 9 13:25:31 user sshd[12844]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:35:48 user sshd[12868]: Accepted password for root from 213.XXX.XXX.XX7 port 33574 ssh2
Oct 9 13:35:48 user sshd[12868]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 9 13:46:58 user sshd[12844]: pam_unix(sshd:session): session closed for user root
Oct 9 13:46:58 user sshd[12844]: Received disconnect from 213.XXX.XXX.XX7: 11: disconnected by user
Oct 9 15:47:58 user sshd[12868]: pam_unix(sshd:session): session closed for user root
Oct 11 22:17:31 user sshd[2655]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 11 22:17:31 user sshd[2655]: Accepted password for root from 17X.XXX.XXX.X19 port 5567 ssh2
</code></pre>
| 1 | 2016-10-13T20:41:57Z | [
"python",
"linux",
"diff"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code># make a bag of words
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] # preallocate the necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,147 | <pre><code>from collections import Counter
counter = Counter(Word_array)
the_count_of_some_word = counter["some_word"]
#printing the counts
for word, count in counter.items():
    print("{} appears {} times.".format(word, count))
</code></pre>
| 1 | 2016-10-13T20:15:00Z | [
"python",
"string",
"counter"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code># make a bag of words
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] # preallocate the necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,177 | <p>Building on the suggestion from @jonrsharpe...</p>
<pre><code>from collections import Counter
words = Counter()
words['foo'] += 1
words['foo'] += 1
words['bar'] += 1
</code></pre>
<p>Output</p>
<pre><code>Counter({'bar': 1, 'foo': 2})
</code></pre>
<p>It's really convenient because you don't have to initialize words.</p>
<p>You can also initialize directly from a list of words:</p>
<pre><code>Counter(['foo', 'foo', 'bar'])
</code></pre>
<p>Output</p>
<pre><code>Counter({'bar': 1, 'foo': 2})
</code></pre>
| 0 | 2016-10-13T20:16:23Z | [
"python",
"string",
"counter"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code># make a bag of words
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] # preallocate the necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,182 | <p>I don't know about most 'Pythonic' but definitely the easiest way of doing this would be to use <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">collections.Counter</a>.</p>
<pre><code>from collections import Counter
Word_array = ["word1", "word2", "word3", "word1", "word2", "word1"]
wordBag = Counter(Word_array).items()
</code></pre>
| -1 | 2016-10-13T20:16:32Z | [
"python",
"string",
"counter"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code># make a bag of words
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] # preallocate the necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,202 | <p>If you want a less efficient (than <code>Counter</code>), but more transparent solution, you can use <code>collections.defaultdict</code></p>
<pre><code>from collections import defaultdict
my_counter = defaultdict(int)
for word in word_array:
my_counter[word] += 1
</code></pre>
| -1 | 2016-10-13T20:17:45Z | [
"python",
"string",
"counter"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code>#make bag of worsds
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] #prealocate necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,349 | <p>Python lists have a built-in <code>count</code> method. For example:</p>
<pre><code>>>> h = ["a", "b", "a", "a", "c"]
>>> h.count("a")
3
>>>
</code></pre>
<p>So, you could make it more efficient by doing something like:</p>
<pre><code>Word_arrayU = np.unique(Word_array)
wordBag = []
for uniqueWord in Word_arrayU:
wordBag.append([uniqueWord, Word_array.count(uniqueWord)])
</code></pre>
| 0 | 2016-10-13T20:26:07Z | [
"python",
"string",
"counter"
] |
"pythonic" way to fill bag of words | 40,029,996 | <p>I've got a list of words, about 273000 of them in the list <code>Word_array</code>
There are about 17000 unique words, and they're stored in <code>Word_arrayU</code></p>
<p>I want a count for each one</p>
<pre><code># make a bag of words
Word_arrayU = np.unique(Word_array)
wordBag = [['0','0'] for _ in range(len(Word_array))] # preallocate the necessary space
i=0
while i< len(Word_arrayU): #for each unique word
wordBag[i][0] = Word_arrayU[i]
#I think this is the part that takes a long time. summing up a list comprehension with a conditional. Just seems sloppy
wordBag[i][1]=sum([1 if x == Word_arrayU[i] else 0 for x in Word_array])
i=i+1
</code></pre>
<p>summing up a list comprehension with a conditional. Just seems sloppy; is there a better way to do it?</p>
| 0 | 2016-10-13T20:05:57Z | 40,030,369 | <p>Since you are already using <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html" rel="nofollow"><em>numpy.unique</em></a>, just set <em>return_counts=True</em> in the unique call:</p>
<pre><code>import numpy as np
unique, count = np.unique(Word_array, return_counts=True)
</code></pre>
<p>That will give you two arrays, the unique elements and their counts:</p>
<pre><code>In [10]: arr = [1,3,2,11,3,4,5,2,3,4]
In [11]: unique, count = np.unique(arr, return_counts=True)
In [12]: unique
Out[12]: array([ 1, 2, 3, 4, 5, 11])
In [13]: count
Out[13]: array([1, 2, 3, 2, 1, 1])
</code></pre>
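<p>To pair each word with its count (the <code>wordBag</code> layout from the question), the two arrays can simply be zipped together, e.g.:</p>

```python
import numpy as np

words = np.array(['a', 'b', 'a', 'c', 'b', 'a'])
unique, count = np.unique(words, return_counts=True)
# np.unique returns the words sorted; zip pairs each word with its count
word_bag = list(zip(unique.tolist(), count.tolist()))
print(word_bag)  # [('a', 3), ('b', 2), ('c', 1)]
```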
| 1 | 2016-10-13T20:28:10Z | [
"python",
"string",
"counter"
] |
Value won't increment inside while loop | 40,030,338 | <p>my problem is that the 'month' value increments once to month = 1, then stays there the whole time, causing an infinite loop. How do I get this to change every time through the loop? I know I'm probably missing something extremely simple. </p>
<pre><code>def rem_bal(balance, annualInterestRate, monthlyInterestRate):
month = 0
while month <= 12:
monthly_interest = (annualInterestRate) / 12.0
minimum_monthly = (monthlyInterestRate) * balance
monthly_unpaid= (balance) - (minimum_monthly)
updated_balance = round(((monthly_unpaid) + (monthly_interest * monthly_unpaid)), 2)
month =+ 1
print("Month " + str(month) + "Remaining balance: " + str(updated_balance) + " .")
balance = updated_balance
return balance
</code></pre>
| 0 | 2016-10-13T20:25:45Z | 40,030,361 | <pre><code>month += 1
</code></pre>
<p>not </p>
<pre><code>month = +1
</code></pre>
<p>which is just </p>
<pre><code>month = 1
</code></pre>
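<p>A tiny demonstration of why the two spellings behave differently:</p>

```python
month = 5
month =+ 1   # parsed as: month = (+1), i.e. plain assignment of positive one
wrong = month

month = 5
month += 1   # augmented assignment: month = month + 1
right = month
```

<p>After the first block <code>month</code> is 1, after the second it is 6.</p>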
| 4 | 2016-10-13T20:27:27Z | [
"python",
"python-3.x",
"while-loop"
] |
Value won't increment inside while loop | 40,030,338 | <p>my problem is that the 'month' value increments once to month = 1, then stays there the whole time, causing an infinite loop. How do I get this to change every time through the loop? I know I'm probably missing something extremely simple. </p>
<pre><code>def rem_bal(balance, annualInterestRate, monthlyInterestRate):
month = 0
while month <= 12:
monthly_interest = (annualInterestRate) / 12.0
minimum_monthly = (monthlyInterestRate) * balance
monthly_unpaid= (balance) - (minimum_monthly)
updated_balance = round(((monthly_unpaid) + (monthly_interest * monthly_unpaid)), 2)
month =+ 1
print("Month " + str(month) + "Remaining balance: " + str(updated_balance) + " .")
balance = updated_balance
return balance
</code></pre>
| 0 | 2016-10-13T20:25:45Z | 40,030,373 | <p>It needs to be <code>month += 1</code> not <code>month =+ 1</code>; the latter is just plain assignment rather than incrementing the value of <code>month</code> (i.e., assigning <code>month</code> to <code>+1</code>/<code>1</code>).</p>
| 0 | 2016-10-13T20:28:13Z | [
"python",
"python-3.x",
"while-loop"
] |
Value won't increment inside while loop | 40,030,338 | <p>my problem is that the 'month' value increments once to month = 1, then stays there the whole time, causing an infinite loop. How do I get this to change every time through the loop? I know I'm probably missing something extremely simple. </p>
<pre><code>def rem_bal(balance, annualInterestRate, monthlyInterestRate):
month = 0
while month <= 12:
monthly_interest = (annualInterestRate) / 12.0
minimum_monthly = (monthlyInterestRate) * balance
monthly_unpaid= (balance) - (minimum_monthly)
updated_balance = round(((monthly_unpaid) + (monthly_interest * monthly_unpaid)), 2)
month =+ 1
print("Month " + str(month) + "Remaining balance: " + str(updated_balance) + " .")
balance = updated_balance
return balance
</code></pre>
| 0 | 2016-10-13T20:25:45Z | 40,030,933 | <p>BTW, this is not how you write code in Python.
Why parentheses around almost everything?
Why recalculate monthly_interest over and over when it doesn't change?
A while loop isn't the pythonic choice here; you should rather use </p>
<pre><code>for month in range(13):
</code></pre>
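<p>For reference, a cleaned-up sketch of the question's function using a <code>for</code> loop; the parameter names are my guesses at what the original arguments meant:</p>

```python
def rem_bal(balance, annual_interest_rate, monthly_payment_rate):
    monthly_interest = annual_interest_rate / 12.0  # constant, so compute it once
    for month in range(1, 13):
        unpaid = balance - monthly_payment_rate * balance
        balance = round(unpaid * (1 + monthly_interest), 2)
        print("Month {}: remaining balance {}".format(month, balance))
    return balance
```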
| 0 | 2016-10-13T21:02:30Z | [
"python",
"python-3.x",
"while-loop"
] |
Value won't increment inside while loop | 40,030,338 | <p>my problem is that the 'month' value increments once to month = 1, then stays there the whole time, causing an infinite loop. How do I get this to change every time through the loop? I know I'm probably missing something extremely simple. </p>
<pre><code>def rem_bal(balance, annualInterestRate, monthlyInterestRate):
month = 0
while month <= 12:
monthly_interest = (annualInterestRate) / 12.0
minimum_monthly = (monthlyInterestRate) * balance
monthly_unpaid= (balance) - (minimum_monthly)
updated_balance = round(((monthly_unpaid) + (monthly_interest * monthly_unpaid)), 2)
month =+ 1
print("Month " + str(month) + "Remaining balance: " + str(updated_balance) + " .")
balance = updated_balance
return balance
</code></pre>
| 0 | 2016-10-13T20:25:45Z | 40,032,005 | <p><code>month = month + 1</code> - tried this and it works.</p>
| 0 | 2016-10-13T22:22:17Z | [
"python",
"python-3.x",
"while-loop"
] |
Embedding a range in a pandas series | 40,030,362 | <p>I have a table of 14 columns and I want to pull select ones into a new dataframe.
Let's say I want column 0 then column 8-14</p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count,8], \
df.iloc[row_count,9], \
....
</code></pre>
<p>Works but seems clumsy</p>
<p>I'd like to write </p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count, range (8, 14)]])
</code></pre>
<p>But this throws a ValueError: Wrong number of items passed</p>
<p>Now, from the answer below, I know I can create two separate sereis and concatenate them, but that seems a little sub-optimal as well. </p>
<p><a href="https://stackoverflow.com/questions/12504493/adding-pandas-series-with-different-indices-without-getting-nans">Adding pandas Series with different indices without getting NaNs</a></p>
| 2 | 2016-10-13T20:27:31Z | 40,030,475 | <p>Is that what you want?</p>
<pre><code>In [52]: df = pd.DataFrame(np.arange(30).reshape(5,6), columns=list('abcdef'))
In [53]: df
Out[53]:
a b c d e f
0 0 1 2 3 4 5
1 6 7 8 9 10 11
2 12 13 14 15 16 17
3 18 19 20 21 22 23
4 24 25 26 27 28 29
In [54]: df[[0,2,4]]
Out[54]:
a c e
0 0 2 4
1 6 8 10
2 12 14 16
3 18 20 22
4 24 26 28
</code></pre>
<p>concatenating (reshaping) columns <code>0</code>,<code>2</code>,<code>4</code> into single series:</p>
<pre><code>In [68]: df[[0,2,4]].values.T.reshape(-1,)
Out[68]: array([ 0, 6, 12, 18, 24, 2, 8, 14, 20, 26, 4, 10, 16, 22, 28])
In [69]: pd.Series(df[[0,2,4]].values.T.reshape(-1,))
Out[69]:
0 0
1 6
2 12
3 18
4 24
5 2
6 8
7 14
8 20
9 26
10 4
11 10
12 16
13 22
14 28
dtype: int32
</code></pre>
| 1 | 2016-10-13T20:34:10Z | [
"python",
"pandas"
] |
Embedding a range in a pandas series | 40,030,362 | <p>I have a table of 14 columns and I want to pull select ones into a new dataframe.
Let's say I want column 0 then column 8-14</p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count,8], \
df.iloc[row_count,9], \
....
</code></pre>
<p>Works but seems clumsy</p>
<p>I'd like to write </p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count, range (8, 14)]])
</code></pre>
<p>But this throws a ValueError: Wrong number of items passed</p>
<p>Now, from the answer below, I know I can create two separate sereis and concatenate them, but that seems a little sub-optimal as well. </p>
<p><a href="https://stackoverflow.com/questions/12504493/adding-pandas-series-with-different-indices-without-getting-nans">Adding pandas Series with different indices without getting NaNs</a></p>
| 2 | 2016-10-13T20:27:31Z | 40,030,589 | <p>I think you can convert all values to <code>lists</code> and then create <code>Series</code>, but then lost indices:</p>
<pre><code>df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'C':[7,8,9],
'D':[1,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df)
A B C D E F
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
row_count = 1
print (df.iloc[row_count, range (2, 4)])
C 8
D 3
Name: 1, dtype: int64
dfnow = pd.Series([df.iloc[row_count,0]] + df.iloc[row_count, range (2, 4)].tolist())
print (dfnow)
0 2
1 8
2 3
dtype: int64
</code></pre>
<hr>
<p>Or you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>, then indices are column names:</p>
<pre><code>row_count = 1
a = df.iloc[row_count, range (2, 4)]
b = df.iloc[row_count, range (4, 6)]
print (a)
C 8
D 3
Name: 1, dtype: int64
print (b)
E 3
F 4
Name: 1, dtype: int64
print (pd.concat([a,b]))
C 8
D 3
E 3
F 4
Name: 1, dtype: int64
</code></pre>
<p>But if you need to add a scalar (<code>a</code>) as well, it is a bit more complicated - you need a <code>Series</code>:</p>
<pre><code>row_count = 1
a = pd.Series(df.iloc[row_count, 0], index=[df.columns[0]])
b = df.iloc[row_count, range (2, 4)]
c = df.iloc[row_count, range (4, 6)]
print (a)
A 2
dtype: int64
print (b)
C 8
D 3
Name: 1, dtype: int64
print (c)
E 3
F 4
Name: 1, dtype: int64
print (pd.concat([a,b,c]))
A 2
C 8
D 3
E 3
F 4
dtype: int64
</code></pre>
| 0 | 2016-10-13T20:41:37Z | [
"python",
"pandas"
] |
Embedding a range in a pandas series | 40,030,362 | <p>I have a table of 14 columns and I want to pull select ones into a new dataframe.
Let's say I want column 0, then columns 8-14.</p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count,8], \
df.iloc[row_count,9], \
....
</code></pre>
<p>This works, but it seems clumsy.</p>
<p>I'd like to write </p>
<pre><code> dfnow = pd.Series([df.iloc[row_count,0], \
df.iloc[row_count, range (8, 14)]])
</code></pre>
<p>But this throws a ValueError: Wrong number of items passed</p>
<p>Now, from the answer below, I know I can create two separate series and concatenate them, but that seems a little sub-optimal as well. </p>
<p><a href="https://stackoverflow.com/questions/12504493/adding-pandas-series-with-different-indices-without-getting-nans">Adding pandas Series with different indices without getting NaNs</a></p>
| 2 | 2016-10-13T20:27:31Z | 40,031,043 | <p>consider the <code>df</code></p>
<pre><code>from string import ascii_uppercase
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(150).reshape(-1, 15),
columns=list(ascii_uppercase[:15]))
df
</code></pre>
<p><a href="https://i.stack.imgur.com/84S4N.png" rel="nofollow"><img src="https://i.stack.imgur.com/84S4N.png" alt="enter image description here"></a></p>
<p>use <code>np.r_</code> to construct the index array necessary for the slice you want</p>
<pre><code>np.r_[0, 8:14]
array([ 0, 8, 9, 10, 11, 12, 13])
</code></pre>
<p>then slice</p>
<pre><code>df.iloc[:, np.r_[0, 8:14]]
</code></pre>
<p><a href="https://i.stack.imgur.com/RBwdB.png" rel="nofollow"><img src="https://i.stack.imgur.com/RBwdB.png" alt="enter image description here"></a></p>
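<p>For reference, the same <code>np.r_</code> trick works on a plain NumPy array as well, since it just builds a flat index array; a minimal runnable sketch (the 4x15 array below is made up for illustration):</p>

```python
import numpy as np

# Small 4x15 array standing in for the DataFrame's values (made-up data)
data = np.arange(60).reshape(-1, 15)

# np.r_ concatenates a scalar and a slice into one flat index array
cols = np.r_[0, 8:14]
print(cols)             # [ 0  8  9 10 11 12 13]

# Same idea as df.iloc[:, np.r_[0, 8:14]], applied to the raw array
subset = data[:, cols]
print(subset.shape)     # (4, 7)
```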
| 1 | 2016-10-13T21:09:08Z | [
"python",
"pandas"
] |
Speeding up Numpy Masking | 40,030,448 | <p>I'm still an amateur when it comes to thinking about how to optimize. I have this section of code that takes in a list of found peaks and finds where these peaks, +/- some value, are located in a multidimensional array. It then adds +1 at those indices of a zeros array. The code works well, but it takes a long time to execute. For instance, it takes close to 45 min to run if <code>ind</code> has 270 values and <code>refVals</code> has a shape of (3050,3130,80). I understand that it's a lot of data to churn through, but is there a more efficient way of going about this? </p>
<pre><code>maskData = np.zeros_like(refVals).astype(np.int16)
for peak in ind:
tmpArr = np.ma.masked_outside(refVals,x[peak]-2,x[peak]+2).astype(np.int16)
maskData[tmpArr.mask == False ] += 1
tmpArr = None
maskData = np.sum(maskData,axis=2)
</code></pre>
| 2 | 2016-10-13T20:32:40Z | 40,031,225 | <p><strong>Approach #1 :</strong> Memory permitting, here's a vectorized approach using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p>
<pre><code># Create +/-2 limits using ind
r = x[ind[:,None]] + [-2,2]
# Use limits to get inside matches and sum over the iterative and last dim
mask = (refVals >= r[:,None,None,None,0]) & (refVals <= r[:,None,None,None,1])
out = mask.sum(axis=(0,3))
</code></pre>
<hr>
<p><strong>Approach #2 :</strong> If running out of memory with the previous one, we could use a loop and use NumPy boolean arrays and that could be more efficient than masked arrays. Also, we would perform one more level of <code>sum-reduction</code>, so that we would be dragging less data with us when moving across iterations. Thus, the alternative implementation would look something like this -</p>
<pre><code>out = np.zeros(refVals.shape[:2]).astype(np.int16)
x_ind = x[ind]
for i in x_ind:
out += ((refVals >= i-2) & (refVals <= i+2)).sum(-1)
</code></pre>
<p><strong>Approach #3 :</strong> Alternatively, we could replace that limit based comparison with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.isclose.html" rel="nofollow"><code>np.isclose</code></a> in approach #2. Thus, the only step inside the loop would become -</p>
<pre><code>out += np.isclose(refVals,i,atol=2).sum(-1)
</code></pre>
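<p>Here is a self-contained toy run of the loop from approach #3 (the array shape, seed and peak values are invented for the demo; the real <code>refVals</code> is far larger):</p>

```python
import numpy as np

rng = np.random.RandomState(0)
refVals = rng.uniform(0, 10, size=(4, 5, 3))  # tiny stand-in for (3050, 3130, 80)
x_ind = np.array([2.0, 7.0])                  # stand-in for x[ind]

out = np.zeros(refVals.shape[:2], dtype=np.int16)
for i in x_ind:
    # count entries along the last axis within atol=2 of the peak value
    out += np.isclose(refVals, i, atol=2).sum(-1)

print(out.shape)   # (4, 5)
```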
| 2 | 2016-10-13T21:22:32Z | [
"python",
"arrays",
"performance",
"numpy",
"masking"
] |
numpy.savetxt- Save one column as int and the rest as floats? | 40,030,481 | <p><strong>The Problem</strong></p>
<p>So I have a 2D array (151 rows, 52 columns) I'd like to save as a text file using np.savetxt. However, I want the first column's numbers to save as integers (1950, 1951, etc) while the rest of the data saves as precision 5 (4 if rounded) floating point numbers (2.7419, 2.736, etc). I can't figure out how to do this.</p>
<p><strong>The Code</strong></p>
<p>When I print the first 4 rows & 3 columns of the output of the array, it looks like this.</p>
<pre><code>[[ 1950.   2.7407  2.7396]
 [ 1951.   2.7419  2.736 ]
 [ 1952.   2.741   2.7374]
 [ 1953.   2.7417  2.7325]]
</code></pre>
<p>When I use the following...</p>
<pre><code>np.savetxt('array.txt',data,fmt="%1.4f")
</code></pre>
<p>The array saves the first column as precision 5 floating point numbers like the rest of the data (1950.0000, 1951.0000, etc). When I try to specify different formats as such...</p>
<pre><code>np.savetxt('array.txt',data,fmt="%i %1.4f")
</code></pre>
<p>I get the following error: "ValueError: fmt has wrong number of % formats: %i %1.4f"</p>
<p><strong>The Question</strong></p>
<p>Is there a way I say save the first column as integers and the rest of the columns as floating point numbers?</p>
| 1 | 2016-10-13T20:34:33Z | 40,030,726 | <p><code>data</code> has 3 columns, so you need supply 3 <code>'%format'</code>s. For example:</p>
<pre><code>np.savetxt('array.txt', data, fmt='%i %1.4f %1.4f')
</code></pre>
<p>should work. If you have a lot more than 3 columns, you can try something like:</p>
<pre><code>np.savetxt('array.txt', data, fmt=' '.join(['%i'] + ['%1.4f']*N))
</code></pre>
<p>where <code>N</code> is the number of columns needing float formatting.</p>
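<p>Putting it together, a small self-contained demonstration (writing to an in-memory buffer instead of a file; the data values are taken from the question):</p>

```python
import io
import numpy as np

data = np.array([[1950., 2.7407, 2.7396],
                 [1951., 2.7419, 2.7360]])

# One '%i' for the year column, '%1.4f' for every remaining column
fmt = ' '.join(['%i'] + ['%1.4f'] * (data.shape[1] - 1))

buf = io.StringIO()
np.savetxt(buf, data, fmt=fmt)
print(buf.getvalue())
# 1950 2.7407 2.7396
# 1951 2.7419 2.7360
```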
| 2 | 2016-10-13T20:50:02Z | [
"python",
"arrays",
"numpy",
"text-files",
"number-formatting"
] |
numpy.savetxt- Save one column as int and the rest as floats? | 40,030,481 | <p><strong>The Problem</strong></p>
<p>So I have a 2D array (151 rows, 52 columns) I'd like to save as a text file using np.savetxt. However, I want the first column's numbers to save as integers (1950, 1951, etc) while the rest of the data saves as precision 5 (4 if rounded) floating point numbers (2.7419, 2.736, etc). I can't figure out how to do this.</p>
<p><strong>The Code</strong></p>
<p>When I print the first 4 rows & 3 columns of the output of the array, it looks like this.</p>
<pre><code>[[ 1950.   2.7407  2.7396]
 [ 1951.   2.7419  2.736 ]
 [ 1952.   2.741   2.7374]
 [ 1953.   2.7417  2.7325]]
</code></pre>
<p>When I use the following...</p>
<pre><code>np.savetxt('array.txt',data,fmt="%1.4f")
</code></pre>
<p>The array saves the first column as precision 5 floating point numbers like the rest of the data (1950.0000, 1951.0000, etc). When I try to specify different formats as such...</p>
<pre><code>np.savetxt('array.txt',data,fmt="%i %1.4f")
</code></pre>
<p>I get the following error: "ValueError: fmt has wrong number of % formats: %i %1.4f"</p>
<p><strong>The Question</strong></p>
<p>Is there a way to save the first column as integers and the rest of the columns as floating point numbers?</p>
| 1 | 2016-10-13T20:34:33Z | 40,030,731 | <p>Your <code>fmt</code> parameter needs to have the same number of <code>%</code> formats as the columns you are trying to format. You are trying to format 3 columns but only giving it 2 formats. </p>
<p>Try changing your <code>np.savetxt(...)</code> to</p>
<pre><code>np.savetxt('array.txt',data,fmt="%i %1.4f %1.4f")
</code></pre>
| 0 | 2016-10-13T20:50:14Z | [
"python",
"arrays",
"numpy",
"text-files",
"number-formatting"
] |
Pycharm - Python Console | 40,030,494 | <p>I am testing my python code in the console window.</p>
<p>It is not allowing me to enter any code and instead it passes this error message:</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 /Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py 55724 55725
PyDev console: starting.
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py", line 512, in <module>
pydevconsole.start_server(pydev_localhost.get_localhost(), int(port), int(client_port))
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py", line 353, in start_server
process_exec_queue(interpreter)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py", line 181, in process_exec_queue
from _pydev_bundle.pydev_import_hook import import_hook_manager
File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 29, in <module>
import_hook_manager = ImportHookManager(__name__ + '.import_hook', builtins.__import__)
NameError: name 'ImportHookManager' is not defined
Process finished with exit code 1
</code></pre>
<p>I understand that it must be an error in my settings but I am not sure how to rectify it</p>
| 0 | 2016-10-13T20:35:33Z | 40,030,711 | <p>This is what your <code>pydev_import_hook.py</code> should look like:</p>
<pre><code>import sys
from _pydevd_bundle.pydevd_constants import dict_contains
from types import ModuleType
class ImportHookManager(ModuleType):
def __init__(self, name, system_import):
ModuleType.__init__(self, name)
self._system_import = system_import
self._modules_to_patch = {}
def add_module_name(self, module_name, activate_function):
self._modules_to_patch[module_name] = activate_function
def do_import(self, name, *args, **kwargs):
activate_func = None
if dict_contains(self._modules_to_patch, name):
activate_func = self._modules_to_patch.pop(name)
module = self._system_import(name, *args, **kwargs)
try:
if activate_func:
activate_func() #call activate function
except:
sys.stderr.write("Matplotlib support failed\n")
return module
try:
import __builtin__ as builtins
except ImportError:
import builtins
import_hook_manager = ImportHookManager(__name__ + '.import_hook', builtins.__import__)
builtins.__import__ = import_hook_manager.do_import
sys.modules[import_hook_manager.__name__] = import_hook_manager
del builtins
</code></pre>
| 0 | 2016-10-13T20:49:13Z | [
"python",
"console",
"pycharm"
] |
Need main to call functions in order, properly | 40,030,623 | <pre><code>def load():
global name
global count
global shares
global pp
global sp
global commission
name=input("Enter stock name OR -999 to Quit: ")
count =0
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("\nEnter stock name OR -999 to Quit: ")
totalpr=0
def calc():
global amount_paid
global amount_sold
global profit_loss
global commission_paid_sale
global commission_paid_purchase
global totalpr
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def display():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
def main():
load()
calc()
display()
main()
print("\nTotal Profit is $", format(totalpr, '10,.2f'))
</code></pre>
<p>I <em>need</em> the <code>main():</code> to call <code>load()</code>,<code>calc()</code> and <code>display()</code> in that order. However, the program stops after load. The output will merely loop the load without calc or print. </p>
<p>I have been instructed <strong>specifically</strong> to NOT place <code>calc()</code> and <code>display()</code> in the while loop block, tempting as that may be. Also note, that solves the problem but that is not the solution I am specifically looking for. </p>
<p>What do I need to do to make this program work properly? </p>
<p>OUTPUT SHOULD LOOK LIKE THIS: </p>
<pre><code>Enter stock name OR -999 to Quit: APPLE
Enter number of shares: 10000
Enter purchase price: 400
Enter selling price: 800
Enter commission: 0.04
Stock Name: APPLE
Amount paid for the stock: $ 4,000,000.00
Commission paid on the purchase: $ 160,000.00
Amount the stock sold for: $ 8,000,000.00
Commission paid on the sale: $ 320,000.00
Profit (or loss if negative): $ 3,520,000.00
Enter stock name OR -999 to Quit: FACEBOOK
Enter number of shares: 10000
Enter purchase price: 5
Enter selling price: 500
Enter commission: 0.04
Stock Name: FACEBOOK
Amount paid for the stock: $ 50,000.00
Commission paid on the purchase: $ 2,000.00
Amount the stock sold for: $ 5,000,000.00
Commission paid on the sale: $ 200,000.00
Profit (or loss if negative): $ 4,748,000.00
Enter stock name OR -999 to Quit: -999
Total Profit is $ 14,260,000.00
</code></pre>
<p>HERE IS THE OUTPUT ERROR I AM GETTING: </p>
<pre><code>====== RESTART: C:\Users\Elsa\Desktop\Homework 3, Problem 1.py ======
Enter stock name OR -999 to Quit: YAHOO!
Enter number of shares: 10000
Enter purchase price: 10
Enter selling price: 100
Enter commission: 0.04
Enter stock name OR -999 to Quit: GOOGLE
Enter number of shares: 10000
Enter purchase price: 15
Enter selling price: 150
Enter commission: 0.03
Enter stock name OR -999 to Quit: -999
Stock Name: -999
Amount paid for the stock: $ 150,000.00
Commission paid on the purchase: $ 4,500.00
Amount the stock sold for: $ 1,500,000.00
Commission paid on the sale: $ 45,000.00
Profit (or loss if negative): $ 1,300,500.00
Total Profit is $ 1,300,500.00
>>>
</code></pre>
| -2 | 2016-10-13T20:43:49Z | 40,032,181 | <p>The first problem is that you are clobbering the variable <code>name</code> every time you execute the input statement. You need to assign the result of input to temporary variable and check that for equality to -999, like this:</p>
<pre><code>def load():
global name
global count
global shares
global pp
global sp
global commission
count =0
while True:
s=input("Enter stock name OR -999 to Quit: ")
if s == '-999':
break
name = s
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
</code></pre>
<p>Now when the function returns, the value of <code>name</code> will be a valid stock name. </p>
<p>The second problem is that your teacher has apparently instructed you not to do the very thing that you need to do in order to make your program work. If you can't put the functions that you need to call inside a loop, they can only run once. In that case it is logically impossible to get a separate printout for each stock. I can't help you with that. </p>
| 0 | 2016-10-13T22:38:54Z | [
"python",
"function",
"python-3.x"
] |
Need main to call functions in order, properly | 40,030,623 | <pre><code>def load():
global name
global count
global shares
global pp
global sp
global commission
name=input("Enter stock name OR -999 to Quit: ")
count =0
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("\nEnter stock name OR -999 to Quit: ")
totalpr=0
def calc():
global amount_paid
global amount_sold
global profit_loss
global commission_paid_sale
global commission_paid_purchase
global totalpr
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def display():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
def main():
load()
calc()
display()
main()
print("\nTotal Profit is $", format(totalpr, '10,.2f'))
</code></pre>
<p>I <em>need</em> the <code>main():</code> to call <code>load()</code>,<code>calc()</code> and <code>display()</code> in that order. However, the program stops after load. The output will merely loop the load without calc or print. </p>
<p>I have been instructed <strong>specifically</strong> to NOT place <code>calc()</code> and <code>display()</code> in the while loop block, tempting as that may be. Also note, that solves the problem but that is not the solution I am specifically looking for. </p>
<p>What do I need to do to make this program work properly? </p>
<p>OUTPUT SHOULD LOOK LIKE THIS: </p>
<pre><code>Enter stock name OR -999 to Quit: APPLE
Enter number of shares: 10000
Enter purchase price: 400
Enter selling price: 800
Enter commission: 0.04
Stock Name: APPLE
Amount paid for the stock: $ 4,000,000.00
Commission paid on the purchase: $ 160,000.00
Amount the stock sold for: $ 8,000,000.00
Commission paid on the sale: $ 320,000.00
Profit (or loss if negative): $ 3,520,000.00
Enter stock name OR -999 to Quit: FACEBOOK
Enter number of shares: 10000
Enter purchase price: 5
Enter selling price: 500
Enter commission: 0.04
Stock Name: FACEBOOK
Amount paid for the stock: $ 50,000.00
Commission paid on the purchase: $ 2,000.00
Amount the stock sold for: $ 5,000,000.00
Commission paid on the sale: $ 200,000.00
Profit (or loss if negative): $ 4,748,000.00
Enter stock name OR -999 to Quit: -999
Total Profit is $ 14,260,000.00
</code></pre>
<p>HERE IS THE OUTPUT ERROR I AM GETTING: </p>
<pre><code>====== RESTART: C:\Users\Elsa\Desktop\Homework 3, Problem 1.py ======
Enter stock name OR -999 to Quit: YAHOO!
Enter number of shares: 10000
Enter purchase price: 10
Enter selling price: 100
Enter commission: 0.04
Enter stock name OR -999 to Quit: GOOGLE
Enter number of shares: 10000
Enter purchase price: 15
Enter selling price: 150
Enter commission: 0.03
Enter stock name OR -999 to Quit: -999
Stock Name: -999
Amount paid for the stock: $ 150,000.00
Commission paid on the purchase: $ 4,500.00
Amount the stock sold for: $ 1,500,000.00
Commission paid on the sale: $ 45,000.00
Profit (or loss if negative): $ 1,300,500.00
Total Profit is $ 1,300,500.00
>>>
</code></pre>
| -2 | 2016-10-13T20:43:49Z | 40,032,191 | <p>Based on your comment</p>
<pre><code>take the calc() and display() inside the loop and move them directly into def main():
</code></pre>
<p>this is consistent with your expected output:</p>
<pre><code>def main():
name=input("Enter stock name OR -999 to Quit: ")
count, totalpr = 0, 0
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
# calculation
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
# printing
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
# next loop
name=input("\nEnter stock name OR -999 to Quit: ")
print("\nTotal Profit is $", format(totalpr, '10,.2f'))
</code></pre>
<p>It seems like an exercise in refactoring - moving function together into a single function.</p>
| 0 | 2016-10-13T22:39:38Z | [
"python",
"function",
"python-3.x"
] |