title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
SimpleITK N4BiasFieldCorrection, not working with any data type | 39,999,646 | <p>I just installed the latest version of SimpleITK and I'm trying to run simple code like this:</p>
<pre><code>im = sitk.ReadImage('img.nii.gz')
im_bin = sitk.ReadImage('img_bin.nii.gz')
im_bfc = sitk.N4BiasFieldCorrection(im, im_bin)
</code></pre>
<p>The error is:</p>
<pre><code>RuntimeError: Exception thrown in SimpleITK N4BiasFieldCorrection: /scratch/dashboards/SimpleITK-OSX10.7-intel-pkg/SimpleITK/Code/Common/include/sitkDualMemberFunctionFactory.hxx:214:
sitk::ERROR: Pixel type: 64-bit signed integer is not supported in 2D byN3itk6simple32N4BiasFieldCorrectionImageFilterE
</code></pre>
<p>I tried casting to different types (int, float, signed, unsigned), and I tried with 2D and 3D images. I also tried <a href="https://itk.org/SimpleITKDoxygen07/html/N4BiasFieldCorrection_8py-example.html" rel="nofollow">https://itk.org/SimpleITKDoxygen07/html/N4BiasFieldCorrection_8py-example.html</a>,
and the error has always been the same. Other modules of SimpleITK appear to work.
Any ideas? Can you reproduce the error?<br>
Thanks!</p>
| 1 | 2016-10-12T13:18:06Z | 40,004,118 | <p>In answering my own question, I found that the errors raised do not seem to be related to the real cause. If the mask is made with a threshold in SimpleITK as</p>
<pre><code>bth = sitk.BinaryThresholdImageFilter()
img_bin = bth.Execute(img)
img_bin = -1 * (img_bin - 1)  # invert the binary mask
img_bfc = sitk.N4BiasFieldCorrection(img, img_bin)
</code></pre>
<p>the algorithm works and no error is raised.
I will update if I find the real cause of the problem!</p>
| 0 | 2016-10-12T16:50:18Z | [
"python"
] |
Incorrect reindexing when Summer Time shifts by 1 hour | 39,999,694 | <p>I am trying to handle a 1-hour time shift that happens at the US daylight saving time transition.</p>
<p>This is part of a time series (snippet below):</p>
<pre><code> In [3] eurusd
Out[3]:
BID-CLOSE
TIME
1994-03-28 22:00:00 1.15981
1994-03-29 22:00:00 1.16681
1994-03-30 22:00:00 1.15021
1994-03-31 22:00:00 1.14851
1994-04-03 21:00:00 1.14081
1994-04-04 21:00:00 1.13921
1994-04-05 21:00:00 1.13881
1994-04-06 21:00:00 1.14351
1994-04-07 21:00:00 1.14411
1994-04-10 21:00:00 1.14011
1994-04-11 21:00:00 1.14391
1994-04-12 21:00:00 1.14451
1994-04-13 21:00:00 1.14201
1994-04-14 21:00:00 1.13911
1994-04-17 21:00:00 1.14821
1994-04-18 21:00:00 1.15181
1994-04-19 21:00:00 1.15621
1994-04-20 21:00:00 1.15381
1994-04-21 21:00:00 1.16201
1994-04-24 21:00:00 1.16251
1994-04-25 21:00:00 1.16721
1994-04-26 21:00:00 1.17101
1994-04-27 21:00:00 1.17721
1994-04-28 21:00:00 1.18421
1994-05-01 21:00:00 1.18751
1994-05-02 21:00:00 1.17331
1994-05-03 21:00:00 1.16801
1994-05-04 21:00:00 1.17141
1994-05-05 21:00:00 1.17691
1994-05-08 21:00:00 1.16541
...
1994-09-26 21:00:00 1.25501
1994-09-27 21:00:00 1.25761
1994-09-28 21:00:00 1.25541
1994-09-29 21:00:00 1.25421
1994-10-02 21:00:00 1.25721
1994-10-03 21:00:00 1.26131
1994-10-04 21:00:00 1.26121
1994-10-05 21:00:00 1.26101
1994-10-06 21:00:00 1.25761
1994-10-10 21:00:00 1.26161
1994-10-11 21:00:00 1.26341
1994-10-12 21:00:00 1.27821
1994-10-13 21:00:00 1.29411
1994-10-16 21:00:00 1.29401
1994-10-17 21:00:00 1.29371
1994-10-18 21:00:00 1.29531
1994-10-19 21:00:00 1.29681
1994-10-20 21:00:00 1.29971
1994-10-23 21:00:00 1.30411
1994-10-24 21:00:00 1.30311
1994-10-25 21:00:00 1.30091
1994-10-26 21:00:00 1.28921
1994-10-27 21:00:00 1.29341
1994-10-30 22:00:00 1.29931
1994-10-31 22:00:00 1.29281
1994-11-01 22:00:00 1.27771
1994-11-02 22:00:00 1.27821
1994-11-03 22:00:00 1.28321
1994-11-06 22:00:00 1.28751
1994-11-07 22:00:00 1.27091
</code></pre>
<p>Currently when I apply a new date range using:</p>
<pre><code>idx = pd.date_range('1994-03-28 22:00:00', '1994-11-07 22:00:00', freq= 'D')
In [4] idx
Out[4]:
DatetimeIndex(['1994-03-28 22:00:00', '1994-03-29 22:00:00',
'1994-03-30 22:00:00', '1994-03-31 22:00:00',
'1994-04-01 22:00:00', '1994-04-02 22:00:00',
'1994-04-03 22:00:00', '1994-04-04 22:00:00',
'1994-04-05 22:00:00', '1994-04-06 22:00:00',
...
'1994-10-29 22:00:00', '1994-10-30 22:00:00',
'1994-10-31 22:00:00', '1994-11-01 22:00:00',
'1994-11-02 22:00:00', '1994-11-03 22:00:00',
'1994-11-04 22:00:00', '1994-11-05 22:00:00',
'1994-11-06 22:00:00', '1994-11-07 22:00:00'],
dtype='datetime64[ns]', length=225, freq='D')
</code></pre>
<p>Then, when I reindex the dataframe using the new date range, the time series converts all 21:00 values to 22:00, and the BID-CLOSE values become NaNs. I understand why, but I am unsure how to make the code aware of the 1-hour time step as per the US Summer Time schedule.</p>
<p>Output of reindex:</p>
<pre><code>In[5]: eurusd_copy1 = eurusd.reindex(idx, fill_value=None)
In[6]: eurusd_copy1
Out[6]:
BID-CLOSE
1994-03-28 22:00:00 1.15981
1994-03-29 22:00:00 1.16681
1994-03-30 22:00:00 1.15021
1994-03-31 22:00:00 1.14851
1994-04-01 22:00:00 NaN
1994-04-02 22:00:00 NaN
1994-04-03 22:00:00 NaN
1994-04-04 22:00:00 NaN
1994-04-05 22:00:00 NaN
1994-04-06 22:00:00 NaN
1994-04-07 22:00:00 NaN
1994-04-08 22:00:00 NaN
1994-04-09 22:00:00 NaN
1994-04-10 22:00:00 NaN
1994-04-11 22:00:00 NaN
1994-04-12 22:00:00 NaN
1994-04-13 22:00:00 NaN
1994-04-14 22:00:00 NaN
1994-04-15 22:00:00 NaN
1994-04-16 22:00:00 NaN
1994-04-17 22:00:00 NaN
1994-04-18 22:00:00 NaN
1994-04-19 22:00:00 NaN
1994-04-20 22:00:00 NaN
1994-04-21 22:00:00 NaN
1994-04-22 22:00:00 NaN
1994-04-23 22:00:00 NaN
1994-04-24 22:00:00 NaN
1994-04-25 22:00:00 NaN
1994-04-26 22:00:00 NaN
...
1994-10-09 22:00:00 NaN
1994-10-10 22:00:00 NaN
1994-10-11 22:00:00 NaN
1994-10-12 22:00:00 NaN
1994-10-13 22:00:00 NaN
1994-10-14 22:00:00 NaN
1994-10-15 22:00:00 NaN
1994-10-16 22:00:00 NaN
1994-10-17 22:00:00 NaN
1994-10-18 22:00:00 NaN
1994-10-19 22:00:00 NaN
1994-10-20 22:00:00 NaN
1994-10-21 22:00:00 NaN
1994-10-22 22:00:00 NaN
1994-10-23 22:00:00 NaN
1994-10-24 22:00:00 NaN
1994-10-25 22:00:00 NaN
1994-10-26 22:00:00 NaN
1994-10-27 22:00:00 NaN
1994-10-28 22:00:00 NaN
1994-10-29 22:00:00 NaN
1994-10-30 22:00:00 1.29931
1994-10-31 22:00:00 1.29281
1994-11-01 22:00:00 1.27771
1994-11-02 22:00:00 1.27821
1994-11-03 22:00:00 1.28321
1994-11-04 22:00:00 NaN
1994-11-05 22:00:00 NaN
1994-11-06 22:00:00 1.28751
1994-11-07 22:00:00 1.27091
[225 rows x 1 columns]
</code></pre>
<p>The desired output would have any date gaps filled with NaN, while keeping the BID-CLOSE values that already have dates unchanged. Please note the output below is fictitious and just to illustrate the desired outcome.</p>
<pre><code> BID-CLOSE
28/03/1994 22:00:00 1.15981
29/03/1994 22:00:00 1.16681
30/03/1994 22:00:00 1.15021
31/03/1994 22:00:00 1.14851
01/04/1994 21:00:00 NaN
02/04/1994 21:00:00 NaN
03/04/1994 21:00:00 1.13881
04/04/1994 21:00:00 1.14351
05/04/1994 21:00:00 1.14411
06/04/1994 21:00:00 1.14011
07/04/1994 21:00:00 1.14391
08/04/1994 21:00:00 NaN
09/04/1994 21:00:00 NaN
10/04/1994 21:00:00 1.14451
11/04/1994 21:00:00 1.14201
12/04/1994 21:00:00 1.13911
13/04/1994 21:00:00 1.14821
...
25/10/1994 21:00:00 1.29371
26/10/1994 21:00:00 NaN
27/10/1994 21:00:00 1.29681
28/10/1994 21:00:00 1.29971
29/10/1994 21:00:00 1.30411
30/10/1994 22:00:00 1.30311
31/10/1994 22:00:00 NaN
01/11/1994 22:00:00 NaN
02/11/1994 22:00:00 1.29341
</code></pre>
<p>How can I make the code aware of the US timezone?</p>
| 1 | 2016-10-12T13:20:00Z | 39,999,983 | <p>I am guessing that your date index is time zone naive.</p>
<p>First set the time zone; I will assume the timestamps are UTC:</p>
<pre><code>eurusd = eurusd.tz_localize('UTC')
</code></pre>
<p>Then you can convert them to whatever time zone you like, such as:</p>
<pre><code>eurusd = eurusd.tz_convert('America/New_York')
</code></pre>
<p>Then you can reindex as you'd like.</p>
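<p>A minimal end-to-end sketch of this approach; the sample timestamps and values below are made up for illustration and assume the raw stamps are a constant 17:00 New York time (21:00 UTC during EDT, 22:00 UTC after the switch):</p>

```python
import pandas as pd

# Two rows of a hypothetical tz-naive series either side of the
# 1994-10-30 US DST change
idx = pd.to_datetime(["1994-10-27 21:00:00", "1994-10-30 22:00:00"])
eurusd = pd.DataFrame({"BID-CLOSE": [1.29341, 1.29931]}, index=idx)

# Localize the naive index to UTC, then convert to US Eastern time
eurusd = eurusd.tz_localize("UTC").tz_convert("America/New_York")

# Both rows now fall at 17:00 local time, so a fixed-hour date_range
# built in that zone lines up with the data across the DST boundary
full_idx = pd.date_range("1994-10-27 17:00:00", "1994-10-30 17:00:00",
                         freq="D", tz="America/New_York")
eurusd = eurusd.reindex(full_idx)  # gaps become NaN, existing rows survive
print(eurusd["BID-CLOSE"].notna().tolist())  # [True, False, False, True]
```

<p>Reindexing against a tz-aware index like this keeps the 21:00/22:00 UTC rows aligned instead of turning them into NaN.</p>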
| 1 | 2016-10-12T13:31:57Z | [
"python",
"pandas",
"quantitative-finance"
] |
Can we call a Python script in Node.js and run Node.js to get the call? | 39,999,710 | <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>python.py
from pymongo import MongoClient
from flask import Flask
app = Flask(__name__)
host = "10.0.0.10"
port = 8085
@app.route('/name/<string:name>',methods=['GET','POST'])
def GetNoteText(name):
print name
return "Data Received"
@app.route('/', methods=['POST'])
def abc():
print "Hii"
return ('Welcome')
users=[]
@app.route('/getNames')
def getName():
client = MongoClient('mongodb://localhost:27017/')
db = client.bridgeUserInformationTable
cursor = db.bridgeUsersInfo.find()
for document in cursor:
#print "Name : ",document['name']
users.append(document['name'])
print document['name']
#print (users)
return "<html><body><h1>"+str(users)+"</h1></body></html>"
if __name__ == '__main__':
app.run(
host=host, port=port
)</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>node.js
var PythonShell = require('python-shell');
PythonShell.run('pass.py', function (err) {
if (err) throw err;
console.log('finished');
});</code></pre>
</div>
</div>
</p>
<p>As shown above, can we call a Python script from Node.js after the Node.js script gets input from the Android device? I am a little bit confused about how this should be solved, and about how the two languages should communicate with each other (Python to Node.js).</p>
| 1 | 2016-10-12T13:20:40Z | 40,000,080 | <p><a href="http://www.zerorpc.io/" rel="nofollow">ZERORPC</a> is a really nifty library built on top of ZeroMQ. This is probably the easiest way to call Python code from Node.</p>
<p>For a really simple (but non-robust) approach, you could have Node write the Python commands to a temp file. With an event loop running inside Python, watch the temp file for changes and execute the commands therein.</p>
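<p>As a sketch of the Python side of that temp-file idea (the newline-delimited JSON protocol and the "add" command are assumptions for illustration, not part of any real library):</p>

```python
import json

def run_commands(path):
    """Execute newline-delimited JSON commands that the Node side
    (hypothetically) wrote to the shared temp file."""
    results = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            cmd = json.loads(line)
            if cmd.get("op") == "add":  # example command
                results.append(cmd["a"] + cmd["b"])
    return results
```

<p>A real implementation would poll the file (or use a file watcher), truncate it after processing, and write results to a second file for Node to read back.</p>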
| 0 | 2016-10-12T13:35:47Z | [
"android",
"python",
"node.js"
] |
Skip debug instructions (and their content) if logging is set to INFO | 39,999,767 | <p>I have a program in which I wrote logs both as info and debug.</p>
<p>Since the debug calls also contain calls to slow functions, my program runs slowly even if I set logging to INFO.</p>
<p>Is it possible to completely skip those lines from computation?</p>
<p>In the next example, 10 seconds have to pass before the info log is executed.</p>
<pre><code>import logging.handlers
import sys
import time
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging_stream_handler = logging.StreamHandler(sys.stdout)
logging_stream_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s --- %(message)s'))
logger.addHandler(logging_stream_handler)
logger.debug("aaa", time.sleep(10))
logger.debug("bbb")
logger.info("ccc")
</code></pre>
| 1 | 2016-10-12T13:22:45Z | 39,999,839 | <p>You could check if the logger is enabled for such a level with the <a href="https://docs.python.org/2/library/logging.html#logging.Logger.isEnabledFor" rel="nofollow"><code>isEnabledFor</code></a> method:</p>
<pre><code>if logger.isEnabledFor(logging.DEBUG):
logger.debug("aaa", time.sleep(10))
logger.debug("bbb")
</code></pre>
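<p>A self-contained sketch of why the guard matters: arguments to <code>logger.debug()</code> are evaluated eagerly even when DEBUG is disabled, while the <code>isEnabledFor</code> guard skips the expensive call entirely (<code>expensive_call</code> below is a stand-in for the slow work):</p>

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

calls = []

def expensive_call():
    calls.append(1)  # record that the slow work actually ran
    return "details"

# Without a guard the argument is still evaluated,
# even though the DEBUG record itself is discarded
logger.debug("unguarded: %s", expensive_call())

# With the guard the expensive call is skipped entirely
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("guarded: %s", expensive_call())

print(len(calls))  # 1 -- only the unguarded line paid the cost
```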
| 1 | 2016-10-12T13:25:56Z | [
"python",
"logging"
] |
Skip debug instructions (and their content) if logging is set to INFO | 39,999,767 | <p>I have a program in which I wrote logs both as info and debug.</p>
<p>Since the debug calls also contain calls to slow functions, my program runs slowly even if I set logging to INFO.</p>
<p>Is it possible to completely skip those lines from computation?</p>
<p>In the next example, 10 seconds have to pass before the info log is executed.</p>
<pre><code>import logging.handlers
import sys
import time
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging_stream_handler = logging.StreamHandler(sys.stdout)
logging_stream_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s --- %(message)s'))
logger.addHandler(logging_stream_handler)
logger.debug("aaa", time.sleep(10))
logger.debug("bbb")
logger.info("ccc")
</code></pre>
| 1 | 2016-10-12T13:22:45Z | 39,999,857 | <p>You shouldn't have expensive calls inside debug logging statements. If you must, then in order to skip them you have to branch your code:</p>
<pre><code>import logging.handlers
import sys
import time
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging_stream_handler = logging.StreamHandler(sys.stdout)
logging_stream_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s --- %(message)s'))
logger.addHandler(logging_stream_handler)
if logger.isEnabledFor(logging.DEBUG):
logger.debug("aaa", time.sleep(10))
logger.debug("bbb")
logger.info("ccc")
</code></pre>
| 0 | 2016-10-12T13:27:16Z | [
"python",
"logging"
] |
PyMongo .find() Special Characters | 39,999,976 | <p>So I have set up a database in MongoDB, and a majority of documents contain a special code which can begin with a * and finish with a #. When I use the following query on the MongoDB command line, it works fine, but when I try to use it in a Python script, it doesn't work.</p>
<pre><code>cursor = collect.find({$and:[{"key":/.*\*.*/},{"key":/.*\#.*/}]})
</code></pre>
<p>I think the problem lies with the # in the query but when I wrap it in " ", it doesn't work.</p>
<pre><code>cursor = collect.find({'$and':[{"key":'/.*\*.*/'},{"key":'/.*\#.*/'}]})
</code></pre>
<p>Please note that I put ' ' around $and and the first expression to match because syntax errors appear when I attempt to run it.</p>
<p>Thanks</p>
| 0 | 2016-10-12T13:31:33Z | 40,000,389 | <p>If you want to query using a regular expression, then you have to create a Python regular expression object. Note that in this case the string should not be surrounded by <code>//</code>.</p>
<pre><code>import re
cursor = collect.find({'$and':[{"key":re.compile('.*\*.*')},{"key":re.compile('.*\#.*')}]})
</code></pre>
<p>Alternatively, you can use the <code>$regex</code> operator</p>
<pre><code>cursor = collect.find({'$and':[{"key": {"$regex": '.*\*.*'}},{"key": {"$regex": '.*\#.*'}}]})
</code></pre>
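<p>The filter itself is just an ordinary Python dict, so you can sanity-check the patterns without a running MongoDB (the sample values here are made up):</p>

```python
import re

# Build the filter exactly as it would be passed to collect.find()
query = {'$and': [{'key': re.compile(r'.*\*.*')},
                  {'key': re.compile(r'.*#.*')}]}

pat_star = query['$and'][0]['key']
pat_hash = query['$and'][1]['key']

def matches(value):
    # A document matches the $and only if both patterns match its key
    return bool(pat_star.search(value) and pat_hash.search(value))

print(matches('*ABC123#'))  # True  -- contains both '*' and '#'
print(matches('ABC123'))    # False -- contains neither
```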
| 0 | 2016-10-12T13:49:45Z | [
"python",
"mongodb",
"pymongo"
] |
QMenu displays incorrectly when setParent called | 40,000,081 | <p>I want to create a function to build a context menu that can be dynamically added to a window's menubar. Consider the following minimal example for adding a simple QMenu:</p>
<pre><code>from PyQt5 import QtWidgets
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
menu = QtWidgets.QMenu('Menu', parent=self)
act1 = menu.addAction('Action 1')
act2 = menu.addAction('Action 2')
self.menuBar().addMenu(menu)
app = QtWidgets.QApplication([])
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p><a href="https://i.stack.imgur.com/4WqDq.png" rel="nofollow"><img src="https://i.stack.imgur.com/4WqDq.png" alt="enter image description here"></a></p>
<p>This works as expected. Note that setting the parent for the QMenu is required for it to show up.</p>
<hr>
<p>Now, if I break the menu code out into its own function and set the parent explicitly, I get the following. <strong>What's going on here?</strong></p>
<pre><code>from PyQt5 import QtWidgets
def createMenu():
menu = QtWidgets.QMenu('Menu')
act1 = menu.addAction('Action 1')
act2 = menu.addAction('Action 2')
return menu
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
menu = createMenu()
menu.setParent(self)
self.menuBar().addMenu(menu)
app = QtWidgets.QApplication([])
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p><a href="https://i.stack.imgur.com/tDNcG.png" rel="nofollow"><img src="https://i.stack.imgur.com/tDNcG.png" alt="enter image description here"></a></p>
| 1 | 2016-10-12T13:35:50Z | 40,002,859 | <p>The way you're calling <code>setParent</code> resets the window flags, so do this instead:</p>
<pre><code> menu.setParent(self, menu.windowFlags())
</code></pre>
| 3 | 2016-10-12T15:44:43Z | [
"python",
"qt",
"pyqt",
"qmenu",
"qmenubar"
] |
Python handle 'NoneType' object has no attribute 'find_all' error with if else statement | 40,000,090 | <p>I am using beautifulsoup4 to grab stock data and send it to a spreadsheet in Python. The problem I am having is that I cannot get my loop to skip over attributes that return None. What I need is code that adds null values to rows where an
attribute would return None.</p>
<pre><code># My dictionary for storing data
data = {
'Fiscal Quarter End' : [],
'Date Reported' : [],
'Earnings Per Share' : [],
'Consensus EPS* Forecast' : [],
'% Surprise' : []
}
url = ""
html = requests.get(url)
data = html.text
soup = bs4.BeautifulSoup(data)
table = soup.find("div", class_="genTable")
for row in table.find_all('tr')[1:]:
if row.has_attr('tr'):
cols = row.find_all("td")
data['Fiscal Quarter End'].append( cols[0].get_text() )
data['Date Reported'].append( cols[1].get_text() )
data['Earnings Per Share'].append( cols[2].get_text() )
data['Consensus EPS* Forecast'].append( cols[3].get_text() )
data['% Surprise'].append( cols[4].get_text() )
else:
        # Where I need to add in the empty 'n/a' values
data['Fiscal Quarter End'].append()
data['Date Reported'].append()
data['Earnings Per Share'].append()
data['Consensus EPS* Forecast'].append()
data['% Surprise'].append()
</code></pre>
| -1 | 2016-10-12T13:36:09Z | 40,109,751 | <p>You have used the <code>data</code> variable for two different things. The second usage overwrote your dictionary. It is simpler to pass <code>html.text</code> directly when constructing the <code>BeautifulSoup</code> object. Try the following:</p>
<pre><code>import requests
import bs4
# My dictionary for storing data
data = {
'Fiscal Quarter End' : [],
'Date Reported' : [],
'Earnings Per Share' : [],
'Consensus EPS* Forecast' : [],
'% Surprise' : []
}
empty = 'n/a'
url = ""
html = requests.get(url)
soup = bs4.BeautifulSoup(html.text, "html.parser")
table = soup.find("div", class_="genTable")
rows = []
if table:
rows = table.find_all('tr')[1:]
for row in rows:
cols = row.find_all("td")
data['Fiscal Quarter End'].append(cols[0].get_text())
data['Date Reported'].append(cols[1].get_text())
data['Earnings Per Share'].append(cols[2].get_text())
data['Consensus EPS* Forecast'].append(cols[3].get_text())
data['% Surprise'].append(cols[4].get_text())
if len(rows) == 0:
# Add in the empty 'n/a' values if no columns found
data['Fiscal Quarter End'].append(empty)
data['Date Reported'].append(empty)
data['Earnings Per Share'].append(empty)
data['Consensus EPS* Forecast'].append(empty)
data['% Surprise'].append(empty)
</code></pre>
<p>In the event the <code>table</code> or <code>rows</code> was empty, <code>data</code> would hold the following:</p>
<pre><code>{'Date Reported': ['n/a'], 'Earnings Per Share': ['n/a'], '% Surprise': ['n/a'], 'Consensus EPS* Forecast': ['n/a'], 'Fiscal Quarter End': ['n/a']}
</code></pre>
| 0 | 2016-10-18T13:33:04Z | [
"python",
"csv",
"dictionary",
"beautifulsoup",
"nonetype"
] |
Memory error when plotting large array | 40,000,192 | <p>I have a problem while plotting big data on Python 2.7, with Spyder.</p>
<p>X, Y and Z are each about 560,000 elements long... which is a lot!</p>
<pre><code># ======
## plot:
fig = plt.figure("Map 3D couleurs")
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_trisurf(Xs, Ys, Zs, cmap=cm.jet, linewidth=0)
fig.colorbar(surf)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
ax.set_title("Map 3D couleurs")
#ax.xaxis.set_major_locator(MaxNLocator(5))
#ax.yaxis.set_major_locator(MaxNLocator(6))
#ax.zaxis.set_major_locator(MaxNLocator(5))
fig.tight_layout()
plt.show();
</code></pre>
<p>Python replies with this:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_qt5.py", line 427, in idle_draw
self.draw()
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 148, in draw
FigureCanvasAgg.draw(self)
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_agg.py", line 469, in draw
self.figure.draw(self.renderer)
File "C:\Python27\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Python27\lib\site-packages\matplotlib\figure.py", line 1085, in draw
func(*args)
File "C:\Python27\lib\site-packages\mpl_toolkits\mplot3d\axes3d.py", line 254, in draw
for col in self.collections]
File "C:\Python27\lib\site-packages\mpl_toolkits\mplot3d\art3d.py", line 580, in do_3d_projection
PolyCollection.set_verts(self, segments_2d)
File "C:\Python27\lib\site-packages\matplotlib\collections.py", line 842, in set_verts
self._paths.append(mpath.Path(xy, codes))
MemoryError
</code></pre>
<p>Do you have an idea how to solve this problem on Python 2.7?
Maybe another library or function... or should I stop using Python?!</p>
| 0 | 2016-10-12T13:40:42Z | 40,000,530 | <p>Try to read the file in parts instead of loading all the data at once:
for example, put just one datum on each line,
then read a tuple (x, y, z), draw it, and repeat.</p>
| -1 | 2016-10-12T13:56:11Z | [
"python",
"python-2.7",
"memory",
"matplotlib"
] |
Memory error when plotting large array | 40,000,192 | <p>I have a problem while plotting big data on Python 2.7, with Spyder.</p>
<p>X, Y and Z are each about 560,000 elements long... which is a lot!</p>
<pre><code># ======
## plot:
fig = plt.figure("Map 3D couleurs")
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_trisurf(Xs, Ys, Zs, cmap=cm.jet, linewidth=0)
fig.colorbar(surf)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
ax.set_title("Map 3D couleurs")
#ax.xaxis.set_major_locator(MaxNLocator(5))
#ax.yaxis.set_major_locator(MaxNLocator(6))
#ax.zaxis.set_major_locator(MaxNLocator(5))
fig.tight_layout()
plt.show();
</code></pre>
<p>Python replies with this:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_qt5.py", line 427, in idle_draw
self.draw()
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 148, in draw
FigureCanvasAgg.draw(self)
File "C:\Python27\lib\site-packages\matplotlib\backends\backend_agg.py", line 469, in draw
self.figure.draw(self.renderer)
File "C:\Python27\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Python27\lib\site-packages\matplotlib\figure.py", line 1085, in draw
func(*args)
File "C:\Python27\lib\site-packages\mpl_toolkits\mplot3d\axes3d.py", line 254, in draw
for col in self.collections]
File "C:\Python27\lib\site-packages\mpl_toolkits\mplot3d\art3d.py", line 580, in do_3d_projection
PolyCollection.set_verts(self, segments_2d)
File "C:\Python27\lib\site-packages\matplotlib\collections.py", line 842, in set_verts
self._paths.append(mpath.Path(xy, codes))
MemoryError
</code></pre>
<p>Do you have an idea how to solve this problem on Python 2.7?
Maybe another library or function... or should I stop using Python?!</p>
| 0 | 2016-10-12T13:40:42Z | 40,018,621 | <p>Do you need the amount of detail from a length <code>560'000</code> array? If not, you could easily sub-sample the arrays, using for example:</p>
<pre><code>n = 1000 # sample every n-th data point
surf = ax.plot_trisurf(Xs[::n], Ys[::n], Zs[::n], cmap=cm.jet, linewidth=0)
</code></pre>
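<p>To get a feel for the reduction, here is a tiny sketch with a NumPy stand-in for the real <code>Xs</code> array (the data is fabricated; only the length matches the question):</p>

```python
import numpy as np

n = 1000
Xs = np.linspace(0.0, 1.0, 560000)  # stand-in for the real 560,000-point data

sub = Xs[::n]  # keep every n-th point
print(len(Xs), len(sub))  # 560000 560
```

<p>Basic slicing with a step returns a view, so the sub-sampled arrays add almost no memory of their own.</p>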
| 2 | 2016-10-13T10:36:16Z | [
"python",
"python-2.7",
"memory",
"matplotlib"
] |
Import excel file in python and identify cells of which the content is strikethrough | 40,000,239 | <p>I want to read in many Excel documents and I would like to receive at least one important bit of information on the format. However, I am afraid that there is no tool for it, so my hope is on you!</p>
<p>Each excel file that I am reading in contains a few cells that of which the content is strikethrough. For those who don't know the word (I didn't know it either), strikethrough means that there is a horizontal line through the content.</p>
<p>I have figured out that I will need to read in my documents with xlrd to be able to identify the fonts. However, I have been going over a list of possibilities and none of them contains a check on strikethrough.</p>
| 0 | 2016-10-12T13:42:52Z | 40,000,906 | <p>You have to open the workbook with the <code>formatting_info</code> kwarg set to <code>True</code>. Then get the <code>XF</code> object of the cell and its <code>Font</code> object. The <code>struck_out</code> attribute is what you're looking for. An example:</p>
<pre><code>workbook = xlrd.open_workbook(filename, formatting_info=True)
sh = workbook.sheet_by_name(sheet)
xf = workbook.xf_list[sh.cell_xf_index(row, col)]
font = workbook.font_list[xf.font_index]
if font.struck_out:
print(row, col)
</code></pre>
| 0 | 2016-10-12T14:13:23Z | [
"python",
"excel",
"fonts",
"strikethrough"
] |
How to make post request in Django TastyPie using ApiKeyAuthentication | 40,000,273 | <p>I have a resource like this:</p>
<pre><code>class EntryResource(ModelResource):
class Meta:
queryset = Entry.objects.all()
resource_name = 'entry'
allowed_methods = ['post']
authentication = ApiKeyAuthentication()
authorization = Authorization()
</code></pre>
<p>And I try to make a request to this resource according to the <a href="http://django-tastypie.readthedocs.io/en/latest/authentication.html#apikeyauthentication" rel="nofollow">documentation</a>:</p>
<pre><code>requests.post('http://localhost/api/entry/',
json={"key1": "value1",
"key2": "value2"},
headers={"content-type": "application/json",
"Authorization": "ApiKey",
"<username>": "<api_key>"})
</code></pre>
<p>But I get a 401.</p>
| 0 | 2016-10-12T13:44:35Z | 40,000,554 | <p>From the documentation:</p>
<blockquote>
<p>Authorization: ApiKey daniel:204db7bcfafb2deb7506b89eb3b9b715b09905c8</p>
</blockquote>
<p>Your request must look like this:</p>
<pre><code>requests.post('http://localhost/api/entry/',
json={"key1": "value1",
"key2": "value2"},
headers={"content-type": "application/json",
"Authorization": "ApiKey <username>:<api_key>"})
</code></pre>
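<p>To make the difference from the failing request explicit, here is a sketch of building that single header value (the credentials are the documentation's example values, not real ones):</p>

```python
# Username and key go together in ONE "Authorization" header value,
# not in separate headers
username = "daniel"
api_key = "204db7bcfafb2deb7506b89eb3b9b715b09905c8"

headers = {
    "content-type": "application/json",
    "Authorization": "ApiKey %s:%s" % (username, api_key),
}
print(headers["Authorization"])
# ApiKey daniel:204db7bcfafb2deb7506b89eb3b9b715b09905c8
```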
| 1 | 2016-10-12T13:57:17Z | [
"python",
"django",
"tastypie"
] |
Pip not installing package properly | 40,000,314 | <p>So I am trying to get hmmlearn working in Jupyter, and I have come across an error while installing Hmmlearn using <code>pip</code>. I have tried this <a href="http://stackoverflow.com/questions/38575860/error-compiling-c-code-for-python-hmmlearn-package">solution</a>, but it didn't work. </p>
<p>It seems to me that <code>pip</code> does install the _hmmc file, but it does so incorrectly. Instead it has the name</p>
<blockquote>
<p>_hmmc.cp35-win_amd64</p>
</blockquote>
<p>and the file extension is <code>.PYD</code>, instead of <code>.c</code></p>
<p>When I run the code to import it, I get this error :</p>
<pre><code> ImportError Traceback (most recent call last)
<ipython-input-1-dee84c3d5ff9> in <module>()
7 import os
8 from pyAudioAnalysis import audioBasicIO as aB
----> 9 from pyAudioAnalysis import audioAnalysis as aA
C:\Users\gover_000\Documents\GitHub\Emotion-Recognition-Prototype\pyAudioAnalysis\audioAnalysis.py in <module>()
15 import audioFeatureExtraction as aF
16 import audioTrainTest as aT
---> 17 import audioSegmentation as aS
18 import audioVisualization as aV
19 import audioBasicIO
C:\Users\gover_000\Documents\GitHub\Emotion-Recognition-Prototype\pyAudioAnalysis\audioSegmentation.py in <module>()
16 import sklearn
17 import sklearn.cluster
---> 18 import hmmlearn.hmm
19 import cPickle
20 import glob
C:\Users\gover_000\Anaconda3\envs\python2\lib\site-packages\hmmlearn\hmm.py in <module>()
19 from sklearn.utils import check_random_state
20
---> 21 from .base import _BaseHMM
22 from .utils import iter_from_X_lengths, normalize
23
C:\Users\gover_000\Anaconda3\envs\python2\lib\site-packages\hmmlearn\base.py in <module>()
11 from sklearn.utils.validation import check_is_fitted
12
---> 13 from . import _hmmc
14 from .utils import normalize, log_normalize, iter_from_X_lengths
15
ImportError: cannot import name _hmmc
</code></pre>
<p>I don't know why <code>pip</code> just doesn't install it correctly, even when I tried to use <code>--no-cache-dir</code></p>
<p>Edit: So I figured out what the problem was. My active Python environment was Python 3.5, and as I was manually transferring the installed files to my environment, it failed because I had the wrong version.
I had to change my active Python environment using <code>activate <my_enviroment name></code>;
after that I could just use <code>pip</code> to install it again and it worked this time.</p>
| 0 | 2016-10-12T13:46:50Z | 40,000,766 | <p>Looking at your error message, I guess that you have downloaded the hmmlearn package from Git. Have you tried using a wheel (*.whl) file instead? You can download one from <a href="https://pypi.python.org/pypi/hmmlearn#downloads" rel="nofollow">here</a>. Check which version fits your Python installation.</p>
<p>Then use: </p>
<pre><code>pip install <the_wheel_that_corresponds_to_your_python_version>.whl
</code></pre>
<p>Hope it helps. </p>
| 0 | 2016-10-12T14:06:34Z | [
"python",
"pip",
"jupyter",
"hmmlearn"
] |
Pip not installing package properly | 40,000,314 | <p>So I am trying to get hmmlearn working in Jupyter, and I have come across an error while installing Hmmlearn using <code>pip</code>. I have tried this <a href="http://stackoverflow.com/questions/38575860/error-compiling-c-code-for-python-hmmlearn-package">solution</a>, but it didn't work. </p>
<p>It seems to me that <code>pip</code> does install the _hmmc file, but it does so incorrectly. Instead it has the name</p>
<blockquote>
<p>_hmmc.cp35-win_amd64</p>
</blockquote>
<p>and the file extension is <code>.PYD</code>, instead of <code>.c</code></p>
<p>When I run the code to import it, I get this error :</p>
<pre><code> ImportError Traceback (most recent call last)
<ipython-input-1-dee84c3d5ff9> in <module>()
7 import os
8 from pyAudioAnalysis import audioBasicIO as aB
----> 9 from pyAudioAnalysis import audioAnalysis as aA
C:\Users\gover_000\Documents\GitHub\Emotion-Recognition-Prototype\pyAudioAnalysis\audioAnalysis.py in <module>()
15 import audioFeatureExtraction as aF
16 import audioTrainTest as aT
---> 17 import audioSegmentation as aS
18 import audioVisualization as aV
19 import audioBasicIO
C:\Users\gover_000\Documents\GitHub\Emotion-Recognition-Prototype\pyAudioAnalysis\audioSegmentation.py in <module>()
16 import sklearn
17 import sklearn.cluster
---> 18 import hmmlearn.hmm
19 import cPickle
20 import glob
C:\Users\gover_000\Anaconda3\envs\python2\lib\site-packages\hmmlearn\hmm.py in <module>()
19 from sklearn.utils import check_random_state
20
---> 21 from .base import _BaseHMM
22 from .utils import iter_from_X_lengths, normalize
23
C:\Users\gover_000\Anaconda3\envs\python2\lib\site-packages\hmmlearn\base.py in <module>()
11 from sklearn.utils.validation import check_is_fitted
12
---> 13 from . import _hmmc
14 from .utils import normalize, log_normalize, iter_from_X_lengths
15
ImportError: cannot import name _hmmc
</code></pre>
<p>I don't know why <code>pip</code> just doesn't install it correctly, even when I tried to use <code>--no-cache-dir</code></p>
<p>Edit: So I figured out what the problem was. My active Python environment was Python 3.5, and as I was manually transferring the installed files to my environment, it failed because I had the wrong version.
I had to change my active Python environment using <code>activate <my_enviroment_name></code>;
after that I could just use <code>pip</code> to install it again and it worked this time.</p>
| 0 | 2016-10-12T13:46:50Z | 40,016,817 | <p>So I figured out what the problem was: my active Python environment was Python 3.5, and as I was manually transferring the installed files to my environment, it failed because I had the wrong version. I had to change my active Python environment using <code>activate <my_enviroment_name></code>; after that I could just use <code>pip</code> to install it again and it worked this time.</p>
| 0 | 2016-10-13T09:13:14Z | [
"python",
"pip",
"jupyter",
"hmmlearn"
] |
Mouse Recorder - how to detect a mouse click in a while loop with win32api? | 40,000,353 | <p>I want to build a mouse recorder that records the actions and movements the mouse makes.</p>
<p>The problem is I didn't find a way to detect a mouse press in a while loop with win32api.</p>
<p>So I am trying to use two threads to do the job.</p>
<hr>
<p><strong>EDIT: I have taken a slightly different approach, writing the data plus a timestamp to two files;
now I need to combine them into a single file in the right order.</strong></p>
<p><strong>The only question that remains for me is whether there is a way to detect
a mouse click in a while loop with win32api
(so I don't need to use another thread).</strong></p>
<p>CODE:</p>
<pre><code>import win32api, win32con
import time
import threading
from pynput.mouse import Listener
from datetime import datetime
import os
from pathlib import Path
clkFile = Path("clkTrk.txt")
posFile = Path('posTrk.txt')
if posFile.is_file():
os.remove('posTrk.txt')
if clkFile.is_file():
os.remove('clkTrk.txt')
class RecordClick(threading.Thread):
def __init__(self,TID,Name,Counter):
threading.Thread.__init__(self)
self.id = TID
self.name = Name
self.counter = Counter
def run(self):
def on_click(x, y, button, pressed):
if pressed: # Here put the code when the event occurres
# Write Which button is clicked to the file
button = str(button)
file = open("clkTrk.txt", "at", encoding='UTF-8')
file.write(str(datetime.now().second)+"-"+ button + "\n")
print(button)
with Listener(on_click=on_click, ) as listener:
listener.join()
class RecordPos(threading.Thread):
def __init__(self,TID,Name,Counter):
threading.Thread.__init__(self)
self.id = TID
self.name = Name
self.counter = Counter
def run(self):
file = open("posTrk.txt", "wt", encoding='UTF-8')
while win32api.GetAsyncKeyState(win32con.VK_ESCAPE) != True:
x = str(win32api.GetCursorPos()[0])
y = str(win32api.GetCursorPos()[1])
l = ",".join([x, y])
print(l)
file.write(str(datetime.now().second)+"-"+ l + "\n")
time.sleep(0.2)
thread = RecordPos(1,"First",1)
thread2 = RecordClick(2,"Second",2)
thread.start()
thread2.start()
</code></pre>
| 0 | 2016-10-12T13:48:19Z | 40,114,070 | <p>Why separate files when it is easier to record one file in the first place? Just put the lines from the different threads into a queue and write the contents of said queue in the main thread into a file:</p>
<pre><code>#!/usr/bin/env python
# coding: utf8
from __future__ import absolute_import, division, print_function
import time
from functools import partial
from threading import Thread
from Queue import Queue
import win32api
import win32con
from pynput.mouse import Listener
def on_click(queue, x, y, button, pressed):
if pressed:
queue.put('{0},{1} {2}\n'.format(x, y, button))
print(button)
def detect_clicks(queue):
with Listener(on_click=partial(on_click, queue)) as listener:
listener.join()
def track_movement(queue):
while not win32api.GetAsyncKeyState(win32con.VK_ESCAPE):
x, y = win32api.GetCursorPos()
print(x, y)
queue.put('{0},{1}\n'.format(x, y))
time.sleep(0.2)
def main():
queue = Queue()
for function in [detect_clicks, track_movement]:
thread = Thread(target=function, args=[queue])
thread.daemon = True
thread.start()
with open('log.txt', 'w') as log_file:
while True:
log_file.write(queue.get())
if __name__ == '__main__':
main()
</code></pre>
<p>As you can see, there is also no need to write classes for threads if they have just an <code>__init__()</code> and a <code>run()</code> method. Classes with just an <code>__init__()</code> and one other method are often just functions disguised as classes.</p>
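For reference, here is a Python 3 rendition of the same producer/consumer pattern, stripped to its essentials (note the module is <code>queue</code> rather than Python 2's <code>Queue</code>; the event strings are placeholders, not real mouse data):

```python
import threading
import queue

def producer(q):
    # Stand-in for the listener/tracker threads: push lines into the queue.
    for i in range(3):
        q.put('event {0}\n'.format(i))
    q.put(None)  # sentinel: tells the consumer to stop

q = queue.Queue()
t = threading.Thread(target=producer, args=[q])  # plain function, no class needed
t.daemon = True
t.start()

# The main thread drains the queue, exactly like the log-writing loop above.
lines = []
while True:
    item = q.get()
    if item is None:
        break
    lines.append(item)
```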
| 0 | 2016-10-18T17:01:07Z | [
"python",
"multithreading",
"python-2.7",
"python-3.x",
"python-multithreading"
] |
Scrapy not calling parse function with start_requests | 40,000,368 | <p>I am fairly new to Python and Scrapy, but something just doesn't seem right. According to the documentation and examples, re-implementing the <em>start_requests</em> function will cause Scrapy to use the return value of <em>start_requests</em> instead of the <em>start_urls</em> array variable.</p>
<p>Everything works fine with <em>start_urls</em>, but when I add <em>start_requests</em>, it does not go into the <em>parse</em> function. The documentation states that the <em>parse</em> method is</p>
<blockquote>
<p>the default callback used by Scrapy to process downloaded responses,
when their requests don't specify a callback</p>
</blockquote>
<p>but the <em>parse</em> method is never executed, judging from my logger prints.</p>
<p>Here is my code, it is very short as I am just toying around with it.</p>
<pre><code>class Crawler(scrapy.Spider):
name = 'Hearthpwn'
allowed_domains = ['hearthpwn.com']
storage_dir = 'C:/Users/Michal/PycharmProjects/HearthpwnCrawler/'
start_urls = ['http://www.hearthpwn.com/decks/645987-nzoth-warrior']
def start_requests(self):
logging.log(logging.INFO, "Loading requests")
yield Request(url='http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter')
def parse(self, response):
logging.log(logging.INFO, "parsing response")
filename = response.url.split("/")[-1] + '.html'
with open('html/' + filename, 'wb') as f:
f.write(response.body)
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(Crawler)
process.start()
</code></pre>
<p>And print of the console:</p>
<pre><code>2016-10-12 15:33:39 [scrapy] INFO: Scrapy 1.2.0 started (bot: scrapybot)
2016-10-12 15:33:39 [scrapy] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
2016-10-12 15:33:39 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.logstats.LogStats']
2016-10-12 15:33:39 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-12 15:33:39 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-12 15:33:39 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-12 15:33:39 [scrapy] INFO: Spider opened
2016-10-12 15:33:39 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-12 15:33:39 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-12 15:33:39 [root] INFO: Loading requests
2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter>
2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1>
2016-10-12 15:33:41 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-10-12 15:33:41 [scrapy] INFO: Closing spider (finished)
2016-10-12 15:33:41 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 655,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 1248,
'downloader/response_count': 2,
'downloader/response_status_count/302': 2,
'dupefilter/filtered': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 12, 13, 33, 41, 740724),
'log_count/DEBUG': 4,
'log_count/INFO': 8,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 10, 12, 13, 33, 39, 441736)}
2016-10-12 15:33:41 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>Thanks for any leads.</p>
| 0 | 2016-10-12T13:48:54Z | 40,001,042 | <p>Using the <strong>dont_merge_cookies</strong> attribute in the <strong>meta</strong> dictionary would solve this issue.</p>
<pre><code> def start_requests(self):
logging.log(logging.INFO, "Loading requests")
yield Request(url='http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter',
meta={'dont_merge_cookies': True})
</code></pre>
| 0 | 2016-10-12T14:19:10Z | [
"python",
"request",
"scrapy"
] |
Scrapy not calling parse function with start_requests | 40,000,368 | <p>I am fairly new to Python and Scrapy, but something just doesn't seem right. According to the documentation and examples, re-implementing the <em>start_requests</em> function will cause Scrapy to use the return value of <em>start_requests</em> instead of the <em>start_urls</em> array variable.</p>
<p>Everything works fine with <em>start_urls</em>, but when I add <em>start_requests</em>, it does not go into the <em>parse</em> function. The documentation states that the <em>parse</em> method is</p>
<blockquote>
<p>the default callback used by Scrapy to process downloaded responses,
when their requests don't specify a callback</p>
</blockquote>
<p>but the <em>parse</em> method is never executed, judging from my logger prints.</p>
<p>Here is my code, it is very short as I am just toying around with it.</p>
<pre><code>class Crawler(scrapy.Spider):
name = 'Hearthpwn'
allowed_domains = ['hearthpwn.com']
storage_dir = 'C:/Users/Michal/PycharmProjects/HearthpwnCrawler/'
start_urls = ['http://www.hearthpwn.com/decks/645987-nzoth-warrior']
def start_requests(self):
logging.log(logging.INFO, "Loading requests")
yield Request(url='http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter')
def parse(self, response):
logging.log(logging.INFO, "parsing response")
filename = response.url.split("/")[-1] + '.html'
with open('html/' + filename, 'wb') as f:
f.write(response.body)
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(Crawler)
process.start()
</code></pre>
<p>And print of the console:</p>
<pre><code>2016-10-12 15:33:39 [scrapy] INFO: Scrapy 1.2.0 started (bot: scrapybot)
2016-10-12 15:33:39 [scrapy] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
2016-10-12 15:33:39 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.logstats.LogStats']
2016-10-12 15:33:39 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-12 15:33:39 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-12 15:33:39 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-12 15:33:39 [scrapy] INFO: Spider opened
2016-10-12 15:33:39 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-12 15:33:39 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-12 15:33:39 [root] INFO: Loading requests
2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter>
2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1>
2016-10-12 15:33:41 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-10-12 15:33:41 [scrapy] INFO: Closing spider (finished)
2016-10-12 15:33:41 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 655,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 1248,
'downloader/response_count': 2,
'downloader/response_status_count/302': 2,
'dupefilter/filtered': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 12, 13, 33, 41, 740724),
'log_count/DEBUG': 4,
'log_count/INFO': 8,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 10, 12, 13, 33, 39, 441736)}
2016-10-12 15:33:41 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>Thanks for any leads.</p>
| 0 | 2016-10-12T13:48:54Z | 40,005,679 | <pre><code>2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter>
2016-10-12 15:33:41 [scrapy] DEBUG: Redirecting (302) to <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> from <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter?cookieTest=1>
2016-10-12 15:33:41 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.hearthpwn.com/decks/646673-s31-legend-2eu-3asia-smorc-hunter> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
</code></pre>
<p>What happens here is that the website redirects you several times, so you end up crawling the same URL twice. Scrapy by default filters out duplicate requests, so you need to set the parameter <code>dont_filter</code> to <code>True</code> when creating the Request object to bypass this filter.</p>
<p>e.g.:</p>
<pre><code>def start_requests(self):
    yield Request('http://scrapy.org', dont_filter=True)
</code></pre>
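For intuition, the duplicate filtering can be sketched with the standard library alone. This is a simplification of Scrapy's fingerprint-based dupefilter, not its actual implementation:

```python
import hashlib

class SimpleDupeFilter(object):
    """Remember request fingerprints and flag repeats."""
    def __init__(self):
        self.seen = set()

    def request_seen(self, url, dont_filter=False):
        if dont_filter:  # mirrors Request(..., dont_filter=True)
            return False
        fingerprint = hashlib.sha1(url.encode('utf-8')).hexdigest()
        if fingerprint in self.seen:
            return True
        self.seen.add(fingerprint)
        return False

f = SimpleDupeFilter()
first = f.request_seen('http://scrapy.org')                    # False: new request
second = f.request_seen('http://scrapy.org')                   # True: filtered out
third = f.request_seen('http://scrapy.org', dont_filter=True)  # False: filter bypassed
```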
| 0 | 2016-10-12T18:18:09Z | [
"python",
"request",
"scrapy"
] |
How to query RethinkDB based on the current time | 40,000,448 | <p>I'm trying to write a 'controller' program for a RethinkDB database which continuously dumps to JSON and deletes data which is older than 3 days, using RethinkDB's changefeed feature. </p>
<p>The problem is that the query 'hangs' from the current time, which is evaluated using <code>datetime.utcnow()</code> (or, alternatively, <code>rethinkdb.now()</code>) at the time the query is defined, remaining fixed thereafter. So as the changefeed progresses, the query becomes 'outdated'.</p>
<p>How can I make a query which is continuously 'updated' to reflect the current time?</p>
<p>To illustrate the problem, here is the script so far:</p>
<pre><code>import json
import rethinkdb as r
import pytz
from datetime import datetime, timedelta
# The database and table are assumed to have been previously created
database_name = "sensor_db"
table_name = "sensor_data"
table = r.db(database_name).table(table_name)
port_offset = 1 # To avoid interference of this testing program with the main program, all ports are initialized at an offset of 1 from the default ports using "rethinkdb --port_offset 1" at the command line.
conn = r.connect("localhost", 28015 + port_offset)
current_time = datetime.utcnow().replace(tzinfo=pytz.utc) # Current time including timezone (assumed to be UTC)
retention_period = timedelta(days=3) # Period of time during which data is retained on the main server
expiry_time = current_time - retention_period # Age at which data is removed from the main database
if "timestamp" in table.index_list().run(conn): # Assuming the table has "timestamp" as a secondary index, use "between" (for improved speed)
beginning_of_time = r.time(1400, 1, 1, 'Z') # The minimum time of a ReQL time object (the year 1400)
data_to_archive = table.between(beginning_of_time, expiry_time, index="timestamp")
else: # Else, use "filter" (requires more memory, but does not require "timestamp" to be a secondary index)
data_to_archive = table.filter(r.row['timestamp'] < expiry_time)
output_file = "archived_sensor_data.json"
with open(output_file, 'a') as f:
for change in data_to_archive.changes().run(conn, time_format="raw"): # The time_format="raw" option is passed to prevent a "RqlTzinfo object is not JSON serializable" error when dumping
if change['new_val'] is not None: # If the change is not a deletion
print change
json.dump(change['new_val'], f) # Since the main database we are reading from is append-only, the 'old_val' of the change is always None and we are interested in the 'new_val' only
f.write("\n") # Separate entries by a new line
ID_to_delete = change['new_val']['id'] # Get the ID of the data to be deleted from the database
r.db(database_name).table(table_name).get(ID_to_delete).delete().run(conn)
</code></pre>
<p>The query is stored in the variable <code>data_to_archive</code>. However, the time interval in the <code>between</code> statement is based on the <code>utcnow()</code> when the <code>current_time</code> variable is defined, and is not continuously updated in the changefeed. How could I make it so?</p>
| 0 | 2016-10-12T13:52:18Z | 40,038,907 | <p>I finally worked around the problem by doing the dumps in 'batch' mode rather than continuously using <code>changes()</code>. (To wit, I'm using the <a href="https://pypi.python.org/pypi/schedule" rel="nofollow">schedule</a> module).</p>
<p>Here is the script:</p>
<pre><code>import json
import rethinkdb as r
import pytz
from datetime import datetime, timedelta
import schedule
import time
import functools
def generate_archiving_query(retention_period=timedelta(days=3), database_name="ipercron", table_name="sensor_data", conn=None):
if conn is None:
conn = r.connect("localhost", 28015)
table = r.db(database_name).table(table_name) # RethinkDB cursor for the table of interest
current_time = r.now()
expiry_time = current_time - retention_period.total_seconds()
if "timestamp" in table.index_list().run(conn): # If the table has "timestamp" as a secondary index, use "between" (for improved speed)
beginning_of_time = r.time(1400, 1, 1, 'Z') # The minimum time of a ReQL time object (the year 1400)
data_to_archive = table.between(beginning_of_time, expiry_time, index="timestamp")
else: # Else, use "filter" (requires more memory, but does not require "timestamp" to be a secondary index)
data_to_archive = table.filter(r.row['timestamp'] < expiry_time)
# try:
# beginning_of_time = r.time(1400, 1, 1, 'Z') # The minimum time of a ReQL time object (the year 1400)
# data_to_archive = table.between(beginning_of_time, expiry_time, index="timestamp")
# except:
# data_to_archive = table.filter(r.row['timestamp'] < expiry_time)
return data_to_archive
def archiving_job(data_to_archive=None, output_file="archived_sensor_data.json", database_name="ipercron", table_name="sensor_data", conn=None):
if data_to_archive is None:
data_to_archive = generate_archiving_query()
if conn is None:
conn = r.connect("localhost", 28015)
table = r.db(database_name).table(table_name)
old_data = data_to_archive.run(conn, time_format="raw") # Without time_format="raw" the output does not dump to JSON
with open(output_file, 'a') as f:
ids_to_delete = []
for item in old_data:
print item
json.dump(item, f)
f.write('\n') # Separate each document by a new line
ids_to_delete.append(item['id'])
# table.get(item['id']).delete().run(conn)
table.get_all(r.args(ids_to_delete)).delete().run(conn)
if __name__ == "__main__":
# The database and table are assumed to have been previously created
database_name = "ipercron"
table_name = "sensor_data"
# table = r.db(database_name).table(table_name)
port_offset = 1 # To avoid interference of this testing program with the main program, all ports are initialized at an offset of 1 from the default ports using "rethinkdb --port_offset 1" at the command line.
conn = r.connect("localhost", 28015 + port_offset)
clean_slate = True
if clean_slate:
r.db(database_name).table(table_name).delete().run(conn) # For testing, start with an empty table and add a fixed amount of data
import rethinkdb_add_data
data_to_archive = generate_archiving_query(conn=conn, database_name=database_name, table_name=table_name) # Because r.now() is evaluated upon run(), the query needs only to be generated once
archiving_job_fixed_query = functools.partial(archiving_job, data_to_archive=data_to_archive, conn=conn)
schedule.every(0.1).minutes.do(archiving_job_fixed_query)
while True:
schedule.run_pending()
</code></pre>
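The reason this version stays correct is that <code>r.now()</code> builds a ReQL term which the server evaluates each time the query is <code>run()</code>, whereas <code>datetime.utcnow()</code> is evaluated once in Python when the query is defined. The difference between eager and deferred evaluation can be illustrated in plain Python (no RethinkDB needed; the retention constant is just for show):

```python
import time

RETENTION_SECONDS = 3 * 24 * 3600

# Eager: the cutoff is computed once, at definition time, and then goes stale.
stale_cutoff = time.time() - RETENTION_SECONDS

# Deferred: a callable re-evaluated on every use, so it always tracks "now".
def current_cutoff():
    return time.time() - RETENTION_SECONDS

first = current_cutoff()
time.sleep(0.05)
second = current_cutoff()  # later call -> later cutoff
```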
| 0 | 2016-10-14T08:49:46Z | [
"python",
"rethinkdb"
] |
In Python application, can't find where an item comes from | 40,000,451 | <p>I'm trying to understand how the web framework <strong>web2py</strong> works, and there is an item called <code>request.client</code> in <code>access.py</code> whose origin I can't figure out. I retrieved the code from <a href="https://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/applications/admin/models/access.py" rel="nofollow">here</a> and it begins as follows:</p>
<pre><code>import base64
import os
import time
from gluon.admin import apath
from gluon.fileutils import read_file
from gluon.utils import web2py_uuid
from pydal.contrib import portalocker
# ###########################################################
# ## make sure administrator is on localhost or https
# ###########################################################
http_host = request.env.http_host.split(':')[0]
</code></pre>
<p>My question is: when we don't know where an item in the code comes from, how can we find out?</p>
| 1 | 2016-10-12T13:52:40Z | 40,000,623 | <p>When trying to figure out how an item has been imported into Python, you need to look at the explicit <code>import</code> statements in the file of interest. In well-written code, this will be obvious.</p>
<p>If you import the module of interest into a Python shell, you can then use its <code>.__file__</code> attribute to find the file it was loaded from. There are other such attributes that you might find helpful, such as</p>
<pre><code>.__name__
.__path__
.__package__
</code></pre>
<p>If you use IPython, which is highly recommended, you could easily get a list of these double underscore methods by typing in <code><obj>.__</code> and hitting tab.</p>
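For instance, with a standard-library module (no web2py required; the file path printed will vary by installation):

```python
import json

print(json.__name__)     # 'json'
print(json.__package__)  # 'json'
print(json.__file__)     # path to json/__init__.py in your installation

# The same idea works for an instance: follow __class__ back to its module.
obj = json.JSONDecoder()
print(obj.__class__)             # <class 'json.decoder.JSONDecoder'>
print(obj.__class__.__module__)  # 'json.decoder'
```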
<p>Looking at the file you linked, I am not sure where <code>request</code> was imported. <code>client</code>, though, is an attribute of the <code>request</code> object.</p>
| 0 | 2016-10-12T14:00:26Z | [
"python",
"web2py"
] |
In Python application, can't find where an item comes from | 40,000,451 | <p>I'm trying to understand how the web framework <strong>web2py</strong> works, and there is an item called <code>request.client</code> in <code>access.py</code> whose origin I can't figure out. I retrieved the code from <a href="https://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/applications/admin/models/access.py" rel="nofollow">here</a> and it begins as follows:</p>
<pre><code>import base64
import os
import time
from gluon.admin import apath
from gluon.fileutils import read_file
from gluon.utils import web2py_uuid
from pydal.contrib import portalocker
# ###########################################################
# ## make sure administrator is on localhost or https
# ###########################################################
http_host = request.env.http_host.split(':')[0]
</code></pre>
<p>My question is: when we don't know where an item in the code comes from, how can we find out?</p>
| 1 | 2016-10-12T13:52:40Z | 40,001,380 | <p>The file to which you link, <code>access.py</code>, mentions the name <code>request</code> for the first time on line 13:</p>
<pre><code>http_host = request.env.http_host.split(':')[0]
</code></pre>
<p>This symbol, <code>request</code>, is not <code>import</code>ed from anywhere. This is an unorthodox way of writing Python programs in general, and in isolation it looks impossible. But it <em>is</em> possible, if the symbol <code>request</code> was added to the global workspace of this file by some other file that wraps the execution of <code>access.py</code>. Here is a minimal example:</p>
<pre><code>d = { 'request' : some_object() }
execfile( 'access.py', d ) # run the named file using d as its global workspace
</code></pre>
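In Python 3, where <code>execfile</code> no longer exists, the same injection pattern can be sketched as follows (the fake <code>access.py</code> body and the dictionary contents are illustrative only):

```python
import os
import tempfile

# A stand-in for access.py: it uses `request` without ever importing it.
source = "http_host = request['env']['http_host'].split(':')[0]\n"

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(source)
    path = f.name

with open(path) as fh:
    code = fh.read()

d = {'request': {'env': {'http_host': 'localhost:8000'}}}
exec(compile(code, path, 'exec'), d)  # Python 3 replacement for execfile
os.remove(path)

print(d['http_host'])  # prints localhost -- resolved from the injected workspace
```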
<p>So, in this particular case the way to investigate is to find the file that executes <code>access.py</code>. </p>
<p>[ In response to your comment about examining every file in the library: for such an analysis, and in general for a better quality of life as a programmer, you <em>will</em> need some tool that can search for a particular string in a whole hierarchy of files (like the command-line tool <code>grep</code> on Linux and MacOS, or like many third-party text editors and IDEs on Windows, but <strong><em>not</em></strong> the perennially unreliable Windows "Search" function that claims to be able to do this). ]</p>
<p>In answer to your more general question about finding where things come from: The other answers and comments mention attributes like <code>__file__</code> and/or <code>__module__</code>. These can help if you have a way of making your code spit out debug information. Since you're working with a non-console (web) framework, that may be non-trivial, but one quick-and-dirty way might be to insert a line like this in <code>access.py</code>:</p>
<pre><code>open('some_temporary_file_somewhere.txt', 'wt').write('\n'.join([
    repr(request),
    str(request.__module__),
    str(request.__class__),
]))
</code></pre>
<p>This should get you the details of where <code>request</code>'s class was defined (but not necessarily the name of the file in which the actual instance named <code>request</code> was created).</p>
| 2 | 2016-10-12T14:35:38Z | [
"python",
"web2py"
] |
In Python application, can't find where an item comes from | 40,000,451 | <p>I'm trying to understand how the web framework <strong>web2py</strong> works, and there is an item called <code>request.client</code> in <code>access.py</code> whose origin I can't figure out. I retrieved the code from <a href="https://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/applications/admin/models/access.py" rel="nofollow">here</a> and it begins as follows:</p>
<pre><code>import base64
import os
import time
from gluon.admin import apath
from gluon.fileutils import read_file
from gluon.utils import web2py_uuid
from pydal.contrib import portalocker
# ###########################################################
# ## make sure administrator is on localhost or https
# ###########################################################
http_host = request.env.http_host.split(':')[0]
</code></pre>
<p>My question is: when we don't know where an item in the code comes from, how can we find out?</p>
| 1 | 2016-10-12T13:52:40Z | 40,002,707 | <p>As explained <a href="http://web2py.com/books/default/chapter/29/04/the-core#API" rel="nofollow">here</a>, web2py model, controller, and view files are executed by the framework in an environment that has already been populated with the core API objects, including <code>request</code>, <code>response</code>, <code>session</code>, <code>cache</code>, <code>DAL</code>, <code>Field</code>, HTML helpers, and form validators (other parts of the API, such as Auth, Mail, Services, Scheduler, and many additional tools and contributed libraries, are contained in modules and imported as usual).</p>
<p>Specifically, the <code>request</code> object is created (upon each request) inside the main WSGI application (<a href="https://github.com/web2py/web2py/blob/e6a3081b42ecd58441419930266baf286561c4c7/gluon/main.py#L263" rel="nofollow">gluon.main.wsgibase</a>) at <a href="https://github.com/web2py/web2py/blob/e6a3081b42ecd58441419930266baf286561c4c7/gluon/main.py#L293" rel="nofollow">this line</a>, and it is subsequently passed to <a href="https://github.com/web2py/web2py/blob/e6a3081b42ecd58441419930266baf286561c4c7/gluon/main.py#L148" rel="nofollow">gluon.main.serve_controller</a> at <a href="https://github.com/web2py/web2py/blob/e6a3081b42ecd58441419930266baf286561c4c7/gluon/main.py#L442" rel="nofollow">this line</a>. The <code>serve_controller</code> function adds the <code>request</code> object to the execution environment and then executes model, controller, and view files in that environment.</p>
<p>In addition to model, controller, and view files, your application can include its own modules. Modules in applications are not executed by the framework (they must be imported) -- therefore, those modules must access the web2py core API objects via imports from <code>gluon</code> (the request-dependent objects, such as <code>request</code> and <code>response</code>, are accessed via the <code>current</code> thread-local object, as explained <a href="http://web2py.com/books/default/chapter/29/04/the-core#Sharing-the-global-scope-with-modules-using-the-current-object" rel="nofollow">here</a>).</p>
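The <code>current</code> object mentioned above is essentially a thread-local container, which is why each request-handling thread sees only its own <code>request</code>. The underlying mechanism can be sketched with the standard library (an analogy, not web2py's actual code):

```python
import threading

current = threading.local()  # one independent attribute namespace per thread

def handle_request(name, results):
    current.request = {'client': name}         # set per thread, like gluon's current
    results[name] = current.request['client']  # reads back this thread's value only

results = {}
threads = [threading.Thread(target=handle_request, args=(n, results))
           for n in ('a', 'b')]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))  # [('a', 'a'), ('b', 'b')] -- no cross-thread leakage
```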
<p>It may also be helpful to review the <a href="http://web2py.com/books/default/chapter/29/04/the-core#Workflow" rel="nofollow">Workflow</a>, <a href="http://web2py.com/books/default/chapter/29/04/the-core#Libraries" rel="nofollow">Libraries</a>, and <a href="http://web2py.com/books/default/chapter/29/04/the-core#Execution-environment" rel="nofollow">Execution environment</a> sections of the documentation.</p>
| 2 | 2016-10-12T15:36:42Z | [
"python",
"web2py"
] |
How to encode bytes in JSON? json.dumps() throwing a TypeError | 40,000,495 | <p>I am trying to encode a dictionary containing a string of bytes with <code>json</code>, and getting an <code>is not JSON serializable</code> error.</p>
<p>Sample code:</p>
<pre><code>import base64
import json
data={}
encoded = base64.encodebytes(b'data to be encoded')
data['bytes']=encoded
print(json.dumps(data))
</code></pre>
<p>The error I receive:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: b'ZGF0YSB0byBiZSBlbmNvZGVk\n' is not JSON serializable
</code></pre>
<p>How can I correctly encode my dictionary containing bytes with JSON?</p>
| 1 | 2016-10-12T13:54:34Z | 40,000,564 | <p>The JSON format only supports <em>unicode strings</em>. Since Base64 encodes bytes to ASCII-only bytes, you can use that codec to decode the data:</p>
<pre><code>encoded = base64.encodebytes(b'data to be encoded')
data['bytes'] = encoded.decode('ascii')
</code></pre>
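For completeness, here is a round-trip sketch of the full cycle (bytes to Base64 to ASCII <code>str</code> to JSON, and back); the variable names are illustrative:

```python
import base64
import json

# Encode: bytes -> Base64 bytes -> ASCII str -> JSON
payload = b'data to be encoded'
data = {'bytes': base64.encodebytes(payload).decode('ascii')}
serialized = json.dumps(data)

# Decode: JSON -> str -> original bytes (b64decode ignores the trailing newline)
restored = base64.b64decode(json.loads(serialized)['bytes'])
assert restored == payload
```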
| 5 | 2016-10-12T13:57:39Z | [
"python",
"json",
"python-3.x"
] |
Indexing variables and adding month to variable name in python3.5 | 40,000,595 | <p>I'm trying to set up a list of variables to work with in a linear programming problem. For this I'd like to work with some index values to make the code significantly shorter and easier to read. I've tried something like this:</p>
<pre><code>from datetime import *
months = ["Unknown", "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]
d={}
for x in range(1,13):
d["string{0}".format(x)]=("Production in months[d]")
print(d)
</code></pre>
<p>This gives me the following dict:</p>
<pre><code>{'string7': 'Production in months[d]', 'string8': 'Production in months[d]', 'string2': 'Production in months[d]', 'string9': 'Production in months[d]', 'string11': 'Production in months[d]', 'string6': 'Production in months[d]', 'string12': 'Production in months[d]', 'string3': 'Production in months[d]', 'string10': 'Production in months[d]', 'string4': 'Production in months[d]', 'string1': 'Production in months[d]', 'string5': 'Production in months[d]'}
</code></pre>
<p>I would like to have the name of the month where months[d] is printed, corresponding to the index number i in 'string[i]'.</p>
| 0 | 2016-10-12T13:59:15Z | 40,000,689 | <p>Use: </p>
<pre><code>d["string{0}".format(x)] = "Production in %s" % months[x]
</code></pre>
| 0 | 2016-10-12T14:03:25Z | [
"python",
"python-3.x"
] |
Indexing variables and adding month to variable name in python3.5 | 40,000,595 | <p>I'm trying to set up a list of variables to work with in a linear programming problem. For this I'd like to work with some index values to make the code significantly shorter and easier to read. I tried something like this:</p>
<pre><code>from datetime import *
months = ["Unknown", "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]
d={}
for x in range(1,13):
d["string{0}".format(x)]=("Production in months[d]")
print(d)
</code></pre>
<p>This returns me a list:</p>
<pre><code>{'string7': 'Production in months[d]', 'string8': 'Production in months[d]', 'string2': 'Production in months[d]', 'string9': 'Production in months[d]', 'string11': 'Production in months[d]', 'string6': 'Production in months[d]', 'string12': 'Production in months[d]', 'string3': 'Production in months[d]', 'string10': 'Production in months[d]', 'string4': 'Production in months[d]', 'string1': 'Production in months[d]', 'string5': 'Production in months[d]'}
</code></pre>
<p>I would like to have the name of the month where months[d] is printed, corresponding to the index number i in 'string[i]'.</p>
| 0 | 2016-10-12T13:59:15Z | 40,001,831 | <p>That's simple, and you have already done it once in your code!</p>
<pre><code>for x in range(1,13):
d["string{0}".format(x)]="Production in {}".format(months[x])
for key, value in d.items():
print(key, value)
</code></pre>
<p>Output:</p>
<pre><code>string5 Production in May
string4 Production in April
string1 Production in January
string2 Production in February
string11 Production in November
string9 Production in September
string7 Production in July
string8 Production in August
string10 Production in October
string3 Production in March
string12 Production in December
string6 Production in June
</code></pre>
<p>Note that the order maybe different. Also, when you don't specify any position in the placeholders for <code>format()</code> arguments as in <code>"Production in {}".format(months[x])</code>, the arguments are inserted in the order they are supplied.</p>
| 1 | 2016-10-12T14:55:19Z | [
"python",
"python-3.x"
] |
How to ping from a tuple? | 40,000,617 | <p>How can I ping a list of hosts from a tuple and store the responses in another tuple or list?</p>
<p>I know how to ping a single host:</p>
<pre><code>hostname = "10.0.0.250" #example
response = os.system("ping -c 1 " + hostname)
</code></pre>
| 0 | 2016-10-12T14:00:09Z | 40,000,681 | <p>You can simply use a <em>list comprehension</em> / <em>generator expression</em> (for tuple), given a tuple of hostnames:</p>
<pre><code>hostnames = ("10.0.0.250", "10.0.0.240", ...)
responses = tuple(os.system("ping -c 1 " + h) for h in hostnames)
</code></pre>
| 2 | 2016-10-12T14:03:10Z | [
"python"
] |
How to ping from a tuple? | 40,000,617 | <p>How can I ping a list of hosts from a tuple and store the responses in another tuple or list?</p>
<p>I know how to ping a single host:</p>
<pre><code>hostname = "10.0.0.250" #example
response = os.system("ping -c 1 " + hostname)
</code></pre>
| 0 | 2016-10-12T14:00:09Z | 40,000,715 | <pre><code>hostnames = ["10.0.0.1", "10.0.0.2"]
# Can use a tuple instead of list.
responses = [os.system("ping -c 1 " + hostname) for hostname in hostnames]
# You can enwrap the list comprehension in a call to the tuple() function
# to make `responses` a tuple instead of list.
</code></pre>
| 1 | 2016-10-12T14:04:24Z | [
"python"
] |
python pandas-possible to compare 3 dfs of same shape using where(max())? is this a masking issue? | 40,000,718 | <p>I have a dict containing 3 dataframes of identical shape. I would like to create:</p>
<ol>
<li>a 4th dataframe which identifies the largest value from the original 3 at each coordinate - so dic['four'].ix[0,'A'] = MAX( dic['one'].ix[0,'A'], dic['two'].ix[0,'A'], dic['three'].ix[0,'A'] )</li>
<li><p>a 5th with the second largest value</p>
<pre><code>dic = {}
for i in ['one','two','three']:
dic[i] = pd.DataFrame(np.random.randint(0,100,size=(10,3)), columns=list('ABC'))
</code></pre></li>
</ol>
<p>I cannot figure out how to use .where() to compare the original 3 dfs. Looping through would be inefficient for ultimate data set. </p>
| 2 | 2016-10-12T14:04:31Z | 40,001,247 | <p>The 1st question is easy to answer, you could use the <code>numpy.maximum()</code> function to find the element wise maximum value in each cell, across multiple dataframes</p>
<pre><code>dic ['four'] = pd.DataFrame(np.maximum(dic['one'].values,dic['two'].values,dic['three'].values),columns = list('ABC'))
</code></pre>
| 0 | 2016-10-12T14:29:31Z | [
"python",
"pandas",
"numpy",
"max",
"where"
] |
python pandas-possible to compare 3 dfs of same shape using where(max())? is this a masking issue? | 40,000,718 | <p>I have a dict containing 3 dataframes of identical shape. I would like to create:</p>
<ol>
<li>a 4th dataframe which identifies the largest value from the original 3 at each coordinate - so dic['four'].ix[0,'A'] = MAX( dic['one'].ix[0,'A'], dic['two'].ix[0,'A'], dic['three'].ix[0,'A'] )</li>
<li><p>a 5th with the second largest value</p>
<pre><code>dic = {}
for i in ['one','two','three']:
dic[i] = pd.DataFrame(np.random.randint(0,100,size=(10,3)), columns=list('ABC'))
</code></pre></li>
</ol>
<p>I cannot figure out how to use .where() to compare the original 3 dfs. Looping through would be inefficient for ultimate data set. </p>
| 2 | 2016-10-12T14:04:31Z | 40,001,276 | <p>consider the <code>dict</code> <code>dfs</code> which is a dictionary of <code>pd.DataFrame</code>s</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed([3,1415])
dfs = dict(
one=pd.DataFrame(np.random.randint(1, 10, (5, 5))),
two=pd.DataFrame(np.random.randint(1, 10, (5, 5))),
three=pd.DataFrame(np.random.randint(1, 10, (5, 5))),
)
</code></pre>
<p>the best way to handle this is with a <code>pd.Panel</code> object, which is the higher dimensional object analogous to <code>pd.DataFrame</code>.</p>
<pre><code>p = pd.Panel(dfs)
</code></pre>
<p>then the answers you need are very straighforward</p>
<p><strong><em>max</em></strong><br>
<code>p.max(axis='items')</code> or <code>p.max(0)</code></p>
<p><strong><em>penultimate</em></strong><br>
<code>p.apply(lambda x: np.sort(x)[-2], axis=0)</code></p>
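<p>If you prefer to stay with plain NumPy arrays instead of a <code>Panel</code>, the same two results can be had by stacking the three frames' values and sorting along the new axis (a sketch with small arrays standing in for the real dataframes):</p>

```python
import numpy as np

# three same-shape arrays standing in for dic['one']..dic['three'].values
one   = np.array([[1, 9], [4, 2]])
two   = np.array([[7, 3], [5, 8]])
three = np.array([[2, 6], [9, 1]])

stacked = np.stack([one, two, three])  # shape (3, rows, cols)
ordered = np.sort(stacked, axis=0)     # ascending along the stacked axis

largest = ordered[-1]   # elementwise maximum -> the "fourth" frame
second  = ordered[-2]   # second largest      -> the "fifth" frame

print(largest)
print(second)
```

<p>Wrap each result in <code>pd.DataFrame(..., columns=...)</code> if you want dataframes back.</p>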
| 3 | 2016-10-12T14:30:24Z | [
"python",
"pandas",
"numpy",
"max",
"where"
] |
How to fix matplotlib's subplots when the entire layout can not be filled? | 40,000,784 | <p>I would like to plot a DataFrame with say 29 columns, so I use the matplotlib's "subplots" command using 5x6 layout, (5x6 = 30 > 29)</p>
<pre><code>fig, ax = plt.subplots(5, 6)
</code></pre>
<p>However, when I plot all of the columns, the last subplot (i.e., row=5, col = 6) is empty because there is no data to show there. Is there a way to remove that last subplot?</p>
| 0 | 2016-10-12T14:07:30Z | 40,000,893 | <p>Try this</p>
<pre><code>fig.delaxes(ax[4, 5])  # axes are 0-indexed: the last subplot of a 5x6 grid, not ax[5, 6]
plt.show()
</code></pre>
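<p>For the general case (any number of leftover slots), one sketch is to flatten the axes array and delete every axis past the last one used:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

n_plots = 29                       # e.g. 29 DataFrame columns in a 5x6 grid
fig, axes = plt.subplots(5, 6)

for ax in axes.ravel()[n_plots:]:  # every slot past the last used one
    fig.delaxes(ax)

print(len(fig.axes))
```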
| 2 | 2016-10-12T14:12:40Z | [
"python",
"pandas",
"matplotlib"
] |
Python Logging for a module shared by different scripts | 40,000,929 | <p>I like the python logging infrastructure and I want to use it for a number of different overnight jobs that I run. A lot of these jobs use module X let's say. I want the logging for module X to write to a log file not dependent on module X, but based on the job that ultimately led to calling functionality in module X. </p>
<p>So if overnight_script_1.py calls foo() in module X, I want the log of foo() to go to overnight_script_1.log. I also want overnight_script_2.py's call of foo() to log to overnight_script_2.log.</p>
<p>A potential solution to this problem would be to set up the log file based on looking at the 0th argument to sys.argv which can be mapped to my preferred file. Something seems hacky about doing it this way. Is there a preferred design pattern for doing this? I don't want to rummage through different log files based on the module where the function was called to find my diagnostic information for one of my scripts. Here is some code because I am not sure I am making myself clear.</p>
<p>Here is script1.py</p>
<pre><code>import X
import logging_utils as lu
import sys
logname=sys.argv[0][:-3] # logname==script1 with the .py cut off
logger=lu.setup_logger(log_name) # assume this does the formatting and sets the filehandlers
# furthermore assume the file handler is set so that the output goes to script1.log.
# do a bunch of thing
logger.info('I am doing a bunch of things in script1 and I will now call X.foo()')
X.foo() # see module X below
logger.info('I finished X.foo()')
</code></pre>
<p>Similary, here is script2.py</p>
<pre><code>import X
import logging_utils as lu
import sys
logname=sys.argv[0][:-3] # logname==script2 with the .py cut off
logger=lu.setup_logger(log_name) # assume this does the formatting and sets the filehandlers
# furthermore assume the file handler is set so that the output goes to script2.log.
# do a bunch of thing
logger.info('I am doing a bunch of things in script2 and I will now call X.foo()')
X.foo() # see module X below
logger.info('I finished X.foo()')
</code></pre>
<p>Here is X.py</p>
<pre><code>import logging
import sys
logname=sys.argv[0][:-3] # could be script1 or script2
logger=logging.getLogger(logname)
def foo():
try:
i=1/0
except:
logger.error('oops - division by zero')
</code></pre>
<p>Then I want to run:</p>
<p>python script1.py</p>
<p>python script2.py</p>
<p>and get two log files script1.log and script2.log where the division by zero error that occurred in module X is logged in each.</p>
 | 0 | 2016-10-12T14:14:24Z | 40,001,073 | <p>You could probably use as many RotatingFileHandler(s) as you have different modules, and set up their filenames accordingly.</p>
<p><a href="https://docs.python.org/2/library/logging.handlers.html" rel="nofollow">https://docs.python.org/2/library/logging.handlers.html</a></p>
<p>-Edit- </p>
<p>Based on the comment, you may want to use a <code>Formatter</code> that takes into account where the call was made (to know whether the call to the shared library came from <code>module_1</code>, <code>module_2</code>, and so on) and then simply log to the same file.</p>
<p>There are some caveats to think about: </p>
<ul>
<li>I'm not sure if there is order preservation in the writing of the file because there may be two or more processes that write to that file</li>
<li>The log file may increase in size arbitrarily ending up with 0 disk space</li>
</ul>
<p><a href="https://docs.python.org/2/library/logging.html#formatter-objects" rel="nofollow">https://docs.python.org/2/library/logging.html#formatter-objects</a>
<a href="https://docs.python.org/2/library/logging.html#logrecord-attributes" rel="nofollow">https://docs.python.org/2/library/logging.html#logrecord-attributes</a>
<a href="https://docs.python.org/2.7/library/logging.config.html#configuration-file-format" rel="nofollow">https://docs.python.org/2.7/library/logging.config.html#configuration-file-format</a></p>
| 0 | 2016-10-12T14:21:01Z | [
"python",
"logging"
] |
Python Logging for a module shared by different scripts | 40,000,929 | <p>I like the python logging infrastructure and I want to use it for a number of different overnight jobs that I run. A lot of these jobs use module X let's say. I want the logging for module X to write to a log file not dependent on module X, but based on the job that ultimately led to calling functionality in module X. </p>
<p>So if overnight_script_1.py calls foo() in module X, I want the log of foo() to go to overnight_script_1.log. I also want overnight_script_2.py's call of foo() to log to overnight_script_2.log.</p>
<p>A potential solution to this problem would be to set up the log file based on looking at the 0th argument to sys.argv which can be mapped to my preferred file. Something seems hacky about doing it this way. Is there a preferred design pattern for doing this? I don't want to rummage through different log files based on the module where the function was called to find my diagnostic information for one of my scripts. Here is some code because I am not sure I am making myself clear.</p>
<p>Here is script1.py</p>
<pre><code>import X
import logging_utils as lu
import sys
logname=sys.argv[0][:-3] # logname==script1 with the .py cut off
logger=lu.setup_logger(log_name) # assume this does the formatting and sets the filehandlers
# furthermore assume the file handler is set so that the output goes to script1.log.
# do a bunch of thing
logger.info('I am doing a bunch of things in script1 and I will now call X.foo()')
X.foo() # see module X below
logger.info('I finished X.foo()')
</code></pre>
<p>Similary, here is script2.py</p>
<pre><code>import X
import logging_utils as lu
import sys
logname=sys.argv[0][:-3] # logname==script2 with the .py cut off
logger=lu.setup_logger(log_name) # assume this does the formatting and sets the filehandlers
# furthermore assume the file handler is set so that the output goes to script2.log.
# do a bunch of thing
logger.info('I am doing a bunch of things in script2 and I will now call X.foo()')
X.foo() # see module X below
logger.info('I finished X.foo()')
</code></pre>
<p>Here is X.py</p>
<pre><code>import logging
import sys
logname=sys.argv[0][:-3] # could be script1 or script2
logger=logging.getLogger(logname)
def foo():
try:
i=1/0
except:
logger.error('oops - division by zero')
</code></pre>
<p>Then I want to run:</p>
<p>python script1.py</p>
<p>python script2.py</p>
<p>and get two log files script1.log and script2.log where the division by zero error that occurred in module X is logged in each.</p>
| 0 | 2016-10-12T14:14:24Z | 40,002,919 | <p>I believe you should just follow the standard set up for library code:</p>
<p>Say you have a package <code>mypkg</code> and you want this package to log information while letting the user of the package decide which level of logging to use and where the output should go.</p>
<p>The <code>mypkg</code> package should <strong>not</strong> set the level or attach any handlers for its logs.
Module <code>x</code> of <code>mypkg</code> should look something like this:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
def my_function():
logger.info('something')
</code></pre>
<p>No configuration of the logging whatsoever just <code>getLogger(__name__)</code>.</p>
<p>Then your <code>script.py</code> that uses <code>mypkg</code> and wants to log at level <code>INFO</code> on the console will do:</p>
<pre><code>import logging
import mypkg
root_logger = logging.getLogger()
console_handler = logging.StreamHandler()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(console_handler)
mypkg.my_function() # produces output to stderr
</code></pre>
<p>While <code>script2.py</code> will log at level <code>DEBUG</code> to a file:</p>
<pre><code>import logging
import mypkg
root_logger = logging.getLogger()
file_handler = logging.FileHandler('some_file.log')
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(file_handler)
mypkg.my_function()
</code></pre>
<p>Note that by setting the level and handler on the root logger we are setting the level "globally". If the user has some own loggers that he wants to use with level <code>DEBUG</code> but he wants to use level <code>INFO</code> for <code>mypkg</code> he can do:</p>
<pre><code>root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
mypkg_logger = logging.getLogger('mypkg')
mypkg_logger.setLevel(logging.INFO)
handler = #... whatever you want
root_logger.addHandler(handler)
</code></pre>
<p>If your <code>mypkg</code> contains the modules <code>x.py</code>, <code>y.py</code>, <code>a.py</code>, <code>b.py</code> and <code>c.py</code> and you want to log the functions inside <code>x</code> at <code>DEBUG</code> and those inside <code>y</code> at <code>WARNING</code> and those for <code>a/b/c.py</code> at <code>INFO</code> you can do so by setting the levels on the corresponding loggers:</p>
<pre><code>mypkg_logger = logging.getLogger('mypkg')
mypkg_logger.setLevel(logging.INFO)
x_logger = logging.getLogger('mypkg.x')
x_logger.setLevel(logging.DEBUG)
y_logger = logging.getLogger('mypkg.y')
y_logger.setLevel(logging.WARNING)
</code></pre>
<p>Put the handlers only attached to the <code>root_logger</code> and everything should work fine.</p>
<hr>
<p>Note: on some versions of python the <code>logging</code> module might warn you that you defined a logger with no handlers for the library modules. To fix ths you can use a <code>NullHandler</code> that just drops the logging messages:</p>
<pre><code># x.py
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
</code></pre>
<p>So the <code>NullHandler</code> literally does nothing, except from preventing the <code>logging</code> module from complaining. AFAIK newer versions do not need this at all.</p>
| 2 | 2016-10-12T15:48:04Z | [
"python",
"logging"
] |
Python in SPSS - KEEP variables | 40,001,022 | <p>I have selected the variables I need based on a string within the variable name. I'm not sure how to keep only these variables from my SPSS file. </p>
<pre><code>begin program.
import spss,spssaux
spssaux.OpenDataFile(r'XXXX.sav')
target_string = 'qb2'
variables = [var for var in spssaux.GetVariableNamesList() if target_string in var]
vars = spssaux.VariableDict().expand(variables)
nvars=len(vars)
for i in range(nvars):
print vars[i]
spss.Submit(r"""
SAVE OUTFILE='XXXX_reduced.sav'.
ADD FILES FILE=* /KEEP \n %s.
""" %(vars))
end program.
</code></pre>
<p>The list of variables that it prints out is correct, but it's falling over trying to KEEP them. I'm guessing it's something to do with not activating a dataset or bringing in the file again as to why there's errors?</p>
| 0 | 2016-10-12T14:18:05Z | 40,003,749 | <p>Have you tried reversing the order of the SAVE OUTFILE and ADD FILES commands? I haven't run this in SPSS via Python, but in standard SPSS, your syntax will write the file to disk, and then select the variables for the active version in memory--so if you later access the saved file, it will be the version before you selected variables.
If that doesn't work, can you explain what you mean by falling over trying to KEEP them?</p>
| 2 | 2016-10-12T16:29:17Z | [
"python",
"spss"
] |
Python in SPSS - KEEP variables | 40,001,022 | <p>I have selected the variables I need based on a string within the variable name. I'm not sure how to keep only these variables from my SPSS file. </p>
<pre><code>begin program.
import spss,spssaux
spssaux.OpenDataFile(r'XXXX.sav')
target_string = 'qb2'
variables = [var for var in spssaux.GetVariableNamesList() if target_string in var]
vars = spssaux.VariableDict().expand(variables)
nvars=len(vars)
for i in range(nvars):
print vars[i]
spss.Submit(r"""
SAVE OUTFILE='XXXX_reduced.sav'.
ADD FILES FILE=* /KEEP \n %s.
""" %(vars))
end program.
</code></pre>
<p>The list of variables that it prints out is correct, but it's falling over trying to KEEP them. I'm guessing it's something to do with not activating a dataset or bringing in the file again as to why there's errors?</p>
| 0 | 2016-10-12T14:18:05Z | 40,005,505 | <ol>
<li>You'll want to use the <code>ADD FILES FILE</code> command before the <code>SAVE</code> for your saved file to be the "reduced" file</li>
<li>I think your very last line in the python program should be trying to join the elements in the list <code>vars</code>. For example: <code>%( " ".join(vars) )</code> </li>
</ol>
| 1 | 2016-10-12T18:08:23Z | [
"python",
"spss"
] |
Python in SPSS - KEEP variables | 40,001,022 | <p>I have selected the variables I need based on a string within the variable name. I'm not sure how to keep only these variables from my SPSS file. </p>
<pre><code>begin program.
import spss,spssaux
spssaux.OpenDataFile(r'XXXX.sav')
target_string = 'qb2'
variables = [var for var in spssaux.GetVariableNamesList() if target_string in var]
vars = spssaux.VariableDict().expand(variables)
nvars=len(vars)
for i in range(nvars):
print vars[i]
spss.Submit(r"""
SAVE OUTFILE='XXXX_reduced.sav'.
ADD FILES FILE=* /KEEP \n %s.
""" %(vars))
end program.
</code></pre>
<p>The list of variables that it prints out is correct, but it's falling over trying to KEEP them. I'm guessing it's something to do with not activating a dataset or bringing in the file again as to why there's errors?</p>
| 0 | 2016-10-12T14:18:05Z | 40,022,078 | <p>It appears that the problem has been solved, but I would like to point out another solution that can be done without writing any Python code. The extension command SPSSINC SELECT VARIABLES defines a macro based on properties of the variables. This can be used in the ADD FILES command. </p>
<pre><code>SPSSINC SELECT VARIABLES MACRONAME="!selected"
  /PROPERTIES PATTERN = ".*qb2".
ADD FILES /FILE=* /KEEP !selected.
</code></pre>
<p>The SELECT VARIABLES command is actually implemented in Python. Its selection criteria can also include other metadata such as type and measurement level.</p>
| 1 | 2016-10-13T13:15:47Z | [
"python",
"spss"
] |
Why is there no output showing in this python code? | 40,001,080 | <pre><code>man=[]
other=[]
try:
data=open('sketch.txt')
for each_line in data:
try:
(role,line_spoken) = each_line.split(':',1)
line_spoken= line_spoken.strip()
if role == 'Man':
man.append(line_spoken)
elif role == 'Other Man':
other.append(line_spoken)
except ValueError:
pass
data.close()
except IOError:
print('The datafile is missing!')
try:
man_file=open('man_data.txt','w')
other_file=open('other_data.txt','w')
print(man, file=man_file)
print(other, file=other_file)
man_file.close()
other_file.close()
except IOError:
print('File error.')
</code></pre>
<p>Shouldn't it create the man_data and other_data files?
There is no error message or any kind of output in IDLE.</p>
<p><a href="https://i.stack.imgur.com/V0pD3.png" rel="nofollow"><img src="https://i.stack.imgur.com/V0pD3.png" alt="enter image description here"></a></p>
| -1 | 2016-10-12T14:21:24Z | 40,001,654 | <p>The indentation in your screenshot is different from your question. In your question you claimed your code was this (with some bits trimmed out):</p>
<pre><code>try:
# Do something
except IOError:
# Handle error
try:
# Write to man_data.txt and other_data.txt
except IOError:
# Handle error
</code></pre>
<p>But your screenshot shows that you actually ran this code:</p>
<pre><code>try:
# Do something
except IOError:
# Handle error
try:
# Write to man_data.txt and other_data.txt
except IOError:
# Handle error
</code></pre>
<p>The whole of the second <code>try</code>/<code>except</code> block is within the <code>except</code> clause of the first one, so it will only be executed if there is an exception in the first <code>try</code> block. The solution is to run the code that is in your question i.e. unindent the second <code>try</code>/<code>except</code> block so that it is at the same level as the first one.</p>
| 2 | 2016-10-12T14:47:01Z | [
"python",
"python-3.x"
] |
numpy.float64 is not iterable | 40,001,301 | <p>I'm trying to print a function which uses several parameters from numpy arrays and lists, but I keep getting the error "numpy.float64 object is not iterable". I've looked at several questions on the forum about this topic and tried different answers, but none seem to work (or I might be doing something wrong; I'm still a beginner at python). It all comes down to the same thing: I'm stuck and I hope you guys can help. I'm using python 2.7; this is the code: </p>
<p>EDIT: Included the error message and changed the print to "print(T, (obj(T),))"</p>
<pre><code> from __future__ import division
import numpy as np
import random
K = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1,])
x = len(K)
#Production rates and demand rates of products setup costs and holding costs (p, d, c, h)
p = np.array([193, 247, 231, 189, 159])
d = np.array([16, 16, 21, 19, 23])
#c1 = np.array([random.random() for _ in range(x)]) use these values as test values for c
c = [0.752, 0.768, 0.263, 0.152, 0.994, 0.449, 0.431, 0.154, 0.772]
h = [0.10*c[i]/240 for i in range(x)]
n = len(p)
t = [10.76, 74.61, 47.54, 29.40, 45.00, 90.48, 17.09, 85.19, 35.33]
def obj(T):
for i in range(n):
for q in range(x):
for k in range(x):
return ((1. / T) * c[q] + sum((.5*h[k]*(p[i]-d[i])* (p[i]/d[i])*(t[k])**2)))
for T in range(200, 900):
print(T, (obj(T),))
runfile('C:/Users/Jasper/Anaconda2/Shirodkar.py', wdir='C:/Users/Jasper/Anaconda2')
Traceback (most recent call last):
File "<ipython-input-1-0cfdc6b9fe69>", line 1, in <module>
runfile('C:/Users/Jasper/Anaconda2/Shirodkar.py', wdir='C:/Users/Jasper/Anaconda2')
File "C:\Users\Jasper\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
  File "C:\Users\Jasper\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/Jasper/Anaconda2/Shirodkar.py", line 24, in <module>
print(T, (obj(T),))
File "C:/Users/Jasper/Anaconda2/Shirodkar.py", line 21, in obj
return ((1. / T) * c[q] + sum((.5*h[k]*(p[i]-d[i])*(p[i]/d[i])*(t[k])**2)))
TypeError: 'numpy.float64' object is not iterable
</code></pre>
| -1 | 2016-10-12T14:31:31Z | 40,001,458 | <p>I suspect the problem is here:</p>
<pre><code>sum((.5*h[k]*(p[i]-d[i])* (p[i]/d[i])*(t[k])**2))
</code></pre>
<p>The end result of that expression is a float, isn't it? What is the sum() for?</p>
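<p>Assuming the intent was to sum the holding-cost term over every <code>k</code>, one hedged fix is to hand <code>sum()</code> a generator expression instead of a single float (plain lists here so the sketch stands alone; the <code>i</code> and <code>q</code> defaults are illustrative, since the original nested loops returned on the first iteration anyway):</p>

```python
p = [193, 247, 231, 189, 159]
d = [16, 16, 21, 19, 23]
c = [0.752, 0.768, 0.263, 0.152, 0.994, 0.449, 0.431, 0.154, 0.772]
h = [0.10 * ci / 240 for ci in c]
t = [10.76, 74.61, 47.54, 29.40, 45.00, 90.48, 17.09, 85.19, 35.33]

def obj(T, i=0, q=0):
    # sum() needs an iterable; build the per-k terms with a generator
    holding = sum(0.5 * h[k] * (p[i] - d[i]) * (p[i] / d[i]) * t[k] ** 2
                  for k in range(len(t)))
    return (1.0 / T) * c[q] + holding

print(obj(200))
```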
| 2 | 2016-10-12T14:39:00Z | [
"python",
"function",
"numpy",
"iterable"
] |
NameError when importing function from another script? | 40,001,314 | <p>I am having difficulty importing a function from another script. Both of the scripts below are in the same directory. Why can't the function from another script handle an object with the same name (<code>arr</code>)?</p>
<p><strong>evens.py</strong></p>
<pre><code>def find_evens():
return [x for x in arr if x % 2 == 0]
if __name__ == '__main__':
arr = list(range(11))
print(find_evens())
</code></pre>
<p><strong>import_evens.py</strong></p>
<pre><code>from evens import find_evens
if __name__ == '__main__':
arr = list(range(11))
print(find_evens())
</code></pre>
<p><strong>Traceback</strong></p>
<pre><code>Traceback (most recent call last):
File "C:\Users\user\Desktop\import_evens.py", line 7, in <module>
find_evens()
File "C:\Users\user\Desktop\evens.py", line 2, in find_evens
return [x for x in arr if x % 2 == 0]
NameError: name 'arr' is not defined
</code></pre>
| -1 | 2016-10-12T14:32:37Z | 40,001,775 | <p>Modules in python have separate namespaces. The qualified names <code>evens.arr</code> and <code>import_evens.arr</code> are separate entities. In each module, using just the name <code>arr</code> refers to the one local to it, so <code>arr</code> in <code>import_evens</code> is actually <code>import_evens.arr</code>.</p>
<p>Since you are defining <code>arr</code> inside of <code>if __name__ ...</code>, the name <code>arr</code> is only the defined in the <em>executed</em> module. The name <code>evens.arr</code> is never defined.</p>
<p>Further, there is no notion of truly global names. A name can be global to a module, so all entities inside it can use it. Any other module still has to address it as <code>a_module.global_variables_name</code>. It can also be imported as <code>from a_module import global_variables_name</code>, but this is just sugar for importing it and binding it to a <em>new</em> local name.</p>
<pre><code># same as `from a_module import global_variables_name`
import a_module
global_variables_name = a_module.global_variables_name
</code></pre>
<hr>
<p>What you have shown is best done via parameters to the function:</p>
<pre><code># evens.py
def find_evens(arr):
return [x for x in arr if x % 2 == 0]
# import_evens.py
if __name__ == '__main__':
arr = list(range(11))
print(find_evens(arr))
</code></pre>
<p>If you think it's better to have global variables for this but don't understand how a language uses global variables, it's better not to have global variables.</p>
| 3 | 2016-10-12T14:52:34Z | [
"python",
"python-import"
] |
Remove Quotation mark from in between of the String but from 0th index and -1th index | 40,001,326 | <p>I want to remove quotation mark from this string:</p>
<pre><code>'"Hello World - October 1 Not Trending Twitter """"""""""""""""Spark 2, sparkCSV parser"""""""""""""""" - DDSAD"""""""""""'
</code></pre>
<p>Output should be</p>
<pre><code>'"Hello World - October 1 Not Trending Twitter Spark 2, sparkCSV parser - DDSAD"'
</code></pre>
<p>Any ideas?</p>
| 0 | 2016-10-12T14:33:14Z | 40,001,513 | <p>Take the string and replace the <code>'"'</code> with <code>''</code>; then place them back in <code>'""'</code> with <code>'"{}"'.format</code>:</p>
<pre><code>s = '"Hello World - October 1 Not Trending Twitter """"""""""""""""Spark 2, sparkCSV parser"""""""""""""""" - DDSAD"""""""""""'
r = '"{}"'.format(s.replace('"', ''))
</code></pre>
<p>Result being:</p>
<pre><code>'"Hello World - October 1 Not Trending Twitter Spark 2, sparkCSV parser - DDSAD"'
</code></pre>
<p>For your larger string as supplied in the comment, you could <code>split</code> on the <code>comma</code> and <em>then</em> join the formatted strings on the comma again:</p>
<pre><code>s = '"EM16203120","Hello World - October 1 Not Trending Twitter """"""""""""""""Spark 2, sparkCSV parser"""""""""""""""" - DDSAD"""""""""""'
r = ','.join('"{}"'.format(sb.replace('"', '')) for sb in s.split(','))
</code></pre>
<p>With <code>r</code> now being:</p>
<pre><code>'"EM16203120","Hello World - October 1 Not Trending Twitter Spark 2"," sparkCSV parser - DDSAD"'
</code></pre>
| 3 | 2016-10-12T14:41:05Z | [
"python",
"string",
"python-3.x"
] |
Find all the occurences of a string in an imperfect text | 40,001,336 | <p>I am trying to find a string within a long text extracted from a PDF file, and get the string's position in the text, and then return 100 words before the string and 100 after.
The problem is that the extraction is not perfect, so I am having a problem like this:</p>
<p>The query string is "test text"</p>
<p>The text may look like:</p>
<blockquote>
<p>This is atest textwith a problem</p>
</blockquote>
<p>as you can see the word "test" is joined with the letter "a" and the word "text" is joined with the word "with"</p>
<p>So the only function is working with me is __contains __ which doesn't return the position of the word.</p>
<p>Any ideas to find all the occurrences of a word in such a text with their positions?</p>
<p>Thank you very much</p>
| 1 | 2016-10-12T14:33:35Z | 40,001,524 | <p>If you're looking for the position of the text within the string, you can use <a href="https://docs.python.org/2/library/string.html" rel="nofollow"><code>string.find()</code></a>.</p>
<pre><code>>>> query_string = 'test text'
>>> text = 'This is atest textwith a problem'
>>> if query_string in text:
print text.find(query_string)
9
</code></pre>
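<p>Since the question asks for <em>all</em> occurrences rather than just the first, <code>find()</code>'s optional start argument can be used in a loop. A sketch (the <code>find_all</code> helper is my own name, not part of the standard library):</p>

```python
def find_all(text, query):
    """Yield every index at which query occurs in text, left to right."""
    pos = text.find(query)
    while pos != -1:
        yield pos
        pos = text.find(query, pos + 1)  # resume searching after the last hit

s = 'This is atest textwith a test text problem'
print(list(find_all(s, 'test text')))  # [9, 25]
```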
| 2 | 2016-10-12T14:41:27Z | [
"python"
] |
Find all the occurrences of a string in an imperfect text | 40,001,336 | <p>I am trying to find a string within a long text extracted from a PDF file, and get the string's position in the text, and then return 100 words before the string and 100 after.
The problem is that the extraction is not perfect, so I am having a problem like this:</p>
<p>The query string is "test text"</p>
<p>The text may look like:</p>
<blockquote>
<p>This is atest textwith a problem</p>
</blockquote>
<p>as you can see the word "test" is joined with the letter "a" and the word "text" is joined with the word "with"</p>
<p>So the only function working for me is <code>__contains__</code>, which doesn't return the position of the word.</p>
<p>Any ideas to find all the occurrences of a word in such a text with their positions?</p>
<p>Thank you very much</p>
 | 1 | 2016-10-12T14:33:35Z | 40,001,602 | <p>You did not specify all your requirements, but this works for your current problem. The program prints out <code>9</code> and <code>42</code>, which are the beginnings of the two occurrences of <code>test text</code>.</p>
<pre><code>import re
filt = re.compile("test text")
for match in filt.finditer('This is atest textwith a problem. another test text'):
print match.start()
</code></pre>
| 3 | 2016-10-12T14:44:28Z | [
"python"
] |
Find all the occurrences of a string in an imperfect text | 40,001,336 | <p>I am trying to find a string within a long text extracted from a PDF file, and get the string's position in the text, and then return 100 words before the string and 100 after.
The problem is that the extraction is not perfect, so I am having a problem like this:</p>
<p>The query string is "test text"</p>
<p>The text may look like:</p>
<blockquote>
<p>This is atest textwith a problem</p>
</blockquote>
<p>as you can see the word "test" is joined with the letter "a" and the word "text" is joined with the word "with"</p>
<p>So the only function working for me is <code>__contains__</code>, which doesn't return the position of the word.</p>
<p>Any ideas to find all the occurrences of a word in such a text with their positions?</p>
<p>Thank you very much</p>
| 1 | 2016-10-12T14:33:35Z | 40,001,852 | <p>You might have a look at the <a href="https://pypi.python.org/pypi/regex" rel="nofollow">regex</a> module which allows for 'fuzzy' matching:</p>
<pre><code>>>> import regex
>>> s='This is atest textwith a problem'
>>> regex.search(r'(?:text with){e<2}', s)
<regex.Match object; span=(14, 22), match='textwith', fuzzy_counts=(0, 0, 1)>
>>> regex.search(r'(?:test text){e<2}', s)
<regex.Match object; span=(8, 18), match='atest text', fuzzy_counts=(0, 1, 0)>
</code></pre>
<p>You can match text that has insertions, deletions, and errors. The match group returned has the span and index. </p>
<p>You can use <code>regex.findall</code> to find all the potential target matches.</p>
<p>Perfect for what you are describing.</p>
| 1 | 2016-10-12T14:57:00Z | [
"python"
] |
Find all the occurrences of a string in an imperfect text | 40,001,336 | <p>I am trying to find a string within a long text extracted from a PDF file, and get the string's position in the text, and then return 100 words before the string and 100 after.
The problem is that the extraction is not perfect, so I am having a problem like this:</p>
<p>The query string is "test text"</p>
<p>The text may look like:</p>
<blockquote>
<p>This is atest textwith a problem</p>
</blockquote>
<p>as you can see the word "test" is joined with the letter "a" and the word "text" is joined with the word "with"</p>
<p>So the only function working for me is <code>__contains__</code>, which doesn't return the position of the word.</p>
<p>Any ideas to find all the occurrences of a word in such a text with their positions?</p>
<p>Thank you very much</p>
| 1 | 2016-10-12T14:33:35Z | 40,002,884 | <p>You could take the following kind of approach. This first attempts to split the whole text into words, and keeps note of the index of each word. </p>
<p>Next it iterates through the text looking for <code>test text</code> with possible 0 or more spaces between. For each match it notes the start and then creates a list of words found before and after that point using Python's <code>bisect</code> library to locate the required entries in the <code>words</code> list.</p>
<pre><code>import bisect
import re
test = "aa bb cc dd test text ee ff gg testtextwith hh ii jj"
words = [(w.start(), w.group(0)) for w in re.finditer(r'(\b\w+?\b)', test)]
adjacent_words = 2
for match in re.finditer(r'(test\s*?text)', test):
start, end = match.span()
words_start = bisect.bisect_left(words, (start, ''))
words_end = bisect.bisect_right(words, (end, ''))
words_before = [w for i, w in words[words_start-adjacent_words : words_start]]
words_after = [w for i, w in words[words_end : words_end + adjacent_words]]
# Adjacent words as a list
print words_before, match.group(0), words_after
# Or, surrounding text as is.
print test[words[words_start-adjacent_words][0] : words[words_end+adjacent_words][0]]
print
</code></pre>
<p>So for this example with 2 adjacent words, you would get the following output:</p>
<pre><code>['cc', 'dd'] test text ['ee', 'ff']
cc dd test text ee ff
['ff', 'gg'] testtext ['hh', 'ii']
ff gg testtextwith hh ii
</code></pre>
| 3 | 2016-10-12T15:45:56Z | [
"python"
] |
Python loop to iterate through elements in an XML and get sub-element values | 40,001,345 | <p>I am working with an XML being read from an API that has several hotel locations. Each individual hotel has a "hotel code" element that is a value unique to each hotel in the XML output and I would like to GET the "latitude" and "longitude" attributes for each hotel. My code right now can parse through the XML and record each instance of "latitude" and "longitude", but not organized as paired lat/lon per hotel; rather, it records every latitude in the XML then every longitude in the XML. I am having trouble figuring out how to say: IF hotel code == the previous hotel code, record latitude/longitude together; ELSE move on to next hotel and record that lat/lon. An example section of the XML output is below as is my code and my code's output:</p>
<p>XML: </p>
<pre><code><hotel code="13272" name="Sonesta Fort Lauderdale Beach" categoryCode="4EST" categoryName="4 STARS" destinationCode="FLL" destinationName="Fort Lauderdale - Hollywood Area - FL" zoneCode="1" zoneName="Fort Lauderdale Beach Area" latitude="26.137508" longitude="-80.103438" minRate="1032.10" maxRate="1032.10" currency="USD"><rooms><room code="DBL.DX" name="DOUBLE DELUXE"><rates><rate rateKey="20161215|20161220|W|235|13272|DBL.DX|GC-ALL|RO||1~1~0||N@675BEABED1984D9E8073EB6154B41AEE" rateClass="NOR" rateType="BOOKABLE" net="1032.10" allotment="238" rateCommentsId="235|38788|431" paymentType="AT_WEB" packaging="false" boardCode="RO" boardName="ROOM ONLY" rooms="1" adults="1" children="0"><cancellationPolicies><cancellationPolicy amount="206.42" from="2016-12-11T23:59:00-05:00"/></cancellationPolicies></rate></rates></room></rooms></hotel>
</code></pre>
<p>CODE:</p>
<pre><code>import time, hashlib
import urllib2
from xml.dom import minidom
# Your API Key and secret
apiKey =
Secret =
# Signature is generated by SHA256 (Api-Key + Secret + Timestamp (in seconds))
sigStr = "%s%s%d" % (apiKey,Secret,int(time.time()))
signature = hashlib.sha256(sigStr).hexdigest()
endpoint = "https://api.test.hotelbeds.com/hotel-api/1.0/hotels"
try:
# Create http request and add headers
req = urllib2.Request(url=endpoint)
req.add_header("X-Signature", signature)
req.add_header("Api-Key", apiKey)
req.add_header("Accept", "application/xml")
req.add_header("Content-Type", "application/xml")
req.add_data(' <availabilityRQ xmlns="http://www.hotelbeds.com/schemas/messages" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ><stay checkIn="2016-12-15" checkOut="2016-12-20"/><occupancies><occupancy rooms="1" adults="1" children="0"/></occupancies><geolocation longitude="-80.265323" latitude="26.131510" radius="10" unit="km"/></availabilityRQ>')
# Reading response and print-out
file = minidom.parse(urllib2.urlopen(req))
hotels = file.getElementsByTagName("hotel")
lat = [items.attributes['latitude'].value for items in hotels]
lon = [items.attributes['longitude'].value for items in hotels]
print lat + lon
except urllib2.HTTPError, e:
# Reading body of response
httpResonponse = e.read()
print "%s, reason: %s " % (str(e), httpResonponse)
except urllib2.URLError, e:
print "Client error: %s" % e.reason
except Exception, e:
print "General exception: %s " % str(e)
</code></pre>
<p>MY OUTPUT RIGHT NOW:</p>
<p>[u'26.144224', u'26.122569', u'26.11437', u'26.1243414605478', u'26.119195', u'26.1942424979814', u'26.145488', u'26.1632044819114', u'26.194145', u'26.1457688280936', u'26.1868547339183', u'26.1037652256159', u'26.090442389015', u'26.187242', u'-80.325579', u'-80.251829', u'-80.25315', u'-80.2564349700697', u'-80.262738', u'-80.2919112076052', u'-80.258274', u'-80.2584546734579', u'-80.261252', u'-80.2576325763948', u'-80.1963213016279', u'-80.2630081633106', u'-80.2272565662588', u'-80.20161000000002']</p>
| 1 | 2016-10-12T14:34:12Z | 40,003,333 | <p>You could place the result of your XML file in to an iterable structure like a dictionary.
I've taken your sample xml data and placed it into a file called hotels.xml. </p>
<pre><code>from xml.dom import minidom
hotels_position = {}
dom = minidom.parse('hotels.xml')
hotels = dom.getElementsByTagName("hotel")
for hotel in hotels:
hotel_id = hotel.attributes['code'].value
position = {}
position['latitude'] = hotel.attributes['latitude'].value
position['longitude'] = hotel.attributes['longitude'].value
hotels_position[hotel_id] = position
print hotels_position
</code></pre>
<p>This code outputs the following structure (I added a second hotel)</p>
<pre><code>{u'13272': {'latitude': u'26.137508', 'longitude': u'-80.103438'}, u'13273': {'latitude': u'26.137508', 'longitude': u'-80.103438'}}
</code></pre>
<p>You can now iterate through each hotel in the dictionary.</p>
<pre><code>for hotel in hotels_position:
print("Hotel {} is located at ({},{})".format(hotel,
                                       hotels_position[hotel]['latitude'],
                                       hotels_position[hotel]['longitude']))
</code></pre>
<p>Now that you have your data in an organised structure, your 'logic' will be much easier to write.</p>
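<p>For what it's worth, the same pairing can also be written as a single dict comprehension. A sketch using <code>parseString</code> with a trimmed-down inline sample in place of the hotels.xml file (most attributes omitted):</p>

```python
from xml.dom import minidom

# Trimmed-down stand-in for the API response (most attributes omitted).
xml = ('<hotels>'
       '<hotel code="13272" latitude="26.137508" longitude="-80.103438"/>'
       '<hotel code="13273" latitude="26.144224" longitude="-80.325579"/>'
       '</hotels>')

dom = minidom.parseString(xml)
# Map each hotel code to its paired latitude/longitude in one pass.
positions = {
    h.attributes['code'].value: {
        'latitude': h.attributes['latitude'].value,
        'longitude': h.attributes['longitude'].value,
    }
    for h in dom.getElementsByTagName('hotel')
}
print(positions['13272'])
```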
| 0 | 2016-10-12T16:09:06Z | [
"python",
"xml",
"api",
"urllib2",
"minidom"
] |
Find and print only the first ORF in fasta | 40,001,347 | <p>How can I find only the first start_codon for each frame? The code below gives me all start_codon positions.</p>
<pre><code>from Bio.SeqRecord import SeqRecord
from Bio import SeqIO
def test(seq, start):
    start_codons = []
    for frame in range(0, 3):
        for i in range(frame, len(seq), 3):
            current_codon = seq[i:i+3]
            if current_codon in start:
                start_codons.append(i)
    return start_codons

f = open("a.fa", "r")
start = ["ATG"]
for record in SeqIO.parse(f, "fasta"):
    seq = record.seq
    name = record.id
    start_codons = test(seq, start)
    print name, start_codons
</code></pre>
| 0 | 2016-10-12T14:34:14Z | 40,001,548 | <p>If you have a DNA string and you want to find the first occurrence of an "ATG" sequence, the easiest is to just do: </p>
<pre><code>DNA = "ACCACACACCATATAATGATATATAGGAAATG"
print(DNA.find("ATG"))
</code></pre>
<p>Prints out <code>15</code>, note that the indexing in python starts from 0</p>
<p>In case you consider nucleotide triplets as well:</p>
<pre><code>DNA = "ACCACACACCATATAATGATATATAGGAAATG"
for i in range(0, len(DNA), 3):
if DNA[i:i+3] == "ATG":
print(i)
break
</code></pre>
<p>Returns <code>15</code> as well.</p>
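<p>Extending the same idea to "the first start codon in <em>each</em> of the three frames", as the question asks: loop over the frame offsets and <code>break</code> out of the inner loop at the first hit. A sketch:</p>

```python
DNA = "ACCACACACCATATAATGATATATAGGAAATG"

first_atg = {}  # frame offset -> index of the first ATG found in that frame
for frame in range(3):
    for i in range(frame, len(DNA), 3):
        if DNA[i:i+3] == "ATG":
            first_atg[frame] = i
            break  # only the first hit per frame is wanted

print(first_atg)  # {0: 15, 2: 29}; frame 1 has no ATG in this example
```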
| 0 | 2016-10-12T14:42:23Z | [
"python"
] |
Integration with Scipy giving incorrect results with negative lower bound | 40,001,402 | <p>I am attempting to calculate integrals between two limits using python/scipy.</p>
<p>I am using online calculators to double check my results (<a href="http://www.wolframalpha.com/widgets/view.jsp?id=8c7e046ce6f4d030f0b386ea5c17b16a" rel="nofollow">http://www.wolframalpha.com/widgets/view.jsp?id=8c7e046ce6f4d030f0b386ea5c17b16a</a>, <a href="http://www.integral-calculator.com/" rel="nofollow">http://www.integral-calculator.com/</a>), and my results disagree when I have certain limits set.</p>
<p>The code used is:</p>
<pre><code>import scipy as sp
import scipy.integrate  # the submodule must be imported for sp.integrate to exist
import numpy as np
def integrand(x):
return np.exp(-0.5*x**2)
def int_test(a,b):
# a and b are the lower and upper bounds of the integration
return sp.integrate.quad(integrand,a,b)
</code></pre>
<p>When setting the limits (a,b) to (-np.inf,1) I get answers that agree (2.10894...);
however, if I set (-np.inf,300) I get an answer of zero.</p>
<p>On further investigation using:</p>
<pre><code>for i in range(50):
print(i,int_test(-np.inf,i))
</code></pre>
<p>I can see that the result goes wrong at i=36.</p>
<p>I was wondering if there was a way to avoid this?</p>
<p>Thanks,</p>
<p>Matt</p>
| 0 | 2016-10-12T14:36:25Z | 40,002,922 | <p>I am guessing this has to do with the infinite bounds. scipy.integrate.quad is a wrapper around quadpack routines.</p>
<p><a href="https://people.sc.fsu.edu/~jburkardt/f_src/quadpack/quadpack.html" rel="nofollow">https://people.sc.fsu.edu/~jburkardt/f_src/quadpack/quadpack.html</a></p>
<p>In the end, these routines choose suitable intervals and try to get the value of the integral through function evaluations and then numerical integrations. This works fine for finite integrals, assuming you know roughly how fine you can make the steps of the function evaluation.</p>
<p>For infinite integrals it depends how well the algorithms choose respective subintervals and how accurately they are computed.</p>
<p>My advice: do NOT use numerical integration software AT ALL if you are interested in accurate values for infinite integrals. </p>
<p>If your problem can be solved analytically, try that or confine yourself to certain bounds.</p>
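<p>That said, if you do stay with <code>quad</code>, a workaround that often helps (no guarantee for arbitrary integrands) is to split the infinite interval at the point where the integrand peaks, so the adaptive sampler cannot overlook the narrow bump:</p>

```python
import numpy as np
from scipy import integrate

def integrand(x):
    return np.exp(-0.5 * x**2)

# A single call over (-inf, 300) can miss the peak at x = 0 entirely.
whole, _ = integrate.quad(integrand, -np.inf, 300)

# Splitting at the peak gives quad two well-behaved sub-problems.
left, _ = integrate.quad(integrand, -np.inf, 0)
right, _ = integrate.quad(integrand, 0, 300)
split = left + right

print(whole, split)  # split is close to sqrt(2*pi) ~= 2.5066
```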
| 1 | 2016-10-12T15:48:11Z | [
"python",
"numpy",
"scipy",
"integration"
] |
Filtering plain models on their relation to a subclass of a django polymorphic model? | 40,001,574 | <p>I have a plain Django model that has a ForeignKey relation to a django-polymorphic model. </p>
<p>Let's call the first <code>PlainModel</code> that has a <code>content</code> ForeignKey field to a polymorphic <code>Content</code> model with subtypes <code>Video</code> and <code>Audio</code> (simplified example).</p>
<p>Now I want to query for all <code>PlainModel</code> instances that refer to a <code>Video</code>.</p>
<p>Problem is all the docs I find are about filtering directly via the polymorphic model itself. So in this example something like <code>Content.objects.instance_of(Video)</code>. But I need <code>PlainModel</code>'s, so it would need to look something like <code>PlainModel.objects.filter(content__instance_of=Video)</code>. I tried a bunch of variations but I cannot find anything that works. </p>
<p>In the docs they use <code>Q(instance_of=ModelB)</code>, but this doesn't work on the relation as <code>Q(content__instance_of=ModelB)</code>. It gives an error like 'Cannot query "x": Must be "y" instance.' even with the translation call, I guess because the PlainModel is not polymorphic aware.</p>
<p>I have a temporary hack that directly filters on the <code>polymorphic_ctype</code> field using regular django filter like <code>PlainModel.objects.filter(content__polymorphic_ctype_id=my_content_type_id)</code>, but this doesn't handle subclasses. Eg: <code>ExtendedVideo</code> would not be found when looking for <code>Video</code>, because it would have a different ContentType id. </p>
<p>I could go solve this and keep a list of allowed subtypes or parse the type hierarchy to get more content types for the filter but that seems to duplicating functionality from django-polymorphic.</p>
 | 1 | 2016-10-12T14:43:27Z | 40,002,929 | <p>You can do this by first getting all the <code>Content</code> instances that are of the <code>Video</code> subtype, and then querying for the <code>PlainModel</code> instances whose foreign key points into that queryset:</p>
<pre><code>content_videos = Content.objects.instance_of(Video)
plain_model_videos = PlainModel.objects.filter(content__in=content_videos)
</code></pre>
<p>Please see <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#in" rel="nofollow">the docs</a> for reference.</p>
| 0 | 2016-10-12T15:48:36Z | [
"python",
"django",
"django-polymorphic"
] |
Python error in sorting a list | 40,001,632 | <p>I simply want to sort a list... and I have a two-parameter lambda.
Here is my simple code:</p>
<pre><code>I.sort(key = lambda x, y: x.finish - y.finish)
</code></pre>
<p>And the interpreter returns this error:</p>
<pre><code>builtins.TypeError: <lambda>() missing 1 required positional argument: 'y'
</code></pre>
 | -1 | 2016-10-12T14:46:03Z | 40,001,673 | <p>You are trying to use the <code>key</code> function as a <code>cmp</code> function (removed in Python 3.x), but don't you simply mean to sort by the "finish" attribute:</p>
<pre><code>I.sort(key=lambda x: x.finish)
</code></pre>
<p>Or, with the <a href="https://docs.python.org/3/library/operator.html#operator.attrgetter" rel="nofollow">"attrgetter"</a>:</p>
<pre><code>from operator import attrgetter
I.sort(key=attrgetter("finish"))
</code></pre>
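<p>If you really do want a two-argument comparison function as written in the question, Python 3 can adapt one with <a href="https://docs.python.org/3/library/functools.html#functools.cmp_to_key" rel="nofollow"><code>functools.cmp_to_key</code></a>. A sketch (the list of objects with a <code>finish</code> attribute is made up for illustration):</p>

```python
from functools import cmp_to_key
from types import SimpleNamespace

# Hypothetical stand-ins for the question's objects with a 'finish' attribute.
I = [SimpleNamespace(finish=3), SimpleNamespace(finish=1), SimpleNamespace(finish=2)]

# cmp_to_key wraps the old-style two-argument comparison into a key function.
I.sort(key=cmp_to_key(lambda x, y: x.finish - y.finish))
print([item.finish for item in I])  # [1, 2, 3]
```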
| 2 | 2016-10-12T14:47:43Z | [
"python",
"sorting"
] |
Spark Connector MongoDB - Python API | 40,001,640 | <p>I'd like to pull data from Mongo with Spark, especially with PySpark.
I have found the official guide from Mongo: <a href="https://docs.mongodb.com/spark-connector/python-api/" rel="nofollow">https://docs.mongodb.com/spark-connector/python-api/</a></p>
<p>I have all prerequisites:</p>
<ul>
<li>Scala 2.11.8</li>
<li>Spark 1.6.2</li>
<li><p>MongoDB 3.0.8 (not on same device where is Spark)</p>
<p>$ pyspark --conf
"spark.mongodb.input.uri=mongodb://mongo1:27019/xxx.xxx?readPreference=primaryPreferred"
--packages org.mongodb.spark:mongo-spark-connector_2.11:1.1.0</p></li>
</ul>
<p>and PySpark showed me this:</p>
<pre><code>Python 3.4.2 (default, Oct 8 2014, 10:45:20)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
Ivy Default Cache set to: /root/.ivy2/cache
The jars for the packages stored in: /root/.ivy2/jars
:: loading settings :: url = jar:file:/usr/local/spark/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.mongodb.spark#mongo-spark-connector_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found org.mongodb.spark#mongo-spark-connector_2.11;1.1.0 in central
found org.mongodb#mongo-java-driver;3.2.2 in central
:: resolution report :: resolve 280ms :: artifacts dl 6ms
:: modules in use:
org.mongodb#mongo-java-driver;3.2.2 from central in [default]
org.mongodb.spark#mongo-spark-connector_2.11;1.1.0 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 2 | 0 | 0 | 0 || 2 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 2 already retrieved (0kB/9ms)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/12 16:35:46 INFO SparkContext: Running Spark version 1.6.2
16/10/12 16:35:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/12 16:35:47 INFO SecurityManager: Changing view acls to: root
16/10/12 16:35:47 INFO SecurityManager: Changing modify acls to: root
16/10/12 16:35:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/10/12 16:35:47 INFO Utils: Successfully started service 'sparkDriver' on port 35485.
16/10/12 16:35:48 INFO Slf4jLogger: Slf4jLogger started
16/10/12 16:35:48 INFO Remoting: Starting remoting
16/10/12 16:35:48 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:39860]
16/10/12 16:35:48 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39860.
16/10/12 16:35:48 INFO SparkEnv: Registering MapOutputTracker
16/10/12 16:35:48 INFO SparkEnv: Registering BlockManagerMaster
16/10/12 16:35:48 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-1e9185bd-fd1a-4d36-8c7e-9b6430e9f5c6
16/10/12 16:35:48 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/10/12 16:35:48 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/12 16:35:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/12 16:35:48 INFO SparkUI: Started SparkUI at http://192.168.28.194:4040
16/10/12 16:35:48 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c1d32422-8241-411f-a751-e01e5f6e2b5c/httpd-d62ed1b8-e4ab-4891-9b61-5f0f5ae7eb6e
16/10/12 16:35:48 INFO HttpServer: Starting HTTP Server
16/10/12 16:35:48 INFO Utils: Successfully started service 'HTTP file server' on port 34716.
16/10/12 16:35:48 INFO SparkContext: Added JAR file:/root/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar at http://192.168.28.194:34716/jars/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar with timestamp 1476282948892
16/10/12 16:35:48 INFO SparkContext: Added JAR file:/root/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar at http://192.168.28.194:34716/jars/org.mongodb_mongo-java-driver-3.2.2.jar with timestamp 1476282948898
16/10/12 16:35:49 INFO Utils: Copying /root/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar to /tmp/spark-c1d32422-8241-411f-a751-e01e5f6e2b5c/userFiles-549541b8-aaba-4444-b2eb-438baa7e82e8/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar
16/10/12 16:35:49 INFO SparkContext: Added file file:/root/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar at file:/root/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-1.1.0.jar with timestamp 1476282949018
16/10/12 16:35:49 INFO Utils: Copying /root/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar to /tmp/spark-c1d32422-8241-411f-a751-e01e5f6e2b5c/userFiles-549541b8-aaba-4444-b2eb-438baa7e82e8/org.mongodb_mongo-java-driver-3.2.2.jar
16/10/12 16:35:49 INFO SparkContext: Added file file:/root/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar at file:/root/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar with timestamp 1476282949029
16/10/12 16:35:49 INFO Executor: Starting executor ID driver on host localhost
16/10/12 16:35:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43448.
16/10/12 16:35:49 INFO NettyBlockTransferService: Server created on 43448
16/10/12 16:35:49 INFO BlockManagerMaster: Trying to register BlockManager
16/10/12 16:35:49 INFO BlockManagerMasterEndpoint: Registering block manager localhost:43448 with 511.1 MB RAM, BlockManagerId(driver, localhost, 43448)
16/10/12 16:35:49 INFO BlockManagerMaster: Registered BlockManager
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
Using Python version 3.4.2 (default, Oct 8 2014 10:45:20)
SparkContext available as sc, HiveContext available as sqlContext.
</code></pre>
<p>then i put this code:</p>
<pre><code>df = sqlContext.read.format("com.mongodb.spark.sql.DefaultSource").load()
</code></pre>
<p>and there was this:</p>
<pre><code>16/10/12 16:40:51 INFO HiveContext: Initializing execution hive, version 1.2.1
16/10/12 16:40:51 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
16/10/12 16:40:51 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/10/12 16:40:51 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/10/12 16:40:51 INFO ObjectStore: ObjectStore, initialize called
16/10/12 16:40:51 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/10/12 16:40:51 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/10/12 16:40:51 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/12 16:40:51 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/12 16:40:53 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/10/12 16:40:53 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:53 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:54 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/10/12 16:40:54 INFO ObjectStore: Initialized ObjectStore
16/10/12 16:40:55 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/10/12 16:40:55 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/10/12 16:40:55 INFO HiveMetaStore: Added admin role in metastore
16/10/12 16:40:55 INFO HiveMetaStore: Added public role in metastore
16/10/12 16:40:55 INFO HiveMetaStore: No user is added in admin role, since config is empty
16/10/12 16:40:55 INFO HiveMetaStore: 0: get_all_databases
16/10/12 16:40:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
16/10/12 16:40:55 INFO HiveMetaStore: 0: get_functions: db=default pat=*
16/10/12 16:40:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
16/10/12 16:40:55 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:55 INFO SessionState: Created local directory: /tmp/8733297b-e0d2-49cf-8557-62c8c4e7cc4a_resources
16/10/12 16:40:55 INFO SessionState: Created HDFS directory: /tmp/hive/root/8733297b-e0d2-49cf-8557-62c8c4e7cc4a
16/10/12 16:40:55 INFO SessionState: Created local directory: /tmp/root/8733297b-e0d2-49cf-8557-62c8c4e7cc4a
16/10/12 16:40:55 INFO SessionState: Created HDFS directory: /tmp/hive/root/8733297b-e0d2-49cf-8557-62c8c4e7cc4a/_tmp_space.db
16/10/12 16:40:55 INFO HiveContext: default warehouse location is /user/hive/warehouse
16/10/12 16:40:55 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/10/12 16:40:55 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
16/10/12 16:40:55 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/10/12 16:40:56 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/10/12 16:40:56 INFO ObjectStore: ObjectStore, initialize called
16/10/12 16:40:56 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/10/12 16:40:56 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/10/12 16:40:56 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/12 16:40:56 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/12 16:40:57 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/10/12 16:40:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:59 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/10/12 16:40:59 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/10/12 16:40:59 INFO ObjectStore: Initialized ObjectStore
16/10/12 16:40:59 INFO HiveMetaStore: Added admin role in metastore
16/10/12 16:40:59 INFO HiveMetaStore: Added public role in metastore
16/10/12 16:40:59 INFO HiveMetaStore: No user is added in admin role, since config is empty
16/10/12 16:40:59 INFO HiveMetaStore: 0: get_all_databases
16/10/12 16:40:59 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
16/10/12 16:40:59 INFO HiveMetaStore: 0: get_functions: db=default pat=*
16/10/12 16:40:59 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
16/10/12 16:40:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/10/12 16:40:59 INFO SessionState: Created local directory: /tmp/cc4f12a5-e5b2-4a22-a240-04e1ca3727ae_resources
16/10/12 16:40:59 INFO SessionState: Created HDFS directory: /tmp/hive/root/cc4f12a5-e5b2-4a22-a240-04e1ca3727ae
16/10/12 16:40:59 INFO SessionState: Created local directory: /tmp/root/cc4f12a5-e5b2-4a22-a240-04e1ca3727ae
16/10/12 16:40:59 INFO SessionState: Created HDFS directory: /tmp/hive/root/cc4f12a5-e5b2-4a22-a240-04e1ca3727ae/_tmp_space.db
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
return self._df(self._jreader.load())
File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o24.load.
: java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.mongodb.spark.config.MongoCompanionConfig$class.getOptionsFromConf(MongoCompanionConfig.scala:209)
at com.mongodb.spark.config.ReadConfig$.getOptionsFromConf(ReadConfig.scala:39)
at com.mongodb.spark.config.MongoCompanionConfig$class.apply(MongoCompanionConfig.scala:101)
at com.mongodb.spark.config.ReadConfig$.apply(ReadConfig.scala:39)
at com.mongodb.spark.sql.DefaultSource.createRelation(DefaultSource.scala:67)
at com.mongodb.spark.sql.DefaultSource.createRelation(DefaultSource.scala:50)
at com.mongodb.spark.sql.DefaultSource.createRelation(DefaultSource.scala:36)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
</code></pre>
<p>I have tried a lot of possible options to pull data from Mongo via Spark. Any tips?</p>
| 0 | 2016-10-12T14:46:38Z | 40,002,940 | <p>It looks like an error I'd expect to see if I were using code compiled in a different version of Scala. Have you tried running it with <code>--packages org.mongodb.spark:mongo-spark-connector_2.10:1.1.0</code>?</p>
<p>By default, Spark 1.6.x is compiled against Scala 2.10, and you have to manually build it for Scala 2.11 like so:</p>
<pre><code>./dev/change-scala-version.sh 2.11
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
</code></pre>
| 1 | 2016-10-12T15:49:03Z | [
"python",
"mongodb",
"scala",
"apache-spark",
"pyspark"
] |
Get a random sample of a dict | 40,001,646 | <p>I'm working with a big dictionary and for some reason I also need to work on small random samples from that dictionary. How can I get this small sample (for example of length 2)? </p>
<p>Here is a toy-model:</p>
<pre><code>dy={'a':1, 'b':2, 'c':3, 'd':4, 'e':5}
</code></pre>
<p>I need to perform some task on dy which involves all the entries. Let us say, to simplify, I need to sum together all the values:</p>
<pre><code>s=0
for key in dy.keys():
    s=s+dy[key]
</code></pre>
<p>Now, I also need to perform the same task on a random sample of dy; for that I need a random sample of the keys of dy. The simple solution I can imagine is</p>
<pre><code>sam=list(dy.keys())[:1]
</code></pre>
<p>In that way I have a list of two keys of the dictionary which are somehow random. So, going back to my task, the only change I need in the code is:</p>
<pre><code>s=0
for key in sam:
    s=s+dy[key]
</code></pre>
<p>The point is that I do not fully understand how <code>dy.keys()</code> is constructed, so I can't foresee any future issues.</p>
| 0 | 2016-10-12T14:46:49Z | 40,002,638 | <p>Given your example of:</p>
<pre><code>dy = {'a':1, 'b':2, 'c':3, 'd':4, 'e':5}
</code></pre>
<p>Then the sum of all the values is more simply put as:</p>
<pre><code>s = sum(dy.values())
</code></pre>
<p>Then if it's not memory prohibitive, you can sample using:</p>
<pre><code>import random
values = list(dy.values())
s = sum(random.sample(values, 2))
</code></pre>
<p>Or, since <code>random.sample</code> can take a <code>set</code>-like object, then:</p>
<pre><code>from operator import itemgetter
import random
s = sum(itemgetter(*random.sample(dy.keys(), 2))(dy))
</code></pre>
<p>Or just use:</p>
<pre><code>s = sum(dy[k] for k in random.sample(dy.keys(), 2))
</code></pre>
<p>An alternative is to use a <code>heapq</code>, eg:</p>
<pre><code>import heapq
import random
s = sum(heapq.nlargest(2, dy.values(), key=lambda L: random.random()))
</code></pre>
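<p>Putting one of these together as a runnable sketch (the seed is my addition, purely so the demo is reproducible; drop it in real code):</p>

```python
import random

dy = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}

random.seed(0)  # for a repeatable demo only
sample_keys = random.sample(list(dy), 2)  # materialise keys first
s = sum(dy[k] for k in sample_keys)

print(sample_keys, s)
```

<p>Note that <code>list(dy)</code> materialises the keys before sampling, which sidesteps any question of how <code>dy.keys()</code> behaves across Python versions.</p>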
| 2 | 2016-10-12T15:33:11Z | [
"python",
"dictionary",
"random",
"python-3.4"
] |
Mac OS X Cannot install packages with pip anymore | 40,001,729 | <p>I have completely screwed up my Python environment. My first big mistake was always installing packages using sudo and not using virtual environments. I'm not exactly sure what happened, but at some point I was not able to install certain packages anymore, possibly because of some dependency problems.</p>
<p>I decided to remove all the packages I installed with pip, unfortunately this did not help. Then I decided to uninstall pip itself and I actually tried to uninstall python myself.</p>
<p>Later on I found that the problem was probably caused because I had multiple python versions installed on my machine. </p>
<p>When I was trying to uninstall Python, I deleted the 2.7 framework version that was located in <code>/Library/Frameworks/Python.framework</code></p>
<p>In <code>/Library/Frameworks/Python.framework/Versions</code>, I only have a folder for 3.3.
In <code>/usr/local/bin</code> there is no python2.7, only python3.
In <code>/System/Library/Frameworks/Python.framework/Versions/</code> there is still a 2.7 folder.</p>
<p>If I call <code>which python</code> I get: <code>/usr/bin/python</code></p>
<p>This is where I am right now: The default python version is 2.7.5, I can use packages that I installed system wide using sudo, if I use the default python version. But some packages are not completely installed, e.g. scikit-learn is missing all modules and scipy is also missing some functionality. E.g. this is what I get when I import sklearn:</p>
<pre><code>In [2]: dir(sklearn)
Out[2]:
['__SKLEARN_SETUP__',
'__all__',
'__builtins__',
'__check_build',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__',
'__version__',
'base',
'clone',
'exceptions',
'externals',
're',
'setup_module',
'sys',
'utils',
'warnings']
</code></pre>
<p>This is the path where Python looks for packages:</p>
<pre><code>['',
'/usr/local/bin',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload',
'/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC',
'/Library/Python/2.7/site-packages',
'/Library/Python/2.7/site-packages/setuptools-28.3.0-py2.7.egg',
'/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg',
'/Library/Python/2.7/site-packages/IPython/extensions',
'/Users/bastiannaber/.ipython']
</code></pre>
<p>As I said before the <code>/System/Library/Frameworks/Python.framework/Versions/2.7</code> does not exist anymore on my system. <code>/Library/Python/2.7/site-packages</code> does still exist.</p>
<p>Can anybody tell me which python version I am using right now? And how can I fix these problems? Am I not supposed to have the default version in <code>/System/Library/Frameworks/Python.framework/Versions</code>. Can somebody please tell me how to fix my python environment?</p>
| 0 | 2016-10-12T14:50:54Z | 40,037,767 | <p>The system Python is in <code>/usr/bin/</code>, not <code>/usr/local/bin/</code>, as you noted when you ran <code>which python</code>. Type <code>python --version</code> and it should tell you which version you're running.</p>
<p>In regards to your Python environment, I would recommend you ditch using the system environment, since you've accidentally removed Python. I'd suggest you install <a href="http://brew.sh" rel="nofollow">Homebrew</a>, then <code>brew install python</code>. That will also give you <code>pip</code>, which will exist separately from your system python and pip, as a fresh version.</p>
| 2 | 2016-10-14T07:49:28Z | [
"python",
"osx",
"python-2.7",
"pip"
] |
Python/Numpy: problems with type conversion in vectorize and item | 40,001,868 | <p>I am writing a function to extract values from datetimes over arrays. I want the function to operate on a Pandas DataFrame or a numpy ndarray. </p>
<p>The values should be returned in the same way as the Python datetime properties, e.g. </p>
<pre><code>from datetime import datetime
dt = datetime(2016, 10, 12, 13)
dt.year
=> 2016
dt.second
=> 0
</code></pre>
<p>For a DataFrame this is reasonably easy to handle using <code>applymap()</code> (although there may well be a better way). I tried the same approach for numpy ndarrays using <code>vectorize()</code>, and I'm running into problems. Instead of the values I was expecting, I end up with very large integers, sometimes negative.</p>
<p>This was pretty baffling at first, but I figured out what is happening: the vectorized function is using <code>item</code> instead of <code>__get__</code> to get the values out of the ndarray. This seems to automatically convert each <code>datetime64</code> object to a <code>long</code>:</p>
<pre><code>nd[1][0]
=> numpy.datetime64('1986-01-15T12:00:00.000000000')
nd[1].item()
=> 506174400000000000L
</code></pre>
<p>The long seems to be the number of nanoseconds since epoch (1970-01-01T00:00:00). Somewhere along the line the values are converted to integers and they overflow, hence the negative numbers.</p>
<p>So that's the problem. Please can someone help me fix it? The only thing I can think of is doing the conversion manually, but this would effectively mean reimplementing a chunk of the <code>datetime</code> module.</p>
<p>Is there some alternative to <code>vectorize</code> that doesn't use <code>item()</code>?</p>
<p>Thanks!</p>
<p>Minimal code example:</p>
<pre><code>## DataFrame works fine
import pandas as pd
from datetime import datetime
df = pd.DataFrame({'dts': [datetime(1970, 1, 1, 1), datetime(1986, 1, 15, 12),
datetime(2016, 7, 15, 23)]})
exp = pd.DataFrame({'dts': [1, 15, 15]})
df_func = lambda x: x.day
out = df.applymap(df_func)
assert out.equals(exp)
## numpy ndarray is more difficult
from numpy import datetime64 as dt64, timedelta64 as td64, vectorize # for brevity
# The unary function is a little more complex, especially for days and months where the minimum value is 1
nd_func = lambda x: int((dt64(x, 'D') - dt64(x, 'M') + td64(1, 'D')) / td64(1, 'D'))
nd = df.as_matrix()
exp = exp.as_matrix()
=> array([[ 1],
[15],
[15]])
# The function works as expected on a single element...
assert nd_func(nd[1][0]) == 15
# ...but not on an ndarray
nd_vect = vectorize(nd_func)
out = nd_vect(nd)
=> array([[ -105972749999999],
[ 3546551532709551616],
[-6338201187830896640]])
</code></pre>
| 0 | 2016-10-12T14:57:32Z | 40,004,667 | <p>In Py3 the error is <code>OverflowError: Python int too large to convert to C long</code>.</p>
<pre><code>In [215]: f=np.vectorize(nd_func,otypes=[int])
In [216]: f(dts)
...
OverflowError: Python int too large to convert to C long
</code></pre>
<p>but if I change the datetime units, it runs ok</p>
<pre><code>In [217]: f(dts.astype('datetime64[ms]'))
Out[217]: array([ 1, 15, 15])
</code></pre>
<p>We could dig into this in more depth, but this seems to be the simplest solution.</p>
<p>Keep in mind that <code>vectorize</code> is a convenience function; it makes iterating over multidimensions easier. But for a 1d array it is basically</p>
<pre><code>np.array([nd_func(i) for i in dts])
</code></pre>
<p>But note that we don't have to use iteration:</p>
<pre><code>In [227]: (dts.astype('datetime64[D]') - dts.astype('datetime64[M]') + td64(1,'D')) / td64(1,'D').astype(int)
Out[227]: array([ 1, 15, 15], dtype='timedelta64[D]')
</code></pre>
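<p>For reference, a self-contained sketch of that iteration-free day-of-month computation (the array construction here is mine; the arithmetic follows the answer):</p>

```python
import numpy as np

dts = np.array(['1970-01-01T01:00', '1986-01-15T12:00', '2016-07-15T23:00'],
               dtype='datetime64[ns]')

# days elapsed since the start of the month; +1 so the first of the month is 1, not 0
delta = dts.astype('datetime64[D]') - dts.astype('datetime64[M]')
days = (delta / np.timedelta64(1, 'D')).astype(int) + 1

print(days)  # [ 1 15 15]
```

<p>Because the subtraction happens on whole arrays of <code>datetime64</code> values, no per-element Python conversion (and hence no overflow) is involved.</p>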
| 2 | 2016-10-12T17:20:56Z | [
"python",
"datetime",
"numpy"
] |
Python 2.7 function with keyword arguments in superclass: how to access from subclass? | 40,001,876 | <p>Are keyword arguments handled somehow specially in inherited methods? </p>
<p>When I call an instance method with keyword arguments from the class it's defined in, all goes well. When I call it from a subclass, Python complains about too many parameters passed. </p>
<p>Here's the example. The "simple" methods don't use keyword args, and inheritance works fine (even for me :-) The "KW" methods use keyword args, and inheritance doesn't work anymore... at least I can't see the difference. </p>
<pre><code>class aClass(object):
    def aSimpleMethod(self, show):
        print('show: %s' % show)
    def aKWMethod(self, **kwargs):
        for kw in kwargs:
            print('%s: %s' % (kw, kwargs[kw]))

class aSubClass(aClass):
    def anotherSimpleMethod(self, show):
        self.aSimpleMethod(show)
    def anotherKWMethod(self, **kwargs):
        self.aKWMethod(kwargs)
aClass().aSimpleMethod('this')
aSubClass().anotherSimpleMethod('that')
aClass().aKWMethod(show='this')
</code></pre>
<p>prints <code>this</code>, <code>that</code>, and <code>this</code>, as I expected. But</p>
<pre><code>aSubClass().anotherKWMethod(show='that')
</code></pre>
<p>throws:</p>
<pre><code>TypeError: aKWMethod() takes exactly 1 argument (2 given)
</code></pre>
| 2 | 2016-10-12T14:57:48Z | 40,001,925 | <p>You need to use **kwargs when you call the method; it takes no positional args, just <em>keyword arguments</em>:</p>
<pre><code> self.aKWMethod(**kwargs)
</code></pre>
<p>Once you do, it works fine:</p>
<pre><code>In [2]: aClass().aSimpleMethod('this')
...: aSubClass().anotherSimpleMethod('that')
...: aClass().aKWMethod(show='this')
...:
show: this
show: that
show: this
</code></pre>
| 1 | 2016-10-12T15:00:10Z | [
"python",
"python-2.7",
"inheritance",
"variadic"
] |
Python 2.7 function with keyword arguments in superclass: how to access from subclass? | 40,001,876 | <p>Are keyword arguments handled somehow specially in inherited methods? </p>
<p>When I call an instance method with keyword arguments from the class it's defined in, all goes well. When I call it from a subclass, Python complains about too many parameters passed. </p>
<p>Here's the example. The "simple" methods don't use keyword args, and inheritance works fine (even for me :-) The "KW" methods use keyword args, and inheritance doesn't work anymore... at least I can't see the difference. </p>
<pre><code>class aClass(object):
    def aSimpleMethod(self, show):
        print('show: %s' % show)
    def aKWMethod(self, **kwargs):
        for kw in kwargs:
            print('%s: %s' % (kw, kwargs[kw]))

class aSubClass(aClass):
    def anotherSimpleMethod(self, show):
        self.aSimpleMethod(show)
    def anotherKWMethod(self, **kwargs):
        self.aKWMethod(kwargs)
aClass().aSimpleMethod('this')
aSubClass().anotherSimpleMethod('that')
aClass().aKWMethod(show='this')
</code></pre>
<p>prints <code>this</code>, <code>that</code>, and <code>this</code>, as I expected. But</p>
<pre><code>aSubClass().anotherKWMethod(show='that')
</code></pre>
<p>throws:</p>
<pre><code>TypeError: aKWMethod() takes exactly 1 argument (2 given)
</code></pre>
| 2 | 2016-10-12T14:57:48Z | 40,001,931 | <p>When you do <code>self.aKWMethod(kwargs)</code>, you're passing the whole dict of keyword arguments as a single positional argument to the (superclass's) <code>aKWMethod</code> method.</p>
<p>Change that to <code>self.aKWMethod(**kwargs)</code> and it should work as expected.</p>
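<p>For completeness, a minimal runnable version of the corrected classes (trimmed to the keyword-argument methods, and returning the formatted lines instead of printing them so the result is easy to check):</p>

```python
class aClass(object):
    def aKWMethod(self, **kwargs):
        # sorted() only so the output order is deterministic
        return ['%s: %s' % (kw, kwargs[kw]) for kw in sorted(kwargs)]

class aSubClass(aClass):
    def anotherKWMethod(self, **kwargs):
        # ** re-unpacks the dict into keyword arguments before forwarding
        return self.aKWMethod(**kwargs)

lines = aSubClass().anotherKWMethod(show='that', also='this')
print(lines)  # ['also: this', 'show: that']
```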
| 1 | 2016-10-12T15:00:18Z | [
"python",
"python-2.7",
"inheritance",
"variadic"
] |
Python 2.7 function with keyword arguments in superclass: how to access from subclass? | 40,001,876 | <p>Are keyword arguments handled somehow specially in inherited methods? </p>
<p>When I call an instance method with keyword arguments from the class it's defined in, all goes well. When I call it from a subclass, Python complains about too many parameters passed. </p>
<p>Here's the example. The "simple" methods don't use keyword args, and inheritance works fine (even for me :-) The "KW" methods use keyword args, and inheritance doesn't work anymore... at least I can't see the difference. </p>
<pre><code>class aClass(object):
    def aSimpleMethod(self, show):
        print('show: %s' % show)
    def aKWMethod(self, **kwargs):
        for kw in kwargs:
            print('%s: %s' % (kw, kwargs[kw]))

class aSubClass(aClass):
    def anotherSimpleMethod(self, show):
        self.aSimpleMethod(show)
    def anotherKWMethod(self, **kwargs):
        self.aKWMethod(kwargs)
aClass().aSimpleMethod('this')
aSubClass().anotherSimpleMethod('that')
aClass().aKWMethod(show='this')
</code></pre>
<p>prints <code>this</code>, <code>that</code>, and <code>this</code>, as I expected. But</p>
<pre><code>aSubClass().anotherKWMethod(show='that')
</code></pre>
<p>throws:</p>
<pre><code>TypeError: aKWMethod() takes exactly 1 argument (2 given)
</code></pre>
| 2 | 2016-10-12T14:57:48Z | 40,002,053 | <p>Just to demonstrate in the simplest terms possible what is going wrong, note that this error has nothing to do with inheritance. Consider the following case:</p>
<pre><code>>>> def f(**kwargs):
... pass
...
>>> f(a='test') # works fine!
>>> f('test')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() takes 0 positional arguments but 1 was given
</code></pre>
<p>The point is that <code>**kwargs</code> <em>only</em> allows keyword arguments and cannot be replaced by a positional argument. </p>
| 1 | 2016-10-12T15:06:07Z | [
"python",
"python-2.7",
"inheritance",
"variadic"
] |
Cannot get psycopg2 to work, but installed correctly. Mac OS | 40,001,881 | <p>I'm trying to work with psycopg2 natively on Mac. It installs fine, with no errors at least, but when I import it I get an error message.</p>
<p>I've seen dozens of threads with similar issues and solutions that vary massively and just seem excessive for such a common module. </p>
<p>Can anyone help?</p>
<pre><code>Last login: Wed Oct 12 15:47:24 on console
Gurmokhs-MBP:~ Gurmokh$ pip install psycopg2
Requirement already satisfied (use --upgrade to upgrade): psycopg2 in /Library/Python/2.7/site-packages
Gurmokhs-MBP:~ Gurmokh$ python
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/psycopg2-2.6.2-py2.7-macosx-10.6-intel.egg/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site -packages/psycopg2-2.6.2-py2.7-macosx-10.6- intel.egg/psycopg2/_psycopg.so, 2): Library not loaded: libssl.1.0.0.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/psycopg2-2.6.2-py2.7-macosx-10.6-intel.egg/psycopg2/_psycopg.so
Reason: image not found
>>> ^D
Gurmokhs-MBP:~ Gurmokh$
</code></pre>
<p>I can see some copies floating around from different applications.
I'm assuming I could copy one of these. The above message tells me what is referencing this file, but it does not tell me where it expects to find it. If I knew where it should go, I would try this.</p>
<pre><code>bash-3.2# find . -name "libssl.1.0.0.dylib"
./Library/Application Support/Fitbit Connect/libssl.1.0.0.dylib
./Library/PostgreSQL/9.5/lib/libssl.1.0.0.dylib
./Library/PostgreSQL/9.5/pgAdmin3.app/Contents/Frameworks/libssl.1.0.0.dylib
./Users/Gurmokh/.Trash/Navicat for PostgreSQL.app/Contents/Frameworks/libssl.1.0.0.dylib
</code></pre>
| 0 | 2016-10-12T14:58:07Z | 40,032,074 | <p>Thanks guys. </p>
<p>@maxymoo I went with your suggestion. I have installed anaconda2. The install updated my path to include /anaconda/bin. </p>
<p>Then, using the Navigator, I installed psycopg2. Now I am able to use this in the shebang, my scripts execute fine, and I'm able to import this module.</p>
<pre><code>Gurmokhs-MBP:rest Gurmokh$ python
Python 2.7.12 |Anaconda 4.2.0 (x86_64)| (default, Jul 2 2016, 17:43:17)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import psycopg2
if psycopg2.connect("dbname='postgres' user='postgres' host='localhost'"):
... print "connection made"
...
connection made
>>>
</code></pre>
| 0 | 2016-10-13T22:29:09Z | [
"python",
"osx",
"psycopg2"
] |
Python - Access column based on another column value | 40,001,888 | <p>I have the following dataframe in python</p>
<pre><code>+-------+--------+
| Value | Number |
+-------+--------+
| true | 123 |
| false | 234 |
| true | 345 |
| true | 456 |
| false | 567 |
| false | 678 |
| false | 789 |
+-------+--------+
</code></pre>
<p>How do I conduct an operation which returns a list of all the 'Number' values which have Value == TRUE?</p>
<p>The output list expected from the above table is</p>
<pre><code>['123', '345', '456']
</code></pre>
<p>Thanks in advance!</p>
| 0 | 2016-10-12T14:58:24Z | 40,001,910 | <p><code>df.loc[df['Value'],'Number']</code> should work assuming the dtype for 'Value' are real booleans:</p>
<pre><code>In [68]:
df.loc[df['Value'],'Number']
Out[68]:
0 123
2 345
3 456
Name: Number, dtype: int64
</code></pre>
<p>The above uses boolean indexing; here the boolean values act as a mask against the df.</p>
<p>If you want a <code>list</code>:</p>
<pre><code>In [69]:
df.loc[df['Value'],'Number'].tolist()
Out[69]:
[123, 345, 456]
</code></pre>
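<p>A self-contained version, first rebuilding the frame from the question's table (the construction step is mine):</p>

```python
import pandas as pd

df = pd.DataFrame({'Value': [True, False, True, True, False, False, False],
                   'Number': [123, 234, 345, 456, 567, 678, 789]})

# boolean mask on 'Value' selects the matching rows of 'Number'
numbers = df.loc[df['Value'], 'Number'].tolist()
print(numbers)  # [123, 345, 456]
```

<p>If you really need the strings shown in the question's expected output, follow up with <code>[str(n) for n in numbers]</code>.</p>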
| 1 | 2016-10-12T14:59:37Z | [
"python",
"pandas"
] |
Reading named command arguments | 40,001,892 | <p>Can I use <code>argparse</code> to read named command line arguments that do not need to be in a specific order? I browsed through the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">documentation</a> but most of it focused on displaying content based on the arguments provided (such as <code>--h</code>).</p>
<p>Right now, my script reads ordered, unnamed arguments:</p>
<blockquote>
<p>myscript.py foo-val bar-val</p>
</blockquote>
<p>using <code>sys.argv</code>:</p>
<pre><code>foo = sys.argv[1]
bar = sys.argv[2]
</code></pre>
<p>But I would like to change the input so that it is order agnostic by naming arguments:</p>
<blockquote>
<p>myscript.py --bar=bar-val --foo=foo-val </p>
</blockquote>
| 0 | 2016-10-12T14:58:44Z | 40,002,281 | <p>The answer is <strong>yes</strong>. A quick look at <a href="https://docs.python.org/3.6/library/argparse.html?highlight=argparse#module-argparse" rel="nofollow">the argparse documentation</a> would have answered as well.</p>
<p>Here is a very simple example; argparse is able to handle far more specific needs.</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--foo', '-f', help="a random option", type=str)
parser.add_argument('--bar', '-b', help="a more random option", type=int, default=0)
print(parser.format_help())
# usage: test_args_4.py [-h] [--foo FOO] [--bar BAR]
#
# optional arguments:
# -h, --help show this help message and exit
# --foo FOO, -f FOO a random option
# --bar BAR, -b BAR a more random option
args = parser.parse_args("--foo pouet".split())
print(args) # Namespace(bar=0, foo='pouet')
print(args.foo) # pouet
print(args.bar) # 0
</code></pre>
<p>Of course, in a real script, you won't hard-code the command-line options and will call <code>parser.parse_args()</code> (without arguments) instead. It will make argparse take the <code>sys.argv</code> list as command-line arguments.</p>
<p>You will be able to call this script this way:</p>
<pre><code>test_args_4.py -h # prints the help message
test_args_4.py -f pouet # foo="pouet", bar=0 (default value)
test_args_4.py -b 42 # foo=None, bar=42
test_args_4.py -b 77 -f knock # foo="knock", bar=77
</code></pre>
<p>You will discover a lot of other features by reading the doc ;)</p>
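<p>To confirm the original requirement that the order of the named arguments does not matter, the same parser accepts the flags either way round (parsing explicit lists here instead of <code>sys.argv</code> so the snippet can be run anywhere):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--foo')
parser.add_argument('--bar')

# same arguments, opposite order: both parse to the same namespace values
a = parser.parse_args(['--bar=bar-val', '--foo=foo-val'])
b = parser.parse_args(['--foo=foo-val', '--bar=bar-val'])

print(a.foo, a.bar)  # foo-val bar-val
```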
| 0 | 2016-10-12T15:16:54Z | [
"python",
"python-3.x",
"arguments"
] |
Reading named command arguments | 40,001,892 | <p>Can I use <code>argparse</code> to read named command line arguments that do not need to be in a specific order? I browsed through the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">documentation</a> but most of it focused on displaying content based on the arguments provided (such as <code>--h</code>).</p>
<p>Right now, my script reads ordered, unnamed arguments:</p>
<blockquote>
<p>myscript.py foo-val bar-val</p>
</blockquote>
<p>using <code>sys.argv</code>:</p>
<pre><code>foo = sys.argv[1]
bar = sys.argv[2]
</code></pre>
<p>But I would like to change the input so that it is order agnostic by naming arguments:</p>
<blockquote>
<p>myscript.py --bar=bar-val --foo=foo-val </p>
</blockquote>
| 0 | 2016-10-12T14:58:44Z | 40,002,420 | <p>You can use the <a href="https://docs.python.org/2/howto/argparse.html#introducing-optional-arguments" rel="nofollow">Optional Arguments</a> like so:</p>
<pre><code>import argparse, sys
parser=argparse.ArgumentParser()
parser.add_argument('--bar', help='Do the bar option')
parser.add_argument('--foo', help='Foo the program')
args=parser.parse_args()
print args
print sys.argv
</code></pre>
<p>Then if you call it with <code>./prog --bar=bar-val --foo foo-val</code> it prints:</p>
<pre><code>Namespace(bar='bar-val', foo='foo-val')
['Untitled 14.py', '--bar=bar-val', '--foo', 'foo-val']
</code></pre>
<p>Or, if the user wants help argparse builds that too:</p>
<pre><code> $ ./prog -h
usage: Untitled 14.py [-h] [--bar BAR] [--foo FOO]
optional arguments:
-h, --help show this help message and exit
--bar BAR Do the bar option
--foo FOO Foo the program
</code></pre>
| 1 | 2016-10-12T15:22:46Z | [
"python",
"python-3.x",
"arguments"
] |
How to check whether the sum exists for a given path in a tree | 40,002,118 | <p>I want to check whether a given sum exists along any path from the start node to a leaf node.</p>
<p>For example suppose I have startNode is say a 7 and the sumTarget is 15 if the tree is: </p>
<pre><code>      a-7
     /   \
   b-1   e-8
    |
   c-2
    |
   d-9
</code></pre>
<p>Then since 7 +8 equals 15 it would return true</p>
<p>If I have b as the startNode and 12 as the sumTotal then it would also return true because 1 +2 + 9 is 12 starting with b. </p>
<pre><code>class Node {
int value;
Node [] children
}
</code></pre>
<p>I don't think this is right, but I'm not sure what is wrong. </p>
<pre><code>def doesSumExist(startNode, sumTarget, currentSum):
    totalSum = sumTarget
    if startNode is not Null:
        if totalSum + startNode.value == sumTarget:
            return True
        else:
            totalSum += startNode.value
    else:
        Node startNode = doesSumExist(startNode.left, sumTarget, currentSum)
        if startNode is not Null:
            return currentSum
        startNode = doesSumExist(startNode.right, sumTarget,currentSum)
    return False
</code></pre>
| 0 | 2016-10-12T15:09:32Z | 40,003,444 | <p>assuming that your node class looks something like this:</p>
<pre><code>class Node:
    def __init__(self, value = 0, children = None):
        self.value = value
        self.children = [] if children is None else children
</code></pre>
<p>then this method should do the trick:</p>
<pre><code>def doesSumExist(startNode, targetSum, currentSum):
    if startNode is None:
        return False
    currentSum += startNode.value
    if currentSum == targetSum:
        return True
    for child in startNode.children:
        if doesSumExist(child, targetSum, currentSum):
            return True
    return False
</code></pre>
<p>Note that for this Node-class design the None-check of startNode isn't really necessary for the recursion but only for the entry point. So this would probably be better:</p>
<pre><code>def doesSumExist(startNode, targetSum):
    def inner(node, targetSum, currentSum):
        currentSum += node.value
        if currentSum == targetSum:
            return True
        #for people who like to save a few lines
        #return any(inner(child, targetSum, currentSum) for child in node.children)
        for child in node.children:
            if inner(child, targetSum, currentSum):
                return True
        return False
    if startNode is None:
        return False
    return inner(startNode, targetSum, 0)
</code></pre>
<p>Edit:
If you want to know not only whether the sum exists in a path from your start node, but also whether it exists in any sub-path, this should work:</p>
<pre><code>def doesSumExist(startNode, targetSum):
    def inner(node, targetSum, allValues):
        allValues.append(node.value)
        currentSum = 0
        for val in reversed(allValues):
            currentSum += val
            if currentSum == targetSum:
                return True
        for child in node.children:
            if inner(child, targetSum, allValues):
                return True
        allValues.pop()
        return False
    if startNode is None:
        return False
    return inner(startNode, targetSum, [])
</code></pre>
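<p>A quick sanity check of the first version against the trees from the question (the tree shape is inferred from the example sums: <code>a</code> has children <code>b</code> and <code>e</code>, and <code>d</code> hangs below <code>c</code> below <code>b</code>):</p>

```python
class Node:
    def __init__(self, value=0, children=None):
        self.value = value
        self.children = [] if children is None else children

def doesSumExist(startNode, targetSum, currentSum):
    if startNode is None:
        return False
    currentSum += startNode.value
    if currentSum == targetSum:
        return True
    for child in startNode.children:
        if doesSumExist(child, targetSum, currentSum):
            return True
    return False

# a(7) -> b(1), e(8); b -> c(2) -> d(9)
tree = Node(7, [Node(1, [Node(2, [Node(9)])]), Node(8)])
b = tree.children[0]

print(doesSumExist(tree, 15, 0))  # True: 7 + 8
print(doesSumExist(b, 12, 0))     # True: 1 + 2 + 9
print(doesSumExist(tree, 11, 0))  # False
```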
| 1 | 2016-10-12T16:14:49Z | [
"python",
"data-structures",
"binary-tree"
] |
How to check whether the sum exists for a given path in a tree | 40,002,118 | <p>I want to check whether a given sum exists along any path from the start node to a leaf node.</p>
<p>For example suppose I have startNode is say a 7 and the sumTarget is 15 if the tree is: </p>
<pre><code>      a-7
     /   \
   b-1   e-8
    |
   c-2
    |
   d-9
</code></pre>
<p>Then since 7 +8 equals 15 it would return true</p>
<p>If I have b as the startNode and 12 as the sumTotal then it would also return true because 1 +2 + 9 is 12 starting with b. </p>
<pre><code>class Node {
int value;
Node [] children
}
</code></pre>
<p>I don't think this is right, but I'm not sure what is wrong. </p>
<pre><code>def doesSumExist(startNode, sumTarget, currentSum):
    totalSum = sumTarget
    if startNode is not Null:
        if totalSum + startNode.value == sumTarget:
            return True
        else:
            totalSum += startNode.value
    else:
        Node startNode = doesSumExist(startNode.left, sumTarget, currentSum)
        if startNode is not Null:
            return currentSum
        startNode = doesSumExist(startNode.right, sumTarget,currentSum)
    return False
</code></pre>
| 0 | 2016-10-12T15:09:32Z | 40,003,524 | <p>In that case I think what you're searching for is something like the following:</p>
<pre><code>def doesSumExist(startNode, sumTarget, currentSum):
    totalSum = currentSum
    if startNode is not Null:
        if totalSum + startNode.value == sumTarget: #If this node completes the sum
            return True
        else: #if not
            totalSum += startNode.value #increase current sum
            if doesSumExist(startNode.left, sumTarget, totalSum): #recursive starting on the left children
                return True
            elif doesSumExist(startNode.right, sumTarget, totalSum): #recursive starting on the right children
                return True
    return False #if the sum is not present (starting in startNode).
</code></pre>
<p>However, this does not check if any successive combination of nodes contains the sum (the code would be much more complex).</p>
<p>Hope this helps</p>
| 1 | 2016-10-12T16:18:01Z | [
"python",
"data-structures",
"binary-tree"
] |
First selenium program not working | 40,002,159 | <p>I've just loaded <code>Python</code> <code>Selenium</code> into my <code>Ubuntu</code> system and I'm following the Getting Started tutorial on <a href="http://selenium-python.readthedocs.io/getting-started.html#simple-usage" rel="nofollow">ReadTheDocs</a>. When I run this program:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/henry/Documents/Scraper/test-selenium.py", line 4, in <module> driver = webdriver.Firefox()
File "/usr/local/lib/python2.7/distpackages/selenium/webdriver/firefox/webdriver.py", line 80, in __init__self.binary, timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 52, in __init__self.binary.launch_browser(self.profile, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser self._wait_until_connectable(timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 108, in _wait_until_connectable % (self.profile.path))
WebDriverException: Message: Can't load the profile. Profile Dir: /tmp/tmp8xr2V3 If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
| 0 | 2016-10-12T15:11:10Z | 40,002,396 | <p>Looks like there's a problem with the webdriver - do you have Firefox installed on your RasPi?</p>
<p>(If not, this might help: <a href="https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=150438" rel="nofollow">https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=150438</a>)</p>
| 0 | 2016-10-12T15:21:36Z | [
"python",
"python-2.7",
"selenium"
] |
First selenium program not working | 40,002,159 | <p>I've just loaded <code>Python</code> <code>Selenium</code> into my <code>Ubuntu</code> system and I'm following the Getting Started tutorial on <a href="http://selenium-python.readthedocs.io/getting-started.html#simple-usage" rel="nofollow">ReadTheDocs</a>. When I run this program:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/henry/Documents/Scraper/test-selenium.py", line 4, in <module> driver = webdriver.Firefox()
File "/usr/local/lib/python2.7/distpackages/selenium/webdriver/firefox/webdriver.py", line 80, in __init__self.binary, timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 52, in __init__self.binary.launch_browser(self.profile, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser self._wait_until_connectable(timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 108, in _wait_until_connectable % (self.profile.path))
WebDriverException: Message: Can't load the profile. Profile Dir: /tmp/tmp8xr2V3 If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
| 0 | 2016-10-12T15:11:10Z | 40,002,559 | <p>I think you are using selenium 2.53.6</p>
<p>The issue is compatibility of firefox with selenium, since firefox>= 48 need Gecko Driver(<a href="https://github.com/mozilla/geckodriver" rel="nofollow">https://github.com/mozilla/geckodriver</a>) to run testcases on firefox. or you can downgrade the firefox to 46..from this link <a href="https://ftp.mozilla.org/pub/firefox/releases/46.0.1/" rel="nofollow">https://ftp.mozilla.org/pub/firefox/releases/46.0.1/</a> </p>
| 0 | 2016-10-12T15:29:51Z | [
"python",
"python-2.7",
"selenium"
] |
Python nested dictionary with SQL insert | 40,002,162 | <p>I have generated a very large dictionary after processing an XML file, and I am looking to extract columns and values from this dictionary and insert them into my MySQL database table.</p>
<p>I am using Python 3.</p>
<p>The dictionary is nested; here's a simplistic example of what I have:</p>
<pre><code>d = {'Test1': {'TestID': 'first', 'Dev_Type': 'this device', 'Version': 'v1_0', 'Address': 'some Address'},
     'Test2': {'TestID': 'second', 'Dev_Type': 'that device', 'Version': 'v1_0', 'Address': 'other Address'},
     'Test3': {'TestID': 'third', 'Dev_Type': 'other device', 'Version': 'v1_0', 'Address': 'another Address'}
    }
</code></pre>
<p>Essentially I want to iterate over each primary key in this dictionary (e.g. Test1, Test2, Test3) and extract the secondary keys as a column name tuple and the associated secondary key values as a values tuple, a bit like this:</p>
<pre><code>cols = ('TestID','Dev_Type','Version','Address')
vals = ('first','this device','v1_0','some Address')
</code></pre>
<p>On iterating over each primary key I will add the two tuples to my mySQL table using this command:</p>
<pre><code>sql = "INSERT INTO Parameters ({0}) VALUES ({1})".format(', '.join(cols), ', '.join(['%s'] * len(cols)))
try:
    cursor.execute(sql, vals)
except Exception as e:
    print(e)
    pass
</code></pre>
<p>Then repeat the process on the next primary key ('Test2').</p>
<p>I have made an initial attempt, but have hard coded the Primary key in this instance:</p>
<pre><code>for k, v in d:
    # Missing appropriate method here
    cols = tuple(d['Test1'].keys())
    vals = tuple(d['Test1'].values())
    sql = "INSERT INTO Parameters ({0}) VALUES ({1})".format(', '.join(cols), ', '.join(['%s'] * len(cols)))
    try:
        cursor.execute(sql, vals)
    except Exception as e:
        pass
connection.close()
return
</code></pre>
| 0 | 2016-10-12T15:11:18Z | 40,002,349 | <p>You can iterate over <code>d.values()</code> and use the <code>.keys()</code> and <code>.values()</code> methods on the nested dictionaries to get the columns and values:</p>
<pre><code>for v in d.values():
    cols = v.keys()
    vals = v.values()
    sql = "INSERT INTO Parameters ({}) VALUES ({})".format(
        ', '.join(cols),
        ', '.join(['%s'] * len(cols)))
    try:
        cursor.execute(sql, vals)
    except Exception as e:
        pass
</code></pre>
<p>Note that in Python 3 <a href="https://docs.python.org/3/library/stdtypes.html#dict.keys" rel="nofollow"><code>dict.keys()</code></a> and <a href="https://docs.python.org/3/library/stdtypes.html#dict.values" rel="nofollow"><code>dict.values()</code></a> return <em>views</em> of the dictionaryâs keys and values (unlike lists in Python 2).</p>
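A quick illustration of that view behaviour, using made-up data in the shape of the question's nested dicts:

```python
# One of the nested dicts from the question's structure (made-up data).
d = {'TestID': 'first', 'Dev_Type': 'this device'}

cols = d.keys()    # a view, not a list, on Python 3
vals = d.values()

# str.join() accepts the view directly:
print(', '.join(cols))   # TestID, Dev_Type

# and list() materializes it when an actual list is needed:
print(list(vals))        # ['first', 'this device']
```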
| 2 | 2016-10-12T15:19:47Z | [
"python",
"python-3.x",
"dictionary"
] |
Python nested dictionary with SQL insert | 40,002,162 | <p>I have generated a very large dictionary after processing an XML file, and I am looking to extract columns and values from this dictionary and insert them into my MySQL database table.</p>
<p>I am using Python 3.</p>
<p>The dictionary is nested; here's a simplistic example of what I have:</p>
<pre><code>d = {'Test1': {'TestID': 'first', 'Dev_Type': 'this device', 'Version': 'v1_0', 'Address': 'some Address'},
     'Test2': {'TestID': 'second', 'Dev_Type': 'that device', 'Version': 'v1_0', 'Address': 'other Address'},
     'Test3': {'TestID': 'third', 'Dev_Type': 'other device', 'Version': 'v1_0', 'Address': 'another Address'}
    }
</code></pre>
<p>Essentially I want to iterate over each primary key in this dictionary (e.g. Test1, Test2, Test3) and extract the secondary keys as a column name tuple and the associated secondary key values as a values tuple, a bit like this:</p>
<pre><code>cols = ('TestID','Dev_Type','Version','Address')
vals = ('first','this device','v1_0','some Address')
</code></pre>
<p>On iterating over each primary key I will add the two tuples to my mySQL table using this command:</p>
<pre><code>sql = "INSERT INTO Parameters ({0}) VALUES ({1})".format(', '.join(cols), ', '.join(['%s'] * len(cols)))
try:
    cursor.execute(sql, vals)
except Exception as e:
    print(e)
    pass
</code></pre>
<p>Then repeat the process on the next primary key ('Test2').</p>
<p>I have made an initial attempt, but have hard coded the Primary key in this instance:</p>
<pre><code>for k, v in d:
    # Missing appropriate method here
    cols = tuple(d['Test1'].keys())
    vals = tuple(d['Test1'].values())
    sql = "INSERT INTO Parameters ({0}) VALUES ({1})".format(', '.join(cols), ', '.join(['%s'] * len(cols)))
    try:
        cursor.execute(sql, vals)
    except Exception as e:
        pass
connection.close()
return
</code></pre>
| 0 | 2016-10-12T15:11:18Z | 40,002,680 | <p>Iterating over a dictionary actually iterates over the keys. <code>for k in d:</code> is equivalent to <code>for k in d.keys():</code>. You are looking for the <code>values</code> or <code>items</code> methods, which will actually return the key and the value as a tuple:</p>
<pre><code>for k, v in d.items():
    # k will take the values 'Test1', 'Test2', etc.
    # v will take the values of the corresponding nested dicts.
</code></pre>
<p>or</p>
<pre><code>for v in d.values():
    # v will take on the values of the nested dicts.
</code></pre>
<p>I would recommend using <code>items</code> over <code>values</code> since that way you will have a reference to which primary key (test) you are processing. I am going to go out on a limb here and guess that you will need this for the non-trivial version of your program.</p>
<p>From there, you use <code>v</code> as you are trying to use <code>d[...]</code>, since that is exactly what it is:</p>
<pre><code>for k, v in d.items():
    cols = v.keys()
    vals = v.values()
    sql = "INSERT INTO Parameters ({0}) VALUES ({1})".format(
        ', '.join(cols),
        ', '.join(['%s'] * len(v))
    )
    try:
        cursor.execute(sql, vals)
    except Exception as e:
        pass
connection.close()
return
</code></pre>
<p>Since <code>v</code> is a nested dictionary, you can get the number of elements in both <code>cols</code> and <code>vals</code> as just <code>len(v)</code>.</p>
| 0 | 2016-10-12T15:35:19Z | [
"python",
"python-3.x",
"dictionary"
] |
Resuming an optimization in scipy.optimize? | 40,002,172 | <p>scipy.optimize presents many different methods for local and global optimization of multivariate systems. However, I have a very long optimization run that may be interrupted (and in some cases I may want to interrupt it deliberately). Is there any way to restart... well, any of them? I mean, clearly one can provide the last, most optimized set of parameters found as the initial guess, but that's not the only parameter in play - for example, there are also gradients (jacobians, for example), populations in differential evolution, etc. I obviously don't want these to have to start over as well.</p>
<p>I see little way to provide these to scipy, nor to save its state. For functions that take a jacobian for example, there is a jacobian argument ("jac"), but it's either a boolean (indicating that your evaluation function returns a jacobian, which mine doesn't), or a callable function (I would only have the single result from the last run to provide). Nothing takes just an array of the last jacobian available. And with differential evolution, loss of the population would be horrible for performance and convergence.</p>
<p>Are there any solutions to this? Any way to resume optimizations at all?</p>
| 0 | 2016-10-12T15:11:38Z | 40,006,867 | <p>The general answer is no, there's no general solution apart from, just as you say, starting from the last estimate from the previous run.</p>
<p>For differential evolution specifically though, you can instantiate the <code>DifferentialEvolutionSolver</code>, which you can pickle at a checkpoint and unpickle to resume.
(The suggestion comes from <a href="https://github.com/scipy/scipy/issues/6517" rel="nofollow">https://github.com/scipy/scipy/issues/6517</a>)</p>
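A rough sketch of that checkpoint/resume idea. Note that <code>DifferentialEvolutionSolver</code> lives in a private module (<code>scipy.optimize._differentialevolution</code>), so the import path is an implementation detail and may change between scipy versions; the objective function and parameters here are illustrative:

```python
import pickle

import numpy as np
# NOTE: private module -- this import path is an implementation detail.
from scipy.optimize._differentialevolution import DifferentialEvolutionSolver


def sphere(x):
    # simple convex objective, just for illustration
    return float(np.sum(x ** 2))


bounds = [(-5, 5), (-5, 5)]
solver = DifferentialEvolutionSolver(sphere, bounds, maxiter=20,
                                     polish=False, seed=1)

# ... at a checkpoint, serialize the whole solver (population included):
blob = pickle.dumps(solver)   # in practice, write this to disk

# later / after an interruption: restore and continue the optimization
solver = pickle.loads(blob)
result = solver.solve()       # resumes with the population intact
print(result.x)
```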
| 0 | 2016-10-12T19:31:13Z | [
"python",
"scipy",
"mathematical-optimization"
] |
Resuming an optimization in scipy.optimize? | 40,002,172 | <p>scipy.optimize presents many different methods for local and global optimization of multivariate systems. However, I have a very long optimization run that may be interrupted (and in some cases I may want to interrupt it deliberately). Is there any way to restart... well, any of them? I mean, clearly one can provide the last, most optimized set of parameters found as the initial guess, but that's not the only parameter in play - for example, there are also gradients (jacobians, for example), populations in differential evolution, etc. I obviously don't want these to have to start over as well.</p>
<p>I see little way to provide these to scipy, nor to save its state. For functions that take a jacobian for example, there is a jacobian argument ("jac"), but it's either a boolean (indicating that your evaluation function returns a jacobian, which mine doesn't), or a callable function (I would only have the single result from the last run to provide). Nothing takes just an array of the last jacobian available. And with differential evolution, loss of the population would be horrible for performance and convergence.</p>
<p>Are there any solutions to this? Any way to resume optimizations at all?</p>
| 0 | 2016-10-12T15:11:38Z | 40,059,852 | <p>The following can save and restart from a previous <code>x</code>,
but I gather you want to save and restart more state, e.g. gradients, too; can you clarify that ?</p>
<p>See also <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow">basinhopping</a>,
which has a nice-looking gui, <a href="https://github.com/pele-python/pele" rel="nofollow">pele-python</a> .</p>
<pre><code>#!/usr/bin/env python
""" Funcgradmn: wrap f() and grad(), save all x[] f[] grad[] to plot or restart """

from __future__ import division
import numpy as np

__version__ = "2016-10-18 oct denis"


class Funcgradmon(object):
    """ Funcgradmn: wrap f() and grad(), save all x[] f[] grad[] to plot or restart

    Example: minimize, save, restart --

    fg = Funcgradmon( func, gradfunc, verbose=1 )
        # fg(x): f(x), g(x)  for minimize( jac=True )

        # run 100 iter (if linesearch, 200-300 calls of fg()) --
    options = dict( maxiter=100 )  # ...
    min0 = minimize( fg, x0, jac=True, options=options )
    fg.savez( "0.npz", paramstr="..." )  # to plot or restart

        # restart from x[50] --
        # (won't repeat the previous path from 50
        # unless you save and restore the whole state of the optimizer)
    x0 = fg.restart( 50 )
    # change params ...
    min50 = minimize( fg, x0, jac=True, options=options )
    """

    def __init__( self, func, gradfunc, verbose=1 ):
        self.func = func
        self.gradfunc = gradfunc
        self.verbose = verbose
        self.x, self.f, self.g = [], [], []  # growing lists
        self.t = 0

    def __call__( self, x ):
        """ f, g = func(x), gradfunc(x); save them; return f, g """
        x = np.asarray_chkfinite( x )  # always
        f = self.func(x)
        g = self.gradfunc(x)
        g = np.asarray_chkfinite( g )
        self.x.append( np.copy(x) )
        self.f.append( _copy( f ))
        self.g.append( np.copy(g) )
        if self.verbose:
            print "%3d:" % self.t ,
            fmt = "%-12g" if np.isscalar(f)  else "%s\t"
            print fmt % f ,
            print "x: %s" % x ,  # with user's np.set_printoptions
            print "\tgrad: %s" % g
                # better df dx dg
            # callback: plot
        self.t += 1
        return f, g

    def restart( self, n ):
        """ x0 = fg.restart( n )  returns x[n] to minimize( fg, x0 )
        """
        x0 = self.x[n]  # minimize from here
        del self.x[:n]
        del self.f[:n]
        del self.g[:n]
        self.t = n
        if self.verbose:
            print "Funcgradmon: restart from x[%d] %s" % (n, x0)
        return x0

    def savez( self, npzfile, **kw ):
        """ np.savez( npzfile, x= f= g= ) """
        x, f, g = map( np.array, [self.x, self.f, self.g] )
        if self.verbose:
            asum = "f: %s \nx: %s \ng: %s" % (
                _asum(f), _asum(x), _asum(g) )
            print "Funcgradmon: saving to %s: \n%s \n" % (npzfile, asum)
        np.savez( npzfile, x=x, f=f, g=g, **kw )

    def load( self, npzfile ):
        load = np.load( npzfile )
        x, f, g = load["x"], load["f"], load["g"]
        if self.verbose:
            asum = "f: %s \nx: %s \ng: %s" % (
                _asum(f), _asum(x), _asum(g) )
            print "Funcgradmon: load %s: \n%s \n" % (npzfile, asum)
        self.x = list( x )
        self.f = list( f )
        self.g = list( g )
        self.loaddict = load
        return self.restart( len(x) - 1 )


def _asum( X ):
    """ one-line array summary: "shape type min av max" """
    if not hasattr( X, "dtype" ):
        return str(X)
    return "%s %s min av max %.3g %.3g %.3g" % (
        X.shape, X.dtype, X.min(), X.mean(), X.max() )

def _copy( x ):
    return x if x is None or np.isscalar(x) \
        else np.copy( x )

#...............................................................................
if __name__ == "__main__":
    import sys
    from scipy.optimize import minimize, rosen, rosen_der

    np.set_printoptions( threshold=20, edgeitems=10, linewidth=140,
        formatter = dict( float = lambda x: "%.3g" % x ))  # float arrays %.3g

    dim = 3
    method = "cg"
    maxiter = 10  # 1 linesearch -> 2-3 calls of fg

    # to change these params, run this.py a=1 b=None 'c = ...' in sh or ipython
    for arg in sys.argv[1:]:
        exec( arg )

    print "\n", 80 * "-"
    print "Funcgradmon: dim %d method %s maxiter %d \n" % (
        dim, method, maxiter )
    x0 = np.zeros( dim )

    #...........................................................................
    fg = Funcgradmon( rosen, rosen_der, verbose=1 )
    options = dict( maxiter=maxiter )  # ...

    min0 = minimize( fg, x0, jac=True, method=method, options=options )
    fg.savez( "0.npz", paramstr="..." )  # to plot or restart

    x0 = fg.restart( 5 )  # = fg.x[5]
    # change params, print them all
    min5 = minimize( fg, x0, jac=True, method=method, options=options )
    fg.savez( "5.npz", paramstr="..." )
</code></pre>
| 0 | 2016-10-15T13:52:04Z | [
"python",
"scipy",
"mathematical-optimization"
] |
error to pass value in query Django | 40,002,187 | <p>Hi everyone, I have a problem with this query in Django:</p>
<pre><code>projects_name = str(kwargs['project_name']).split(',')
status = str(kwargs['status'])
list_project = tuple(projects_name)
opc_status = {'jobs_running': 'running', 'jobs_pending': 'pending', 'cpus_used': 'cpu'}
if status in opc_status.values():
    key = list(opc_status.keys())[list(opc_status.values()).index(status)] + ', entry_dt'
else:
    key = '*'

db = MySQLdb.connect(host='..', port=, user='..', passwd='..', db='..')
try:
    cursor = db.cursor()
    cursor.execute('SELECT %s FROM proj_cpus WHERE project in %s', [key, list_project])
</code></pre>
<p>The first parameter of the query must be <code>*</code> or something like <code>jobs_pending, entry_dt</code>,</p>
<p>but the query returns this error:</p>
<pre><code>tuple index out of range
</code></pre>
<p>Any idea about how to create the query correctly?</p>
| 0 | 2016-10-12T15:12:52Z | 40,002,682 | <p>You could try this:</p>
<pre><code># Build a comma-separated string of all items in list_project
data_list = ', '.join([item for item in list_project])
query = 'SELECT %s FROM proj_cpus WHERE project in (%s)'
# Supply the parameters in the form of a tuple
cursor.execute(query, (key, data_list))
</code></pre>
<p><code>cursor.fetchall()</code> will always return data in tuples like you have observed in comments, it is not because there is an issue with the query. To convert to json you could do something like the following (<code>row_counter</code> is just a placeholder to make sure that there is a unique key for every entry).</p>
<pre><code>import json
key = '*'
data_list = ', '.join([item for item in list_project])
query = 'SELECT %s FROM proj_cpus WHERE project in (%s)'
cursor.execute(query, (key, data_list))
all_rows = cursor.fetchall()
row_headings = [header[0] for header in cursor.description]
row_counter = 0
all_rows_container = {}
for item in all_rows:
    item_dict = {row_headings[x]: item[x] for x in range(len(row_headings))}
    all_rows_container[row_counter] = item_dict
    row_counter += 1
json_data = json.dumps(all_rows_container)
print json_data
</code></pre>
<p>NOTE: the above may throw <code>IndexError</code> if the query is not with <code>key = '*'</code> because I think <code>row_headings</code> will contain all of the schema for the table, even for values that you did not select in the query. However, it should be sufficient to demonstrate the approach and you can tailor it in the event that you pick specific columns only.</p>
| 0 | 2016-10-12T15:35:22Z | [
"python",
"django"
] |
fit_transform with the training data and transform with the testing | 40,002,232 | <p>As the title says, I am using <code>fit_transform</code> with the <code>CountVectorizer</code> on the training data ... and then I am using <code>transform</code> only with the testing data ... will this give me the same as using <code>fit</code> only on the training and <code>transform</code> only on the testing data?</p>
| 0 | 2016-10-12T15:14:50Z | 40,002,626 | <p>The answer is <strong>YES</strong> :</p>
<p><code>fit_transform</code> is equivalent to <code>fit</code> followed by <code>transform</code>, but more efficiently implemented. <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit" rel="nofollow">See documentation</a></p>
<p>Both <code>fit</code> and <code>fit_transform</code> fit your vectorizer to your dataset. You can then use the same fitted vectorizer to transform any other dataset (in your case the test set).</p>
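For illustration, a minimal sketch with made-up texts (the documents are invented for the example) showing that the two approaches produce the same vocabulary and the same training matrix, while the test set is only ever transformed:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["the cat sat", "the dog ran"]
test_docs = ["the cat ran"]

vec_a = CountVectorizer()
X_train_a = vec_a.fit_transform(train_docs)   # fit + transform in one call

vec_b = CountVectorizer()
vec_b.fit(train_docs)                         # fit first ...
X_train_b = vec_b.transform(train_docs)       # ... then transform

# Same learned vocabulary and same matrix either way:
assert vec_a.vocabulary_ == vec_b.vocabulary_
assert (X_train_a != X_train_b).nnz == 0

# The test set is only transformed, never fitted,
# so it is encoded with the training vocabulary:
X_test = vec_a.transform(test_docs)
```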
| 1 | 2016-10-12T15:32:31Z | [
"python",
"scikit-learn"
] |
Django: KeyError triggered in forms.py | 40,002,239 | <p>Making a card app. User can make a deck and put cards in that deck. Decks and cards have an 'owner' field in their models to state who the user is.</p>
<p>forms.py</p>
<pre><code>class CardForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        # Pop() removes 'user' from the kwargs dictionary and populates the user variable
        user = kwargs.pop('owner')
        super(CardForm, self).__init__(*args, **kwargs)
        self.fields['deck'] = forms.ModelChoiceField(  # modify choices on 'deck' field
            queryset=Deck.objects.filter(owner=user)
        )

    class Meta:
        model = Card
        fields = ('term', 'definition', 'deck')
</code></pre>
<p>The part triggering the KeyError is</p>
<pre><code>super(CardForm, self).__init__(*args, **kwargs)
</code></pre>
<p>views.py</p>
<pre><code>def card_new(request, deck):
    if request.method == "POST":
        form = CardForm(request.POST)
        if form.is_valid():
            card = form.save(commit=False)
            card.save()
            return redirect('card:detail', deck)
    else:
        form = CardForm(initial={'deck': deck}, owner=request.user)  # this initial field sets card's deck as current deck
    return render(request, 'card/card_edit.html', {'form': form})
</code></pre>
<p>models.py</p>
<pre><code>class Card(models.Model):
    owner = models.ForeignKey(User, null=True, default=True, related_name='oc')
    term = models.CharField(max_length=100, default='N/A')
    definition = models.TextField(default='N/A')
    deck = models.ForeignKey(Deck, on_delete=models.CASCADE)
</code></pre>
| 1 | 2016-10-12T15:14:58Z | 40,002,599 | <p>You should pass <code>owner</code> for POST requests, as you already do for GET requests.</p>
<pre><code>if request.method == "POST":
    form = CardForm(request.POST, owner=request.user)
</code></pre>
| 0 | 2016-10-12T15:31:34Z | [
"python",
"django"
] |
Can subclasses in Python inherit parent class decorator | 40,002,241 | <p>If I apply a decorator to a class:</p>
<pre><code>from flask.ext.restful import Resource
@my_decorator()
class Foo(Resource): ...
</code></pre>
<p>Will it be automatically applied to any subclass methods as well? For example, will it be <em>magically</em> applied to <code>a_foobar_method</code>?</p>
<pre><code>class FooBar(Foo):
    def a_foobar_method(self):
        ...
</code></pre>
| 0 | 2016-10-12T15:15:05Z | 40,002,375 | <p>Decorators do not "mark" the decorated class in any way. There is no way to tell if you wrote</p>
<pre><code>@decorator
class Foo:
    pass
</code></pre>
<p>or</p>
<pre><code>class Foo:
    pass

Foo = decorator(Foo)
</code></pre>
<p>simply by looking at <code>Foo</code> itself. Decorators are just applied at the point of use, and then are forgotten.</p>
| -1 | 2016-10-12T15:20:50Z | [
"python",
"python-decorators"
] |
Can subclasses in Python inherit parent class decorator | 40,002,241 | <p>If I apply a decorator to a class:</p>
<pre><code>from flask.ext.restful import Resource
@my_decorator()
class Foo(Resource): ...
</code></pre>
<p>Will it be automatically applied to any subclass methods as well? For example, will it be <em>magically</em> applied to <code>a_foobar_method</code>?</p>
<pre><code>class FooBar(Foo):
    def a_foobar_method(self):
        ...
</code></pre>
| 0 | 2016-10-12T15:15:05Z | 40,002,452 | <p>In short, no.</p>
<pre><code>@my_decorator
class Foo(Resource): ...
</code></pre>
<p>is simply shorthand for</p>
<pre><code>class Foo(Resource): ...
Foo = my_decorator(Foo)
</code></pre>
<p>If the only time you ever use <code>Foo</code> is to define <code>FooBar</code>, then together your two pieces of code are equivalent to this:</p>
<pre><code>class Foo(Resource): ...
Foo_decorated = my_decorator(Foo)
class FooBar(Foo_decorated):
    def a_foobar_method(self):
        ...
</code></pre>
<p>Or even to this:</p>
<pre><code>class Foo(Resource): ...
class FooBar(my_decorator(Foo)):
    def a_foobar_method(self):
        ...
</code></pre>
<p>The real answer depends on what the decorator does. In principle it stands a chance of seeing the methods of <code>FooBar</code> if it looks at the list of methods after the class has been instantiated as an object (e.g. because it modifies the <code>__init__</code> method). But if it goes through the methods at the time it is called and applies some change to each of them, then it will have no effect on extra methods (or replaced methods) in <code>FooBar</code>. </p>
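To make this concrete, here is a small self-contained demonstration (using a simple recording decorator, not Flask-RESTful) that the decorator runs once, on the decorated class, and never sees methods added by subclasses:

```python
decorated_names = []

def my_decorator(cls):
    # Record which methods the decorator can see at decoration time.
    decorated_names.extend(
        name for name, attr in vars(cls).items() if callable(attr)
    )
    return cls

@my_decorator
class Foo:
    def a_foo_method(self):
        return "foo"

class FooBar(Foo):            # no decorator applied here
    def a_foobar_method(self):
        return "foobar"

print(decorated_names)        # only 'a_foo_method' was ever seen
```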
| 2 | 2016-10-12T15:24:08Z | [
"python",
"python-decorators"
] |
pandas left join - why more results? | 40,002,355 | <p>How is it possible that a pandas left join like</p>
<pre><code>df.merge(df2, left_on='first', right_on='second', how='left')
</code></pre>
<p>increases the data frame from 221309 to 1388680 rows?</p>
<h1>edit</h1>
<p>shape of df: (221309, 83)</p>
<p>shape of df2: (7602, 6)</p>
| 1 | 2016-10-12T15:20:04Z | 40,002,535 | <p>As <a href="http://stackoverflow.com/questions/40002355/pandas-left-join-why-more-results/40002535#comment67282514_40002355">@JonClements has already said in the comment</a> it's a result of duplicated entries in the columns used for merging/joining. Here is a small demo:</p>
<pre><code>In [5]: df
Out[5]:
a b
0 1 11
1 1 12
2 2 21
In [6]: df2
Out[6]:
a c
0 1 111
1 1 112
2 2 221
3 2 222
4 3 311
In [7]: df.merge(df2, on='a', how='left')
Out[7]:
a b c
0 1 11 111
1 1 11 112
2 1 12 111
3 1 12 112
4 2 21 221
5 2 21 222
</code></pre>
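As a guard against surprises like this, you can check the right-hand join key for duplicates up front, and newer pandas versions (0.21+) can verify the expected key relationship via the <code>validate</code> argument of <code>merge</code> (same made-up frames as in the demo above):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2], 'b': [11, 12, 21]})
df2 = pd.DataFrame({'a': [1, 1, 2, 2, 3], 'c': [111, 112, 221, 222, 311]})

# Is the join key unique on the right-hand side?
print(df2['a'].duplicated().any())   # True -> a left join will add rows

# Ask pandas to verify a many-to-one relationship; it raises MergeError
# here because 'a' is duplicated in df2.
try:
    df.merge(df2, on='a', how='left', validate='m:1')
except pd.errors.MergeError as exc:
    print('not many-to-one:', exc)
```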
| 4 | 2016-10-12T15:28:35Z | [
"python",
"pandas",
"join",
"dataframe",
"left-join"
] |
How to manipulate a huge CSV file in python | 40,002,369 | <p>I have a CSV file more than 16G, each row is the text data. When I was encoding (e.g. one-hot-encode) the whole CSV file data, my process was killed due to the memory limitation. Is there a way to process this kind of "big data"? </p>
<p>I am thinking of splitting the whole CSV file into multiple "smaller" files and then appending them to another CSV file; is that a correct way to handle the huge CSV file?</p>
| -1 | 2016-10-12T15:20:36Z | 40,002,530 | <p>Your question does not state what language you are using to handle this CSV file. I will reply using C#, but I imagine that the strategy will work equally well for Java too.</p>
<p>You can try using the <code>StreamReader</code> class to read the file line-by-line. That should take care of the read side of things.</p>
<p>Something like:</p>
<pre><code>using (var reader = new StreamReader(...))
{
    var line = string.Empty;
    while ((line = reader.ReadLine()) != null)
    {
        Process(line);
    }
}
</code></pre>
<p>NB: That's a code snippet in C#, and is more pseudo-code than actual code.</p>
<p>You should create a database using some kind of local DB technology, either SQLite or SQL Server LocalDB or even MySQL and load the data into a table or tables in that.</p>
<p>You can then write any other further processing based on the data held in the database rather than in a simple text file.</p>
| 0 | 2016-10-12T15:28:19Z | [
"python",
"csv",
"encoding",
"bigdata"
] |
How to manipulate a huge CSV file in python | 40,002,369 | <p>I have a CSV file more than 16G, each row is the text data. When I was encoding (e.g. one-hot-encode) the whole CSV file data, my process was killed due to the memory limitation. Is there a way to process this kind of "big data"? </p>
<p>I am thinking of splitting the whole CSV file into multiple "smaller" files and then appending them to another CSV file; is that a correct way to handle the huge CSV file?</p>
| -1 | 2016-10-12T15:20:36Z | 40,003,734 | <p>This has been discussed in <a href="http://stackoverflow.com/questions/33689456/reading-huge-csv-files-efficiently?rq=1">Reading huge csv files efficiently?</a></p>
<p>Probably the most reasonable thing to do with a 16GB csv file would be not to load it all into memory, but to read and process it line by line:</p>
<pre><code>import csv

with open(filename, "r") as f:
    lines = csv.reader(f)
    for line in lines:
        # process the line here
        ...
</code></pre>
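If you do want to go the splitting route mentioned in the question, a minimal stdlib-only sketch looks like this (the function name, chunk size, and filename prefix are illustrative):

```python
import csv

def split_csv(src_path, rows_per_chunk=100000, prefix="chunk"):
    """Split a large CSV into smaller files of at most rows_per_chunk rows."""
    chunk_paths = []
    with open(src_path, "r", newline="") as src:
        reader = csv.reader(src)
        chunk, index = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) >= rows_per_chunk:
                path = "%s_%d.csv" % (prefix, index)
                with open(path, "w", newline="") as out:
                    csv.writer(out).writerows(chunk)
                chunk_paths.append(path)
                chunk, index = [], index + 1
        if chunk:  # write any leftover rows
            path = "%s_%d.csv" % (prefix, index)
            with open(path, "w", newline="") as out:
                csv.writer(out).writerows(chunk)
            chunk_paths.append(path)
    return chunk_paths
```

Each chunk can then be encoded independently, keeping memory use bounded by the chunk size rather than the full 16GB file.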
| 0 | 2016-10-12T16:28:31Z | [
"python",
"csv",
"encoding",
"bigdata"
] |
QScintilla based text editor in PyQt5 with clickable functions and variables | 40,002,373 | <p>I am trying to make a simple texteditor with basic syntax highlighting, code completion and clickable functions & variables in PyQt5. My best hope to achieve this is using the QScintilla port <br>
for PyQt5.<br>
I have found the following QScintilla-based texteditor example on the Eli Bendersky website (<a href="http://eli.thegreenplace.net/2011/04/01/sample-using-qscintilla-with-pyqt" rel="nofollow">http://eli.thegreenplace.net/2011/04/01/sample-using-qscintilla-with-pyqt</a>, Victor S. has adapted it to PyQt5). I think this example is a good starting point:</p>
<pre><code>#-------------------------------------------------------------------------
# qsci_simple_pythoneditor.pyw
#
# QScintilla sample with PyQt
#
# Eli Bendersky ([email protected])
# This code is in the public domain
#-------------------------------------------------------------------------
import sys
import sip
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.Qsci import QsciScintilla, QsciLexerPython


class SimplePythonEditor(QsciScintilla):
    ARROW_MARKER_NUM = 8

    def __init__(self, parent=None):
        super(SimplePythonEditor, self).__init__(parent)

        # Set the default font
        font = QFont()
        font.setFamily('Courier')
        font.setFixedPitch(True)
        font.setPointSize(10)
        self.setFont(font)
        self.setMarginsFont(font)

        # Margin 0 is used for line numbers
        fontmetrics = QFontMetrics(font)
        self.setMarginsFont(font)
        self.setMarginWidth(0, fontmetrics.width("00000") + 6)
        self.setMarginLineNumbers(0, True)
        self.setMarginsBackgroundColor(QColor("#cccccc"))

        # Clickable margin 1 for showing markers
        self.setMarginSensitivity(1, True)
        # self.connect(self,
        #     SIGNAL('marginClicked(int, int, Qt::KeyboardModifiers)'),
        #     self.on_margin_clicked)
        self.markerDefine(QsciScintilla.RightArrow,
            self.ARROW_MARKER_NUM)
        self.setMarkerBackgroundColor(QColor("#ee1111"),
            self.ARROW_MARKER_NUM)

        # Brace matching: enable for a brace immediately before or after
        # the current position
        #
        self.setBraceMatching(QsciScintilla.SloppyBraceMatch)

        # Current line visible with special background color
        self.setCaretLineVisible(True)
        self.setCaretLineBackgroundColor(QColor("#ffe4e4"))

        # Set Python lexer
        # Set style for Python comments (style number 1) to a fixed-width
        # courier.
        #
        lexer = QsciLexerPython()
        lexer.setDefaultFont(font)
        self.setLexer(lexer)

        text = bytearray(str.encode("Arial"))
        # 32, "Courier New"
        self.SendScintilla(QsciScintilla.SCI_STYLESETFONT, 1, text)

        # Don't want to see the horizontal scrollbar at all
        # Use raw message to Scintilla here (all messages are documented
        # here: http://www.scintilla.org/ScintillaDoc.html)
        self.SendScintilla(QsciScintilla.SCI_SETHSCROLLBAR, 0)

        # not too small
        self.setMinimumSize(600, 450)

    def on_margin_clicked(self, nmargin, nline, modifiers):
        # Toggle marker for the line the margin was clicked on
        if self.markersAtLine(nline) != 0:
            self.markerDelete(nline, self.ARROW_MARKER_NUM)
        else:
            self.markerAdd(nline, self.ARROW_MARKER_NUM)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    editor = SimplePythonEditor()
    editor.show()
    editor.setText(open(sys.argv[0]).read())
    app.exec_()
</code></pre>
<p>Just copy-paste this code into an empty <code>.py</code> file, and run it. You should get the following simple texteditor appearing on your display:</p>
<p><a href="https://i.stack.imgur.com/MRcdJ.png" rel="nofollow"><img src="https://i.stack.imgur.com/MRcdJ.png" alt="enter image description here"></a></p>
<p>Notice how perfect the syntax highlighting is! QScintilla certainly did some parsing on the background to achieve that.<br>
Is it possible to make <em>clickable functions & variables</em> for this texteditor? Every self-respecting IDE has it. You click on a function, and the IDE jumps to the function definition. The same for variables. I would like to know:<br></p>
<ul>
<li>Does QScintilla support <em>clickable functions & variables</em>?</li>
<li>If not, is it possible to import another python module that implements this feature in the QScintilla texteditor?</li>
</ul>
<hr>
<p><strong>EDIT :</strong><br>
λuser noted the following:</p>
<blockquote>
<p>Clickable function names require full parsing with a much deeper knowledge of a programming language [..]<br>This is way beyond the scope of Scintilla/QScintilla. Scintilla provides a way to react when the mouse clicks somewhere on the text, but the logic of "where is the definition of a function" is not in Scintilla and probably never will be.<br>However, some projects are dedicated to this task, like <strong>ctags</strong>. You could simply write a wrapper around this kind of tool.</p>
</blockquote>
<p>I guess that writing such wrapper for <strong>ctags</strong> is now on my TODO list. The very first step is to get a reaction (Qt signal) when the user clicks on a function or variable. And perhaps the function/variable should turn a bit blueish when you hover with the mouse over it, to notify the user that it is clickable. I already tried to achieve this, but am held back by the shortage of QScintilla documentation.</p>
<p>So let us trim down the question to: <em>How do you make a function or variable in the QScintilla texteditor clickable (with clickable defined as 'something happens')</em></p>
| 0 | 2016-10-12T15:20:48Z | 40,002,879 | <p>Syntax highlighting is just a matter of running a lexer on the source file to find tokens, then attribute styles to it. A lexer has a very basic understanding of a programming language, it only understands what is a number literal, a keyword, an operator, a comment, a few others and that's all. This is a somewhat simple job that can be performed with just regular expressions.</p>
<p>On the other hand, clickable function names require full parsing with a much deeper knowledge of a programming language, e.g. whether this is a declaration of a variable or a use, etc. Furthermore, this may require parsing other source files not opened by the current editor.</p>
<p>This is way beyond the scope of Scintilla/QScintilla. Scintilla provides a way to react when the mouse clicks somewhere on the text, but the logic of "where is the definition of a function" is not in Scintilla and probably never will be.</p>
<p>However, some projects are dedicated to this task, like <a href="http://ctags.sourceforge.net/" rel="nofollow">ctags</a>. You could simply write a wrapper around this kind of tool.</p>
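<p>As a rough illustration of the indexing such a wrapper has to provide, here is a minimal, pure-Python sketch that maps names to definition lines with the standard <code>ast</code> module instead of shelling out to ctags. It only handles Python sources, and <code>definition_index</code> is an invented helper name, not an existing API:</p>

```python
import ast

def definition_index(source):
    """Map each function/class name to the line number(s) where it is defined."""
    index = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            index.setdefault(node.name, []).append(node.lineno)
    return index

src = "def foo():\n    pass\n\nclass Bar:\n    def baz(self):\n        pass\n"
print(definition_index(src))  # {'foo': [1], 'Bar': [4], 'baz': [5]}
```

<p>An editor would consult such an index when a clickable name is activated and jump to the stored line number; ctags does the same job for many more languages.</p>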
| 2 | 2016-10-12T15:45:38Z | [
"python",
"python-3.x",
"pyqt",
"pyqt5",
"qscintilla"
] |
QScintilla based text editor in PyQt5 with clickable functions and variables | 40,002,373 | <p>I am trying to make a simple texteditor with basic syntax highlighting, code completion and clickable functions & variables in PyQt5. My best hope to achieve this is using the QScintilla port <br>
for PyQt5.<br>
I have found the following QScintilla-based texteditor example on the Eli Bendersky website (<a href="http://eli.thegreenplace.net/2011/04/01/sample-using-qscintilla-with-pyqt" rel="nofollow">http://eli.thegreenplace.net/2011/04/01/sample-using-qscintilla-with-pyqt</a>, Victor S. has adapted it to PyQt5). I think this example is a good starting point:</p>
<pre><code>#-------------------------------------------------------------------------
# qsci_simple_pythoneditor.pyw
#
# QScintilla sample with PyQt
#
# Eli Bendersky ([email protected])
# This code is in the public domain
#-------------------------------------------------------------------------
import sys
import sip
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.Qsci import QsciScintilla, QsciLexerPython


class SimplePythonEditor(QsciScintilla):
    ARROW_MARKER_NUM = 8

    def __init__(self, parent=None):
        super(SimplePythonEditor, self).__init__(parent)

        # Set the default font
        font = QFont()
        font.setFamily('Courier')
        font.setFixedPitch(True)
        font.setPointSize(10)
        self.setFont(font)
        self.setMarginsFont(font)

        # Margin 0 is used for line numbers
        fontmetrics = QFontMetrics(font)
        self.setMarginsFont(font)
        self.setMarginWidth(0, fontmetrics.width("00000") + 6)
        self.setMarginLineNumbers(0, True)
        self.setMarginsBackgroundColor(QColor("#cccccc"))

        # Clickable margin 1 for showing markers
        self.setMarginSensitivity(1, True)
        # self.connect(self,
        #     SIGNAL('marginClicked(int, int, Qt::KeyboardModifiers)'),
        #     self.on_margin_clicked)
        self.markerDefine(QsciScintilla.RightArrow,
                          self.ARROW_MARKER_NUM)
        self.setMarkerBackgroundColor(QColor("#ee1111"),
                                      self.ARROW_MARKER_NUM)

        # Brace matching: enable for a brace immediately before or after
        # the current position
        #
        self.setBraceMatching(QsciScintilla.SloppyBraceMatch)

        # Current line visible with special background color
        self.setCaretLineVisible(True)
        self.setCaretLineBackgroundColor(QColor("#ffe4e4"))

        # Set Python lexer
        # Set style for Python comments (style number 1) to a fixed-width
        # courier.
        #
        lexer = QsciLexerPython()
        lexer.setDefaultFont(font)
        self.setLexer(lexer)

        text = bytearray(str.encode("Arial"))
        # 32, "Courier New"
        self.SendScintilla(QsciScintilla.SCI_STYLESETFONT, 1, text)

        # Don't want to see the horizontal scrollbar at all
        # Use raw message to Scintilla here (all messages are documented
        # here: http://www.scintilla.org/ScintillaDoc.html)
        self.SendScintilla(QsciScintilla.SCI_SETHSCROLLBAR, 0)

        # not too small
        self.setMinimumSize(600, 450)

    def on_margin_clicked(self, nmargin, nline, modifiers):
        # Toggle marker for the line the margin was clicked on
        if self.markersAtLine(nline) != 0:
            self.markerDelete(nline, self.ARROW_MARKER_NUM)
        else:
            self.markerAdd(nline, self.ARROW_MARKER_NUM)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    editor = SimplePythonEditor()
    editor.show()
    editor.setText(open(sys.argv[0]).read())
    app.exec_()
</code></pre>
<p>Just copy-paste this code into an empty <code>.py</code> file, and run it. You should get the following simple texteditor appearing on your display:</p>
<p><a href="https://i.stack.imgur.com/MRcdJ.png" rel="nofollow"><img src="https://i.stack.imgur.com/MRcdJ.png" alt="enter image description here"></a></p>
<p>Notice how perfect the syntax highlighting is! QScintilla certainly did some parsing in the background to achieve that.<br>
Is it possible to make <em>clickable functions & variables</em> for this texteditor? Every self-respecting IDE has it. You click on a function, and the IDE jumps to the function definition. The same for variables. I would like to know:<br></p>
<ul>
<li>Does QScintilla support <em>clickable functions & variables</em>?</li>
<li>If not, is it possible to import another python module that implements this feature in the QScintilla texteditor?</li>
</ul>
<hr>
<p><strong>EDIT :</strong><br>
λuser noted the following:</p>
<blockquote>
<p>Clickable function names require full parsing with a much deeper knowledge of a programming language [..]<br>This is way beyond the scope of Scintilla/QScintilla. Scintilla provides a way to react when the mouse clicks somewhere on the text, but the logic of "where is the definition of a function" is not in Scintilla and probably never will be.<br>However, some projects are dedicated to this task, like <strong>ctags</strong>. You could simply write a wrapper around this kind of tool.</p>
</blockquote>
<p>I guess that writing such a wrapper for <strong>ctags</strong> is now on my TODO list. The very first step is to get a reaction (Qt signal) when the user clicks on a function or variable. And perhaps the function/variable should turn a bit bluish when you hover over it with the mouse, to notify the user that it is clickable. I already tried to achieve this, but am held back by the shortage of QScintilla documentation.</p>
<p>So let us trim down the question to: <em>How do you make a function or variable in the QScintilla texteditor clickable (with clickable defined as 'something happens')</em></p>
| 0 | 2016-10-12T15:20:48Z | 40,006,957 | <p>I got a helpful answer from Matic Kukovec through mail, that I would like to share here. Matic Kukovec made an incredible IDE based on QScintilla: <a href="https://github.com/matkuki/ExCo" rel="nofollow">https://github.com/matkuki/ExCo</a>. Maybe it will inspire more people to dig deeper into QScintilla (and clickable variables and functions).</p>
<hr>
<p>Hotspots make text clickable. You have to style it manually using the <code>QScintilla.SendScintilla</code> function.
Example function I used in my editor Ex.Co. ( <a href="https://github.com/matkuki/ExCo" rel="nofollow">https://github.com/matkuki/ExCo</a> ):</p>
<pre><code>def style_hotspot(self, index_from, length, color=0xff0000):
    """Style the text from/to with a hotspot"""
    send_scintilla =
    #Use the scintilla low level messaging system to set the hotspot
    self.SendScintilla(PyQt4.Qsci.QsciScintillaBase.SCI_STYLESETHOTSPOT, 2, True)
    self.SendScintilla(PyQt4.Qsci.QsciScintillaBase.SCI_SETHOTSPOTACTIVEFORE, True, color)
    self.SendScintilla(PyQt4.Qsci.QsciScintillaBase.SCI_SETHOTSPOTACTIVEUNDERLINE, True)
    self.SendScintilla(PyQt4.Qsci.QsciScintillaBase.SCI_STARTSTYLING, index_from, 2)
    self.SendScintilla(PyQt4.Qsci.QsciScintillaBase.SCI_SETSTYLING, length, 2)
</code></pre>
<p>This makes text in the QScintilla editor clickable when you hover the mouse over it.
The number 2 in the above functions is the hotspot style number.
To catch the event that fires when you click the hotspot, connect to these signals:</p>
<pre><code>QScintilla.SCN_HOTSPOTCLICK
QScintilla.SCN_HOTSPOTDOUBLECLICK
QScintilla.SCN_HOTSPOTRELEASECLICK
</code></pre>
<p>For more details look at Scintilla hotspot documentation:
<a href="http://www.scintilla.org/ScintillaDoc.html#SCI_STYLESETHOTSPOT" rel="nofollow">http://www.scintilla.org/ScintillaDoc.html#SCI_STYLESETHOTSPOT</a>
and QScintilla hotspot events:
<a href="http://pyqt.sourceforge.net/Docs/QScintilla2/classQsciScintillaBase.html#a5eff383e6fa96cbbaba6a2558b076c0b" rel="nofollow">http://pyqt.sourceforge.net/Docs/QScintilla2/classQsciScintillaBase.html#a5eff383e6fa96cbbaba6a2558b076c0b</a></p>
<hr>
<p>First of all, a big thank you to Mr. Kukovec! I have a few questions regarding your answer:</p>
<p><strong>(1)</strong> There are a couple of things I don't understand in your example function.</p>
<pre><code>def style_hotspot(self, index_from, length, color=0xff0000):
    """Style the text from/to with a hotspot"""
    send_scintilla = # you undefine send_scintilla?
    #Use the scintilla low level messaging system to set the hotspot
    self.SendScintilla(..) # What object does 'self' refer to in this
    self.SendScintilla(..) # context?
    self.SendScintilla(..)
</code></pre>
<p><strong>(2)</strong> You say "To catch the event that fires when you click the hotspot, connect to these signals:"</p>
<pre><code>QScintilla.SCN_HOTSPOTCLICK
QScintilla.SCN_HOTSPOTDOUBLECLICK
QScintilla.SCN_HOTSPOTRELEASECLICK
</code></pre>
<p>How do you actually connect to those signals? Could you give one example? I'm used to the PyQt signal-slot mechanism, but I never used it on QScintilla. It would be a big help to see an example :-)</p>
<p><strong>(3)</strong> Maybe I missed something, but I don't see where you define in QScintilla that functions and variables (and not other things) are clickable in the source code?</p>
<p>Thank you so much for your kind help :-)</p>
| 0 | 2016-10-12T19:37:09Z | [
"python",
"python-3.x",
"pyqt",
"pyqt5",
"qscintilla"
] |
Append output function to multiple lists | 40,002,416 | <p>I want to execute a function with different parameter values. I have the following snippet of code which works perfectly well:</p>
<pre><code>tau = np.arange(2,4.01,0.1)
R = []
P = []
T = []
L = []
D = []
E = []
Obj = []
for i, tenum in enumerate(tau):
    [r, p, t, l, d, e, obj] = (foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01))
    R.append(r)
    P.append(p)
    T.append(t)
    L.append(l)
    D.append(d)
    E.append(e)
    Obj.append(obj)
</code></pre>
<p>However, I was wondering: <strong>Is there an easier way to accomplish this?</strong></p>
<hr>
<p>I have tried using
<code>res.append(foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01))</code> but <code>res[1]</code> returns <code><generator object <genexpr> at 0x046E7698></code>. </p>
| 0 | 2016-10-12T15:22:42Z | 40,002,571 | <pre><code>tau = np.arange(2,4.01,0.1)
results = [[] for _ in range(7)]
for i, tenum in enumerate(tau):
    data = foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01)
    for r,d in zip(results, data):
        r.append(d)
r, p, t, l, d, e, _obj = results
</code></pre>
| 3 | 2016-10-12T15:30:18Z | [
"python",
"list",
"python-2.7",
"append"
] |
Append output function to multiple lists | 40,002,416 | <p>I want to execute a function with different parameter values. I have the following snippet of code which works perfectly well:</p>
<pre><code>tau = np.arange(2,4.01,0.1)
R = []
P = []
T = []
L = []
D = []
E = []
Obj = []
for i, tenum in enumerate(tau):
    [r, p, t, l, d, e, obj] = (foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01))
    R.append(r)
    P.append(p)
    T.append(t)
    L.append(l)
    D.append(d)
    E.append(e)
    Obj.append(obj)
</code></pre>
<p>However, I was wondering: <strong>Is there an easier way to accomplish this?</strong></p>
<hr>
<p>I have tried using
<code>res.append(foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01))</code> but <code>res[1]</code> returns <code><generator object <genexpr> at 0x046E7698></code>. </p>
| 0 | 2016-10-12T15:22:42Z | 40,002,592 | <p>You can turn a generator object into a list object by just passing it to the <code>list()</code> function so maybe this will do what you want:</p>
<pre><code>res = []
for i, tenum in enumerate(tau):
    res.append(list(foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01)))
</code></pre>
<p>Even shorter with a list comprehension:</p>
<pre><code>res = [list(foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01)) for i, tenum in enumerate(tau)]
</code></pre>
<p>Either way, this leaves res transposed compared to what you want (thinking of it as a matrix). You can fix that with a call to <code>zip</code>:</p>
<pre><code>res_tr = zip(*res)
R, P, T, L, D, E, Obj = res_tr
</code></pre>
<p>Edit: Shortest of all, you can avoid building the intermediate list with a generator expression passed directly to <code>zip()</code>:</p>
<pre><code>R, P, T, L, D, E, Obj = zip(*(list(foo.cvxEDA(edaN, 1./fs, tenum, 0.7, 10.0, 0.0008, 0.01)) for tenum in tau))
</code></pre>
<p>One final note: In all of these, you can replace "<code>for i, tenum in enumerate(tau)</code>" with "<code>for tenum in tau</code>" since you don't seem to be using <code>i</code>.</p>
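<p>To see the transpose at work without the real <code>cvxEDA</code>, here is a self-contained toy run; <code>fake_cvxEDA</code> is an invented stand-in that returns 7 values per call, like the real function:</p>

```python
def fake_cvxEDA(tenum):
    # stand-in for foo.cvxEDA: returns a 7-tuple per call
    return tuple(tenum + k for k in range(7))

tau = [2, 3, 4]
R, P, T, L, D, E, Obj = zip(*(fake_cvxEDA(t) for t in tau))
print(R)    # (2, 3, 4)
print(Obj)  # (8, 9, 10)
```

<p>Note that <code>zip</code> yields tuples; wrap each result in <code>list(...)</code> if you need mutable lists.</p>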
| 3 | 2016-10-12T15:31:23Z | [
"python",
"list",
"python-2.7",
"append"
] |
How to vectorize hinge loss gradient computation | 40,002,509 | <p>I'm computing thousands of gradients and would like to vectorize the computations in Python. The context is SVM and the loss function is Hinge Loss. Y is Mx1, X is MxN and w is Nx1.</p>
<pre><code> L(w) = lam/2 * ||w||^2 + 1/m Sum i=1:m ( max(0, 1-y[i]X[i]w) )
</code></pre>
<p>The gradient of this is</p>
<pre><code>grad = lam*w + 1/m Sum i=1:m {-y[i]X[i].T if y[i]*X[i]*w < 1, else 0}
</code></pre>
<p>Instead of looping through each element of the sum and evaluating the max function, is it possible to vectorize this? I want to use something like np.where like the following</p>
<pre><code>grad = np.where(y*X.dot(w) < 1, -X.T.dot(y), 0)
</code></pre>
<p>This does not work because where the condition is true, -X.T*y is the wrong dimension. </p>
<p>edit: list comprehension version, would like to know if there's a cleaner or more optimal way</p>
<pre><code>def grad(X,y,w,lam):
    # cache y[i]*X[i].dot(w), each row of Xw is multiplied by a single element of y
    yXw = y*X.dot(w)
    # cache y[i]*X[i], note each row of X is multiplied by a single element of y
    yX = X*y[:,np.newaxis]
    # return the average of this max function
    return lam*w + np.mean( [-yX[i] if yXw[i] < 1 else 0 for i in range(len(y))] )
</code></pre>
| 1 | 2016-10-12T15:27:14Z | 40,029,763 | <p>you have two vectors A and B, and you want to return an array C such that C[i] = A[i] if B[i] < 1 and 0 otherwise; consequently, all you need to do is</p>
<pre><code>C := A * sign(max(0, 1-B)) # surprisingly similar to the original hinge loss, right? :)
</code></pre>
<p>since</p>
<ul>
<li>if B < 1 then 1-B > 0, thus max(0, 1-B) > 0 and sign(max(0, 1-B)) == 1</li>
<li>if B >= 1 then 1-B <= 0, thus max(0, 1-B) = 0 and sign(max(0, 1-B)) == 0</li>
</ul>
<p>so in your code it will be something like</p>
<pre><code>B = (y*X.dot(w)).ravel()  # the margins y[i]*X[i].dot(w), shape (m,)
A = -(X*y[:,np.newaxis])  # the candidate gradient rows -y[i]*X[i], shape (m, n)
C = A * np.sign(np.maximum(0, 1-B))[:,np.newaxis]
</code></pre>
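<p>For completeness, here is a hedged sketch of the whole vectorized gradient checked against the plain loop on random data; the boolean mask plays the role of sign(max(0, 1-B)), and the function names are made up for the demo:</p>

```python
import numpy as np

def grad_loop(X, y, w, lam):
    # reference implementation: literal sum over the m samples
    m = len(y)
    total = np.zeros_like(w)
    for i in range(m):
        if y[i] * X[i].dot(w) < 1:
            total += -y[i] * X[i]
    return lam * w + total / m

def grad_vec(X, y, w, lam):
    yXw = y * X.dot(w)                # margins y[i]*X[i].dot(w), shape (m,)
    mask = (yXw < 1).astype(X.dtype)  # 1 where the hinge is active
    # zero out inactive rows, then average over all m rows
    return lam * w + (-(X * y[:, None]) * mask[:, None]).mean(axis=0)

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
y = rng.choice([-1.0, 1.0], size=50)
w = rng.randn(4)
print(np.allclose(grad_loop(X, y, w, 0.1), grad_vec(X, y, w, 0.1)))  # True
```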
| 1 | 2016-10-13T19:52:03Z | [
"python",
"optimization",
"machine-learning"
] |
pandas: compare sum over two time periods? | 40,002,511 | <p>I have a dataframe that looks like this:</p>
<pre><code> prod_code month items cost
0 040201060AAAIAI 2016-05-01 5 572.20
1 040201060AAAKAK 2016-05-01 164 14805.19
2 040201060AAALAL 2016-05-01 13465 14486.07
</code></pre>
<p>I would like to first group by the first four characters of <code>prod_code</code>, then sum the total cost of each group from Jan-Feb 2016, then compare this with the total cost from Mar-Apr 2016, then find the groups with the biggest percentage increase over the two time periods.</p>
<p>What's the best way to go about this?</p>
<p>Here is my code so far:</p>
<pre><code>d = { 'prod_code': ['040201060AAAIAI', '040201060AAAIAJ', '040201060AAAIAI', '040201060AAAIAI', '040201060AAAIAI', '040201060AAAIAI', '040301060AAAKAG', '040301060AAAKAK', '040301060AAAKAK', '040301060AAAKAX', '040301060AAAKAK', '040301060AAAKAK'], 'month': ['2016-01-01', '2016-02-01', '2016-03-01', '2016-01-01', '2016-02-01', '2016-03-01', '2016-01-01', '2016-02-01', '2016-03-01', '2016-01-01', '2016-02-01', '2016-03-01'], 'cost': [43, 45, 46, 41, 48, 59, 8, 9, 10, 12, 15, 13] }
df = pd.DataFrame.from_dict(d)
df['para'] = df.prod_code.str[:4]
df_para = df.groupby(['para', 'month']).sum()
</code></pre>
<p>This gives me <code>df_para</code> which looks like this:</p>
<pre><code> cost
para month
0402 2016-01-01 84
2016-02-01 93
2016-03-01 105
0403 2016-01-01 20
2016-02-01 24
2016-03-01 23
</code></pre>
<p>Now I need to calculate the sum for each group for Jan-Feb and for Mar-Apr, then the difference between those two sums, and finally sort by that difference. What is the best way to do this?</p>
| 1 | 2016-10-12T15:27:18Z | 40,002,847 | <p>You can create a month group variable based on whether the months are <code>Jan-Feb</code> or <code>Mar-Apr</code> and then group by the code and month group variable, summarize the cost and calculate the difference:</p>
<pre><code>import numpy as np
import pandas as pd
df['month_period'] = np.where(pd.to_datetime(df.month).dt.month.isin([1,2]), 1, 2)
# creation of the month group variable could be adjusted based on how you want to cut
# your time, this is a simplified example which assumes you only have data from Jan-Apr
(df.groupby([df.prod_code.str[:4], df.month_period]).sum().groupby(level = 0).pct_change()
.dropna().sort('cost', ascending=False))
</code></pre>
<p><a href="https://i.stack.imgur.com/7Urvn.png" rel="nofollow"><img src="https://i.stack.imgur.com/7Urvn.png" alt="enter image description here"></a></p>
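<p>The same recipe runs end to end on toy data; the product codes and costs below are made up for illustration, and the newer <code>.sort_values</code> stands in for the deprecated <code>.sort</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'prod_code': ['0402A', '0402B', '0402A', '0403A', '0403A', '0403B'],
    'month': ['2016-01-01', '2016-02-01', '2016-03-01',
              '2016-01-01', '2016-02-01', '2016-04-01'],
    'cost': [10, 20, 60, 5, 5, 30]})

# period 1 = Jan-Feb, period 2 = Mar-Apr
df['month_period'] = np.where(pd.to_datetime(df.month).dt.month.isin([1, 2]), 1, 2)
summed = df.groupby([df.prod_code.str[:4], df.month_period])['cost'].sum()
change = summed.groupby(level=0).pct_change().dropna()
print(change.sort_values(ascending=False))  # 0403 tripled (+200%), 0402 doubled (+100%)
```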
| 1 | 2016-10-12T15:44:00Z | [
"python",
"pandas"
] |
Python: convert some encoding to russian alphabet | 40,002,546 | <p>I have list with unicode</p>
<pre><code>lst = [u'\xd0\xbe', u'/', u'\xd0\xb8', u'\xd1\x81', u'\xd0\xb2', u'\xd0\xba', u'\xd1\x8f', u'\xd1\x83', u'\xd0\xbd\xd0\xb0', u'____', u'|', u'\xd0\xbf\xd0\xbe', u'11', u'search', u'\xd0\xbe\xd1\x82', u'**modis**', u'15', u'\xd0\xa1', u'**avito**', u'\xd0\xbd\xd0\xb5', u'[\xd0\xa1\xd0\xbe\xd1\x85\xd1\x80\xd0\xb0\xd0\xbd\xd1\x91\xd0\xbd\xd0\xbd\xd0\xb0\xd1\x8f', u'\xd0\x92', u'\xd0\xb5\xd1\x89\xd1\x91', u'kid', u'google', u'\xd0\xbb\xd0\xb8', u'13', u'**\xd0\xb0\xd0\xb2\xd0\xb8\xd1\x82\xd0\xbe**', u'[\xd0\x9f\xd0\xbe\xd0\xba\xd0\xb0\xd0\xb7\xd0\xb0\xd1\x82\xd1\x8c', u'\xd0\x9f\xd0\xbe\xd0\xb6\xd0\xb0\xd0\xbb\xd0\xbe\xd0\xb2\xd0\xb0\xd1\x82\xd1\x8c\xd1\x81\xd1\x8f', u'\xd0\x9e', u'&parent-', u'\xd0\xaf\xd0\xbd\xd0\xb4\xd0\xb5\xd0\xba\xd1\x81', u'###', u'**avito**.', u'**kiddieland**', u'\xd0\xbc\xd0\xb0\xd0\xb3\xd0\xb0\xd0\xb7\xd0\xb8\xd0\xbd', u'45', u'click2.yandex.ru/redir', u'72']
</code></pre>
<p>I am trying to convert strings like <code>u'\xd0\xbe'</code> to Russian.
I tried both Python 2 and 3, but I can't manage it.
I use the PyCharm IDE, and in its settings the default encoding is utf-8.
When I print the list with</p>
<pre><code>for elem in lst:
    print (elem)
</code></pre>
<p>it returns me <code>Ð¾</code> for the first elem. When I try <code>print (elem.encode('cp1252'))</code> it returns <code>b'\xd0\xbe'</code>.
When I use <code>chardet.detect</code> it tells me that it's <code>utf-8</code>.
Can anybody explain how I can convert it to the Russian alphabet, and why the approaches I have used don't work?</p>
| 0 | 2016-10-12T15:29:16Z | 40,003,034 | <p>It appears that the elements of your list are byte strings encoded in UTF-8, yet they are of type <code>str</code> (or <code>unicode</code> in Python 2).</p>
<p>I used the following to convert them back into proper UTF-8:</p>
<pre><code>def reinterpret(string):
    byte_arr = bytearray(ord(char) for char in string)
    return byte_arr.decode('utf8')
</code></pre>
<p>This gives the following, which looks a bit more like Russian:</p>
<pre><code>>>> for elem in lst:
... print(reinterpret(elem))
...
о
/
и
с
в
к
я
у
на
____
|
по
11
search
от
**modis**
15
С
**avito**
не
[Сохранённая
В
ещё
kid
google
ли
13
**авито**
[Показать
Пожаловаться
О
&parent-
Яндекс
###
**avito**.
**kiddieland**
магазин
45
click2.yandex.ru/redir
72
</code></pre>
| 1 | 2016-10-12T15:54:04Z | [
"python",
"unicode",
"encoding",
"utf-8"
] |
Python: convert some encoding to russian alphabet | 40,002,546 | <p>I have list with unicode</p>
<pre><code>lst = [u'\xd0\xbe', u'/', u'\xd0\xb8', u'\xd1\x81', u'\xd0\xb2', u'\xd0\xba', u'\xd1\x8f', u'\xd1\x83', u'\xd0\xbd\xd0\xb0', u'____', u'|', u'\xd0\xbf\xd0\xbe', u'11', u'search', u'\xd0\xbe\xd1\x82', u'**modis**', u'15', u'\xd0\xa1', u'**avito**', u'\xd0\xbd\xd0\xb5', u'[\xd0\xa1\xd0\xbe\xd1\x85\xd1\x80\xd0\xb0\xd0\xbd\xd1\x91\xd0\xbd\xd0\xbd\xd0\xb0\xd1\x8f', u'\xd0\x92', u'\xd0\xb5\xd1\x89\xd1\x91', u'kid', u'google', u'\xd0\xbb\xd0\xb8', u'13', u'**\xd0\xb0\xd0\xb2\xd0\xb8\xd1\x82\xd0\xbe**', u'[\xd0\x9f\xd0\xbe\xd0\xba\xd0\xb0\xd0\xb7\xd0\xb0\xd1\x82\xd1\x8c', u'\xd0\x9f\xd0\xbe\xd0\xb6\xd0\xb0\xd0\xbb\xd0\xbe\xd0\xb2\xd0\xb0\xd1\x82\xd1\x8c\xd1\x81\xd1\x8f', u'\xd0\x9e', u'&parent-', u'\xd0\xaf\xd0\xbd\xd0\xb4\xd0\xb5\xd0\xba\xd1\x81', u'###', u'**avito**.', u'**kiddieland**', u'\xd0\xbc\xd0\xb0\xd0\xb3\xd0\xb0\xd0\xb7\xd0\xb8\xd0\xbd', u'45', u'click2.yandex.ru/redir', u'72']
</code></pre>
<p>I am trying to convert strings like <code>u'\xd0\xbe'</code> to Russian.
I tried both Python 2 and 3, but I can't manage it.
I use the PyCharm IDE, and in its settings the default encoding is utf-8.
When I print the list with</p>
<pre><code>for elem in lst:
    print (elem)
</code></pre>
<p>it returns me <code>Ð¾</code> for the first elem. When I try <code>print (elem.encode('cp1252'))</code> it returns <code>b'\xd0\xbe'</code>.
When I use <code>chardet.detect</code> it tells me that it's <code>utf-8</code>.
Can anybody explain how I can convert it to the Russian alphabet, and why the approaches I have used don't work?</p>
| 0 | 2016-10-12T15:29:16Z | 40,003,035 | <p>Your data is a <a href="https://en.wikipedia.org/wiki/Mojibake" rel="nofollow">Mojibake</a>, incorrectly decoded from UTF-8 bytes as Latin-1 or CP1252.</p>
<p>You can repair this by manually reverting that process:</p>
<pre><code>repaired = [elem.encode('latin1').decode('utf8') for elem in lst]
</code></pre>
<p>but be careful; if the data actually was decoded as cp1252 the above would fail if there were any bytes in the range 0x80-0x9f in the source data.</p>
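<p>A minimal round trip on a single element shows what the repair does (<code>u'\xd0\xbe'</code> is the Mojibake of the Cyrillic letter 'о'):</p>

```python
mojibake = u'\xd0\xbe'   # prints as 'Ð¾'
fixed = mojibake.encode('latin1').decode('utf8')
print(fixed)             # о
```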
<p>You can use the <a href="https://ftfy.readthedocs.io/en/latest/" rel="nofollow"><code>ftfy</code> library</a> instead; it contains specialist codecs that can handle forced decodings too (where bytes are selectively decoded as Latin-1 where a CP1252 mapping is missing):</p>
<pre><code>import ftfy
repaired = [ftfy.fix_text(elem) for elem in lst]
</code></pre>
<p><code>ftfy.fix_text()</code> does a good job at auto-detecting what codec the data was decoded with.</p>
<p>Either method works for the sample data you gave; using <code>ftfy</code> or manually decoding doesn't make a difference for <em>that specific example</em>:</p>
<pre><code>>>> import ftfy
>>> repaired = [ftfy.fix_text(elem) for elem in lst]
>>> repaired
[u'\u043e', u'/', u'\u0438', u'\u0441', u'\u0432', u'\u043a', u'\u044f', u'\u0443', u'\u043d\u0430', u'____', u'|', u'\u043f\u043e', u'11', u'search', u'\u043e\u0442', u'**modis**', u'15', u'\u0421', u'**avito**', u'\u043d\u0435', u'[\u0421\u043e\u0445\u0440\u0430\u043d\u0451\u043d\u043d\u0430\u044f', u'\u0412', u'\u0435\u0449\u0451', u'kid', u'google', u'\u043b\u0438', u'13', u'**\u0430\u0432\u0438\u0442\u043e**', u'[\u041f\u043e\u043a\u0430\u0437\u0430\u0442\u044c', u'\u041f\u043e\u0436\u0430\u043b\u043e\u0432\u0430\u0442\u044c\u0441\u044f', u'\u041e', u'&parent-', u'\u042f\u043d\u0434\u0435\u043a\u0441', u'###', u'**avito**.', u'**kiddieland**', u'\u043c\u0430\u0433\u0430\u0437\u0438\u043d', u'45', u'click2.yandex.ru/redir', u'72']
>>> print repaired[20]
[Сохранённая
</code></pre>
<p>Of course, the <em>better</em> solution is to avoid creating a Mojibake in the first place. If you can correct the source of the error, so much the better.</p>
<p>For example, if you loaded this data using the <code>requests</code> library and assumed that it was safe to use the <code>response.text</code> attribute, then please do read the <a href="http://docs.python-requests.org/en/master/user/advanced/#encodings" rel="nofollow"><em>Encodings</em> section</a> of the <em>Advanced Usage</em> chapter in the library documentation:</p>
<blockquote>
<p>The only time Requests will not do this is if no explicit charset is present in the HTTP headers and the Content-Type header contains text. In this situation, RFC 2616 specifies that the default charset must be ISO-8859-1. Requests follows the specification in this case. If you require a different encoding, you can manually set the <code>Response.encoding</code> property, or use the raw <code>Response.content</code>.</p>
</blockquote>
<p>so if there is character set defined on the response, <code>response.text</code> will give you Latin-1 decoded text instead. It is better to avoid using <code>response.text</code> and use <code>response.content</code> instead in that case and either manually decode or use a format-appropriate parser to determine the codec used (such as BeatifulSoup for HTML).</p>
| 1 | 2016-10-12T15:54:05Z | [
"python",
"unicode",
"encoding",
"utf-8"
] |
Calculating score of merged string | 40,002,581 | <p>I am trying to figure out how to calculate the score of two merged lists of names. I need to give one point for each character (including spaces between first and last name) plus one point for each vowel in the name. </p>
<pre><code>a = ["John", "Kate", "Oli"]
b = ["Green", "Fletcher", "Nelson"]
vowel = ["a", "e", "i", "o", "u"]
gen = ((x, y) for x in a for y in b)
for u, v in gen:
    print u, v
</code></pre>
<p>I am struggling to figure out what to do. Any help would be greatly appreciated.</p>
| 1 | 2016-10-12T15:30:46Z | 40,002,730 | <pre><code>for first, second in gen:
    name = " ".join((first, second))  # str.join takes a single iterable argument
    score = 0
    for letter in name:
        if letter in vowel:
            score += 1
    score += len(name)
</code></pre>
<p>It's easier to have them as a single string; you can then loop over that string. One point per character is easy, that's just the length of the string. Since each vowel is worth one extra point, just loop over the letters and add a point for each vowel. Voila!</p>
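<p>For reference, the same scoring rule can be packed into a small helper (an alternative formulation for illustration, not part of the original answer):</p>

```python
vowels = set("aeiou")

def score(first, last):
    name = " ".join((first, last))
    # one point per character (space included) plus one per lowercase vowel
    return len(name) + sum(ch in vowels for ch in name)

print(score("John", "Green"))     # 13
print(score("Kate", "Fletcher"))  # 17
```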
| 0 | 2016-10-12T15:38:11Z | [
"python",
"string",
"list"
] |
Calculating score of merged string | 40,002,581 | <p>I am trying to figure out how to calculate the score of two merged lists of names. I need to give one point for each character (including spaces between first and last name) plus one point for each vowel in the name. </p>
<pre><code>a = ["John", "Kate", "Oli"]
b = ["Green", "Fletcher", "Nelson"]
vowel = ["a", "e", "i", "o", "u"]
gen = ((x, y) for x in a for y in b)
for u, v in gen:
    print u, v
</code></pre>
<p>I am struggling to figure out what to do. Any help would be greatly appreciated.</p>
| 1 | 2016-10-12T15:30:46Z | 40,002,895 | <p>So you first <code>zip</code> the first and last names, then join them into <code>str</code> objects with <code>' '</code> as the separator. Then with <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a> you count how many times the so-called <em>vowel</em> characters occur, <code>sum</code> them up, and add the <code>len</code> of the whole name. The result is a <code>dict</code> object, and you can then do whatever you like with it.</p>
<pre><code>from collections import Counter
a = ["John", "Kate", "Oli"]
b = ["Green", "Fletcher", "Nelson"]
vowel = ["a", "e", "i", "o", "u"]
output = {}
for item in [' '.join(i) for i in zip(a,b)]:
    output[item] = sum(Counter(item)[x] for x in vowel) + len(item)
output
</code></pre>
<p>Output:</p>
<pre><code>{'John Green': 13, 'Kate Fletcher': 17, 'Oli Nelson': 13}
</code></pre>
<p><strong>UPDATE</strong></p>
<p>If you need all possible variations of first name and last name you can do that with <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a></p>
<pre><code>from itertools import product
from collections import Counter
a = ["John", "Kate", "Oli"]
b = ["Green", "Fletcher", "Nelson"]
vowel = ["a", "e", "i", "o", "u"]
output = {}
for item in [' '.join(i) for i in product(a,b)]:
    output[item] = sum(Counter(item)[x] for x in vowel) + len(item)
output
</code></pre>
<p>Output:</p>
<pre><code>{'John Fletcher': 16,
'John Green': 13,
'John Nelson': 14,
'Kate Fletcher': 17,
'Kate Green': 14,
'Kate Nelson': 15,
'Oli Fletcher': 15,
'Oli Green': 12,
'Oli Nelson': 13}
</code></pre>
| 0 | 2016-10-12T15:46:26Z | [
"python",
"string",
"list"
] |
What is the mistake here? | 40,002,588 | <p>This is my code and it seems to be correct, but it doesn't work. Please help.</p>
<pre><code>HEADER_XPATH = ['//h1[@class="story-body__h1"]//text()']
AUTHOR_XPATH = ['//span[@class="byline__name"]//text()']
PUBDATE_XPATH = ['//div/@data-datetime']
WTAGS_XPATH = ['']
CATEGORY_XPATH = ['//span[@rev="news|source""]//text()']
TEXT = ['//div[@property="articleBody"]//p//text()']
INTERLINKS = ['//div[@class="story-body__link"]//p//a/@href']
DATE_FORMAT_STRING = '%Y-%m-%d'
class BBCSpider(Spider):
    name = "bbc"
    allowed_domains = ["bbc.com"]
    sitemap_urls = [
        'http://Www.bbc.com/news/sitemap/',
        'http://www.bbc.com/news/technology/',
        'http://www.bbc.com/news/science_and_environment/']

    def parse_page(self, response):
        items = []
        item = ContentItems()
        item['title'] = process_singular_item(self, response, HEADER_XPATH, single=True)
        item['resource'] = urlparse(response.url).hostname
        item['author'] = process_array_item(self, response, AUTHOR_XPATH, single=False)
        item['pubdate'] = process_date_item(self, response, PUBDATE_XPATH, DATE_FORMAT_STRING, single=True)
        item['tags'] = process_array_item(self, response, TAGS_XPATH, single=False)
        item['category'] = process_array_item(self, response, CATEGORY_XPATH, single=False)
        item['article_text'] = process_article_text(self, response, TEXT)
        item['external_links'] = process_external_links(self, response, INTERLINKS, single=False)
        item['link'] = response.url
        items.append(item)
        return items
</code></pre>
| -2 | 2016-10-12T15:31:07Z | 40,005,562 | <p>Your spider is just badly structured and because of that it does nothing.<br>
The <code>scrapy.Spider</code> spider requires a <code>start_urls</code> class attribute, which should contain a list of urls that the spider will use to start the crawl; every response to these urls is passed to the class method <code>parse</code>, which means it is required as well.</p>
<p>Your spider has a <code>sitemap_urls</code> class attribute that is not being used anywhere, and it also has a <code>parse_page</code> class method that is never used anywhere either.<br>
So in short your spider should look something like this:</p>
<pre><code>class BBCSpider(Spider):
    name = "bbc"
    allowed_domains = ["bbc.com"]
    start_urls = [
        'http://Www.bbc.com/news/sitemap/',
        'http://www.bbc.com/news/technology/',
        'http://www.bbc.com/news/science_and_environment/']

    def parse(self, response):
        # This is a page with all of the articles
        article_urls = []  # find article urls in the page, e.g. via response.xpath(...)
        for url in article_urls:
            yield Request(url, self.parse_page)

    def parse_page(self, response):
        # This is an article page
        item = ContentItems()
        # populate item
        return item
</code></pre>
| 0 | 2016-10-12T18:12:48Z | [
"python",
"python-3.x",
"scrapy",
"web-crawler",
"scrapy-spider"
] |
Selenium, Python. Gives no such element: Unable to locate element error whether the button is there | 40,002,614 | <p>I am working on my test case, which includes sending values to the input fields for buying tickets. But in this case selenium gives me an "unable to locate element" error while I am trying to locate the input field named itemq_3728; the problem is that the page changes the name of the input field every time the page is reloaded.</p>
<p>How can I locate the input field?
I tried XPath but couldn't achieve the goal, and also couldn't write it relative to the name of the ticket.</p>
<pre><code><table id="bms_tickets" width="90%" cellspacing="5" cellpadding="0" class="bms_tickets table">
<thead>
<tr>
<th>NAME</th>
<th width="240px">PRICE</th>
<th width="100px">QUANTITY</th>
</tr>
</thead>
<tbody id="resTypesTable">
<tr id="bms_restype_3728" class="bms_restype">
<td class="bms_restype_desc">
Gen Ad
<div style="font-size:10px;margin-left:5px;">
</div>
</td>
<td class="bms_restype_price">
$10.00
<input type="hidden" name="pay_itemq_3728" value="10.00">
</td>
<td class="bms_restype_qty">
<input type="text" name="itemq_3728" value="0" placeholder="1" min="1">
</td>
</tr>
</tbody>
</table>
</code></pre>
| 0 | 2016-10-12T15:31:58Z | 40,003,230 | <p>Hope this will help, assuming only the numeric part of the name changes after page load:
'<code>//td[@class="bms_restype_qty"]//input[starts-with(@name,"itemq")]'</code></p>
| 0 | 2016-10-12T16:04:00Z | [
"python",
"google-chrome",
"selenium",
"xpath"
] |
Selenium, Python. Gives no such element: Unable to locate element error whether the button is there | 40,002,614 | <p>I am working on my test case, which includes sending values to the input fields for buying tickets. In this case Selenium gives me an "unable to locate element" error while I am trying to locate the input field named itemq_3728; the problem is that the page changes the name of the input field every time it reloads the page.</p>
<p>How can I locate the input field?
I tried XPath but couldn't achieve the goal, and also couldn't write it relative to the name of the ticket.</p>
<pre><code><table id="bms_tickets" width="90%" cellspacing="5" cellpadding="0" class="bms_tickets table">
<thead>
<tr>
<th>NAME</th>
<th width="240px">PRICE</th>
<th width="100px">QUANTITY</th>
</tr>
</thead>
<tbody id="resTypesTable">
<tr id="bms_restype_3728" class="bms_restype">
<td class="bms_restype_desc">
Gen Ad
<div style="font-size:10px;margin-left:5px;">
</div>
</td>
<td class="bms_restype_price">
$10.00
<input type="hidden" name="pay_itemq_3728" value="10.00">
</td>
<td class="bms_restype_qty">
<input type="text" name="itemq_3728" value="0" placeholder="1" min="1">
</td>
</tr>
</tbody>
</table>
</code></pre>
| 0 | 2016-10-12T15:31:58Z | 40,004,646 | <p>You can locate it using <code>cssSelector</code> as below :-</p>
<pre><code>driver.find_element_by_css_selector("td.bms_restype_qty > input[type='text']")
</code></pre>
<p>Or, if you want to locate this element using <code>xpath</code>, you can locate it relative to the <code>Gen Ad</code> name-column text as below:</p>
<pre><code>driver.find_element_by_xpath(".//td[normalize-space(.)='Gen Ad' and @class = 'bms_restype_desc']/following-sibling::td[@class='bms_restype_qty']/input")
</code></pre>
<p>Or</p>
<pre><code>driver.find_element_by_xpath(".//tr[td[normalize-space(.)='Gen Ad']]/td[@class='bms_restype_qty']/input")
</code></pre>
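<p>Outside the browser, the same "name starts with <code>itemq</code>" idea can be sanity-checked against the question's HTML snippet with the standard library alone (a Python 3 sketch; in Selenium you would keep using the selectors above):</p>

```python
from html.parser import HTMLParser

class TicketInputFinder(HTMLParser):
    """Collect names of <input> tags whose name attribute starts with 'itemq'."""
    def __init__(self):
        super().__init__()
        self.names = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("name", "").startswith("itemq"):
            self.names.append(attrs["name"])

snippet = '''<td class="bms_restype_qty">
<input type="text" name="itemq_3728" value="0" placeholder="1" min="1">
</td>'''
finder = TicketInputFinder()
finder.feed(snippet)
print(finder.names)
```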
| 0 | 2016-10-12T17:19:47Z | [
"python",
"google-chrome",
"selenium",
"xpath"
] |
python scikit linear-regression weird results | 40,002,632 | <p>I'm new to Python.</p>
<p>I'm trying to plot, using matplotlib, the results of a linear regression.</p>
<p>I've tried with some basic data and it worked, but when I try with the actual data, the regression line is completely wrong. I think I'm doing something wrong with the fit() or predict() functions.</p>
<p>This is the code:</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn import linear_model
import scipy
import numpy as np
regr=linear_model.LinearRegression()
A=[[69977, 4412], [118672, 4093], [127393, 12324], [226158, 15453], [247883, 8924], [228057, 6568], [350119, 4040], [197808, 6793], [205989, 8471], [10666, 632], [38746, 1853], [12779, 611], [38570, 1091], [38570, 1091], [95686, 8752], [118025, 17620], [79164, 13335], [83051, 1846], [4177, 93], [29515, 1973], [75671, 5070], [10077, 184], [78975, 4374], [187730, 17133], [61558, 2521], [34705, 1725], [206514, 10548], [13563, 1734], [134931, 7117], [72527, 6551], [16014, 310], [20619, 403], [21977, 437], [20204, 258], [20406, 224], [20551, 375], [38251, 723], [20416, 374], [21125, 429], [20405, 235], [20042, 431], [20016, 366], [19702, 200], [20335, 420], [21200, 494], [22667, 487], [20393, 405], [20732, 414], [20602, 393], [111705, 7623], [112159, 5982], [6750, 497], [59624, 418], [111468, 10209], [40057, 1484], [435, 0], [498848, 17053], [26585, 1390], [75170, 3883], [139146, 3540], [84931, 7214], [19144, 3125], [31144, 2861], [66573, 818], [114253, 4155], [15421, 2094], [307497, 5110], [484904, 10273], [373476, 36365], [128152, 10920], [517285, 106315], [453483, 10054], [270763, 17542], [9068, 362], [61992, 1608], [35791, 1747], [131215, 6227], [4314, 191], [16316, 2650], [72791, 2077], [47008, 4656], [10853, 1346], [66708, 4855], [214736, 11334], [46493, 4236], [23042, 737], [335941, 11177], [65167, 2433], [94913, 7523], [454738, 12335]]
# My data are selected from a MySQL DB and stored in an np array like the one above.
regr.fit(A,A[:,1])
plt.scatter(A[:,0],A[:,1], color='black')
plt.plot(A[:,1],regr.predict(A), color='blue',linewidth=3)
plt.show()
</code></pre>
<p>What I want is a regression line fitted using the data from the first column of A against the second column. And this is the result: <a href="https://i.stack.imgur.com/2dMev.png" rel="nofollow">enter image description here</a></p>
<p>I know that the presence of outliers can heavily impact the output, but I tried other tools for regression and the regression line was a lot closer to the area where the points are, so I'm sure I'm missing something.</p>
<p>Thank you.</p>
<p>EDIT 1: as suggested, I tried again changing only the plot() param. Instead of A[:,1] I used A[:,0], and this is the result: <a href="https://i.stack.imgur.com/UP6iM.png" rel="nofollow">enter image description here</a></p>
<p>A simple example at scikit-learn.org/stable/modules/linear_model.html looks like mine. I don't need prediction, so I didn't slice my data into training and test sets... maybe I'm misunderstanding the meaning of "X, y", but again, looking at the example in the link, it looks like mine. </p>
<p>EDIT 2: finally it worked. </p>
<pre><code>X=A[:,0]
X=X[:,np.newaxis]
regr=linear_model.LinearRegression()
regr.fit(X,A[:,1])
plt.plot(X,regr.predict(X))
</code></pre>
<p>The X param just needs to be a 2-dimensional array. The example in EDIT 1 really misled me :(. </p>
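<p>For reference, what <code>fit()</code>/<code>predict()</code> compute for a single feature can be written out by hand. This pure-Python sketch (no scikit-learn) shows the model itself only needs the x column and the y column; scikit-learn additionally insists that X be shaped (n_samples, 1), hence the <code>np.newaxis</code> reshape:</p>

```python
def fit_line(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = float(len(x))
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# y = 2x + 1 exactly, so the fit should recover slope 2 and intercept 1
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))
```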
| 0 | 2016-10-12T15:32:53Z | 40,003,677 | <p>You seem to be including the target values <code>A[:, 1]</code> in your training data. The fitting command is of the form <code>regr.fit(X, y)</code>.</p>
<p>You also seem to have a problem with this line:</p>
<blockquote>
<p><code>plt.plot(A[:,1],regr.predict(A), color='blue',linewidth=3)</code></p>
</blockquote>
<p>I think you should replace <code>A[:, 1]</code> with <code>A[:, 0]</code>, if you want to plot your prediction against the predictor values.</p>
<p>You may find it easier to split your data into <code>X</code> and <code>y</code> at the beginning - it may make things clearer.</p>
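<p>Concretely, the split can happen right after loading (a sketch with plain lists and the first few rows from the question; with numpy you would write <code>X = A[:, 0][:, np.newaxis]</code> and <code>y = A[:, 1]</code>):</p>

```python
# First rows of the question's data, as plain Python lists
A = [[69977, 4412], [118672, 4093], [127393, 12324]]

X = [[row[0]] for row in A]  # kept 2-D: fit() wants (n_samples, n_features)
y = [row[1] for row in A]    # 1-D targets

print(X[0], y[0])
```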
| 0 | 2016-10-12T16:26:03Z | [
"python",
"matplotlib",
"scikit-learn"
] |
Reject user input if criteria not met in Python | 40,002,645 | <p>I know this question is similar to one I have already asked, but it is an extension and so justified its own space :-)</p>
<p>I am a Python newbie writing a code which takes input from a user and then stores that user input in an array (to do more stuff with later), provided two criteria are met:</p>
<p>1) The total inputs add up to one</p>
<p>2) There is no input itself greater than one.</p>
<p>I have already had <a href="http://stackoverflow.com/questions/39956782/reject-or-loop-over-user-input-if-two-conditions-not-met">some help with this question</a>, but had to modify it a bit since my code inputs can't easily be written with the inputs being classified by some index "n" (the questions prompting input can't really be formatted as "input (n), where n runs from 1 to A") </p>
<p>Here is my attempt so far:</p>
<pre><code>num_array = list()
input_number = 1
while True:
a1 = raw_input('Enter concentration of hydrogen (in decimal form): ')
a2 = raw_input('Enter concentration of chlorine (in decimal form): ')
a3 = raw_input('Enter concentration of calcium (in decimal form): ')
li = [a1, a2, a3]
for s in li:
num_array.append(float(s))
total = sum([float(s)])
if float(s-1) > 1.0:
num_array.remove(float(s-1))
print('The input is larger than one.')
continue
if total > 1.0: # Total larger than one, remove last input and print reason
num_array.remove(float(s-1))
print('The sum of the percentages is larger than one.')
continue
if total == 1.0: # if the sum equals one: exit the loop
break
input_number += 1
</code></pre>
<p>I am rather glad it compiles, but Python doesn't like the line </p>
<pre><code>if float(s-1) > 1.0:
</code></pre>
<p>for which it throws the error:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'str' and 'int'
</code></pre>
<p>I know this is because the "s" is a string, not an integer, but I can't think of an easy way around the problem, or for how to implement a loop over user inputs in this case in general.</p>
<p>How to improve this program to only write user input to the array if the criteria are met?</p>
<p>Thanks for your time and help!</p>
| 0 | 2016-10-12T15:33:32Z | 40,002,725 | <p>You'll simply need to cast to float before you do subtraction:</p>
<pre><code>if float(s) - 1 > 1.0:
</code></pre>
<p>This way, you can subtract 1 from the float value of <code>s</code></p>
<p><strong>EDIT:</strong> I would also make the following changes to your code in order to function more correctly.</p>
<pre><code>num_array = list()
input_number = 1
while True:
a1 = raw_input('Enter concentration of hydrogen (in decimal form): ')
a2 = raw_input('Enter concentration of chlorine (in decimal form): ')
a3 = raw_input('Enter concentration of calcium (in decimal form): ')
try: # try to cast everything as float. If Exception, start the loop over.
li = [float(a1), float(a2), float(a3)]
except ValueError:
continue
total = 0 # reset total to 0 each iteration
for s in li:
num_array.append(s)
total += s # use += to keep a running total
if s > 1.0:
num_array.remove(s)
print('The input is larger than one.')
break # break rather than continue to break out of for loop and start while loop over
if total > 1.0:
num_array.remove(s)
print('The sum of the percentages is larger than one.')
break # break again
if total == 1.0:
break
</code></pre>
<p>I think this is what you were going for.</p>
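<p>If it helps, the two rules can also be factored into a pure function, separate from the <code>raw_input</code> loop, so they are easy to exercise on their own (a sketch, not your exact code; the message strings are illustrative):</p>

```python
def check_concentrations(values):
    """Validate a list of floats that should each be <= 1.0 and sum to 1.0."""
    if any(v > 1.0 for v in values):
        return False, "The input is larger than one."
    total = sum(values)
    if total > 1.0:
        return False, "The sum of the percentages is larger than one."
    if abs(total - 1.0) > 1e-9:  # tolerate float rounding
        return False, "The percentages do not add up to one."
    return True, "OK"

print(check_concentrations([0.5, 0.3, 0.2]))
print(check_concentrations([0.9, 0.9, 0.2]))
```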
| 2 | 2016-10-12T15:37:53Z | [
"python",
"loops",
"input"
] |
Reject user input if criteria not met in Python | 40,002,645 | <p>I know this question is similar to one I have already asked, but it is an extension and so justified its own space :-)</p>
<p>I am a Python newbie writing a code which takes input from a user and then stores that user input in an array (to do more stuff with later), provided two criteria are met:</p>
<p>1) The total inputs add up to one</p>
<p>2) There is no input itself greater than one.</p>
<p>I have already had <a href="http://stackoverflow.com/questions/39956782/reject-or-loop-over-user-input-if-two-conditions-not-met">some help with this question</a>, but had to modify it a bit since my code inputs can't easily be written with the inputs being classified by some index "n" (the questions prompting input can't really be formatted as "input (n), where n runs from 1 to A") </p>
<p>Here is my attempt so far:</p>
<pre><code>num_array = list()
input_number = 1
while True:
a1 = raw_input('Enter concentration of hydrogen (in decimal form): ')
a2 = raw_input('Enter concentration of chlorine (in decimal form): ')
a3 = raw_input('Enter concentration of calcium (in decimal form): ')
li = [a1, a2, a3]
for s in li:
num_array.append(float(s))
total = sum([float(s)])
if float(s-1) > 1.0:
num_array.remove(float(s-1))
print('The input is larger than one.')
continue
if total > 1.0: # Total larger than one, remove last input and print reason
num_array.remove(float(s-1))
print('The sum of the percentages is larger than one.')
continue
if total == 1.0: # if the sum equals one: exit the loop
break
input_number += 1
</code></pre>
<p>I am rather glad it compiles, but Python doesn't like the line </p>
<pre><code>if float(s-1) > 1.0:
</code></pre>
<p>for which it throws the error:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'str' and 'int'
</code></pre>
<p>I know this is because the "s" is a string, not an integer, but I can't think of an easy way around the problem, or for how to implement a loop over user inputs in this case in general.</p>
<p>How to improve this program to only write user input to the array if the criteria are met?</p>
<p>Thanks for your time and help!</p>
| 0 | 2016-10-12T15:33:32Z | 40,002,840 | <p>One way you can ensure the input type is with a try/except and a condition to exit the loop:</p>
<pre><code>while True:
a1 = raw_input('Enter concentration of hydrogen (in decimal form): ')
a2 = raw_input('Enter concentration of chlorine (in decimal form): ')
a3 = raw_input('Enter concentration of calcium (in decimal form): ')
try:
a1 = float(a1)
a2 = float(a2)
a3 = float(a3)
break
except ValueError:
print('All inputs must be numerals')
</code></pre>
<p>And then it will just stay in the loop if they didn't enter something Python could convert to a float.</p>
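<p>The same retry idea can be written with the input source passed in, so the loop logic is testable without a console (a hedged sketch; the names are illustrative, and in real use the <code>answers</code> feed would be replaced by <code>raw_input</code> calls):</p>

```python
def ask_floats(prompts, answers):
    """For each prompt, keep consuming answers until one parses as a float."""
    answers = iter(answers)
    result = []
    for prompt in prompts:
        while True:
            raw = next(answers)
            try:
                result.append(float(raw))
                break
            except ValueError:
                # here you would print('All inputs must be numerals') and re-ask
                pass
    return result

print(ask_floats(["hydrogen", "chlorine"], ["abc", "0.4", "0.6"]))
```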
| 2 | 2016-10-12T15:43:33Z | [
"python",
"loops",
"input"
] |
Reject user input if criteria not met in Python | 40,002,645 | <p>I know this question is similar to one I have already asked, but it is an extension and so justified its own space :-)</p>
<p>I am a Python newbie writing a code which takes input from a user and then stores that user input in an array (to do more stuff with later), provided two criteria are met:</p>
<p>1) The total inputs add up to one</p>
<p>2) There is no input itself greater than one.</p>
<p>I have already had <a href="http://stackoverflow.com/questions/39956782/reject-or-loop-over-user-input-if-two-conditions-not-met">some help with this question</a>, but had to modify it a bit since my code inputs can't easily be written with the inputs being classified by some index "n" (the questions prompting input can't really be formatted as "input (n), where n runs from 1 to A") </p>
<p>Here is my attempt so far:</p>
<pre><code>num_array = list()
input_number = 1
while True:
a1 = raw_input('Enter concentration of hydrogen (in decimal form): ')
a2 = raw_input('Enter concentration of chlorine (in decimal form): ')
a3 = raw_input('Enter concentration of calcium (in decimal form): ')
li = [a1, a2, a3]
for s in li:
num_array.append(float(s))
total = sum([float(s)])
if float(s-1) > 1.0:
num_array.remove(float(s-1))
print('The input is larger than one.')
continue
if total > 1.0: # Total larger than one, remove last input and print reason
num_array.remove(float(s-1))
print('The sum of the percentages is larger than one.')
continue
if total == 1.0: # if the sum equals one: exit the loop
break
input_number += 1
</code></pre>
<p>I am rather glad it compiles, but Python doesn't like the line </p>
<pre><code>if float(s-1) > 1.0:
</code></pre>
<p>for which it throws the error:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'str' and 'int'
</code></pre>
<p>I know this is because the "s" is a string, not an integer, but I can't think of an easy way around the problem, or for how to implement a loop over user inputs in this case in general.</p>
<p>How to improve this program to only write user input to the array if the criteria are met?</p>
<p>Thanks for your time and help!</p>
| 0 | 2016-10-12T15:33:32Z | 40,003,795 | <p>Your code is a bit confused. I'm sure that someone could pick holes in the following but give it a whirl and see if it helps. </p>
<p>Initially putting the questions into a list allows you to ask and validate the input one at a time within a single loop, exiting the loop only when all of the questions have been asked, validated and stored. </p>
<p>First define the questions, next process each question within a loop and within that loop, use the <code>while</code> statement to stay on the same question, until a valid answer has been supplied. </p>
<pre><code>input_number = 1
questions = []
answers = []
questions.append('Enter concentration of hydrogen (in decimal form): ')
questions.append('Enter concentration of chlorine (in decimal form): ')
questions.append('Enter concentration of calcium (in decimal form): ')
for i in questions:
while True:
try:
ans = float(raw_input(i)) #Accept the answer to each question
except ValueError:
print('Please input in decimal form')
continue # Invalid input, try again
if ans > 1.0:
print('The input is larger than one.')
continue # Invalid input, try again
if sum(answers,ans) > 1.0:
print('The sum of the answers is larger than one.')
print(answers)
continue # Invalid input, try again
answers.append(ans)
break # The answer to this question has been validated, add it to the list
print ("Your validated input is ",answers)
input_number += 1
</code></pre>
| 1 | 2016-10-12T16:32:32Z | [
"python",
"loops",
"input"
] |
Getting element by undefinied tag name | 40,002,652 | <p>I'm parsing an xml document in Python using minidom. </p>
<p>I have an element:</p>
<pre><code><informationRequirement>
<requiredDecision href="id"/>
</informationRequirement>
</code></pre>
<p>The only thing I need is the value of href in the subelement, but its tag name can be different (for example <em>requiredKnowledge</em> instead of <em>requiredDecision</em>; it will always begin with <em>required</em>).
If the tag were always the same, I would use something like:</p>
<pre><code>element.getElementsByTagName('requiredDecision')[0].attributes['href'].value
</code></pre>
<p>But that's not the case. What can substitute for this, given that the tag name varies?</p>
<p>(there will always be <strong>one</strong> subelement)</p>
| 0 | 2016-10-12T15:33:40Z | 40,006,370 | <p>If you're always guaranteed to have one subelement, just grab that element:</p>
<pre><code>element.childNodes[0].attributes['href'].value
</code></pre>
<p>However, this is brittle. A (perhaps) better approach could be:</p>
<pre><code>hrefs = []
for child in element.childNodes:
    # skip text nodes, which have no tagName attribute
    if child.nodeType == child.ELEMENT_NODE and child.tagName.startswith('required'):
        hrefs.append(child.attributes['href'].value)
</code></pre>
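<p>Put together against the question's snippet, with a guard so whitespace text nodes (which have no <code>tagName</code>) are skipped:</p>

```python
from xml.dom import minidom

doc = minidom.parseString(
    '<informationRequirement><requiredDecision href="id"/></informationRequirement>')
element = doc.documentElement

href = None
for child in element.childNodes:
    # only elements have tagName; text nodes would raise AttributeError
    if child.nodeType == child.ELEMENT_NODE and child.tagName.startswith('required'):
        href = child.attributes['href'].value
print(href)
```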
| 0 | 2016-10-12T18:59:58Z | [
"python",
"xml-parsing",
"minidom"
] |
Refreshing QGraphicsScene pyqt - Odd results | 40,002,664 | <p>I am trying to build a simple node graph in pyqt. I am having some trouble with a custom widget leaving artifacts and not drawing correctly when moved by the mouse. See images below:</p>
<p><a href="https://i.stack.imgur.com/KBMoP.png" rel="nofollow"><img src="https://i.stack.imgur.com/KBMoP.png" alt="Before I move the node with the mouse"></a>
Before I move the node with the mouse.</p>
<p><a href="https://i.stack.imgur.com/C6dBz.png" rel="nofollow"><img src="https://i.stack.imgur.com/C6dBz.png" alt="After I move the mode with the mouse"></a>
After I move the mode with the mouse</p>
<p>I thought that maybe it was the boundingRect method on my custom widget called NodeGFX:</p>
<pre><code>def boundingRect(self):
"""Bounding."""
return QtCore.QRectF(self.pos().x(),
self.pos().y(),
self.width,
self.height)
</code></pre>
<p>Any ideas? The complete .py file is below.</p>
<pre><code>"""Node graph and related classes."""
from PyQt4 import QtGui
from PyQt4 import QtCore
# import canvas
'''
TODO
Function
- Delete Connection
- Delete nodes
Look
- Use look information from settings
- nodes
- connections
- canvas
'''
# ----------------------------- NodeGFX Class --------------------------------#
# Provides a visual representation of a node in the node interface. Requires
# canvas interface. Added to main scene
#
class NodeGFX(QtGui.QGraphicsItem):
"""Display a node."""
# --------------------------- INIT ---------------------------------------#
#
# Initialize the node
# n_x - Where in the graphics scene to position the node. x cord
# n_y - Where in the graphics scene to position the node. y cord
# n_node - Node object from Canvas. Used in construction of node
# n_scene - What is the parent scene of this object
#
def __init__(self, n_x, n_y, n_node, n_scene):
"""INIT."""
super(NodeGFX, self).__init__()
# Collection of input and output AttributeGFX objects
self.gscene = n_scene
self.inputs = {}
self.outputs = {}
# An identifier for selections
self.io = "node"
# The width of a node - TODO implement in settings!
self.width = 350
# Use information from the passed in node to build
# this object.
self.name = n_node.name
node_inputs = n_node.in_attributes
node_outputs = n_node.out_attributes
# How far down to go between each attribute TODO implement in settings!
attr_offset = 25
org_offset = attr_offset
if len(node_inputs) > len(node_outputs):
self.height = attr_offset * len(node_inputs) + (attr_offset * 2)
else:
self.height = attr_offset * len(node_outputs) + (attr_offset * 2)
# Create the node!
'''
QtGui.QGraphicsRectItem.__init__(self,
n_x,
n_y,
self.width,
self.height,
scene=n_scene)
'''
self.setFlag(QtGui.QGraphicsItem.ItemIsMovable, True)
self.lable = QtGui.QGraphicsTextItem(self.name,
self)
# Set up inputs
for key, value in node_inputs.iteritems():
new_attr_gfx = AttributeGFX(0,
0,
self.scene(),
self,
key,
value.type,
"input")
new_attr_gfx.setPos(0, attr_offset)
self.inputs[key] = new_attr_gfx
attr_offset = attr_offset + 25
# set up Outputs
attr_offset = org_offset
for key, value in node_outputs.iteritems():
new_attr_gfx = AttributeGFX(0,
0,
self.scene(),
self,
key,
value.type,
"output")
new_attr_gfx.setPos(self.width, attr_offset)
self.outputs[key] = new_attr_gfx
attr_offset = attr_offset + 25
# ---------------- Utility Functions -------------------------------------#
def canv(self):
"""Link to the canvas object."""
return self.scene().parent().parent().canvasobj
def __del__(self):
"""Destory a node and all child objects."""
# Remove self from GFX scene
print "Node del func called"
self.scene().removeItem(self)
def boundingRect(self):
"""Bounding."""
return QtCore.QRectF(self.pos().x(),
self.pos().y(),
self.width,
self.height)
def mousePressEvent(self, event):
self.update()
super(NodeGFX, self).mousePressEvent(event)
def mouseReleaseEvent(self, event):
self.update()
super(NodeGFX, self).mouseReleaseEvent(event)
# ------------- Event Functions ------------------------------------------#
def mouseMoveEvent(self, event):
"""Update connections when nodes are moved."""
self.scene().updateconnections()
QtGui.QGraphicsItem.mouseMoveEvent(self, event)
self.gscene.update()
def mousePressEvent(self, event):
"""Select a node."""
print "Node Selected"
self.scene().selection(self)
QtGui.QGraphicsEllipseItem.mousePressEvent(self, event)
# ----------- Paint Functions -------------------------------------------#
def paint(self, painter, option, widget):
painter.setPen(QtCore.Qt.NoPen)
painter.setBrush(QtCore.Qt.darkGray)
self.width = 400
self.height = 400
painter.drawEllipse(-7, -7, 20, 20)
rectangle = QtCore.QRectF(0,
0,
self.width,
self.height)
painter.drawRoundedRect(rectangle, 15.0, 15.0)
# ----------------------------- NodeGFX Class --------------------------------#
# Provides a visual representation of a Connection in the node interface.
# Requires canvas interface and two nodes. Added to main scene.
# Using two attributes, draw a line between them. When
# Set up, a connection is also made on the canvas. unlike the canvas which
# stores connections on attributes, connectionGFX objects are stored in a
# list on the scene object
#
class ConnectionGFX (QtGui.QGraphicsLineItem):
"""A connection between two nodes."""
# ---------------------- Init Function -----------------------------------#
#
# Inits the Connection.
# n_scene - The scene to add these connections to
# n_upsteam - a ref to an upstream attributeGFX object.
# n_downstream - a ref to a downstream attributeGFX object.
#
def __init__(self, n_scene, n_upstream, n_downstream):
"""INIT."""
# Links to the AttributeGFX objs
self.upstreamconnect = n_upstream
self.downstreamconnect = n_downstream
self.io = 'connection'
super(ConnectionGFX, self).__init__(scene=n_scene)
self.setFlag(QtGui.QGraphicsItem.ItemIsSelectable, True)
self.scene().addItem(self)
self.update()
# ----------------- Utility functions -------------------------------
# When nodes are moved update is called. This will change the line
def update(self):
"""Called when new Draw."""
super(ConnectionGFX, self).update()
x1, y1, x2, y2 = self.updatepos()
self.setLine(x1, y1, x2, y2)
# Called by update calculate the new line points
def updatepos(self):
"""Get new position Data to draw line."""
up_pos = QtGui.QGraphicsItem.scenePos(self.upstreamconnect)
dn_pos = QtGui.QGraphicsItem.scenePos(self.downstreamconnect)
x1 = up_pos.x()
y1 = up_pos.y()
x2 = dn_pos.x()
y2 = dn_pos.y()
return x1, y1, x2, y2
# -------------------------- Event Overides ------------------------------#
def mousePressEvent(self, event):
"""Select a connection."""
print "Connection Selected"
self.scene().selection(self)
QtGui.QGraphicsEllipseItem.mousePressEvent(self, event)
# ------------------------ AttributeGFX Class --------------------------------#
# Provides a visual representation of an attribute. Used for both input and
# output connections. Stored on nodes themselves. They do not hold any of
# the attribute values. This info is stored and modded in the canvas.
#
class AttributeGFX (QtGui.QGraphicsEllipseItem):
"""An attribute on a node."""
# ---------------- Init -------------------------------------------------#
#
# Init the attributeGFX obj. This object is created by the nodeGFX obj
# n_x - Position x
# n_y - Position y
# n_scene - The scene to add this object to
# n_parent - The patent node of this attribute. Used to link
# n_name - The name of the attribute, must match whats in canvas
# n_type - The data type of the attribute
# n_io - Identifier for selection
def __init__(self,
n_x,
n_y,
n_scene,
n_parent,
n_name,
n_type,
n_io):
"""INIT."""
self.width = 15
self.height = 15
self.io = n_io
self.name = n_name
# Use same object for inputs and outputs
self.is_input = True
if "output" in n_io:
self.is_input = False
QtGui.QGraphicsEllipseItem.__init__(self,
n_x,
n_y,
self.width,
self.height,
n_parent,
n_scene)
self.lable = QtGui.QGraphicsTextItem(n_name, self, n_scene)
# self.lable.setY(n_y)
# TODO - Need a more procedural way to place the outputs...
if self.is_input is False:
n_x = n_x - 100
# self.lable.setX(self.width + n_x)
self.lable.setPos(self.width + n_x, n_y)
# ----------------------------- Event Overides -------------------------- #
def mousePressEvent(self, event):
"""Select and attribute."""
print "Attr Selected"
self.scene().selection(self)
QtGui.QGraphicsEllipseItem.mousePressEvent(self, event)
# ------------------------ SceneGFX Class --------------------------------#
# Provides tracking of all the elements in the scene and provides all the
# functionality. Is a child of the NodeGraph object. Commands for editing the
# node network beyond how they look in the node graph are passed up to the
# canvas. If the functions in the canvas return true then the operation is
# permitted and the data in the canvas has been changed.
#
class SceneGFX(QtGui.QGraphicsScene):
"""Stores grapahic elems."""
# -------------------------- init -------------------------------------- #
#
# n_x - position withing the node graph widget x cord
# n_y - position withing the node graph widget y cord
def __init__(self, n_x, n_y, n_width, n_height, n_parent):
"""INIT."""
# Dict of nodes. Must match canvas
self.nodes = {}
# list of connections between nodes
self.connections = []
# The currently selected object
self.cur_sel = None
# How far to offset newly created nodes. Prevents nodes from
# being created on top of each other
self.node_creation_offset = 100
super(SceneGFX, self).__init__(n_parent)
self.width = n_width
self.height = n_height
def addconnection(self, n1_node, n1_attr, n2_node, n2_attr):
"""Add a new connection."""
new_connection = ConnectionGFX(self,
self.nodes[n1_node].outputs[n1_attr],
self.nodes[n2_node].inputs[n2_attr])
self.connections.append(new_connection)
self.parent().update_attr_panel()
def helloworld(self):
"""test."""
print "Scene - hello world"
def updateconnections(self):
"""Update connections."""
for con in self.connections:
con.update()
def canv(self):
"""Link to the canvas object."""
return self.parent().canvasobj
def mainwidget(self):
"""Link to the main widget obj."""
return self.parent()
def delselection(self):
"""Delete the selected obj."""
if "connection" in self.cur_sel.io:
print "Deleteing Connection"
if self.mainwidget().delete_connection(self.cur_sel):
self.removeItem(self.cur_sel)
for x in range(0, len(self.connections) - 1):
if self.cur_sel == self.connections[x]:
del self.connections[x]
break
self.cur_sel = None
elif "node" in self.cur_sel.io:
if self.mainwidget().delete_node(self.cur_sel):
print "Deleteing Node"
node_name = self.cur_sel.name
# First search for all connections associated with this node
# and delete
# Create dict from list
connection_dict = {}
for x in range(0, len(self.connections)):
connection_dict[str(x)] = self.connections[x]
new_connection_list = []
for key, con in connection_dict.iteritems():
up_node = con.upstreamconnect.parentItem().name
down_node = con.downstreamconnect.parentItem().name
if up_node == node_name or down_node == node_name:
self.removeItem(connection_dict[key])
else:
new_connection_list.append(con)
self.connections = new_connection_list
del connection_dict
self.removeItem(self.nodes[node_name])
del self.nodes[node_name]
self.parent().update_attr_panel()
def keyPressEvent(self, event):
"""Listen for key presses on scene obj."""
if event.key() == QtCore.Qt.Key_Delete:
self.delselection()
super(SceneGFX, self).keyPressEvent(event)
def selection(self, sel):
"""Function to handel selections and connections."""
last_sel = self.cur_sel
self.cur_sel = sel
print "Last Sel:", last_sel
print "Current Sel:", self.cur_sel
if "node" in sel.io:
self.mainwidget().selected_node = sel
self.mainwidget().attr_panel.update_layout()
# Need to compare the current and last selections to see
# if a connection has been made
if last_sel != None:
if "input" in last_sel.io and "output" in self.cur_sel.io:
lspn = last_sel.parentItem().name
cspn = self.cur_sel.parentItem().name
if lspn is not cspn:
print "Connecting Attrs 1"
self.mainwidget().connect(last_sel.parentItem().name,
last_sel.name,
self.cur_sel.parentItem().name,
self.cur_sel.name)
last_sel = None
self.cur_sel = None
elif "output" in last_sel.io and "input" in self.cur_sel.io:
lspn = last_sel.parentItem().name
cspn = self.cur_sel.parentItem().name
if lspn is not cspn:
print "Connecting Attrs 2"
self.mainwidget().connect(last_sel.parentItem().name,
last_sel.name,
self.cur_sel.parentItem().name,
self.cur_sel.name)
last_sel = None
self.cur_sel = None
class NodeGraph (QtGui.QGraphicsView):
"""Main Wrapper for node network."""
def __init__(self, p):
"""INIT."""
QtGui.QGraphicsView.__init__(self, p)
self.mainwin = p
self.initui()
self.nodes = {}
def initui(self):
"""Set up the UI."""
self.setFixedSize(1000, 720)
self.scene = SceneGFX(0, 0, 25, 1000, self.mainwin)
self.setScene(self.scene)
def addnode(self, node_name, node_type):
"""Forward node creation calls to scene."""
br = self.mapToScene(self.viewport().geometry()).boundingRect()
x = br.x() + (br.width()/2)
y = br.y() + (br.height()/2)
new_node = NodeGFX(x,
y,
self.canv().nodes[node_name],
self)
self.scene.addItem(new_node)
self.nodes[node_name] = new_node
def addconnection(self, n1_node, n1_attr, n2_node, n2_attr):
"""Add a connection between 2 nodes."""
self.scene.addconnection(n1_node, n1_attr, n2_node, n2_attr)
def helloworld(self):
"""test."""
print "Node graph - hello world"
def canv(self):
"""Link to the canvas object."""
return self.mainwin.canvasobj
def change_name_accepted(self, old_name, new_name):
"""Update the node graph to accept new names"""
pass
</code></pre>
| 0 | 2016-10-12T15:34:39Z | 40,023,399 | <p>So my issue was that I was not scaling the bounding box in "object space"... The following changes fix my issue.</p>
<pre><code> def boundingRect(self):
"""Bounding."""
# Added .5 for padding
return QtCore.QRectF(-.5,
-.5,
self.width + .5,
self.height + .5)
</code></pre>
<hr>
<pre><code> def paint(self, painter, option, widget):
painter.setPen(QtCore.Qt.NoPen)
painter.setBrush(QtCore.Qt.darkGray)
self.width = 400
self.height = 400
rectangle = QtCore.QRectF(0,
0,
self.width,
self.height)
painter.drawRoundedRect(rectangle, 15.0, 15.0)
</code></pre>
| 0 | 2016-10-13T14:10:34Z | [
"python",
"qt",
"canvas",
"pyqt",
"graphics2d"
] |
Letter Game Challenge Python | 40,002,754 | <p>A word game awards points for the letters used in a word. The lower the frequency of the letter in the English language, the higher the score for the letter. Write a program that asks the user to input a word. The program should then output the score for the word according to the following rules:</p>
<p><img src="//i.stack.imgur.com/Fhyli.jpg" alt="Table for letters and score in numbers"></p>
<p>How would you add a score together for letters that the user has inputted?</p>
<p>I am having issues with this</p>
<pre><code>#Letter Game Challenge
letters = ("e","a","r","i","o","t","n","s","l","c","u","d","p","m","h"
,"g","b","f","y","w","k","v","x","z","j","q")
points = (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
,24,25,26)
def main():
global word_input
print ("Input a word to see the score")
word_input = input()
if any(letter in word_input for letter in letters):
l1()
else:
print ("Enter a word with letters in!")
main()
</code></pre>
<p>This is what I have so far just don't know how to split the word inputted and check for a letter and give a score for it.</p>
| -1 | 2016-10-12T15:39:30Z | 40,059,702 | <p>To loop through the letters of a word use a for loop:</p>
<pre><code>for letter in word_input:
</code></pre>
<p>You will need to look up the score for each letter. Try using a dictionary:</p>
<pre><code>scores = {"e": 1,
"a": 2,
#etc
</code></pre>
<p>Then you can look up the score of a letter with <code>scores[letter]</code>.</p>
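Putting the two pieces together, here is a minimal sketch that builds the score table from the frequency-ordered letters given in the question (so "e" scores 1 and "q" scores 26); `word_score` is just an illustrative name:

```python
# Frequency-ordered letters from the question's table: "e" scores 1 ... "q" scores 26.
letters = ("e", "a", "r", "i", "o", "t", "n", "s", "l", "c", "u", "d", "p",
           "m", "h", "g", "b", "f", "y", "w", "k", "v", "x", "z", "j", "q")
scores = dict(zip(letters, range(1, 27)))

def word_score(word):
    """Sum the score of every letter in the word; ignore non-letters."""
    return sum(scores.get(letter, 0) for letter in word.lower())

print(word_score("cat"))  # c=10 + a=2 + t=6 = 18
```

Non-letter characters are skipped via `scores.get(letter, 0)`, and `word.lower()` makes the lookup case-insensitive.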
| 0 | 2016-10-15T13:38:46Z | [
"python",
"python-3.x"
] |
Django Multi-App Database Router not working | 40,002,781 | <p>I'm doing a project using Django 1.10 and Python 2.7.</p>
<p>I have read and tried to implement database routing according to <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#using-routers" rel="nofollow">this page</a>. I have also read up on a lot of different tutorials and other stackoverflow questions. However, I can't seem to get it to work. </p>
<p>This is the scenario that I have:</p>
<p>I need all analytics, auth and admin apps models on the default database. Then cancellation app on a separate database, and driveractivity app on another separate database.</p>
<p>This is the router that I'm using:</p>
<pre><code>from django.conf import settings
class AppRouter:
def db_for_read(self, model, **hints):
if model._meta.app_label == 'analytics':
return 'default'
elif model._meta.app_label == 'cancellation':
return 'cancellations_db'
elif model._meta.app_label == 'driveractivity':
return 'driveractivity_db'
return None
def db_for_write(self, model, **hints):
if model._meta.app_label == 'analytics':
return 'default'
elif model._meta.app_label == 'cancellation':
return 'cancellations_db'
elif model._meta.app_label == 'driveractivity':
return 'driveractivity_db'
return None
def allow_migrate(self, db, app_label, model=None, **hints):
if app_label == 'cancellation' and db == 'cancellations_db':
return True
if app_label == 'driveractivity' and db == 'driveractivity_db':
return True
if app_label in ('analytics', 'auth', 'admin', 'contenttypes', 'sessions', 'rest_framework') and db == 'default':
return True
return False
</code></pre>
<p>My database settings are as follows (<strong>settings.py</strong>): </p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'HOST': 'localhost',
'NAME': 'analytics',
'USER': 'root'
},
'driveractivity_db': {
'ENGINE': 'django.db.backends.mysql',
'HOST': 'localhost',
'PORT': '3306',
'NAME': 'driveractivity',
'USER': 'root',
},
'cancellations_db': {
'ENGINE': 'django.db.backends.mysql',
'HOST': 'localhost',
'PORT': '3306',
'NAME': 'teke_cancellation',
'USER': 'root'
}
}
DATABASE_ROUTERS = ['analytics.AppRouter.AppRouter']
DB_Mapping = {
"cancellation": "cancellations_db",
"driveractivity": "driveractivity_db"
}
</code></pre>
<p><strong>models.py - cancellation</strong></p>
<pre><code>class Cancellation(models.Model):
id = models.IntegerField(primary_key=True)
user = models.EmailField(max_length=255,blank=False)
time = models.DateField(blank=False)
created_at = models.DateTimeField(auto_now=True,auto_now_add=False)
updated_at = models.DateTimeField(auto_now=False,auto_now_add=True)
class Meta:
app_label = 'cancellation'
class PenaltyType(models.Model):
id = models.IntegerField(primary_key=True)
name = models.CharField(max_length=255,blank=False)
created_at = models.DateTimeField(auto_now=True,auto_now_add=False)
updated_at = models.DateTimeField(auto_now=False,auto_now_add=True)
class Meta:
app_label = 'cancellation'
class Penalty(models.Model):
id = models.IntegerField(primary_key=True)
user = models.EmailField(max_length=255,blank=False)
meted = models.BooleanField(default=False)
penalty_type = models.ForeignKey(PenaltyType)
created_at = models.DateTimeField(auto_now=True,auto_now_add=False)
updated_at = models.DateTimeField(auto_now=False,auto_now_add=True)
class Meta:
app_label = 'cancellation'
</code></pre>
<p><strong>models.py - driveractivity</strong></p>
<pre><code>class Activity(models.Model):
email = models.EmailField(null=False)
driver_name = models.CharField(max_length=30,null=False)
vehicle_reg = models.CharField(max_length=30,null=False)
status = models.CharField(max_length=15)
desc = models.CharField(max_length=250)
lng = models.FloatField()
lat = models.FloatField()
time = models.DateTimeField()
created_at = models.DateTimeField(auto_now=True,auto_now_add=False)
updated_at = models.DateTimeField(auto_now=False,auto_now_add=True)
class Meta:
app_label = 'driveractivity'
class DistanceDetails(models.Model):
email = models.EmailField(null=False)
driver_name = models.CharField(max_length=30,null=False)
vehicle_reg = models.CharField(max_length=30,null=False)
new_lng = models.FloatField()
new_lat = models.FloatField()
last_state = models.CharField(max_length=15,null=False)
last_lng = models.FloatField()
last_lat = models.FloatField()
created_at = models.DateTimeField(auto_now=True,auto_now_add=False)
updated_at = models.DateTimeField(auto_now=False,auto_now_add=True)
class Meta:
app_label = 'driveractivity'
</code></pre>
<p>Edited <strong>router.py</strong></p>
<pre><code>class AppRouter(object):
def db_for_read(self, model, **hints):
return DB_Mapping.get(model._meta.app_label, 'default')
def db_for_write(self, model, **hints):
return DB_Mapping.get(model._meta.app_label, 'default')
def allow_migrate(self, db, app_label, model=None, **hints):
if app_label in DB_Mapping.keys() or db in DB_Mapping.values():
return True
else:
return None
</code></pre>
<p>Is there something I'm doing wrong?</p>
| 1 | 2016-10-12T15:40:36Z | 40,003,844 | <p>check if this works for you:</p>
<p><strong>settings.py</strong></p>
<pre><code>DB_Mapping = {
"cancellation": "cancellation_db",
"driveractivity": "driveractivity_db",
...
}
</code></pre>
<p><strong>router.py</strong></p>
<pre><code>from project.settings import DB_Mapping
class MyRouter(object):
def db_for_read(self, model, **hints):
return DB_Mapping.get(model._meta.app_label, 'default')
def db_for_write(self, model, **hints):
return DB_Mapping.get(model._meta.app_label, 'default')
</code></pre>
<p>I use a similar setup, where I route to a different database user based on the group of the user who is making the request.</p>
<p>The <code>allow_migrate</code> method can also be written similarly: return <code>True</code> if <code>db == 'default'</code>, or if <code>model._meta.app_label in DB_Mapping.keys() or db in DB_Mapping.values()</code>, and <code>False</code> otherwise.</p>
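As a minimal, Django-free sketch of a full router under that scheme — the fake model below only mimics `_meta.app_label` so the routing logic can be exercised outside Django, and the database aliases are the ones from the question's settings:

```python
# Database aliases taken from the question's settings; the mapping says
# which apps live outside the 'default' database.
DB_Mapping = {
    "cancellation": "cancellations_db",
    "driveractivity": "driveractivity_db",
}

class MyRouter(object):
    def db_for_read(self, model, **hints):
        return DB_Mapping.get(model._meta.app_label, 'default')

    def db_for_write(self, model, **hints):
        return DB_Mapping.get(model._meta.app_label, 'default')

    def allow_migrate(self, db, app_label, model=None, **hints):
        # Mapped apps migrate only on their own database; every other app
        # (auth, admin, contenttypes, ...) migrates only on 'default'.
        if app_label in DB_Mapping:
            return db == DB_Mapping[app_label]
        return db == 'default'

# Django-free demonstration: a fake model exposing only _meta.app_label.
class _Meta(object):
    def __init__(self, app_label):
        self.app_label = app_label

class FakeCancellation(object):
    _meta = _Meta("cancellation")

router = MyRouter()
print(router.db_for_read(FakeCancellation))              # cancellations_db
print(router.allow_migrate('default', 'auth'))           # True
print(router.allow_migrate('cancellations_db', 'auth'))  # False
```

In a real project the class would live in a module listed in `DATABASE_ROUTERS`, with `DB_Mapping` imported from settings.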
| 0 | 2016-10-12T16:35:48Z | [
"python",
"django"
] |
Numpy choose without replacement along specific dimension | 40,002,821 | <p>Without replacement I'm choosing k elements from a sample n distinct times according to a specified distribution.</p>
<p>The iterative solution is simple:</p>
<pre><code>for _ in range(n):
np.random.choice(a, size=k, replace=False, p=p)
</code></pre>
<p>I can't set <code>size=(k, n)</code> because I would sample without replacement across samples. <code>a</code> and <code>n</code> are large, I hope for a vectorized solution.</p>
| 2 | 2016-10-12T15:42:44Z | 40,003,823 | <p>So the full iterative solution is:</p>
<pre><code>In [158]: ll=[]
In [159]: for _ in range(10):
...: ll.append(np.random.choice(5,3))
In [160]: ll
Out[160]:
[array([3, 2, 4]),
array([1, 1, 3]),
array([0, 3, 1]),
...
array([0, 3, 0])]
In [161]: np.array(ll)
Out[161]:
array([[3, 2, 4],
[1, 1, 3],
...
[3, 0, 1],
[4, 4, 2],
[0, 3, 0]])
</code></pre>
<p>That could be cast as list comprehension: <code>np.array([np.random.choice(5,3) for _ in range(10)])</code>.</p>
<p>Or an equivalent where you <code>A=np.zeros((10,3),int)</code> and <code>A[i,:]=np.random...</code></p>
<p>In other words you want choices from <code>range(5)</code>, but want them to be unique only within rows.</p>
<p>The <code>np.random.choice</code> docs suggest an alternative:</p>
<pre><code>>>> np.random.choice(5, 3, replace=False)
array([3,1,0])
>>> #This is equivalent to np.random.permutation(np.arange(5))[:3]
</code></pre>
<p>I'm wondering if I can generate</p>
<pre><code>array([[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
...
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]])
</code></pre>
<p>and permute values within rows. But with <code>permute</code> I can only shuffle all the columns together. So I'm still stuck with iterating on rows to produce the choice without replacement.</p>
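For completeness, in the uniform case (no `p` weights) the row-wise permutation can in fact be vectorized by argsort-ing independent random keys per row; this does not help with the weighted sampling the question asks about, and the variable names here are illustrative:

```python
import numpy as np

# Each row of random keys, argsorted, is an independent permutation of 0..4.
keys = np.random.rand(10, 5)
perms = keys.argsort(axis=1)

# Taking the first 3 columns gives 10 independent draws of 3 distinct
# values from range(5) -- uniform sampling without replacement per row.
samples = perms[:, :3]
print(samples.shape)   # (10, 3)
```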
| 0 | 2016-10-12T16:34:25Z | [
"python",
"numpy"
] |
Numpy choose without replacement along specific dimension | 40,002,821 | <p>Without replacement I'm choosing k elements from a sample n distinct times according to a specified distribution.</p>
<p>The iterative solution is simple:</p>
<pre><code>for _ in range(n):
np.random.choice(a, size=k, replace=False, p=p)
</code></pre>
<p>I can't set <code>size=(k, n)</code> because I would sample without replacement across samples. <code>a</code> and <code>n</code> are large, I hope for a vectorized solution.</p>
| 2 | 2016-10-12T15:42:44Z | 40,003,884 | <p>Here are a couple of suggestions.</p>
<ol>
<li><p>You can preallocate the <code>(n, k)</code> output array, then do the choice multiple times:</p>
<pre><code>result = np.zeros((n, k), dtype=a.dtype)
for row in range(n):
result[row, :] = np.random.choice(a, size=k, replace=False, p=p)
</code></pre></li>
<li><p>You can precompute the <code>n * k</code> selection indices and then apply them to <code>a</code> all at once. Since you want to sample the indices without replacement, you will want to use <code>np.choice</code> in a loop again:</p>
<pre><code>indices = np.concatenate([np.random.choice(a.size, size=k, replace=False, p=p) for _ in range(n)])
result = a[indices].reshape(n, k)
</code></pre></li>
</ol>
| 0 | 2016-10-12T16:37:54Z | [
"python",
"numpy"
] |
Wait for page redirect Selenium WebDriver (Python) | 40,002,826 | <p>I have a page which loads dynamic content with ajax and then redirects after a certain amount of time (not fixed). How can I force Selenium WebDriver to wait for the page to redirect and then immediately go to a different link?</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Chrome();
driver.get("http://www.website.com/wait.php")
</code></pre>
| 0 | 2016-10-12T15:42:59Z | 40,003,165 | <p>You can create a custom <a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow">Expected Condition to wait</a> for the URL to change:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
wait = WebDriverWait(driver, 10)
wait.until(lambda driver: driver.current_url != "http://www.website.com/wait.php")
</code></pre>
<p>The Expected Condition is basically a <em>callable</em> - you can <a href="http://stackoverflow.com/a/29377790/771848">wrap it into a class</a> overriding the <code>__call__()</code> magic method, just as the built-in conditions are implemented.</p>
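As a sketch of that class form — `url_changed` and `FakeDriver` are names invented for this example; the condition only needs an object exposing `current_url`, so it can be demonstrated without a real browser:

```python
class url_changed(object):
    """Expected condition that becomes truthy once the browser leaves start_url."""
    def __init__(self, start_url):
        self.start_url = start_url

    def __call__(self, driver):
        # WebDriverWait calls this repeatedly until it returns a truthy value.
        return driver.current_url != self.start_url

class FakeDriver(object):
    """Stand-in for a WebDriver; only current_url is needed here."""
    current_url = "http://www.website.com/wait.php"

condition = url_changed("http://www.website.com/wait.php")
driver = FakeDriver()
print(condition(driver))   # False: still on the waiting page
driver.current_url = "http://www.website.com/next"
print(condition(driver))   # True: the redirect has happened
```

With a real driver you would use it as `WebDriverWait(driver, 10).until(url_changed(start_url))`.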
| 1 | 2016-10-12T16:00:41Z | [
"python",
"selenium",
"selenium-webdriver",
"webdriver"
] |
Remove newline characters that appear between curly brackets | 40,002,880 | <p>I'm currently writing a text processing script which contains static text and variable values (surrounded by curly brackets). I need to be able to strip out newline characters but only if they appear between the curly brackets:</p>
<p><code>Some text\nwith a {variable\n} value"</code></p>
<p>to:</p>
<p><code>Some text\nwith a {variable} value"</code></p>
<p>Further down in the processing I'm already doing this:</p>
<p><code>re.sub(r'\{.*?\}', '(.*)', text, flags=re.MULTILINE|re.DOTALL)</code></p>
<p>But I'm not sure how to target just the newline character and not the entirely of the curly bracket pair. There is also the possibility of multiple newlines:</p>
<p><code>Some text\nwith a {variable\n\n\n} value"</code></p>
<hr>
<p>Using Python 3.x</p>
| 1 | 2016-10-12T15:45:40Z | 40,002,947 | <p>You may pass the match object to a lambda in a <code>re.sub</code> and replace all newlines inside <code>{...}</code>:</p>
<pre><code>import re
text = 'Some text\nwith a {variable\n} value"'
print(re.sub(r'{.*?}', lambda m: m.group().replace("\n", ""), text, flags=re.DOTALL))
</code></pre>
<p>See <a href="http://ideone.com/ElV2ph" rel="nofollow">online Python 3 demo</a></p>
<p>Note that you do not need <code>re.MULTILINE</code> flag with this regex as it has no <code>^</code>/<code>$</code> anchors to redefine the behavior of, and you do not need to escape <code>{</code> and <code>}</code> in the current expression (without excessive backslashes, regexps look cleaner).</p>
| 2 | 2016-10-12T15:49:38Z | [
"python",
"regex",
"python-3.x"
] |
Remove newline characters that appear between curly brackets | 40,002,880 | <p>I'm currently writing a text processing script which contains static text and variable values (surrounded by curly brackets). I need to be able to strip out newline characters but only if they appear between the curly brackets:</p>
<p><code>Some text\nwith a {variable\n} value"</code></p>
<p>to:</p>
<p><code>Some text\nwith a {variable} value"</code></p>
<p>Further down in the processing I'm already doing this:</p>
<p><code>re.sub(r'\{.*?\}', '(.*)', text, flags=re.MULTILINE|re.DOTALL)</code></p>
<p>But I'm not sure how to target just the newline character and not the entirely of the curly bracket pair. There is also the possibility of multiple newlines:</p>
<p><code>Some text\nwith a {variable\n\n\n} value"</code></p>
<hr>
<p>Using Python 3.x</p>
| 1 | 2016-10-12T15:45:40Z | 40,003,041 | <p>Assuming you have un-nested, balanced brackets you can use this lookahead regex to replace newline between <code>{...}</code>:</p>
<pre><code>>>> s = "Some text\nwith a {variable\n} value"
>>> print re.sub(r'\n(?=[^{}]*})', '', s)
Some text
with a {variable} value
</code></pre>
<p><code>(?=[^{}]*})</code> is lookahead to assert we have a closing <code>}</code> ahead of newline without matching a <code>{</code> or <code>}</code> before closing <code>}</code>.</p>
| 1 | 2016-10-12T15:54:22Z | [
"python",
"regex",
"python-3.x"
] |