Dataset columns:
  title           string, length 10 to 172
  question_id     int64, 469 to 40.1M
  question_body   string, length 22 to 48.2k
  question_score  int64, -44 to 5.52k
  question_date   string, length 20
  answer_id       int64, 497 to 40.1M
  answer_body     string, length 18 to 33.9k
  answer_score    int64, -38 to 8.38k
  answer_date     string, length 20
  tags            sequence
fabfile doesn't see remote environment variables
39,979,229
<p>My remote server (192.168.3.68) contains several environment variables set in my ~/.bashrc:</p> <pre><code># For instance export MY_DATABASE_HOST=127.0.0.1 </code></pre> <p>When I put <code>run('echo $MY_DATABASE_HOST')</code> in <code>fabfile.py</code>, it shows:</p> <pre><code>[192.168.3.68] run: echo $MY_DATABASE_HOST [192.168.3.68] output: Done Disconnecting from 192.168.3.68... done. </code></pre> <p>I've tried adding <code>run('source ~/.bashrc')</code> immediately before the echo but nothing changes.</p> <p>Why aren't the environment variables set in ~/.bashrc visible to the fabfile?</p> <p>What can I do to fix this? The fabfile must be able to read these variables.</p> <p><strong>UPDATE</strong></p> <pre><code>from fabric.context_managers import prefix # This didn't work with prefix('source /home/meandme/.bashrc'): run('echo $MY_DATABASE_HOST') # This didn't work either run('source /home/meandme/.bashrc &amp;&amp; echo $MY_DATABASE_HOST') </code></pre>
2
2016-10-11T14:10:40Z
39,994,484
<p>Actually, .bashrc is executed. But it returns early because the shell is not running interactively, due to this guard:</p> <pre><code>case $- in *i*) ;; *) return;; esac </code></pre> <p>It works now that I have moved my environment variables to the top of my .bashrc, above that guard.</p> <p>More detailed answer here: <a href="https://github.com/fabric/fabric/issues/1519" rel="nofollow">https://github.com/fabric/fabric/issues/1519</a></p> <p>(A minimal Fabric sketch of an alternative fix follows below.)</p>
0
2016-10-12T08:57:26Z
[ "python", "fabric" ]
Solving a system of equation with Sympy, python2.7
39,979,293
<p>I want to solve a system of equations. But I want to be able to precise the value to "get", and as a function of "what".</p> <p>To better understand, I take an exemple from <a href="http://stackoverflow.com/questions/22156709/solving-system-of-nonlinear-equations-with-python#_=_">here</a>, wich I modfified:</p> <pre><code>import sympy as sp x, y, z = sp.symbols('x, y, z') rho, sigma, beta = sp.symbols('rho, sigma, beta') f1 = sigma * (y - x) f2 = x * (rho - z) - y f3 = x * y - beta * z print sp.solvers.solve((f1, f2, f3), (x, y, z)) </code></pre> <p>in </p> <pre><code>import sympy as sp x, y, z, w = sp.symbols('x, y, z, w') rho, sigma, beta = sp.symbols('rho, sigma, beta') f1 = sigma * (y - x) f2 = x * (rho - z) - y f3 = x * y - beta * w f4 = z - w print sp.solvers.solve((f1, f2, f3, f4), (x, y, z)) </code></pre> <p>So, as you can see, I replace <strong>z</strong> by <strong>w</strong> in the last equation and I add a new to precise <strong>z = w</strong>. But, <strong>sympy (on python 2.7) is unable to solve this new system of equation!!</strong></p> <p><em>So my question:</em> How to get the result for x, y, z as a function of rho, sigma, beta. And more generally, how do we precise the variable "response variable".</p> <p><em>I think that could be very helpfull because, often, you don't want to developp your system of equation before asking python to solve it.</em></p> <p>In the same way, if I take a more complex example:</p> <pre><code>import sympy as sp x, y, z, w, u = sp.symbols('x, y, z, w, u') rho, sigma, beta = sp.symbols('rho, sigma, beta') f1 = sigma * (y - x) f2 = x * (rho - u) - y f3 = x * y - beta * w f4 = z - w f5 = w - u print sp.solvers.solve((f1, f2, f3, f4, f5), (x, y, z)) </code></pre> <p>The response I get is: </p> <pre><code>[] </code></pre> <p>But as you see I have z = w = u Son I should get the same answer!</p>
3
2016-10-11T14:13:37Z
39,979,664
<p>Your code is giving following error: </p> <blockquote> <p>Traceback (most recent call last): File "C:\temp\equation1.py", line 37, in f3 = x * y - beta * w NameError: name 'w' is not defined</p> </blockquote> <p>Hence we pull symbol 'w' from sympy symbols as below <code>x, y, z, w = sp.symbols('x, y, z, w')</code> </p> <p>You also mentioned you are trying to add <code>z = w</code> , so once we add that to your code, it works.</p> <p><strong>Working Code:</strong></p> <pre><code>import sympy as sp x, y, z, w = sp.symbols('x, y, z, w') rho, sigma, beta = sp.symbols('rho, sigma, beta') z = w f1 = sigma * (y - x) f2 = x * (rho - z) - y f3 = x * y - beta * w f4 = z - w print sp.solvers.solve((f1, f2, f3, f4), (x, y, z, w)) </code></pre> <p><strong>Output:</strong> </p> <pre><code>Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; ================================ RESTART ================================ &gt;&gt;&gt; [(0, 0, 0), (-sqrt(beta*rho - beta), -sqrt(beta*(rho - 1)), rho - 1), (sqrt(beta*rho - beta), sqrt(beta*(rho - 1)), rho - 1)] &gt;&gt;&gt; </code></pre>
0
2016-10-11T14:30:25Z
[ "python", "python-2.7", "sympy", "equation", "equation-solving" ]
Python: store a value in a variable so that you can recognize each reoccurence
39,979,358
<p>If this question is unclear, I am very open to constructive criticism. </p> <p>I have an excel table with about 50 rows of data, with the first column in each row being a date. I need to access all the data for only one date, and that date appears only about 1-5 times. It is the most recent date so I've already organized the table by date with the most recent being at the top. </p> <p>So my goal is to store that date in a variable and then have Python look only for that variable (that date) and take only the columns corresponding to that variable. I need to use this code on 100's of other excel files as well, so it would need to arbitrarily take the most recent date (always at the top though). </p> <p>My current code below simply takes the first 5 rows because I know that's how many times this date occurs.</p> <pre><code>import os from numpy import genfromtxt import pandas as pd path = 'Z:\\folderwithcsvfile' for filename in os.listdir(path): file_path = os.path.join(path, filename) if os.path.isfile(file_path): broken_df = pd.read_csv(file_path) df3 = broken_df['DATE'] df4 = broken_df['TRADE ID'] df5 = broken_df['AVAILABLE STOCK'] df6 = broken_df['AMOUNT'] df7 = broken_df['SALE PRICE'] print (df3) #print (df3.head(6)) print (df4.head(6)) print (df5.head(6)) print (df6.head(6)) print (df7.head(6)) </code></pre>
3
2016-10-11T14:17:00Z
39,980,175
<p>This is a relatively simple filtering operation. You state that you want to "take only the columns" that are the latest date, so I assume that an acceptable result will be a filter <code>DataFrame</code> with just the correct columns. </p> <p>Here's a simple CSV that is similar to your structure:</p> <pre><code>DATE,TRADE ID,AVAILABLE STOCK 10/11/2016,123,123 10/11/2016,123,123 10/10/2016,123,123 10/9/2016,123,123 10/11/2016,123,123 </code></pre> <p>Note that I mixed up the dates a little bit, because it's hacky and error-prone to just assume that the latest dates will be on the top. The following script will filter it appropriately:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('data.csv') # convert the DATE column to datetimes df['DATE'] = pd.to_datetime(df['DATE']) # find the latest datetime latest_date = df['DATE'].max() # use index filtering to only choose the columns that equal the latest date latest_rows = df[df['DATE'] == latest_date] print (latest_rows) # now you can perform your operations on latest_rows </code></pre> <p>In my example, this will print:</p> <pre><code> DATE TRADE ID AVAILABLE STOCK 0 2016-10-11 123 123 1 2016-10-11 123 123 4 2016-10-11 123 123 </code></pre>
1
2016-10-11T14:53:00Z
[ "python", "date", "pandas" ]
pytest "No module named" error
39,979,369
<p>I'm new to python and trying to write a test for my first app. simple app structure:</p> <pre><code>- app - tests __init__.py - test_main.py - __init__.py - main.py </code></pre> <p>The <code>main.py</code> contains <code>main_func</code></p> <p><strong>test_main.py:</strong></p> <pre><code>from main import main_func def test_check(): assert main_func() is True </code></pre> <p>If i run the test manually, by the command <code>pytest</code> while in <code>app</code> directory - got this <strong>error:</strong></p> <pre><code>C:\Users\*****\PycharmProjects\checker3&gt;pytest ============================= test session starts ============================= platform win32 -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 rootdir: C:\Users\*****\PycharmProjects\app, inifile: collected 0 items / 1 errors =================================== ERRORS ==================================== _____________________ ERROR collecting tests/test_main.py _____________________ ImportError while importing test module 'C:\Users\*****\PycharmProjects\app\tests\test_main.py'. Original error message: 'No module named 'main'' Make sure your test modules/packages have valid Python names. !!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!! =========================== 1 error in 0.14 seconds =========================== </code></pre> <p>the packages names are right. Also, i use the <code>PyCharm</code> IDE and it doesn't track any import package error.</p> <p>Moreover, when i execute the pytest test configuration through the IDE - test is working.</p>
0
2016-10-11T14:17:31Z
39,980,236
<p>I think it runs in PyCharm because of the folder configuration in your PyCharm project. If you mark the app folder as a source folder in the settings, PyCharm adds it to the import path.</p> <p>For details: <a href="https://www.jetbrains.com/help/pycharm/2016.1/configuring-folders-within-a-content-root.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/configuring-folders-within-a-content-root.html</a></p> <p>It doesn't run from the shell because the test cannot import <code>main</code>. If you want to run it from the shell, insert the app path into <code>sys.path</code>:</p> <pre><code>import sys sys.path.insert(0, &lt;app folder path&gt;) </code></pre> <p>(A conftest.py variant of the same idea is sketched below.)</p>
1
2016-10-11T14:56:04Z
[ "python", "pycharm", "py.test" ]
Pandas adding method does not survive serialization
39,979,397
<p>I am trying to add a method to pandas so that I can use it readily if I have access to the dataframe. However serialization "kills" the method such as shown by the following example</p> <pre><code>import dill class Foo: def sayhello(self): print("hello") f = Foo() dill.dump(f, open("./foo.pickle", "wb")) f1 = dill.load(open("./foo.pickle", "r")) f1.sayhello() def addto(instance): def decorator(f): import types f = types.MethodType(f, instance, instance.__class__) setattr(instance, f.func_name, f) return f return decorator @addto(f) def saygoodbye(self): print("goodbye") dill.dump(f, open("./foo.pickle", "wb")) f1 = dill.load(open("./foo.pickle", "r")) f1.sayhello() f1.saygoodbye() import pandas as pd df = pd.DataFrame([0,1]) @addto(df) def saygoodbye(self): print("goodbye") dill.dump(df, open("./dframe.pickle", "wb")) df1 = dill.load(open("./dframe.pickle", "r")) df1.saygoodbye() </code></pre> <p>which throws me a <code>AttributeError: 'DataFrame' object has no attribute 'saygoodbye'</code></p> <p>1) Do you see what is causing a problem ?</p> <p>2) Do you have any idea how to serialize an added method on a dataframe ? </p> <p>Thanks</p>
0
2016-10-11T14:18:49Z
39,980,241
<p>1) Do you see what is causing a problem?</p> <p>You need to add the method to the class instead of the instance, like this:</p> <pre><code>df = pd.DataFrame([0,1]) @addto(pd.DataFrame) def saygoodbye(self): print("goodbye") </code></pre> <p>2) Do you have any idea how to serialize an added method on a dataframe?</p> <p>If I understood correctly, you want to serialize the dataframe instance to a pickle file and deserialize it later. My suggestion is to create a new class inheriting from DataFrame.</p> <pre><code>class MyDataFrame(pd.DataFrame): def saygoodbye(self): print('saygoodbye') df = MyDataFrame([0,1]) dill.dump(df, open("./dframe.pickle", "wb")) df1 = dill.load(open("./dframe.pickle", "r")) df1.saygoodbye() </code></pre>
1
2016-10-11T14:56:12Z
[ "python", "python-3.x", "pandas", "methods", "dill" ]
Python default parameter if no command line arguments are passed
39,979,618
<p>I'd like to build a program with this behaviour:</p> <p>usage: sage 4ct.py [-h] (-r R | -i I | -p P) [-o O]</p> <p>But if you don't give any parameter, I'd like to have "-r 100" as the default.</p> <p>Is it possible?</p> <pre><code>parser = argparse.ArgumentParser(description = '4ct args') group_input = parser.add_mutually_exclusive_group(required = True) group_input.add_argument("-r", "-random", help = "Random graph: dual of a triangulation of N vertices", nargs = 1, type = int, default = 100) group_input.add_argument("-i", "-input", help = "Input edgelist filename (networkx)", nargs = 1) group_input.add_argument("-p", "-planar", help = "Load a planar embedding of the graph G.faces() - Automatically saved at each run: input_planar_file.serialized", nargs = 1) parser.add_argument("-o", "-output", help="Output edgelist filename (networkx)", nargs = 1, required = False) args = parser.parse_args() </code></pre>
2
2016-10-11T14:28:30Z
39,980,408
<p>Give the following a try:</p> <pre><code>import argparse import sys parser = argparse.ArgumentParser(description='4ct args') group_input = parser.add_mutually_exclusive_group(required=True) group_input.add_argument("-r", "-random", help="Random graph: dual of a triangulation of N vertices", nargs=1, type=int, default=100) group_input.add_argument("-i", "-input", help="Input edgelist filename (networkx)", nargs=1) group_input.add_argument("-p", "-planar", help="Load a planar embedding of the graph G.faces() - Automatically saved at each run: input_planar_file.serialized",nargs=1) parser.add_argument("-o", "-output", help="Output edgelist filename (networkx)", nargs=1, required=False) if not sys.argv[1:]: sys.argv.extend(['-r', '100']) args = parser.parse_args(sys.argv[1:]) </code></pre> <p>Essentially you are checking if any commandline parameters are given at all, and if not, you append the desired <code>-r 100</code></p>
0
2016-10-11T15:03:20Z
[ "python", "argparse" ]
Python default parameter if no command line arguments are passed
39,979,618
<p>I'd like to build a program with this behaviour:</p> <p>usage: sage 4ct.py [-h] (-r R | -i I | -p P) [-o O]</p> <p>But if you don't give any parameter, I'd like to have "-r 100" as the default.</p> <p>Is it possible?</p> <pre><code>parser = argparse.ArgumentParser(description = '4ct args') group_input = parser.add_mutually_exclusive_group(required = True) group_input.add_argument("-r", "-random", help = "Random graph: dual of a triangulation of N vertices", nargs = 1, type = int, default = 100) group_input.add_argument("-i", "-input", help = "Input edgelist filename (networkx)", nargs = 1) group_input.add_argument("-p", "-planar", help = "Load a planar embedding of the graph G.faces() - Automatically saved at each run: input_planar_file.serialized", nargs = 1) parser.add_argument("-o", "-output", help="Output edgelist filename (networkx)", nargs = 1, required = False) args = parser.parse_args() </code></pre>
2
2016-10-11T14:28:30Z
39,980,588
<p>Just remove the <code>required</code>argument of the <code>add_mutually_exclusive_group</code> function call (or set it to False) and you're done:</p> <pre><code>import argparse parser = argparse.ArgumentParser(description = '4ct args') group_input = parser.add_mutually_exclusive_group(required = False) group_input.add_argument("-r", "--random", help = "Random graph: dual of a triangulation of N vertices", type = int, default = 100) group_input.add_argument("-i", "--input", help = "Input edgelist filename (networkx)") group_input.add_argument("-p", "--planar", help = "Load a planar embedding of the graph G.faces() - Automatically saved at each run: input_planar_file.serialized") parser.add_argument("-o", "--output", help="Output edgelist filename (networkx)", required = False) print(parser.parse_args()) # Namespace(input=None, output=None, planar=None, random=100) print(parser.parse_args("-r 77".split())) # Namespace(input=None, output=None, planar=None, random=77) print(parser.parse_args("-o some/path".split())) # Namespace(input=None, output='some/path', planar=None, random=100) print(parser.parse_args("-i some/path".split())) # Namespace(input='some/path', output=None, planar=None, random=100) print(parser.parse_args("-i some/path -o some/other/path".split())) # Namespace(input='some/path', output='some/other/path', planar=None, random=100) print(parser.parse_args("-r 42 -o some/other/path".split())) # Namespace(input=None, output='some/other/path', planar=None, random=42) </code></pre> <p>As you can see, the <code>random</code> option is defaulted to 100 even if:</p> <ul> <li>the <code>output</code> option is provided, which seems normal</li> <li>an option from the mutual exclusive group other than <code>random</code> is provided, which can be problematic. you will have to check in your code if <code>random</code> is the only exclusive option which has a value before taking it in account.</li> </ul> <hr> <p>This example also includes some tiny improvement to your option parser:</p> <ul> <li>use long option names with two dashes (it is a convention but it also allows argparse to correctly recognise option name).</li> <li>remove the <code>nargs=1</code> in your options definitions which makes you retrieve a list of one value. By removing it, you could retrieve directly the value. </li> </ul>
4
2016-10-11T15:10:55Z
[ "python", "argparse" ]
Building a nested list with XPath-extracted XML document structure
39,979,724
<p>I am trying to get the text (using xpath) of all <code>&lt;h2&gt;</code> tags in:</p> <pre><code>&lt;div id="static_id"&gt; &lt;div&gt;... &lt;a ...&gt; &lt;div&gt;... &lt;h2&gt;Text 1&lt;/h2&gt; &lt;a ...&gt; &lt;div&gt;... &lt;div&gt;... &lt;span&gt;... &lt;h2&gt;Text 2&lt;/h2&gt; &lt;a ...&gt; &lt;span&gt;... &lt;h2&gt;Text 3&lt;/h2&gt; &lt;div id="static_id"&gt; &lt;div&gt;... &lt;span&gt;... &lt;h2&gt;Text A&lt;/h2&gt; &lt;a ...&gt; &lt;div&gt;... &lt;p&gt;... &lt;div&gt;... &lt;h2&gt;Text B&lt;/h2&gt; &lt;a ...&gt; &lt;h2&gt;Text C&lt;/h2&gt; [...] </code></pre> <p>In my HTML source code there are <code>&lt;div&gt;'s</code> with the id <code>static_id</code>. Within these div's there is just one <code>&lt;h2&gt;</code> tag, and I want to get its content. In the end, I would like to have a list that looks like this:</p> <pre><code>lst = [["Text 1", "Text 2", "Text 3"], ["Text A", "Text B", "Text C"]] </code></pre> <p>Please notice that it's a list of lists (every h2-text from one <code>&lt;div id="static_id"&gt;</code> should end up in a separate list like in the example above.</p> <p>Is there an easy way to achieve this?</p> <p>I thought I count all <code>static_id</code> divs and iterate over all <code>&lt;h2&gt;</code> tags to achive this. My approach:</p> <pre><code>list_all = [] div_amount = len(tree.xpath('//div[@id="static_id"]')) # 2 elements in this case (works) for d in range(1, div_amount+1) # 1,2 h2_count = len(tree.xpath('//div[@class="static_id"]['+str(d)+']//h2')) #count h2 lst = [] for i in range(1, h2_count+1) #1,2,3 h2_text = ''.join(tree.xpath('//div[@id="static_id"]['+str(d)+']//h2['+i+']/text()')) lst.append(h2_text) list_all.append(lst) </code></pre> <p>Line 2: Counts all id="static_id"</p> <p>Line 3: Loop over all id="static_id"</p> <p>Line 4: Count all h2 (unfortunately all h2's from the HTML source are counted) </p> <p>Line 5: Loop over all h2's</p> <p>Line 6: Get h2'text, and next save in list</p> <p>Can anyone help me out, please? I feel like this could be done easier, but I don't know how.</p>
0
2016-10-11T14:32:50Z
39,979,999
<p>Easily made a one-liner:</p> <pre><code>list_all = [ static_id_div.xpath('.//h2/text()') for static_id_div in tree.xpath('//div[@id="static_id"]') ] </code></pre> <p>The important difference here is that the inner query is being run <em>against the elements returned by the outer query</em>, rather than making them work starting from the root of the document.</p>
0
2016-10-11T14:45:25Z
[ "python", "xpath" ]
python pyquery import not working on Mac OS Sierra
39,979,830
<p>I'm trying to import pyquery as I have done hundreds of times before, and it's not working. It looks like it is related to Mac OS Sierra (the module is installed with pip and up to date).</p> <pre><code>from pyquery import PyQuery as pq </code></pre> <p>I get an import error on the name:</p> <pre><code>ImportError: cannot import name PyQuery </code></pre> <p>Any ideas? Thanks!</p>
0
2016-10-11T14:37:41Z
40,004,038
<p>Finally found out why: my file had the same name as my import, so the installed library was shadowed by the name of my own .py file. (A small diagnostic sketch follows below.)</p>
0
2016-10-12T16:46:04Z
[ "python", "osx", "pyquery" ]
Pandas dataframe, each cell into list - more pythonic way?
39,979,889
<p>I have a pandas dataframe with columns and rows like this:</p> <pre><code> a b c d a 40 15 25 35 b 10 25 35 45 c 20 35 45 55 d 40 45 55 65 </code></pre> <p>For all numbers > 30 I need an output like this:</p> <pre><code>a, a, 40 a, d, 40 b, c, 35 b, d, 45 </code></pre> <p>and so on.</p> <p>Currently I am running a loop like this:</p> <pre><code> for i in df.columns: for j in df.index: if df[i][j] &gt; 30: a.append(i+","+j+","+str(df[i][j])") </code></pre> <p>This works, but is very slow. Is there a more pythonic way to do this?</p> <p>Thanks!</p>
3
2016-10-11T14:40:46Z
39,979,928
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>df = df.stack().reset_index() df.columns = ['a','b','c'] print (df[df.c &gt; 30]) a b c 0 a a 40 3 a d 35 6 b c 35 7 b d 45 9 c b 35 10 c c 45 11 c d 55 12 d a 40 13 d b 45 14 d c 55 15 d d 65 </code></pre> <p>Similar solution:</p> <pre><code>s = df.stack() df = s[s &gt; 30].reset_index() df.columns = ['a','b','c'] print (df) a b c 0 a a 40 1 a d 35 2 b c 35 3 b d 45 4 c b 35 5 c c 45 6 c d 55 7 d a 40 8 d b 45 9 d c 55 10 d d 65 </code></pre> <p>Another solution:</p> <pre><code>df1 = df[df &gt; 30].stack().reset_index() df1.columns = ['a','b','c'] df1.c = df1.c.astype(int) print (df1) a b c 0 a a 40 1 a d 35 2 b c 35 3 b d 45 4 c b 35 5 c c 45 6 c d 55 7 d a 40 8 d b 45 9 d c 55 10 d d 65 </code></pre> <hr> <p>Last you can <code>apply</code> join:</p> <pre><code>df['d'] = df.astype(str).apply(', '.join, axis=1) print (df) a b c d 0 a a 40 a, a, 40 1 a d 35 a, d, 35 2 b c 35 b, c, 35 3 b d 45 b, d, 45 4 c b 35 c, b, 35 5 c c 45 c, c, 45 6 c d 55 c, d, 55 7 d a 40 d, a, 40 8 d b 45 d, b, 45 9 d c 55 d, c, 55 10 d d 65 d, d, 65 print (df.d.tolist()) ['a, a, 40', 'a, d, 35', 'b, c, 35', 'b, d, 45', 'c, b, 35', 'c, c, 45', 'c, d, 55', 'd, a, 40', 'd, b, 45', 'd, c, 55', 'd, d, 65'] </code></pre>
3
2016-10-11T14:42:01Z
[ "python", "pandas" ]
Numpy vectorize sum over indices
39,979,916
<p>I have a list of indices (list(int)) and a list of summing indices (list(list(int)). Given a 2D numpy array, I need to find the sum over indices in the second list for each column and add them to the corresponding indices in the first column. Is there any way to vectorize this? Here is the normal code:</p> <pre><code>indices = [1,0,2] summing_indices = [[5,6,7],[6,7,8],[4,5]] matrix = np.arange(9*3).reshape((9,3)) for c,i in enumerate(indices): matrix[i,c] = matrix[summing_indices[i],c].sum()+matrix[i,c] </code></pre>
1
2016-10-11T14:41:40Z
39,980,763
<p>Here's an almost* vectorized approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow"><code>np.add.reduceat</code></a> -</p> <pre><code>lens = np.array(map(len,summing_indices)) col = np.repeat(indices,lens) row = np.concatenate(summing_indices) vals = matrix[row,col] addvals = np.add.reduceat(vals,np.append(0,lens.cumsum()[:-1])) matrix[indices,np.arange(len(indices))] += addvals[indices.argsort()] </code></pre> <p>Please note that this has some setup overhead, so it would be best suited for <code>2D</code> input arrays with a good number of columns as we are iterating along the columns.</p> <p>*: Almost because of the use of <code>map()</code> at the start, but computationally that should be negligible. </p>
2
2016-10-11T15:18:40Z
[ "python", "numpy", "vectorization" ]
Why don't objects have hasattr and getattr as attributes?
39,980,030
<p>When you need to access an object's attributes dynamically in Python, you can just use the builtin functions <code>hasattr(object, attribute)</code> or <code>getattr(object, attribute)</code>.</p> <p>However this seems like an odd order for the syntax to take. It's less readable and intuitive as it messes up the regular sentence structure for English.</p> <blockquote> <p><code>if hasattr(person, age):</code><br> if has attribute Person age</p> </blockquote> <p>Where having it as a method of the object would be much more readable:</p> <blockquote> <p><code>if person.hasattr(age):</code><br> if Person has attribute age</p> </blockquote> <p>Is there a particular reason for not implementing it this way? I could imagine there are cases where you're not sure if the object is even a proper object, rather than just <code>None</code>, but surely in those cases of uncertainty you could just use the builtin function anyway for extra safety.</p> <p>Is there some other drawback or consideration I'm not thinking of that makes adding these not worth it?</p>
1
2016-10-11T14:46:50Z
39,982,252
<p>It's part of the language design. You can probably find docs about the more detailed reasoning behind it, but the key points are:</p> <ul> <li>You are suggesting turning a builtin function that works on all objects into a method of each object. Why should this function be specific to one object?</li> <li>Semantics: the <code>getattr</code> function works on objects, not as part of an object.</li> <li>Namespace: the methods of an object are defined by you, not by the language. The internal hooks have names of the form <code>__getattr__</code>, and you will find that method on your object ;-). <code>getattr</code> uses it internally, so you can even override it (if you know what you're doing). A tiny example of this hook is sketched below.</li> </ul>
1
2016-10-11T16:34:28Z
[ "python", "class" ]
Why don't objects have hasattr and getattr as attributes?
39,980,030
<p>When you need to access an object's attributes dynamically in Python, you can just use the builtin functions <code>hasattr(object, attribute)</code> or <code>getattr(object, attribute)</code>.</p> <p>However this seems like an odd order for the syntax to take. It's less readable and intuitive as it messes up the regular sentence structure for English.</p> <blockquote> <p><code>if hasattr(person, age):</code><br> if has attribute Person age</p> </blockquote> <p>Where having it as a method of the object would be much more readable:</p> <blockquote> <p><code>if person.hasattr(age):</code><br> if Person has attribute age</p> </blockquote> <p>Is there a particular reason for not implementing it this way? I could imagine there are cases where you're not sure if the object is even a proper object, rather than just <code>None</code>, but surely in those cases of uncertainty you could just use the builtin function anyway for extra safety.</p> <p>Is there some other drawback or consideration I'm not thinking of that makes adding these not worth it?</p>
1
2016-10-11T14:46:50Z
39,993,932
<p>You'll find quite a few similar examples - like <code>len(obj)</code> instead of <code>obj.length()</code>, <code>hash(obj)</code> instead of <code>obj.hash()</code>, <code>isinstance(obj, cls)</code> instead of <code>obj.isinstance(cls)</code>. You may also have noticed that addition is spelled <code>obj1 + obj2</code> instead of <code>obj1.add(obj2)</code>, subtraction is spelled <code>obj1 - obj2</code> instead of <code>obj1.sub(obj2)</code>, etc. The point is that some builtin "functions" are to be considered operators rather than real functions, and they are supported by "__magic__" methods (<code>__len__</code>, <code>__hash__</code>, <code>__add__</code> etc.).</p> <p>As for the "why", you'd have to ask GvR, but historical reasons set aside, it at least avoids a lot of namespace pollution / name clashes. How would you name the length of a "Line" or "Rectangle" class if <code>length</code> were already a "kind of but not explicitly reserved" name? And how should introspection understand that <code>Rectangle.length()</code> doesn't mean <code>Rectangle</code> is a sizeable sequence-like object?</p> <p>Using generic "operator" functions (note that proper operators also exist as functions, cf. the <code>operator</code> module) plus "__magic__" methods makes the intention clear and leaves normal names open for "user space" semantics.</p> <p>With regard to the "regular sentence structure for English", I have to say I don't find that argument convincing.</p> <blockquote> <p>I could imagine there are cases where you're not sure if the object is even a proper object, rather than just None</p> </blockquote> <p><code>None</code> <em>is</em> a "proper" object. Everything in Python (well, everything you can bind to a name) is a "proper" object. </p>
3
2016-10-12T08:29:14Z
[ "python", "class" ]
Function working iteratively over each row in an individual column of an array - numpy
39,980,081
<p>I'm looking to change all the values in an array using the following formula:</p> <pre><code>new_value = old_value * elec_space - elec_space </code></pre> <p>A complicating issue is that all values above 48 within the array will be increased by two, as 49 &amp; 50 will never exist in the original array (infile, shown below). This means that any value above 48 will have to have 2 subtracted from it before performing the above calculation.</p> <p>Original values:</p> <pre><code>elec_space = 0.5 infile = [[41, 42, 43, 44] [41, 42, 44, 45] [41, 43, 45, 47] [44, 45, 46, 47] [44, 45, 47, 48] [44, 46, 48, 52] [47, 48, 51, 52] [47, 48, 52, 53] [47, 51, 53, 55]] </code></pre> <p>Desired values:</p> <pre><code>infile = [[ 20, 20.5, 21, 21.5] [ 20, 20.5, 21.5, 22] [ 20 21, 22, 23] [21.5, 22, 22.5, 23] [21.5 22, 23, 23.5] [21.5, 22.5, 23.5, 24.5] [ 23, 23.5, 24, 24.5] [ 23, 23.5, 24.5, 25] [ 23, 24, 25, 26]] </code></pre> <p>I've tried:</p> <pre><code>def remove_missing(infile): if infile &gt; 48: return (infile - 2) * elec_space - elec_space else: return infile * elec_space - elec_space A = remove_missing(infile[:,0]) B = remove_missing(infile[:,1]) M = remove_missing(infile[:,2]) N = remove_missing(infile[:,3]) infile = np.column_stack((A, B, M, N)) </code></pre> <p>And:</p> <pre><code>def remove_missing(infile): return (infile - 2) * elec_space - elec_space if infile &gt; 50 else infile * elec_space - elec_space A = remove_missing(infile[:,0]) B = remove_missing(infile[:,1]) M = remove_missing(infile[:,2]) N = remove_missing(infile[:,3]) </code></pre> <p>But got the following traceback for each of them:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-181-dcc8e29a527f&gt; in &lt;module&gt;() 4 else: 5 return infile * elec_space - elec_space ----&gt; 6 A = remove_missing(infile[:,0]) 7 B = remove_missing(infile[:,1]) 8 M = remove_missing(infile[:,2]) &lt;ipython-input-181-dcc8e29a527f&gt; in remove_missing(infile) 1 def remove_missing(infile): ----&gt; 2 if infile &gt; 48: 3 return (infile - 2) * elec_space - elec_space 4 else: 5 return infile * elec_space - elec_space ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-180-c407ec4fa95d&gt; in &lt;module&gt;() 2 return (infile - 2) * elec_space - elec_space if infile &gt; 50 else infile * elec_space - elec_space 3 ----&gt; 4 A = remove_missing(infile[:,0]) 5 B = remove_missing(infile[:,1]) 6 M = remove_missing(infile[:,2]) &lt;ipython-input-180-c407ec4fa95d&gt; in remove_missing(infile) 1 def remove_missing(infile): ----&gt; 2 return (infile - 2) * elec_space - elec_space if infile &gt; 50 else infile * elec_space - elec_space 3 4 A = remove_missing(infile[:,0]) 5 B = remove_missing(infile[:,1]) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>I don't think a.any or a.all are the right options, as I want the function to run iteratively for each row in the column of the array, not to alter all values based on one of the values being over 48.</p> <p>Has anyone got any pointers on how best to tackle this, please?</p>
1
2016-10-11T14:48:48Z
39,980,218
<p>One alternative approach could be to subtract <code>1</code> for elements that were greater than <code>48</code> from the result, like so -</p> <pre><code>(infile - 2*(infile&gt;48))* elec_space - elec_space </code></pre>
2
2016-10-11T14:55:05Z
[ "python", "arrays", "function", "numpy", "boolean-operations" ]
Accessing dynamic generated website content with Python requests
39,980,262
<p>I'm trying to gather data from few websites using Python (BeautifulSoup). However, sometimes it's difficult to access search results, example:</p> <pre><code>import requests from bs4 import BeautifulSoup url1 = 'https://auto.ria.com/legkovie/city/vinnica/?page=1' url2= 'https://auto.ria.com/search/?top=11&amp;category_id=1&amp;state[0]=1' def get_value(url): r = requests.get(url, headers = {'Accept-Encoding' : 'deflate'}) print("Response Time: {}".format(r.elapsed.total_seconds())) soup = BeautifulSoup(r.text, 'lxml') data = soup.find('span', attrs = {'id' : 'resultsCount'}).find('strong') print('{} \n'.format(data)) get_value(url1) get_value(url2) </code></pre> <p>The output is:</p> <pre><code>Response Time: 5.4943 &lt;strong class="count"&gt;5 310&lt;/strong&gt; Response Time: 0.174867 &lt;strong class="count"&gt;0&lt;/strong&gt; </code></pre> <p>though in case of url2 the number displayed in browser is 338. I suppose that search results can be found in some json, but how to access it using requests?</p>
-1
2016-10-11T14:56:57Z
39,981,573
<p>I recommend zooming out on the soup object to see what's actually there. You can try using <code>findAll</code> instead of <code>find</code> and printing the results. You can also try dropping the final call to <code>find</code> (for the <code>strong</code> tag) and printing what you get. Once you investigate the larger object you'll likely see what is happening: it could be that url2 is tagged differently, and you'll have to adjust your function to accommodate that. (A small inspection sketch follows below.)</p>
0
2016-10-11T15:56:27Z
[ "python", "json", "web-scraping", "python-requests" ]
Accessing dynamic generated website content with Python requests
39,980,262
<p>I'm trying to gather data from few websites using Python (BeautifulSoup). However, sometimes it's difficult to access search results, example:</p> <pre><code>import requests from bs4 import BeautifulSoup url1 = 'https://auto.ria.com/legkovie/city/vinnica/?page=1' url2= 'https://auto.ria.com/search/?top=11&amp;category_id=1&amp;state[0]=1' def get_value(url): r = requests.get(url, headers = {'Accept-Encoding' : 'deflate'}) print("Response Time: {}".format(r.elapsed.total_seconds())) soup = BeautifulSoup(r.text, 'lxml') data = soup.find('span', attrs = {'id' : 'resultsCount'}).find('strong') print('{} \n'.format(data)) get_value(url1) get_value(url2) </code></pre> <p>The output is:</p> <pre><code>Response Time: 5.4943 &lt;strong class="count"&gt;5 310&lt;/strong&gt; Response Time: 0.174867 &lt;strong class="count"&gt;0&lt;/strong&gt; </code></pre> <p>though in case of url2 the number displayed in browser is 338. I suppose that search results can be found in some json, but how to access it using requests?</p>
-1
2016-10-11T14:56:57Z
39,982,738
<p>Your code is running fine, and url2 returns the expected result. Viewing the page source in Chrome shows:<br> <em><code>&lt;span id="resultsCount" class="hide"&gt;Найдено &lt;strong class="count"&gt;0&lt;/strong&gt; объявлений&lt;/span&gt;</code></em></p> <p>This is the tag that you are trying to find with Beautiful Soup. The number in the page source and the program's output are the same!</p> <p>Also, the search result is not returned as JSON. If you check the response headers:</p> <pre><code>Content-Type: text/html </code></pre> <p>Maybe you want the response to contain the whole tag instead? If that's the case, try:</p> <pre><code>data = soup.find('span', attrs = {'id' : 'resultsCount'}) </code></pre>
0
2016-10-11T17:02:24Z
[ "python", "json", "web-scraping", "python-requests" ]
TypeError: unsupported operand type(s) for +: 'Cursor' and 'Cursor'
39,980,281
<p>I want to be able to store two collections in a variable to be able to view and sort through them. However, I seem to be getting the error above. My code is in Python and looks like this:</p> <pre><code>from pymongo import MongoClient db = MongoClient('10.39.165.193', 27017)['mean-dev'] cursor1 = db.Build_Progress.find() cursor2 = db.build_lookup.find() joincursors = cursor1 + cursor2 for document in joincursors: print(document) </code></pre>
0
2016-10-11T14:57:44Z
39,980,637
<p>You need to <a href="https://docs.python.org/3.6/library/itertools.html#itertools.chain" rel="nofollow"><code>chain</code></a> the two cursors like this:</p> <pre><code>from itertools import chain for document in chain(cursor1, cursor2): print(document) </code></pre>
0
2016-10-11T15:13:08Z
[ "python", "mongodb", "pymongo" ]
How to replace elements by value in a list of lists in Python?
39,980,292
<p>I have:</p> <pre><code> counts = [[2, 2, 2, 0], [2, 2, 1, 0]] countsminusone = [[1, 1, 1, -1], [1, 1, 0, -1]] #Which is counts - 1 </code></pre> <p>For every value, where countsminusone is 0 or less than 0, I want to replace it with 1.</p> <pre><code> countsminusone1 = [[1 if x == 0 or x &lt; 0 else x for x in pair] for pair in countsminusone] #I cannot get this to work </code></pre> <p>And then divide counts by countsminusone</p> <pre><code>Divide = [[n/d for n, d in zip(subq, subr)] for subq, subr in zip(counts, countsminusone)] #This should work if the above works </code></pre>
-1
2016-10-11T14:58:04Z
39,980,506
<p>It works, except you forgot to replace <code>countsminusone</code> by <code>countsminusone1</code> in your last line.</p> <pre><code>countsminusone1 = [[1 if x &lt;= 0 else x for x in pair] for pair in countsminusone] Divide = [[n/d for n, d in zip(subq, subr)] for subq, subr in zip(counts, countsminusone1)] </code></pre>
1
2016-10-11T15:07:13Z
[ "python", "list", "replace", "list-comprehension", "value" ]
How to replace elements by value in a list of lists in Python?
39,980,292
<p>I have:</p> <pre><code> counts = [[2, 2, 2, 0], [2, 2, 1, 0]] countsminusone = [[1, 1, 1, -1], [1, 1, 0, -1]] #Which is counts - 1 </code></pre> <p>For every value, where countsminusone is 0 or less than 0, I want to replace it with 1.</p> <pre><code> countsminusone1 = [[1 if x == 0 or x &lt; 0 else x for x in pair] for pair in countsminusone] #I cannot get this to work </code></pre> <p>And then divide counts by countsminusone</p> <pre><code>Divide = [[n/d for n, d in zip(subq, subr)] for subq, subr in zip(counts, countsminusone)] #This should work if the above works </code></pre>
-1
2016-10-11T14:58:04Z
39,980,581
<p>I think you are making this much too complicated. Let's state what you actually want to do in the simplest way:</p> <blockquote> <p>GOAL: Divide every number n in list of lists by n - 1 or, if n - 1 &lt;= 0, by 1.</p> </blockquote> <p>This can be done without creating extra lists and zipping:</p> <pre><code>counts = [[2, 2, 2, 0], [2, 2, 1, 0]] divided = [[i / max(i - 1, 1) for i in sublst] for sublst in counts] </code></pre> <p>Note that in this case, <code>max(i - 1, 1)</code> will <em>always</em> be 1.</p>
2
2016-10-11T15:10:31Z
[ "python", "list", "replace", "list-comprehension", "value" ]
How to replace elements by value in a list of lists in Python?
39,980,292
<p>I have:</p> <pre><code> counts = [[2, 2, 2, 0], [2, 2, 1, 0]] countsminusone = [[1, 1, 1, -1], [1, 1, 0, -1]] #Which is counts - 1 </code></pre> <p>For every value, where countsminusone is 0 or less than 0, I want to replace it with 1.</p> <pre><code> countsminusone1 = [[1 if x == 0 or x &lt; 0 else x for x in pair] for pair in countsminusone] #I cannot get this to work </code></pre> <p>And then divide counts by countsminusone</p> <pre><code>Divide = [[n/d for n, d in zip(subq, subr)] for subq, subr in zip(counts, countsminusone)] #This should work if the above works </code></pre>
-1
2016-10-11T14:58:04Z
39,981,334
<p>This works for any number of elements inside the sub list</p> <pre><code>import numpy as np map( list , np.array(counts)/np.array(zip(*([ iter([ 1 if i &lt;=0 else i for i in sum( countsminusone , []) ]) ] * len(countsminusone[0]) ) ) ) ) </code></pre>
0
2016-10-11T15:44:42Z
[ "python", "list", "replace", "list-comprehension", "value" ]
Dictionaries are ordered in Python 3.6
39,980,323
<p>Dictionaries are ordered in Python 3.6, unlike in previous Python incarnations. This seems like a substantial change, but it's only a short paragraph in the <a href="https://docs.python.org/3.6/whatsnew/3.6.html#other-language-changes">documentation</a>. It is described as an implementation detail rather than a language feature, but also implies this may become standard in the future.</p> <p>How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order? </p> <p>Here is the text from the documentation:</p> <blockquote> <p><code>dict()</code> now uses a “compact” representation <a href="https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html">pioneered by PyPy</a>. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. <a href="https://www.python.org/dev/peps/pep-0468">PEP 468</a> (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in <a href="https://bugs.python.org/issue27350">issue 27350</a>. Idea <a href="https://mail.python.org/pipermail/python-dev/2012-December/123028.html">originally suggested by Raymond Hettinger</a>.)</p> </blockquote>
72
2016-10-11T14:59:23Z
39,980,548
<p>Below is answering the original first question:</p> <blockquote> <p>Should I use <code>dict</code> or <code>OrderedDict</code> in Python 3.6?</p> </blockquote> <p>I think this sentence from the documentation is actually enough to answer your question</p> <blockquote> <p>The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon</p> </blockquote> <p><code>dict</code> is not explicitly meant to be an ordered collection, so if you want to stay consistent and not rely on a side effect of the new implementation you should stick with <code>OrderedDict</code>.</p> <p>Make your code future proof :)</p> <p>There's a debate about that <a href="https://news.ycombinator.com/item?id=12460936">here</a>.</p>
27
2016-10-11T15:09:00Z
[ "python", "python-3.x", "dictionary", "python-internals", "python-3.6" ]
Dictionaries are ordered in Python 3.6
39,980,323
<p>Dictionaries are ordered in Python 3.6, unlike in previous Python incarnations. This seems like a substantial change, but it's only a short paragraph in the <a href="https://docs.python.org/3.6/whatsnew/3.6.html#other-language-changes">documentation</a>. It is described as an implementation detail rather than a language feature, but also implies this may become standard in the future.</p> <p>How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order? </p> <p>Here is the text from the documentation:</p> <blockquote> <p><code>dict()</code> now uses a “compact” representation <a href="https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html">pioneered by PyPy</a>. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. <a href="https://www.python.org/dev/peps/pep-0468">PEP 468</a> (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in <a href="https://bugs.python.org/issue27350">issue 27350</a>. Idea <a href="https://mail.python.org/pipermail/python-dev/2012-December/123028.html">originally suggested by Raymond Hettinger</a>.)</p> </blockquote>
72
2016-10-11T14:59:23Z
39,980,744
<blockquote> <p>How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order?</p> </blockquote> <p>Essentially by keeping two arrays, one holding the entries for the dictionary in the order that they were inserted and the other holding a list of indices.</p> <p>In the previous implementation a sparse array of type <em>dictionary entries</em> had to be allocated; unfortunately, it also resulted in a lot of empty space since that array was not allowed to be more than <code>2/3</code>s full. This is not the case now since only the <em>required</em> entries are stored and a sparse array of type <em>integer</em> <code>2/3</code>s full is kept. </p> <p>Obviously creating a sparse array of type "dictionary entries" is much more memory demanding than a sparse array for storing ints (<a href="https://github.com/python/cpython/blob/master/Objects/dict-common.h#L55">sized <code>8 bytes</code> tops</a> in cases of really large dictionaries) </p> <hr> <p><a href="https://mail.python.org/pipermail/python-dev/2012-December/123028.html">In the original proposal made by Raymond Hettinger</a>, a visualization of the data structures used can be seen which captures the gist of the idea.</p> <blockquote> <p>For example, the dictionary:</p> <pre><code>d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'} </code></pre> <p>is currently stored as:</p> <pre><code>entries = [['--', '--', '--'], [-8522787127447073495, 'barry', 'green'], ['--', '--', '--'], ['--', '--', '--'], ['--', '--', '--'], [-9092791511155847987, 'timmy', 'red'], ['--', '--', '--'], [-6480567542315338377, 'guido', 'blue']] </code></pre> <p>Instead, the data should be organized as follows:</p> <pre><code>indices = [None, 1, None, None, None, 0, None, 2] entries = [[-9092791511155847987, 'timmy', 'red'], [-8522787127447073495, 'barry', 'green'], [-6480567542315338377, 'guido', 'blue']] </code></pre> </blockquote> <p>As you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices.</p> <hr> <blockquote> <p>Should you depend on it and/or use it?</p> </blockquote> <p>As noted in the documentation, this is considered an implementation detail meaning it is subject to change and you shouldn't depend on it. </p> <p>Different implementations of Python aren't required to make the dictionary ordered, rather, just support an ordered mapping where that is required (Notable examples are <em><a href="https://docs.python.org/3.6/whatsnew/3.6.html#pep-520-preserving-class-attribute-definition-order">PEP 520: Preserving Class Attribute Definition Order</a></em> and <em><a href="https://docs.python.org/3.6/whatsnew/3.6.html#pep-468-preserving-keyword-argument-order">PEP 468: Preserving Keyword Argument Order</a></em>)</p> <p>If you want to write code that preserves the ordering and want it to not break on previous versions/different implementations you should always use <code>OrderedDict</code>. Besides, <code>OrderedDict</code> will most likely eventually become a thin-wrapper around the new <code>dict</code> implementation.</p>
57
2016-10-11T15:17:53Z
[ "python", "python-3.x", "dictionary", "python-internals", "python-3.6" ]
Write magnetic tape end of record linux
39,980,359
<p>The task is to create two records with different sizes within one file entry. I'm using python 3.4.5 for testing:</p> <pre><code>import fcntl import os import struct MTIOCTOP = 0x40086d01 # refer to mtio.h MTSETBLK = 20 fh = os.open('/dev/st2', os.O_WRONLY ) fcntl.ioctl(fh, MTIOCTOP, struct.pack('hi', MTSETBLK, 1024)) os.write(fh, b'a'*1024) fcntl.ioctl(fh, MTIOCTOP, struct.pack('hi', MTSETBLK, 2048)) os.write(fh, b'b'*2048) os.close(fh) [root@dev2 mhvtl]# tcopy /dev/st2 file 0: block size 4096: 1 records file 0: eof after 1 records: 4096 bytes &lt;&lt;&lt; should be 2 records eot total length: 4096 bytes [root@dev2 mhvtl]# ^C </code></pre> <p>Is there an ioctl op code that will start a new record on the tape with a variable record length? Or any other way to work around this bug?</p>
0
2016-10-11T15:01:14Z
39,981,853
<p>The issue was with tcopy: it uses the block size currently set on the device instead of detecting it. Adding</p> <pre><code>fcntl.ioctl(fh, MTIOCTOP, struct.pack('hi', MTSETBLK, 0)) </code></pre> <p>after the last write (i.e. switching back to variable block mode) allowed tcopy to display the data as intended. The full script with this change is sketched below.</p>
0
2016-10-11T16:11:43Z
[ "python", "linux", "ioctl", "fcntl", "mt" ]
2 column n rows array- operation on elements of two colums in each row
39,980,383
<p>I have an array of n rows and two columns (array). I have another variable (a) which I am using as a reference. In pseudocode, I want:</p> <pre><code> for a between (1-10000) if column1 of ARRAY&lt;= a &lt;= column2 of the ARRAY save the tuple (a, YES) </code></pre> <p>I will use the resulting tuples for further operations.</p>
-2
2016-10-11T15:02:30Z
39,984,475
<p>The <a href="http://docs.scipy.org/doc/numpy/reference/arrays.html" rel="nofollow">NumPy documentation</a> has all the information you may need.</p>
0
2016-10-11T18:44:15Z
[ "python", "arrays", "numpy" ]
Python: Flatten and Parse certain sections of JSON
39,980,664
<p>I have an input JSON that looks like this:</p> <pre><code>&gt; {"payment": {"payment_id": "AA340", "payment_amt": "20", "chk_nr": "321749", "clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John", adj:{"adj_id":"W123","adj_cd":"45"}}, {"clm_id": "9999", "name": "Dilton", adj:{"adj_id":"X123","adj_cd":"5"}}]}}} </code></pre> <p>I need the output to look like this:</p> <pre><code>{"clm_id": "1A2345",adj:{"adj_id":"W123"},"payment_amt": "20", "chk_nr": "321749"} {"clm_id": "9999"adj:{"adj_id":"X123"},"payment_amt": "20", "chk_nr": "321749"} </code></pre> <p>So the code takes in the one JSON doc, parses the claim array section and normalizes it by adding payment info to each section. Even the nested JSON is parsed.</p> <p>I'm able to parse the data, but unsure on how to normalize only certain section of the data.</p> <p>The code below will parse the data, but NOT normalize</p> <pre><code>keep = ["payment","payment_id","payment_amt", "clm_list", "dtl", "clm_id","adj","adj_id"] old_dict={"payment": {"payment_id": "AA340", "payment_amt": "20", "chk_nr": "321749", "clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John", "adj": {"adj_id": "W123", "adj_cd": "45"}}, {"clm_id": "9999", "name": "Dilton", "adj": {"adj_id": "X123", "adj_cd": "5"}}]}}} def recursively_prune_dict_keys(obj, keep): if isinstance(obj, dict): return dict([(k, recursively_prune_dict_keys(v, keep)) for k, v in obj.items() if k in keep]) elif isinstance(obj, list): return [recursively_prune_dict_keys(item, keep) for item in obj] else: return obj new_dict = recursively_prune_dict_keys(old_dict, keep) conv_json=new_dict["payment"] print json.dumps(conv_json) </code></pre>
0
2016-10-11T15:14:44Z
39,989,645
<p>It may be the neat way is to simply pick through the data, like;</p> <pre><code>new_dict = recursively_prune_dict_keys(old_dict, keep) payment = old_dict['payment'] claims = payment['clm_list']['dtl'] for claim in claims: claim['payment_amt'] = payment['payment_amt'] claim['chk_nr'] = payment['chk_nr'] print(json.dumps(claims)) </code></pre> <p>This will yield;</p> <pre><code>[{"chk_nr": "321749", "clm_id": "1A2345", "payment_amt": "20", "adj": {"adj_id": "W123"}}, {"chk_nr": "321749", "clm_id": "9999", "payment_amt": "20", "adj": {"adj_id": "X123"}}] </code></pre> <p>This contains the output you asked for, but not exactly as you may want to see it. </p> <p>First, your desired output isn't correct JSON without the square brackets <code>[]</code> that would make it a list. But, we can get rid of that by dumping each claim individually;</p> <pre><code>new_dict = recursively_prune_dict_keys(old_dict, keep) payment = old_dict['payment'] claims = payment['clm_list']['dtl'] for claim in claims: claim['payment_amt'] = payment['payment_amt'] claim['chk_nr'] = payment['chk_nr'] print(json.dumps(claim)) </code></pre> <p>This gives;</p> <pre><code>{"name": "John", "clm_id": "1A2345", "payment_amt": "20", "adj": {"adj_cd": "45", "adj_id": "W123"}, "chk_nr": "321749"} {"name": "Dilton", "clm_id": "9999", "payment_amt": "20", "adj": {"adj_cd": "5", "adj_id": "X123"}, "chk_nr": "321749"} </code></pre> <p>This is close to your desired output, except maybe for the ordering. Python dicts are not inherently ordered. You can sort them, however. So, if the ordering is important, you will want to read through <a href="http://pythoncentral.io/how-to-sort-python-dictionaries-by-key-or-value/" rel="nofollow">How to Sort Python Dictionaries by Key or Value</a></p>
0
2016-10-12T02:37:53Z
[ "python", "json", "parsing", "flatten" ]
multiple attributes being affected at once
39,980,712
<p>When I change the <code>stats.hp</code> or <code>base_stats.hp</code> value for the <code>Creature</code> class, it always sets <strong>both</strong> values at once, which is a problem because it means I cannot reset the creature's hp to it's base value. Here's some of the code that deals with this</p> <pre><code>class Stats: def __init__ (self,hp,height,strength,speed,skill,agility,perception): x = random.randint(-2,2) self.hp = hp+x x = random.randint(-10,10) self.height = height+x x = random.randint(-2,2) self.strength = strength+x x = random.randint(-2,2) self.speed = speed+x x = random.randint(-1,1) self.skill = skill+x x = random.randint(-2,2) self.agility = agility+x x = random.randint(-2,2) self.perception = perception+x class Creature: def __init__ (self,name,stats,top_image,side_image): self.name = name self.base_stats = stats self.stats = stats # More code here for rest of attributes </code></pre> <p>Maybe the problem is because the <code>Creature.stats</code> and <code>Creature.base_stats</code> are referencing the same <code>stats</code> variable?</p> <p>(edit)</p> <p>The <code>stats</code> referenced in the <code>__init__</code> of the Creature class is a <code>Stats</code> object</p>
0
2016-10-11T15:16:30Z
39,980,816
<p>The <code>stats</code> in your <code>__init__</code> function is an instance of the <code>Stats</code> class, correct? Therefore you are assigning the SAME object to <code>self.base_stats</code> and <code>self.stats</code>. Any update to either of them will affect the other, because you are changing the ONLY <code>Stats</code> object that your creature holds.</p> <pre><code>class Creature: def __init__ (self,name,stats,top_image,side_image): self.name = name self.base_stats = stats self.stats = stats </code></pre> <p>(A short demonstration of the aliasing, and of copying to avoid it, follows below.)</p>
0
2016-10-11T15:20:35Z
[ "python", "class" ]
multiple attributes being affected at once
39,980,712
<p>When I change the <code>stats.hp</code> or <code>base_stats.hp</code> value for the <code>Creature</code> class, it always sets <strong>both</strong> values at once, which is a problem because it means I cannot reset the creature's hp to it's base value. Here's some of the code that deals with this</p> <pre><code>class Stats: def __init__ (self,hp,height,strength,speed,skill,agility,perception): x = random.randint(-2,2) self.hp = hp+x x = random.randint(-10,10) self.height = height+x x = random.randint(-2,2) self.strength = strength+x x = random.randint(-2,2) self.speed = speed+x x = random.randint(-1,1) self.skill = skill+x x = random.randint(-2,2) self.agility = agility+x x = random.randint(-2,2) self.perception = perception+x class Creature: def __init__ (self,name,stats,top_image,side_image): self.name = name self.base_stats = stats self.stats = stats # More code here for rest of attributes </code></pre> <p>Maybe the problem is because the <code>Creature.stats</code> and <code>Creature.base_stats</code> are referencing the same <code>stats</code> variable?</p> <p>(edit)</p> <p>The <code>stats</code> referenced in the <code>__init__</code> of the Creature class is a <code>Stats</code> object</p>
0
2016-10-11T15:16:30Z
39,980,858
<p>Yes. They reference the same object. You could use <a href="https://docs.python.org/2/library/copy.html" rel="nofollow">copy</a>, instead.</p> <pre><code>from copy import copy self.base_stats = copy(stats) self.stats = copy(stats) </code></pre>
2
2016-10-11T15:22:27Z
[ "python", "class" ]
HTML: set id for onclick from list
39,980,917
<p>Apologies if this question has already been asked. I'm new to HTML and I'm not familiar with the words I should use to find help with.</p> <p>I'm using Flask and HTML to make a website.</p> <p>My code:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;body&gt; &lt;h3&gt;Click the result you want to investigate&lt;/h3&gt; {% for r in results %} &lt;p id=r["links"] onclick="myFunction(id)"&gt; {{r["title"]}} ({{r["address_snippet"]}}) &lt;/p&gt; {% endfor %} &lt;script&gt; function myFunction(id) { document.getElementById(id).innerHTML = "CLICKED HERE"; } &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I'm trying to print out a list of names, and etc to the screen. The user will click one name, this name will then be returned to a Python function for further analysis.</p> <p>As a first attempt, I just want to change the text from the name to "CLICKED HERE". However, regardless of which name I click, only the first entry changes.</p> <p>I can't figure out how to set the id from the container. Any help would be appreciated!</p>
0
2016-10-11T15:24:50Z
39,981,941
<p>You need to update your code like this, so you get the actual value from the variable <code>r["links"]</code>, by adding the brackets <code>{{r["links"]}}</code>:</p> <pre><code>&lt;p id="{{r["links"]}}" onclick="myFunction('{{r["links"]}}')"&gt; </code></pre>
0
2016-10-11T16:15:48Z
[ "python", "html", "string", "list", "flask" ]
How does Oracle handle accents in queries?
39,980,927
<p>I'm using python to execute the following query in Oracle:</p> <pre><code>SELECT COUNT(*) FROM TABLE WHERE DATA = 'CAMIÓN' </code></pre> <p>I'm getting a 0 when I should be getting a value different to 0 because there are rows where DATA is 'CAMIÓN'.</p> <p>If you execute the query like this:</p> <pre><code>SELECT COUNT(*) FROM TABLE WHERE DATA = 'CAMIN' </code></pre> <p>It will give you 0, so I'm thinking it might be due to the accent because it doesn't give an error, it seems oracle is removing the troubled character.</p> <p>How does Oracle handle the accents? Does it remove those?</p>
1
2016-10-11T15:25:29Z
39,981,270
<p>If alternate spelling conventions are enabled, then the accent-equivalent spelling of a word will be matched by Oracle. </p> <p>For example, the character ä has the alternative spelling ae.</p> <p>You can use <code>ctx_ddl.unset_attribute</code>/<code>ctx_ddl.set_attribute</code> to set or unset alternate spelling conventions. </p>
0
2016-10-11T15:41:54Z
[ "python", "oracle" ]
Python bcrypt package on Heroku gives AttributeError: 'module' object has no attribute 'ffi'
39,980,976
<p>I'm having a problem using bcrypt with my Flask application on Heroku. When I deploy to Heroku and go to the login route I get 500 Internal server error. It works correctly locally. How do I get the bcrypt package working on Heroku?</p> <pre><code>ERROR in app: Exception on /login [POST] Traceback (most recent call last): File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request rv = self.dispatch_request() File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/app/.heroku/python/lib/python2.7/site-packages/flask_restful/__init__.py", line 477, in wrapper resp = resource(*args, **kwargs) File "/app/.heroku/python/lib/python2.7/site-packages/flask/views.py", line 84, in view return self.dispatch_request(*args, **kwargs) File "/app/.heroku/python/lib/python2.7/site-packages/flask_restful/__init__.py", line 587, in dispatch_request resp = meth(*args, **kwargs) File "/app/app.py", line 196, in post elif bcrypt.check_password_hash(user.password, password): File "/app/.heroku/python/lib/python2.7/site-packages/flask_bcrypt.py", line 193, in check_password_hash return safe_str_cmp(bcrypt.hashpw(password, pw_hash), pw_hash) File "/app/.heroku/python/lib/python2.7/site-packages/bcrypt/__init__.py", line 82, in hashpw hashed = _bcrypt.ffi.new("char[]", 128) AttributeError: 'module' object has no attribute 'ffi' </code></pre>
0
2016-10-11T15:27:29Z
39,981,834
<p>I've found the solution. I was using the following packages: bcrypt, flask_bcrypt and py-bcrypt. So I uninstalled py-bcrypt; that package was probably in conflict with the bcrypt package.</p> <pre><code>pip uninstall py-bcrypt </code></pre>
0
2016-10-11T16:10:02Z
[ "python", "heroku", "flask", "bcrypt" ]
Python bcrypt package on Heroku gives AttributeError: 'module' object has no attribute 'ffi'
39,980,976
<p>I'm having a problem using bcrypt with my Flask application on Heroku. When I deploy to Heroku and go to the login route I get 500 Internal server error. It works correctly locally. How do I get the bcrypt package working on Heroku?</p> <pre><code>ERROR in app: Exception on /login [POST] Traceback (most recent call last): File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request rv = self.dispatch_request() File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/app/.heroku/python/lib/python2.7/site-packages/flask_restful/__init__.py", line 477, in wrapper resp = resource(*args, **kwargs) File "/app/.heroku/python/lib/python2.7/site-packages/flask/views.py", line 84, in view return self.dispatch_request(*args, **kwargs) File "/app/.heroku/python/lib/python2.7/site-packages/flask_restful/__init__.py", line 587, in dispatch_request resp = meth(*args, **kwargs) File "/app/app.py", line 196, in post elif bcrypt.check_password_hash(user.password, password): File "/app/.heroku/python/lib/python2.7/site-packages/flask_bcrypt.py", line 193, in check_password_hash return safe_str_cmp(bcrypt.hashpw(password, pw_hash), pw_hash) File "/app/.heroku/python/lib/python2.7/site-packages/bcrypt/__init__.py", line 82, in hashpw hashed = _bcrypt.ffi.new("char[]", 128) AttributeError: 'module' object has no attribute 'ffi' </code></pre>
0
2016-10-11T15:27:29Z
40,133,925
<p>I encountered a similar issue. Here is a copy of the last part of my stack trace: </p> <pre><code> self.password = User.hashed_password(password) File "/app/application/models.py", line 16, in hashed_password File "/app/.heroku/python/lib/python3.5/site-packages/flask_bcrypt.py", line 163, in generate_password_hash File "/app/.heroku/python/lib/python3.5/site-packages/bcrypt/__init__.py", line 50, in gensalt output = _bcrypt.ffi.new("unsigned char[]", 30) AttributeError: module 'bcrypt._bcrypt' has no attribute 'ffi' </code></pre> <p>I'm wondering if this issue is particular to Heroku. I was using some existing Flask boilerplate. But this issue with Bcrypt has also happened to me in previous projects when using a (different) boilerplate Flask project on Heroku.</p> <p><strong>Possible Solution 1</strong></p> <p>Play around with different dependency combinations. In one case, the issue went away when I included <code>cryptography</code> in my <code>requirements.txt</code>. But as Jean Silva had mentioned in this thread, it is possible that dependencies could be in conflict. So you might want to play with different combinations until something works.</p> <p><strong>Possible Solution 2</strong></p> <p>If using Flask, try having the <code>werkzeug.security</code> package/module to hash / check hashes as opposed to using the <code>bcrypt</code> package directly. In example below in my <code>models.py</code>, commenting out such lines and adding new ones solved the issue for me.</p> <pre><code># from index import db, bcrypt from index import db from werkzeug.security import generate_password_hash, check_password_hash class User(db.Model): id = db.Column(db.Integer(), primary_key=True) email = db.Column(db.String(255), unique=True) password = db.Column(db.String(255)) def __init__(self, email, password): self.email = email self.active = True self.password = User.hashed_password(password) @staticmethod def hashed_password(password): # return bcrypt.generate_password_hash(password) return generate_password_hash(password) @staticmethod def get_user_with_email_and_password(email, password): user = User.query.filter_by(email=email).first() # if user and bcrypt.check_password_hash(user.password, password): if user and check_password_hash(user.password, password): return user else: return None </code></pre>
0
2016-10-19T14:14:44Z
[ "python", "heroku", "flask", "bcrypt" ]
Using BeautifulSoup, is it possible to move to the parent tag when using the search for text function?
39,980,998
<p>Is it possible to move from the current position in the DOM up and down when only the text is a common identifier?</p> <pre><code>&lt;div&gt;changing text&lt;/div&gt; &lt;div&gt;fixed text&lt;/div&gt; </code></pre> <p>How do I get the text <code>changing text</code> when searching for the <code>fixed text</code> and moving up to the parent div? </p> <p>What I tried:</p> <pre><code>x = soup.body.findAll(text=re.compile('fixed text')).parent AttributeError: 'ResultSet' object has no attribute 'parent' </code></pre>
0
2016-10-11T15:28:27Z
39,981,169
<p>The error you are having is due to calling <code>parent</code> on a ResultSet, which is a list of results. If you need to have multiple results, try:</p> <pre><code>x = soup.body.find_all(text=re.compile('fixed text')) for i in x: previous_div = i.previous_sibling </code></pre> <p>If you don't want to find multiple results, just change find_all to find: </p> <pre><code>x = soup.body.find(text=re.compile('fixed text')).previous_sibling </code></pre> <p>Note that I replaced parent with previous_sibling, as the divs are on the same level.</p>
0
2016-10-11T15:37:11Z
[ "python", "beautifulsoup" ]
Using BeautifulSoup, is it possible to move to the parent tag when using the search for text function?
39,980,998
<p>Is it possible to move from the current position in the DOM up and down when only the text is a common identifier?</p> <pre><code>&lt;div&gt;changing text&lt;/div&gt; &lt;div&gt;fixed text&lt;/div&gt; </code></pre> <p>How do I get the text <code>changing text</code> when searching for the <code>fixed text</code> and moving up to the parent div? </p> <p>What I tried:</p> <pre><code>x = soup.body.findAll(text=re.compile('fixed text')).parent AttributeError: 'ResultSet' object has no attribute 'parent' </code></pre>
0
2016-10-11T15:28:27Z
39,981,226
<p>This program might do what you want:</p> <pre><code>from bs4 import BeautifulSoup import re html = '&lt;body&gt;&lt;div&gt;changing text&lt;/div&gt;&lt;div&gt;fixed text&lt;/div&gt;&lt;body&gt;' soup = BeautifulSoup(html) x = soup.body.findAll(text=re.compile('fixed text'))[0].parent.previous_sibling assert x.text == 'changing text' </code></pre>
2
2016-10-11T15:39:45Z
[ "python", "beautifulsoup" ]
Flask Installation - Error
39,981,113
<p>I tried installing Flask on a virtual environment on my PC that has Linux Mint. Ended up with this error:</p> <pre><code>*error: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/itsdangerous.py' ---------------------------------------------------------------------- Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-cMPDih/itsdangerous/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-HIVrsp-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-cMPDih/itsdangerous/* </code></pre>
-1
2016-10-11T15:34:09Z
39,981,848
<p>The error message indicates that you're not working in the virtual environment. You probably haven't activated it. You can easily test and activate it:</p> <pre><code>$ which python /usr/bin/python # oops, no virtual environment $ source /home/user/venv/bin/activate $ which python /home/user/venv/bin/python # correct $ pip install flask </code></pre> <p>You need to activate it every time. You may create a start script, for example in bash, to activate it when running a program:</p> <pre><code>#!/bin/bash source /home/user/venv/bin/activate python /home/user/venv/myproject/main.py </code></pre>
1
2016-10-11T16:11:21Z
[ "python", "pip", "virtualenv" ]
How can I display my python file in html?
39,981,157
<p>I'm using flask, tried different examples, many codes but nothing worked .. </p> <p>this is my html: </p> <pre><code>&lt;form method="post" name="prueba"&gt; &lt;div class="form-group "&gt; &lt;label class="col-sm-3 col-sm-3 control-label"&gt;Direccion IP: &lt;/label&gt; &lt;div class="col-sm-9"&gt; &lt;input type="text" class="form-control" value="{{ address }}" &gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>python file:</p> <pre><code>def get_info(): with open('/etc/network/interfaces', 'r+') as f: for line in f: found_address = line.find('address') if found_address != -1: address = line[found_address+len('address:'):] print 'Address: ', address found_network = line.find('network') if found_network != -1: network = line[found_network+len('network:'):] print 'Network: ', network found_netmask = line.find('netmask') if found_netmask != -1: netmask = line[found_netmask+len('netmask:'):] print 'Netmask: ', netmask found_broadcast = line.find('broadcast') if found_broadcast != -1: broadcast = line[found_broadcast+len('broadcast:'):] print 'Broadcast: ', broadcast return address print get_info() @app.route('/test') def showPage(): addresses = get_info() return render_template('test.html', addresses=addresses) </code></pre> <p>python file works correctly when I run it through console but when I try to display it inside my form nothing happens.</p>
-1
2016-10-11T15:36:47Z
39,981,278
<p>Could it be that you provide the data to the template as <code>addresses</code> but you call the variable in your template <code>address</code>?</p>
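<p>For illustration, a minimal sketch of keeping the names consistent on both sides (the function and template names come from the question; picking <code>address</code> as the variable name is just one option):</p> <pre><code>@app.route('/test')
def showPage():
    address = get_info()  # get_info() returns the address string
    return render_template('test.html', address=address)
</code></pre> <p>The template's <code>value="{{ address }}"</code> will then receive the value.</p>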
1
2016-10-11T15:42:16Z
[ "python", "html", "python-2.7", "flask", "debian" ]
Upload file to MS SharePoint using Python OneDrive SDK
39,981,210
<p>Is it possible to upload a file to the <strong>Shared Documents</strong> library of a <strong>Microsoft SharePoint</strong> site with the <strong><a href="https://github.com/OneDrive/onedrive-sdk-python" rel="nofollow">Python OneDrive SDK</a></strong>? </p> <p><strong><a href="https://dev.onedrive.com/readme.htm" rel="nofollow">This documentation</a></strong> says it should be (in the first sentence), but I can't make it work.</p> <p>I'm able to authenticate (with Azure AD) and upload to a <strong>OneDrive</strong> folder, but when trying to upload to a <strong>SharePoint</strong> folder, I keep getting this error:</p> <blockquote> <p>"Exception of type 'Microsoft.IdentityModel.Tokens.<strong>AudienceUriValidationFailedException</strong>' was thrown."</p> </blockquote> <p>The code I'm using that returns an object with the error:</p> <pre><code>(...authentication...) client = onedrivesdk.OneDriveClient('https://{tenant}.sharepoint.com/{site}/_api/v2.0/', auth, http) client.item(path='/drive/special/documents').children['test.xlsx'].upload('test.xlsx') </code></pre> <p><a href="http://i.stack.imgur.com/ZQEj4.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZQEj4.png" alt="where I&#39;d like to upload on the web"></a></p> <p>I can successfully upload to <code>https://{tenant}-my.sharepoint.com/_api/v2.0/</code> (notice the "<strong>-my</strong>" after the <code>{tenant}</code>) with the following code:</p> <pre><code>client = onedrivesdk.OneDriveClient('https://{tenant}-my.sharepoint.com/_api/v2.0/', auth, http) returned_item = client.item(drive='me', id='root').children['test.xlsx'].upload('test.xlsx') </code></pre> <p>How could I upload the same file to a <strong>SharePoint</strong> site?</p> <p><em>(Answers to similar questions (<a href="http://stackoverflow.com/questions/37451835/onedrive-api-refer-to-sharepoint-file-to-upload-or-download-invalid-audience">1</a>,<a href="http://stackoverflow.com/questions/37233669/onedrive-api-python-sdk-points-to-login-live-com-not-mydomain-sharepoint-com">2</a>,<a href="http://stackoverflow.com/questions/29635758/onedrive-sharepoint-oauth-invalid-audience-error">3</a>,<a href="http://stackoverflow.com/questions/39822092/which-sdk-or-api-should-i-use-to-list-and-upload-files-into-office-365-sharepoin">4</a>) on Stack Overflow are either too vague or suggest using a different API. My question is if it's possible using the OneDrive Python SDK, and if so, how to do it.)</em></p> <hr> <p><strong>Update</strong>: Here is my full code and output. (<em>Sensitive original data replaced with similarly formatted gibberish.</em>)</p> <pre><code>import re import onedrivesdk from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest # our domain (not the original) redirect_uri = 'https://example.ourdomain.net/' # our client id (not the original) client_id = "a1234567-1ab2-1234-a123-ab1234abc123" # our client secret (not the original) client_secret = 'ABCaDEFGbHcd0e1I2fghJijkL3mn4M5NO67P8Qopq+r=' resource = 'https://api.office.com/discovery/' auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize' auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token' http = onedrivesdk.HttpProvider() auth = onedrivesdk.AuthProvider(http_provider=http, client_id=client_id, auth_server_url=auth_server_url, auth_token_url=auth_token_url) should_authenticate_via_browser = False try: # Look for a saved session. If not found, we'll have to # authenticate by opening the browser. 
auth.load_session() auth.refresh_token() except FileNotFoundError as e: should_authenticate_via_browser = True pass if should_authenticate_via_browser: auth_url = auth.get_auth_url(redirect_uri) code = '' while not re.match(r'[a-zA-Z0-9_-]+', code): # Ask for the code print('Paste this URL into your browser, approve the app\'s access.') print('Copy the resulting URL and paste it below.') print(auth_url) code = input('Paste code here: ') # Parse code from URL if necessary if re.match(r'.*?code=([a-zA-Z0-9_-]+).*', code): code = re.sub(r'.*?code=([a-zA-Z0-9_-]*).*', r'\1', code) auth.authenticate(code, redirect_uri, client_secret, resource=resource) # If you have access to more than one service, you'll need to decide # which ServiceInfo to use instead of just using the first one, as below. service_info = ResourceDiscoveryRequest().get_service_info(auth.access_token)[0] auth.redeem_refresh_token(service_info.service_resource_id) auth.save_session() # Save session into a local file. # Doesn't work client = onedrivesdk.OneDriveClient( 'https://{tenant}.sharepoint.com/sites/{site}/_api/v2.0/', auth, http) returned_item = client.item(path='/drive/special/documents') .children['test.xlsx'] .upload('test.xlsx') print(returned_item._prop_dict['error_description']) # Works, uploads to OneDrive instead of SharePoint site client2 = onedrivesdk.OneDriveClient( 'https://{tenant}-my.sharepoint.com/_api/v2.0/', auth, http) returned_item2 = client2.item(drive='me', id='root') .children['test.xlsx'] .upload('test.xlsx') print(returned_item2.web_url) </code></pre> <p>Output:</p> <pre><code>Exception of type 'Microsoft.IdentityModel.Tokens.AudienceUriValidationFailedException' was thrown. https://{tenant}-my.sharepoint.com/personal/user_domain_net/_layouts/15/WopiFrame.aspx?sourcedoc=%1ABCDE2345-67F8-9012-3G45-6H78IJKL9M01%2N&amp;file=test.xlsx&amp;action=default </code></pre>
4
2016-10-11T15:39:00Z
40,137,811
<p>I finally found a solution, with the help of (<em>SO user</em>) sytech.</p> <p>The answer to my original question is that using the original <strong><a href="https://github.com/OneDrive/onedrive-sdk-python" rel="nofollow">Python OneDrive SDK</a></strong>, it's <strong>not possible</strong> to upload a file to the <code>Shared Documents</code> folder of a <code>SharePoint Online</code> site (at the moment of writing this): when the SDK queries the <a href="https://dev.onedrive.com/auth/aad_oauth.htm#step-3-discover-the-onedrive-for-business-resource-uri" rel="nofollow"><strong>resource discovery service</strong></a>, it drops all services whose <code>service_api_version</code> is not <code>v2.0</code>. However, I get the SharePoint service with <code>v1.0</code>, so it's dropped, although it could be accessed using API v2.0 too.</p> <p><strong>However</strong>, by extending the <code>ResourceDiscoveryRequest</code> class (in the OneDrive SDK), we can create a workaround for this. I managed to <strong>upload a file</strong> this way:</p> <pre><code>import json import re import onedrivesdk import requests from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest, \ ServiceInfo # our domain (not the original) redirect_uri = 'https://example.ourdomain.net/' # our client id (not the original) client_id = "a1234567-1ab2-1234-a123-ab1234abc123" # our client secret (not the original) client_secret = 'ABCaDEFGbHcd0e1I2fghJijkL3mn4M5NO67P8Qopq+r=' resource = 'https://api.office.com/discovery/' auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize' auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token' # our sharepoint URL (not the original) sharepoint_base_url = 'https://{tenant}.sharepoint.com/' # our site URL (not the original) sharepoint_site_url = sharepoint_base_url + 'sites/{site}' file_to_upload = 'C:/test.xlsx' target_filename = 'test.xlsx' class AnyVersionResourceDiscoveryRequest(ResourceDiscoveryRequest): def get_all_service_info(self, access_token, sharepoint_base_url): headers = {'Authorization': 'Bearer ' + access_token} response = json.loads(requests.get(self._discovery_service_url, headers=headers).text) service_info_list = [ServiceInfo(x) for x in response['value']] # Get all services, not just the ones with service_api_version 'v2.0' # Filter only on service_resource_id sharepoint_services = \ [si for si in service_info_list if si.service_resource_id == sharepoint_base_url] return sharepoint_services http = onedrivesdk.HttpProvider() auth = onedrivesdk.AuthProvider(http_provider=http, client_id=client_id, auth_server_url=auth_server_url, auth_token_url=auth_token_url) should_authenticate_via_browser = False try: # Look for a saved session. If not found, we'll have to # authenticate by opening the browser. 
auth.load_session() auth.refresh_token() except FileNotFoundError as e: should_authenticate_via_browser = True pass if should_authenticate_via_browser: auth_url = auth.get_auth_url(redirect_uri) code = '' while not re.match(r'[a-zA-Z0-9_-]+', code): # Ask for the code print('Paste this URL into your browser, approve the app\'s access.') print('Copy the resulting URL and paste it below.') print(auth_url) code = input('Paste code here: ') # Parse code from URL if necessary if re.match(r'.*?code=([a-zA-Z0-9_-]+).*', code): code = re.sub(r'.*?code=([a-zA-Z0-9_-]*).*', r'\1', code) auth.authenticate(code, redirect_uri, client_secret, resource=resource) service_info = AnyVersionResourceDiscoveryRequest().\ get_all_service_info(auth.access_token, sharepoint_base_url)[0] auth.redeem_refresh_token(service_info.service_resource_id) auth.save_session() client = onedrivesdk.OneDriveClient(sharepoint_site_url + '/_api/v2.0/', auth, http) # Get the drive ID of the Documents folder. documents_drive_id = [x['id'] for x in client.drives.get()._prop_list if x['name'] == 'Documents'][0] items = client.item(drive=documents_drive_id, id='root') # Upload file uploaded_file_info = items.children[target_filename].upload(file_to_upload) </code></pre> <p>Authenticating for a different service gives you a different token.</p>
2
2016-10-19T17:21:32Z
[ "python", "sharepoint", "onedrive", "onedrive-api" ]
Python/selenium - locate elements 'By' - specify locator strategy with variable?
39,981,228
<p>Having created a selenium browser and retrieved a web page, this works fine:</p> <pre><code>... if selenium_browser.find_element(By.ID, 'id_name'): print "found" ... </code></pre> <p>given a tuple like this:</p> <pre><code>tup = ('ID', 'id_name') </code></pre> <p>I'd like to be able to locate elements like this: </p> <pre><code>if selenium_browser.find_element(By.tup[0], tup[1]): </code></pre> <p>but I get this error</p> <pre><code>AttributeError: type object 'By' has no attribute 'tup' </code></pre> <p>How can I do this without having to write:</p> <pre><code>if tup[0] == 'ID': selenium_browser.find_element(By.ID, tup[1]) ... elif tup[0] == 'CLASS_NAME': selenium_browser.find_element(By.CLASS_NAME, tup[1]) ... elif tup[0] == 'LINK_TEXT': etc etc </code></pre> <p><a href="http://selenium-python.readthedocs.io/api.html?highlight=#module-selenium.webdriver.common.by" rel="nofollow">http://selenium-python.readthedocs.io/api.html?highlight=#module-selenium.webdriver.common.by</a></p>
1
2016-10-11T15:39:58Z
39,981,670
<p>If you want to directly provide a tuple to <code>find_element</code>, add a <code>*</code> in front:</p> <pre><code>locator = (By.ID, 'id') element = driver.find_element(*locator) </code></pre> <p>Or with the method provided as a string:</p> <pre><code>locator = ('ID', 'id_name') driver.find_element(getattr(By, locator[0]), locator[1]) </code></pre>
3
2016-10-11T16:00:59Z
[ "python", "selenium" ]
Python/selenium - locate elements 'By' - specify locator strategy with variable?
39,981,228
<p>Having created a selenium browser and retrieved a web page, this works fine:</p> <pre><code>... if selenium_browser.find_element(By.ID, 'id_name'): print "found" ... </code></pre> <p>given a tuple like this:</p> <pre><code>tup = ('ID', 'id_name') </code></pre> <p>I'd like to be able to locate elements like this: </p> <pre><code>if selenium_browser.find_element(By.tup[0], tup[1]): </code></pre> <p>but I get this error</p> <pre><code>AttributeError: type object 'By' has no attribute 'tup' </code></pre> <p>How can I do this without having to write:</p> <pre><code>if tup[0] == 'ID': selenium_browser.find_element(By.ID, tup[1]) ... elif tup[0] == 'CLASS_NAME': selenium_browser.find_element(By.CLASS_NAME, tup[1]) ... elif tup[0] == 'LINK_TEXT': etc etc </code></pre> <p><a href="http://selenium-python.readthedocs.io/api.html?highlight=#module-selenium.webdriver.common.by" rel="nofollow">http://selenium-python.readthedocs.io/api.html?highlight=#module-selenium.webdriver.common.by</a></p>
1
2016-10-11T15:39:58Z
39,981,750
<p>Your syntax is off.</p> <p>The docs say this should be used like this: <code>find_element(by='id', value=None)</code></p> <p>So instead of </p> <pre><code>if selenium_browser.find_element(By.tup[0], tup[1]): </code></pre> <p>You should do</p> <pre><code>if selenium_browser.find_element(tup[0], tup[1]): #or if selenium_browser.find_element(by=tup[0], value=tup[1]): </code></pre> <p>You may or may not need to lowercase the <code>by</code> element, i.e. <code>tup[0].lower()</code></p>
1
2016-10-11T16:05:15Z
[ "python", "selenium" ]
Comparing numbers give the wrong result in Python
39,981,237
<p>sorry if this is a terrible question, but I am really new to programming. I am attempting a short little test program.</p> <p>If I enter any value less than 24, it does print the "You will be old..." statement. If I enter any value greater than 24 (ONLY up to 99), it prints the "you are old" statement.</p> <p>The problem is if you enter a value of 100 or greater, it prints the "You will be old before you know it." statement.</p> <pre><code>print ('What is your name?') myName = input () print ('Hello, ' + myName) print ('How old are you?, ' + myName) myAge = input () if myAge &gt; ('24'): print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre>
3
2016-10-11T15:40:23Z
39,981,337
<p>The string <code>'100'</code> is indeed less than the string <code>'24'</code>, because <code>'1'</code> is "alphabetically" smaller than <code>'2'</code>. You need to compare <em>numbers</em>.</p> <pre><code>my_age = int(input()) if my_age &gt; 24: </code></pre>
1
2016-10-11T15:44:51Z
[ "python", "if-statement" ]
Comparing numbers give the wrong result in Python
39,981,237
<p>sorry if this is a terrible question, but I am really new to programming. I am attempting a short little test program.</p> <p>If I enter any value less than 24, it does print the "You will be old..." statement. If I enter any value greater than 24 (ONLY up to 99), it prints the "you are old" statement.</p> <p>The problem is if you enter a value of 100 or greater, it prints the "You will be old before you know it." statement.</p> <pre><code>print ('What is your name?') myName = input () print ('Hello, ' + myName) print ('How old are you?, ' + myName) myAge = input () if myAge &gt; ('24'): print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre>
3
2016-10-11T15:40:23Z
39,981,347
<p>You're testing a string value <code>myAge</code> against another string value <code>'24'</code>, as opposed to integer values.</p> <pre><code>if myAge &gt; ('24'): print('You are old, ' + myName) </code></pre> <p>Should be</p> <pre><code>if int(myAge) &gt; 24: print('You are old, {}'.format(myName)) </code></pre> <p>In Python, you can use greater-than / less-than comparisons on strings, but they don't work how you might think. So if you want to test the value of the integer representation of the string, use <code>int(the_string)</code>:</p> <pre><code>&gt;&gt;&gt; "2" &gt; "1" True &gt;&gt;&gt; "02" &gt; "1" False &gt;&gt;&gt; int("02") &gt; int("1") True </code></pre> <p>You may have also noticed that I changed <code>print('You are old, ' + myName)</code> to <code>print('You are old, {}'.format(myName))</code> -- You should become accustomed to this style of string formatting, as opposed to doing string concatenation with <code>+</code> -- You can read more about it in <a href="https://docs.python.org/3.5/library/string.html#custom-string-formatting">the docs.</a> But it really doesn't have anything to do with your core problem.</p>
5
2016-10-11T15:45:38Z
[ "python", "if-statement" ]
Comparing numbers give the wrong result in Python
39,981,237
<p>sorry if this is a terrible question, but I am really new to programming. I am attempting a short little test program.</p> <p>If I enter any value less than 24, it does print the "You will be old..." statement. If I enter any value greater than 24 (ONLY up to 99), it prints the "you are old" statement.</p> <p>The problem is if you enter a value of 100 or greater, it prints the "You will be old before you know it." statement.</p> <pre><code>print ('What is your name?') myName = input () print ('Hello, ' + myName) print ('How old are you?, ' + myName) myAge = input () if myAge &gt; ('24'): print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre>
3
2016-10-11T15:40:23Z
39,981,354
<pre><code>print ('What is your name?') myName = input () print ('Hello, ' + myName) print ('How old are you?, ' + myName) myAge = input () if int(myAge) &gt; 24: print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre> <p>Just a small thing about your code. You should convert the input from <code>myAge</code> to an integer (<code>int</code>) <em>(number)</em> and then compare that number to the number 24.;</p> <p>Also, you should usually not add strings together as it is consider <em>non-pythonic</em> and it slow. Try something like <code>print ('Hello, %s' % myName)</code> instead of <code>print ('Hello, ' + myName)</code>. </p> <p><a href="https://www.tutorialspoint.com/python/python_strings.htm" rel="nofollow">Python Strings Tutorial</a></p>
1
2016-10-11T15:45:53Z
[ "python", "if-statement" ]
Comparing numbers give the wrong result in Python
39,981,237
<p>sorry if this is a terrible question, but I am really new to programming. I am attempting a short little test program.</p> <p>If I enter any value less than 24, it does print the "You will be old..." statement. If I enter any value greater than 24 (ONLY up to 99), it prints the "you are old" statement.</p> <p>The problem is if you enter a value of 100 or greater, it prints the "You will be old before you know it." statement.</p> <pre><code>print ('What is your name?') myName = input () print ('Hello, ' + myName) print ('How old are you?, ' + myName) myAge = input () if myAge &gt; ('24'): print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre>
3
2016-10-11T15:40:23Z
39,981,383
<p>Use <code>int(myAge)</code>. I always use <code>raw_input</code> and also, you dont have to print your questions. Instead put the question in with your raw_inputs like so:</p> <pre><code>myName = raw_input("Whats your name?") print ('Hello, ' + myName) myAge = raw_input('How old are you?, ' + myName) if int(myAge) &gt; ('24'): print('You are old, ' + myName) else: print('You will be old before you know it.') </code></pre>
0
2016-10-11T15:47:36Z
[ "python", "if-statement" ]
Python csv; get max length of all columns then lengthen all other columns to that length
39,981,239
<p>I have a directory full of data files in the following format:</p> <blockquote> <pre><code>4 2 5 7 1 4 9 8 8 7 7 1 4 1 4 1 5 2 0 1 0 0 0 0 0 </code></pre> </blockquote> <p>They are separated by tabs. The third and fourth columns contain useful information until they reach 'zeroes'.. At which point, they are arbitrarily filled with zeroes until the end of file. </p> <p>I want to get the length of the longest column where we do not count the 'zero' values on the bottom. In this case, the longest column is column 3 with a length of 7 because we disregard the zeros at the bottom. Then I want to transform all the other columns by packing zeroes on them until their length is equal to the length of my third column (besides column 4 b/c it is already filled with zeroes). Then I want to get rid of all the zeros beyond my max length in all my columns.. So my desired file output will be as follows:</p> <blockquote> <pre><code>4 2 5 7 1 4 9 8 8 7 7 1 0 4 1 4 0 0 1 5 0 0 2 0 0 0 1 0 </code></pre> </blockquote> <p>These files consist of ~ 100,000 rows each on average... So processing them takes a while. Can't really find an efficient way of doing this. Because of the way file-reading goes (line-by-line), am I right in assuming that in order to find the length of a column, we need to process in the worst case, N rows? Where N is the length of the entire file. When I just ran a script to print out all the rows, it took about 10 seconds per file... Also, I'd like to modify the file in-place (over-write). </p>
1
2016-10-11T15:40:29Z
39,982,131
<p>Hi, I would use Pandas and NumPy for this:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('csv.csv', delimiter='\t') df = df.replace(0,np.nan) while df.tail(1).isnull().all().all() == True: df=df[0:len(df)-1] df=df.replace(np.nan,0) df.to_csv('csv2.csv',sep='\t', index=False) #i used a different name just for testing </code></pre> <p>You create a DataFrame with your csv data.<br> There are a lot of built-in functions that deal with <code>NaN</code> values, so change all <code>0</code>s to <code>nan</code>. Then start at the end <code>tail(1)</code> and check if the row is <code>all()</code> <code>NaN</code>. If so, copy the DF less the last row and repeat. I did this with 100k rows and it takes only a few seconds.</p>
0
2016-10-11T16:27:08Z
[ "python", "csv" ]
Python csv; get max length of all columns then lengthen all other columns to that length
39,981,239
<p>I have a directory full of data files in the following format:</p> <blockquote> <pre><code>4 2 5 7 1 4 9 8 8 7 7 1 4 1 4 1 5 2 0 1 0 0 0 0 0 </code></pre> </blockquote> <p>They are separated by tabs. The third and fourth columns contain useful information until they reach 'zeroes'.. At which point, they are arbitrarily filled with zeroes until the end of file. </p> <p>I want to get the length of the longest column where we do not count the 'zero' values on the bottom. In this case, the longest column is column 3 with a length of 7 because we disregard the zeros at the bottom. Then I want to transform all the other columns by packing zeroes on them until their length is equal to the length of my third column (besides column 4 b/c it is already filled with zeroes). Then I want to get rid of all the zeros beyond my max length in all my columns.. So my desired file output will be as follows:</p> <blockquote> <pre><code>4 2 5 7 1 4 9 8 8 7 7 1 0 4 1 4 0 0 1 5 0 0 2 0 0 0 1 0 </code></pre> </blockquote> <p>These files consist of ~ 100,000 rows each on average... So processing them takes a while. Can't really find an efficient way of doing this. Because of the way file-reading goes (line-by-line), am I right in assuming that in order to find the length of a column, we need to process in the worst case, N rows? Where N is the length of the entire file. When I just ran a script to print out all the rows, it took about 10 seconds per file... Also, I'd like to modify the file in-place (over-write). </p>
1
2016-10-11T15:40:29Z
39,983,331
<p>Here are two ways to do it:</p> <pre><code># Read in the lines and fill in the zeroes with open('input.txt') as input_file: data = [[item.strip() or '0' for item in line.split('\t')] for line in input_file] # Delete lines near the end that are only zeroes while set(data[-1]) == {'0'}: del data[-1] # Write out the lines with open('output.txt', 'wt') as output_file: output_file.writelines('\t'.join(line) + '\n' for line in data) </code></pre> <p>Or</p> <pre><code>with open('input.txt') as input_file: with open('output.txt', 'wt') as output_file: for line in input_file: line = line.split('\t') line = [item.strip() or '0' for item in line] if all(item == '0' for item in line): break output_file.write('\t'.join(line)) output_file.write('\n') </code></pre>
0
2016-10-11T17:40:16Z
[ "python", "csv" ]
Don't know what happened with this : "pymysql.err.ProgrammingError: (1064..."
39,981,248
<p>Here is my code:</p> <pre><code>from urllib.request import urlopen from bs4 import BeautifulSoup as bs import re import pymysql resp = urlopen("https://en.wikipedia.org/wiki/Main_Page").read().decode("utf-8") soup = bs(resp ,"html.parser") listUrls = soup.findAll("a", href=re.compile("^/wiki/")) for url in listUrls: if not re.search('\.(jpg|JPG)$', url['href']): conn = pymysql.connect( host='127.0.0.1', user='root', password='', db='wikiurl', charset='utf8mb4' ) try: with conn.cursor() as cursor: sql = "insert into 'wikiurl'('urlname','urlhref') VALUES (%s , %s)" cursor.execute(sql,(url.get_text(), "https://en.wikipedia.org" + url["href"])) conn.commit() finally: conn.close() </code></pre> <p>Error:</p> <blockquote> <p>pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''wikiurl'('urlname','urlhref') VALUES ('Wikipedia' , '<a href="https://en.wikipedia.org/w" rel="nofollow">https://en.wikipedia.org/w</a>' at line 1")</p> </blockquote>
-1
2016-10-11T15:40:47Z
39,981,374
<p>First of all, I recommend paying close attention to whitespace.</p> <p>Try this:</p> <pre><code>sql = "INSERT INTO wikiurl (urlname, urlhref) VALUES (%s, %s)" </code></pre> <p>Also notice that single quotation marks are not necessary around the table name. See: <a href="http://dev.mysql.com/doc/refman/5.7/en/insert.html" rel="nofollow">MySQL Insert documentation.</a></p> <p>Edit: And you don't need quotation marks around the column names.</p>
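<p>If you ever do need to quote identifiers in MySQL (for example, a table named after a reserved word), the quoting character is the backtick, not the single quote; single quotes are for string literals. A quick sketch of how that would look here:</p> <pre><code># backticks quote identifiers; single quotes would turn them into string literals
sql = "INSERT INTO `wikiurl` (`urlname`, `urlhref`) VALUES (%s, %s)"
cursor.execute(sql, (url.get_text(), "https://en.wikipedia.org" + url["href"]))
</code></pre>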
0
2016-10-11T15:46:55Z
[ "python", "mysql" ]
Don't know what happened with this : "pymysql.err.ProgrammingError: (1064..."
39,981,248
<p>Here is my code:</p> <pre><code>from urllib.request import urlopen from bs4 import BeautifulSoup as bs import re import pymysql resp = urlopen("https://en.wikipedia.org/wiki/Main_Page").read().decode("utf-8") soup = bs(resp ,"html.parser") listUrls = soup.findAll("a", href=re.compile("^/wiki/")) for url in listUrls: if not re.search('\.(jpg|JPG)$', url['href']): conn = pymysql.connect( host='127.0.0.1', user='root', password='', db='wikiurl', charset='utf8mb4' ) try: with conn.cursor() as cursor: sql = "insert into 'wikiurl'('urlname','urlhref') VALUES (%s , %s)" cursor.execute(sql,(url.get_text(), "https://en.wikipedia.org" + url["href"])) conn.commit() finally: conn.close() </code></pre> <p>Error:</p> <blockquote> <p>pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''wikiurl'('urlname','urlhref') VALUES ('Wikipedia' , '<a href="https://en.wikipedia.org/w" rel="nofollow">https://en.wikipedia.org/w</a>' at line 1")</p> </blockquote>
-1
2016-10-11T15:40:47Z
39,982,001
<p>I think your SQL syntax has some error, but it is not easy to debug it directly.</p> <p>I recommend you use this method to print the real SQL string that is sent to the MySQL server. From the PyMySQL manual:</p> <p><code>mogrify(self, query, args=None)</code></p> <p>'''Returns the exact string that is sent to the database by calling the execute() method. This method follows the extension to the DB API 2.0 followed by Psycopg.'''</p> <p>For example, you can use:</p> <p><code>print cursor.mogrify(sql, (url.get_text(), "https://en.wikipedia.org" + url["href"]))</code></p> <p>Good luck!</p>
0
2016-10-11T16:19:50Z
[ "python", "mysql" ]
Counting how many words are over a certain limit in a string Python
39,981,262
<p>Thank you all for help on previous part. I have now finished that. However changing title slightly and rewording question I now say that this is my code.</p> <pre><code>s = raw_input("Enter your text: ") longestWord = max(s.split(), key=len) k = list(s) count = len(k) wordsOver = [] over = count - 140 def numLen(s, n): return sum(1 for x in s.split() if len(x) &gt;= n) for x in s.split(): if len(x) &gt;= n: wordsOver.insert(0, x) val = numLen(s, 7) if count &gt; 140: print ("Sorry, that is more than 140 characters.") print ("You had a total of " + str(count) + " characters.") print ("That's " + str(over) + " over the max allowed.") print ("You're longest word was, " + longestWord) print ("There are " + str(val) + " words over 7 characters.") print ("They were:") print (wordsOver) print ("You may want to consider changing them for shorter words.") else: print ("That's short enough!") </code></pre> <p>So now what I'm looking for is why the displaying of the words that are over isn't working, why and how to fix it. BTW for a little help it's the wordsOver bit that's broken</p>
1
2016-10-11T15:41:30Z
39,981,303
<p>isalpha() doesn't return true for whitespace.</p> <p>Additionally, there are a whole slew of better ways to do this. You should look at the Counter class in collections. It's a dictionary that will provide you with what you need.</p>
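<p>As a rough sketch of that suggestion (hypothetical variable names; it tallies word lengths with <code>Counter</code> and collects the long words separately):</p> <pre><code>from collections import Counter

s = raw_input("Enter your text: ")
words = s.split()
length_counts = Counter(len(word) for word in words)  # e.g. how many 5-letter words, 8-letter words, ...
words_over = [word for word in words if len(word) &gt;= 7]

print(length_counts)
print(words_over)
</code></pre>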
-1
2016-10-11T15:43:27Z
[ "python", "python-2.7" ]
Counting how many words are over a certain limit in a string Python
39,981,262
<p>Thank you all for help on previous part. I have now finished that. However changing title slightly and rewording question I now say that this is my code.</p> <pre><code>s = raw_input("Enter your text: ") longestWord = max(s.split(), key=len) k = list(s) count = len(k) wordsOver = [] over = count - 140 def numLen(s, n): return sum(1 for x in s.split() if len(x) &gt;= n) for x in s.split(): if len(x) &gt;= n: wordsOver.insert(0, x) val = numLen(s, 7) if count &gt; 140: print ("Sorry, that is more than 140 characters.") print ("You had a total of " + str(count) + " characters.") print ("That's " + str(over) + " over the max allowed.") print ("You're longest word was, " + longestWord) print ("There are " + str(val) + " words over 7 characters.") print ("They were:") print (wordsOver) print ("You may want to consider changing them for shorter words.") else: print ("That's short enough!") </code></pre> <p>So now what I'm looking for is why the displaying of the words that are over isn't working, why and how to fix it. BTW for a little help it's the wordsOver bit that's broken</p>
1
2016-10-11T15:41:30Z
39,981,387
<p>In the for loop you're looking at each character and checking whether that character <code>i</code> is alphabetic using <code>.isalpha()</code>, which will return False when it encounters a space, since a space is not alphabetic.</p> <p>See: <a href="https://docs.python.org/2/library/stdtypes.html#str.isalpha" rel="nofollow">https://docs.python.org/2/library/stdtypes.html#str.isalpha</a></p>
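<p>A quick demonstration, plus a sketch of iterating over whole words instead of characters (word-based iteration sidesteps the whitespace issue entirely):</p> <pre><code>&gt;&gt;&gt; ' '.isalpha()
False
&gt;&gt;&gt; 'hello'.isalpha()
True
&gt;&gt;&gt; [w for w in 'some long sentence here'.split() if len(w) &gt;= 7]
['sentence']
</code></pre>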
0
2016-10-11T15:47:43Z
[ "python", "python-2.7" ]
Counting how many words are over a certain limit in a string Python
39,981,262
<p>Thank you all for help on previous part. I have now finished that. However changing title slightly and rewording question I now say that this is my code.</p> <pre><code>s = raw_input("Enter your text: ") longestWord = max(s.split(), key=len) k = list(s) count = len(k) wordsOver = [] over = count - 140 def numLen(s, n): return sum(1 for x in s.split() if len(x) &gt;= n) for x in s.split(): if len(x) &gt;= n: wordsOver.insert(0, x) val = numLen(s, 7) if count &gt; 140: print ("Sorry, that is more than 140 characters.") print ("You had a total of " + str(count) + " characters.") print ("That's " + str(over) + " over the max allowed.") print ("You're longest word was, " + longestWord) print ("There are " + str(val) + " words over 7 characters.") print ("They were:") print (wordsOver) print ("You may want to consider changing them for shorter words.") else: print ("That's short enough!") </code></pre> <p>So now what I'm looking for is why the displaying of the words that are over isn't working, why and how to fix it. BTW for a little help it's the wordsOver bit that's broken</p>
1
2016-10-11T15:41:30Z
40,118,903
<p>Welcome to SO! </p> <p>I think this is what you want to do. In your numLen function you need to run the collecting loop inside the function, before the <code>return</code> statement, so that it actually executes and has <code>n</code> in scope. I've also used <code>append()</code> rather than <code>insert()</code> when adding to your list: append does the legwork of finding out where the end of your list is and puts what you pass to it onto the end (<code>insert(0, x)</code> would keep prepending to the front instead).</p> <pre><code>s = raw_input("Enter your text: ") longestWord = max(s.split(), key=len) k = list(s) count = len(k) wordsOver = [] over = count - 140 def numLen(s, n): for x in s.split(): if len(x) &gt;= n: wordsOver.append(x) return len(wordsOver) val = numLen(s, 7) if count &gt; 140: print ("Sorry, that is more than 140 characters.") print ("You had a total of " + str(count) + " characters.") print ("That's " + str(over) + " over the max allowed.") print ("You're longest word was, \"" + longestWord + "\"") print ("There are " + str(val) + " words over 7 characters.") print ("They were:") print (wordsOver) print ("You may want to consider changing them for shorter words.") else: print ("That's short enough!") </code></pre>
1
2016-10-18T22:07:35Z
[ "python", "python-2.7" ]
How to configure Django settings for different environments in a modular way?
39,981,292
<p>I have already searched the web about this, but the answers don't really seem to apply to my case.</p> <p>I have 3 different config files - Dev, Staging, Prod (of course)</p> <p>I want to modularize settings properly without repetition. So, I have made base_settings.py and I am importing it into dev_settings.py, stg_settings.py etc. </p> <p><strong>Problem - How to invoke the scripts on each env properly with minimal changes?</strong></p> <p>Right now, I'm doing this (taking dev env as an example)- </p> <p><strong>python manage.py runserver --settings=core.dev_settings</strong></p> <p>This works so far, but I am not convinced how good a workaround this is, because wsgi.py and a couple of other services have - </p> <p><strong>os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')</strong></p> <p>I am looking to do something without changing the config files of other services. Thank you everyone in advance.</p> <p>PS - I've tried to be as clear as possible, but please excuse me if anything is unclear.</p>
0
2016-10-11T15:42:59Z
39,981,423
<p>Just set the <code>DJANGO_SETTINGS_MODULE</code> environment variable to your desired config module.</p> <p>That won't require you to change any of the other services' config files, and you don't even need to change the Django settings files.</p>
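<p>For instance (a sketch using the module names from your question; adjust the production module name to whatever yours is actually called):</p> <pre><code># development box
export DJANGO_SETTINGS_MODULE=core.dev_settings
python manage.py runserver

# production box
export DJANGO_SETTINGS_MODULE=core.prod_settings
</code></pre> <p>This works because the <code>os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')</code> line in wsgi.py only sets a default: when the variable is already present in the environment, <code>setdefault</code> leaves it alone, so the environment value wins without touching wsgi.py.</p>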
1
2016-10-11T15:48:59Z
[ "python", "django", "wsgi", "django-settings" ]
How to configure Django settings for different environments in a modular way?
39,981,292
<p>I have already searched the web about this, but the answers don't really seem to apply to my case.</p> <p>I have 3 different config files - Dev, Staging, Prod (of course)</p> <p>I want to modularize settings properly without repetition. So, I have made base_settings.py and I am importing it into dev_settings.py, stg_settings.py etc. </p> <p><strong>Problem - How to invoke the scripts on each env properly with minimal changes?</strong></p> <p>Right now, I'm doing this (taking dev env as an example)- </p> <p><strong>python manage.py runserver --settings=core.dev_settings</strong></p> <p>This works so far, but I am not convinced how good a workaround this is, because wsgi.py and a couple of other services have - </p> <p><strong>os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')</strong></p> <p>I am looking to do something without changing the config files of other services. Thank you everyone in advance.</p> <p>PS - I've tried to be as clear as possible, but please excuse me if anything is unclear.</p>
0
2016-10-11T15:42:59Z
39,981,777
<p>Have a look at the <a href="https://django-configurations.readthedocs.io/en/stable/" rel="nofollow">Django Configurations</a> package.</p>
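<p>As a rough illustration of the class-based style that package encourages (a sketch from memory; the exact bootstrap steps for manage.py/wsgi.py are in the package docs and may differ between versions):</p> <pre><code>from configurations import Configuration

class Base(Configuration):
    # settings shared by every environment
    TIME_ZONE = 'UTC'

class Dev(Base):
    DEBUG = True

class Prod(Base):
    DEBUG = False
</code></pre> <p>You then pick the class per environment with an environment variable (e.g. <code>DJANGO_CONFIGURATION=Dev</code>) alongside <code>DJANGO_SETTINGS_MODULE</code>, instead of maintaining separate settings files.</p>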
0
2016-10-11T16:06:53Z
[ "python", "django", "wsgi", "django-settings" ]
How do i verify user in database in google appengine app ( can anyone recommend the best way to do user authentication for google appengine app)?
39,981,297
<p>I have following code in <code>models.py</code> i can sort database by only key but not by string ?</p> <pre><code>from google.appengine.ext import ndb class Roles(ndb.Model): name = ndb.StringProperty() owner = ndb.KeyProperty(kind='User') created = ndb.DateTimeProperty(required=True, auto_now_add = True) class RESTMeta: user_owner_property = 'owner' include_output_properties = ['name'] class Users(ndb.Model): name = ndb.StringProperty() email = ndb.StringProperty() password = ndb.StringProperty() roles = ndb.KeyProperty(kind='Roles') owner = ndb.KeyProperty(kind='User') created = ndb.DateTimeProperty(required=True, auto_now_add = True) class RESTMeta: user_owner_property = 'owner' include_output_properties = ['name'] </code></pre> <p>And the following in <code>api.py</code></p> <pre><code>app = webapp2.WSGIApplication([ RESTHandler( '/api/roles', # The base URL for this model's endpoints models.Roles, # The model to wrap permissions={ 'GET': PERMISSION_ANYONE, 'POST': PERMISSION_ANYONE, 'PUT': PERMISSION_OWNER_USER, 'DELETE': PERMISSION_ADMIN }, # Will be called for every PUT, right before the model is saved (also supports callbacks for GET/POST/DELETE) put_callback=lambda model, data: model ), RESTHandler( '/api/users', # The base URL for this model's endpoints models.Users, # The model to wrap permissions={ 'GET': PERMISSION_ANYONE, 'POST': PERMISSION_ANYONE, 'PUT': PERMISSION_OWNER_USER, 'DELETE': PERMISSION_ADMIN }, # Will be called for every PUT, right before the model is saved (also supports callbacks for GET/POST/DELETE) put_callback=lambda model, data: model )],debug=True, config = config) </code></pre> <p>I can successfully <code>get</code> by key in <code>api\users?q=roles=key('key')</code></p> <p>How do i get specific user by <strong>String</strong> <code>api\users?q=email=String('String')</code></p> <p><strong>The Question is how do I do user auth for google appengine app</strong></p>
0
2016-10-11T15:43:12Z
39,981,830
<p>You seem to be asking so many questions in one.</p> <p>To get user by email, simply do this:</p> <pre><code>users = Users.query(Users.email=='query_email').fetch(1) #note fetch() always returns a list if users: user_exists = True else: user_exists = False </code></pre> <p>Please note, you may need to <a href="https://cloud.google.com/appengine/docs/java/config/indexconfig#creating_indexes_using_the_development_server" rel="nofollow">update your datastore index</a> to support that query. The easiest way to do it is to first run the code in your local development server, and the index will be automatically updated for you.</p> <p>To answer your second questions, for user authentication I would recommend <a href="https://docs.djangoproject.com/en/1.10/topics/auth/" rel="nofollow">Django's in-built User Authentication</a>. Please note that you can always run vanilla <a href="https://cloud.google.com/python/django/flexible-environment" rel="nofollow">django on appengine with a Flexible VM</a> using <a href="https://cloud.google.com/sql/docs/" rel="nofollow">CloudSQL</a> instead of the Datastore.</p> <p>Alternatively, you could use the <a href="https://cloud.google.com/appengine/docs/python/users/" rel="nofollow">Appengine Users API</a>, though your users would need to have Google Accounts.</p>
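<p>If you go the Users API route, the basic flow looks roughly like this (a sketch; it assumes your users sign in with Google Accounts):</p> <pre><code>from google.appengine.api import users

def current_user_or_login_url(request_path):
    user = users.get_current_user()
    if user:
        # user.email() / user.nickname() identify the signed-in Google account
        return user, None
    # not signed in: send the visitor to this URL, then back to request_path
    return None, users.create_login_url(request_path)
</code></pre>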
1
2016-10-11T16:09:48Z
[ "python", "django", "google-app-engine", "jinja2" ]
Acquire x-axis values in Python matplotlib
39,981,340
<p>Before I ask this question, I have already searched the internet for a while without success. To many experts this surely appears to be fairly simple. Please bear with me. </p> <p>I am having a plot made by matplotlib and it is returned as a plf.Figure. See the following: </p> <pre><code>def myplotcode(): x = np.linspace(0, 2*np.pi) y = np.sin(x) print("x in external function", x) y2 = np.cos(x) fig = plf.Figure() ax = fig.add_subplot(111) ax.plot(x, y, 'bo', x, y2,'gs') ax.set_ylabel("Some function") return fig, ax </code></pre> <p>What I want to do in the function that call this one is to be able to get all these x values from the returned ax or fig. Well, I understand one simple solution is just to return x array too. However, I am trying to keep the number of returns as small as possible.</p> <p>So, my question is: Can I acquire this x-axis array from fig or ax?</p> <p>Thank you so much in advance. </p>
0
2016-10-11T15:45:13Z
39,981,475
<p>You can do:</p> <pre><code>l = ax.axes.lines[0] # If you have more curves, just change the index x, y = l.get_data() </code></pre> <p>That will give you two arrays, with the <code>x</code> and <code>y</code> data</p>
0
2016-10-11T15:51:39Z
[ "python", "matplotlib" ]
"Extra data" error trying to load a JSON file with Python
39,981,370
<p>I'm trying to load the following JSON file, named <code>archived_sensor_data.json</code>, into Python:</p> <pre><code>[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}][{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}, {"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475816130.812}, "id": "2f896308-884d-4a5f-a8d2-ee68fc4c625a", "name": "Susan Wagner"}] </code></pre> <p>The script I'm trying to run (from the same directory) is as follows:</p> <pre><code>import json reconstructed_data = json.load(open("archived_sensor_data.json")) </code></pre> <p>However, I get the following error:</p> <pre><code>ValueError: Extra data: line 1 column 164 - line 1 column 324 (char 163 - 323) </code></pre> <p>I'm not sure where this is going wrong, because from <a href="http://www.json.org/" rel="nofollow">www.json.org</a> it seems like valid JSON syntax for an array of dictionaries. Any ideas what is causing the error?</p>
0
2016-10-11T15:46:39Z
39,981,489
<p>It is not valid JSON; there are two lists in here. One is:</p> <pre><code>[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}] </code></pre> <p>and the other one is:</p> <pre><code>[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}, {"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475816130.812}, "id": "2f896308-884d-4a5f-a8d2-ee68fc4c625a", "name": "Susan Wagner"}] </code></pre> <p>You can see the validation error here: <a href="http://www.jsoneditoronline.org/?id=569644c48d5753ceb21daf66483d80cd" rel="nofollow">http://www.jsoneditoronline.org/?id=569644c48d5753ceb21daf66483d80cd</a></p>
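<p>If you cannot change how the file is produced, one workaround is to decode the concatenated arrays one after another with <code>json.JSONDecoder.raw_decode</code>. A sketch, assuming the file is exactly back-to-back JSON arrays as shown (file name taken from your question):</p> <pre><code>import json

decoder = json.JSONDecoder()
with open("archived_sensor_data.json") as f:
    text = f.read()

records = []
idx = 0
while idx &lt; len(text):
    obj, idx = decoder.raw_decode(text, idx)
    records.extend(obj)          # each decoded value is a list of dicts
    while idx &lt; len(text) and text[idx].isspace():
        idx += 1                 # skip any whitespace between the arrays

print(len(records))
</code></pre>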
1
2016-10-11T15:52:31Z
[ "python", "json" ]
Segfault while embedding Python using scrapy in C
39,981,373
<p>I'm currently trying to run a scrapy spider (in Python) from C code, but I'm constantly getting a segfault while testing.</p> <p>I have this code that allows me to run a simple Python function from C:</p> <pre><code> int main() { PyObject *retour, *module, *fonction, *arguments; char *resultat; Py_Initialize(); PySys_SetPath("."); module = PyImport_ImportModule("test"); fonction = PyObject_GetAttrString(module, "hellowrld"); arguments = Py_BuildValue("(s)", "hello world"); retour = PyEval_CallObject(fonction, arguments); PyArg_Parse(retour, "s", &amp;resultat); printf("Resultat: %s\n", resultat); Py_Finalize(); return 0; } </code></pre> <p>If I call the hellowrld function that looks like this in test.py</p> <pre><code>def hellowrld(arg): return arg + '!!' </code></pre> <p>it works fine, but I'm trying to run the function runspider_with_url from this code:</p> <pre><code> # -*- coding: utf-8 -*- import scrapy from scrapy.crawler import CrawlerProcess import lxml.etree import lxml.html class GetHtmlSpider(scrapy.Spider): name = "getHtml" def __init__(self, var_url=None, *args, **kwargs): super(GetHtmlSpider, self).__init__(*args, **kwargs) self.start_urls = [var_url] def parse(self,response): root = lxml.html.fromstring(response.body) print lxml.html.tostring(root) def runspider_with_url(var_url): process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' }) process.crawl(GetHtmlSpider,var_url=var_url) process.start() return "It works!!" </code></pre> <p>And when I try to execute it, I keep getting segmentation fault errors.</p> <p>I tried to add this at the end of my Python file:</p> <pre><code>foo = runspider_with_url("http://www.google.com/") print foo </code></pre> <p>This call works when I execute it in bash with the command: </p> <pre><code>python -c 'import get_html; get_html.runspider_with_url("https://www.wikipedia.org")' </code></pre> <p>So I could ask my C program to execute the Python with bash and write the result to a .txt, but I'd rather not.</p> <p>Thanks</p>
0
2016-10-11T15:46:50Z
39,983,098
<p>There is a segfault at <code>PyArg_Parse(retour, "s", &amp;resultat);</code> because the script raises an exception and <code>retour</code> is NULL. After wrapping that in some error detection, the problem was that <code>PySys_SetPath(".");</code> replaces the existing <code>sys.path</code> so things like <code>scrapy</code> could no longer be imported. So I fixed that with a quick call to python code that inserts in <code>sys.path</code>. Adding additional error handling along the way, you get</p> <pre><code>#include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; #include &lt;Python.h&gt; int main() { PyObject *retour, *module, *fonction, *arguments; char *resultat; printf("starting\n"); Py_Initialize(); //PySys_SetPath("."); if(PyRun_SimpleString("import sys;sys.path.insert(0, '.')")) { printf("path expansion failed\n"); return(2); } module = PyImport_ImportModule("test"); if(module == NULL) { printf("import failed\n"); PyErr_Print(); return(2); } fonction = PyObject_GetAttrString(module, "runspider_with_url"); if(fonction == NULL) { printf("could not find function\n"); return(2); } arguments = Py_BuildValue("(s)", "hello world"); if(arguments == NULL) { printf("arg parsing failed\n"); return(2); } printf("Calling\n"); retour = PyEval_CallObject(fonction, arguments); printf("Returned\n"); // note: need to release arguments Py_DECREF(arguments); if(retour == NULL) { printf("It all went wrong\n"); PyErr_Print(); return(2); } PyArg_Parse(retour, "s", &amp;resultat); printf("Resultat: %s\n", resultat); Py_Finalize(); return 0; } </code></pre>
0
2016-10-11T17:25:27Z
[ "python", "c", "scrapy", "embed" ]
Which is the most efficent of matching and replacing with an identifier every three new lines?
39,981,438
<p>I am working with some .txt files that don't have structure (they are messy); they represent a number of pages. In order to give them some structure I would like to identify the number of pages, since the file itself doesn't have them. This can be done by replacing every three newlines with some annotation like: </p> <pre><code>\n page: N \n </code></pre> <p>Where <code>N</code> is the number. This is what my files look like, and I also tried with a simple <a href="http://pastebin.com/K0PdJ1TG" rel="nofollow"><code>replace</code></a>. However, this function gets confused and does not give me the expected format, which would be something like <a href="http://pastebin.com/ZCw63uyE" rel="nofollow">this</a>. Any idea how to replace the spaces with some kind of identifier, just so I can parse them and get the position of some information (the page)?</p> <p>I also tried this:</p> <p>import re</p> <pre><code>replaced = re.sub('\b(\s+\t+)\b', '\n\n\n', text) print (replaced) </code></pre>
2
2016-10-11T15:49:58Z
39,981,717
<p>If the format is as regular as you state in your problem description:</p> <blockquote> <p>Replace every occurrence of three newlines <code>\n</code> with <code>page: N</code></p> </blockquote> <p>You wouldn't have to use the <code>re</code> module. Something as simple as the following would do the trick:</p> <pre><code>&gt;&gt;&gt; s='aaaaaaaaaaaaaaaaa\n\n\nbbbbbbbbbbbbbbbbbbbbbbb\n\n\nccccccccccccccccccccccc' &gt;&gt;&gt; pages = s.split('\n\n\n') &gt;&gt;&gt; ''.join(page + '\n\tpage: {}\n'.format(i + 1) for i, page in enumerate(pages)) 'aaaaaaaaaaaaaaaaa\n\tpage: 1\nbbbbbbbbbbbbbbbbbbbbbbb\n\tpage: 2\nccccccccccccccccccccccc\n\tpage: 3\n' </code></pre> <p>I suspect, though, that your format is less regular than that, but you'll have to include more details before I can give a good answer for that.</p> <p>If you want to split with messy whitespace (which I'll define as <em>at least</em> three newlines with any other whitespace mixed in), you can replace <code>s.split('\n\n\n')</code> with:</p> <pre><code>re.split(r'(?:\n\s*?){3,}', s) </code></pre>
2
2016-10-11T16:03:24Z
[ "python", "regex", "python-3.x", "nlp", "text-processing" ]
Creating a function for Road Pricing
39,981,446
<blockquote> <p>The Transport Authority is implementing a new Road Pricing system. The authorities decided that the cars will be charged based on distance travelled, on a per mile basis. A car will be charged $0.50/mi, a van $2.1/mi and taxis travel for free. Create a function to determine how much a particular vehicle would be charged based on a particular distance. The function should take as input the type of the car and the distance travelled, and return the charged price.</p> </blockquote> <p>The problem above is what I have to do and the code below is what I have so far. The issue I have is that I'm receiving an error for not identifying car, van, and taxi before. But if I do so, it would print out all 3 situations. How would I be able to print out 1 outcome depending on the input for y?</p> <pre><code> def Road_Pricing(): x = float(input("How many miles is driven?")) y = (input("What car was driven?")) if "car": print (.50*x) if "van": print (2.1*x) if "taxi": print ("Free") Road_Pricing() </code></pre>
-2
2016-10-11T15:50:11Z
39,981,523
<p>Are you trying to compare the variable against some strings? </p> <p><code>if y == "car":</code></p>
0
2016-10-11T15:53:55Z
[ "python", "function" ]
Creating a function for Road Pricing
39,981,446
<blockquote> <p>The Transport Authority is implementing a new Road Pricing system. The authorities decided that the cars will be charged based on distance travelled, on a per mile basis. A car will be charged $0.50/mi, a van $2.1/mi and taxis travel for free. Create a function to determine how much a particular vehicle would be charged based on a particular distance. The function should take as input the type of the car and the distance travelled, and return the charged price.</p> </blockquote> <p>The problem above is what I have to do and the code below is what I have so far. The issue I have is that I'm receiving an error for not identifying car, van, and taxi before. But if I do so, it would print out all 3 situations. How would I be able to print out 1 outcome depending on the input for y?</p> <pre><code> def Road_Pricing(): x = float(input("How many miles is driven?")) y = (input("What car was driven?")) if "car": print (.50*x) if "van": print (2.1*x) if "taxi": print ("Free") Road_Pricing() </code></pre>
-2
2016-10-11T15:50:11Z
39,981,591
<p>You have a problem with your <code>if</code> statement: firstly, you are not checking any condition; secondly, <code>input</code> returns a <code>string</code>.</p> <p>Try this out instead:</p> <pre><code>def Road_Pricing(): x = float(input("How many miles are driven?")) type = input("What car was driven?") if type == 'car': print ("%f$ to pay" % (.50*x)) elif type == 'van': print ("%f$ to pay" % (2.1*x)) elif type == 'taxi': print ("Free ride") pass </code></pre>
0
2016-10-11T15:57:13Z
[ "python", "function" ]
Creating a function for Road Pricing
39,981,446
<blockquote> <p>The Transport Authority is implementing a new Road Pricing system. The authorities decided that the cars will be charged based on distance travelled, on a per mile basis. A car will be charged $0.50/mi, a van $2.1/mi and taxis travel for free. Create a function to determine how much a particular vehicle would be charged based on a particular distance. The function should take as input the type of the car and the distance travelled, and return the charged price.</p> </blockquote> <p>The problem above is what I have to do and the code below is what I have so far. The issue I have is that I'm receiving an error for not identifying car, van, and taxi before. But if I do so, it would print out all 3 situations. How would I be able to print out 1 outcome depending on the input for y?</p> <pre><code> def Road_Pricing(): x = float(input("How many miles is driven?")) y = (input("What car was driven?")) if "car": print (.50*x) if "van": print (2.1*x) if "taxi": print ("Free") Road_Pricing() </code></pre>
-2
2016-10-11T15:50:11Z
39,981,743
<p>The requirement is (emphasis mine):</p> <blockquote> <p>...... The function should take as input the type of the car and the distance travelled, and <strong>return</strong> the charged price.</p> </blockquote> <p>This means:</p> <ol> <li>The function should take two parameters: one for the type of the car and one for the distance travelled.</li> <li>You must return the price (which I suspect is a floating point number), rather than simply printing it.</li> </ol> <p>Another problem in your code is that the expressions in your <code>if</code> statements aren't checking the value of <code>car_type</code>. Also, you should use more meaningful variable names (for example, <code>distance</code> and <code>car_type</code> instead of <code>x</code> and <code>y</code>).</p> <pre><code>def road_pricing(car_type, distance): if car_type == "car": return .50 * distance if car_type == "van": return 2.1 * distance if car_type == "taxi": return 0 car_type = raw_input("What car was driven? ") distance = float(input("How many miles is driven? ")) print road_pricing(car_type, distance) </code></pre>
1
2016-10-11T16:04:58Z
[ "python", "function" ]
Random and Itertools
39,981,461
<p>I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so:</p> <pre><code>import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f </code></pre> <p>However this produces the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#12&gt;", line 1, in &lt;module&gt; for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() </code></pre> <p>Can anyone advise the amendments needed to make this function as intended?</p>
2
2016-10-11T15:50:47Z
39,981,534
<p>The issue is that <code>itertools.chain</code> creates <a class='doc-link' href="http://stackoverflow.com/documentation/python/292/generators#t=201610111555299054924">generators</a>, rather than lists. These generators are lazily evaluated, each element exists only briefly and is discarded after use. The <code>len</code> function is not defined for generators because all the elements don't exist at once.</p> <p>To fix your issue, you'll have to explicitly convert the chain output to a list. </p> <pre><code>c = itertools.chain(range(30, 54), range(1, 24)) for f in random.sample(list(c), 48): </code></pre>
2
2016-10-11T15:54:11Z
[ "python", "random", "itertools", "sample" ]
Random and Itertools
39,981,461
<p>I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so:</p> <pre><code>import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f </code></pre> <p>However this produces the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#12&gt;", line 1, in &lt;module&gt; for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() </code></pre> <p>Can anyone advise the amendments needed to make this function as intended?</p>
2
2016-10-11T15:50:47Z
39,981,536
<p>As the <a href="https://docs.python.org/3/library/random.html#random.sample" rel="nofollow"><code>random.sample</code></a> documentation states,</p> <blockquote> <p>Returns a <em>k</em> length list of unique elements chosen from the population sequence or set</p> </blockquote> <p>It requires a sequence or a set so that it can sample from the entire population, but <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools.chain</code></a> returns an iterator which could even be infinite. So <code>sample</code> cannot determine the actual size of the population. That is why you are getting this error.</p> <p>To fix this, you can simply create a list or a tuple and pass it to <code>sample</code>, like this</p> <pre><code>for f in random.sample(list(itertools.chain(range(30, 54), range(1, 24))), 48) </code></pre> <hr> <p><strong>Note:</strong> The other problem in your code is that, the sampling quantity cannot be bigger than the actual population.</p> <pre><code>&gt;&gt;&gt; len(range(30, 54)) 24 &gt;&gt;&gt; len(range(1, 24)) 23 </code></pre> <p>So the population size is 47 and you are sampling 48 elements.</p>
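<p>Putting the two fixes together (materialise the chain once, and don't ask for more items than exist), a sketch could look like this; sampling the whole population effectively shuffles it:</p> <pre><code>population = list(itertools.chain(range(30, 54), range(1, 24)))  # 47 values
for f in random.sample(population, len(population)):
    print f
</code></pre>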
2
2016-10-11T15:54:20Z
[ "python", "random", "itertools", "sample" ]
Random and Itertools
39,981,461
<p>I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so:</p> <pre><code>import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f </code></pre> <p>However this produces the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#12&gt;", line 1, in &lt;module&gt; for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() </code></pre> <p>Can anyone advise the amendments needed to make this function as intended?</p>
2
2016-10-11T15:50:47Z
39,981,540
<p>You have to convert <code>population</code> to a <code>list</code> explicitly. You can try it like this:</p> <pre><code>n = len(list(population)) </code></pre>
1
2016-10-11T15:54:28Z
[ "python", "random", "itertools", "sample" ]
Random and Itertools
39,981,461
<p>I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so:</p> <pre><code>import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f </code></pre> <p>However this produces the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#12&gt;", line 1, in &lt;module&gt; for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() </code></pre> <p>Can anyone advise the amendments needed to make this function as intended?</p>
2
2016-10-11T15:50:47Z
39,981,550
<p>A quick fix would be as follows:</p> <pre><code>for f in random.sample(list(itertools.chain(range(30, 54), range(1, 24))), 48): </code></pre> <p>The problem with your code is that to sample from some iterable randomly, you need to know its length first, but <code>itertools.chain</code> is an iterable that provides only the <code>__iter__</code> method and no <code>__len__</code>. </p> <p>Basically, to do <code>random.choice</code> or <code>random.sample</code> or anything that involves <em>choosing</em> elements at random, you'll need a sequence or a set, which means that sequence should be <em>finite</em>. Iterables that don't provide the <code>__len__</code> method are considered infinite as you'll never know how many elements will be produced until the iterable's exhausted, if at all. </p>
1
2016-10-11T15:54:50Z
[ "python", "random", "itertools", "sample" ]
Random and Itertools
39,981,461
<p>I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so:</p> <pre><code>import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f </code></pre> <p>However this produces the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#12&gt;", line 1, in &lt;module&gt; for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() </code></pre> <p>Can anyone advise the amendments needed to make this function as intended?</p>
2
2016-10-11T15:50:47Z
39,981,651
<p>I'm pretty sure you can get the length of a generator type object with the following syntax (note the sample size cannot exceed the 47 available values):</p> <pre><code>print sum(1 for x in (f for f in random.sample(list(itertools.chain(range(30, 54), range(1, 24))), 47))) </code></pre>
1
2016-10-11T15:59:51Z
[ "python", "random", "itertools", "sample" ]
Error Installing ICE on Python
39,981,517
<p>I have python 3.5 and 2.7 installed (I don't know if this might be the problem) and I need to use ZeroC Ice and when I do:</p> <pre><code>sudo pip install ice </code></pre> <p>I get the following error in the terminal:</p> <pre><code>Collecting ice Using cached ice-0.0.1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "/tmp/pip-build-MoRI5C/ice/setup.py", line 32, in &lt;module&gt; import ice File "ice.py", line 46, in &lt;module&gt; import urllib.parse ImportError: No module named parse ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-MoRI5C/ice/ </code></pre> <p>But parse is already installed (I guess):</p> <pre><code>sudo -H pip install parse Requirement already satisfied (use --upgrade to upgrade): parse in /usr/local/lib/python2.7/dist-packages </code></pre> <p>How can I solve this please?</p> <p>THanks</p>
0
2016-10-11T15:53:43Z
39,982,216
<p>This is a simple mistake; try this instead:</p> <pre><code>pip install zeroc-ice </code></pre> <p>This should do the job.</p>
0
2016-10-11T16:32:29Z
[ "python", "ice" ]
Is there a better way?
39,981,564
<p>I have a file, say <code>abc.txt</code>, with the below format:</p> <pre><code>+ : @group2 : ALL + : @grp_xvz : ALL + : @group_abc_app: ALL + : @group_1_abc : ALL + : @group_2_xyz : ALL + : @group_3_def@@nmo_hosts : ALL </code></pre> <p>I need to grep for specific entries and check if the file size of abc.txt &gt; 220 </p> <pre><code>+ : @group_2_xyz : ALL or + : @group_3_def@@nmo_hosts : ALL and filesize of abc.txt &gt; 220 </code></pre> <p>In bash I can do it like this</p> <pre><code>if grep --quiet "+[[:blank:]]:[[:blank:]]@group_2_xyz[[:blank:]]*:[[:blank:]]ALL" abc.txt || grep --quiet +[[:blank:]]:[[:blank:]]@group_3_def[@A-Za-z0-9_][[:blank:]]:[[:blank:]] abc.txt and [ du -sb abc.txt | awk '{print $1}' -gt 220 ]; then ..do..something </code></pre> <p>How do I do the same in Python? I was trying to use <code>re.findall</code>, but I'm not sure if I can use multiple conditions there, or maybe someone can suggest a better way?</p> <pre><code>re.findall(r'+\s*:\s*@group_2_xyz\s*:\s*ALL', open('abc.txt','r').read()) </code></pre> <p>Thanks in advance.</p>
0
2016-10-11T15:56:10Z
39,982,430
<p>Try this:</p> <pre><code>import os, re match = re.search( r'^\+ *: *(@group_2_xyz|@group_3_def@@nmo_hosts) *: *ALL$', open('abc.txt').read(), re.M ) print(os.stat('abc.txt').st_size &gt; 220, match is not None) </code></pre>
0
2016-10-11T16:44:33Z
[ "python" ]
Dynamic submodule import failing in __main__
39,981,655
<p>I have a module that I want to run with <code>python -m modulename command</code> with commands referring to submodules launched by importing them. The file layout is as follows:</p> <pre><code>mainmodule/: __init__.py (empty) submodule1.py submodule2.py __main__.py </code></pre> <p>with <code>__main__.py</code> as follows:</p> <pre><code>import sys, importlib commands = {"cmd1": "submodule1", "cmd2": "submodule2"} try: cmd = modules[sys.argv[1]] except IndexError: cmd = "cmd1" except Error: pass module = importlib.import_module("."+cmd, "mainmodule") </code></pre> <ul> <li><code>python -m mainmodule</code> launches <code>submodule1</code> as expected;</li> <li><code>python -m mainmodule cmd1</code> works;</li> <li><code>python -m mainmodule.submodule1</code> works;</li> <li><code>python -m mainmodule.submodule2</code> <strong>works too;</strong></li> </ul> <p><strong>BUT</strong> <code>python -m mainmodule cmd2</code> fails:</p> <pre><code>ImportError: No module named mainmodule.submodule2 </code></pre> <p>Why? I've tried changing the <code>import</code> value expression in many ways, it always fails in the same way.</p>
0
2016-10-11T16:00:11Z
39,981,922
<p>Change:</p> <pre><code> cmd = modules[sys.argv[1]] </code></pre> <p>to:</p> <pre><code> cmd = commands[sys.argv[1]] </code></pre> <p>Other than that one typo fix, I can't get the same error. Are you maybe not running python from the directory above <code>mainmodule</code>? Or do you maybe not have <code>mainmodule</code> installed properly?</p>
0
2016-10-11T16:14:59Z
[ "python", "python-2.7" ]
Getting non-ASCII characters to work in Ren'Py functions
39,981,682
<p>I'm translating a Ren'Py game, which involves redefining a function that converts numbers to written-out words in a particular language. Those strings are then handled and inserted into the game text by the main code of the game (which I can't modify).</p> <p>My problem is that when I return strings that contain non-ascii characters like <code>ö</code> or <code>ü</code>, the game throws an exception when it gets to that point.</p> <pre><code>UnicodeDecodeError: 'utf8' codec can't decode bytes in position 2-4: unexpected end of data </code></pre> <p>Using character codes like <code>\uC3B6</code> throws no exception, but I end up with a placeholder box instead of the character I want.</p> <p>Is there any way to make the function return these characters properly without having access to the remaining code?</p>
1
2016-10-11T16:01:48Z
39,985,790
<p>Turns out I was using the wrong escape character and the wrong hex codes. And I had to use unicode strings. <code>u'\xF6'</code> and <code>u'\xFC'</code> work perfectly fine for the two characters I was trying to get.</p>
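<p>For anyone hitting the same thing, a minimal sketch of what the fix looks like in a Python 2 / Ren'Py context (the function and the German number words are just an illustration, not part of the game's real code):</p> <pre><code># -*- coding: utf-8 -*-
def number_word(n):
    words = {1: u'eins', 12: u'zw\xf6lf'}  # u'' strings, with \xf6 standing in for ö
    return words.get(n, unicode(n))
</code></pre>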
0
2016-10-11T20:04:47Z
[ "python", "unicode", "utf-8", "renpy" ]
Converting a nested array into a pandas dataframe in python
39,981,740
<p>I'm attempting to convert several dictionaries contained in an array to a pandas dataframe. The dicts are saved as such: </p> <pre><code>[[{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.309886', u'longitude': u'0.496902'},u'month': u'2015-01'},{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.306209', u'longitude': u'0.490475'},u'month': u'2015-02'}]] </code></pre> <p>I'm trying to format my data to the format below:</p> <pre><code> Category Latitude Longitude 0 anti-social 524498.597 175181.644 1 anti-social 524498.597 175181.644 2 anti-social 524498.597 175181.644 . ... ... . ... ... . ... ... </code></pre> <p>I've tried to force the data into a dataframe with the below code but it doesn't produce the intended output.</p> <pre><code>for i in crimes: for x in i: print pd.DataFrame([x['category'], x['location']['latitude'], x['location']['longitude']]) </code></pre> <p>I'm very new to Python so any links/tips to help me build this dataframe would be highly appreciated!</p>
1
2016-10-11T16:04:36Z
39,981,900
<p>You are on the right track, but you are creating a new dataframe for each row and not giving the proper <code>columns</code>. The following snippet should work:</p> <pre><code>import pandas as pd import numpy as np crimes = [[{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.309886', u'longitude': u'0.496902'},u'month': u'2015-01'},{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.306209', u'longitude': u'0.490475'},u'month': u'2015-02'}]] # format into a flat list formatted_crimes = [[x['category'], x['location']['latitude'], x['location']['longitude']] for i in crimes for x in i] # now pass the formatted list to DataFrame and label the columns df = pd.DataFrame(formatted_crimes, columns=['Category', 'Latitude', 'Longitude']) </code></pre> <p>The result is:</p> <pre><code> Category Latitude Longitude 0 anti-social-behaviour 52.309886 0.496902 1 anti-social-behaviour 52.306209 0.490475 </code></pre>
1
2016-10-11T16:13:55Z
[ "python", "python-2.7" ]
Caffe NetParameter parsing error
39,981,754
<p>I tried to load a model and I got this error:</p> <p>Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file: /home/Energetiks/builds/convolutional-pose-machines-release/testing/python/../../model/_trained_MPI/pose_iter_985000_addLEEDS.caffemodel *** Check failure stack trace: *** Aborted (core dumped)</p> <p>pose_iter_985000_addLEEDS.caffemodel exists and the path is right.</p>
-1
2016-10-11T16:05:28Z
40,016,039
<p>The problem is solved. I downloaded the file one more time and it works! Maybe I downloaded the wrong file the first time.</p>
1
2016-10-13T08:33:52Z
[ "python", "caffe", "pycaffe" ]
What does "TypeError: [foo] object is not callable" mean?
39,981,766
<p>I am trying to iterate through a list of Facebook postIDs, and I am getting the following error:</p> <p>TypeError: 'list' object is not callable</p> <p>Here is my code:</p> <pre><code>MCTOT_postIDs = [["126693553344_10155053097028345"], ["126693553344_10155050947628345"], ["126693553344_10155048566893345"], ["126693553344_10155044677673345"], ["126693553344_10155042089618345"], ["126693553344_10155035937853345"], ["126693553344_10155023046098345"]] g = facebook.GraphAPI() g.access_token = g.get_app_access_token(APP_ID, APP_SECRET) for x in MCTOT_postIDs(): g.get_object('fields="message, likes, shares') </code></pre> <p>I know I am making a basic error somewhere, but I can't seem to figure it out. Thanks!</p>
-1
2016-10-11T16:06:19Z
39,981,944
<p>EDIT: For the other error, the function g.get_object(...) requires one more argument that you are not passing. You're passing the fields, but you must pass an ID as an argument too: the x of your loop, which contains the id.</p> <p>It should probably go like:</p> <pre><code>g.get_object('fields="message, likes, shares', x) </code></pre> <p>or maybe </p> <pre><code>g.get_object('fields="message, likes, shares', x[0]) </code></pre> <p>if you need to pass it as a string, not a list (your list is a list of lists), but this should be a topic for a new question...</p> <hr> <p>The error message says:</p> <pre><code>TypeError: 'list' object is not callable </code></pre> <p>So look again at your code: when you try to do the for ... in loop, you're trying to call your list, as if it were a function.</p> <p>You're doing</p> <pre><code>for x in MCTOT_postIDs(): </code></pre> <p>When you should be doing</p> <pre><code>for x in MCTOT_postIDs: </code></pre> <p>The list is not callable; the () is used for calling a function (meaning: execute the function). Remove it and it should work.</p>
4
2016-10-11T16:16:08Z
[ "python" ]
Print output to a file using another Python module
39,981,775
<p>I have two python modules: <code>buildContent.py</code>, which contains code that results in the output I want, and <code>buildRun.py</code>, which I run in order to redirect the output to a file.</p> <p>I'm trying to save the output from <code>buildContent.py</code> to a file and I did something like this in <code>buildRun.py</code>:</p> <pre><code>import buildContent import sys with open('out.xhtml', 'w') as f: sys.stdout = f print buildContent </code></pre> <p>I can see my output in the console but the file result is:</p> <pre><code>&lt;module 'buildContent' from 'here's my path to the file'&gt; </code></pre> <p>What should I do?</p>
0
2016-10-11T16:06:51Z
39,982,111
<p>The redirection is working properly. If you replace your print statement with a string, you will see that it has worked.</p> <p>The reason for that output is that you are not calling any functions within buildContent, merely importing it. </p> <p>The solution is to call the code in buildContent from buildRun, at the point where your print statement currently is.</p> <p>See <a href="https://stackoverflow.com/questions/1027714/how-to-execute-a-file-within-the-python-interpreter">this question</a> for an example.</p>
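<p>A minimal sketch of what that could look like, assuming <code>buildContent</code> exposes a function (here called <code>build()</code>, a made-up name; use whatever actually produces your output) that does the printing:</p> <pre><code>import sys
import buildContent

with open('out.xhtml', 'w') as f:
    old_stdout = sys.stdout
    sys.stdout = f
    try:
        buildContent.build()  # run the code that prints the content
    finally:
        sys.stdout = old_stdout  # restore stdout so later prints go back to the console
</code></pre>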
1
2016-10-11T16:25:31Z
[ "python" ]
Print output to a file using another Python module
39,981,775
<p>I have two python modules: <code>buildContent.py</code>, which contains code that results in the output I want, and <code>buildRun.py</code>, which I run in order to redirect the output to a file.</p> <p>I'm trying to save the output from <code>buildContent.py</code> to a file and I did something like this in <code>buildRun.py</code>:</p> <pre><code>import buildContent import sys with open('out.xhtml', 'w') as f: sys.stdout = f print buildContent </code></pre> <p>I can see my output in the console but the file result is:</p> <pre><code>&lt;module 'buildContent' from 'here's my path to the file'&gt; </code></pre> <p>What should I do?</p>
0
2016-10-11T16:06:51Z
39,982,907
<p>Instead of printing <code>buildContent</code>, just execute that module with the required parameters. Not sure of the content of <code>buildContent</code>, but something like this should work:</p> <pre><code>buildContent(data) </code></pre> <p>This way the code inside <code>buildContent</code> will run on the <code>"data"</code> and print the results (if the print statements are given in the module). If you did not include print statements in <code>buildContent</code>, collect the output into a variable and print that variable. Something like this:</p> <pre><code>var = buildContent(data) print var </code></pre> <p>If you do not need any data at all to run <code>buildContent</code>, just run:</p> <pre><code>buildContent() </code></pre>
0
2016-10-11T17:13:08Z
[ "python" ]
HTML Scraping with BeautifulSoup
39,981,811
<p>I'm trying to search </p> <pre><code>&lt;span&gt;Status:&lt;/span&gt;, &lt;span&gt;&lt;strong&gt;Moored&lt;/strong&gt;&lt;/span&gt;, &lt;strong&gt;Moored&lt;/strong </code></pre> <p>And pull out <code>Moored</code>. I've tried a lot of things but haven't been able to get it. Most recently <code>find(attrs={'span':'Status:'})</code>, but that just returns <code>[]</code>. There are a lot of things tagged with <code>&lt;strong&gt;</code> in the HTML, but this is the only <code>&lt;strong&gt;</code> after a <code>&lt;span&gt;Status:</code></p> <p>Edit: the HTML snippet above is a result of running <code>a = soup.find_all(attrs={'class':'vertical-offset-10 group-ib'})</code> then iterating over each row with <code>a = (row.findChildren())</code></p> <p>In the HTML: </p> <pre><code>&lt;div class="vertical-offset-10 group-ib"&gt; &lt;span&gt;Status:&lt;/span&gt; &lt;span&gt;&lt;strong&gt;Moored&lt;/strong&gt;&lt;/span&gt; &lt;/div&gt; </code></pre> <p>To clarify, all I want is the string <code>Moored</code></p>
0
2016-10-11T16:08:45Z
39,981,989
<pre><code>res = soup.find_all('span', text="Status:") res[0].parent.find('strong').text </code></pre> <p><code>soup.find_all</code> searches for all <code>&lt;span&gt;</code> tags whose text is <code>"Status:"</code>; the second line then goes to the first match's parent <code>&lt;div&gt;</code>, finds the <code>&lt;strong&gt;</code> tag inside it, and gets that tag's text contents. </p>
2
2016-10-11T16:18:55Z
[ "python", "html", "beautifulsoup" ]
how can i parse html with lxml
39,981,846
<p>I have this HTML:</p> <pre><code>&lt;td class="name-td alLeft bordR"&gt;13.10.2016, Thu&lt;span class="sp"&gt;|&lt;/span&gt;17:00&lt;/td&gt; </code></pre> <p>I want to get a date (13.10.2016) and a time (17:00).</p> <p>I'm doing this:</p> <pre><code>t = lxml.html.parse(url) nextMatchDate = t.findall(".//td[@class='bordR']")[count].text </code></pre> <p>But I'm getting an error, </p> <pre><code>IndexError: list index out of range </code></pre> <p>I think it happens because I have HTML tags inside the <code>td</code> tag.</p> <p>Could you help me please?</p>
1
2016-10-11T16:11:11Z
39,982,594
<p>The problem is in the way you check for the <code>bordR</code> class. <code>class</code> is a <em>multi-valued</em> space-delimited attribute and you have to account for other classes on an element. In XPath you should be using "contains":</p> <pre><code>.//td[contains(@class, 'bordR')] </code></pre> <p>Or, even more reliable would be to <a href="http://stackoverflow.com/a/1604480/771848">add "concat" to the partial match check</a>.</p> <p>Once you've located the element you can use <code>.text_content()</code> method to get the complete text including all the children:</p> <pre><code>In [1]: from lxml.html import fromstring In [2]: data = '&lt;td class="name-td alLeft bordR"&gt;13.10.2016, Thu&lt;span class="sp"&gt;|&lt;/span&gt;17:00&lt;/td&gt;' In [3]: td = fromstring(data) In [4]: print(td.text_content()) 13.10.2016, Thu|17:00 </code></pre> <p>To take a step further, you can <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime" rel="nofollow">load the date string into a <code>datetime</code> object</a>:</p> <pre><code>In [5]: from datetime import datetime In [6]: datetime.strptime(td.text_content(), "%d.%m.%Y, %a|%H:%M") Out[6]: datetime.datetime(2016, 10, 13, 17, 0) </code></pre>
1
2016-10-11T16:54:27Z
[ "python", "html", "parsing" ]
how can i parse html with lxml
39,981,846
<p>I have this HTML:</p> <pre><code>&lt;td class="name-td alLeft bordR"&gt;13.10.2016, Thu&lt;span class="sp"&gt;|&lt;/span&gt;17:00&lt;/td&gt; </code></pre> <p>I want to get a date (13.10.2016) and a time (17:00).</p> <p>I'm doing this:</p> <pre><code>t = lxml.html.parse(url) nextMatchDate = t.findall(".//td[@class='bordR']")[count].text </code></pre> <p>But I'm getting an error, </p> <pre><code>IndexError: list index out of range </code></pre> <p>I think it happens because I have HTML tags inside the <code>td</code> tag.</p> <p>Could you help me please?</p>
1
2016-10-11T16:11:11Z
39,982,692
<p>There's a method called <a href="http://lxml.de/api/lxml.etree._Element-class.html#itertext" rel="nofollow"><code>.itertext</code></a> that:</p> <blockquote> <p>Iterates over the text content of a subtree.</p> </blockquote> <p>So if you have an element <code>td</code> in a variable <code>td</code>, you can do this:</p> <pre><code>&gt;&gt;&gt; text = list(td.itertext()); text ['13.10.2016, Thu', '|', '17:00'] &gt;&gt;&gt; date, time = text[0].split(',')[0], text[-1] &gt;&gt;&gt; datetime_text = '{} at {}'.format(date, time) &gt;&gt;&gt; datetime_text '13.10.2016 at 17:00' </code></pre>
0
2016-10-11T16:59:30Z
[ "python", "html", "parsing" ]
Python: Installed Anaconda, but can't import numpy or matplotlib in Jupyter notebook
39,981,931
<p>I'm new to Python, so maybe there is a simple solution to this. I installed Anaconda and thought everything would be straightforward, but even though Jupyter works fine I can't import numpy and matplotlib into my notebook. Instead I get this error:</p> <pre><code>--------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-1e0540761e0c&gt; in &lt;module&gt;() ----&gt; 1 import matplotlib.pyplot as plt 2 vals = [1, 2, 3, 4] 3 plt.plot(vals) //anaconda/lib/python3.5/site-packages/matplotlib/__init__.py in &lt;module&gt;() 120 # cbook must import matplotlib only within function 121 # definitions, so it is safe to import from it here. --&gt; 122 from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label 123 from matplotlib.compat import subprocess 124 from matplotlib.rcsetup import (defaultParams, //anaconda/lib/python3.5/site-packages/matplotlib/cbook.py in &lt;module&gt;() 31 from weakref import ref, WeakKeyDictionary 32 ---&gt; 33 import numpy as np 34 import numpy.ma as ma 35 //anaconda/lib/python3.5/site-packages/numpy/__init__.py in &lt;module&gt;() 144 return loader(*packages, **options) 145 --&gt; 146 from . import add_newdocs 147 __all__ = ['add_newdocs', 148 'ModuleDeprecationWarning', //anaconda/lib/python3.5/site-packages/numpy/add_newdocs.py in &lt;module&gt;() 11 from __future__ import division, absolute_import, print_function 12 ---&gt; 13 from numpy.lib import add_newdoc 14 15 ############################################################################### //anaconda/lib/python3.5/site-packages/numpy/lib/__init__.py in &lt;module&gt;() 6 from numpy.version import version as __version__ 7 ----&gt; 8 from .type_check import * 9 from .index_tricks import * 10 from .function_base import * //anaconda/lib/python3.5/site-packages/numpy/lib/type_check.py in &lt;module&gt;() 9 'common_type'] 10 ---&gt; 11 import numpy.core.numeric as _nx 12 from numpy.core.numeric import asarray, asanyarray, array, isnan, \ 13 obj2sctype, zeros //anaconda/lib/python3.5/site-packages/numpy/core/__init__.py in &lt;module&gt;() 12 os.environ[envkey] = '1' 13 env_added.append(envkey) ---&gt; 14 from . import multiarray 15 for envkey in env_added: 16 del os.environ[envkey] ImportError: dlopen(//anaconda/lib/python3.5/site-packages/numpy/core/multiarray.so, 10): Symbol not found: _strnlen Referenced from: /anaconda/lib/python3.5/site-packages/numpy/core/../../../..//libmkl_intel_lp64.dylib Expected in: flat namespace in /anaconda/lib/python3.5/site-packages/numpy/core/../../../..//libmkl_intel_lp64.dylib </code></pre> <p>Since both packages show up in <code>$ conda list</code> its probably some kind of linking error(?), but that is unfortunately something a beginner can hardly solve for himself. Can anyone help? </p>
0
2016-10-11T16:15:11Z
39,981,977
<p>Okay, so if I correctly understand what you are saying, I propose that you add the package in the same folder your Python file is located in. If possible, add the code you have used to import the data so I can locate any possible mistakes.</p>
0
2016-10-11T16:18:11Z
[ "python", "numpy", "matplotlib", "anaconda", "jupyter-notebook" ]
Python: Installed Anaconda, but can't import numpy or matplotlib in Jupyter notebook
39,981,931
<p>I'm new to Python, so maybe there is a simple solution to this. I installed Anaconda and thought everything would be straightforward, but even though Jupyter works fine I can't import numpy and matplotlib into my notebook. Instead I get this error:</p> <pre><code>--------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-1e0540761e0c&gt; in &lt;module&gt;() ----&gt; 1 import matplotlib.pyplot as plt 2 vals = [1, 2, 3, 4] 3 plt.plot(vals) //anaconda/lib/python3.5/site-packages/matplotlib/__init__.py in &lt;module&gt;() 120 # cbook must import matplotlib only within function 121 # definitions, so it is safe to import from it here. --&gt; 122 from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label 123 from matplotlib.compat import subprocess 124 from matplotlib.rcsetup import (defaultParams, //anaconda/lib/python3.5/site-packages/matplotlib/cbook.py in &lt;module&gt;() 31 from weakref import ref, WeakKeyDictionary 32 ---&gt; 33 import numpy as np 34 import numpy.ma as ma 35 //anaconda/lib/python3.5/site-packages/numpy/__init__.py in &lt;module&gt;() 144 return loader(*packages, **options) 145 --&gt; 146 from . import add_newdocs 147 __all__ = ['add_newdocs', 148 'ModuleDeprecationWarning', //anaconda/lib/python3.5/site-packages/numpy/add_newdocs.py in &lt;module&gt;() 11 from __future__ import division, absolute_import, print_function 12 ---&gt; 13 from numpy.lib import add_newdoc 14 15 ############################################################################### //anaconda/lib/python3.5/site-packages/numpy/lib/__init__.py in &lt;module&gt;() 6 from numpy.version import version as __version__ 7 ----&gt; 8 from .type_check import * 9 from .index_tricks import * 10 from .function_base import * //anaconda/lib/python3.5/site-packages/numpy/lib/type_check.py in &lt;module&gt;() 9 'common_type'] 10 ---&gt; 11 import numpy.core.numeric as _nx 12 from numpy.core.numeric import asarray, asanyarray, array, isnan, \ 13 obj2sctype, zeros //anaconda/lib/python3.5/site-packages/numpy/core/__init__.py in &lt;module&gt;() 12 os.environ[envkey] = '1' 13 env_added.append(envkey) ---&gt; 14 from . import multiarray 15 for envkey in env_added: 16 del os.environ[envkey] ImportError: dlopen(//anaconda/lib/python3.5/site-packages/numpy/core/multiarray.so, 10): Symbol not found: _strnlen Referenced from: /anaconda/lib/python3.5/site-packages/numpy/core/../../../..//libmkl_intel_lp64.dylib Expected in: flat namespace in /anaconda/lib/python3.5/site-packages/numpy/core/../../../..//libmkl_intel_lp64.dylib </code></pre> <p>Since both packages show up in <code>$ conda list</code> its probably some kind of linking error(?), but that is unfortunately something a beginner can hardly solve for himself. Can anyone help? </p>
0
2016-10-11T16:15:11Z
39,982,352
<p>The key to your problem is possibly that you're running a pretty old Mac OS X version as <code>_strnlen</code> wasn't even available <a href="http://stackoverflow.com/q/32468480/4354477">until 10.7 release</a>. </p> <p>Anaconda is built for at least OS X 10.7 (according to <a href="https://groups.google.com/a/continuum.io/forum/m/#!topic/anaconda/QxVJifuRWiI" rel="nofollow">this</a>), so you're probably out of luck here and a possible solution would be to upgrade the system. </p>
1
2016-10-11T16:40:30Z
[ "python", "numpy", "matplotlib", "anaconda", "jupyter-notebook" ]
Generate pandas dataframe from a series of start and end dates
39,981,934
<p>I have a list of start and end dates which I want to convert into one large dataframe.</p> <p>Here is a small reproducible example of what I am trying to achieve:</p> <pre><code>import pandas as pd from pandas.tseries.offsets import * import datetime as dt dates = pd.DataFrame([[dt.datetime(2016,01,01),dt.datetime(2016,02,01)], [dt.datetime(2016,01,10), dt.datetime(2016,02,25)], [dt.datetime(2016,02,10), dt.datetime(2016,03,25)]], columns=['start', 'end']) </code></pre> <p>which gives me start and end dates such as:</p> <pre><code>In[14]: dates Out[14]: start end 0 2016-01-01 2016-02-01 1 2016-01-10 2016-02-25 2 2016-02-10 2016-03-25 </code></pre> <p>I am trying to create a dataframe with date ranges of weekdays based on those start / end dates and append them together. </p> <p>This is how I approach the problem, but it doesn't feel very pythonic:</p> <pre><code>op_series = list() for row in dates.itertuples(): time_range = pd.date_range(row.start, row.end, freq=BDay()) s = len(time_range) op_series += (zip(time_range, [row.start]*s, [row.end]*s)) df = pd.DataFrame(op_series, columns=['date', 'start', 'end']) In[4]: df.head() Out[4]: date start end 0 2016-01-01 2016-01-01 2016-02-01 1 2016-01-04 2016-01-01 2016-02-01 2 2016-01-05 2016-01-01 2016-02-01 3 2016-01-06 2016-01-01 2016-02-01 4 2016-01-07 2016-01-01 2016-02-01 </code></pre> <p>Is there a more efficient way than creating lists of data and then gluing them together?</p> <p>Thanks!</p>
0
2016-10-11T16:15:20Z
39,982,848
<p>Still a bit clumsy, but probably more efficient than yours, as it's all in numpy. Merge a Dataframe with the appropriate day diffs</p> <pre><code>df = pd.DataFrame([[dt.datetime(2016,1,1),dt.datetime(2016,2,1)], [dt.datetime(2016,1,10), dt.datetime(2016,2,25)], [dt.datetime(2016,2,10), dt.datetime(2016,3,25)]], columns=['start', 'end']) df['diff'] = (df['end'] - df['start']).dt.days arr = np.empty(0, dtype=np.uint32) diff_arr = np.empty(0, dtype=np.uint32) for value in df['diff'].unique(): arr = np.append(arr, np.arange(value)) diff_arr = np.append(diff_arr, np.full(value, value, dtype=np.uint32)) tmp_df = pd.DataFrame(dict(diff=diff_arr, i=arr)) tmp_df['i'] = pd.to_timedelta(tmp_df['i'], unit='D') df = df.merge(tmp_df, on='diff') df['date'] = df['start'] + df['i'] df.drop(['i', 'diff'], inplace=True, axis=1) </code></pre>
0
2016-10-11T17:09:49Z
[ "python", "python-2.7", "datetime", "pandas" ]
Dataframe exported to CSV turns out different with data appearing in different columns then originally was
39,981,943
<p>I'm trying to read a CSV as a dataframe, then sort by column and subsequently output the sorted dataframe into a new CSV. However, the problem is that my output CSV looks nothing like the sorted dataframe, with data being moved to the wrong columns, etc. I suspect that the problem lies with the data, as some columns are made up of long strings and might have special characters; when I stripped out certain columns, the steps I took below do work. I have tried to export and reimport the dataframe in both dictionary and pickle format and it works perfectly.</p> <p>First I read in a CSV file and then sort by a column (the CSV files I used can be downloaded in the comment below, &lt;100kb in size):</p> <pre><code>df = pd.read_csv("database.csv",encoding = "ISO-8859-1") sorteddf = df.sort_values(by="All Comment Score") </code></pre> <p><a href="https://i.stack.imgur.com/0qnHB.png" rel="nofollow">This shows how the dataframe looks after sorting (what I want)</a></p> <p>Then I store my dataframe in a new CSV file and read that new CSV as a new dataframe:</p> <pre><code>sorteddf.to_csv("test.csv") newdf = pd.read_csv("test.csv",encoding = "ISO-8859-1") </code></pre> <p>However, when I read the newly outputted CSV file as a new dataframe, the columns and the data appear to be a mess: <a href="https://i.stack.imgur.com/kriPo.png" rel="nofollow">This shows what the dataframe imported from the output CSV actually looks like</a></p> <p>I would really appreciate it if someone could shed some light on my problem and point me in the right direction! </p>
0
2016-10-11T16:16:05Z
39,982,308
<p>Are you talking about the unnamed column?</p> <p>Try using <code> sorteddf.to_csv('test.csv', index=False) </code> This tells pandas not to output the inbuilt index column (most of the time you don't care about this)</p>
1
2016-10-11T16:37:58Z
[ "python", "csv", "pandas", "export-to-excel" ]
Dataframe exported to CSV turns out different with data appearing in different columns then originally was
39,981,943
<p>I'm trying to read a CSV as a dataframe, then sort by column and subsequently output the sorted dataframe into a new CSV. However, the problem is that my output CSV looks nothing like the sorted dataframe, with data being moved to the wrong columns, etc. I suspect that the problem lies with the data, as some columns are made up of long strings and might have special characters; when I stripped out certain columns, the steps I took below do work. I have tried to export and reimport the dataframe in both dictionary and pickle format and it works perfectly.</p> <p>First I read in a CSV file and then sort by a column (the CSV files I used can be downloaded in the comment below, &lt;100kb in size):</p> <pre><code>df = pd.read_csv("database.csv",encoding = "ISO-8859-1") sorteddf = df.sort_values(by="All Comment Score") </code></pre> <p><a href="https://i.stack.imgur.com/0qnHB.png" rel="nofollow">This shows how the dataframe looks after sorting (what I want)</a></p> <p>Then I store my dataframe in a new CSV file and read that new CSV as a new dataframe:</p> <pre><code>sorteddf.to_csv("test.csv") newdf = pd.read_csv("test.csv",encoding = "ISO-8859-1") </code></pre> <p>However, when I read the newly outputted CSV file as a new dataframe, the columns and the data appear to be a mess: <a href="https://i.stack.imgur.com/kriPo.png" rel="nofollow">This shows what the dataframe imported from the output CSV actually looks like</a></p> <p>I would really appreciate it if someone could shed some light on my problem and point me in the right direction! </p>
0
2016-10-11T16:16:05Z
39,984,506
<p>You have decoding/encoding issues. Your encoding is not "ISO", it's 'latin-1'. It's hard to fix this unless you figure out why you are reading in your data like this. </p>
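<p>One thing worth trying (an assumption, since we can't see the file itself): write and re-read the CSV with the same explicit encoding, and skip the index column so nothing shifts:</p> <pre><code>sorteddf.to_csv("test.csv", index=False, encoding="ISO-8859-1")
newdf = pd.read_csv("test.csv", encoding="ISO-8859-1")
</code></pre>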
0
2016-10-11T18:46:05Z
[ "python", "csv", "pandas", "export-to-excel" ]
To process csv data set in Jupyter notebook
39,981,986
<p>I am trying to process a data set developed in the Sindhi language. I followed all the steps but was unable to process the data set. Can anyone help me with loading and importing a CSV file from a local drive? I tried:</p> <pre><code>import csv data C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv </code></pre> <p>and got a response like:</p> <pre><code>File "&lt;ipython-input-71-6a0a9456deeb&gt;", line 1 data C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv ^ SyntaxError: invalid syntax </code></pre> <p>then entered the query as:</p> <pre><code>import csv with open(C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv, 'rb') as f: data = list(csv.reader(f)) </code></pre> <p>and got this response:</p> <pre><code>File "&lt;ipython-input-74-29f185d274e2&gt;", line 2 with open(C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv, 'rb') as f: ^ SyntaxError: invalid syntax </code></pre> <p>then processed as:</p> <pre><code>from sklearn import datasets sindhi2 = datasets.load_sindhi2() digits = datasets.load_digits() </code></pre> <p>and got this response:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-9-119477fe5453&gt; in &lt;module&gt;() 1 from sklearn import datasets ----&gt; 2 sindhi2 = datasets.load_sindhi2() 3 digits = datasets.load_digits() AttributeError: module 'sklearn.datasets' has no attribute 'load_sindhi2' </code></pre> <p>Please help me with loading and importing the dataset from my local drive D and processing POS tagging and feature deriving in a Jupyter notebook.</p>
1
2016-10-11T16:18:38Z
39,982,108
<p>Your second block is almost correct; all you need is to quote the file name:</p> <pre><code>import csv with open(r'C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv', 'rb') as f: data = list(csv.reader(f)) </code></pre> <p>Also note that I used a raw string (see the <code>r</code> before the single quote) so that I don't have to escape the backslashes.</p> <h1>Update</h1> <p>Since you are using Python 3, you should use mode <code>'r'</code>:</p> <pre><code>with open(r'C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv', 'r') as f: </code></pre> <p>Or omit the mode:</p> <pre><code>with open(r'C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv') as f: </code></pre> <p>I have tried this with Anaconda + Python 3 Jupyter notebook.</p>
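<p>If the end goal is analysis (POS tagging, deriving features), it may also be convenient to load the file straight into pandas rather than the raw <code>csv</code> module; a sketch, assuming the file is UTF-8 encoded (adjust the encoding if it isn't):</p> <pre><code>import pandas as pd

df = pd.read_csv(r'C:\Users\mazhar\Anaconda3\Lib\site-packages\sindhi2.csv', encoding='utf-8')
print(df.head())
</code></pre>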
1
2016-10-11T16:25:23Z
[ "python", "csv", "jupyter", "tagging" ]
an array from genfromtxt is being passed as a sequence?
39,982,036
<p>I have a list of coordinates and their respective error values in the shape:</p> <pre><code># Graph from standard correlation, page 1 1.197 0.1838 -0.03504 0.07802 +-0.006464 +0.004201 1.290 0.2072 -0.04241 0.05380 +-0.005833 +0.008101 </code></pre> <p>where the columns denote <code>x,y,lefterror,righterror,bottomerror,toperror</code>. I load the file as <code>error=np.genfromtxt("standard correlation.1",skip_header=1)</code> and finally I try to graph this as</p> <pre><code>xerr=error[:,2:4] yerr=error[:,4:] x=error[:,0] y=error[:,1] plt.errorbar(x,y,xerr=xerr,yerr=yerr,fmt='') </code></pre> <p>which raises a <code>ValueError: setting an array element with a sequence.</code> when I try to run it. I understand this error is given when you're passing an object such as a list to an argument that's expecting a numpy array object, but I am clueless as to how I should fix this problem, as np.genfromtxt should always return an ndarray.</p> <p>Thanks for your help.</p> <p><strong>Edit:</strong> I changed the file to remove the '+' character, as reading '+-' would yield NaN values in the bottom error column, but I still get the same error. </p>
0
2016-10-11T16:21:49Z
39,983,132
<p>Thanks to hpaulj I noticed that the error bars' shape was (30,2); however, <code>plt.errorbar()</code> expects error arrays in the shape (2,n). As Python usually transposes matrices in similar operations and automatically avoids this problem, I figured it would also do it here, but I decided to change the lines the following way:</p> <pre><code>xerr=error[:,2:4] yerr=error[:,4:] </code></pre> <p>into</p> <pre><code>xerr=np.transpose(error[:,2:4]) yerr=np.transpose(error[:,4:]) </code></pre> <p>which made the script run properly, although I still don't understand why the previous code gave me such an error; if anyone can help me clear that up I'll appreciate it.</p>
0
2016-10-11T17:28:03Z
[ "python", "arrays", "numpy", "matplotlib", "errorbar" ]
an array from genfromtxt is being passed as a sequence?
39,982,036
<p>I have a list of coordinates and their respective error values in the shape:</p> <pre><code># Graph from standard correlation, page 1 1.197 0.1838 -0.03504 0.07802 +-0.006464 +0.004201 1.290 0.2072 -0.04241 0.05380 +-0.005833 +0.008101 </code></pre> <p>where the columns denote <code>x,y,lefterror,righterror,bottomerror,toperror</code>. I load the file as <code>error=np.genfromtxt("standard correlation.1",skip_header=1)</code> and finally I try to graph this as</p> <pre><code>xerr=error[:,2:4] yerr=error[:,4:] x=error[:,0] y=error[:,1] plt.errorbar(x,y,xerr=xerr,yerr=yerr,fmt='') </code></pre> <p>which raises a <code>ValueError: setting an array element with a sequence.</code> when I try to run it. I understand this error is given when you're passing an object such as a list to an argument that's expecting a numpy array object, but I am clueless as to how I should fix this problem, as np.genfromtxt should always return an ndarray.</p> <p>Thanks for your help.</p> <p><strong>Edit:</strong> I changed the file to remove the '+' character, as reading '+-' would yield NaN values in the bottom error column, but I still get the same error. </p>
0
2016-10-11T16:21:49Z
39,983,652
<p>The shape of the array matplotlib expects for individual errorbars is <code>(2, N)</code>. You therefore need to transpose your array: <code>error[:,2:4].T</code>. Also, <code>matplotlib.errorbar</code> understands those values relative to the data. If <code>x</code> is the value and <code>(xmin, xmax)</code> the corresponding error, the errorbar goes from <code>x-xmin</code> to <code>x+xmax</code>. You therefore shouldn't have negative values in the errorbar arrays.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt f = "1 0.1 0.05 0.1 0.005 0.01" + \ " 1.197 0.1838 -0.03504 0.07802 -0.006464 0.004201 " + \ " 1.290 0.2072 -0.04241 0.0538 -0.005833 0.008101" error=np.fromstring(f, sep=" ").reshape(3,6) print error #[[ 1. 0.1 0.05 0.1 0.005 0.01 ] # [ 1.197 0.1838 -0.03504 0.07802 -0.006464 0.004201] # [ 1.29 0.2072 -0.04241 0.0538 -0.005833 0.008101]] xerr=np.abs(error[:,2:4].T) yerr=np.abs(error[:,4:].T) x=error[:,0] y=error[:,1] plt.errorbar(x,y,xerr=xerr,yerr=yerr,fmt='') plt.show() </code></pre> <p>Concerning the value error, it may have been caused by the <code>+-</code> issue.</p>
0
2016-10-11T17:58:27Z
[ "python", "arrays", "numpy", "matplotlib", "errorbar" ]
Importing nested list into a text file
39,982,053
<p>I've been working on a problem which I realise I am probably approaching the wrong way but am now confused and out of ideas. Any research that I have done has left me more confused, and thus I have come for help.</p> <p>I have a nested list: </p> <blockquote> <p>[['# Name Surname', 'Age', 'Class', 'Score', '\n'], ['name', '9', 'B', 'N/A', '\n'], ['name1', '9', 'B', 'N/A', '\n'], ['name2', '8', 'B', 'N/A', '\n'], ['name3', '9', 'B', 'N/A', '\n'], ['name4', '8', 'B', 'N/A', '']]</p> </blockquote> <p>I am trying to make it so this list is imported into a text file in the correct layout. For this I flattened the string and then joined it together with ','. </p> <p>The problem with this is that because the '\n' is being stored in the list itself, it adds a comma after this, which ends up turning this:</p> <blockquote> <p># Name Surname,Age,Class,Score,</p> <p>Name,9,B,N/A,</p> <p>Name1,9,B,N/A,</p> <p>Name2,8,B,N/A,</p> <p>Name3,9,B,N/A,</p> <p>Name4,8,B,N/A,</p> </blockquote> <p>into:</p> <blockquote> <p># Name Surname,Age,Class,Score,</p> <p>,</p> <p>,Name,9,B,N/A,</p> <p>,Name1,9,B,N/A,</p> <p>,Name2,8,B,N/A,</p> <p>,Name3,9,B,N/A,</p> <p>,Name4,8,B,N/A,</p> </blockquote> <p>If I remove the \n from the code the formatting in the text file is all wrong due to no new lines.</p> <p>Is there a better way to approach this or is there a quick fix to all my problems that I cannot see?</p> <p>Thanks!</p> <p>My code for reference:</p> <pre><code>def scorestore(score): user[accountLocation][3] = score file = ("classdata",schclass,".txt") file = "".join(file) flattened = [val for sublist in user for val in sublist] flatstring = ','.join(str(v) for v in flattened) accountlist = open(file,"w") accountlist.write(flatstring) accountlist.close() </code></pre>
0
2016-10-11T16:22:28Z
39,982,200
<p>The easiest way would probably be to remove the newlines from the sublists as you get them, then print each sublist one at a time. This would look something like:</p> <pre><code>for sublist in users: print(",".join(val for val in sublist if not val.isspace()), file=accountlist) </code></pre> <p>This will fail on the 0 in your list, however. I'm not sure if you intend to handle that, or if it's extraneous. If you do need to handle it, then you'll have to change the generator expression to <code>str(val) for val in sublist if not str(val).isspace()</code>.</p>
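<p>For completeness, a minimal sketch of how that could look inside the question's <code>scorestore</code> function - the variable names (<code>user</code>, <code>file</code>) are taken from the question's code and this is untested against the real data:</p> <pre><code>def scorestore_sketch(user, file):
    # write one comma-separated line per sublist, skipping whitespace-only
    # entries such as the stored '\n' strings
    with open(file, "w") as accountlist:
        for sublist in user:
            line = ",".join(str(val) for val in sublist if not str(val).isspace())
            print(line, file=accountlist)
</code></pre>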
0
2016-10-11T16:31:37Z
[ "python", "python-3.x" ]
Importing nested list into a text file
39,982,053
<p>I've been working on a problem which I realise I am probably approaching the wrong way but am now confused and out of ideas. Any research that I have done has left me more confused, and thus I have come for help.</p> <p>I have a nested list: </p> <blockquote> <p>[['# Name Surname', 'Age', 'Class', 'Score', '\n'], ['name', '9', 'B', 'N/A', '\n'], ['name1', '9', 'B', 'N/A', '\n'], ['name2', '8', 'B', 'N/A', '\n'], ['name3', '9', 'B', 'N/A', '\n'], ['name4', '8', 'B', 'N/A', '']]</p> </blockquote> <p>I am trying to make it so this list is imported into a text file in the correct layout. For this I flattened the string and then joined it together with ','. </p> <p>The problem with this is that because the '\n' is being stored in the list itself, it adds a comma after this, which ends up turning this:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>Name,9,B,N/A,</p> <p>Name1,9,B,N/A,</p> <p>Name2,8,B,N/A,</p> <p>Name3,9,B,N/A,</p> <p>Name4,8,B,N/A,</p> </blockquote> <p>into:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>,</p> <p>,Name,9,B,N/A,</p> <p>,Name1,9,B,N/A,</p> <p>,Name2,8,B,N/A,</p> <p>,Name3,9,B,N/A,</p> <p>,Name4,8,B,N/A,</p> </blockquote> <p>If I remove the \n from the code the formatting in the text file is all wrong due to no new lines.</p> <p>Is there a better way to approach this or is there a quick fix to all my problems that I cannot see?</p> <p>Thanks!</p> <p>My code for reference:</p> <pre><code>def scorestore(score): user[accountLocation][3] = score file = ("classdata",schclass,".txt") file = "".join(file) flattened = [val for sublist in user for val in sublist] flatstring = ','.join(str(v) for v in flattened) accountlist = open(file,"w") accountlist.write(flatstring) accountlist.close() </code></pre>
0
2016-10-11T16:22:28Z
39,982,213
<p>I'm not sure which list is the one in your post (sublist?) but when you flatten it, just discard the "\n" strings (compare against the string "\n", not a list containing it):</p> <pre><code>flattened = [x for x in sublist if x != "\n"] </code></pre>
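<p>A minimal sketch of how that filter could be applied per row when writing the file (assuming <code>user</code> is the nested list from the question and a fixed output name; untested against the real data):</p> <pre><code>with open("classdata.txt", "w") as accountlist:
    for sublist in user:
        cleaned = [str(x) for x in sublist if x != "\n"]  # drop the stored newlines
        accountlist.write(",".join(cleaned) + "\n")       # add the newline back at write time
</code></pre>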
1
2016-10-11T16:32:15Z
[ "python", "python-3.x" ]
Importing nested list into a text file
39,982,053
<p>I've been working on a problem which I realise I am probably approaching the wrong way but am now confused and out of ideas. Any research that I have done has left me more confused, and thus I have come for help.</p> <p>I have a nested list: </p> <blockquote> <p>[['# Name Surname', 'Age', 'Class', 'Score', '\n'], ['name', '9', 'B', 'N/A', '\n'], ['name1', '9', 'B', 'N/A', '\n'], ['name2', '8', 'B', 'N/A', '\n'], ['name3', '9', 'B', 'N/A', '\n'], ['name4', '8', 'B', 'N/A', '']]</p> </blockquote> <p>I am trying to make it so this list is imported into a text file in the correct layout. For this I flattened the string and then joined it together with ','. </p> <p>The problem with this is that because the '\n' is being stored in the list itself, it adds a comma after this, which ends up turning this:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>Name,9,B,N/A,</p> <p>Name1,9,B,N/A,</p> <p>Name2,8,B,N/A,</p> <p>Name3,9,B,N/A,</p> <p>Name4,8,B,N/A,</p> </blockquote> <p>into:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>,</p> <p>,Name,9,B,N/A,</p> <p>,Name1,9,B,N/A,</p> <p>,Name2,8,B,N/A,</p> <p>,Name3,9,B,N/A,</p> <p>,Name4,8,B,N/A,</p> </blockquote> <p>If I remove the \n from the code the formatting in the text file is all wrong due to no new lines.</p> <p>Is there a better way to approach this or is there a quick fix to all my problems that I cannot see?</p> <p>Thanks!</p> <p>My code for reference:</p> <pre><code>def scorestore(score): user[accountLocation][3] = score file = ("classdata",schclass,".txt") file = "".join(file) flattened = [val for sublist in user for val in sublist] flatstring = ','.join(str(v) for v in flattened) accountlist = open(file,"w") accountlist.write(flatstring) accountlist.close() </code></pre>
0
2016-10-11T16:22:28Z
39,982,271
<p>Instead of making one string, how about writing lines? Use something like this (note that <code>writelines</code> does not add newlines for you, so append them explicitly):</p> <pre><code> list_of_list = [[...]] lines = [','.join(line).strip() for line in list_of_list] lines = [line for line in lines if line] open(file,'w').writelines(line + '\n' for line in lines) </code></pre>
0
2016-10-11T16:35:36Z
[ "python", "python-3.x" ]
Importing nested list into a text file
39,982,053
<p>I've been working on a problem which I realise I am probably approaching the wrong way but am now confused and out of ideas. Any research that I have done has left me more confused, and thus I have come for help.</p> <p>I have a nested list: </p> <blockquote> <p>[['# Name Surname', 'Age', 'Class', 'Score', '\n'], ['name', '9', 'B', 'N/A', '\n'], ['name1', '9', 'B', 'N/A', '\n'], ['name2', '8', 'B', 'N/A', '\n'], ['name3', '9', 'B', 'N/A', '\n'], ['name4', '8', 'B', 'N/A', '']]</p> </blockquote> <p>I am trying to make it so this list is imported into a text file in the correct layout. For this I flattened the string and then joined it together with ','. </p> <p>The problem with this is that because the '\n' is being stored in the list itself, it adds a comma after this, which ends up turning this:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>Name,9,B,N/A,</p> <p>Name1,9,B,N/A,</p> <p>Name2,8,B,N/A,</p> <p>Name3,9,B,N/A,</p> <p>Name4,8,B,N/A,</p> </blockquote> <p>into:</p> <blockquote> <h1>Name Surname,Age,Class,Score,</h1> <p>,</p> <p>,Name,9,B,N/A,</p> <p>,Name1,9,B,N/A,</p> <p>,Name2,8,B,N/A,</p> <p>,Name3,9,B,N/A,</p> <p>,Name4,8,B,N/A,</p> </blockquote> <p>If I remove the \n from the code the formatting in the text file is all wrong due to no new lines.</p> <p>Is there a better way to approach this or is there a quick fix to all my problems that I cannot see?</p> <p>Thanks!</p> <p>My code for reference:</p> <pre><code>def scorestore(score): user[accountLocation][3] = score file = ("classdata",schclass,".txt") file = "".join(file) flattened = [val for sublist in user for val in sublist] flatstring = ','.join(str(v) for v in flattened) accountlist = open(file,"w") accountlist.write(flatstring) accountlist.close() </code></pre>
0
2016-10-11T16:22:28Z
39,982,544
<p>Use the <code>csv</code> module to make it easier:</p> <pre><code>import csv data = [ ['# Name Surname', 'Age', 'Class', 'Score','\n'], ['\n'], ['Name', '9', 'B', 'N/A','\n'], ['Name1', '9', 'B', 'N/A','\n'], ['Name2', '8', 'B', 'N/A','\n'], ['Name3', '9', 'B', 'N/A','\n'], ['Name4', '8', 'B', 0] ] # Remove all the ending new lines data = [row[:-1] if row[-1] == '\n' else row for row in data] # Write to file with open('write_sublists.csv', 'w', newline='') as f: writer = csv.writer(f) writer.writerows(data) </code></pre> <h1>Discussion</h1> <p>Your data is irregular: some rows contain the ending newline and some don't. Also, some rows contain only strings while one row mixes in an integer. The first step is to normalize them by removing all ending newlines. The <code>csv</code> module can take care of mixed data types just fine.</p>
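<p>If you later need the rows back, the same module reads them just as easily (a small usage sketch; in Python 3 pass <code>newline=''</code> when opening csv files):</p> <pre><code>import csv

with open('write_sublists.csv', newline='') as f:
    rows = list(csv.reader(f))
</code></pre>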
0
2016-10-11T16:51:20Z
[ "python", "python-3.x" ]
How to get datetime as string from mysql database in django model query
39,982,277
<p>I am using below query to get fields from mysql database in django 1.9.</p> <pre><code>event_dict_list = EventsModel.objects.filter(name__icontains = event_name).values('sys_id','name', 'start_date_time', 'end_date_time', 'notes') </code></pre> <p>now in the result <code>event_dict_list</code>, <code>start_date_time</code> and <code>end_date_time</code> are appearing in python date time object format as below </p> <pre><code>'end_date_time': datetime.datetime(2016, 9, 26, 10, 48, 35, tzinfo=&lt;UTC&gt;) </code></pre> <p>I want it as a string in <code>YYYY-mm-dd HH:MM:SS</code> format. </p> <p>One way would be to iterate over the <code>event_dict_list</code> and get date field and then convert it into desired string format. But I wanted to know if there is any way I can specify something in query so that I get the converted date in query output only?</p> <p>Related question - what is preferred way to store date/datetime in database - as python date time or as string. Way 1 or way 2.<br> (1) <code>end_date_time = models.DateTimeField(null=False, blank=False)</code><br> (2) <code>end_date_time = models.CharField(max_length= 128, null=False, blank=False)</code> </p>
0
2016-10-11T16:35:49Z
39,984,215
<p>You could try using the CAST() command in your query: <a href="http://stackoverflow.com/questions/2392413/convert-datetime-value-into-string-in-mysql">Convert DateTime Value into String in Mysql</a></p> <p>The linked example shows roughly how to do it on the database side.</p> <p>If that doesn't work, the other option is, as you stated, to iterate over your datetime values and convert them in Python, as shown here: <a href="http://stackoverflow.com/questions/10624937/convert-datetime-object-to-a-string-of-date-only-in-python">Convert datetime object to a String of date only in Python</a></p>
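<p>A minimal sketch of the second (post-query) route, reusing the queryset from the question - the model and field names are taken from the question, and the conversion simply happens in Python after the rows come back:</p> <pre><code>event_dict_list = EventsModel.objects.filter(
    name__icontains=event_name
).values('sys_id', 'name', 'start_date_time', 'end_date_time', 'notes')

formatted = []
for row in event_dict_list:
    row = dict(row)  # copy the dict-like row before mutating it
    for key in ('start_date_time', 'end_date_time'):
        row[key] = row[key].strftime('%Y-%m-%d %H:%M:%S')
    formatted.append(row)
</code></pre>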
0
2016-10-11T18:29:23Z
[ "python", "mysql", "django", "datetime", "django-models" ]
Filter tweets before saving into csv
39,982,288
<p>The below part of code fetches all tweets for a specific user and stores them into a csv file. I want to filter the tweets before storing them and store only those that contain the term "car". How can i do it? </p> <pre><code>import tweepy #https://github.com/tweepy/tweepy import csv #Twitter API credentials consumer_key = "" consumer_secret = "" access_key = "" access_secret = "" def get_all_tweets(screen_name): #Twitter only allows access to a users most recent 3240 tweets with this method #authorize twitter, initialize tweepy auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_key, access_secret) api = tweepy.API(auth) #initialize a list to hold all the tweepy Tweets alltweets = [] #make initial request for most recent tweets (200 is the maximum allowed count) new_tweets = api.user_timeline(screen_name = screen_name,count=200) #save most recent tweets alltweets.extend(new_tweets) #save the id of the oldest tweet less one oldest = alltweets[-1].id - 1 #keep grabbing tweets until there are no tweets left to grab while len(new_tweets) &gt; 0: print "getting tweets before %s" % (oldest) #all subsiquent requests use the max_id param to prevent duplicates new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest) #save most recent tweets alltweets.extend(new_tweets) #update the id of the oldest tweet less one oldest = alltweets[-1].id - 1 print "...%s tweets downloaded so far" % (len(alltweets)) #transform the tweepy tweets into a 2D array that will populate the csv outtweets = [[tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")]for tweet in alltweets] #write the csv with open('%s_tweets.csv' % screen_name, 'wb') as f: writer = csv.writer(f) writer.writerow(["id","created_at","text"]) writer.writerows(outtweets) pass if __name__ == '__main__': #pass in the username of the account you want to download get_all_tweets("owo_batista") </code></pre>
0
2016-10-11T16:36:42Z
39,982,842
<p>I would add an if-statement to the end of your outtweets list comprehension:</p> <pre><code>outtweets = [[tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")]for tweet in alltweets if 'car' in tweet.text.encode("utf-8")] </code></pre>
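<p>If the match should also catch variants like 'Car' or 'CAR', a lower-cased check could be used instead (a sketch along the same lines, not tested against live tweepy data):</p> <pre><code>outtweets = [[tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")]
             for tweet in alltweets
             if 'car' in tweet.text.lower()]
</code></pre>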
1
2016-10-11T17:09:32Z
[ "python", "twitter", "tweepy" ]
pytest print stacktrace when import fails
39,982,332
<p>Suppose a python test module generates an <code>ImportError</code>. <code>pytest</code> (version 3.0.2) generates a compact error report:</p> <pre><code>__________________________________________ ERROR collecting tests/wc_tests/log/test_logger.py __________________________________________ ImportError while importing test module '/Users/arthur_at_sinai/gitOnMyLaptopLocal/Mpn-Example/tests/wc_tests/log/test_logger.py'. Original error message: 'No module named 'wc.config.core'' Make sure your test modules/packages have valid Python names. </code></pre> <p>In contrast, <code>nosetests-3.4</code> generates a stacktrace, like this:</p> <pre><code>ERROR: Failure: ImportError (No module named 'wc.config.core') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/failure.py", line 39, in runTest raise self.exc_val.with_traceback(self.tb) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/loader.py", line 418, in loadTestsFromName addr.filename, addr.module) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 234, in load_module return load_source(name, filename, file) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 172, in load_source module = _load(spec) File "&lt;frozen importlib._bootstrap&gt;", line 693, in _load File "&lt;frozen importlib._bootstrap&gt;", line 673, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 665, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 222, in _call_with_frames_removed File "/Users/arthur_at_sinai/gitOnMyLaptopLocal/Mpn-Example/tests/wc_tests/log/test_logger.py", line 12, in &lt;module&gt; from wc.sim.core import Simulator File "/Users/arthur_at_sinai/gitOnMyLaptopLocal/Mpn-Example/wc/sim/core.py", line 16, in &lt;module&gt; from wc.log.checkpoint import CheckpointLogger File "/Users/arthur_at_sinai/gitOnMyLaptopLocal/Mpn-Example/wc/log/checkpoint.py", line 9, in &lt;module&gt; from wc.config.core import config ImportError: No module named 'wc.config.core' </code></pre> <p>How can one get <code>pytest</code> to produce similar stacktrace information? These options are available</p> <pre><code>-l, --showlocals show locals in tracebacks (disabled by default). --tb=style traceback print mode (auto/long/short/line/native/no). --full-trace don't cut any tracebacks (default is to cut). </code></pre> <p>but none of them make a tb for me. Nor does <code>pytest -vv</code>.</p> <p>Thanks</p> <p>Arthur</p>
0
2016-10-11T16:39:04Z
39,983,844
<p>This was <a href="https://github.com/pytest-dev/pytest/pull/1979" rel="nofollow">changed</a> in pytest a week ago to display the full traceback.</p> <p>If you don't want to wait for the next release, you could use pytest from the git repository via <code>pip install git+https://github.com/pytest-dev/pytest.git</code> in the meantime.</p>
1
2016-10-11T18:09:54Z
[ "python", "py.test" ]
Code gives Change directory error but still changes directory
39,982,359
<pre><code>import os import socket import subprocess s = socket.socket() host = '&lt;my-ip&gt;' port = 9999 s.connect((host, port)) while True: data = s.recv(1024) if data[:2].decode("utf-8") == 'cd': os.chdir(data[3:].decode("utf-8")) if len(data) &gt; 0: cmd = subprocess.Popen(data[:].decode("utf-8"), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) outputBytes = cmd.stdout.read() + cmd .stderr.read() outputStr = str(outputBytes, "utf-8") s.send(str.encode(outputStr + str(os.getcwd() + '&gt; '))) print (outputStr) # Close connection s.close() </code></pre> <p>I am trying to create a remote client-server app using python by going through thenewboston's tutorial for a Python reverse shell.</p> <p>The above is the code for client.py. The thing is, all the commands are working fine. When I use</p> <p>cd "Directory Name With Space"</p> <p>it gives me this error "The System Cannot Find The Path Specified". But it still changes the directory. I am not sure why it still changes the directory even after giving an error?</p>
0
2016-10-11T16:40:52Z
39,983,285
<p>There are two <code>if</code>s here:</p> <pre><code>if data[:2].decode("utf-8") == 'cd': os.chdir(data[3:].decode("utf-8")) if len(data) &gt; 0: cmd = subprocess.Popen(data[:].decode("utf-8"), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) </code></pre> <ul> <li><p>If a command starts with 'cd', the first block is executed (because it starts with 'cd').</p></li> <li><p>If a command starts with 'cd', the second block is also executed (because it has length > 0).</p></li> </ul> <p>The first block changes the directory even if it has spaces. The second block does not.</p> <p>You want to use <code>elif</code> to prevent both blocks from executing:</p> <pre><code>if data[:2].decode("utf-8") == 'cd': os.chdir(data[3:].decode("utf-8")) elif len(data) &gt; 0: cmd = subprocess.Popen(... </code></pre> <p>BTW, there are other problems with this code:</p> <ul> <li><p><code>'cdefgh'</code> will change directory to <code>'fgh'</code></p></li> <li><p><code>'cd abc'</code> will try to change directory to <code>' abc'</code> and probably fail</p></li> <li><p>if <code>Popen</code> is used for anything but <code>cd</code>, I don't see why it would not be used for <code>cd</code> as well</p></li> <li><p>exposing such interface in a server is a bigger security hole than any virus can hope to create - if you are doing this just to learn, burn the code as soon as you are done ;)</p></li> </ul>
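<p>For the parsing problems listed above, a rough sketch of a stricter check (assuming the same <code>data</code> variable and <code>os</code>/<code>subprocess</code> imports from the question):</p> <pre><code>text = data.decode("utf-8").strip()
if text == 'cd' or text.startswith('cd '):
    target = text[2:].strip().strip('"')   # everything after 'cd', spaces and quotes trimmed
    if target:
        os.chdir(target)
elif text:
    cmd = subprocess.Popen(text, shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
</code></pre>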
1
2016-10-11T17:37:29Z
[ "python", "python-3.x" ]
Efficient iteration through nested python dictionaries
39,982,389
<p>I have a dataset for which values determined to meet specific criteria are used to perform probability calculations as part of a summation. Currently, I hold the data in nested dictionaries to simplify the process of deterministic processing. </p> <p>The algorithm I'm using proves to be very expensive and after a while overwhelms the memory.</p> <p>The pseudocode for the processing is as follows:</p> <pre><code>for businessA in business : # iterate over 77039 values for businessB in business : # iterate over 77039 values if businessA != businessB : for rating in business[businessB] : # where rating is 1 - 5 for review in business[businessB][rating] : user = reviewMap[review]['user']; if user in business[businessA]['users'] : for users in business[businessA]['users'] : # do something # do probability # a print is here </code></pre> <p>How can I write the above more effectively to maintain accurate probability summation for each businessA?</p> <hr> <p><strong>EDIT</strong> including source code - here, businessA and businessB are in separate dictionaries, however it is of note that they hold the same businessIDs (bid) in each. It is just a change of what the value is for each key:value pair.</p> <pre><code>def crossMatch(TbidMap) : for Tbid in TbidMap : for Lbid in LbidMap : # Ensure T and L aren't the same business if Tbid != Lbid : # Get number of reviews at EACH STAR rate for L for stars in LbidMap[Lbid] : posTbid = 0; # For each review check if user rated the Tbid for Lreview in LbidMap[Lbid][stars] : user = reviewMap[Lreview]['user']; if user in TbidMap[Tbid] : # user rev'd Tbid, get their Trid &amp; see if gave Tbid pos rev for Trid in TbidMap[Tbid][user] : Tstar = reviewMap[Trid]['stars']; if Tstar in pos_list : posTbid += 1; #probability calculations happen here </code></pre>
1
2016-10-11T16:42:03Z
39,984,437
<p>There are over 5 billion combinations of companies in your dataset, which is really going to stress the memory out. I think you're storing all results into memory; instead, I would do interim dumps to a database and free up your containers. This is a sketch of the approach as I have no real data to test on, and it might be easier to respond to your difficulties as you encounter them. Ideally there would be an interim container for nested lists so that you could use <code>executemany</code> but this is so heavily nested with abbreviated names and no test data that it's difficult to follow.</p> <pre><code>import sqlite3 def create_interim_mem_dump(cursor, connection): query = """CREATE TABLE IF NOT EXISTS ratings( Tbid TEXT, Lbid TEXT, posTbid TEXT) """ cursor.execute(query) connection.commit() def crossMatch(TbidMap, cursor, connection) : for Tbid in TbidMap : for Lbid in LbidMap : # Ensure T and L aren't the same business if Tbid != Lbid : # Get numer of reviews at EACH STAR rate for L for stars in LbidMap[Lbid] : posTbid = 0; # For each review check if user rated the Tbid for Lreview in LbidMap[Lbid][stars] : user = reviewMap[Lreview]['user']; if user in TbidMap[Tbid] : # user rev'd Tbid, get their Trid &amp; see if gave Tbid pos rev for Trid in TbidMap[Tbid][user] : Tstar = reviewMap[Trid]['stars']; if Tstar in pos_list : posTbid += 1; query = """INSERT INTO ratings (Tbid, Lbid, posTbid) VALUES (?, ?, ?)""" cursor.execute(query, (Tbid, Lbid, posTbid)) connection.commit() if __name__ == '__main__': conn = sqlite3.connect('collated_ratings.db') c = conn.cursor() create_db = create_interim_mem_dump(c, conn) your_data = 'Some kind of dictionary into crossMatch()' c.close() conn.close() </code></pre>
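<p>The <code>executemany</code> idea mentioned above could look roughly like this: accumulate rows in a list and flush it periodically (the batch size is an arbitrary assumption) instead of committing on every insert:</p> <pre><code>def flush_batch(batch, cursor, connection):
    """Insert accumulated (Tbid, Lbid, posTbid) rows in one round trip."""
    if batch:
        cursor.executemany(
            "INSERT INTO ratings (Tbid, Lbid, posTbid) VALUES (?, ?, ?)", batch)
        connection.commit()
        del batch[:]

# inside crossMatch, instead of executing per row:
#     batch.append((Tbid, Lbid, posTbid))
#     if len(batch) &gt;= 10000:
#         flush_batch(batch, cursor, connection)
</code></pre>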
2
2016-10-11T18:41:55Z
[ "python", "performance", "dictionary" ]
Processing several generators in parallel
39,982,477
<p>Suppose I have several generators (which should be able to run in parallel). Is it possible to use the multiprocessing module to call next() on these generators so that the processing would run in parallel?</p> <p>I want to avoid making a list from the generators since it's very likely to consume lots of memory.</p> <p><strong>Context</strong>: Originally I have a generator which outputs all spanning trees of a given graph. Part of the algorithm involves iterating through the power set of a subset of the neighbors of a given vertex. I would like to parallelize this part, at least for the initial call. For a certain graph, it takes around half a second to output a tree for the first 1024 trees.</p>
0
2016-10-11T16:47:08Z
39,983,175
<p>I think your main issue would be getting the data back to the parent process to build your graph. However, this could probably be accomplished by using a multiprocessing <a href="https://docs.python.org/3.6/library/multiprocessing.html#multiprocessing.Queue" rel="nofollow"><code>Queue</code></a>.</p> <p>A simple example:</p> <pre><code>import multiprocessing from queue import Empty def call_generator(generator, queue): for item in generator: queue.put(item) def process_responses(queue): items = [] while True: try: # After a one second timeout, we'll assume the generators are done item = queue.get(timeout=1) except Empty: print('done') break print('item: {}'.format(item)) generators = [ iter(range(10)), iter(range(11, 20)), iter(range(20, 50)) ] queue = multiprocessing.Queue() p = multiprocessing.Process(target=process_responses, args=(queue,)) p.start() for generator in generators: generator_process = multiprocessing.Process( target=call_generator, args=(generator, queue) ) generator_process.start() p.join() # Wait for process_response to return </code></pre>
1
2016-10-11T17:30:53Z
[ "python" ]
How to write each element of list in separate text file?
39,982,478
<p>I am trying to write each element of the list in a different file. </p> <p>Let say we have a list:</p> <pre><code>dataset = ['abc', 'def', 'ghi'] </code></pre> <p>I want to loop through the list and create text files depending upon the length of the list. So, in this case, there should be 3 text files and each will have content abc, def and ghi respectively.</p> <p>My current code is below:</p> <pre><code># This will read a text file, normalize it and remove stopwords from it using nltk. import nltk, io, math from nltk.corpus import stopwords # Read raw text targetFile = open('text.txt') rawtext = targetFile.read() # Removing stopwords stops = set(stopwords.words('english')) filtered_text = [i for i in rawtext.lower().split() if i not in stops] # Count Number of words total_words = len(filtered_text) # Divide them equally into 10 different lists chunk_size = math.floor(total_words/10) n_lists_of_words = [filtered_text[i:i + chunk_size] for i in range(0, len(filtered_text), chunk_size)] if(len(n_lists_of_words) &gt; 10): del n_lists_of_words[-1] # Lets make list of strings instead of list of lists list_of_str = [' '.join(x) for x in n_lists_of_words] # Create 10 different files from above 10 elements of n_list_of_words list for index, word in enumerate(n_lists_of_words): with io.FileIO("output_text_" + str(index) + ".txt", "w") as file: file.write(bytes(word), 'UTF-8') </code></pre> <p>Error message:</p> <pre><code>Traceback (most recent call last): File "clean_my_text.py", line 35, in &lt;module&gt; file.write(bytes(word), 'UTF-8') TypeError: 'str' object cannot be interpreted as an integer </code></pre>
0
2016-10-11T16:47:10Z
39,982,711
<p>Your code is just a little bit wrong: the closing parenthesis of <code>bytes()</code> is misplaced, so the encoding ends up being passed to <code>file.write()</code> instead. The last line should be <code>file.write(bytes(word, 'UTF-8'))</code>.</p>
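<p>In context, the corrected loop might look like this (a sketch; it also iterates over <code>list_of_str</code> rather than <code>n_lists_of_words</code>, since <code>bytes()</code> needs a string, not a list):</p> <pre><code>for index, word in enumerate(list_of_str):
    with io.FileIO("output_text_" + str(index) + ".txt", "w") as file:
        file.write(bytes(word, 'UTF-8'))
</code></pre>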
1
2016-10-11T17:00:41Z
[ "python", "python-3.x" ]
How to write each element of list in separate text file?
39,982,478
<p>I am trying to write each element of the list in a different file. </p> <p>Let say we have a list:</p> <pre><code>dataset = ['abc', 'def', 'ghi'] </code></pre> <p>I want to loop through the list and create text files depending upon the length of the list. So, in this case, there should be 3 text files and each will have content abc, def and ghi respectively.</p> <p>My current code is below:</p> <pre><code># This will read a text file, normalize it and remove stopwords from it using nltk. import nltk, io, math from nltk.corpus import stopwords # Read raw text targetFile = open('text.txt') rawtext = targetFile.read() # Removing stopwords stops = set(stopwords.words('english')) filtered_text = [i for i in rawtext.lower().split() if i not in stops] # Count Number of words total_words = len(filtered_text) # Divide them equally into 10 different lists chunk_size = math.floor(total_words/10) n_lists_of_words = [filtered_text[i:i + chunk_size] for i in range(0, len(filtered_text), chunk_size)] if(len(n_lists_of_words) &gt; 10): del n_lists_of_words[-1] # Lets make list of strings instead of list of lists list_of_str = [' '.join(x) for x in n_lists_of_words] # Create 10 different files from above 10 elements of n_list_of_words list for index, word in enumerate(n_lists_of_words): with io.FileIO("output_text_" + str(index) + ".txt", "w") as file: file.write(bytes(word), 'UTF-8') </code></pre> <p>Error message:</p> <pre><code>Traceback (most recent call last): File "clean_my_text.py", line 35, in &lt;module&gt; file.write(bytes(word), 'UTF-8') TypeError: 'str' object cannot be interpreted as an integer </code></pre>
0
2016-10-11T16:47:10Z
39,987,575
<p>Thank you all. Able to do that. Here is the solution below, feel free to ask any relevant information with that:</p> <pre><code># This will read a text file, normalize it and remove stopwords from it using nltk. import nltk, io, math from nltk.corpus import stopwords from string import punctuation # Read raw text targetFile = open('input_text.txt') rawtext = targetFile.read() # Remove punctuation def strip_punctuation(s): return ''.join(c for c in s if c not in punctuation) filtered_punc = strip_punctuation(rawtext) print(filtered_punc) # Removing stopwords stops = set(stopwords.words('english')) filtered_text = [i for i in filtered_punc.lower().split() if i not in stops] # Count Number of words total_words = len(filtered_text) # Divide them equally into 10 different lists chunk_size = math.floor(total_words/10) n_lists_of_words = [filtered_text[i:i + chunk_size] for i in range(0, len(filtered_text), chunk_size)] if(len(n_lists_of_words) &gt; 10): del n_lists_of_words[-1] # Lets make list of strings instead of list of lists list_of_str = [' '.join(x) for x in n_lists_of_words] # Print list values in seperate files for index, word in enumerate(list_of_str): with open("Output" + str(index) + ".txt", "w") as text_file: print(word, file=text_file) </code></pre>
0
2016-10-11T22:09:40Z
[ "python", "python-3.x" ]
Using 7zip with python to create a passsword protected file in a given path
39,982,491
<p>I'm having an error for what seems to be a permissions problem when trying to create a zip file in a specified folder <code>testfolder</code> - the folder has the following permissions: drwxr-xr-x 193 nobody nobody. When trying to launch the following command in python I get the following:</p> <p><code>p= subprocess.Popen(['7z','a','-pinfected','-y','/home/John/testfolder/yada.zip'] + ['test.txt'],stdout=PIPE.subprocess,stderr=PIPE.subprocess)</code></p> <p><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/local/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied</code></p> <p>Any idea what's wrong with the permissions?<br> I'm pretty new to this; my Python runs from the /usr/local/bin path.</p>
0
2016-10-11T16:47:47Z
39,982,554
<p>Try changing the permissions of the folder and see if the error comes up again:</p> <pre><code>chmod -R 777 /foldername </code></pre>
0
2016-10-11T16:52:00Z
[ "python", "linux", "python-2.7", "permissions", "permission-denied" ]
Using 7zip with python to create a passsword protected file in a given path
39,982,491
<p>I'm having an error for what seems to be a permissions problem when trying to create a zip file in a specified folder <code>testfolder</code> - the folder has the following permissions: drwxr-xr-x 193 nobody nobody. When trying to launch the following command in python I get the following:</p> <p><code>p= subprocess.Popen(['7z','a','-pinfected','-y','/home/John/testfolder/yada.zip'] + ['test.txt'],stdout=PIPE.subprocess,stderr=PIPE.subprocess)</code></p> <p><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/local/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied</code></p> <p>Any idea what's wrong with the permissions?<br> I'm pretty new to this; my Python runs from the /usr/local/bin path.</p>
0
2016-10-11T16:47:47Z
39,982,764
<p><code>drwxr-xr-x</code> means that:</p> <p>1] only the directory's owner can list its contents, create new files in it (elevated access) etc.,</p> <p>2] members of the directory's group and other users can also list its contents, and have simple access to it.</p> <p>So in fact you don't have to change the directory's permissions unless you know what you are doing, you could just run your script with <code>sudo</code> like <code>sudo python my_script.py</code>.</p>
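<p>If you want to confirm from Python whether the current user can actually write to that folder before calling 7z, a quick check could look like this (path taken from the question; just a sketch):</p> <pre><code>import os

target_dir = '/home/John/testfolder'
if not os.access(target_dir, os.W_OK):
    print("No write access to", target_dir, "- run with sudo or adjust ownership")
</code></pre>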
1
2016-10-11T17:04:01Z
[ "python", "linux", "python-2.7", "permissions", "permission-denied" ]
How to convert string with special control characters, like \'x1b', into a standard string?
39,982,497
<p>When I use a Python socket program, it gives an option like:</p> <pre class="lang-none prettyprint-override"><code>1) Input A to show your name 2) Input B to show your age 3) Input other to set your name &gt;&gt; </code></pre> <p>When the client types 'Too' + the delete key + 'm', the server receives 'Too\x1bm'.</p> <p>How do I convert 'Too\x1bm' to 'Tom' in Python?</p> <p>There may also be other control characters like 'move cursor' and 'tab'.</p>
0
2016-10-11T16:48:07Z
39,982,592
<p>If you know all the 'wrong' characters, you can use <code>.replace()</code> to remove the unwanted parts, slicing out the control character together with the character it deletes:</p> <pre><code>&gt;&gt;&gt; a = 'Too\x1bm' &gt;&gt;&gt; a.replace(a[a.index('\x1b')-1:a.index('\x1b')+1], '') 'Tom' </code></pre>
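<p>If the input may contain several '\x1b' characters, the same idea can be wrapped in a small helper that removes each marker together with the character it deletes (a sketch, assuming '\x1b' is the only control character to handle):</p> <pre><code>def drop_escaped(s, marker='\x1b'):
    # remove each marker plus the character immediately before it
    while marker in s:
        i = s.index(marker)
        s = s[:max(i - 1, 0)] + s[i + 1:]
    return s

print(drop_escaped('Too\x1bm'))  # Tom
</code></pre>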
0
2016-10-11T16:54:25Z
[ "python", "sockets" ]
How to convert string with special control characters, like \'x1b', into a standard string?
39,982,497
<p>When I use a Python socket program, it gives an option like:</p> <pre class="lang-none prettyprint-override"><code>1) Input A to show your name 2) Input B to show your age 3) Input other to set your name &gt;&gt; </code></pre> <p>When the client types 'Too' + the delete key + 'm', the server receives 'Too\x1bm'.</p> <p>How do I convert 'Too\x1bm' to 'Tom' in Python?</p> <p>There may also be other control characters like 'move cursor' and 'tab'.</p>
0
2016-10-11T16:48:07Z
39,982,769
<p>My first guess would be:</p> <pre><code>line = 'Too\x1bm' if '\x1b' in line: while True: index = line.find('\x1b') if index &gt; 0: line = line[:index - 1] + line[index + 1:] else: break line = line.replace('\x1b', '') </code></pre>
0
2016-10-11T17:04:27Z
[ "python", "sockets" ]