title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Python, 'NoneType' object has no attribute 'get_text' | 39,982,524 | <p>The output says: </p>
<blockquote>
<p>'NoneType' object has no attribute 'get_text'</p>
</blockquote>
<p>How can I fix this?</p>
<pre><code>response = requests.get("https://www.exar.com/careers")
soup = BeautifulSoup(response.text, "html.parser")
data = []
table_main = soup.find_all("table", class_="table")
#pprint(table_main)
for table_row in table_main:
    job_category = table_row.find("th", class_="t3th").get_text().strip()
    tds = table_row.find_all("td")
    title = tds[0].find("td").get_text().strip()
    location = tds[1].find("td").get_text().strip()
    job = {
        "job_location": location,
        "job_title": title,
        "job_dept": job_category
    }
    data.append(job)
pprint(data)
</code></pre>
| -4 | 2016-10-11T16:50:05Z | 39,982,552 | <p>Not sure why you are trying to find <code>td</code>s inside <code>td</code>s here:</p>
<pre><code>title = tds[0].find("td").get_text().strip()
location = tds[1].find("td").get_text().strip()
</code></pre>
<p>Replace it with just:</p>
<pre><code>title = tds[0].get_text().strip()
location = tds[1].get_text().strip()
</code></pre>
<p>Works for me.</p>
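<p>For reference, a rough sketch of the loop with that change applied (the <code>th</code> lookup is kept from the question; guarding it against <code>None</code> is an extra precaution in case a table has no matching header cell):</p>
<pre><code>for table_row in table_main:
    th = table_row.find("th", class_="t3th")
    job_category = th.get_text().strip() if th else ""
    tds = table_row.find_all("td")
    title = tds[0].get_text().strip()
    location = tds[1].get_text().strip()
    data.append({"job_location": location, "job_title": title, "job_dept": job_category})
</code></pre>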
| 2 | 2016-10-11T16:51:50Z | [
"python",
"beautifulsoup"
] |
how to change all the words in a sentence into a number, and also be able to change it back | 39,982,553 | <p>OK, so I have to make a program in python, that is able to make a number for each word in a sentence, and if the word is the same it will have the same number, but I have to be able to change it back to the original sentence as well.</p>
<p>For example, "today the cat sat on the mat" would be changed into 0 1 2 3 4 1 5</p>
<p>0 being "today" and 5 being "mat", and 1 being "the"</p>
<p>So basically, if I get the idea of this, it's creating a variable for each word being a number; the only problem is I have no idea where to start making this program. </p>
<p>some help would really be appreciated thanks :)</p>
| -5 | 2016-10-11T16:51:51Z | 39,982,837 | <p>This sounds a lot like a school assignment. Since from my experience I can say that the best practice is to do it yourself, I'd suggest only looking at the hints I give first, and if you're really stuck, look at the code.</p>
<p>Hint 1: </p>
<blockquote class="spoiler">
<p> Separate the sentence into a list of words first.</p>
</blockquote>
<p>You can do this using</p>
<blockquote class="spoiler">
<p> <code>words = sentence.split(" ")</code></p>
</blockquote>
<p>Hint 2:</p>
<blockquote class="spoiler">
<p> You want to create some kind of mapping from word to number, but also in reverse.</p>
</blockquote>
<p>You can do this using</p>
<blockquote class="spoiler">
<p> dicts. Treat them as literal dictionaries: Make one dict with the words as keys and the numbers as values, and one with numbers as keys and words as values. That'll allow you to look up numbers and words as necessary. Note that the dict which has numbers as keys could theoretically be a list, however this might break when you don't want numbers anymore, or when you want to delete certain entries.</p>
</blockquote>
<p>Hint 3:</p>
<blockquote class="spoiler">
<p> You'll need to generate an entry in both dicts mentioned in Hint 2 for every word - to make sure you can go back. Thus, a for loop over the list with words, and at every iteration generate an entry in both dicts.</p>
</blockquote>
<p>Hint 4:</p>
<blockquote class="spoiler">
<p> In order to make sure same words map to the same number, there are two ways to go. Firstly, during the iteration, you can simply check if the word is already in the words->numbers dict. If it is, skip it. Otherwise, generate the entries. Note that you need to keep track of the highest number you have used. Secondly, you could convert the list with words to a set. This will directly remove all duplicates, though you may lose the ordering of the words, meaning that in your example it might become 3 2 0 5 4 2 1</p>
</blockquote>
<p>I hope this will be useful. If absolutely necessary, I could provide the code to do it, but I highly recommend figuring it out yourself.</p>
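<p>For completeness, a minimal sketch of the approach described in the hints (only look at it once you have tried it yourself):</p>
<pre><code>sentence = "today the cat sat on the mat"
words = sentence.split(" ")

word_to_num = {}
num_to_word = {}
for word in words:
    if word not in word_to_num:           # only assign a new number the first time we see a word
        number = len(word_to_num)
        word_to_num[word] = number
        num_to_word[number] = word

encoded = [word_to_num[w] for w in words]
print(encoded)                            # [0, 1, 2, 3, 4, 1, 5]

decoded = " ".join(num_to_word[n] for n in encoded)
print(decoded)                            # today the cat sat on the mat
</code></pre>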
| 1 | 2016-10-11T17:09:25Z | [
"python",
"python-3.x",
"variables",
"text",
"sentence"
] |
Getting key values from a dictionary | 39,982,612 | <p>I have a dictionary with indices as keys and timestamps as values. I wanted to get the keys for whose values there is a overlap.</p>
<p>ex:</p>
<pre><code>{1: 19-13-30
19-13-32
19-13-33
.
.
19-13-55,
2: 19-13-25
19-13-26
19-13-27
.
.
19-13-35,
3:19-13-10
19-13-31
.
.
19-13-18}
</code></pre>
<p>For the above dictionary values of 1 and 2 overlap(19-13-30 to 19-13-35). So, I want to return the keys whenever there is a overlap. In this case 1 & 2</p>
<p>For calculating the overlap I am iterating over the values and storing the start time and end time in a list like [starttime,endtime]. Then I am checking for overlap by </p>
<pre><code> overlapping = [ [x,y] for x in intervals for y in intervals if x is not y and x[1]>y[0] and x[0]<y[0] ]
for x in overlapping:
    print '{0} overlaps with {1}'.format(x[0],x[1])
</code></pre>
<p>This prints the values which overlap. But instead I want the keys whose values overlap.</p>
| 0 | 2016-10-11T16:55:24Z | 39,982,745 | <p>In keeping with your code, this should be a minimal modification:</p>
<pre><code>intervals= {1:[1,2], 2:[2,3], 3:[4,5], 4:[6,8], 5:[6.5,7]}
overlapping = [ [i,j,x,y] for i,x in intervals.iteritems() for j,y in intervals.iteritems() if x is not y and x[1]>y[0] and x[0]<y[0] ]
for x in overlapping:
    print '{0} overlaps with {1} at {2} and {3}'.format(x[2],x[3],x[0],x[1])
</code></pre>
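<p>For reference, with that example <code>intervals</code> dict the sketch should print something like:</p>
<pre><code>[6, 8] overlaps with [6.5, 7] at 4 and 5
</code></pre>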
| 0 | 2016-10-11T17:02:42Z | [
"python",
"dictionary"
] |
Getting key values from a dictionary | 39,982,612 | <p>I have a dictionary with indices as keys and timestamps as values. I wanted to get the keys for whose values there is a overlap.</p>
<p>ex:</p>
<pre><code>{1: 19-13-30
19-13-32
19-13-33
.
.
19-13-55,
2: 19-13-25
19-13-26
19-13-27
.
.
19-13-35,
3:19-13-10
19-13-31
.
.
19-13-18}
</code></pre>
<p>For the above dictionary values of 1 and 2 overlap(19-13-30 to 19-13-35). So, I want to return the keys whenever there is a overlap. In this case 1 & 2</p>
<p>For calculating the overlap I am iterating over the values and storing the start time and end time in a list like [starttime,endtime]. Then I am checking for overlap by </p>
<pre><code> overlapping = [ [x,y] for x in intervals for y in intervals if x is not y and x[1]>y[0] and x[0]<y[0] ]
for x in overlapping:
    print '{0} overlaps with {1}'.format(x[0],x[1])
</code></pre>
<p>This prints the values which overlap. But instead I want the keys whose values overlap.</p>
| 0 | 2016-10-11T16:55:24Z | 39,982,773 | <p>Instead of iterating over the intervals, you can iterate over the values (i.e. the positions in your interval list). Something like</p>
<pre><code>keys = range(len(intervals))
overlapping = [ [x,y] for x in keys for y in keys if x is not y and intervals[x][1]>intervals[y][0] and intervals[x][0]<intervals[y][0] ]
</code></pre>
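<p>To turn that into the printed key pairs the question asks for, a short follow-up (note that for plain integer keys <code>x != y</code> is the clearer test, even though <code>x is not y</code> happens to work here):</p>
<pre><code>for x, y in overlapping:
    print '{0} overlaps with {1}'.format(x, y)
</code></pre>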
| 0 | 2016-10-11T17:04:38Z | [
"python",
"dictionary"
] |
python try except yield combination | 39,982,763 | <p>I use function <code>f</code> to create generator but sometimes it can raise error. I would like two things to happen for the main code</p>
<ol>
<li>The <code>for</code> loop in the main block continues after catching the error</li>
<li>In the <code>except</code>, print out the index that generates the error (in reality the error may not occur for index 3)</li>
</ol>
<p>The code I came up with stops after the error is raised. How shall I implement the aforementioned two features? Many thanks.</p>
<pre><code>def f(n):
    for i in xrange(n):
        if i == 3:
            raise ValueError('hit 3')
        yield i

if __name__ == '__main__':
    a = enumerate(f(10))
    try:
        for i, x in a:
            print i, x
    except ValueError:
        print 'you have a problem with index x'
</code></pre>
| 2 | 2016-10-11T17:03:49Z | 39,982,860 | <p>You will have to catch the exception <em>inside</em> your generator if you want its loop to continue running. Here is a working example:</p>
<pre><code>def f(n):
    for i in xrange(n):
        try:
            if i == 3:
                raise ValueError('hit 3')
            yield i
        except ValueError:
            print ("Error with key: {}".format(i))
</code></pre>
<p>Iterating through it like in your example gives:</p>
<pre><code>>>> for i in f(10):
... print (i)
...
0
1
2
Error with key: 3
4
5
6
7
8
9
</code></pre>
| 3 | 2016-10-11T17:10:32Z | [
"python",
"try-catch",
"yield",
"except"
] |
python try except yield combination | 39,982,763 | <p>I use function <code>f</code> to create generator but sometimes it can raise error. I would like two things to happen for the main code</p>
<ol>
<li>The <code>for</code> loop in the main block continues after catching the error</li>
<li>In the <code>except</code>, print out the index that generates the error (in reality the error may not occur for index 3)</li>
</ol>
<p>The code I came up with stops after the error is raised. How shall I implement the aforementioned two features? Many thanks.</p>
<pre><code>def f(n):
    for i in xrange(n):
        if i == 3:
            raise ValueError('hit 3')
        yield i

if __name__ == '__main__':
    a = enumerate(f(10))
    try:
        for i, x in a:
            print i, x
    except ValueError:
        print 'you have a problem with index x'
</code></pre>
| 2 | 2016-10-11T17:03:49Z | 39,983,439 | <p>As per the OP's clarifications, he wants to continue the <strong>outside</strong> main for loop in case of an error inside the generator, showing at which index the error occurred.</p>
<p>An answer by brianpck takes the approach of modifying the generator so that it prints the error. This way the main loop doesn't know an error occurred at that index, and thus you have at index x-1 the results following the error. Sometimes you care about the assumption "one index <-> one result".</p>
<p>To solve this we can manually manage the error by yielding it and then deciding what to do in the generator.</p>
<p>Like the following:</p>
<pre><code>def f(n):
    for i in xrange(n):
        if i == 3:
            yield ValueError('hit 3')
            continue  # or break, depends on problem logic
        yield i

if __name__ == '__main__':
    a = enumerate(f(10))
    for i, x in a:
        if isinstance(x, ValueError):
            print "Error at index", i
            continue
        print i, x
</code></pre>
<p>Usually it's very unlikely that a generator is yielding Exception instances, so it would be safe to check whether the result is an Exception and deal with it.</p>
| 1 | 2016-10-11T17:46:19Z | [
"python",
"try-catch",
"yield",
"except"
] |
python try except yield combination | 39,982,763 | <p>I use function <code>f</code> to create generator but sometimes it can raise error. I would like two things to happen for the main code</p>
<ol>
<li>The <code>for</code> loop in the main block continues after catching the error</li>
<li>In the <code>except</code>, print out the index that generates the error (in reality the error may not occur for index 3)</li>
</ol>
<p>The code I came up with stops after the error is raised. How shall I implement the aforementioned two features? Many thanks.</p>
<pre><code>def f(n):
    for i in xrange(n):
        if i == 3:
            raise ValueError('hit 3')
        yield i

if __name__ == '__main__':
    a = enumerate(f(10))
    try:
        for i, x in a:
            print i, x
    except ValueError:
        print 'you have a problem with index x'
</code></pre>
| 2 | 2016-10-11T17:03:49Z | 39,983,724 | <p>I suspect that, in general, you want to be able to catch values that result in error conditions without halting the loop inside the generator. Here's another approach that includes a Boolean in the result of the generator (as a 2-tuple) that indicates whether the calculation could be accomplished successfully.</p>
<pre><code>def f(n):
    for i in range(n):
        accept=True
        try:
            result=1/(3-i)
        except:
            accept=False
        yield accept, i

a=enumerate(f(10))
for k,(ok,i) in a:
    print (ok,i)
</code></pre>
<p>In this case, only the value 3 causes failure. Here's the output.</p>
<pre><code>True 0
True 1
True 2
False 3
True 4
True 5
True 6
True 7
True 8
True 9
</code></pre>
| 1 | 2016-10-11T18:02:54Z | [
"python",
"try-catch",
"yield",
"except"
] |
Web Scraping from a Website with a Static URL | 39,982,772 | <p>So I am trying to extract postal code information from the <a href="https://www.canadapost.ca/cpo/mc/personal/postalcode/fpc.jsf?LOCALE=en" rel="nofollow">Canada Post Website</a>. The issue I am having here is the URL remains static regardless of what address you enter when trying to find a postal code. For instance, starting at the base page, if I input '1 MACLEAN ST' as my search query, and hit enter </p>
<p><a href="https://i.stack.imgur.com/C0hmj.png" rel="nofollow"><img src="https://i.stack.imgur.com/C0hmj.png" alt="enter image description here"></a></p>
<p>You will notice the URL remains the same</p>
<p><a href="https://i.stack.imgur.com/Xo270.png" rel="nofollow"><img src="https://i.stack.imgur.com/Xo270.png" alt="enter image description here"></a></p>
<p>I have never web scraped from a website with a static URL before, and was wondering how I would go about doing this (eg. getting specific libraries for Python etc). I think at some point, I more than likely have to extract the postal code information (' A0J 1T0' in this case) through an html tag, as seen below. </p>
<p><a href="https://i.stack.imgur.com/dn56H.png" rel="nofollow"><img src="https://i.stack.imgur.com/dn56H.png" alt="enter image description here"></a></p>
| -2 | 2016-10-11T17:04:34Z | 39,982,856 | <p>Since you need to perform actions prior to scraping, you need to use a headless browser like <a href="http://phantomjs.org" rel="nofollow">phantomjs</a>. It's a bit more challenging than basic scraping, but it will allow you to type in addresses programmatically and then scrape the resulting data of the page that is returned.</p>
| 0 | 2016-10-11T17:10:21Z | [
"python",
"html",
"url",
"static",
"web-scraping"
] |
Web Scraping from a Website with a Static URL | 39,982,772 | <p>So I am trying to extract postal code information from the <a href="https://www.canadapost.ca/cpo/mc/personal/postalcode/fpc.jsf?LOCALE=en" rel="nofollow">Canada Post Website</a>. The issue I am having here is the URL remains static regardless of what address you enter when trying to find a postal code. For instance, starting at the base page, if I input '1 MACLEAN ST' as my search query, and hit enter </p>
<p><a href="https://i.stack.imgur.com/C0hmj.png" rel="nofollow"><img src="https://i.stack.imgur.com/C0hmj.png" alt="enter image description here"></a></p>
<p>You will notice the URL remains the same</p>
<p><a href="https://i.stack.imgur.com/Xo270.png" rel="nofollow"><img src="https://i.stack.imgur.com/Xo270.png" alt="enter image description here"></a></p>
<p>I have never web scraped from a website with a static URL before, and was wondering how I would go about doing this (eg. getting specific libraries for Python etc). I think at some point, I more than likely have to extract the postal code information (' A0J 1T0' in this case) through an html tag, as seen below. </p>
<p><a href="https://i.stack.imgur.com/dn56H.png" rel="nofollow"><img src="https://i.stack.imgur.com/dn56H.png" alt="enter image description here"></a></p>
| -2 | 2016-10-11T17:04:34Z | 39,982,891 | <p>You could write a wrapper using something like <a href="http://scraping.pro/selenium-ide-and-web-scraping/" rel="nofollow">Selenium</a> to interact with the page dynamically. </p>
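<p>For illustration, a rough sketch of that approach (the element locators below are assumptions -- you would need to inspect the page to find the real ids/names of the search box and the result element):</p>
<pre><code>from selenium import webdriver

driver = webdriver.Firefox()   # or webdriver.PhantomJS() for a headless run
driver.get("https://www.canadapost.ca/cpo/mc/personal/postalcode/fpc.jsf?LOCALE=en")

search_box = driver.find_element_by_id("address")           # placeholder locator
search_box.send_keys("1 MACLEAN ST")
search_box.submit()

result = driver.find_element_by_class_name("postal-code")   # placeholder locator
print(result.text)
driver.quit()
</code></pre>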
<p>Alternatively, you may want to look into their developer API, which should allow you to provide an address and get back a code (as well as more advanced use cases like creating shipping labels).</p>
<p><a href="https://www.canadapost.ca/cpo/mc/business/productsservices/developers/services/fundamentals.jsf" rel="nofollow">https://www.canadapost.ca/cpo/mc/business/productsservices/developers/services/fundamentals.jsf</a></p>
| 1 | 2016-10-11T17:12:07Z | [
"python",
"html",
"url",
"static",
"web-scraping"
] |
Recursion in depth (Python) | 39,982,813 | <p>Check the following code:</p>
<pre><code>>>> def fib(x):
... if x == 0 or x == 1:
... return 1
... else:
... return fib(x-1) + fib(x-2)
>>> print(fib(4))
</code></pre>
<p>According to the comments in the SoloLearn Python tutorial (for Recursion), the code works like this:</p>
<pre><code>1. fib(4) = fib(3) + fib(2)
2. = (fib(2) + fib(1)) + (fib(1) + fib(0))
3. = fib(1) + fib(0) + fib(1) + fib(1) + fib(0)
4. = 1+ 1 + 1 + 1 + 1
5. = 5
</code></pre>
<p>After line 2, only <code>fib(2)</code> should go to the else part of the <code>fib()</code> function, right?
The two <code>fib(1)</code> and the single <code>fib(0)</code> meet the criteria of the if part of the <code>fib()</code> function. So 1 is returned. My question is, why in the 3rd line, <code>fib(1) + fib(0) + fib(1) + fib(1) + fib(0)</code> are all replaced by 1's and then added?</p>
<p>Do forgive me for asking such a noob question.</p>
| 2 | 2016-10-11T17:07:17Z | 39,982,915 | <p>@MorganThrapp is correct. More specifically, the base case of the recursive function <code>fib</code> is:</p>
<pre><code>if x==0 or x==1:
    return 1
</code></pre>
<p>This clause is triggered when <code>fib(1)</code> or <code>fib(0)</code> is called. In programming parlance, the function <em>evaluates</em> to its return value, which here is <code>1</code>.</p>
<p>In your example of <code>fib(4)</code>, <code>fib</code> gets called five times with either a <code>1</code> or a <code>0</code> input and those results are all added together, and that results in your final return value of <code>5</code> which is returned from your original call to <code>fib(4)</code> and immediately passed into the <code>print</code> function.</p>
| 2 | 2016-10-11T17:13:43Z | [
"python",
"recursion",
"return"
] |
Recursion in depth (Python) | 39,982,813 | <p>Check the following code:</p>
<pre><code>>>> def fib(x):
... if x == 0 or x == 1:
... return 1
... else:
... return fib(x-1) + fib(x-2)
>>> print(fib(4))
</code></pre>
<p>According to the comments in the SoloLearn Python tutorial (for Recursion), the code works like this:</p>
<pre><code>1. fib(4) = fib(3) + fib(2)
2. = (fib(2) + fib(1)) + (fib(1) + fib(0))
3. = fib(1) + fib(0) + fib(1) + fib(1) + fib(0)
4. = 1+ 1 + 1 + 1 + 1
5. = 5
</code></pre>
<p>After line 2, only <code>fib(2)</code> should go to the else part of the <code>fib()</code> function, right?
The two <code>fib(1)</code> and the single <code>fib(0)</code> meet the criteria of the if part of the <code>fib()</code> function. So 1 is returned. My question is, why in the 3rd line, <code>fib(1) + fib(0) + fib(1) + fib(1) + fib(0)</code> are all replaced by 1's and then added?</p>
<p>Do forgive me for asking such a noob question.</p>
| 2 | 2016-10-11T17:07:17Z | 39,983,008 | <p>I believe the description of how the code works is misleading, since it seems to show that not every value is evaluated when going from one line to the next. If we replace everything by the functions it calls (or the value it returns) on the next line, and put parentheses as in your example, we get the following, which might help you understand better the inner workings of that code:</p>
<pre><code>1. fib(4)
2. = fib(3) + fib(2)
3. = (fib(2) + fib(1)) + (fib(1) + fib(0))
4. = ((fib(1) + fib(0)) + 1) + (1 + 1)
5. = 1 + 1 + 1 + 2
6. = 5
</code></pre>
| 3 | 2016-10-11T17:20:44Z | [
"python",
"recursion",
"return"
] |
Recursion in depth (Python) | 39,982,813 | <p>Check the following code:</p>
<pre><code>>>> def fib(x):
... if x == 0 or x == 1:
... return 1
... else:
... return fib(x-1) + fib(x-2)
>>> print(fib(4))
</code></pre>
<p>According to the comments in the SoloLearn Python tutorial (for Recursion), the code works like this:</p>
<pre><code>1. fib(4) = fib(3) + fib(2)
2. = (fib(2) + fib(1)) + (fib(1) + fib(0))
3. = fib(1) + fib(0) + fib(1) + fib(1) + fib(0)
4. = 1+ 1 + 1 + 1 + 1
5. = 5
</code></pre>
<p>After line 2, only <code>fib(2)</code> should go to the else part of the <code>fib()</code> function, right?
The two <code>fib(1)</code> and the single <code>fib(0)</code> meet the criteria of the if part of the <code>fib()</code> function. So 1 is returned. My question is, why in the 3rd line, <code>fib(1) + fib(0) + fib(1) + fib(1) + fib(0)</code> are all replaced by 1's and then added?</p>
<p>Do forgive me for asking such a noob question.</p>
| 2 | 2016-10-11T17:07:17Z | 39,984,186 | <p>This is a doubly recursive function, so its calls result in a tree structure of calls, with the base cases of fib(1) and fib(0):</p>
<pre><code>fib(4) = fib(3) + fib(2)
/ \ / \
fib(4) = ( fib(2) + fib(1) ) + ( fib(1) + fib(0) )
/ \ | | |
fib(4) = ( ( fib(1) + fib(0) ) + fib(1) ) + ( fib(1) + fib(0) )
| | | | |
fib(4) = ( ( 1 + 1 ) + 1 ) + ( 1 + 1 )
\ / | \ /
fib(4) = ( ( 2 ) + 1 ) + ( 2 )
\ / |
fib(4) = ( 3 ) + ( 2 )
\ /
fib(4) = 5
</code></pre>
<p>You can also visualize the working of the function by adding some prints in the right places, an extra auxiliary argument to aid with it, and some other minor changes:</p>
<pre><code>>>> def fib(n, nivel=0):
    if n==0 or n==1:
        print(" "*nivel,"fib(",n,")=1")
        return 1
    else:
        print(" "*nivel,"fib(",n,")")
        result = fib(n-1,nivel+1) + fib(n-2,nivel+1)
        print(" "*nivel,"fib(",n,")=",result)
        return result
>>> fib(4)
fib( 4 )
fib( 3 )
fib( 2 )
fib( 1 )=1
fib( 0 )=1
fib( 2 )= 2
fib( 1 )=1
fib( 3 )= 3
fib( 2 )
fib( 1 )=1
fib( 0 )=1
fib( 2 )= 2
fib( 4 )= 5
5
>>>
</code></pre>
<p>Here you can notice that the calls are resolved in sequence, from left to right and bottom up.</p>
| 3 | 2016-10-11T18:27:40Z | [
"python",
"recursion",
"return"
] |
Using xpath to loop over all <h2> tags within a specific div | 39,982,929 | <p>I am trying to loop over every <code><h2></code> tag (get the text of it) that is inside div's with the <code>id="somediv"</code> using this code:</p>
<pre><code>for k,div1 in enumerate(tree.xpath('//div[@id="someid"]')):
    print div1.xpath('.//h2['+str(k+1)+']/text()')
</code></pre>
<p>but it doesn't work. Why? However this works:</p>
<pre><code>for i in range(5): #let's say there are 5 div's with id="someid" to make things easier
    print tree.xpath('//div[@id="someid"]/div/div[1]/div[2]/h2['+str(i)+']/text()')
</code></pre>
<p>Problem here is, that I have to give the absolute path <code>.../div/div[1]/div[2]...</code> which I don't want. My first solution looks nice but is not producing the desired result, instead I can only retrieve all <code><h2></code> tags from one <code>div="someid"</code> at a time. Can anyone tell me what I am doing wrong?</p>
| 1 | 2016-10-11T17:14:53Z | 39,983,553 | <p><code>.//</code> will continue the search down the tree. A list of h2 text nodes subordinate to your div is just</p>
<pre><code>tree.xpath('//div[@id="someid"]/.//h2/text()')
</code></pre>
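<p>If you want the <code>h2</code> text grouped per matching div rather than as one flat list, your original loop also works once the positional index is dropped:</p>
<pre><code>for div1 in tree.xpath('//div[@id="someid"]'):
    print div1.xpath('.//h2/text()')
</code></pre>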
| 1 | 2016-10-11T17:52:48Z | [
"python",
"xpath"
] |
Updating python IDLE to 3.5 | 39,982,981 | <p>I currently use python 3.2.3 in the IDLE (which I open from 'python 3', 'programming', 'menu') on my Raspberry Pi.
In an attempt to solve <a href="https://github.com/Zulko/moviepy/issues/322" rel="nofollow">this problem</a> I am trying to get my IDLE to run python 3.5.</p>
<p>I have got Python 3.5 to run if I enter into the terminal </p>
<pre><code>python3
</code></pre>
<p>but the IDLE still uses Python 3.2.3</p>
<p>Any thoughts?</p>
<hr>
<p>EDIT:
I realise it is actually a duplicate to <a href="http://stackoverflow.com/questions/37079195/how-do-you-update-to-the-latest-python-3-5-1-version-on-a-raspberry-pi">this</a> question, but because the asker didn't specify that they wanted the IDLE version, the answerers didn't answer the question how it was intended.</p>
| 0 | 2016-10-11T17:19:12Z | 39,983,074 | <p>This is because Raspbian (or whatever OS you're using) depends on those pre-installed versions of Python and later versions are not available via <code>apt-get</code> for good reason. </p>
<p>Edit:</p>
<p>I see you've already got Python 3.5 installed. To get IDLE to use the appropriate version, you need to set IDLE to use that version of Python by changing the shebang line in the IDLE-python file. This is probably located somewhere like <code>/usr/bin/idle-python3.2</code></p>
<p>The contents of the file will look something like this:</p>
<pre><code>#! /usr/bin/python3.2
from idlelib.PyShell import main
if __name__ == '__main__':
    main()
</code></pre>
<p>So you should edit it to the version you'd like to use by default.</p>
<pre><code>#! /usr/bin/python3.5
from idlelib.PyShell import main
if __name__ == '__main__':
    main()
</code></pre>
<p>Alternatively, make an <em>additional</em> IDLE that points to 3.5</p>
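<p>A rough sketch of that alternative (the paths are assumptions and may differ on your system): either copy the launcher and point its shebang at 3.5, or start IDLE directly through the interpreter you want, assuming that build was compiled with tkinter support:</p>
<pre><code>sudo cp /usr/bin/idle-python3.2 /usr/bin/idle-python3.5
sudo nano /usr/bin/idle-python3.5    # change the first line to #! /usr/bin/python3.5

# or simply:
python3.5 -m idlelib.idle
</code></pre>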
| 0 | 2016-10-11T17:24:18Z | [
"python",
"linux",
"python-3.x",
"raspberry-pi"
] |
Python altering list item in iteration | 39,983,007 | <p>I am trying to get this python code to get rid of punctuation marks associated with words and count the unique words. For some reason it's still counting both "hello." and "hello". Any help would be most appreciated. </p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower()
    words = words.split()
    for word in words:
        if ord('a') <= ord(word[-1]) <= ord('z'):
            pass
        elif ord('A') <= ord(word[-1]) <= ord('Z'):
            pass
        else:
            word[:-1]
    word_dict = {word:words.count(word)+1 for word in set(words)}
    return(word_dict)
</code></pre>
| 0 | 2016-10-11T17:20:39Z | 39,983,277 | <p>I don't know why you're adding 1 to count.</p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower().split()
    for word in words:
        if ord('a') <= ord(word[-1]) <= ord('z'):
            pass
        elif ord('A') <= ord(word[-1]) <= ord('Z'):
            pass
    word_dict = {word:words.count(word) for word in set(words)}
    return(word_dict)
</code></pre>
<p>{'hello': 2, 'my': 1, 'name': 1, 'is': 1}</p>
<p>Edit:</p>
<p>as brianpck points out:</p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower().split()
    word_dict = {word:words.count(word) for word in set(words)}
    return(word_dict)
</code></pre>
<p>also will give the same result.</p>
| 1 | 2016-10-11T17:36:55Z | [
"python"
] |
Python altering list item in iteration | 39,983,007 | <p>I am trying to get this python code to get rid of punctuation marks associated with words and count the unique words. For some reason it's still counting both "hello." and "hello". Any help would be most appreciated. </p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower()
    words = words.split()
    for word in words:
        if ord('a') <= ord(word[-1]) <= ord('z'):
            pass
        elif ord('A') <= ord(word[-1]) <= ord('Z'):
            pass
        else:
            word[:-1]
    word_dict = {word:words.count(word)+1 for word in set(words)}
    return(word_dict)
</code></pre>
| 0 | 2016-10-11T17:20:39Z | 39,983,294 | <p>You are making it too complicated, as Sohier Dane mentioned in the comments you can make use of the other post to remove punctuation and simplify the script to:</p>
<pre><code>import string
def word_distribution(words):
    words = words.translate(None, string.punctuation).lower()
    d = {}
    for w in words.split():
        if w not in d.keys():
            d[w] = 1
        else:
            d[w] += 1
    return d
</code></pre>
<p>Results:</p>
<pre><code>>>> x='Hello My Name Is hello.'
>>> print word_distribution(x)
>>> {'is': 1, 'my': 1, 'hello': 2, 'name': 1}
</code></pre>
| 1 | 2016-10-11T17:37:57Z | [
"python"
] |
Python altering list item in iteration | 39,983,007 | <p>I am trying to get this python code to get rid of punctuation marks associated with words and count the unique words. For some reason it's still counting both "hello." and "hello". Any help would be most appreciated. </p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower()
    words = words.split()
    for word in words:
        if ord('a') <= ord(word[-1]) <= ord('z'):
            pass
        elif ord('A') <= ord(word[-1]) <= ord('Z'):
            pass
        else:
            word[:-1]
    word_dict = {word:words.count(word)+1 for word in set(words)}
    return(word_dict)
</code></pre>
| 0 | 2016-10-11T17:20:39Z | 39,983,304 | <p>There are certainly better ways of achieving what you are trying to do, but this answer fixes your code.</p>
<p>Strings are immutable and lists are mutable. Nowhere in your code were you modifying the list, and <code>words[-1]</code> won't have any impact because you were not reassigning it and strings are immutable.</p>
<pre><code>def word_distribution(words):
    word_dict = {}
    words = words.lower()
    words = words.split()
    for word in words:
        index = words.index(word)
        if ord('a') <= ord(word[-1]) <= ord('z'):
            pass
        elif ord('A') <= ord(word[-1]) <= ord('Z'):
            pass
        else:
            word = word[:-1]
            words[index] = word
    word_dict = {word:words.count(word) for word in set(words)}
    return(word_dict)
</code></pre>
| 1 | 2016-10-11T17:38:37Z | [
"python"
] |
N/A in integer field | 39,983,051 | <p>I am importing Excel into postgreSQL using Python. Below is the field I am having problem with. Actual Sales Price had been an integer data type value which for a while but now this column contains an N/A value which is blowing up my Python script. Is there anything I can add to this script which will tell it to bring in N/A without changing the data type to varchar.</p>
<pre><code>import psycopg2
import xlrd
book = xlrd.open_workbook("T:\DataDump\8888.xlsx")
sheet = book.sheet_by_name("ProjectConsolidated")
database = psycopg2.connect (database = "***", user="****")
cursor = database.cursor()
delete = """drop table if exists "Python".ProjectConsolidated"""
print (delete)
mydata = cursor.execute(delete)
cursor.execute('''CREATE TABLE "Python".ProjectConsolidated
(DCAD_Prop_ID VARCHAR(25),
Actual_Close_Date date,
Actual_Sales_Price integer,
);''')
print "Table created successfully"
query = """INSERT INTO "Python".ProjectConsolidated (DCAD_Prop_ID,
Actual_Close_Date, Actual_Sales_Price)
VALUES (%s, %s, %s)"""
for r in range(1, sheet.nrows):
    DCAD_Prop_ID = sheet.cell(r,0).value
    Actual_Close_Date = None if not sheet.cell(r,1).value else xlrd.xldate.xldate_as_datetime(sheet.cell(r,1).value,book.datemode)
    Actual_Sales_Price = None if not sheet.cell(r,2).value else sheet.cell(r,2).value
    values = (DCAD_Prop_ID,
              Actual_Close_Date, Actual_Sales_Price)
    cursor.execute(query, values)
cursor.close()
database.commit()
database.close()
print ""
print "All Done! Bye, for now."
print ""
columns = str(sheet.ncols)
rows = str(sheet.nrows)
print "I just imported Excel into postgreSQL"
</code></pre>
| 0 | 2016-10-11T17:22:53Z | 39,983,115 | <pre><code>Actual_Sales_Price = None if not sheet.cell(r,61).value else sheet.cell(r,61).value
try:
    float(Actual_Sales_Price)
except (ValueError, TypeError):
    Actual_Sales_Price = None
</code></pre>
<p>If python fails to convert your actual sales price into a float (presumably, because it's not a number), we change the ASP to Null. </p>
<p>Your DB driver should know how to translate a python None into Postgres. </p>
<p>Whether you actually want the Actual Sales Price to be Null in your DB is up to you, although it sounds wrong from the limited info provided. </p>
| 0 | 2016-10-11T17:26:37Z | [
"python",
"postgresql"
] |
Change .so, .pyc, and .py search order in python search path | 39,983,095 | <p>According to this post, python prioritizes .so and .pyc before .py files when searching for modules. Is there some way to make .py searched first?</p>
<p><a href="http://stackoverflow.com/questions/6584457/what-is-the-precedence-of-python-compiled-files-in-imports">What is the precedence of python compiled files in imports?</a></p>
<p>My use case is that i have libraries that have .py files but are compiled to .pyc using a different bit size than my ipython notebook. I'd like to use ipython notebook on those libraries without messing up my dev environment</p>
| 0 | 2016-10-11T17:25:14Z | 39,983,250 | <p>You shouldn't be using the same package installations with two different versions of the python interpreter, since their bytecode won't be compatible. You should install the packages to each python installation separately.</p>
<p>In Python 3, this is less of an issue, since the interpreter version is hashed into the bytecode filename inside a <code>__pycache__</code> directory, so multiple interpreters can generate bytecode for the same installation without stepping on each other.</p>
<p>You can also run python with the <a href="https://docs.python.org/2/using/cmdline.html#cmdoption-B" rel="nofollow"><code>-B</code> flag</a>, or set the <code>PYTHONDONTWRITEBYTECODE</code> environment variable, to tell python not to compile .pyc files</p>
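<p>For example (with a placeholder script name):</p>
<pre><code>python -B myscript.py

# or, for a whole shell session:
export PYTHONDONTWRITEBYTECODE=1
python myscript.py
</code></pre>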
| 0 | 2016-10-11T17:35:40Z | [
"python",
"import"
] |
python regex separated elements of a string | 39,983,143 | <p><strong>Though I'd just update the start of this question for people that come across it in future. Regex was not the optimal solution for my particular problem, but trying to regex complicated and separated patterns (my logic from the start) in one go wasn't ideal.
The answer to the question as stated would be to try separate regexes I think, and 'filter out' the stuff needed.
My file could be worked on with the <code>pandas.read_fwf()</code> solution for optimal results so I chose that as the full answer.</strong></p>
<p>I'm sure this has been asked somewhere before but I can't find a question that is exactly trying to do what I want - so my apologies in advance.</p>
<p><strong>TLDR</strong> How would you regex for several different patterns in a line that are not located next to each other, or properly delimited? Am I wrong to be trying to do this in one move?</p>
<p>I have some strings in a pretty verbose file (see end of post) that I want to pull out. I want multiple bits of information from different columns within a line (though they are not properly delimited).</p>
<p>I know I can get this into <code>match.group()</code> which will be perfect (because I intend to use each element I pull out later in isolation), except I can't figure out how to match several substrings that are physically separated from one another in the string (unless trying to do this in one go is just wrong?).</p>
<p>I can extract the table part that I want with some simple regex no problem:</p>
<pre><code>#!/usr/bin/python
import re
import sys  # needed for sys.argv below

hhresult_file = sys.argv[1] # The above file
regex = re.compile(r'\s*\d{1,2}\s\w{4}_\w\s.*') # Will match the whole line (my first shot at the problem)

def main():
    with open(hhresult_file, 'r') as result_fasta:
        lines = result_fasta.readlines()
        for line in lines:
            match = re.search(regex,line)
            if match:
                print(match.group())

if __name__ == '__main__' :
    main()
</code></pre>
<p>But I'm also trying to pull out the columns which read "Hit" "Prob" "E-Value" "P-Value".</p>
<p>I think I can synthesise the required regexes for each individual fields (there are some nuances like the switch between exponentiated SI values and floats for example).</p>
<p>What I don't know how to do is 'disregard' regions of the string? Specifically, I can't get the 'Hit' (= 3izo_F) and then the 'Prob' field because of the hit description in the intervening space.</p>
<p>I was trying to go about it with grouped regexes, but without being physically adjacent it doesn't work (something like these, though there may be errors in them):</p>
<pre><code>regex = re.compile(r'''
(\w{4}_\w) # Match the hit
(\d{1,3}\.\d') # Match the probability score
(\d\.?\d?|\d\.?\d?E-\d\d|\d\.\d*) # E value as float/E-
(\d\.?\d?|\d\.?\d?E-\d\d|\d\.\d*) # Match SI or float P value
(\d+\.\d+) # Match the score
''',re.VERBOSE)
</code></pre>
<p>The file in question:</p>
<pre><code>Query PAU_03380 PAU_03380 hypothetical protein 3919442:3920968 reverse MW:51681
Match_columns 508
No_of_seqs 1 out of 1
Neff 1.0
Searched_HMMs 37488
Date Mon May 23 20:23:54 2016
Command hhsearch -cpu 10 -i /home/wms_joe/PVCs/PVC_operons/prot_all/PAU_03380.faa -d /home/wms_joe/Applications/HHSuite/databases/pdb70/pdb70_hhm.ffdata -B 5 -Z 5 -E 1E-03 -nocons -nopred -nodssp
No Hit Prob E-value P-value Score SS Cols Query HMM Template HMM
1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
2 3izo_F Fiber; pentameric pento 97.6 1.3E-07 3.4E-12 95.6 0.0 156 156-317 210-388 (581)
3 1ocy_A Bacteriophage T4 short 97.6 1.8E-07 4.7E-12 80.4 0.0 85 323-418 10-122 (198)
4 1v1h_A Fibritin, fiber protein 96.1 0.00011 3E-09 60.4 0.0 30 167-198 2-31 (103)
5 1v1h_A Fibritin, fiber protein 95.9 0.00019 5.1E-09 59.1 0.0 10 168-177 41-50 (103)
6 1pdi_A Short tail fiber protei 95.6 0.00041 1.1E-08 63.3 0.0 26 323-348 90-116 (278)
7 2xgf_A Long tail fiber protein 94.1 0.005 1.3E-07 55.1 0.0 31 318-348 22-52 (242)
8 1h6w_A Bacteriophage T4 short 84.7 0.25 6.7E-06 47.1 0.0 27 323-349 255-282 (312)
9 1qiu_A Adenovirus fibre; fibre 79.9 0.54 1.4E-05 44.4 0.0 24 92-115 7-30 (264)
10 3s6x_A Outer capsid protein si 72.0 1.3 3.4E-05 43.6 0.0 69 106-191 44-112 (325)
No 1
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=98.13 E-value=2.7e-09 Score=107.58 Aligned_cols=65 Identities=22% Similarity=0.362 Sum_probs=42.7
Q PAU_03380 93 PLILKDDVLSVDLGSGLTNETNGICVGQGDGITVNTSNVAVKQGNGISVTSSGGVAVKVSANKGLSVD 160 (508)
||-+.++-|.++....|+...+++.+--+++++|+.....++....++++ .+++++++. .||.++
T 3izo_F 104 PLTVTSEALTVAAAAPLMVAGNTLTMQSQAPLTVHDSKLSIATQGPLTVS-EGKLALQTS--GPLTTT 168 (581)
Confidence 55555556666666667777777777777777777776777777777764 566666554 355554
No 2
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=97.60 E-value=1.3e-07 Score=95.57 Aligned_cols=156 Identities=19% Similarity=0.323 Sum_probs=85.6
Q PAU_03380 156 GLSVDSSGVAVKVNTDKGISVDGNGVAVKVNTSKGISVDNTGVAVIANASKGISVDGSGV--------------AVIANT 221 (508)
.|.+..++-.+.+++..|+.|.++.+.+|+ ..++.+++.|- +-.+...|+.++...- .+..+.
T 3izo_F 210 PLHVTDDLNTLTVATGPGVTINNTSLQTKV--TGALGFDSQGN-MQLNVAGGLRIDSQNRRLILDVSYPFDAQNQLNLRL 286 (581)
Confidence 344544434556666667777666655443 23333333221 1111222333332211 234445
</code></pre>
<p>It goes on a bit but is just more of the above 2 alignments.</p>
<p><strong>UPDATE 1</strong></p>
<p>Just to provide an example of what I'd ideally like at the end:</p>
<p>Given the line in the 'short table':</p>
<pre><code> 1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
</code></pre>
<p>I'd like to get either a delimited string, or separate <code>match.group</code> for:</p>
<p>The PDB Hit ID == <code>3izo_F</code></p>
<p>Each of the first 4 metrics (as separate groups ideally, but I could deal with that after the fact) = <code>98.1</code> <code>2.7E-09</code> <code>7.3E-14</code> <code>107.6</code></p>
<p>Such a shame this program doesn't just provide a proper tabular output :(</p>
| 1 | 2016-10-11T17:28:44Z | 39,983,968 | <p>You have two parts in your data file. One is compact table:</p>
<pre><code> 1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
2 3izo_F Fiber; pentameric pento 97.6 1.3E-07 3.4E-12 95.6 0.0 156 156-317 210-388 (581)
3 1ocy_A Bacteriophage T4 short 97.6 1.8E-07 4.7E-12 80.4 0.0 85
</code></pre>
<p>Fields in it have fixed position. So instead of regular expressions you can use simple substrings: </p>
<pre><code>line[4:34] for hit
line[36:40] for prob
</code></pre>
<p>But that table has a trimmed hit field. If you want its full content, you have to parse the second part of the file, and multiline regular expressions are a good choice for that. This one finds hit, probability and E-value; feel free to expand it.</p>
<pre><code>re.compile(r"No \d*\n>([^\n]*)\nProbab=([\d\.e\-]*).*E-value=([\d\.e\-]*).*", re.MULTILINE)
</code></pre>
<p>But that part of the file does not contain P-value. So it seems that you will have to combine these methods.</p>
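<p>A usage sketch for that pattern (untested against the full file, and written in the Python 2 style used in the question):</p>
<pre><code>import re

pattern = re.compile(r"No \d*\n>([^\n]*)\nProbab=([\d\.e\-]*).*E-value=([\d\.e\-]*).*", re.MULTILINE)

with open(hhresult_file) as result_handle:
    for hit, prob, evalue in pattern.findall(result_handle.read()):
        print hit, prob, evalue
</code></pre>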
| 1 | 2016-10-11T18:16:28Z | [
"python",
"regex",
"string"
] |
python regex separated elements of a string | 39,983,143 | <p><strong>Though I'd just update the start of this question for people that come across it in future. Regex was not the optimal solution for my particular problem, but trying to regex complicated and separated patterns (my logic from the start) in one go wasn't ideal.
The answer to the question as stated would be to try separate regexes I think, and 'filter out' the stuff needed.
My file could be worked on with the <code>pandas.read_fwf()</code> solution for optimal results so I chose that as the full answer.</strong></p>
<p>I'm sure this has been asked somewhere before but I can't find a question that is exactly trying to do what I want - so my apologies in advance.</p>
<p><strong>TLDR</strong> How would you regex for several different patterns in a line that are not located next to each other, or properly delimited? Am I wrong to be trying to do this in one move?</p>
<p>I have some strings in a pretty verbose file (see end of post) that I want to pull out. I want multiple bits of information from different columns within a line (though they are not properly delimited).</p>
<p>I know I can get this into <code>match.group()</code> which will be perfect (because I intend to use each element I pull out later in isolation), except I can't figure out how to match several substrings that are physically separated from one another in the string (unless trying to do this in one go is just wrong?).</p>
<p>I can extract the table part that I want with some simple regex no problem:</p>
<pre><code>#!/usr/bin/python
import re
import sys  # needed for sys.argv below

hhresult_file = sys.argv[1] # The above file
regex = re.compile(r'\s*\d{1,2}\s\w{4}_\w\s.*') # Will match the whole line (my first shot at the problem)

def main():
    with open(hhresult_file, 'r') as result_fasta:
        lines = result_fasta.readlines()
        for line in lines:
            match = re.search(regex,line)
            if match:
                print(match.group())

if __name__ == '__main__' :
    main()
</code></pre>
<p>But I'm also trying to pull out the columns which read "Hit" "Prob" "E-Value" "P-Value".</p>
<p>I think I can synthesise the required regexes for each individual fields (there are some nuances like the switch between exponentiated SI values and floats for example).</p>
<p>What I don't know how to do is 'disregard' regions of the string? Specifically, I can't get the 'Hit' (= 3izo_F) and then the 'Prob' field because of the hit description in the intervening space.</p>
<p>I was trying to go about it with grouped regexes, but without being physically adjacent it doesn't work (something like these, though there may be errors in them):</p>
<pre><code>regex = re.compile(r'''
(\w{4}_\w) # Match the hit
(\d{1,3}\.\d') # Match the probability score
(\d\.?\d?|\d\.?\d?E-\d\d|\d\.\d*) # E value as float/E-
(\d\.?\d?|\d\.?\d?E-\d\d|\d\.\d*) # Match SI or float P value
(\d+\.\d+) # Match the score
''',re.VERBOSE)
</code></pre>
<p>The file in question:</p>
<pre><code>Query PAU_03380 PAU_03380 hypothetical protein 3919442:3920968 reverse MW:51681
Match_columns 508
No_of_seqs 1 out of 1
Neff 1.0
Searched_HMMs 37488
Date Mon May 23 20:23:54 2016
Command hhsearch -cpu 10 -i /home/wms_joe/PVCs/PVC_operons/prot_all/PAU_03380.faa -d /home/wms_joe/Applications/HHSuite/databases/pdb70/pdb70_hhm.ffdata -B 5 -Z 5 -E 1E-03 -nocons -nopred -nodssp
No Hit Prob E-value P-value Score SS Cols Query HMM Template HMM
1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
2 3izo_F Fiber; pentameric pento 97.6 1.3E-07 3.4E-12 95.6 0.0 156 156-317 210-388 (581)
3 1ocy_A Bacteriophage T4 short 97.6 1.8E-07 4.7E-12 80.4 0.0 85 323-418 10-122 (198)
4 1v1h_A Fibritin, fiber protein 96.1 0.00011 3E-09 60.4 0.0 30 167-198 2-31 (103)
5 1v1h_A Fibritin, fiber protein 95.9 0.00019 5.1E-09 59.1 0.0 10 168-177 41-50 (103)
6 1pdi_A Short tail fiber protei 95.6 0.00041 1.1E-08 63.3 0.0 26 323-348 90-116 (278)
7 2xgf_A Long tail fiber protein 94.1 0.005 1.3E-07 55.1 0.0 31 318-348 22-52 (242)
8 1h6w_A Bacteriophage T4 short 84.7 0.25 6.7E-06 47.1 0.0 27 323-349 255-282 (312)
9 1qiu_A Adenovirus fibre; fibre 79.9 0.54 1.4E-05 44.4 0.0 24 92-115 7-30 (264)
10 3s6x_A Outer capsid protein si 72.0 1.3 3.4E-05 43.6 0.0 69 106-191 44-112 (325)
No 1
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=98.13 E-value=2.7e-09 Score=107.58 Aligned_cols=65 Identities=22% Similarity=0.362 Sum_probs=42.7
Q PAU_03380 93 PLILKDDVLSVDLGSGLTNETNGICVGQGDGITVNTSNVAVKQGNGISVTSSGGVAVKVSANKGLSVD 160 (508)
||-+.++-|.++....|+...+++.+--+++++|+.....++....++++ .+++++++. .||.++
T 3izo_F 104 PLTVTSEALTVAAAAPLMVAGNTLTMQSQAPLTVHDSKLSIATQGPLTVS-EGKLALQTS--GPLTTT 168 (581)
Confidence 55555556666666667777777777777777777776777777777764 566666554 355554
No 2
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=97.60 E-value=1.3e-07 Score=95.57 Aligned_cols=156 Identities=19% Similarity=0.323 Sum_probs=85.6
Q PAU_03380 156 GLSVDSSGVAVKVNTDKGISVDGNGVAVKVNTSKGISVDNTGVAVIANASKGISVDGSGV--------------AVIANT 221 (508)
.|.+..++-.+.+++..|+.|.++.+.+|+ ..++.+++.|- +-.+...|+.++...- .+..+.
T 3izo_F 210 PLHVTDDLNTLTVATGPGVTINNTSLQTKV--TGALGFDSQGN-MQLNVAGGLRIDSQNRRLILDVSYPFDAQNQLNLRL 286 (581)
Confidence 344544434556666667777666655443 23333333221 1111222333332211 234445
</code></pre>
<p>It goes on a bit but is just more of the above 2 alignments.</p>
<p><strong>UPDATE 1</strong></p>
<p>Just to provide an example of what I'd ideally like at the end:</p>
<p>Given the line in the 'short table':</p>
<pre><code> 1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
</code></pre>
<p>I'd like to get either a delimited string, or separate <code>match.group</code> for:</p>
<p>The PDB Hit ID == <code>3izo_F</code></p>
<p>Each of the first 4 metrics (as separate groups ideally, but I could deal with that after the fact) = <code>98.1</code> <code>2.7E-09</code> <code>7.3E-14</code> <code>107.6</code></p>
<p>Such a shame this program doesn't just provide a proper tabular output :(</p>
| 1 | 2016-10-11T17:28:44Z | 40,001,898 | <p>It is possible to use <code>pandas.read_fwf</code> to read the tabular portion, but because your table headers are malformed (i.e. sometimes a space is part of a variable name, as in <code>Query HMM</code>, and sometimes it separates variable names, as in <code>SS</code> and <code>Cols</code>) you are going to have to specify the column widths.</p>
<p>I like to use a template row to do this.</p>
<pre><code>from io import StringIO
yourTemplate= \
"""
---|-------------------------------|----|-------|-------|------|-----|----|---------|--------------|
No Hit Prob E-value P-value Score SS Cols Query HMM Template HMM
1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
2 3izo_F Fiber; pentameric pento 97.6 1.3E-07 3.4E-12 95.6 0.0 156 156-317 210-388 (581)
"""
yourPattern = StringIO(yourTemplate).readlines()[1]
colBreaks = [i for i, ch in enumerate(yourPattern) if ch == '|']
yourWidths = [j-i for i, j in zip( ([0]+colBreaks)[:-1], colBreaks ) ]
</code></pre>
<p>Then we can go back to your file.</p>
<pre><code>yourText= \
"""Neff 1.0
Searched_HMMs 37488
Date Mon May 23 20:23:54 2016
Command hhsearch -cpu 10 -i /home/wms_joe/PVCs/PVC_operons/prot_all/PAU_03380.faa -d /home/wms_joe/Applications/HHSuite/databases/pdb70/pdb70_hhm.ffdata -B 5 -Z 5 -E 1E-03 -nocons -nopred -nodssp
No Hit Prob E-value P-value Score SS Cols Query HMM Template HMM
1 3izo_F Fiber; pentameric pento 98.1 2.7E-09 7.3E-14 107.6 0.0 65 93-160 104-168 (581)
2 3izo_F Fiber; pentameric pento 97.6 1.3E-07 3.4E-12 95.6 0.0 156 156-317 210-388 (581)
3 1ocy_A Bacteriophage T4 short 97.6 1.8E-07 4.7E-12 80.4 0.0 85 323-418 10-122 (198)
4 1v1h_A Fibritin, fiber protein 96.1 0.00011 3E-09 60.4 0.0 30 167-198 2-31 (103)
5 1v1h_A Fibritin, fiber protein 95.9 0.00019 5.1E-09 59.1 0.0 10 168-177 41-50 (103)
6 1pdi_A Short tail fiber protei 95.6 0.00041 1.1E-08 63.3 0.0 26 323-348 90-116 (278)
7 2xgf_A Long tail fiber protein 94.1 0.005 1.3E-07 55.1 0.0 31 318-348 22-52 (242)
8 1h6w_A Bacteriophage T4 short 84.7 0.25 6.7E-06 47.1 0.0 27 323-349 255-282 (312)
9 1qiu_A Adenovirus fibre; fibre 79.9 0.54 1.4E-05 44.4 0.0 24 92-115 7-30 (264)
10 3s6x_A Outer capsid protein si 72.0 1.3 3.4E-05 43.6 0.0 69 106-191 44-112 (325)
No 1
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=98.13 E-value=2.7e-09 Score=107.58 Aligned_cols=65 Identities=22% Similarity=0.362 Sum_probs=42.7
Q PAU_03380 93 PLILKDDVLSVDLGSGLTNETNGICVGQGDGITVNTSNVAVKQGNGISVTSSGGVAVKVSANKGLSVD 160 (508)
||-+.++-|.++....|+...+++.+--+++++|+.....++....++++ .+++++++. .||.++
T 3izo_F 104 PLTVTSEALTVAAAAPLMVAGNTLTMQSQAPLTVHDSKLSIATQGPLTVS-EGKLALQTS--GPLTTT 168 (581)
Confidence 55555556666666667777777777777777777776777777777764 566666554 355554
No 2
>3izo_F Fiber; pentameric penton base, trimeri viral protein; 3.60A {Human adenovirus 5}
Probab=97.60 E-value=1.3e-07 Score=95.57 Aligned_cols=156 Identities=19% Similarity=0.323 Sum_probs=85.6
Q PAU_03380 156 GLSVDSSGVAVKVNTDKGISVDGNGVAVKVNTSKGISVDNTGVAVIANASKGISVDGSGV--------------AVIANT 221 (508)
.|.+..++-.+.+++..|+.|.++.+.+|+ ..++.+++.|- +-.+...|+.++...- .+..+.
T 3izo_F 210 PLHVTDDLNTLTVATGPGVTINNTSLQTKV--TGALGFDSQGN-MQLNVAGGLRIDSQNRRLILDVSYPFDAQNQLNLRL 286 (581)
Confidence 344544434556666667777666655443 23333333221 1111222333332211 234445
"""
</code></pre>
<p>We note that to get to the tabular portion (starting with the header) we need to skip 5 rows, then keep 10 rows.</p>
<pre><code>import pandas as pd
yourData = pd.read_fwf(StringIO(yourText), skiprows=5, nrows=10, header=0, widths = yourWidths)
print(yourData.dtypes)
print(yourData)
</code></pre>
<p>This should give you what you want, in tabular form:</p>
<pre><code>print(yourData.dtypes)
print(yourData)
No int64
Hit object
Prob float64
E-value float64
P-value float64
Score float64
SS float64
Cols int64
Query HMM object
Template HMM object
dtype: object
No Hit Prob E-value P-value \
0 1 3izo_F Fiber; pentameric pento 98.1 2.700000e-09 7.300000e-14
1 2 3izo_F Fiber; pentameric pento 97.6 1.300000e-07 3.400000e-12
2 3 1ocy_A Bacteriophage T4 short 97.6 1.800000e-07 4.700000e-12
3 4 1v1h_A Fibritin, fiber protein 96.1 1.100000e-04 3.000000e-09
4 5 1v1h_A Fibritin, fiber protein 95.9 1.900000e-04 5.100000e-09
5 6 1pdi_A Short tail fiber protei 95.6 4.100000e-04 1.100000e-08
6 7 2xgf_A Long tail fiber protein 94.1 5.000000e-03 1.300000e-07
7 8 1h6w_A Bacteriophage T4 short 84.7 2.500000e-01 6.700000e-06
8 9 1qiu_A Adenovirus fibre; fibre 79.9 5.400000e-01 1.400000e-05
9 10 3s6x_A Outer capsid protein si 72.0 1.300000e+00 3.400000e-05
Score SS Cols Query HMM Template HMM
0 107.6 0.0 65 93-160 104-168 (581)
1 95.6 0.0 156 156-317 210-388 (581)
2 80.4 0.0 85 323-418 10-122 (198)
3 60.4 0.0 30 167-198 2-31 (103)
4 59.1 0.0 10 168-177 41-50 (103)
5 63.3 0.0 26 323-348 90-116 (278)
6 55.1 0.0 31 318-348 22-52 (242)
7 47.1 0.0 27 323-349 255-282 (312)
8 44.4 0.0 24 92-115 7-30 (264)
9 43.6 0.0 69 106-191 44-112 (325)
</code></pre>
<p>The <code>pandas</code> syntax to access these values is quite straightforward, as in <code>yourData.loc[3,'Prob']</code></p>
| 1 | 2016-10-12T14:58:57Z | [
"python",
"regex",
"string"
] |
Can anyone tell me what error msg "line 1182 in parse" means when I'm trying to parse and xml in python | 39,983,159 | <p>This is the code that results in an error message:</p>
<pre><code>import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
</code></pre>
<p>The error:</p>
<p><img src="https://i.stack.imgur.com/eMKS2.png" alt="error msg screenshot"></p>
<p>I'm new to python. I did read documentation and a couple of tutorials, but clearly I still have done something wrong. I don't believe it is the xml file itself because it does this to two different xml files. </p>
| -1 | 2016-10-11T17:30:05Z | 39,983,211 | <p>The error message indicates that your code is trying to open a file whose name is stored in the variable source. </p>
<p>It's failing to open that file (IOError) because the variable source contains a bunch of XML, not a file name. </p>
| 0 | 2016-10-11T17:33:15Z | [
"python",
"xml",
"parsing",
"url",
"elementtree"
] |
Can anyone tell me what error msg "line 1182 in parse" means when I'm trying to parse and xml in python | 39,983,159 | <p>This is the code that results in an error message:</p>
<pre><code>import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
</code></pre>
<p>The error:</p>
<p><img src="https://i.stack.imgur.com/eMKS2.png" alt="error msg screenshot"></p>
<p>I'm new to python. I did read documentation and a couple of tutorials, but clearly I still have done something wrong. I don't believe it is the xml file itself because it does this to two different xml files. </p>
| -1 | 2016-10-11T17:30:05Z | 39,984,349 | <p><code>data</code> is a reference to the XML content as a string, but the <a href="https://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.parse" rel="nofollow"><code>parse()</code></a> function expects a filename or <a href="https://docs.python.org/2/glossary.html#term-file-object" rel="nofollow">file object</a> as argument. That's why there is an error.</p>
<p><code>urlhandle</code> is a file object, so <code>tree = ET.parse(urlhandle)</code> should work for you. </p>
| 0 | 2016-10-11T18:36:53Z | [
"python",
"xml",
"parsing",
"url",
"elementtree"
] |
Can anyone tell me what error msg "line 1182 in parse" means when I'm trying to parse and xml in python | 39,983,159 | <p>This is the code that results in an error message:</p>
<pre><code>import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
</code></pre>
<p>The error:</p>
<p><img src="https://i.stack.imgur.com/eMKS2.png" alt="error msg screenshot"></p>
<p>I'm new to python. I did read documentation and a couple of tutorials, but clearly I still have done something wrong. I don't believe it is the xml file itself because it does this to two different xml files. </p>
| -1 | 2016-10-11T17:30:05Z | 39,984,354 | <p>Consider using ElementTree's <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.fromstring" rel="nofollow"><code>fromstring()</code></a>:</p>
<pre><code>import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
# http://feeds.bbci.co.uk/news/rss.xml?edition=int
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.fromstring(data)
print ET.tostring(tree, encoding='utf8', method='xml')
</code></pre>
| 0 | 2016-10-11T18:37:01Z | [
"python",
"xml",
"parsing",
"url",
"elementtree"
] |
QThread exception management and thread race | 39,983,224 | <p>I have a GUI (PySide) application that uses QThread. I have a signal in my QThread that is emitted when an exception occurs so that I can handle the exception in the main thread. However, the rest of the function starting the thread is still executed. I tried the <code>wait</code> function to block the execution but it does not work. Here is my implementation:</p>
<p><strong>QThread daughter</strong></p>
<pre><code>class LongTaskThread(QtCore.QThread):
    task_finished = QtCore.Signal()
    task_failed = QtCore.Signal(Exception)

    def __init__(self, allow_log=True, test_mode=False, parent=None):
        QtCore.QThread.__init__(self, parent)

    def run(self):
        self.task_failed.emit(Exception())

    def wait_with_gui_refresh(self):
        while self.isRunning():
            time.sleep(0.1)
            if not self.test_mode:
                QtGui.QApplication.processEvents()
</code></pre>
<p><strong>Main thread</strong></p>
<pre><code>def test():
    my_thread = LongTaskThread()
    my_thread.task_finished.connect(on_finished)
    my_thread.task_failed.connect(on_failed)
    my_thread.start()
    # my_thread.wait() <---- tentative 1
    # my_thread.wait_with_gui_refresh() <---- tentative 2
    print('bla bla bla bla')

def on_finished():
    pass

def on_failed(err):
    raise err
</code></pre>
<p>I expected that the <code>print</code> would never be hit, but whether I use the <code>wait</code> function or the <code>wait_with_gui_refresh</code> function, or nothing, the print is always printed.</p>
<p>How do I stop the test function when an exception is raised inside the QThread?</p>
| 0 | 2016-10-11T17:33:54Z | 39,988,499 | <p>In your <code>test</code> function, the sequence of events is this:</p>
<ol>
<li>The thread starts</li>
<li>The thread's <code>run</code> method is called</li>
<li>The <code>task_failed</code> signal is emitted <em>asynchronously</em> (i.e. it's posted to the receiver's event queue)</li>
<li>The thread's <code>run</code> method returns</li>
<li>If the thread's <code>wait</code> method is called here, it will return <code>True</code> immediately because there is nothing to wait for (i.e. <code>run</code> has already returned)</li>
<li>A message is printed, and <code>test</code> returns</li>
<li>Control returns to the event-loop, and the <code>task_failed</code> signal is processed</li>
<li>An exception is raised in <code>on_failed</code></li>
</ol>
<p>It's hard to see anything to object to here. Presumably, you don't want to block the gui whilst the worker thread is running, so it makes perfect sense to process any exceptions asynchronously. But for that to happen, control must return to the event-loop of the main thread - which means the <code>test</code> function <em>must</em> return immediately. If you want to run some code after the thread starts, connect a slot to its <code>started</code> signal.</p>
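<p>For instance, a minimal sketch of that last point, reusing the names from the question (the follow-up code simply moves into a slot):</p>
<pre><code>def test():
    my_thread = LongTaskThread()
    my_thread.task_finished.connect(on_finished)
    my_thread.task_failed.connect(on_failed)
    # run the follow-up code once the thread has actually started,
    # instead of calling it straight after start()
    my_thread.started.connect(on_started)
    my_thread.start()

def on_started():
    print('bla bla bla bla')
</code></pre>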
| 1 | 2016-10-11T23:52:25Z | [
"python",
"multithreading",
"qt",
"exception-handling",
"pyside"
] |
django left join with where clause subexpression | 39,983,237 | <p>I'm currently trying to find a way to do something with Django's (v1.10) ORM that I feel should be possible but I'm struggling to understand how to apply the documented methods to solve my problem.</p>
<p><strong>Edit:</strong> So here's the sql that I've hacked together to return the data that I'd like from the <code>dbshell</code>, with a postgresql database now, after I realised that my original sqlite3 backed sql query was incorrect:</p>
<pre><code>select
voting_bill.*,vv.vote
from
voting_bill
left join
(select
voting_votes.vote,voting_votes.bill_id
from
voting_bill
left join
voting_votes
on
voting_bill.id=voting_votes.bill_id
where
voting_votes.voter_id = (select id from auth_user where username='richard' or username is Null)
)
as
vv
on
voting_bill.id=vv.bill_id;
</code></pre>
<p>Here's the 'models.py' for my voting app:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
class Bill(models.Model):
name = models.CharField(max_length=255)
description = models.TextField()
result = models.BooleanField()
status = models.BooleanField(default=False)
def __str__(self):
return self.name
class Votes(models.Model):
vote = models.NullBooleanField()
bill = models.ForeignKey(Bill, related_name='bill',
on_delete=models.CASCADE,)
voter = models.ForeignKey(User, on_delete=models.CASCADE,)
def __str__(self):
return '{0} {1}'.format(self.bill, self.voter)
</code></pre>
<p>I can see that my sql works as I expect with the vote tacked onto the end, or a null if the user hasn't voted yet. </p>
<p>I was working to have the queryset in this format so that I can iterate over it in the template to produce a table and if the result is null I can instead provide a link which takes the user to another view.</p>
<p>I've read about select_related and prefetch_related, but as I said, I'm struggling to work out how to translate what I can do in SQL into the ORM.</p>
| 1 | 2016-10-11T17:34:55Z | 39,983,720 | <p>Hope I correctly understood your problem. Try this:</p>
<pre><code>votes = Votes.objects.filter(voter__username='django').select_related('bill')
</code></pre>
<p>You can use this. But I think you do not need <code>select_related</code> in this case.</p>
<pre><code>bills_for_user = Bill.objects.filter(votes__voter__username='django').select_related('votes').distinct()
</code></pre>
<p>Now you can iterate your bills_for_user</p>
<pre><code>for bill in bills_for_user:
bill_name = bill.name
bill_description = bill.description
bill_result = bill.result
bill_status = bill.status
# and there are several variants of what you can do with votes
bill_votes = bill.votes_set.all() # will return you all votes for this bill
bill_first_vote1 = bill.votes_set.first() # will return first element in this query or None if its empty
bill_first_vote2 = bill.votes_set.all()[0] # will return first element in this query or Error if its empty
bill_last_vote = bill.votes_set.last() # will return last element in this query or None if it's empty
# you can also filter it for example by voting
bill_positive_votes = bill.votes_set.filter(vote=True) # will return you all votes for this bill with 'vote' = True
bill_negative_votes = bill.votes_set.filter(vote=False) # will return you all votes for this bill with 'vote' = False
bill_neutral_votes = bill.votes_set.filter(vote=None) # will return you all votes for this bill with 'vote' = None
</code></pre>
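<p>One caveat with the models as posted in the question: because the <code>bill</code> foreign key declares <code>related_name='bill'</code>, the reverse accessor from <code>Bill</code> is <code>bill.bill</code> rather than <code>bill.votes_set</code> (and the lookup becomes <code>bill__voter__username</code>). Either drop or rename that <code>related_name</code>, or adjust the lookups above accordingly.</p>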
| 0 | 2016-10-11T18:02:35Z | [
"python",
"django",
"join",
"django-queryset",
"where-clause"
] |
Extracting value of an element using Selenium | 39,983,324 | <p>I'm using Python 3 and I need help with extracting the value of an element in a HTML code. The relevant part of the webpage code looks like this:</p>
<pre><code><span class="ng-isolate-scope" star-rating="4.61" size="22">
</code></pre>
<p>I'm currently using Selenium and the get_attribute function, but I have not been able to extract the 4.61 value. Since I have to loop over several webpages, the relevant part of my code looks like this:</p>
<pre><code>stars=[]
i=driver.find_elements_by_xpath("//*[@star-rating]")
for y in i:
temp=str(y.get_attribute("value"))
stars.append(temp)
</code></pre>
<p>but it is not working as I would expect. Could you help me in terms of what I'm doing wrong here? Thanks a lot for your time! </p>
| -1 | 2016-10-11T17:39:57Z | 39,983,385 | <p>Get the <code>star-rating</code> attribute instead of a <code>value</code>:</p>
<pre><code>temp = y.get_attribute("star-rating"))
</code></pre>
<p>Note that you don't have to call <code>str()</code> on the result of <code>get_attribute()</code> - you'll get the attribute value as a string.</p>
<p>You can also improve the code and collect the ratings in a single line using a list comprehension:</p>
<pre><code>stars = [elm.get_attribute("star-rating")
for elm in driver.find_elements_by_xpath("//*[@star-rating]")]
</code></pre>
<p>And, if you need the ratings as floats, call <code>float()</code>:</p>
<pre><code>stars = [float(elm.get_attribute("star-rating"))
for elm in driver.find_elements_by_xpath("//*[@star-rating]")]
</code></pre>
<hr>
<p>And, it would be a little bit more concise with a <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.find_elements_by_css_selector" rel="nofollow">CSS selector</a>:</p>
<pre><code>stars = [float(elm.get_attribute("star-rating"))
for elm in driver.find_elements_by_css_selector("[star-rating]")]
</code></pre>
| 2 | 2016-10-11T17:43:12Z | [
"python",
"selenium",
"getattribute"
] |
reading from csv file in shell scripting | 39,983,349 | <p>I am running a script run.sh.
The script is executed as follows. $./run.sh read.csv
The contents of the script are as follows.</p>
<pre><code> tail -n +2 $1 | while IFS="," read -r A B C D E F;
do
python test.py ${A} ${B} ${C} ${D} ${E} ${F}
done
</code></pre>
<p>My question is "If i need to pass in additional command line arguments along with read.csv from the terminal like this (for Ex: <code>$./run.sh name sex DOB read.csv</code>) how do i modify the code so that it works fine. </p>
<p>Because if i pass any other command line arguments along with the file name(read.csv) i am getting access errors to the file read.csv</p>
| 0 | 2016-10-11T17:41:06Z | 39,983,406 | <p>Positional parameters are what you are after. This is how you can do it:</p>
<pre><code>tail -n +2 $4 | while IFS="," read -r A B C D E F;##note now you would pass $4 to tail command which is your file name
do
python test.py ${A} ${B} ${C} ${D} ${E} ${F}
done
</code></pre>
<p>You could access those values as name in <code>$1</code>, sex in <code>$2</code>, DOB in <code>$3</code> and read.csv in <code>$4</code>.</p>
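<p>If the extra arguments also need to reach the Python script, they can simply be forwarded inside the same loop, e.g. <code>python test.py "$1" "$2" "$3" ${A} ${B} ${C} ${D} ${E} ${F}</code> (assuming test.py is written to accept the additional values).</p>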
| 1 | 2016-10-11T17:44:31Z | [
"python",
"bash",
"shell",
"csv"
] |
Looking for an efficient way to combine lines in Python | 39,983,405 | <p>I'm writing a program to aggregate strace output lines on a Linux host. When strace runs with the "-f" option it will intermix system calls line so:</p>
<pre><code>close(255 <unfinished ...>
<... rt_sigprocmask resumed> NULL, 8) = 0
<... close resumed> ) = 0
[pid 19199] close(255 <unfinished ...>
[pid 19198] <... rt_sigprocmask resumed> NULL, 8) = 0
[pid 19199] <... close resumed> ) = 0
</code></pre>
<p>I would like to iterate through the output and combine "unfinished" lines with "resumed" lines. So in the output above the following two lines:</p>
<pre><code>close(255 <unfinished ...>
.....
<... close resumed> ) = 0
</code></pre>
<p>Would be combined into:</p>
<pre><code>close(255) = 0
</code></pre>
<p>I was thinking about splitting the "unfinished" lines at ">" and putting that into a list. If a future line contained resume I would iterate through this list to see if the system call and pid are present. If they are I would split() the line at ">" and combine the two. Curious if there is a better way to do this?</p>
<p><strong>* Update *</strong></p>
<p>Thanks for the awesome feedback! I came up with the following and would love to get your thoughts on the code:</p>
<pre><code>holding_cell = list()
if len(sys.argv) > 1:
strace_file = open(sys.argv[1], "r")
else:
strace_file = sys.stdin
for line in strace_file.read().splitlines():
if "clone" in line:
print line
if "unfinished" in line:
holding_cell.append(line.split("<")[0])
elif "resumed" in line:
# Get the name of the system call / pid so we can try
# to match this line w/ one in the buffer
identifier = line.split()[1]
for cell in holding_cell:
if identifier in cell:
print cell + line.split(">")[1]
holding_cell.remove(cell)
else:
print line
</code></pre>
<p>Is there a more pythonic way to write this? Thanks again for the awesome feedback!</p>
| 0 | 2016-10-11T17:44:29Z | 39,983,924 | <p>Some iterators such as file objects can be nested. Assuming you are reading this from a file-like object, you could just create an inner loop to do the combining. I'm not sure what the formatting rules for <code>strace</code> logs are, but nominally, it could be something like</p>
<pre><code>def get_logs(filename):
    with open(filename) as log:
        for line in log:
            if "<unfinished " in line:
                preamble = line.split(' ', 1)[0].strip()
                for line in log:
                    if " resumed>" in line:
                        yield "{}) = {}\n".format(preamble,
                                                  line.split('=')[-1].strip())
                        break
            else:
                yield line
</code></pre>
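<p>A possible way to drive the generator (the file name here is just a placeholder):</p>
<pre><code>for entry in get_logs('strace.log'):
    print(entry.rstrip())
</code></pre>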
| 0 | 2016-10-11T18:14:01Z | [
"python"
] |
Transform a base 64 encoded string into a downloadable pdf using django rest framework | 39,983,431 | <p>I have a django/python3 application that requests the Limesurvey API and gets a base 64 encoded string as result.
I'd like to return this result as a downloadable pdf file.</p>
<p>Here's my current implementation, which simply displays the base 64 string on a blank page...</p>
<pre><code> data = limesurvey.export_responses_by_token(survey_id, token)
response = HttpResponse(data, content_type='application/pdf')
return StreamingHttpResponse(response)
</code></pre>
<p>Any help would be very appreciated!</p>
| 0 | 2016-10-11T17:45:40Z | 39,983,864 | <p>Three steps:</p>
<h2>Dump your base64 content into StringIO</h2>
<pre><code>import cStringIO as StringIO
buffer = StringIO.StringIO()
buffer.write(content.decode('base64'))
</code></pre>
<h2>Send response with proper header</h2>
<pre><code>from django.http import HttpResponse
from wsgiref.util import FileWrapper
# generate the file
buffer.seek(0)  # rewind so the response streams from the start of the buffer
response = HttpResponse(FileWrapper(buffer), content_type='application/zip')
response['Content-Disposition'] = 'attachment; filename=MY_FILE_NAME.zip'
return response
</code></pre>
<h2>Configure your server</h2>
<p>Beyond the scope of django. </p>
<p>e.g. for nginx, refer to <a href="https://tn123.org/mod_xsendfile/" rel="nofollow">this link</a></p>
<h2>Update</h2>
<p>After testing your content in some <a href="https://www.base64decode.org/" rel="nofollow">online converter</a>, I'm sure that's base64 stuff.</p>
<p>However, the reason why it doesn't work remains unknown until further information is provided.</p>
<p>My mock snippet is like this.</p>
<pre><code>>>> test_str = 'test'
>>> base_64 = test_str.encode('base64')
>>> base_64.decode('base64')
'test'
>>> base_64
'dGVzdA==\n'
</code></pre>
| 2 | 2016-10-11T18:10:39Z | [
"python",
"django",
"python-3.x",
"django-rest-framework"
] |
Exchanging out specific lines of a file | 39,983,500 | <p>I don't know if this should be obvious to the more tech savvy among us but is there a specific way to read a line out of a text file, then edit it and insert it back into the file in the original location? I have looked on the site but all the solutions I find seem to be for python 2.7. </p>
<p>Below is an example of what I am looking for:</p>
<pre><code> with open example.txt as file:
for line in file:
if myline in file:
file.edit("foo","fah")
</code></pre>
| 0 | 2016-10-11T17:49:36Z | 39,983,747 | <p>In 95% cases, replacing data (e.g. text) in a file usually means</p>
<ol>
<li>Read the file in chunks, e.g. line-by-line</li>
<li>Edit the chunk</li>
<li>Write the edited chunk to a new file</li>
<li>Replace the old file with a new file.</li>
</ol>
<p>So, a simple code will be:</p>
<pre><code>import os

with open(in_path, 'r') as fin:
    with open(temp_path, 'w') as fout:
        for line in fin:
            line = line.replace('foo', 'fah')
            fout.write(line)

os.rename(temp_path, in_path)
</code></pre>
<p>Why not in-place replacements? Well, a file is a fixed sequence of bytes, and the only way to grow it - is to append to the end of file. Now, if you want to replace the data of the same length - no problems. However if the original and new sequences' lengths differ - there is a trouble: a new sequence will be overwriting the following characters. E.g. </p>
<pre><code>original: abc hello abc world
replace abc -> 12345
result: 12345ello 12345orld
</code></pre>
| 1 | 2016-10-11T18:04:09Z | [
"python",
"file",
"python-3.x"
] |
Exchanging out specific lines of a file | 39,983,500 | <p>I don't know if this should be obvious to the more tech savvy among us but is there a specific way to read a line out of a text file, then edit it and insert it back into the file in the original location? I have looked on the site but all the solutions I find seem to be for python 2.7. </p>
<p>Below is an example of what I am looking for:</p>
<pre><code> with open example.txt as file:
for line in file:
if myline in file:
file.edit("foo","fah")
</code></pre>
| 0 | 2016-10-11T17:49:36Z | 39,983,772 | <p>You could do this by using <code>fileinput</code> with <code>inplace</code> passed. You still cannot just do a one-line edit, you must conditionally change the line you need and leave everything else as is.</p>
<p>In short with a file looking like:</p>
<pre><code>$ head example.txt
pass
pass
foo
pass
</code></pre>
<p>You could do:</p>
<pre><code>import fileinput
with fileinput.input('example.txt', inplace=True) as f:
for line in f:
print('fah' if 'foo' in line else line.strip())
</code></pre>
<p>For a result of:</p>
<pre><code>$ head example.txt
pass
pass
fah
pass
</code></pre>
| 0 | 2016-10-11T18:05:48Z | [
"python",
"file",
"python-3.x"
] |
Why does my python tkinter GUI not show images when imported into another script? | 39,983,508 | <p>I am creating a GUI for a script. I first developed an empty GUI (code shown below). The images were not showing up but then I googled and realized references to the image were being garbage collected and fixed it according to this link (<a href="http://effbot.org/pyfaq/why-do-my-tkinter-images-not-appear.htm" rel="nofollow">http://effbot.org/pyfaq/why-do-my-tkinter-images-not-appear.htm</a>). I then made the image a global variable within the GUI script and that also worked.</p>
<p>This file is called GUI.py:</p>
<pre><code>import Tkinter as tk
import ttk
import tkMessageBox
import time
from PIL import ImageTk, Image
class StartPage(tk.Frame):
def __init__(self, master, text, height, width, *args, **kwargs):
global logo
tk.Frame.__init__(self, *args, borderwidth=20, **kwargs)
self.height = height
self.width = width
# path = "test.jpg"
# self.img = ImageTk.PhotoImage(Image.open(path))
logo = ImageTk.PhotoImage(Image.open('test.jpg'))
self.picture = tk.Label(self, image=logo)
#self.picture.image = img
self.picture.pack(side = "bottom", fill = "both", expand = "yes")
label = tk.Label(self, text='Waiting', font=("Helvetica bold", 24)).pack()
label = tk.Label(self, text='Click START Button', font=("Helvetica", 16)).pack(expand=True)
button = tk.Button(self, text=text, font=('Helvetica', 20),
command=lambda: self.callback())
button.pack(side="top", expand=True)
root.update()
def onlift(self):
root.geometry('{}x{}'.format(self.width, self.height))
self.lift()
class TestPage(tk.Frame):
def __init__(self, master, text, height, width, *args, **kwargs):
tk.Frame.__init__(self, *args, borderwidth=20, **kwargs)
self.height = height
self.width = width
self.state = tk.StringVar()
self.label = tk.Label(self, textvariable=self.state, font=("Helvetica", 16)).pack()
self.progress = ttk.Progressbar(self, orient='horizontal', length=1000, mode='determinate')
self.progress.pack()
path = 'connect.jpg'
img = ImageTk.PhotoImage(Image.open(path))
self.picture = tk.Label(self, text='test image', image=img)
self.picture.image = img
self.picture.pack()
root.update()
def onlift(self):
global p1
root.geometry('{}x{}'.format(self.width, self.height))
self.lift()
self.progress["value"] = 0
self.state.set('Running...')
root.update()
time.sleep(1)
self.progress["value"] = 50
root.update()
confirm = tkMessageBox.askyesno(message='ON?', icon='question', title='Confirmation')
#print confirm
if confirm:
print 'Confirmed'
self.progress["value"] = 100
self.state.set('PASSED!')
root.update()
tkMessageBox.showinfo(title='Test Passed',message='PASS')
self.label
else:
self.state.set('Test FAILED!')
root.update()
tkMessageBox.showinfo(title='Test Failed',message='FAIL',icon='warning')
p1.onlift()
class App(tk.Frame):
def __init__(self, *args, **kwargs):
global p1
tk.Frame.__init__(self, *args, **kwargs)
p1 = StartPage(self, 'START', height=root.winfo_screenheight(), width=root.winfo_screenwidth())
p2 = TestPage(self, 'blank', height=root.winfo_screenheight(), width=root.winfo_screenwidth())
p1.callback = p2.onlift
p2.callback = p1.onlift
p1.place(x=0, y=0, relwidth=1, relheight=1)
p2.place(x=0, y=0, relwidth=1, relheight=1)
p1.onlift()
global p1
global p2
global logo
global connectimg
root = tk.Tk()
root.title('GUI')
app = App(root)
root.update()
</code></pre>
<p>My problem now is, when I import this GUI.py into my actual test script I lose the images again. I have tried loading the images in that script and passing the reference to the GUI but that has not worked.</p>
<pre><code>import GUI
</code></pre>
<p>This is how I change the label text and progress bar value from my script:</p>
<pre><code>GUI.p2.state.set('Running')
GUI.p2.progress["value"] = 50
GUI.root.update()
</code></pre>
<p>If I run the GUI with a mainloop() on its own the images show up fine. When I run it as an imported module from the second script it does not display the images. What am I doing wrong?</p>
| 0 | 2016-10-11T17:50:01Z | 39,984,853 | <p>Where are your picture files? Did you make sure to put the images in your project folder? Make sure your path isn't different.</p>
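<p>One thing worth checking, since the images load when GUI.py runs on its own but not when it is imported: relative paths such as <code>'test.jpg'</code> are resolved against the current working directory of whichever script you launch, which changes when GUI.py is imported from elsewhere. A small sketch that resolves the paths against GUI.py itself instead (assuming the image files sit next to that module):</p>
<pre><code>import os

IMAGE_DIR = os.path.dirname(os.path.abspath(__file__))

logo = ImageTk.PhotoImage(Image.open(os.path.join(IMAGE_DIR, 'test.jpg')))
</code></pre>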
| -1 | 2016-10-11T19:06:20Z | [
"python",
"user-interface",
"tkinter",
"module",
"pillow"
] |
Long path to Python script when executing it from c# | 39,983,544 | <p>I am trying to run Python script from C# program. I use official documentation from Microsoft: <a href="https://code.msdn.microsoft.com/windowsdesktop/C-and-Python-interprocess-171378ee" rel="nofollow">https://code.msdn.microsoft.com/windowsdesktop/C-and-Python-interprocess-171378ee</a> When I pass short file path to my Python script as command argument it works fine. But when I enter long path to the same Python script, process runs, but script does not execute. Whats wrong? Here is the code I use:</p>
<pre><code>using System;
using System.IO;
using System.Diagnostics;
namespace CallPython
{
class Program
{
static void Main(string[] args)
{
// full path of python interpreter
string python = @"C:\Anaconda2\python.exe";
// This path will work
string myPythonApp = @"C:\MyPython\helloworld.py";
// This path will cause program to fail, nothing response
string myPythonApp = "C:\\Users\\My Name\\Documents\\Visual Studio 2015\\Projects\\My Project Name\\helloworld.py";
// Create new process start info
ProcessStartInfo myProcessStartInfo = new ProcessStartInfo(python);
// make sure we can read the output from stdout
myProcessStartInfo.UseShellExecute = false;
myProcessStartInfo.RedirectStandardOutput = true;
myProcessStartInfo.Arguments = myPythonApp;
Process myProcess = new Process();
// assign start information to the process
myProcess.StartInfo = myProcessStartInfo;
Debug.WriteLine("Calling Python script: " + myPythonApp);
// start the process
myProcess.Start();
StreamReader myStreamReader = myProcess.StandardOutput;
string myString = myStreamReader.ReadLine();
myProcess.WaitForExit();
myProcess.Close();
// write the output we got from python app
Debug.WriteLine("Value received from script: " + myString);
}
}
}
</code></pre>
| 0 | 2016-10-11T17:52:20Z | 39,984,320 | <p>This error can be caused if your path is spelled wrong. For creating the path dynamically you can try the following:</p>
<ol>
<li><p>You can try:</p>
<pre><code>Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments).
</code></pre>
<p>This will return the path to the documents folder
<em>(the path will also be generated correctly for other users of the program)</em>.
So you can leave this part of the path ("C:\Users\My Name\Documents\") to that call. From here you can continue to build the path to your script.</p></li>
<li><p>you can try:</p>
<pre><code>string path = "script.py";
</code></pre>
<p>If you have this as path it will read the file from here</p></li>
</ol>
<p>"C:\Users\My Name\Documents\Visual Studio 2015\Projects\My Project Name\My Project Name\bin\Debug\<strong>script.py</strong></p>
<p><em>Or where your executable is located.</em></p>
<ol start="3">
<li>you can use Directory.GetParent() and Directory.GetCurrentDirectory() to get the current path, move to the parent directory and work back to the script where you've stored it.</li>
</ol>
<blockquote>
<p>These options will make it easier to create the path dynamically.</p>
<p>Take a look at this page for the Directory class, which has some nice features:
<a href="http://msdn.microsoft.com/en-us/library/system.io.directory.aspx" rel="nofollow">http://msdn.microsoft.com/en-us/library/system.io.directory.aspx</a></p>
<p>Or the Environment class, which has some nice features to generate paths:
<a href="https://msdn.microsoft.com/en-us/library/system.environment(v=vs.110).aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/system.environment(v=vs.110).aspx</a></p>
</blockquote>
| 0 | 2016-10-11T18:35:02Z | [
"c#",
"python"
] |
run two python files in parallel from terminal | 39,983,585 | <p>If I have two python programs called <em>test1.py</em> and <em>test2.py</em>, How can I run them in parallel in terminal?
Does </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>python test1.py|python test2.py</code></pre>
</div>
</div>
</p>
<p>do that?</p>
| 0 | 2016-10-11T17:54:31Z | 39,983,617 | <p>No, this will pipe the output from test1.py into test2.py.</p>
<p>Use this instead: <code>python test1.py & python test2.py &</code></p>
<p>The <code>&</code> will fork the command into its own process.</p>
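<p>If the shell should block until both scripts finish, you can append <code>wait</code>: <code>python test1.py & python test2.py & wait</code>.</p>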
| 5 | 2016-10-11T17:56:38Z | [
"python",
"multithreading"
] |
How to restore tensorflow inceptions checkpoint file (ckpt)? | 39,983,591 | <p>I have <code>inception_resnet_v2_2016_08_30.ckpt</code> file which is a pre-trained inception model. I want to restore this model using</p>
<p><code>saver.restore(sess, ckpt_filename)</code></p>
<p>But for that, I will be required to write the set of variables that were used while training this model. Where can I find those (a script, or detailed description)?</p>
| 1 | 2016-10-11T17:55:04Z | 39,989,069 | <p>I believe the <a href="https://www.tensorflow.org/versions/master/how_tos/meta_graph/index.html" rel="nofollow"><code>MetaGraph</code> mechanism</a> is what you need.</p>
<p>EDIT: additionally, take a look at <code>tf.train.NewCheckpointReader</code> -- it has a <code>get_variable_to_shape_map()</code> method. See <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/saver_test.py#L1630" rel="nofollow">unit test</a>.</p>
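<p>A minimal sketch of the latter, using the checkpoint file named in the question:</p>
<pre><code>import tensorflow as tf

reader = tf.train.NewCheckpointReader('inception_resnet_v2_2016_08_30.ckpt')

# maps every variable name stored in the checkpoint to its shape
var_to_shape_map = reader.get_variable_to_shape_map()
for name in sorted(var_to_shape_map):
    print(name, var_to_shape_map[name])
</code></pre>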
| 0 | 2016-10-12T01:15:14Z | [
"python",
"tensorflow"
] |
Capturing WindowsError in template | 39,983,598 | <p>I'm coding on a Windows machine but I'm running my production site on Linux.</p>
<p>When trying to reach a page on my development machine using a database copied from the production system, I get errors while trying to list files if these files don't exist locally.
This is as expected, since I just copied the DB and not the files. I don't want/need the files, but I don't want following error either:</p>
<blockquote>
<p>WindowsError at /126/documents/ [Error 3] The system cannot find the
path specified:
u'C:\mysite\media\documents\2016\07\26\myfile.docx'</p>
</blockquote>
<p>Instead of throwing the error, I'd prefer to handle this in my template, something like:</p>
<pre><code>{% if doc.data %}{{ doc.data.size | filesizeformat }}{% else %}File not found{% endif %}
</code></pre>
<p>However that doesn't work. <code>doc.data</code> <em>does</em> exist, since the DB knows a value for this file location. But the file isn't available on the disk.</p>
<p>Any way to catch this properly, preferably in the template?</p>
<p>My model:</p>
<pre><code>class Document(models.Model):
data = models.FileField(upload_to="documents/%Y/%m/%d")
</code></pre>
| 0 | 2016-10-11T17:55:26Z | 39,983,781 | <p>Not the best answer, but a workaround can be implemented as follows in <code>views.py</code> (not in the template):</p>
<pre><code>documents = Document.objects.all()
if documents:
exclude_list = []
for doc in documents:
try:
local_file = doc.data.file
except IOError:
exclude_list.append(doc.id)
documents = documents.exclude(id__in=exclude_list)
context_dict['documents'] = documents
</code></pre>
<p>This way the document is not displayed in the template if there is no valid local file found.</p>
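<p>If you would rather keep the check in the template, one sketch (assuming you don't mind adding a small helper to the model) is a property that only reports the file when it really exists on disk:</p>
<pre><code>import os

class Document(models.Model):
    data = models.FileField(upload_to="documents/%Y/%m/%d")

    @property
    def file_exists(self):
        # True only when the path stored in the DB points at a real local file
        return bool(self.data) and os.path.isfile(self.data.path)
</code></pre>
<p>which the template can then use as <code>{% if doc.file_exists %}{{ doc.data.size|filesizeformat }}{% else %}File not found{% endif %}</code>.</p>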
| 0 | 2016-10-11T18:06:07Z | [
"python",
"django"
] |
Convert to Dictionary Values that Contains a Set to a List | 39,983,611 | <p>I am trying to convert a set inside a list like</p>
<pre><code>x = [set(['Halo', 'Bye'])]
</code></pre>
<p>into a list:</p>
<pre><code>['Halo', 'Bye']
</code></pre>
<p>However when I typed list(x), the result still shows</p>
<pre><code>[set(['Halo', 'Bye'])]
</code></pre>
<p>Is there a way to do this?</p>
<p>I have been looking at various Stackoverflow resources like <a href="http://stackoverflow.com/questions/6593979/how-to-convert-a-set-to-a-list-in-python">this</a> and <a href="http://stackoverflow.com/questions/6828722/python-set-to-list">this</a> for a solution but nothing works.</p>
| -1 | 2016-10-11T17:56:01Z | 39,983,642 | <p><code>x</code> is a list already, but the set you that are trying to convert is an element of the list <code>x</code>.<br>So, do:</p>
<pre><code>print (list(x[0]))
</code></pre>
<p>instead of just <code>list(x)</code> as the set is the first and the only element in the list <code>x</code>.</p>
| 2 | 2016-10-11T17:57:57Z | [
"python",
"list",
"set"
] |
Convert to Dictionary Values that Contains a Set to a List | 39,983,611 | <p>I am trying to convert a set inside a list like</p>
<pre><code>x = [set(['Halo', 'Bye'])]
</code></pre>
<p>into a list:</p>
<pre><code>['Halo', 'Bye']
</code></pre>
<p>However when I typed list(x), the result still shows</p>
<pre><code>[set(['Halo', 'Bye'])]
</code></pre>
<p>Is there a way to do this?</p>
<p>I have been looking at various Stackoverflow resources like <a href="http://stackoverflow.com/questions/6593979/how-to-convert-a-set-to-a-list-in-python">this</a> and <a href="http://stackoverflow.com/questions/6828722/python-set-to-list">this</a> for a solution but nothing works.</p>
| -1 | 2016-10-11T17:56:01Z | 39,983,723 | <pre><code>[item for set_ in x for item in set_]
</code></pre>
<p>This will flatten a list of sets (or a list of lists) into just a list</p>
| 0 | 2016-10-11T18:02:44Z | [
"python",
"list",
"set"
] |
How to grab all lines after a match in Python | 39,983,679 | <p>I am very new to Python and I saw a post here:
<a href="http://stackoverflow.com/questions/4595197/how-to-grab-the-lines-after-a-matched-line-in-python">How to grab the lines AFTER a matched line in python</a></p>
<p>I have a file that has data in the following format:
DEFINE JOBPARM ID=(ns_ppprtg_notify,nj_pprtd028_notify,0028) </p>
<pre><code>SUBFILE='/sybods/prod/ca_scripts/ca_notify.ksh' SUBUSER=pbods SUBPASS=*PASSWORD*
</code></pre>
<p>I am attempting to extract each parameter so that I can construct the full command line.</p>
<p>Here is what I have so far:</p>
<pre><code>with open(myExtract) as Extract:
for line in Extract:
currentLine = line.strip("\n").strip()
if currentLine.startswith("DEFINE JOBPARM ID=") or \
currentLine.contains("PARM")):
logger.info("Job Parameter line found")
if "DEFINE JOBPARM ID=" in currentLine:
tJobParmString = currentLine.partition("DEFINE JOBPARM ID=")[2].parition("SUBFILE")[0].strip()
tSubFileString = currentLine.partition("SUBFILE")[2].partition("SUBUSER")[0].strip()
tSubUserString = currentLine.partition("SUBUSER")[2].partition("SUBPASS")[0].strip()
if re.match(" PARM", currentLine, flags=0):
logger.info("PARM line found")
</code></pre>
<p>This is where I am stuck...
I don't know for sure that there will be 2 PARM lines or 10. They all start with " PARM" and have numbers. How do I extract the values for PARM1, PARM2, PARM3 etc?
Each of the PARM lines (PARM1, PARM2) always starts on a new line.</p>
<p>Any pointers would be appreciated.</p>
| 1 | 2016-10-11T18:00:09Z | 39,984,432 | <p>Here you go mate. Here's my attempt at your problem. I've given you three options in the data structure layout; <code>outDict</code> which maps the PARMs to the strings; <code>outListOfTups</code> which gives you the same data as a list of tuples; and <code>outList</code> which assumes you're not interested in the PARMS:</p>
<pre><code>f = open("data.txt", 'r')
lines = f.read().splitlines()
outDict = {}
outListOfTups = []
outList = []
for line in lines:
if line.startswith('PARM'):
splitPoint = line.find("'")
name = line[:splitPoint-1]
string = line[splitPoint:].replace("'", "")
outDict[name] = string #if you want to work with dict
outListOfTups.append((name, string)) #if you want lists
outList.append(string)
print(outDict)
print(outList)
print(outListOfTups)
</code></pre>
<p>Let me know if I've misunderstood where you're coming from - this works for me on the input sample given.</p>
| 0 | 2016-10-11T18:41:25Z | [
"python"
] |
How to grab all lines after a match in Python | 39,983,679 | <p>I am very new to Python and I saw a post here:
<a href="http://stackoverflow.com/questions/4595197/how-to-grab-the-lines-after-a-matched-line-in-python">How to grab the lines AFTER a matched line in python</a></p>
<p>I have a file that has data in the following format:
DEFINE JOBPARM ID=(ns_ppprtg_notify,nj_pprtd028_notify,0028) </p>
<pre><code>SUBFILE='/sybods/prod/ca_scripts/ca_notify.ksh' SUBUSER=pbods SUBPASS=*PASSWORD*
</code></pre>
<p>I am attempting to extract each parameter so that I can construct the full command line.</p>
<p>Here is what I have so far:</p>
<pre><code>with open(myExtract) as Extract:
for line in Extract:
currentLine = line.strip("\n").strip()
if currentLine.startswith("DEFINE JOBPARM ID=") or \
currentLine.contains("PARM")):
logger.info("Job Parameter line found")
if "DEFINE JOBPARM ID=" in currentLine:
tJobParmString = currentLine.partition("DEFINE JOBPARM ID=")[2].parition("SUBFILE")[0].strip()
tSubFileString = currentLine.partition("SUBFILE")[2].partition("SUBUSER")[0].strip()
tSubUserString = currentLine.partition("SUBUSER")[2].partition("SUBPASS")[0].strip()
if re.match(" PARM", currentLine, flags=0):
logger.info("PARM line found")
</code></pre>
<p>This is where I am stuck...
I don't know for sure that there will be 2 PARM lines or 10. They all start with " PARM" and have numbers. How do I extract the values for PARM1, PARM2, PARM3 etc?
Each of the PARM lines (PARM1, PARM2) always starts on a new line.</p>
<p>Any pointers would be appreciated.</p>
| 1 | 2016-10-11T18:00:09Z | 39,984,572 | <pre><code>if("DEFINE JOBPARM ID=" in currentLine):
tJobParmString=currentLine.partition("DEFINE JOBPARM ID=")[2].partition("SUBFILE")[0].strip()
tSubFileString=currentLine.partition("SUBFILE")[2].partition("SUBUSER")[0].strip()
tSubUserString=currentLine.partition("SUBUSER")[2].partition("SUBPASS")[0].strip()
if(re.match("PARM",currentLine,flags=0)):
logger.info("PARM line found")
tParmString=currentLine.partition("=")[2].strip()
logger.info(tParmString)
if(not "PARM" in currentLine):
continue
</code></pre>
| 0 | 2016-10-11T18:49:29Z | [
"python"
] |
What is Truthy and Falsy in python ? How is it different from True and False? | 39,983,695 | <p>I just came to know there are <strong>Truthy</strong> and <strong>Falsy</strong> values in python which are different from the normal <code>True</code> and <code>False</code>?</p>
<p>Can someone please explain in depth what <em>truthy</em> and <em>falsy</em> values are? </p>
<p>Where should I use them?</p>
<p>What is the difference between <em>truthy</em> and <code>True</code> values and <em>falsy</em> and <code>False</code> values ?</p>
| 0 | 2016-10-11T18:00:55Z | 39,983,806 | <p>As the comments described, it just refers to values which are evaluated to True or False.</p>
<p>For instance, to see if a list is not empty, instead of checking like this:</p>
<pre><code>if len(my_list) != 0:
print "Not empty!"
</code></pre>
<p>You can simply do this:</p>
<pre><code>if my_list:
print "Not empty!"
</code></pre>
<p>This is because some values, such as empty lists, are considered False when evaluated for a boolean value. Non-empty lists are True.</p>
<p>Similarly, the integer 0, the empty string "", and so on are considered False, while non-zero integers, non-empty strings, and so on are considered True.</p>
<p>The idea of terms like "truthy" and "falsy" simply refer to those values which are considered True in cases like those described above, and those which are considered False.</p>
<p>For example, an empty list (<code>[]</code>) is considered "falsy", and a non-empty list (for example, <code>[1]</code>) is considered "truthy".</p>
| 1 | 2016-10-11T18:07:49Z | [
"python"
] |
What is Truthy and Falsy in python ? How is it different from True and False? | 39,983,695 | <p>I just came to know there are <strong>Truthy</strong> and <strong>Falsy</strong> values in python which are different from the normal <code>True</code> and <code>False</code>?</p>
<p>Can someone please explain in depth what <em>truthy</em> and <em>falsy</em> values are? </p>
<p>Where should I use them?</p>
<p>What is the difference between <em>truthy</em> and <code>True</code> values and <em>falsy</em> and <code>False</code> values ?</p>
| 0 | 2016-10-11T18:00:55Z | 39,984,041 | <p>In Python 2.7.x there is a special method on certain objects called <code>__nonzero__</code> which returns a boolean value. When defined, it allows you to call things like you would a boolean value:</p>
<pre><code>class Foo(object):
def __init__(self, x):
self.x = bool(x)
def __nonzero__(self):
return self.x
</code></pre>
<p>If you were to type this in the command line:</p>
<pre><code>a = Foo(True)
b = Foo(False)
def test_it(foo):
if foo:
print('It is truthy')
</code></pre>
<p>you'd see that <code>a</code> is "truthy" while <code>b</code> is not.</p>
<p>In Python 3.x you'd use the special method <code>__bool__</code>.</p>
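<p>A rough Python 3 equivalent of the class above:</p>
<pre><code>class Foo(object):
    def __init__(self, x):
        self.x = bool(x)

    def __bool__(self):
        return self.x
</code></pre>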
| 0 | 2016-10-11T18:20:27Z | [
"python"
] |
What is Truthy and Falsy in python ? How is it different from True and False? | 39,983,695 | <p>I just came to know there are <strong>Truthy</strong> and <strong>Falsy</strong> values in python which are different from the normal <code>True</code> and <code>False</code>?</p>
<p>Can someone please explain in depth what <em>truthy</em> and <em>falsy</em> values are? </p>
<p>Where should I use them?</p>
<p>What is the difference between <em>truthy</em> and <code>True</code> values and <em>falsy</em> and <code>False</code> values ?</p>
| 0 | 2016-10-11T18:00:55Z | 39,984,051 | <p>All values are true except for:</p>
<ul>
<li><code>None</code></li>
<li><code>False</code></li>
<li><code>0</code></li>
<li><code>0.0</code></li>
<li><code>0j</code></li>
<li><code>[]</code></li>
<li><code>{}</code></li>
<li><code>()</code></li>
<li><code>''</code></li>
<li><code>set()</code></li>
<li>objects for which
<ul>
<li><code>obj.__bool__()</code> returns <code>False</code></li>
<li><code>obj.__len__()</code> returns <code>0</code> </li>
</ul></li>
</ul>
| 0 | 2016-10-11T18:20:52Z | [
"python"
] |
How to describe string with only repeated characters groups in regular expression in Python | 39,983,712 | <p>i.e. <code>'aaeegggwwqqqqq', 'ttteeyyjjj'</code></p>
<p>I tried <code>r'(([a-z])\1*)+$'</code>, but get error said: cannot refer to open group.</p>
<p>Anyone can help me figure it out? Thanks!</p>
| -1 | 2016-10-11T18:01:54Z | 39,983,764 | <p>Does</p>
<pre><code>(([a-z])\2*)+$
</code></pre>
<p>work?</p>
<p>Just quickly looking, <code>(([a-z])\2*)</code> is the first group and <code>([a-z])</code> is the second group. It looks like you meant to reference the second group.</p>
| 0 | 2016-10-11T18:05:18Z | [
"python",
"regex"
] |
How to describe string with only repeated characters groups in regular expression in Python | 39,983,712 | <p>i.e. <code>'aaeegggwwqqqqq', 'ttteeyyjjj'</code></p>
<p>I tried <code>r'(([a-z])\1*)+$'</code>, but get error said: cannot refer to open group.</p>
<p>Anyone can help me figure it out? Thanks!</p>
| -1 | 2016-10-11T18:01:54Z | 39,984,371 | <p>You should use a non-capturing outer group:</p>
<pre><code>re.match(r'(?:([a-z])\1+)+$', s)
</code></pre>
<p>See the <a href="https://regex101.com/r/f186ye/1" rel="nofollow">regex demo</a></p>
<p>This way, you do not have to re-adjust the backreferences inside the pattern.</p>
<p>An alternative is to use a named capture group and a named backreference:</p>
<pre><code>re.match(r'((?P<l>[a-z])(?P=l)+)+$', s)
</code></pre>
<p>See <a href="https://regex101.com/r/f186ye/2" rel="nofollow">this regex demo</a></p>
<p>Here, <code>(?P<l>[a-z])</code> captures an ASCII lowercase letter into Group "l" and <code>(?P=l)+</code> matches 1 or more (<code>+</code> matches one or more occurrences of the quantified subpattern as many times as possible) ocurrences of the captured letter.</p>
<p><a href="http://ideone.com/ipIFkx" rel="nofollow">Python demo</a>:</p>
<pre><code>import re
s = ["aaeegggwwqqqqq", "ttteeyyjjj", "sdfghj"]
for x in s:
if re.match(r'(?:([a-z])\1+)+$', x):
print("{0} matches the pattern".format(x))
</code></pre>
<p>Output:</p>
<pre><code>aaeegggwwqqqqq matches the pattern
ttteeyyjjj matches the pattern
</code></pre>
| 0 | 2016-10-11T18:37:51Z | [
"python",
"regex"
] |
Change file/folder tree structure based on name condition | 39,983,756 | <p>I've a folder/file tree like this:</p>
<pre><code>/source/photos/d831fae7-ed7f-44b1-8345-54fc54f0710f/car/1.jpg
/source/photos/20a33e40-8bb2-4ebe-b703-632115ba6714/house/
/source/photos/20a33e40-8bb2-4ebe-b703-632115ba6714/boat/b6a1b8bf-7f4c-45d6-84c1-37fbb8204328/2.jpg
/source/20dd7963-0d4a-4a80-83f8-4800de672087/music/1.mp3
/source/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/music/c6a0b1d4-9d2d-4a21-bce3-8c922f8ad55b/2.mp3
/source/movies/83e760f4-7235-4d7e-bd51-56aa82192a94/572f3820-ea22-40c1-903a-31b7f412ae38/1.mp4
/source/movies/993209ed-092a-4665-a5d1-4ce537e2a680/4c200cf1-eb6b-40a7-84d7-9a2db0f75e09/1.mp4
</code></pre>
<p>To easily read the previous tree, here it is a simpler representation:</p>
<pre><code>/source/photos/uuid0/car/1.jpg
/source/photos/uuid1/house/
/source/photos/uuid1/boat/uuid2/2.jpg
/source/uuid3/music/1.mp3
/source/uuid4/music/uuid5/2.mp3
/source/movies/uuid6/uuid7/1.mp4
/source/movies/uuid8/uuid9/1.mp4
</code></pre>
<hr>
<p>I want to move the folders and files from "<code>source</code>" to the "<code>destination</code>" directory and perform a tweak in the tree structure on the fly. The resulting tree should look like this:</p>
<pre><code>/destination/photos/car/1.jpg
/destination/photos/house/
/destination/photos/boat/2.jpg
/destination/music/1.mp3
/destination/music/2.mp3
/destination/movies/1_1.mp4
/destination/movies/1_2.mp4
</code></pre>
<hr>
<p>As you can see, I want to:</p>
<ul>
<li>Ignore every <code>uuid</code> in the middle of the path; </li>
<li>When there is a conflict on file or folder name (i.e. <code>1.mp4</code>), an incremental suffix should be added (i.e. <code>1_1.mp4</code>);</li>
<li>Move empty folders (like "<code>house</code>");</li>
<li>Avoid conflicts when moving (moving a parent directory before moving its child) - it should <strong>recursively</strong> traverse the tree and move its contents.</li>
</ul>
<p>I've tried parsing the path with <code>os.walk</code> but can't accomplish this.</p>
<hr>
<p>Any ideas? Thanks!</p>
<hr>
<p><strong>NOTE:</strong> <code>uuid</code> (i.e. <code>6e56c11b-3adf-440e-96f5-375884c96c55</code>) can be checked using the following function:</p>
<pre><code>import uuid
def validate_uuid4(uuid_string):
try:
val = uuid.UUID(uuid_string, version=4)
except ValueError:
return False
return True
</code></pre>
<hr>
<p><strong>EDIT: Example code</strong></p>
<p>The main problem is this: </p>
<p>Given the following structure</p>
<pre><code>.
├── Icon\r
└── folder1
    ├── d.txt
    └── folder1.1
        ├── 64e997aa-bb7e-4cdf-9348-8b8d48e2d336
        │   └── a.mkv
        └── d831fae7-ed7f-44b1-8345-54fc54f0710f
            ├── b.mkv
            └── d831fae7-ed7f-44b1-8345-54fc54f0710f
                ├── b.mkv
                └── c.jpg
</code></pre>
<p>with this code:</p>
<pre><code>#!/usr/bin/python
import os
import uuid
args = {}
args['rootdirOriginal'] = "/Users/xxx/Desktop/UploadDropbox"
pathString = []
pathStringClean=[]
def validate_uuid4(uuid_string):
try:
val = uuid.UUID(uuid_string, version=4)
except ValueError:
return False
return True
for dirpath, dirs, files in os.walk(args['rootdirOriginal']):
if files:
for f in files:
pathTmp = []
pathRelative = os.path.relpath(dirpath, args['rootdirOriginal'])
for p in pathRelative.split("/"):
pathTmp.append(p)
pathTmp.append(f)
pathTmpClean = [x for x in pathTmp if not validate_uuid4(x) and x[0] != "." and x[0:4]!="Icon"]
pathStringTmp = ("/").join(pathTmp)
pathStringTmpClean = ("/").join(pathTmpClean)
if len(pathTmp) > 0:
pathString.append(pathStringTmp)
pathStringClean.append(pathStringTmpClean)
print pathString
print pathStringClean
</code></pre>
<p>this is the first output:</p>
<pre><code>['./.DS_Store', './Icon\r', 'folder1/.DS_Store', 'folder1/d.txt', 'folder1/folder1.1/.DS_Store', 'folder1/folder1.1/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/.DS_Store', 'folder1/folder1.1/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/a.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/.DS_Store', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/b.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/d831fae7-ed7f-44b1-8345-54fc54f0710f/b.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/d831fae7-ed7f-44b1-8345-54fc54f0710f/c.jpg']
</code></pre>
<p>and this is the second:</p>
<pre><code>['', '', 'folder1', 'folder1/d.txt', 'folder1/folder1.1', 'folder1/folder1.1', 'folder1/folder1.1/a.mkv', 'folder1/folder1.1', 'folder1/folder1.1/b.mkv', 'folder1/folder1.1/b.mkv', 'folder1/folder1.1/c.jpg']
</code></pre>
<p>I can't just remove the duplicates since some times they are not truly dups but, instead, should be renamed as I described before</p>
| 0 | 2016-10-11T18:04:42Z | 40,085,293 | <p>I don't have the time to give a full implementation, and there are a lot of ways to do this, but here is an outline which may give you a start.
Your question isn't very specific about which particular problems you are running into, but here goes:</p>
<pre><code>import os
from sets import Set

def remove_uuids_from_path(path):
    # implement a function to remove uuids from the path name
    # use os.path.split or os.path.splitdrive and the validate_uuid function
    # to build new paths without uuids
    pass

# build a set of source paths to work on
# or you can attempt to create the new path here
# and move files, as the file names are there in the files list
source_paths = Set()
for root, dirs, files in os.walk(source_dir):
    source_paths.add(root)
    # edit: if you want file paths do:
    for file_name in files:
        file_name_path = os.path.join(root, file_name)

# run through source paths, replace source name with destination name
for s_path in source_paths:
    new_path = remove_uuids_from_path(s_path.replace(source_name, destination_name))
    # create the new directory if it doesn't already exist
    if not os.path.exists(new_path):
        os.makedirs(new_path)

# read file names from the source directory into a list
# move files to the new directory if they don't exist there already, else employ a naming scheme
for new_path_to_file in file_name_list:
    if os.path.isfile(new_path_to_file):
        pass  # change name by inspecting destination file names
    else:
        pass  # move/copy file
</code></pre>
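<p>For the first placeholder above, a rough sketch of <code>remove_uuids_from_path</code> built on the question's <code>validate_uuid4</code> helper (just one possible way to write it):</p>
<pre><code>def remove_uuids_from_path(path):
    # drop every path component that is a valid uuid4
    parts = [p for p in path.split(os.sep) if p and not validate_uuid4(p)]
    prefix = os.sep if path.startswith(os.sep) else ''
    return prefix + os.sep.join(parts)
</code></pre>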
| 1 | 2016-10-17T11:32:37Z | [
"python",
"path",
"shutil"
] |
Change file/folder tree structure based on name condition | 39,983,756 | <p>I've a folder/file tree like this:</p>
<pre><code>/source/photos/d831fae7-ed7f-44b1-8345-54fc54f0710f/car/1.jpg
/source/photos/20a33e40-8bb2-4ebe-b703-632115ba6714/house/
/source/photos/20a33e40-8bb2-4ebe-b703-632115ba6714/boat/b6a1b8bf-7f4c-45d6-84c1-37fbb8204328/2.jpg
/source/20dd7963-0d4a-4a80-83f8-4800de672087/music/1.mp3
/source/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/music/c6a0b1d4-9d2d-4a21-bce3-8c922f8ad55b/2.mp3
/source/movies/83e760f4-7235-4d7e-bd51-56aa82192a94/572f3820-ea22-40c1-903a-31b7f412ae38/1.mp4
/source/movies/993209ed-092a-4665-a5d1-4ce537e2a680/4c200cf1-eb6b-40a7-84d7-9a2db0f75e09/1.mp4
</code></pre>
<p>To easily read the previous tree, here it is a simpler representation:</p>
<pre><code>/source/photos/uuid0/car/1.jpg
/source/photos/uuid1/house/
/source/photos/uuid1/boat/uuid2/2.jpg
/source/uuid3/music/1.mp3
/source/uuid4/music/uuid5/2.mp3
/source/movies/uuid6/uuid7/1.mp4
/source/movies/uuid8/uuid9/1.mp4
</code></pre>
<hr>
<p>I want to move the folders and files from "<code>source</code>" to the "<code>destination</code>" directory and perform a tweak in the tree structure on the fly. The resulting tree should look like this:</p>
<pre><code>/destination/photos/car/1.jpg
/destination/photos/house/
/destination/photos/boat/2.jpg
/destination/music/1.mp3
/destination/music/2.mp3
/destination/movies/1_1.mp4
/destination/movies/1_2.mp4
</code></pre>
<hr>
<p>As you can see, I want to:</p>
<ul>
<li>Ignore every <code>uuid</code> in the middle of the path; </li>
<li>When there is a conflict on file or folder name (i.e. <code>1.mp4</code>), an incremental suffix should be added (i.e. <code>1_1.mp4</code>);</li>
<li>Move empty folders (like "<code>house</code>");</li>
<li>Avoid conflicts when moving (moving a parent directory before moving its child) - it should <strong>recursively</strong> traverse the tree and move its contents.</li>
</ul>
<p>I've tried parsing the path with <code>os.walk</code> but can't accomplish this.</p>
<hr>
<p>Any ideas? Thanks!</p>
<hr>
<p><strong>NOTE:</strong> <code>uuid</code> (i.e. <code>6e56c11b-3adf-440e-96f5-375884c96c55</code>) can be checked using the following function:</p>
<pre><code>import uuid
def validate_uuid4(uuid_string):
try:
val = uuid.UUID(uuid_string, version=4)
except ValueError:
return False
return True
</code></pre>
<hr>
<p><strong>EDIT: Example code</strong></p>
<p>The main problem is this: </p>
<p>Given the following structure</p>
<pre><code>.
├── Icon\r
└── folder1
    ├── d.txt
    └── folder1.1
        ├── 64e997aa-bb7e-4cdf-9348-8b8d48e2d336
        │   └── a.mkv
        └── d831fae7-ed7f-44b1-8345-54fc54f0710f
            ├── b.mkv
            └── d831fae7-ed7f-44b1-8345-54fc54f0710f
                ├── b.mkv
                └── c.jpg
</code></pre>
<p>with this code:</p>
<pre><code>#!/usr/bin/python
import os
import uuid
args = {}
args['rootdirOriginal'] = "/Users/xxx/Desktop/UploadDropbox"
pathString = []
pathStringClean=[]
def validate_uuid4(uuid_string):
try:
val = uuid.UUID(uuid_string, version=4)
except ValueError:
return False
return True
for dirpath, dirs, files in os.walk(args['rootdirOriginal']):
if files:
for f in files:
pathTmp = []
pathRelative = os.path.relpath(dirpath, args['rootdirOriginal'])
for p in pathRelative.split("/"):
pathTmp.append(p)
pathTmp.append(f)
pathTmpClean = [x for x in pathTmp if not validate_uuid4(x) and x[0] != "." and x[0:4]!="Icon"]
pathStringTmp = ("/").join(pathTmp)
pathStringTmpClean = ("/").join(pathTmpClean)
if len(pathTmp) > 0:
pathString.append(pathStringTmp)
pathStringClean.append(pathStringTmpClean)
print pathString
print pathStringClean
</code></pre>
<p>this is the first output:</p>
<pre><code>['./.DS_Store', './Icon\r', 'folder1/.DS_Store', 'folder1/d.txt', 'folder1/folder1.1/.DS_Store', 'folder1/folder1.1/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/.DS_Store', 'folder1/folder1.1/64e997aa-bb7e-4cdf-9348-8b8d48e2d336/a.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/.DS_Store', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/b.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/d831fae7-ed7f-44b1-8345-54fc54f0710f/b.mkv', 'folder1/folder1.1/d831fae7-ed7f-44b1-8345-54fc54f0710f/d831fae7-ed7f-44b1-8345-54fc54f0710f/c.jpg']
</code></pre>
<p>and this is the second:</p>
<pre><code>['', '', 'folder1', 'folder1/d.txt', 'folder1/folder1.1', 'folder1/folder1.1', 'folder1/folder1.1/a.mkv', 'folder1/folder1.1', 'folder1/folder1.1/b.mkv', 'folder1/folder1.1/b.mkv', 'folder1/folder1.1/c.jpg']
</code></pre>
<p>I can't just remove the duplicates since some times they are not truly dups but, instead, should be renamed as I described before</p>
| 0 | 2016-10-11T18:04:42Z | 40,091,468 | <p>Here is my ending solution (thank you Sushanta!)</p>
<pre><code>#!/usr/bin/python
import os
import uuid
from collections import Counter
import shutil
args = {}
args['rootdirOriginal'] = "/Users/xxx/Desktop/UploadDropbox"
args['uuid'] = str(uuid.uuid4())
args['rootdir'] = args['rootdirOriginal']+"/"+args['uuid']
pathString = []
pathStringClean=[]
pathStringFolder = []
toDelete = []
# Validate if string is a valid UUID
def validate_uuid4(uuid_string):
try:
val = uuid.UUID(uuid_string, version=4)
except ValueError:
return False
return True
# Walk through "source" directory
for dirpath, dirs, files in os.walk(args['rootdirOriginal']):
# Files
if files:
for f in files:
pathTmp = []
pathFolderTmp = []
pathRelative = os.path.relpath(dirpath, args['rootdirOriginal'])
for p in pathRelative.split("/"):
pathTmp.append(p)
pathFolderTmp.append(p)
# Flag every file
pathTmp.append(f+"***")
# Delete list elements whose name are: a previous UUID //// starting with "." //// starting with "Icon"
pathTmpClean = [x for x in pathTmp if not validate_uuid4(x) and x[0] != "." and x[0:4]!="Icon"]
pathFolderTmpClean = [x for x in pathFolderTmp if not validate_uuid4(x) and x[0] != "." and x[0:4]!="Icon"]
# Convert to path string
if len(pathTmpClean) > 0:
pathStringTmp = ("/").join(pathTmp)
pathStringTmpFolder = ("/").join(pathFolderTmpClean)
pathStringTmpClean = ("/").join(pathTmpClean)
pathString.append(pathStringTmp)
pathStringClean.append(pathStringTmpClean)
if pathStringTmpFolder != "":
pathStringFolder.append(pathStringTmpFolder)
# Empty directory
if dirs:
for d in dirs:
emptyDir = os.path.relpath(dirpath, args['rootdirOriginal'])
if emptyDir == ".":
emptyDir = d
else:
emptyDir = os.path.join(emptyDir,d)
pathStringFolder.append(emptyDir)
# Delete repeating directories
pathStringFolder = list(set(pathStringFolder))
# Create a list with the indexes of first list when it is a directory
for i in range(0,len(pathStringClean)):
if (len(pathStringClean[i]) > 3):
if pathStringClean[i][-3:] != "***":
toDelete.append(i)
# Delete indexes of both "source" and "destinantion" lists where it is directory
for i in sorted(toDelete, reverse=True):
del pathString[i]
del pathStringClean[i]
# Delete the flag "***"
pathString = [x[:-3] for x in pathString]
pathStringClean = [x[:-3] for x in pathStringClean]
# Rename repeated filenames - create sequential suffix
counts = Counter(pathStringClean)
for s,num in counts.items():
if num > 1:
for suffix in range(1, num + 1):
pathStringClean[pathStringClean.index(s)] = ("/").join(s.split("/")[:-1]) +("/")+(".").join((s.split("/")[-1]).split(".")[:-1])+"_"+str(suffix)+"."+(s.split("/")[-1]).split(".")[-1]
pathStringFolder.reverse()
# Create root "destination" directory
os.mkdir(args['rootdir'])
# Create other directories
for folder in pathStringFolder:
try:
os.makedirs(os.path.join(args['rootdir'],folder))
except:
pass
# Move files
for i in range(0,len(pathStringClean)):
os.rename(os.path.join(args['rootdirOriginal'],pathString[i]),os.path.join(args['rootdir'],pathStringClean[i]))
# Delete everything except root "destination" directory
for f in os.listdir(args['rootdirOriginal']):
if (not (os.path.isfile(os.path.join( args['rootdirOriginal'],f))) and f != args['uuid']):
shutil.rmtree(os.path.join( args['rootdirOriginal'],f))
</code></pre>
| 0 | 2016-10-17T16:35:27Z | [
"python",
"path",
"shutil"
] |
RotatingFileHandler does not continue logging after error encountered | 39,983,860 | <pre><code>Traceback (most recent call last):
File "/usr/lib64/python2.6/logging/handlers.py", line 76, in emit
if self.shouldRollover(record):
File "/usr/lib64/python2.6/logging/handlers.py", line 150, in shouldRollover
self.stream.seek(0, 2) #due to non-posix-compliant Windows feature
ValueError: I/O operation on closed file
</code></pre>
<p>I have a line in my script:</p>
<pre><code>handler = logging.handlers.RotatingFileHandler(cfg_obj.log_file,maxBytes = maxlog_size, backupCount = 10)
</code></pre>
<p>It runs fine when there are no error messages. But when there's an error log, the logs after the error are not written to the file unless the process is restarted. We do not want to restart the process every time there is an error.
Thanks for your help in advance!</p>
| 0 | 2016-10-11T18:10:31Z | 39,984,369 | <p>I highly recommend you to use a configuration file. The configuration code below "logging.conf" has different handlers and formatters just as example: </p>
<pre><code>[loggers]
keys=root
[handlers]
keys=consoleHandler, rotatingFileHandler
[formatters]
keys=simpleFormatter, extendedFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler, rotatingFileHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_rotatingFileHandler]
class=handlers.RotatingFileHandler
level=INFO
formatter=extendedFormatter
args=('path/logs_file.log', 'a', 2000000, 1000)
[formatter_simpleFormatter]
format=%(asctime)s - %(levelname)s - %(message)s
datefmt=
[formatter_extendedFormatter]
format= %(asctime)s - %(levelname)s - %(filename)s:%(lineno)s - %(funcName)s() %(message)s
datefmt=
</code></pre>
<p>Now how to use it "main.py":</p>
<pre><code>import logging.config
# LOGGER
logging.config.fileConfig('path_to_conf_file/logging.conf')
LOGGER = logging.getLogger('root')
try:
LOGGER.debug("Debug message...")
LOGGER.info("Info message...")
except Exception as e:
LOGGER.exception(e)
</code></pre>
<p>Let me know if you need more help.</p>
| 0 | 2016-10-11T18:37:42Z | [
"python",
"logging"
] |
Python - writing and reading from a temporary file | 39,983,886 | <p>I am trying to create a temporary file that I write in some lines from another file and then make some objects from the data. I am not sure how to find and open the temp file so I can read it. My code:</p>
<pre><code>with tempfile.TemporaryFile() as tmp:
lines = open(file1).readlines()
tmp.writelines(lines[2:-1])
dependencyList = []
for line in tmp:
groupId = textwrap.dedent(line.split(':')[0])
artifactId = line.split(':')[1]
version = line.split(':')[3]
scope = str.strip(line.split(':')[4])
dependencyObject = depenObj(groupId, artifactId, version, scope)
dependencyList.append(dependencyObject)
tmp.close()
</code></pre>
<p>Essentially I just want to make a middleman temporary document to protect against accidentally overwriting a file. </p>
| 0 | 2016-10-11T18:11:45Z | 39,984,048 | <p>As per the <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryFile" rel="nofollow">docs</a>, the file is deleted when the <code>TemporaryFile</code> is closed and that happens when you exit the <code>with</code> clause. So... don't exit the <code>with</code> clause. Rewind the file and do your work in the <code>with</code>. </p>
<pre><code>with tempfile.TemporaryFile() as tmp:
lines = open(file1).readlines()
tmp.writelines(lines[2:-1])
tmp.seek(0)
for line in tmp:
groupId = textwrap.dedent(line.split(':')[0])
artifactId = line.split(':')[1]
version = line.split(':')[3]
scope = str.strip(line.split(':')[4])
dependencyObject = depenObj(groupId, artifactId, version, scope)
dependencyList.append(dependencyObject)
</code></pre>
| 1 | 2016-10-11T18:20:47Z | [
"python",
"temporary-files"
] |
Python - writing and reading from a temporary file | 39,983,886 | <p>I am trying to create a temporary file that I write in some lines from another file and then make some objects from the data. I am not sure how to find and open the temp file so I can read it. My code:</p>
<pre><code>with tempfile.TemporaryFile() as tmp:
lines = open(file1).readlines()
tmp.writelines(lines[2:-1])
dependencyList = []
for line in tmp:
groupId = textwrap.dedent(line.split(':')[0])
artifactId = line.split(':')[1]
version = line.split(':')[3]
scope = str.strip(line.split(':')[4])
dependencyObject = depenObj(groupId, artifactId, version, scope)
dependencyList.append(dependencyObject)
tmp.close()
</code></pre>
<p>Essentially I just want to make a middleman temporary document to protect against accidentally overwriting a file. </p>
| 0 | 2016-10-11T18:11:45Z | 39,984,066 | <p>You've got a scope problem; the file <code>tmp</code> only exists within the scope of the <code>with</code> statement which creates it. Additionally, you'll need to use a <code>NamedTemporaryFile</code> if you want to access the file later outside of the initial <code>with</code> (this gives the OS the ability to access the file). Also, I'm not sure why you're trying to append to a temporary file... since it won't have existed before you instantiate it.</p>
<p>Try this:</p>
<pre><code>import tempfile
tmp = tempfile.NamedTemporaryFile()
# Open the file for writing.
with open(tmp.name, 'w') as f:
f.write(stuff) # where `stuff` is, y'know... stuff to write (a string)
...
# Open the file for reading.
with open(tmp.name) as f:
for line in f:
... # more things here
</code></pre>
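<p>One caveat with the sketch above: on some platforms (notably Windows) a <code>NamedTemporaryFile</code> cannot be opened a second time by name while the original handle is still open. A variant that sidesteps this, at the cost of cleaning up manually, is to pass <code>delete=False</code>:</p>
<pre><code>import os
import tempfile

tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False)
try:
    tmp.write("some intermediate data\n")
    tmp.close()                    # flush and release the handle
    with open(tmp.name) as f:      # reopen by name for reading
        print(f.read())
finally:
    os.remove(tmp.name)            # delete=False means we remove it ourselves
</code></pre>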
| 0 | 2016-10-11T18:21:31Z | [
"python",
"temporary-files"
] |
How to filter datetime field +/- datetime using SQLalchemy | 39,983,887 | <p>I have a mysql table representing editorial articles and their metadata like title, author, and datecreated. </p>
<p>I have another table representing metrics (such as view counts) about those articles computed at different time points. Each row is a recording of these metrics for a particular article at a particular moment in time.</p>
<p><strong>I want to retrieve all rows of the metrics table where the metric row timestamp field is within a period of two hours occurring after one hour past the related article's datecreated field. I'd like to do this using SQLalchemy.</strong></p>
<p>My current SQLalchemy query looks like this:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy import func
from datetime import timedelta

import models as m

s = session()
q = (s.query(m.Article.fb_shares, func.avg(m.ArticlesMetric.views))
.join(m.ArticlesMetric)
.filter(sa.between(m.ArticlesMetric.tstamp,
m.Article.created + timedelta(hours=1),
m.Article.created + timedelta(hours=3))
)
.group_by(m.Article.id))
result = q.all()
s.close()
</code></pre>
<p>However, this results in the following error:</p>
<pre><code>Warning: (1292, u"Truncated incorrect DOUBLE value: '1970-01-01 05:30:00'")
</code></pre>
<p>MySQL internally casts operands of different types to doubles when a comparison between different types is attempted. I believe this error is somehow a result of using the timedelta, but I'm not sure how else I can achieve what I'm trying to do. Any suggestions are very welcome.</p>
| 1 | 2016-10-11T18:11:48Z | 39,984,868 | <p>Actually, this is harder than it looks. If you had done this in MySQL directly, this is what you would have written:</p>
<pre><code>SELECT ...
FROM ...
JOIN ...
WHERE tstamp BETWEEN DATE_ADD(created, INTERVAL 1 HOUR) AND DATE_ADD(created, INTERVAL 3 HOUR)
GROUP BY ...
</code></pre>
<p>And you have to do more or less the same thing with SQLAlchemy, simply because <code>m.Article.created</code> is not a constant.</p>
<p>If you enable query logging, you can see the MySQL query generated by your code, and see that it does not correspond to what you would have thought:</p>
<pre><code>INFO:sqlalchemy.engine.base.Engine:SELECT test.id AS test_id, test.dt AS test_dt, test.tp AS test_tp
FROM test
WHERE test.tp BETWEEN test.dt + %(dt_1)s AND test.dt + %(dt_2)s
INFO:sqlalchemy.engine.base.Engine:{'dt_1': datetime.datetime(1970, 1, 1, 1, 0), 'dt_2': datetime.datetime(1970, 1, 1, 3, 0)}
</code></pre>
<hr>
<p>I managed to find a way to do what you want, here is the code:</p>
<pre><code>from sqlalchemy.sql import func
from sqlalchemy.sql.expression import text
...
filter(sa.between(m.ArticlesMetric.tstamp,
func.date_add(m.Article.created, text('INTERVAL 1 HOUR')),
       func.date_add(m.Article.created, text('INTERVAL 3 HOUR'))))
</code></pre>
| 1 | 2016-10-11T19:07:10Z | [
"python",
"mysql",
"datetime",
"sqlalchemy"
] |
Tensorflow: Delay variable over training steps | 39,983,947 | <p>In Tensorflow, I want to use some of the variables of my network from the previous training step in the next training step. More specifically, I want to calculate a secondary cost function during training which utilizes some network tensors from the previous training step.</p>
<p>This question could be answered with fragments of RNN code, but I didn't figure out how yet. I was looking into <a href="http://stackoverflow.com/questions/35145645/how-can-i-feed-last-output-yt-1-as-input-for-generating-yt-in-tensorflow-rnn#">How can I feed last output y(t-1) as input for generating y(t) in tensorflow RNN?</a> and <a href="http://stackoverflow.com/questions/39681026/tensorflow-how-to-pass-output-from-previous-time-step-as-input-to-next-timestep">Tensorflow: How to pass output from previous time-step as input to next timestep</a> as well as <a href="http://stackoverflow.com/questions/38241410/tensorflow-remember-lstm-state-for-next-batch-stateful-lstm">TensorFlow: Remember LSTM state for next batch (stateful LSTM)</a>.</p>
<p>Assume h is the last layer of a neural network with several previous layers, e.g.:</p>
<pre><code>h = tf.nn.relu(tf.matmul(h_previous,W_previous))
</code></pre>
<p>How could I preserve the tensor h after processing a sample during training (e.g. save it to h_old), so that I can use it in the next training step for a computation like:</p>
<pre><code>d = tf.sub(h,h_old)
</code></pre>
<p>In this example h is updated with the current training sample and h_old is the tensor which was computed on the previous training sample. Some ideas for this issue would be great!</p>
| 1 | 2016-10-11T18:15:17Z | 39,988,777 | <p>How about making <code>h_old</code> a variable?</p>
<pre><code>h_old = tf.Variable(tf.zeros(<some-shape>))
.
.
h = tf.nn.relu(tf.matmul(h_previous,W_previous))
d = tf.sub(h,h_old)
h_old.assign(h)   # builds an assign op; it still has to be run each step (see the sketch below)
</code></pre>
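<p>A small note on using this: <code>assign</code> only builds an op, so it has to be run explicitly after each training step. A sketch, assuming a TF1-style session and an optimizer op called <code>train_step</code> (both names are placeholders here; also note <code>tf.sub</code> was later renamed <code>tf.subtract</code>):</p>
<pre><code>update_h_old = h_old.assign(h)   # op that copies the current h into h_old

# inside the training loop:
# _, d_val = sess.run([train_step, d], feed_dict=feed)
# sess.run(update_h_old)         # remember h for the next training step
</code></pre>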
| 0 | 2016-10-12T00:34:03Z | [
"python",
"tensorflow",
"recurrent-neural-network"
] |
Retrieving values trimmed by ltrim in redis list | 39,983,954 | <p>A common design pattern in redis when handling lists is:</p>
<pre><code>redis_server.lpush(list_name, element)
redis_server.ltrim(list_name, 0, 99)
</code></pre>
<p>(Python syntax is used to illustrate it)</p>
<p>What to do if one needs to retrieve all the values beyond index 99, before invoking <code>ltrim</code>? One way to do it is as follows, but is there a <em>faster</em> way to do it?</p>
<hr>
<pre><code>redis_server.lpush(list_name, element)
list_length = redis_server.llen(list_name)
extra = list_length - 100
while (extra > 0):
item = redis_server.lpop(list_name)
#do something with the item
extra = extra - 1
redis_server.ltrim(list_name, 0, 99)
</code></pre>
| 0 | 2016-10-11T18:15:33Z | 39,986,858 | <p>A first solution would be to get all extra items in one request, using <a href="http://redis.io/commands/lrange" rel="nofollow">LRANGE</a>:</p>
<pre><code>redis_server.lpush(list_name, element)
items = redis_server.lrange(list_name, 100, -1)
# do something with the items
redis_server.ltrim(list_name, 0, 99)
</code></pre>
<p>A second solution, a bit more complex but <em>maybe</em> faster (would need to be confirmed by a test, it's not certain) as it requires only one request instead of two, would be to write a Lua script and to send it using <a href="http://redis.io/commands/eval" rel="nofollow">EVAL</a> and <a href="http://redis.io/commands/evalsha" rel="nofollow">EVALSHA</a>. But you probably don't need it, the first is certainly fast enough.</p>
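<p>If you do go the Lua route, a minimal sketch using redis-py's <code>register_script</code> (with the 100-item cap hard-coded) could look like this:</p>
<pre><code>import redis

redis_server = redis.StrictRedis()

push_and_trim = redis_server.register_script("""
redis.call('LPUSH', KEYS[1], ARGV[1])
local extra = redis.call('LRANGE', KEYS[1], 100, -1)
redis.call('LTRIM', KEYS[1], 0, 99)
return extra
""")

overflow = push_and_trim(keys=[list_name], args=[element])
# do something with the items that fell off the end
</code></pre>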
| 1 | 2016-10-11T21:15:42Z | [
"python",
"redis"
] |
Element wise multiplication of a 2D and 1D array in python | 39,983,977 | <p>Lets say I have two numpy arrays:</p>
<pre><code>import numpy as np
x = np.array([[1,2,3], [4,5,6], [7,8,9]])
y = np.array([-1, 1, -1])
</code></pre>
<p>I want to multiply x and y in such a way that I get z:</p>
<pre><code>z = np.array([[-1,2,-3], [-4,5,-6], [-7,8,-9]])
</code></pre>
<p>In other words, if element j of y is -1, then all elements of the j-th row of x get multiplied by -1. If element k of y is 1, then all elements of the k-th row of x get multiplied by 1. </p>
<p>How do I do this?</p>
| 0 | 2016-10-11T18:16:51Z | 39,984,016 | <p>Simply use the multiplication operator:</p>
<pre><code>x * y
Out[6]:
array([[-1, 2, -3],
[-4, 5, -6],
[-7, 8, -9]])
</code></pre>
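<p>This works because <code>y</code> is broadcast against the last axis of <code>x</code>: each row of <code>x</code> is multiplied element-wise by <code>y</code>, so column <code>j</code> is scaled by <code>y[j]</code>, which is exactly the <code>z</code> shown in the question. If you instead wanted to scale each <em>row</em> <code>k</code> by <code>y[k]</code>, reshape <code>y</code> into a column first:</p>
<pre><code>x * y[:, np.newaxis]   # scales row k of x by y[k]
</code></pre>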
| 3 | 2016-10-11T18:19:03Z | [
"python",
"numpy"
] |
How to get the message sender in yowsup cli echo? | 39,984,039 | <p>I'm using Yowsup cli to send and receive messages using WhatsApp. I could register and send messages. But when I execute this command to listen for incoming messages:</p>
<pre><code>yowsup-cli demos --login number:password --echo -E s40
</code></pre>
<p>I can see the message text, but I cannot see who is the message sender. How can I get it using yowsup-cli?</p>
<p>This is my result:</p>
<pre><code>yowsup-cli v2.0.15
yowsup v2.5.0
Copyright (c) 2012-2016 Tarek Galal
http://www.openwhatsapp.org
This software is provided free of charge. Copying and redistribution is
encouraged.
If you appreciate this software and you would like to support future
development please consider donating:
http://openwhatsapp.org/yowsup/donate
WARNING:yowsup.layers.axolotl.layer_receive:Received a message that we've previously decrypted, goint to send the delivery receipt myself
DUMP:
Teste 8
['\n', '\x07', 'T', 'e', 's', 't', 'e', ' ', '8']
[10, 7, 84, 101, 115, 116, 101, 32, 56]
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1087, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1109, in InternalParse
(tag_bytes, new_pos) = local_ReadTag(buffer, pos)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/google/protobuf/internal/decoder.py", line 181, in ReadTag
while six.indexbytes(buffer, pos) & 0x80:
TypeError: unsupported operand type(s) for &: 'str' and 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/bin/yowsup-cli", line 368, in <module>
if not parser.process():
File "/Library/Frameworks/Python.framework/Versions/3.5/bin/yowsup-cli", line 270, in process
self.startEcho()
File "/Library/Frameworks/Python.framework/Versions/3.5/bin/yowsup-cli", line 308, in startEcho
stack.start()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/demos/echoclient/stack.py", line 21, in start
self.stack.loop()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/stacks/yowstack.py", line 196, in loop
asyncore.loop(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncore.py", line 203, in loop
poll_fun(timeout, map)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncore.py", line 150, in poll
read(obj)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncore.py", line 87, in read
obj.handle_error()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncore.py", line 83, in read
obj.handle_read_event()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncore.py", line 423, in handle_read_event
self.handle_read()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/network/layer.py", line 102, in handle_read
self.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/network/layer.py", line 110, in receive
self.toUpper(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/stanzaregulator/layer.py", line 29, in receive
self.processReceived()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/stanzaregulator/layer.py", line 52, in processReceived
self.processReceived()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/stanzaregulator/layer.py", line 49, in processReceived
self.toUpper(oneMessageData)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/auth/layer_crypt.py", line 65, in receive
self.toUpper(payload)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/coder/layer.py", line 35, in receive
self.toUpper(node)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/logger/layer.py", line 14, in receive
self.toUpper(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_control.py", line 44, in receive
self.toUpper(protocolTreeNode)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 76, in toUpper
self.__upper.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/__init__.py", line 189, in receive
s.receive(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_receive.py", line 41, in receive
self.onMessage(protocolTreeNode)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_receive.py", line 74, in onMessage
self.handleEncMessage(protocolTreeNode)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_receive.py", line 88, in handleEncMessage
self.handleWhisperMessage(node)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_receive.py", line 144, in handleWhisperMessage
self.parseAndHandleMessageProto(encMessageProtocolEntity, plaintext[:-padding])
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/yowsup/layers/axolotl/layer_receive.py", line 171, in parseAndHandleMessageProto
m.ParseFromString(serializedData)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/google/protobuf/message.py", line 185, in ParseFromString
self.MergeFromString(serialized)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1093, in MergeFromString
raise message_mod.DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
</code></pre>
 | 0 | 2016-10-11T18:20:22Z | 40,130,133 | <p>Actually, this is a problem that occurs when Python 3.5 is used. So I switched to Python 2.7 and the problem was solved.</p>
| 0 | 2016-10-19T11:27:04Z | [
"python",
"yowsup"
] |
Requirement already up-to-date: pip in | 39,984,046 | <p><a href="https://i.stack.imgur.com/hyCiv.png" rel="nofollow"><img src="https://i.stack.imgur.com/hyCiv.png" alt="enter image description here"></a></p>
<pre><code>Wameedhs-MacBook-Air:Desktop wameedh$ cd
Wameedhs-MacBook-Air:~ wameedh$ export PYTHONPATH=.
Wameedhs-MacBook-Air:~ wameedh$ python ~/Downloads/get-pip.py
Collecting pip
Using cached pip-8.1.2-py2.py3-none-any.whl
Collecting wheel
Using cached wheel-0.29.0-py2.py3-none-any.whl
Installing collected packages: pip, wheel
Exception:
Traceback (most recent call last):
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/req/req_set.py", line 742, in install
**kwargs
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/req/req_install.py", line 831, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/req/req_install.py", line 1032, in move_wheel_files
isolated=self.isolated,
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/wheel.py", line 346, in move_wheel_files
clobber(source, lib_dir, True)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/wheel.py", line 317, in clobber
ensure_dir(destdir)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpZqkyAz/pip.zip/pip/utils/__init__.py", line 83, in ensure_dir
os.makedirs(path)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pip'
Wameedhs-MacBook-Air:~ wameedh$ sudo chown -R $USER /Library/Python/2.7
Password:
Wameedhs-MacBook-Air:~ wameedh$ python ~/Downloads/get-pip.py
Collecting pip
Using cached pip-8.1.2-py2.py3-none-any.whl
Collecting wheel
Using cached wheel-0.29.0-py2.py3-none-any.whl
Installing collected packages: pip, wheel
Exception:
Traceback (most recent call last):
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/req/req_set.py", line 742, in install
**kwargs
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/req/req_install.py", line 831, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/req/req_install.py", line 1032, in move_wheel_files
isolated=self.isolated,
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/wheel.py", line 463, in move_wheel_files
generated.extend(maker.make(spec))
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/_vendor/distlib/scripts.py", line 372, in make
self._make_script(entry, filenames, options=options)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/_vendor/distlib/scripts.py", line 276, in _make_script
self._write_script(scriptnames, shebang, script, filenames, ext)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/_vendor/distlib/scripts.py", line 250, in _write_script
self._fileop.write_binary_file(outname, script_bytes)
File "/var/folders/7x/jx8z1sg941vfxf3p7tznf8ch0000gn/T/tmpPhatQx/pip.zip/pip/_vendor/distlib/util.py", line 405, in write_binary_file
with open(path, 'wb') as f:
IOError: [Errno 13] Permission denied: '/usr/local/bin/pip'
Wameedhs-MacBook-Air:~ wameedh$ sudo python ~/Downloads/get-pip.py
The directory '/Users/wameedh/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/wameedh/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already up-to-date: pip in /Library/Python/2.7/site-packages
Collecting wheel
Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
    100% |████████████████████████████████| 71kB 1.8MB/s
Installing collected packages: wheel
Successfully installed wheel-0.29.0
Wameedhs-MacBook-Air:~ wameedh$ pip install requests
-bash: pip: command not found
Wameedhs-MacBook-Air:~ wameedh$ which python
/usr/bin/python
Wameedhs-MacBook-Air:~ wameedh$ pip
-bash: pip: command not found
Wameedhs-MacBook-Air:~ wameedh$ export PYTHONPATH=.
Wameedhs-MacBook-Air:~ wameedh$ pip
-bash: pip: command not found
Wameedhs-MacBook-Air:~ wameedh$ which python
/usr/bin/python
Wameedhs-MacBook-Air:~ wameedh$ python -V
Python 2.7.10
Wameedhs-MacBook-Air:~ wameedh$ pip
-bash: pip: command not found
Wameedhs-MacBook-Air:~ wameedh$ man pip
No manual entry for pip
Wameedhs-MacBook-Air:~ wameedh$
[Restored Oct 9, 2016, 6:33:31 PM]
Last login: Sun Oct 9 18:33:31 on ttys000
Restored session: Sun Oct 9 18:32:03 PDT 2016
Wameedhs-MacBook-Air:~ wameedh$
</code></pre>
<p>It says:</p>
<pre><code>Requirement already up-to-date: pip in /Library/Python/2.7/site-packages
Wameedhs-MacBook-Air:~ wameedh$ pip install requests
</code></pre>
<p>Still I am getting this:</p>
<pre><code>-bash: pip: command not found
</code></pre>
 | -1 | 2016-10-11T18:20:38Z | 40,054,354 | <p>Thanks all for the help. I fixed it! The problem was I didn't have the latest version of Xcode. Running <code>xcode-select --install</code> before installing pip fixed it.</p>
| 0 | 2016-10-15T02:31:25Z | [
"python",
"python-2.7",
"pip"
] |
Python memory allocation with pandas and pickle | 39,984,097 | <p>I am running a Python script which can be roughly summarized (in semi-pseudo-code) as follows:</p>
<pre><code>import pandas as pd
for json_file in json_files:
with open(json_file,'r') as fin:
data = fin.readlines()
data_str = '[' + ','.join(x.strip() for x in data) + ']'
df = pd.read_json(data_str)
df.to_pickle('%s.pickle' % json_file)
del df, data, data_str
</code></pre>
<p>The process works iteratively, creating data frames and saving each to a unique file. However, my memory seems to get used up during the process, as if <code>del df, data, data_str</code> does not free up memory (originally, I did not include the <code>del</code> statement in the code, but I hoped that adding it would resolve the issue -- it did not). During each iteration, approximately the same amount of data is being read into the data frame, approximately 3% of my available memory; as the process iterates, each iteration there is a reported 3% bump in <code>%MEM</code> (from <code>ps u | grep [p]ython</code> in my terminal), and eventually my memory is swamped and the process is killed. My question is how should I change my code/approach so that at each iteration, the memory from the previous iteration is freed? </p>
<p>To note, I'm running Ubuntu 16.04 with Python 3.5.2 via Anaconda. </p>
<p>Thanks in advance for your direction. </p>
 | 2 | 2016-10-11T18:23:07Z | 39,984,419 | <p>In Python, automatic garbage collection deallocates objects once they are no longer referenced (a pandas DataFrame is just another Python object in this respect). There are different garbage collection strategies that can be tweaked (which requires significant learning).</p>
<p>You can manually trigger the garbage collection using</p>
<pre><code>import gc
gc.collect()
</code></pre>
<p>But frequent calls to the garbage collector are discouraged, as collection is a costly operation and may affect performance.</p>
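<p>Applied to the loop from the question, that would look roughly like this (whether it actually frees the memory depends on what else is still holding references to those objects):</p>
<pre><code>import gc
import pandas as pd

for json_file in json_files:
    with open(json_file, 'r') as fin:
        data = fin.readlines()
    data_str = '[' + ','.join(x.strip() for x in data) + ']'
    df = pd.read_json(data_str)
    df.to_pickle('%s.pickle' % json_file)
    del df, data, data_str
    gc.collect()   # force a collection between iterations
</code></pre>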
<p><a href="http://www.digi.com/wiki/developer/index.php/Python_Garbage_Collection" rel="nofollow">Reference</a></p>
| 1 | 2016-10-11T18:40:40Z | [
"python",
"memory-management",
"memory-leaks"
] |
Pycharm debugger shows the: "Variables are not available" message | 39,984,188 | <p>I am new to debugging; I simply want to see how the variables change as I run the program. I want to see what my program does and how.</p>
<p>But when I try to run the debugger, it shows the message "Variables are not available" (see picture).</p>
<p><a href="https://i.stack.imgur.com/XcPmy.png" rel="nofollow">Picture of the debug problem.</a> </p>
| -1 | 2016-10-11T18:27:51Z | 39,985,395 | <ol>
<li>Position the cursor on a line in your code where you are interested in seeing your variables.</li>
<li>Press Ctrl-F8 to toggle a breakpoint.</li>
<li>Debug your code.</li>
</ol>
| 0 | 2016-10-11T19:41:12Z | [
"python",
"debugging",
"pycharm"
] |
Py Processing pixels[] array does not contain all pixels in large images | 39,984,255 | <p>I'm using Processing in Python mode to load an image and do a calculation on it. The general idea is:</p>
<pre><code>def setup():
global maxx, maxy
maxx = 0
maxy = 0
# load the image
global img
img = loadImage("img.jpg");
maxx = img.width
maxy = img.height
def draw():
image(img, 0, 0);
def mousePressed():
calc()
def calc():
height = int(img.height)
width = int(img.width)
print "width: " + `width`
print "height: " + `height`
print "width*height: " + `width*height`
# iterate over the input image
loadPixels()
print "length of pixels array: " + `len(pixels)`
# ... do stuff with the image
</code></pre>
<p>For smaller images on the order of 1920x1200, the "width * height" and "length of pixel array" are the same. For large images like 3025×2009, the length of the pixels array is substantially less. For the example of 3025 x 2009 the difference is:<br>
width*height: 6077225
length of pixels array: 3944600</p>
<p>Any ideas what might be going on?</p>
 | 0 | 2016-10-11T18:31:29Z | 39,984,708 | <p>While debugging, I found the problem. Calling <code>loadPixels()</code> on the img object gets the correct pixels ...</p>
<pre><code>def calc():
height = int(img.height)
width = int(img.width)
print "width: " + `width`
print "height: " + `height`
print "width*height: " + `width*height`
# iterate over the input image
img.loadPixels()
print "length of pixels array: " + `len(img.pixels)`
</code></pre>
<p>I'll update this answer after more research, but the difference appears to be that the sketch-level <code>loadPixels()</code> fills <code>pixels[]</code> from the display window rather than from the image itself, while <code>img.loadPixels()</code> works on the image's own pixel buffer, so for images larger than the window the two sizes differ.</p>
| 0 | 2016-10-11T18:57:54Z | [
"python",
"processing",
"pixels",
"loadimage"
] |
Consolidate python dictionary list | 39,984,444 | <p>Original python dictionary list:</p>
<pre><code> [
{"keyword": "nike", "country":"usa"},
{"keyword": "nike", "country":"can"},
{"keyword": "newBalance", "country":"usa"},
{"keyword": "newBalance", "country":"can"}
]
</code></pre>
<p>I would like to consolidate the python dict list and get an output like:</p>
<pre><code> [
{"keyword": "nike", "country":["usa","can"]},
{"keyword": "newBalance", "country":["usa","can"]}
]
</code></pre>
<p>What is the most efficient way to do this?</p>
| -2 | 2016-10-11T18:42:27Z | 39,984,555 | <p>Here are some general pointers. I'm not going to code the whole thing for you. If you post more of what you've tried and specific code you need help with, I'd be happy to help more.</p>
<p>It looks like you're combining only those which have the same value for the "keyword" key. So loop over values of that key and combine them based on that.</p>
<p>To combine a bunch of dictionaries once you've split them up as above, you'll first need to create a new one where "country" maps to an empty list. Then as you consider each of the dictionaries, check if its value for "country" is already in that list. If it's not, <code>append</code> it.</p>
| 0 | 2016-10-11T18:48:55Z | [
"python",
"python-2.7"
] |
Consolidate python dictionary list | 39,984,444 | <p>Original python dictionary list:</p>
<pre><code> [
{"keyword": "nike", "country":"usa"},
{"keyword": "nike", "country":"can"},
{"keyword": "newBalance", "country":"usa"},
{"keyword": "newBalance", "country":"can"}
]
</code></pre>
<p>I would like to consolidate the python dict list and get an output like:</p>
<pre><code> [
{"keyword": "nike", "country":["usa","can"]},
{"keyword": "newBalance", "country":["usa","can"]}
]
</code></pre>
<p>What is the most efficient way to do this?</p>
| -2 | 2016-10-11T18:42:27Z | 39,984,559 | <pre><code>L = [
{"keyword": "nike", "country":"usa"},
{"keyword": "nike", "country":"can"},
{"keyword": "newBalance", "country":"usa"},
{"keyword": "newBalance", "country":"can"}
]
def consolidate(L):
answer = {}
for d in L:
if d['keyword'] not in answer:
            answer[d['keyword']] = set()
        answer[d['keyword']].add(d['country'])
    retval = []
    for k, countries in answer.items():
        retval.append({'keyword': k, 'country': list(countries)})
return retval
</code></pre>
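<p>A slightly more compact sketch of the same idea using <code>collections.defaultdict</code> (using a set means the order of the countries is not guaranteed):</p>
<pre><code>from collections import defaultdict

def consolidate(L):
    groups = defaultdict(set)
    for d in L:
        groups[d['keyword']].add(d['country'])
    return [{'keyword': k, 'country': list(v)} for k, v in groups.items()]

print(consolidate(L))
</code></pre>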
| -2 | 2016-10-11T18:49:02Z | [
"python",
"python-2.7"
] |
How to make a string visible on a webpage using Django form fields | 39,984,508 | <p>I have created a Django page and managed to pass values to it.
I am trying to make a text box which would contain some string.
This string can be edited and saved by the user if needed.</p>
<p>views.py</p>
<pre><code>form = EditProjectForm(project_name=project_name, store_id=store_id, start_date=start_date, end_date=end_date, data=request.POST or None)
context = {'someList': someList ,'form':form}
return render(request, 'editview.html', context)
</code></pre>
<p>forms.py</p>
<pre><code>class EditProjectForm(forms.Form):
def __init__(self, *args, **kwargs):
self.project_name = kwargs.pop('project_name')
super(EditProjectForm, self).__init__(*args, **kwargs)
self.fields['project_name'] = forms.CharField(self.project_name)
</code></pre>
<p>I tried using </p>
<pre><code>forms.CharField(widget=forms.TextInput(attrs={'placeholder': self.project_name }))
</code></pre>
<p>But that does not show up as editable text.</p>
<p>I would like self.project_name to be visible on my page in a text box and be editable.</p>
<p>this is what I would want the output to look like</p>
<p><a href="https://i.stack.imgur.com/Zy3yE.png" rel="nofollow"><img src="https://i.stack.imgur.com/Zy3yE.png" alt="desired output"></a></p>
<p>EDIT 1 :
I used itzmeontv's tip to use 'initial'.
Thanks for the 'initial'<br>
However, it seems to work only when I am not passing arguments when creating the form </p>
<p>For example.
form = EditProjectForm() in my views.py seems to work,
but
form = EditProjectForm(project_name = project_name)
and then setting
initial = self.project_name , does not work.</p>
<p>EDIT 2 :
Using form = EditProjectForm(initial={'project_name': project_name})
seemed to work.
Thank you !!</p>
| 1 | 2016-10-11T18:46:09Z | 39,984,830 | <p>Try this with <code>initial</code></p>
<pre><code>forms.CharField(widget=forms.TextInput(attrs={'placeholder': self.project_name }), initial=self.project_name)
</code></pre>
<p><strong>OR</strong></p>
<pre><code>form = EditProjectForm(#your arguments, initial={'project_name': self.project_name})
</code></pre>
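<p>Tying that back to the <code>__init__</code> from the question, one way (a sketch; the field is assumed to be declared as <code>project_name</code>) is to pop the kwarg and set the field's <code>initial</code>:</p>
<pre><code>from django import forms

class EditProjectForm(forms.Form):
    project_name = forms.CharField()

    def __init__(self, *args, **kwargs):
        project_name = kwargs.pop('project_name', None)
        super(EditProjectForm, self).__init__(*args, **kwargs)
        if project_name is not None:
            self.fields['project_name'].initial = project_name
</code></pre>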
| 1 | 2016-10-11T19:04:59Z | [
"python",
"django",
"django-forms",
"django-views"
] |
Install graphlab-create in Python on Windows 10 | 39,984,524 | <p>All -- the output below suggests graphlab is already installed (or is it?). But help("modules") doesn't show graphlab as one of the installed packages, AND I am unable to run "import graphlab" as it results in "No module named graphlab".</p>
<pre><code>(gl-env) C:\Users>pip install graphlab-create
Requirement already satisfied (use --upgrade to upgrade): graphlab-create in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages
Requirement already satisfied (use --upgrade to upgrade): decorator==4.0.9 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): psclient in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): tornado==4.3 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): requests==2.9.1 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): genson==0.1.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): certifi==2015.04.28 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): jsonschema==2.5.1 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): awscli==1.6.2 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): boto==2.33.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): multipledispatch>=0.4.7 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): sseclient==0.0.8 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): prettytable==0.7.2 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): python-dateutil in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from psclient->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): singledispatch in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from tornado==4.3->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): backports.ssl-match-hostname in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from tornado==4.3->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): backports-abc>=0.4 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from tornado==4.3->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): functools32 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from jsonschema==2.5.1->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): colorama==0.2.5 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): botocore<0.74.0,>=0.73.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): docutils>=0.10 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): bcdoc<0.13.0,>=0.12.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): six>=1.1.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): rsa==3.1.2 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): jmespath==0.5.0 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from botocore<0.74.0,>=0.73.0->awscli==1.6.2->graphlab-create)
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.3 in c:\users\a_sk\anaconda3\envs\gl-env\lib\site-packages (from rsa==3.1.2->awscli==1.6.2->graphlab-create)
</code></pre>
 | 0 | 2016-10-11T18:47:25Z | 40,030,156 | <p>I suggest you use anaconda. If so, then you can just run the following within conda:</p>
<pre><code>pip install --upgrade --no-cache-dir https://get.graphlab.com/GraphLab-Create/2.1/<YOUREMAILHERE>/<YOUR REGISTRATION CODE HERE- Should look like xxxx-yyyy-zzzz.../GraphLab-Create-License.tar.gz
</code></pre>
<p>Once you do that, you can then do</p>
<pre><code>source activate gl-env
jupyter notebook
</code></pre>
<p>The official docs walk you through this <a href="https://turi.com/download/install-graphlab-create-command-line.html" rel="nofollow">here</a>:</p>
| 0 | 2016-10-13T20:15:27Z | [
"python",
"graphlab"
] |
what is the result of ( 4 > + 4 )? | 39,984,569 | <p>I need an explanation of this syntax: does this mean that (+4) is the same as (4)? I have tried many other operands, and it works exactly as if I had disregarded the plus sign before the number.</p>
| -2 | 2016-10-11T18:49:23Z | 39,984,624 | <p>The <code>+</code> in <code>+4</code> is the <a href="https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations" rel="nofollow">unary plus operator</a>:</p>
<blockquote>
<p>The unary <code>+</code> (plus) operator yields its numeric argument unchanged.</p>
</blockquote>
<p>So yes, because <code>4</code> is a number (an <code>int</code>), <code>+4</code> means just the same thing as <code>4</code>, as the operator returns the number unchanged.</p>
<p>The operator exists as a counterpart to the <a href="https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations" rel="nofollow"><code>-</code> unary minus operator</a>:</p>
<pre><code>4 > -4
</code></pre>
<p>Custom classes could override it using the <a href="https://docs.python.org/3/reference/datamodel.html#object.__pos__" rel="nofollow"><code>__pos__()</code> method</a>, making it possible to return custom results.</p>
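<p>Purely as an illustration, a class can make unary plus return whatever it likes:</p>
<pre><code>class Weird:
    def __pos__(self):
        return 42

print(+Weird())   # 42
print(+4)         # 4 -- unary plus leaves plain numbers unchanged
</code></pre>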
| 3 | 2016-10-11T18:52:32Z | [
"python",
"python-3.x"
] |
what is the result of ( 4 > + 4 )? | 39,984,569 | <p>I need an explanation of this syntax: does this mean that (+4) is the same as (4)? I have tried many other operands, and it works exactly as if I had disregarded the plus sign before the number.</p>
| -2 | 2016-10-11T18:49:23Z | 39,984,630 | <p>Yes, they're effectively the same. The unary <code>+</code> operator in <code>+4</code> is applied to the <code>4</code> and <code>4</code> is the result.</p>
| 0 | 2016-10-11T18:53:04Z | [
"python",
"python-3.x"
] |
what is the result of ( 4 > + 4 )? | 39,984,569 | <p>I need an explanation of this syntax: does this mean that (+4) is the same as (4)? I have tried many other operands, and it works exactly as if I had disregarded the plus sign before the number.</p>
| -2 | 2016-10-11T18:49:23Z | 39,984,641 | <p>Would you be confused by </p>
<pre><code>4 > -4
</code></pre>
<p>The only difference between that and</p>
<pre><code>4 > +4
</code></pre>
<p>is a different unary operator </p>
| 4 | 2016-10-11T18:53:40Z | [
"python",
"python-3.x"
] |
what is the result of ( 4 > + 4 )? | 39,984,569 | <p>I need an explanation of this syntax: does this mean that (+4) is the same as (4)? I have tried many other operands, and it works exactly as if I had disregarded the plus sign before the number.</p>
| -2 | 2016-10-11T18:49:23Z | 39,984,675 | <p><em>(In addition to the points pointed out by the other answers ...)</em></p>
<p>The comparison operations in Python <a href="https://docs.python.org/3/reference/expressions.html#comparisons" rel="nofollow">have lower precedence</a> than the <code>positive</code> unary operator (<code>+operand</code>):</p>
<blockquote>
<p>Unlike C, all comparison operations in Python have the same priority,
<em>which is lower than that of any arithmetic, shifting or bitwise
operation</em>.</p>
</blockquote>
<p>This means that the unary plus operator is applied to its operand before the comparison operator is evaluated, so <code>+4</code> reduces to just <code>4</code> before the comparison even starts.</p>
<pre><code>4 > +4
4 > (+4)
4 > 4
</code></pre>
| 2 | 2016-10-11T18:55:56Z | [
"python",
"python-3.x"
] |
what is the result of ( 4 > + 4 )? | 39,984,569 | <p>I need an explanation of this syntax: does this mean that (+4) is the same as (4)? I have tried many other operands, and it works exactly as if I had disregarded the plus sign before the number.</p>
 | -2 | 2016-10-11T18:49:23Z | 39,984,924 | <p>To check if your variables are the same, type them into your interpreter with the == operator between them. If they are the same, this will return True.</p>
<pre><code>>>> 4 == +4
True
</code></pre>
<p>This may seem daft or obvious in this case, but when working with more complex variables it can be more useful.</p>
| 0 | 2016-10-11T19:11:02Z | [
"python",
"python-3.x"
] |
Python code to sum a series incrementally | 39,984,576 | <p>Great site, and I really appreciate all the answers and tips I get from here. I'm trying to calculate the sum of a series of fractions INCREMENTALLY, but can't seem to get the loop-in-a-function right. I have been able to write the program to calculate the total sum, and to calculate the individual fractions in the series, but I need it to display the running sum at each step. The series is 1/2, 2/3, 3/4, 4/5, ...., 20/21, so that it displays:
0.5000
1.1667
...
16.4023
17.3546</p>
<p>This is what I have so far:</p>
<pre><code>def frac(n, d):
a = n / d
return a
def main():
count = 1
while count <= 20:
num = count
den = count +1
ans = frac(num, den)
print(num, " ", format(ans, ".4f"))
count += 1
main()
</code></pre>
<p>But it only gives me this:</p>
<pre><code>1 0.5000
2 0.6667
3 0.7500
4 0.8000
5 0.8333
6 0.8571
7 0.8750
8 0.8889
9 0.9000
10 0.9091
11 0.9167
12 0.9231
13 0.9286
14 0.9333
15 0.9375
16 0.9412
17 0.9444
18 0.9474
19 0.9500
20 0.9524
</code></pre>
| 0 | 2016-10-11T18:49:35Z | 39,984,915 | <p>I don't know python at all, but it seems the step you have missed is to factor in the previous answer in each iteration. So you could do: </p>
<pre><code>def main():
count = 1
ans = 0
while count <= 20:
num = count
den = count +1
ans = ans + frac(num, den)
print(num, " ", format(ans, ".4f"))
count += 1
</code></pre>
| 3 | 2016-10-11T19:10:29Z | [
"python",
"add",
"series"
] |
Python code to sum a series incrementally | 39,984,576 | <p>Great site, and I really appreciate all the answers and tips I get from here. I'm trying to calculate the sum of a series of fractions INCREMENTALLY, but can't seem to get the loop-in-a-function right. I have been able to write the program to calculate the total sum, and to calculate the individual fractions in the series, but I need it to display the running sum at each step. The series is 1/2, 2/3, 3/4, 4/5, ...., 20/21, so that it displays:
0.5000
1.1667
...
16.4023
17.3546</p>
<p>This is what I have so far:</p>
<pre><code>def frac(n, d):
a = n / d
return a
def main():
count = 1
while count <= 20:
num = count
den = count +1
ans = frac(num, den)
print(num, " ", format(ans, ".4f"))
count += 1
main()
</code></pre>
<p>But it only gives me this:</p>
<pre><code>1 0.5000
2 0.6667
3 0.7500
4 0.8000
5 0.8333
6 0.8571
7 0.8750
8 0.8889
9 0.9000
10 0.9091
11 0.9167
12 0.9231
13 0.9286
14 0.9333
15 0.9375
16 0.9412
17 0.9444
18 0.9474
19 0.9500
20 0.9524
</code></pre>
| 0 | 2016-10-11T18:49:35Z | 39,984,984 | <p>From what I can see you're not keeping track of your total sum anywhere, so just add a <code>total</code> variable before your loop and add to that (I also avoid while loops when a for loop will do the trick):</p>
<pre><code>def frac(n, d):
a = n / d
return a
def main():
total = 0
for num in range(1, 21):
ans = frac(num, num+1)
total += ans
print(num, " ", format(ans, ".4f"), 'total=', format(total, ".4f"))
main()
</code></pre>
<p>Output looks like this:</p>
<pre><code>1 0.5000 total= 0.5000
2 0.6667 total= 1.1667
3 0.7500 total= 1.9167
4 0.8000 total= 2.7167
5 0.8333 total= 3.5500
6 0.8571 total= 4.4071
7 0.8750 total= 5.2821
8 0.8889 total= 6.1710
9 0.9000 total= 7.0710
10 0.9091 total= 7.9801
11 0.9167 total= 8.8968
12 0.9231 total= 9.8199
13 0.9286 total= 10.7484
14 0.9333 total= 11.6818
15 0.9375 total= 12.6193
16 0.9412 total= 13.5604
17 0.9444 total= 14.5049
18 0.9474 total= 15.4523
19 0.9500 total= 16.4023
20 0.9524 total= 17.3546
</code></pre>
| 0 | 2016-10-11T19:16:03Z | [
"python",
"add",
"series"
] |
Python code to sum a series incrementally | 39,984,576 | <p>Great site, and I really appreciate all the answers and tips I get from here. I'm trying to calculate the sum of a series of fractions INCREMENTALLY, but can't seem to get the loop-in-a-function right. I have been able to write the program to calculate the total sum, and to calculate the individual fractions in the series, but I need it to display the running sum at each step. The series is 1/2, 2/3, 3/4, 4/5, ...., 20/21, so that it displays:
0.5000
1.1667
...
16.4023
17.3546</p>
<p>This is what I have so far:</p>
<pre><code>def frac(n, d):
a = n / d
return a
def main():
count = 1
while count <= 20:
num = count
den = count +1
ans = frac(num, den)
print(num, " ", format(ans, ".4f"))
count += 1
main()
</code></pre>
<p>But it only gives me this:</p>
<pre><code>1 0.5000
2 0.6667
3 0.7500
4 0.8000
5 0.8333
6 0.8571
7 0.8750
8 0.8889
9 0.9000
10 0.9091
11 0.9167
12 0.9231
13 0.9286
14 0.9333
15 0.9375
16 0.9412
17 0.9444
18 0.9474
19 0.9500
20 0.9524
</code></pre>
| 0 | 2016-10-11T18:49:35Z | 39,985,127 | <p>You can also use <code>itertools.accumulate</code> to generate these values:</p>
<pre><code>from itertools import accumulate
for i, ans in enumerate(accumulate(n/(n+1) for n in range(1, 21)), start=1):
print(str(i).ljust(4), format(ans, '.4f'))
</code></pre>
<p>Output:</p>
<pre><code>1 0.5000
2 1.1667
3 1.9167
4 2.7167
5 3.5500
6 4.4071
7 5.2821
8 6.1710
9 7.0710
10 7.9801
11 8.8968
12 9.8199
13 10.7484
14 11.6818
15 12.6193
16 13.5604
17 14.5049
18 15.4523
19 16.4023
20 17.3546
</code></pre>
| 2 | 2016-10-11T19:25:56Z | [
"python",
"add",
"series"
] |
Python code to sum a series incrementally | 39,984,576 | <p>Great site, and I really appreciate all the answers and tips I get from here. I'm trying to calculate the sum of a series of fractions INCREMENTALLY, but can't seem to get the loop-in-a-function right. I have been able to write the program to calculate the total sum, and to calculate the individual fractions in the series, but I need it to display the running sum at each step. The series is 1/2, 2/3, 3/4, 4/5, ...., 20/21, so that it displays:
0.5000
1.1667
...
16.4023
17.3546</p>
<p>This is what I have so far:</p>
<pre><code>def frac(n, d):
a = n / d
return a
def main():
count = 1
while count <= 20:
num = count
den = count +1
ans = frac(num, den)
print(num, " ", format(ans, ".4f"))
count += 1
main()
</code></pre>
<p>But it only gives me this:</p>
<pre><code>1 0.5000
2 0.6667
3 0.7500
4 0.8000
5 0.8333
6 0.8571
7 0.8750
8 0.8889
9 0.9000
10 0.9091
11 0.9167
12 0.9231
13 0.9286
14 0.9333
15 0.9375
16 0.9412
17 0.9444
18 0.9474
19 0.9500
20 0.9524
</code></pre>
| 0 | 2016-10-11T18:49:35Z | 39,985,219 | <p>As many pointed out already, you missed out the accumulation of partial sums in your loop. Now, geeking out a little bit, and using list comprehensions, the same can be achieved like this:</p>
<pre><code>fractions = [ float(x)/(x+1) for x in range(1,21) ]
cumulatives = [sum(fractions[0:i+1]) for i,j in enumerate(fractions)]
for i,item in enumerate(cumulatives):
print '{} {:.4f}'.format(i+1, item)
</code></pre>
<p>This is in python 2.7. Python 3.x <code>itertools</code> has function <code>accumulate</code>.
See it in action here: <a href="https://eval.in/658781" rel="nofollow">https://eval.in/658781</a></p>
<pre><code>1 0.5000
2 1.1667
3 1.9167
4 2.7167
5 3.5500
6 4.4071
7 5.2821
8 6.1710
9 7.0710
10 7.9801
11 8.8968
12 9.8199
13 10.7484
14 11.6818
15 12.6193
16 13.5604
17 14.5049
18 15.4523
19 16.4023
20 17.3546
</code></pre>
| 0 | 2016-10-11T19:31:52Z | [
"python",
"add",
"series"
] |
pip install python-dateutil does not work | 39,984,598 | <p>The command:</p>
<pre><code>pip install python-dateutil
</code></pre>
<p>It gives this error:</p>
<pre><code>Collecting python-dateutil
Could not find a version that satisfies the requirement python-dateutil (from versions: )
No matching distribution found for python-dateutil
</code></pre>
<p>But easy_install python-dateutil works fine.... </p>
| 0 | 2016-10-11T18:51:01Z | 39,987,004 | <p>I had an error in ~/.pip/pip.conf </p>
<p>It should have looked like this:</p>
<pre><code>[global]
index-url = https://pypi.python.org/simple
</code></pre>
<p>but I had it pointing to a different index.</p>
| 0 | 2016-10-11T21:25:20Z | [
"python",
"pip",
"easy-install",
"python-dateutil"
] |
Automated Direct Message Response using Tweepy | 39,984,654 | <p>I currently am making use of the tweepy package in python for a DM listener. I wish to send a reply to the sender on reception of their message. I have the following:</p>
<pre><code>class StdOutListener( StreamListener ):
def __init__( self ):
self.tweetCount = 0
def on_connect( self ):
print("Connection established!!")
def on_disconnect( self, notice ):
print("Connection lost!! : ", notice)
def on_data( self, status ):
status = str(status)
try:
json_acceptable_string = status.replace('\\','')
#string to dict
status=json.loads(json_acceptable_string)
if 'direct_message' in status.keys():
print '\n'
print status[u'direct_message'][u'sender_screen_name'] +' sent: '+ status[u'direct_message'][u'text']
message=str(status[u'direct_message'][u'text'])
api.send_direct_message(screen_name=str(status[u'direct_message'][u'sender_screen_name']),text='Out of office now - will respond to you asap')
print 'auto response submitted'
else:
#not direct message flow
pass
except:
#not important flows - couldn't convert to json/not correct flow in stream
pass
return True
def main():
global api
try:
auth = OAuthHandler(consumer_key, consumer_secret)
auth.secure = True
auth.set_access_token(access_token, access_token_secret)
api = API(auth)
print(api.me().name)
stream = Stream(auth, StdOutListener())
stream.userstream()
except BaseException as e:
print("Error in main()", e)
if __name__ == '__main__':
main()
</code></pre>
<p>For some reason, I can see the print statement of the user and what they sent but when it gets to the send_direct_message method it hangs.
Oddly enough, if I message myself, I receive a barrage of messages as it loops. Is this because it's on_data()? How can I make this work for other senders?</p>
<p><strong>UPDATE</strong>: Resolved - regenerated tokens and added a conditional to check for the sender, essentially blacklisting myself.</p>
 | 1 | 2016-10-11T18:54:40Z | 40,022,648 | <p>UPDATE: Resolved - I regenerated the tokens and added a conditional to check for the sender, essentially blacklisting myself.</p>
| 0 | 2016-10-13T13:39:46Z | [
"python",
"twitter",
"wrapper",
"tweepy"
] |
Django with mod_wsgi: error importing sqlite3 | 39,984,681 | <p>I made a very simple project with django using python 3.4, django 1.10.2, and virtualenv all on FreeBSD. I cannot get mod_wsgi to work for the life of me, and I have done almost nothing beyond building the project and running manage.py migrate. It seems to be having a problem importing sqlite3 but in the virtualenv I can run python and import sqlite3 and _sqlite3.</p>
<p>I get the following:</p>
<pre><code>mod_wsgi (pid=15765): Target WSGI script '/server/apache/partner/partner/wsgi.py' cannot be loaded as Python module.
mod_wsgi (pid=15765): Exception occurred processing WSGI script '/server/apache/partner/partner/wsgi.py'.
Traceback (most recent call last):
File "/server/apache/partner/partner/wsgi.py", line 20, in <module>
application = get_wsgi_application()
File "/server/apache/partner-env/lib/python3.4/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
django.setup(set_prefix=False)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/apps/config.py", line 199, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/contrib/auth/models.py", line 4, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/server/apache/partner-env/lib/python3.4/site-packages/django/contrib/auth/base_user.py", line 52, in <module>
class AbstractBaseUser(models.Model):
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/models/base.py", line 119, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/models/base.py", line 316, in add_to_class
value.contribute_to_class(cls, name)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/models/options.py", line 214, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/__init__.py", line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/utils.py", line 211, in __getitem__
backend = load_backend(db['ENGINE'])
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/utils.py", line 115, in load_backend
return import_module('%s.base' % backend_name)
File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/server/apache/partner-env/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 39, in <module>
raise ImproperlyConfigured("Error loading either pysqlite2 or sqlite3 modules (tried in that order): %s" % exc)
ImproperlyConfigured: Error loading either pysqlite2 or sqlite3 modules (tried in that order): No module named _sqlite3
</code></pre>
<p>wsgi.py:</p>
<pre><code>import os
from sys import path
from django.core.wsgi import get_wsgi_application
add_path = '/server/apache/partner'
if add_path not in path:
path.append(add_path)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "partner.settings")
application = get_wsgi_application()
</code></pre>
<p>-gns</p>
| 0 | 2016-10-11T18:56:18Z | 40,005,548 | <p>Thanks to @Graham Dumpleton for helping me in the comments. I had compiled mod_wsgi while python 2.7 was installed and I thought the virtualenv was enough for python 3.4, but I needed to remove python 2 and recompile mod_wsgi.</p>
<p>-gns</p>
| 0 | 2016-10-12T18:12:06Z | [
"python",
"django",
"sqlite",
"sqlite3",
"mod-wsgi"
] |
Run skipped python tests | 39,984,865 | <p>I would like to run all unittests in a module, including the skipped ones, from the command line, without using a test runner like <code>nose</code> etc.
Currently I have <code>python3 -m unittest discover</code>, but I don't see any option to override skipped tests. Is there a command for this?</p>
| 1 | 2016-10-11T19:07:02Z | 39,999,427 | <p>Looking at the <a href="https://hg.python.org/cpython/file/3.5/Lib/unittest/main.py#l48" rel="nofollow">code</a> for the command line utility there doesn't seem to be a sane way to avoid the skipping. A nasty workaround would be to monkey-patch <code>skip</code> before discovering tests:</p>
<pre><code>python -c "import unittest as u; u.skip = lambda r: lambda t: t; u.main(None, argv=['', 'discover'])"
</code></pre>
<p>(Possibly also patch <code>unittest.case.skip</code>; this is not at all guaranteed to work as it relies on how the decorator is usually used, but you'll see whether anything still has been skipped.)</p>
| 0 | 2016-10-12T13:08:19Z | [
"python",
"python-unittest"
] |
How to sort the occurrences in descending order (alphabetically in case of ties) in this program | 39,984,919 | <pre><code>>>> words=input('enter your sensence:')
enter your sensence:it was the best of times it was the worst of times it was the age of wisdom it was the age of foolishnes
>>> wordcount={}
>>> for word in words.split():
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
>>> print(word, wordcount)
foolishnes {'age': 2, 'of': 4, 'it': 4, 'wisdom': 1, 'was': 4, 'the': 4, 'worst': 1, 'times': 2, 'foolishnes': 1, 'best': 1}
</code></pre>
| -1 | 2016-10-11T19:10:34Z | 39,985,020 | <p>You already have your wordcounts. Here's how you would print them out in sorted order:</p>
<pre><code>def printout(wordcount):
    for k in sorted(wordcount, key=lambda k: (-wordcount[k], k)):  # count descending, ties alphabetical
print(k, wordcount[k])
</code></pre>
| 0 | 2016-10-11T19:18:25Z | [
"python"
] |
Python 2.7 datetime object returning a value of 60 seconds | 39,984,922 | <p>I have the following bit of code I use to generate timestamps from a lot of instruments. The data gets logged into an initial SQLite database when it is read (w/o milliseconds), then logged into a second database when it's transmitted (w. milliseconds). After several days of data capture, I find that there are a couple times per night where I end up with a logged time that ends with 60 seconds (vs 00 or 59). </p>
<p>I understand that Python has some support for leap seconds, but I know that there weren't any during the nights I was recording data. Also, datetime doesn't support seconds greater than 59. Anyway, I don't know what's going on, and I haven't been able to reproduce the problem by hand. I was hoping someone else might have seen this behavior before.</p>
<p>So, as an example, in my first log I have the entry 20160823043460, and in my transmit log I have 20160823043500.01.</p>
<p>Here's my timestamp code. (Yes, I know it's odd that everything doesn't get the same timestamp, but I didn't design the spec, I'm just coding to it)</p>
<pre><code>def makeTimeStamp(time=None, miliseconds=False):
timeStamp = ""
if time is None and miliseconds is False:
timeStamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
#time format: YYYY MM DD HH MM SS as one string
elif time is None and miliseconds is True:
timeStamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S.%f")
else:
try:
timeStamp = time.strftime("%Y%m%d%H%M%S")
except AttributeError:
print "error: attempted to make a timestamp from a " + type(time)
return timeStamp
</code></pre>
| 1 | 2016-10-11T19:10:56Z | 39,985,166 | <p>Check python docs for time library, <a href="https://docs.python.org/2/library/time.html#time.strftime" rel="nofollow" title="time">here</a>.</p>
<p>In the notes under <code>strftime</code> (note 2), it says that the range for seconds is really [0, 61], to allow for leap seconds.</p>
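<p>A small illustration: the formatting layer itself accepts a seconds value of 60 (the leap-second slot), so a timestamp like the one in your log is at least representable:</p>
<pre><code>import time

# tm_sec=60 is inside the documented 0..61 range, so strftime formats it happily
t = (2016, 8, 23, 4, 34, 60, 1, 236, 0)
print(time.strftime("%Y%m%d%H%M%S", t))   # '20160823043460' on most platforms
</code></pre>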
| -1 | 2016-10-11T19:28:22Z | [
"python",
"sqlite",
"datetime"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,984,973 | <p>Not sure off the top of my head whether there's a faster solution, but the following shouldn't be too bad:</p>
<pre><code>lower = sentence.lower()
for sub in list_of_strings:
    if sub.lower() in lower:
print sub
</code></pre>
<p>I've converted both the sentence and the list to lowercase since you indicated by your example that you don't care about case. This will allow "Green" to match against "green", for example.</p>
| 0 | 2016-10-11T19:15:03Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,984,976 | <p>I would turn your <code>list</code> into a <code>set</code> to improve performance. Then, you can do this:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
set_of_strings = set(s.strip().lower() for s in list_of_strings)
sentence = 'Official Trailer | Green is gold | Releasing Tomorrow'
parts = [i.strip() for i in sentence.split("|")]
for part in parts:
if part.lower() in set_of_strings:
print(part, "is a movie name")
</code></pre>
| 0 | 2016-10-11T19:15:14Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,984,980 | <pre><code>for s in list_of_strings:
if s in sentence:
print 'found it!'
</code></pre>
<p>Your example sentence has a capital G in <code>Green is gold</code>, but the list of strings item has a lowercase g.</p>
| 0 | 2016-10-11T19:15:47Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,984,989 | <p>This solution will take care of all caps, spaces, tabs cases:</p>
<pre><code>for part in [s.lower().strip() for s in sentence.split(' | ')]:
    if part in [s.lower().strip() for s in list_of_strings]:
        print(part)
</code></pre>
| 0 | 2016-10-11T19:16:25Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,985,026 | <p>Its a slight modification of standard problem of finding occurrences of set of words in a given input text. This problem can be efficiently solved by <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm" rel="nofollow">Aho-Corasick</a> Algorithm. You can modify the source codes available for the algorithm to match you needs.<br>
Though sub-string functions, as suggested in other answers, can help, they only work well on small inputs. For larger input strings you will need a linear-time algorithm.</p>
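<p>For illustration, a minimal sketch using the <code>pyahocorasick</code> package (an assumption on my part; any Aho-Corasick implementation will do, install with <code>pip install pyahocorasick</code>):</p>
<pre><code>import ahocorasick

list_of_strings = ['20th century women', 'green is gold', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'

A = ahocorasick.Automaton()
for idx, title in enumerate(list_of_strings):
    A.add_word(title.lower(), (idx, title))
A.make_automaton()

# one linear pass over the sentence finds every known title it contains
for end_index, (idx, title) in A.iter(sentence.lower()):
    print(title)   # 'green is gold'
</code></pre>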
| 2 | 2016-10-11T19:18:49Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,985,171 | <p>Try iterating over the list of strings and seeing if one of them is in the sentence. If it is, then return its index from the list.</p>
<pre><code>for name in list_of_strings:
if name in sentence:
print list_of_strings.index(name)
</code></pre>
<p>Note you may want to analyse all strings (in list, and sentence) as lowercase (using the <code>.lower()</code> method) since capitalisation may be different between the two.</p>
| 0 | 2016-10-11T19:28:42Z | [
"python",
"algorithm",
"text"
] |
Text identification | 39,984,928 | <p>Let's say I have a list of some strings (movie names in my case) and now I have a new sentence which contains one of the string from the list of strings. How do I find that which string does the sentence has?
For eg:</p>
<pre><code>list_of_strings = ['20th century women', 'green is gold ', 'fire at sea']
sentence = 'Official Trailer | Green is gold - Releasing Tomorrow'
</code></pre>
<p>For above case, solution should be able to find that <em>sentence</em> contains <em>green is gold</em>.
Please suggest which algorithm are available to solve this issue. Implementation/Library in Python will also work. </p>
<blockquote>
<p>Sentences might contain little different spellings.</p>
</blockquote>
<p>List of strings has 10000-15000 strings.</p>
| 0 | 2016-10-11T19:11:42Z | 39,985,489 | <p>As most of the answers here focus on string search part, I will consider the other interesting part of the problem, i.e. Spell Error. </p>
<p>The spelling-error case is interesting and very practical with real data.</p>
<p>To deal with it, you can have a look at following metrics : </p>
<ol>
<li><p><a href="https://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow">Levenshtein distance</a> : Its a string metric to measure the similarity between two strings. Its basically the min. number of single character edits(insertion, deletion, replace etc), u can do to transform one string to another.</p>
<p>For ex : </p>
<p><code>"green in gold", "grren in gold" : Distane = 1 // replace r by e</code></p>
<p>Python package : <a href="https://pypi.python.org/pypi/python-Levenshtein/" rel="nofollow">Levenstein Distance</a></p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Soundex" rel="nofollow">Soundex :</a> Generally spelling problems are solved by using some variation of Soundex Algorithm. Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling. (Source Wiki )</p>
<p>For ex : Soundex(Clinton) = Soundex(Clenton)</p>
<p>Python library : <a href="https://pypi.python.org/pypi/Fuzzy" rel="nofollow">Fuzzy</a> </p></li>
</ol>
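<p>A tiny sketch of the edit-distance check mentioned above (assuming the <code>python-Levenshtein</code> package is installed):</p>
<pre><code>import Levenshtein   # pip install python-Levenshtein

print(Levenshtein.distance('green is gold', 'grren is gold'))   # 1

# pick the closest known title for a possibly misspelled candidate
candidate = 'Green is Gold'
best = min(list_of_strings,
           key=lambda t: Levenshtein.distance(t.strip().lower(), candidate.lower()))
print(best)
</code></pre>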
<p>I hope it helps. </p>
| 1 | 2016-10-11T19:45:38Z | [
"python",
"algorithm",
"text"
] |
How to separate ethernet package to 32 bits per pice | 39,985,143 | <p>I am trying to write some data to some shared memory, and read from other side. but for some other reason, I can only write 32 bits(byte? not sure, should be bits)</p>
<p>this is for a test purpose, the package may just a simple ping package. how I can separate a package or object to many different pics and regroup them together after read.</p>
<p>if python could do this, that will be best. any idea would help. Thanks</p>
| -2 | 2016-10-11T19:26:39Z | 39,985,170 | <p>Python struct package is your friend to handle this
<a href="https://docs.python.org/2/library/struct.html" rel="nofollow">https://docs.python.org/2/library/struct.html</a></p>
| 0 | 2016-10-11T19:28:42Z | [
"python",
"shared-memory"
] |
Python - Pandas Combining parts of multiple files | 39,985,151 | <p>Have a list of 200 or so files in a folder. Each has the same amount of columns but there can be some variation in the naming. For instance, i can have Global ID or Global id or Global Id. Is there a way to control for case in pandas column names so that it doesnt matter what it equals? Currently it will get through the first 15 or so files out of 200 and will error because it doesnt find Global ID.</p>
<p>Caveat that I'm a beginner and still learning.</p>
<pre><code>import pandas as pd
import glob
with open('test99.txt' , 'a') as out:
list_of_files = glob.glob('M:\AD HOC Docs\Client\Blinded\*')
for file_name in list_of_files:
df = pd.read_table(file_name, low_memory=False)
df['Client'] = file_name.split("_")[2].strip()
Final = df[['Client','ClientID','Global ID','Internal ID','campaign type','engagement type', 'file_name']]
Final.to_csv(out,index=False)
</code></pre>
| 2 | 2016-10-11T19:26:58Z | 39,985,204 | <p>Use <code>header=None, names=[list of column names you want to use]</code> as additional argument to <code>read_table</code>to ignore the header row and to get consistent names.</p>
| 2 | 2016-10-11T19:30:36Z | [
"python",
"pandas"
] |
Interest Calculator In Python | 39,985,231 | <p><strong>Interest Calculator</strong> Let the user calculate the amount of money they will have in the bank after their interest has compounded for a certain number of years. </p>
<p>Note: A = P(1+r)^t where A = total amount, P = principal, r = rate, and t = time.</p>
<p>This is what I tried: </p>
<pre><code>principal = input("How much money do you currently have in the bank?")
rate = input("What is your interest rate?")
time = input("Over how many years is the interest compounded?")
actual_principal = float(principal)
actual_rate = float(rate)
actual_time = int(time)
#TODO: Calculate the total amount and print the result
A = (actual_principal(1 + actual_rate) ^ actual_time)
print(A)
</code></pre>
<p>I used actual_principal, actual_rate, and actual_time to convert the strings into (exact) integers...I used float instead of int because float will maintain an input with decimals, right?</p>
<p>But I got an error message on line 9</p>
<pre><code>TypeError: 'float' object is not callable on line 9
</code></pre>
<p>I'm assuming that the 'float' object refers to either actual_principal or actual_rate, or both, since those are the two values that I converted. Why, then, aren't they callable? How can I fix this program to calculate interest correctly? Thank you! </p>
<p><strong>EDIT</strong> </p>
<p>Learning that the power function in python is ** and that implied multiplication operators don't exist in python syntax, I reworked my equation to look like this: </p>
<pre><code>final_actual_rate = actual_rate + 1
B = actual_principal * final_actual_rate
A = B ** actual_time
print(A)
</code></pre>
<p>When entering 13.58 for "How much money do you currently have in the bank?", 0.3 for "What is your interest rate?", and 5 for "Over how many years is the interest compounded?", I got the answer 1714808.43561 - so the calculator works! </p>
<p><strong>EDIT 2</strong></p>
<p>Replacing my way with this equation is the better / cleaner way to create the interest calculator - both will get you the same result: </p>
<pre><code>A = math.pow(actual_principal*(1 + actual_rate) , actual_time)
print(A)
</code></pre>
| -1 | 2016-10-11T19:32:32Z | 39,985,355 | <pre><code>import math
principal = input("How much money do you currently have in the bank?")
rate = input("What is your interest rate?")
time = input("Over how many years is the interest compounded?")
actual_principal = float(principal)
actual_rate = float(rate)
actual_time = int(time)
#TODO: Calculate the total amount and print the result
A = math.pow(actual_principal*(1 + actual_rate) , actual_time)
print(A)
</code></pre>
<p>A few things wrong with what you have. <code>^</code> is not what you think it is in Python. <code>^</code> is an XOR binary operator not power. If you want power you use <a href="https://docs.python.org/3/library/math.html#math.pow" rel="nofollow">math.pow() module.</a></p>
<p>Also using <code>action_principal(1 + actual_rate)</code> is wrong. You are trying to call the function <code>action_principal</code> with the argument <code>1 + actual_rate</code> instead of multiplying it. Try <code>*</code> instead. </p>
| 1 | 2016-10-11T19:39:04Z | [
"python",
"integer",
"calculator"
] |
how to loop over multiple lists in python | 39,985,271 | <p>Below, set1, set2, set3 are lists with len(setn) =len(index). I want to loop over each of these lists (setn) as follows, </p>
<pre><code>index = range(10)
set1 = range(10,20)
set2 = range(30,40)
set3 = range(40,50)
listset = [set1, set2, set3]
for i in listset:
for k, j in zip(index, i):
print k, j
Result:
0 s
1 e
2 t
3 1
0 s
1 e
2 t
3 2
0 s
1 e
2 t
3 3
</code></pre>
<p>How can I get a result that prints the each element of "index, set1" (as given below),followed by "index, set2", followed by "index, set3". </p>
<pre><code>0 10
1 11
2 12
3 13
4 14
5 15
6 16
7 17
8 18
9 19
and so on...
</code></pre>
| 0 | 2016-10-11T19:34:54Z | 39,985,367 | <p>You can concatenate set1, 2 and 3 together, then use itertools.cycle(index) and zip the resulting two things together:
<code>zip(itertools.cycle(index), set1 + set2 + set3)</code></p>
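<p>Put together with the data from the question (Python 2, where <code>range()</code> returns lists, so <code>+</code> concatenates them):</p>
<pre><code>import itertools

index = range(10)
set1 = range(10, 20)
set2 = range(30, 40)
set3 = range(40, 50)

for k, j in zip(itertools.cycle(index), set1 + set2 + set3):
    print k, j
</code></pre>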
| 1 | 2016-10-11T19:39:32Z | [
"python",
"list",
"loops"
] |
how to loop over multiple lists in python | 39,985,271 | <p>Below, set1, set2, set3 are lists with len(setn) =len(index). I want to loop over each of these lists (setn) as follows, </p>
<pre><code>index = range(10)
set1 = range(10,20)
set2 = range(30,40)
set3 = range(40,50)
listset = [set1, set2, set3]
for i in listset:
for k, j in zip(index, i):
print k, j
Result:
0 s
1 e
2 t
3 1
0 s
1 e
2 t
3 2
0 s
1 e
2 t
3 3
</code></pre>
<p>How can I get a result that prints the each element of "index, set1" (as given below),followed by "index, set2", followed by "index, set3". </p>
<pre><code>0 10
1 11
2 12
3 13
4 14
5 15
6 16
7 17
8 18
9 19
and so on...
</code></pre>
| 0 | 2016-10-11T19:34:54Z | 39,985,376 | <p>You want to combine <code>enumerate</code> and <code>itertools.chain</code></p>
<pre><code>from itertools import chain
s1 = range(10)
s2 = range(10, 20)
s3 = range(20, 30)
c = chain(enumerate(s1), enumerate(s2), enumerate(s3))
for i, n in c:
print(str(i).ljust(4), n)
</code></pre>
| 1 | 2016-10-11T19:40:10Z | [
"python",
"list",
"loops"
] |
recursively iterate nested python dictionary | 39,985,300 | <p>I have nested python dictionary like this.</p>
<pre><code>d = {}
d[a] = b
d[c] = {1:2, 2:3}
</code></pre>
<p>I am trying to recursively convert the nested dictionary into an xml format since there can be more nested dictionary inside such as <code>d[e] = {1:{2:3}, 3:4}</code>. My desired XML format is like this</p>
<pre><code><root>
<a>b</a>
<c>
<1>2</1>
<2>3</3>
</c>
</root>
</code></pre>
<p>I have so far this python code to handle nested xml using lxml library. But it doesn't give me the desired output. </p>
<pre><code>def encode(node, Dict):
if len(Dict) == 0:
return node
for kee, val in Dict.items():
subNode = etree.SubElement(node, kee)
del msgDict[kee]
if not isinstance(val, dict):
subNode.text = str(val)
else:
return encode(subNode, val)
</code></pre>
<p>Any help is appreciated. Thank you. </p>
 | 0 | 2016-10-11T19:36:41Z | 39,986,200 | <p>The way you return from the recursive call to encode does not look correct. Maybe this helps. For simplicity I just append stuff to a list (called <code>l</code>; note that a mutable default argument persists across calls, so pass in a fresh list each time in real code). Instead of the list, you should do your <code>etree.SubElement(...)</code>.</p>
<pre><code>def encode(D, l=[]):
for k, v in D.items():
if isinstance(v, dict):
l2 = [k]
encode(v, l2)
l.append(l2)
else:
l.append([k, v])
</code></pre>
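<p>Translated to <code>lxml</code>, a sketch of the same recursion (assuming the keys are valid XML tag names; purely numeric keys such as <code>1</code> would be rejected by lxml, so they would need a prefix or similar):</p>
<pre><code>from lxml import etree

def encode(node, d):
    for key, val in d.items():
        sub = etree.SubElement(node, str(key))
        if isinstance(val, dict):
            encode(sub, val)        # recurse, but keep looping over the siblings
        else:
            sub.text = str(val)
    return node

root = encode(etree.Element('root'), {'a': 'b', 'c': {'x': 2, 'y': 3}})
print(etree.tostring(root, pretty_print=True))
</code></pre>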
| 0 | 2016-10-11T20:30:44Z | [
"python",
"xml",
"dictionary",
"recursion",
"lxml"
] |
recursively iterate nested python dictionary | 39,985,300 | <p>I have nested python dictionary like this.</p>
<pre><code>d = {}
d[a] = b
d[c] = {1:2, 2:3}
</code></pre>
<p>I am trying to recursively convert the nested dictionary into an xml format since there can be more nested dictionary inside such as <code>d[e] = {1:{2:3}, 3:4}</code>. My desired XML format is like this</p>
<pre><code><root>
<a>b</a>
<c>
<1>2</1>
<2>3</3>
</c>
</root>
</code></pre>
<p>I have so far this python code to handle nested xml using lxml library. But it doesn't give me the desired output. </p>
<pre><code>def encode(node, Dict):
if len(Dict) == 0:
return node
for kee, val in Dict.items():
subNode = etree.SubElement(node, kee)
del msgDict[kee]
if not isinstance(val, dict):
subNode.text = str(val)
else:
return encode(subNode, val)
</code></pre>
<p>Any help is appreciated. Thank you. </p>
 | 0 | 2016-10-11T19:36:41Z | 40,142,138 | <p>I found the bug in my code: by returning the recursive call (<code>return encode(subNode, val)</code>) inside the loop, the function exits after the first nested dictionary and never gets back to the original loop. Calling it without returning, e.g. saving it in a variable with <code>element = encode(subNode, val)</code>, solves the problem.</p>
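<p>For reference, the question's function with that one change applied (an untested sketch; the stray <code>del msgDict[kee]</code> from the original is dropped and a final <code>return node</code> is added so the caller gets the tree back):</p>
<pre><code>def encode(node, Dict):
    for kee, val in Dict.items():
        subNode = etree.SubElement(node, kee)
        if not isinstance(val, dict):
            subNode.text = str(val)
        else:
            element = encode(subNode, val)   # recurse without returning early
    return node
</code></pre>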
| 0 | 2016-10-19T21:54:52Z | [
"python",
"xml",
"dictionary",
"recursion",
"lxml"
] |
CountTokenizing a field, turning into columns | 39,985,351 | <p>I'm working with data that look something like this:</p>
<pre><code>ID PATH GROUP
11937 MM-YT-UJ-OO GT
11938 YT-RY-LM TQ
11939 XX-XX-OT DX
</code></pre>
<p>I'd like to tokenize the PATH column into n-grams and then one-hot encode those into their own columns so I'd end up with something like:</p>
<pre><code>ID GROUP MM YT UJ OO RY LM XX OT MM-YT YT-UH ...
11937 GT 1 1 1 1 0 0 0 0 1 1
</code></pre>
<p>I could also use counted tokens rather than one-hot, so 11939 would have a 2 in the XX column instead of a 1, but I can work with either.</p>
<p>I can tokenize the column quite easily with scikitlearn CountVectorizer, but then I have to cbind the <code>ID</code> and <code>GROUP</code> fields. Is there a standard way to do this or a best practice that anyone has discovered?</p>
| 1 | 2016-10-11T19:38:53Z | 39,985,571 | <p>A solution:</p>
<pre><code>df.set_index(['ID', 'GROUP'], inplace=True)
pd.get_dummies(df.PATH.str.split('-', expand=True).stack())\
.groupby(level=[0,1]).sum().reset_index()
</code></pre>
<hr>
<p>Isolate the ID and GROUP columns as index. Then convert the string to cell items</p>
<pre><code>df.PATH.str.split('-', expand=True)
Out[37]:
0 1 2 3
ID GROUP
11937 GT MM YT UJ OO
11938 TQ YT RY LM None
11939 DX XX XX OT None
</code></pre>
<p>Get them into a single column of data</p>
<pre><code>df.PATH.str.split('-', expand=True).stack()
Out[38]:
ID GROUP
11937 GT 0 MM
1 YT
2 UJ
3 OO
11938 TQ 0 YT
1 RY
2 LM
11939 DX 0 XX
1 XX
2 OT
</code></pre>
<p><code>get_dummies</code> bring the counter as columns spread accross rows</p>
<pre><code>pd.get_dummies(df.PATH.str.split('-', expand=True).stack())
Out[39]:
LM MM OO OT RY UJ XX YT
ID GROUP
11937 GT 0 0 1 0 0 0 0 0 0
1 0 0 0 0 0 0 0 1
2 0 0 0 0 0 1 0 0
3 0 0 1 0 0 0 0 0
11938 TQ 0 0 0 0 0 0 0 0 1
1 0 0 0 0 1 0 0 0
2 1 0 0 0 0 0 0 0
11939 DX 0 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 1 0
2 0 0 0 1 0 0 0 0
</code></pre>
<p>Group by the data per ID, GROUP (levels 0 and 1 in the index) to sum up the rows together and have one line per tuple. And finally reset the index to get ID and GROUP column back as regular columns.</p>
| 1 | 2016-10-11T19:51:28Z | [
"python",
"pandas",
"numpy",
"scikit-learn"
] |
CountTokenizing a field, turning into columns | 39,985,351 | <p>I'm working with data that look something like this:</p>
<pre><code>ID PATH GROUP
11937 MM-YT-UJ-OO GT
11938 YT-RY-LM TQ
11939 XX-XX-OT DX
</code></pre>
<p>I'd like to tokenize the PATH column into n-grams and then one-hot encode those into their own columns so I'd end up with something like:</p>
<pre><code>ID GROUP MM YT UJ OO RY LM XX OT MM-YT YT-UH ...
11937 GT 1 1 1 1 0 0 0 0 1 1
</code></pre>
<p>I could also use counted tokens rather than one-hot, so 11939 would have a 2 in the XX column instead of a 1, but I can work with either.</p>
<p>I can tokenize the column quite easily with scikitlearn CountVectorizer, but then I have to cbind the <code>ID</code> and <code>GROUP</code> fields. Is there a standard way to do this or a best practice that anyone has discovered?</p>
| 1 | 2016-10-11T19:38:53Z | 39,985,600 | <p>Maybe you can try something like that.</p>
<pre><code># Test data
df = pd.DataFrame({'GROUP': ['GT', 'TQ', 'DX'],
'ID': [11937, 11938, 11939],
'PATH': ['MM-YT-UJ-OO', 'YT-RY-LM', 'XX-XX-OT']})
# Expanding data and creating on column by token
tmp = pd.concat([df.loc[:,['GROUP', 'ID']],
df['PATH'].str.split('-', expand=True)], axis=1)
# Converting wide to long format
tmp = pd.melt(tmp, id_vars=['ID', 'GROUP'])
# Now grouping and counting
tmp.groupby(['ID', 'GROUP', 'value']).count().unstack().fillna(0)
# variable
# value LM MM OO OT RY UJ XX YT
# ID GROUP
# 11937 GT 0.0 1.0 1.0 0.0 0.0 1.0 0.0 1.0
# 11938 TQ 1.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0
# 11939 DX 0.0 0.0 0.0 1.0 0.0 0.0 2.0 0.0
</code></pre>
| 0 | 2016-10-11T19:52:52Z | [
"python",
"pandas",
"numpy",
"scikit-learn"
] |
Raspberry Pi: Python try/except loop | 39,985,358 | <p>I've just picked up my first Raspberry Pi and 2 channel relay. I'm trying to learn how to code in Python so I figured a Pi to play with would be a good starting point. I have a question regarding the timing of my relays via the GPIO pins. </p>
<p>Firstly though, I'm using Raspbian Pixel and am editing my scripts via Gedit. Please see below for the script I have so far:</p>
<pre><code># !/usr/bin/python
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
# init list with pin numbers
pinList = [14]
# loop through pins and set mode and state to 'high'
for i in pinList:
GPIO.setup(i, GPIO.OUT)
GPIO.output(i, GPIO.HIGH)
# time to sleep between operations in the main loop
SleepTimeL = 60 #1 minute
# main loop
try:
GPIO.output(14, GPIO.LOW)
print "open"
time.sleep(SleepTimeL);
GPIO.cleanup()
#Reset GPIO settings
GPIO.cleanup()
# end program cleanly
except KeyboardInterrupt:
print "done"
</code></pre>
<p>Now that works pretty well, it opens the relay attached to pin 14 no problem. It cycles through 60 seconds as requested and then ends the program. Once the program has ended, the GPIO settings are reset and the relay closes, but that's the end of the program and it's where my problem starts.</p>
<p>What I want this script to do is open the relay for 60 seconds, then close it for 180 seconds. Once it reaches 180 seconds it must re-run the 'try' statement and open the relay for another 60 seconds and so on. In short, I would like an infinite loop that can only be interrupted by cancelling the script from running. I am unsure of how to tell Python to close the relay for 180 seconds and then re-run the try statement, or how to make it an infinite loop for that matter.</p>
<p>I'd really appreciate some input from the community. Any feedback or assistance is greatly appreciated. Thanks All.</p>
| 0 | 2016-10-11T19:39:09Z | 39,985,449 | <p>Just use a <code>while True</code> loop, something like:</p>
<pre><code># main loop
while True:
GPIO.output(14, GPIO.LOW)
print "open"
time.sleep(SleepTimeL);
    GPIO.output(14, GPIO.HIGH)     # close the relay again; don't call GPIO.cleanup() here,
    time.sleep(3 * SleepTimeL)     # or the next GPIO.output() would fail. Stay closed 180 s.
print "done"
</code></pre>
| 0 | 2016-10-11T19:43:21Z | [
"python",
"exception",
"try-catch",
"gpio"
] |
Raspberry Pi: Python try/except loop | 39,985,358 | <p>I've just picked up my first Raspberry Pi and 2 channel relay. I'm trying to learn how to code in Python so I figured a Pi to play with would be a good starting point. I have a question regarding the timing of my relays via the GPIO pins. </p>
<p>Firstly though, I'm using Raspbian Pixel and am editing my scripts via Gedit. Please see below for the script I have so far:</p>
<pre><code># !/usr/bin/python
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
# init list with pin numbers
pinList = [14]
# loop through pins and set mode and state to 'high'
for i in pinList:
GPIO.setup(i, GPIO.OUT)
GPIO.output(i, GPIO.HIGH)
# time to sleep between operations in the main loop
SleepTimeL = 60 #1 minute
# main loop
try:
GPIO.output(14, GPIO.LOW)
print "open"
time.sleep(SleepTimeL);
GPIO.cleanup()
#Reset GPIO settings
GPIO.cleanup()
# end program cleanly
except KeyboardInterrupt:
print "done"
</code></pre>
<p>Now that works pretty well, it opens the relay attached to pin 14 no problem. It cycles through 60 seconds as requested and then ends the program. Once the program has ended, the GPIO settings are reset and the relay closes, but that's the end of the program and it's where my problem starts.</p>
<p>What I want this script to do is open the relay for 60 seconds, then close it for 180 seconds. Once it reaches 180 seconds it must re-run the 'try' statement and open the relay for another 60 seconds and so on. In short, I would like an infinite loop that can only be interrupted by cancelling the script from running. I am unsure of how to tell Python to close the relay for 180 seconds and then re-run the try statement, or how to make it an infinite loop for that matter.</p>
<p>I'd really appreciate some input from the community. Any feedback or assistance is greatly appreciated. Thanks All.</p>
 | 0 | 2016-10-11T19:39:09Z | 39,985,936 | <p>I agree with reptilicus, you just need to add a while loop. "while True" will run forever, until you hit ctrl-C to break. You just need to add a second delay to hold the relay off for 180 seconds before looping. You can create a different sleep time variable, or just multiply the one you have by 3 (3 x 60 seconds = 180 seconds).</p>
<pre><code># main loop
while True:
try:
GPIO.output(14, GPIO.LOW)
print "open"
time.sleep(SleepTimeL);
        GPIO.output(14, GPIO.HIGH)     # close the relay again
        # (don't call GPIO.cleanup() inside the loop, or the next
        #  GPIO.output() call would fail on the following pass)
        time.sleep(3*SleepTimeL)   # 3 x 60 s = 180 seconds with the relay closed
# end program cleanly
except KeyboardInterrupt:
print "done"
</code></pre>
| 0 | 2016-10-11T20:13:54Z | [
"python",
"exception",
"try-catch",
"gpio"
] |
Python - Tweepy - How to use lookup_friendships? | 39,985,434 | <p>I'm trying to figure out if I'm following a user from which the streaming API just received a tweet. If I don't, then I want to follow him.</p>
<p>I've got something like:</p>
<pre><code>def checkFollow(status):
relationship = api.lookup_friendships("Privacy_Watch_",status.user.id_str)
</code></pre>
<p>From there, how do I check if I follow this user already?</p>
| 0 | 2016-10-11T19:42:49Z | 40,015,567 | <p>The lookup_friendships method will return everyone you follow each time you call it, in blocks of 100 users. Provided you follow a lot of people, that will be highly inefficient and consume a lot of requests.</p>
<p>You can use instead the <a href="https://github.com/tweepy/tweepy/blob/master/tweepy/api.py#L461" rel="nofollow">show_friendship</a> method, it will return a JSON containing <a href="https://dev.twitter.com/rest/reference/get/friendships/show" rel="nofollow">information</a> about your relationship with the id provided.</p>
<p>I cannot test it right now, but the following code should do what you want:</p>
<pre><code>def checkFollow(status):
relation = api.show_friendship(source_screen_name=your_user_name, target_screen_name=status.user.id_str)
if relation.target.following: #I'm not sure if it should be "target" or "source" here
return True
return False
</code></pre>
| 0 | 2016-10-13T08:09:34Z | [
"python",
"twitter",
"tweepy"
] |
How to ignore capitalization BUT return same capitalization as input | 39,985,448 | <p>My code intends to identify the first non-repeating string characters, empty strings, repeating strings (i.e. <code>abba</code> or <code>aa</code>), but it's also meant to treat lower and upper case input as the same character while returning the accurate non-repeating character in it's orignial case input. </p>
<pre><code>def first_non_repeat(string):
order = []
counts = {}
for x in string:
if x in counts and x.islower() == True:
counts[x] += 1
else:
counts[x] = 1
order.append(x)
for x in order:
if counts[x] == 1:
return x
return ''
</code></pre>
<p>My logic on line 5 was that if I make all letter inputs lowercase, then it would iterate through the string input and not distinguish by case. But as of now, take the input <code>'sTreSS'</code>and output is <code>'s'</code> when really I need <code>'T'</code>. If the last two <code>S</code>'s were lowercase, then it would be <code>'T'</code> but I need code flexible enough to handle any case input. </p>
| 1 | 2016-10-11T19:43:21Z | 39,985,569 | <p>When comparing two letters, use lower() to compare the characters in a string. An example would be:</p>
<pre><code>string ="aabcC"
count = 0
while count < len(string) - 1:
if string[count].lower() == string[count + 1].lower():
print "Characters " + string[count] + " and " + string[count + 1] + " are repeating."
count += 1
</code></pre>
| 0 | 2016-10-11T19:51:27Z | [
"python",
"string",
"case"
] |
How to ignore capitalization BUT return same capitalization as input | 39,985,448 | <p>My code intends to identify the first non-repeating string characters, empty strings, repeating strings (i.e. <code>abba</code> or <code>aa</code>), but it's also meant to treat lower and upper case input as the same character while returning the accurate non-repeating character in it's orignial case input. </p>
<pre><code>def first_non_repeat(string):
order = []
counts = {}
for x in string:
if x in counts and x.islower() == True:
counts[x] += 1
else:
counts[x] = 1
order.append(x)
for x in order:
if counts[x] == 1:
return x
return ''
</code></pre>
<p>My logic on line 5 was that if I make all letter inputs lowercase, then it would iterate through the string input and not distinguish by case. But as of now, take the input <code>'sTreSS'</code>and output is <code>'s'</code> when really I need <code>'T'</code>. If the last two <code>S</code>'s were lowercase, then it would be <code>'T'</code> but I need code flexible enough to handle any case input. </p>
| 1 | 2016-10-11T19:43:21Z | 39,985,741 | <p>The point is that <code>x</code> in <code>counts</code> is searched for in a case-insensitive way. You have to implement your own case insensitive Dictionary, or use regular expressions to detect repeating letters:</p>
<pre><code>import re
def first_non_repeat(string):
r = re.compile(r'([a-z])(?=.*\1)', re.I|re.S)
m = r.search(string)
while m:
        string = re.sub(m.group(1), '', string, flags=re.I)
m = r.search(string)
return string[0]
print(first_non_repeat('sTreSS'))
</code></pre>
<p>See the <a href="http://ideone.com/l5Stt1" rel="nofollow">Python demo</a></p>
<p>The <code>([a-z])(?=.*\1)</code> regex finds any ASCII letter that also appears somewhere ahead (note that <code>([a-z])</code> <em>captures</em> the char into Group 1 and the <code>(?=.*\1)</code> is a lookahead where <code>\1</code> matches the same char captured into Group 1 after any 0+ characters matched with <code>.*</code> pattern, and <code>re.S</code> flag helps support strings with linebreaks).</p>
<p>The <code>re.sub</code> will remove all the found letters in a case insensitive way, so we will only get unique characters in the <code>string</code> after the <code>while</code> block.</p>
| 0 | 2016-10-11T20:01:53Z | [
"python",
"string",
"case"
] |