title stringlengths 10 to 172 | question_id int64 469 to 40.1M | question_body stringlengths 22 to 48.2k | question_score int64 -44 to 5.52k | question_date stringlengths 20 to 20 | answer_id int64 497 to 40.1M | answer_body stringlengths 18 to 33.9k | answer_score int64 -38 to 8.38k | answer_date stringlengths 20 to 20 | tags sequence |
---|---|---|---|---|---|---|---|---|---|
How to ignore capitalization BUT return same capitalization as input | 39,985,448 | <p>My code intends to identify the first non-repeating character in a string, handling empty strings and fully repeating strings (e.g. <code>abba</code> or <code>aa</code>), but it's also meant to treat lower- and upper-case input as the same character while returning the non-repeating character in its original case. </p>
<pre><code>def first_non_repeat(string):
order = []
counts = {}
for x in string:
if x in counts and x.islower() == True:
counts[x] += 1
else:
counts[x] = 1
order.append(x)
for x in order:
if counts[x] == 1:
return x
return ''
</code></pre>
<p>My logic on line 5 was that if I make all letter inputs lowercase, then it would iterate through the string input and not distinguish by case. But as of now, take the input <code>'sTreSS'</code> and the output is <code>'s'</code> when really I need <code>'T'</code>. If the last two <code>S</code>'s were lowercase, then it would be <code>'T'</code>, but I need code flexible enough to handle input of any case. </p>
| 1 | 2016-10-11T19:43:21Z | 39,985,886 | <p>Here's a little change you can make to your code to make it work. </p>
<pre><code> def first_non_repeat(string):
order = []
counts = {}
for x in string:
char_to_look = x.lower() #### convert to lowercase for all operations
if char_to_look in counts :
counts[char_to_look] += 1
else:
counts[char_to_look] = 1
order.append(char_to_look)
for x in string: ### search in the string instead or order, character and order will remain the same, except the case. So again do x.lower() to search in count
if counts[x.lower()] == 1:
return x
    return ''
</code></pre>
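<p>For comparison, a more compact sketch of the same idea (counting case-insensitively with <code>collections.Counter</code>, then walking the original string so the original case is returned) could look like this:</p>
<pre><code>from collections import Counter

def first_non_repeat(string):
    counts = Counter(string.lower())      # count every character case-insensitively
    for ch in string:                     # walk the original string to keep the original case
        if counts[ch.lower()] == 1:
            return ch
    return ''

print(first_non_repeat('sTreSS'))  # prints 'T'
</code></pre>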
| 0 | 2016-10-11T20:10:27Z | [
"python",
"string",
"case"
] |
Why is it printing the result in reverse order? | 39,985,466 | <p>I am trying to understand this program:</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>It's printing the result in reverse order:</p>
<pre><code>print final 19
print A: []
print B: 19
print final None
print A: [6]
print B: 13
print final None
print A: [5, 6]
print B: 8
print final None
print A: [4, 5, 6]
print B: 4
print final None
print A: [3, 4, 5, 6]
print B: 1
</code></pre>
<p>Please help me understand what is going on here. If this is happening because of recursion, please explain how and why, in as much depth as possible.</p>
| -2 | 2016-10-11T19:44:07Z | 39,985,829 | <p>Let's look at a simpler example that displays the same behaviour.</p>
<pre><code>def rev_printer(n):
if n:
print(rev_printer(n-1))
return n
</code></pre>
<p>Then <code>rev_printer(3)</code> prints</p>
<pre><code>0
1
2
3
</code></pre>
<p>Why? For each <code>print</code> to finish, it has to do all the work of <code>rev_printer(n-1)</code> and get the return value to print. The first call to finish is <code>rev_printer(0)</code>, because it doesn't have a print statement in it. </p>
<p>So the <code>print</code> statement that finished first is the last one to get started, and so the values are printed in reverse order. </p>
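<p>A sketch that makes this ordering visible (the <code>depth</code> argument is only there to indent the trace) is:</p>
<pre><code>def traced_printer(n, depth=0):
    indent = "  " * depth
    print(indent + "entering n = %d" % n)
    if n:
        print(traced_printer(n - 1, depth + 1))
    print(indent + "leaving n = %d" % n)
    return n

traced_printer(3)
</code></pre>
<p>The "entering" lines appear in the order 3, 2, 1, 0, while the printed return values appear in the order 0, 1, 2, because the innermost call is the first one that is able to return.</p>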
| 2 | 2016-10-11T20:07:10Z | [
"python",
"python-2.7",
"python-3.x"
] |
Why is it printing the result in reverse order? | 39,985,466 | <p>I am trying to understand this program:</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>It's printing the result in reverse order:</p>
<pre><code>print final 19
print A: []
print B: 19
print final None
print A: [6]
print B: 13
print final None
print A: [5, 6]
print B: 8
print final None
print A: [4, 5, 6]
print B: 4
print final None
print A: [3, 4, 5, 6]
print B: 1
</code></pre>
<p>Please help me understand what is going on here. If this is happening because of recursion, please explain how and why, in as much depth as possible.</p>
| -2 | 2016-10-11T19:44:07Z | 40,049,954 | <p>Sorry, I saw <a href="http://stackoverflow.com/a/39815739/5904928">your message</a> too late. I have just seen three questions about recursion that you asked; I think you are confused because you don't yet know the fundamentals of recursion and how it works. </p>
<p>To understand recursion, you have to understand how the stack works: how it stores a program's functions and values.</p>
<p>I recommend you <a href="http://www.cryptroix.com/understanding-multiple-recursion/" rel="nofollow">read this article</a>, which explains from scratch how the stack stores and returns values during recursion. If you want more on recursion, I suggest reading the first 100 pages of the <a href="https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%25_sec_1.2.1" rel="nofollow">SICP book</a>. </p>
<p>It's not printing the result in reverse order; it's returning values from the stack. </p>
<p>You have to understand two things :</p>
<ul>
<li>A stack works on the Last In, First Out (LIFO) principle, so it returns
the value that was pushed last, first.</li>
</ul>
<p><a href="https://i.stack.imgur.com/RiGdm.png" rel="nofollow"><img src="https://i.stack.imgur.com/RiGdm.png" alt="enter image description here"></a></p>
<ul>
<li>There are two types of recursion :
<ul>
<li>Head Recursion</li>
<li>Tail Recursion</li>
</ul></li>
</ul>
<p>In head recursion, the recursive call runs all the way down to its base condition first and only then executes the rest of the code. </p>
<pre><code>def head_recursion(x):
if x==0:
return
else:
head_recursion(x-1)
print("Head Recursion",x)
head_recursion(3)
</code></pre>
<p>it will print:</p>
<pre><code>Head Recursion 1
Head Recursion 2
Head Recursion 3
</code></pre>
<p>In tail recursion, everything else executes first and the recursive call happens afterwards; the recursive call is the last thing in the function.</p>
<pre><code>def tail_recursion(x):
    if x==0:
        return
    else:
        print("Tail Recursion",x)
        tail_recursion(x-1)
tail_recursion(3)
</code></pre>
<p>it will print:</p>
<pre><code>Tail Recursion 3
Tail Recursion 2
Tail Recursion 1
</code></pre>
<p>In your code it is head recursion, because there is a print and other code after the recursive call, so the recursive call is not the last thing in the function.</p>
<p>As I said above, in head recursion the recursive call first reaches its base condition and only then executes the rest of the code below the recursive call. Once the base condition is reached, two things happen:</p>
<ul>
<li>It executes the rest of the code below the recursive call, printing as it goes.</li>
<li>It starts returning values from the stack.</li>
</ul>
<p>The final value is printed only when all the work is done and the recursive call has reached its base condition. That's why the final value 19 is the last to be computed but the first to be returned from the stack (because of LIFO). </p>
<p><strong>The stack:</strong> when you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.</p>
<blockquote>
<p>Deallocating the stack is pretty simple because you always deallocate
in the reverse order in which you allocate. Stack stuff is added as
you enter functions, the corresponding data is removed as you exit
them. This means that you tend to stay within a small region of the
stack unless you call lots of functions that call lots of other
functions (or create a recursive solution). That's why it seems reversed
to you.</p>
</blockquote>
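<p>Applied to the question's code, a sketch of the tail-style version, where the prints come before the recursive call so the partial results appear in forward order, is:</p>
<pre><code>def listSum(arr, result=0):
    if not arr:
        print("print final", result)
        return result
    print("print A:", arr[1:])
    print("print B:", result + arr[0])
    return listSum(arr[1:], result + arr[0])

listSum([1, 3, 4, 5, 6])
</code></pre>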
| 4 | 2016-10-14T18:35:52Z | [
"python",
"python-2.7",
"python-3.x"
] |
Combining the Values from Multiple Keys in Dictionary Python | 39,985,479 | <p>In Python, I have the following dictionary of sets:</p>
<pre><code>{
1: {'Hello', 'Bye'},
2: {'Bye', 'Do', 'Action'},
3: {'Not', 'But', 'No'},
4: {'No', 'Yes'}
}
</code></pre>
<p>My goal is to combine the keys which contain matching values (like "Bye" and "No" in this example), so the result will look like this:</p>
<pre><code>{
1: {'Hello', 'Bye', 'Do', 'Action'},
3: {'Not', 'But', 'No', 'Yes'}
}
</code></pre>
<p>Is there a way to do this?</p>
| 0 | 2016-10-11T19:45:02Z | 39,985,637 | <p>If there are no overlapping matches:</p>
<pre><code>a = {1: {'Hello', 'Bye'}, 2: {'Bye', 'Do', 'Action'}, 3: {'Not', 'But', 'No'}, 4: {'No', 'Yes'}}
output = {}
for k, v in a.items():
if output:
for k_o, v_o in output.items():
if v_o.intersection(v):
output[k_o].update(v)
break
else:
output[k] = v
else:
output[k] = v
print(output)
</code></pre>
<p>Output:</p>
<pre><code>{1: {'Action', 'Bye', 'Do', 'Hello'}, 3: {'But', 'No', 'Not', 'Yes'}}
</code></pre>
| 1 | 2016-10-11T19:55:43Z | [
"python",
"dictionary"
] |
Combining the Values from Multiple Keys in Dictionary Python | 39,985,479 | <p>In Python, I have the following dictionary of sets:</p>
<pre><code>{
1: {'Hello', 'Bye'},
2: {'Bye', 'Do', 'Action'},
3: {'Not', 'But', 'No'},
4: {'No', 'Yes'}
}
</code></pre>
<p>My goal is to combine the keys which contain matching values (like "Bye" and "No" in this example), so the result will look like this:</p>
<pre><code>{
1: {'Hello', 'Bye', 'Do', 'Action'},
3: {'Not', 'But', 'No', 'Yes'}
}
</code></pre>
<p>Is there a way to do this?</p>
| 0 | 2016-10-11T19:45:02Z | 39,985,668 | <p>If there are overlapping matches and you want the longest matches:</p>
<pre><code>from collections import defaultdict
d = {
1: {'Hello', 'Bye'},
2: {'Bye', 'Do', 'Action'},
3: {'Not', 'But', 'No'},
4: {'No', 'Yes'}
}
grp = defaultdict(list)
# first group all keys with common words
for k, v in d.items():
for val in v:
grp[val].append(k)
# sort the values by lengths to find longest matches.
for v in sorted(grp.values(), key=len, reverse=True):
for val in v[1:]:
if val not in d:
continue
# use first ele as the key and union to existing values
d[v[0]] |= d[val]
del d[val]
print(d)
</code></pre>
<p>if you don't have overlaps you can just:</p>
<pre><code>grp = defaultdict(list)
for k, v in d.items():
for val in v:
grp[val].append(k)
for v in grp.values():
for val in v[1:]:
d[v[0]] |= d[val]
del d[val]
</code></pre>
<p>Or if you want a new dict:</p>
<pre><code>new_d = {}
for v in grp.values():
if len(v) > 1:
k = v[0]
new_d[k] = d[k]
for val in v[1:]:
new_d[k] |= d[val]
</code></pre>
<p>All three give you the following but key order could be different:</p>
<pre><code>{1: set(['Action', 'Do', 'Bye', 'Hello']), 3: set(['Not', 'Yes', 'But', 'No'])}
</code></pre>
| 1 | 2016-10-11T19:57:46Z | [
"python",
"dictionary"
] |
for loop, trying to exclude numbers in the if and else statement | 39,985,522 | <p>I am relatively new to Python. As you can see, if 'n' is divisible by 5, by 6, or by both 5 and 6, something happens, but for those numbers the number itself is still printed (e.g. 100). When 'n' is divisible by 5 or 6 I don't want the number to print. Any suggestions will help, but please keep it at a novice level so I may learn. </p>
<pre><code>def main():
numHigh = 101
for n in range(numHigh, 0, -1):
print(n)
if (n % 5 == 0):
print("Where do you see yourself in five years?")
elif (n % 6 == 0):
print("I'll believe six impossible things before breakfast.")
elif (n % 5 == 0) & (numHigh % 6 == 0):
print("Thirty days in hath September.")
main()
Sample Output:
101
100
Where do you see yourself in five years?
99
98
97
96
I'll believe six impossible things before breakfast.
95
Where do you see yourself in five years?
94
93
I want this output:
101
Where do you see yourself in five years?
99
98
97
96
I'll believe six impossible things before breakfast.
Where do you see yourself in five years?
94
93
</code></pre>
| 0 | 2016-10-11T19:48:17Z | 39,985,640 | <pre><code>def main():
numHigh = 101
for n in range(numHigh, 0, -1):
if (n % 5 == 0):
print("Where do you see yourself in five years?")
elif (n % 6 == 0):
print("I'll believe six impossible things before breakfast.")
elif (n % 5 == 0) & (numHigh % 6 == 0):
print("Thirty days in hath September.")
else:
print(n)
main()
</code></pre>
| 1 | 2016-10-11T19:56:01Z | [
"python"
] |
for loop, trying to exclude numbers in the if and else statement | 39,985,522 | <p>I am relatively new to Python. As you can see, if 'n' is divisible by 5, by 6, or by both 5 and 6, something happens, but for those numbers the number itself is still printed (e.g. 100). When 'n' is divisible by 5 or 6 I don't want the number to print. Any suggestions will help, but please keep it at a novice level so I may learn. </p>
<pre><code>def main():
numHigh = 101
for n in range(numHigh, 0, -1):
print(n)
if (n % 5 == 0):
print("Where do you see yourself in five years?")
elif (n % 6 == 0):
print("I'll believe six impossible things before breakfast.")
elif (n % 5 == 0) & (numHigh % 6 == 0):
print("Thirty days in hath September.")
main()
Sample Output:
101
100
Where do you see yourself in five years?
99
98
97
96
I'll believe six impossible things before breakfast.
95
Where do you see yourself in five years?
94
93
I want this output:
101
Where do you see yourself in five years?
99
98
97
96
I'll believe six impossible things before breakfast.
Where do you see yourself in five years?
94
93
</code></pre>
| 0 | 2016-10-11T19:48:17Z | 39,985,650 | <p>The print(n) is at the beginning of your for loop, so it will print n every time the loop runs. Instead you should add an else statement after your ifs and elifs. That way, n is printed only when none of those ifs/elifs are true. </p>
<p>Like this</p>
<pre><code>if …
elif …
elif …
else:
    print(n)
</code></pre>
| 1 | 2016-10-11T19:56:40Z | [
"python"
] |
Current options for Django permissions CBV/DRF | 39,985,652 | <p>What are the currently available options for permissions in Django that work for both class-based-views and Django-REST-Framework?</p>
<p>I don't want object-level permissions but rather something like <a href="https://github.com/dfunckt/django-rules" rel="nofollow">rules</a>, <a href="https://github.com/maraujop/django-rules" rel="nofollow">django-rules</a>, or <a href="https://github.com/dbkaplan/dry-rest-permissions" rel="nofollow">dry-rest-permissions</a>.</p>
<p>However, the first two appear to be specific to normal views while the second appears to be specific to DRF. I want both.</p>
<p>What are my options if I don't want to duplicate my permission rules.</p>
| 0 | 2016-10-11T19:56:45Z | 40,087,591 | <p>Actually, <code>django-rules</code> already integrates well into the permission system of Django. Thus, <code>User.has_perm()</code> works out of the box with it.</p>
| 0 | 2016-10-17T13:23:12Z | [
"python",
"django",
"django-rest-framework"
] |
onclick turtle method in python 3 | 39,985,755 | <p>I am re-creating the game of Xs and Os, but I can't seem to get the onclick method in Python 3 to work. Here's my code:</p>
<pre><code>def hide(t):
t.hideturtle()
def begin():
global playersTurn
if playersTurn == 0:
playerOnesTurn.showturtle()
playerOnesTurn.onclick(hide)
else:
playerTwosTurn.showturtle()
playerTwosTurn.onclick(hide)
</code></pre>
<p>I'm displaying an image in the turtles <code>playerOnesTurn</code> and <code>playerTwosTurn</code>.
When I run the program, I want the turtle to hide when the player clicks on the image, but it gives me the following error and I have not found a solution:</p>
<pre><code>TypeError: hide() takes 1 positional argument but 2 were given
</code></pre>
| 0 | 2016-10-11T20:02:29Z | 39,986,739 | <p>The <code>onclick()</code> handler expects a function that takes the x & y positions as arguments. So, instead do something like:</p>
<pre><code>playerOnesTurn.onclick(lambda x, y: playerOnesTurn.hideturtle())
</code></pre>
<p>Other comments: if you're only checking <code>playersTurn</code>, not changing it, then you don't need the <code>global playersTurn</code> statement; when you enable <code>onclick()</code> for one player, you might want to disable it for the other player by doing <code>playerTwosTurn.onclick(None)</code>.</p>
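<p>Putting both points together, a sketch of <code>begin()</code> (assuming <code>playersTurn</code>, <code>playerOnesTurn</code> and <code>playerTwosTurn</code> are set up as in the question) might look like:</p>
<pre><code>def begin():
    if playersTurn == 0:
        playerOnesTurn.showturtle()
        playerOnesTurn.onclick(lambda x, y: playerOnesTurn.hideturtle())
        playerTwosTurn.onclick(None)   # ignore clicks on the other player's turtle
    else:
        playerTwosTurn.showturtle()
        playerTwosTurn.onclick(lambda x, y: playerTwosTurn.hideturtle())
        playerOnesTurn.onclick(None)
</code></pre>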
| 0 | 2016-10-11T21:07:29Z | [
"python",
"python-3.x",
"turtle-graphics"
] |
web-scraping hidden hrefs using python | 39,985,772 | <p>I'm using Python to get all the possible hrefs from the following webpage: </p>
<p><a href="http://www.congresovisible.org/proyectos-de-ley/" rel="nofollow">http://www.congresovisible.org/proyectos-de-ley/</a></p>
<p>for example these two:</p>
<pre><code>href="ppor-medio-de-la-cual-se-dictan-medidas-para-defender-el-acceso-de-los-usuarios-del-sistema-de-salud-a-medicamentos-de-calidad-eficacia-y-seguridad-acceso-de-los-usuarios-del-sistema-de-salud-a-medicamentos/8683">
href="ppor-medio-del-cual-el-congreso-de-la-republica-facultado-por-el-numeral-17-del-articulo-150-de-la-constitucion-politica-de-colombia-y-en-aras-de-facilitar-la-paz-decreta-otorgar-amnistia-e-indulto-a-los-miembros-del-grupo-armado-organizado-al-margen-de-la-ley-farc-ep/8682">
</code></pre>
<p>and at the end have a list with all possible href in that page.</p>
<p>However, by clicking on ver todos ("see all") more hrefs appear. But if you check the page source, even if you add /#page=4 or whatever page number to the URL, the hrefs remain the same (actually the page source doesn't change at all). How could I get all those hidden hrefs? </p>
| 0 | 2016-10-11T20:03:32Z | 39,986,592 | <blockquote>
<p>Prenote: I assume you use Python 3+.</p>
</blockquote>
<p>What happens is: when you click "See All", the page requests an API, takes the data, and dumps it into the view. This is all an AJAX process.</p>
<p>The hard and complicated way is to use Selenium, but there is actually no need. With a little bit of debugging in the browser, you can see <a href="http://www.congresovisible.org/proyectos-de-ley/search/proyectos-de-ley/?q=%20&page=1" rel="nofollow">where it loads the data from</a>.</p>
<p>That is page one. <code>q</code> is probably the search query and <code>page</code> is which page to fetch, with 5 elements per page. You can request it via <code>urllib</code> or <code>requests</code> and parse it with the <code>json</code> package into a dict.</p>
<hr>
<h1>A Simple Demonstration</h1>
<p>I wanted to try it myself, and it seems the server we get the data from needs a <code>User-Agent</code> header; otherwise it simply returns <code>403</code> (Forbidden). I am trying this on Python 3.5.1.</p>
<pre><code>from urllib.request import urlopen, Request
import json
# Creating headers as dict, to pass User-Agent. I am using my own User-Agent here.
# You can use the same or just google it.
# We need to use User-Agent, otherwise, server does not accept request and returns 403.
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36 OPR/39.0.2256.48"
}
# Creating a Request object.
# See, we pass headers, below.
req = Request("http://www.congresovisible.org/proyectos-de-ley/search/proyectos-de-ley/?q=%20&page=1", headers=headers)
# Getting a response
res = urlopen(req)
# The thing is, it returns binary, we need to convert it to str in order to pass it on json.loads function.
# This is just a little bit complicated.
data_b = res.read()
data_str = data_b.decode("utf-8")
# Now, this is the magic.
data = json.loads(data_str)
print(data)
# Now you can manipulate your data. :)
</code></pre>
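<p>If you have the <code>requests</code> package available, the same call is shorter; a rough equivalent of the snippet above would be:</p>
<pre><code>import requests

headers = {"User-Agent": "Mozilla/5.0"}  # any reasonable User-Agent string works

# q and page are the same query parameters used above
response = requests.get(
    "http://www.congresovisible.org/proyectos-de-ley/search/proyectos-de-ley/",
    params={"q": " ", "page": 1},
    headers=headers,
)
data = response.json()
print(data)
</code></pre>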
<h3>For Python 2.7</h3>
<ul>
<li>You can use <code>urllib2</code>. <code>urllib2</code> is not separated into sub-modules the way it is in Python 3, so all you have to do is <code>from urllib2 import Request, urlopen</code>.</li>
</ul>
| 1 | 2016-10-11T20:57:07Z | [
"javascript",
"python",
"web-scraping",
"href"
] |
Custom 404 django template | 39,985,774 | <p>I'm trying to customize the 404 error pages in my application. After searching many possible solutions, I created an 404.html template, added a method which should handle HTTP Error 404 and edited my urls.py.</p>
<p>But I guess I'm doing something really wrong. My log presents an invalid syntax error and I cannot solve it. </p>
<p>My views.py:</p>
<pre><code># HTTP Error 404
def page_not_found(request):
response = render_to_response('404.html',context_instance=RequestContext(request))
response.status_code = 404
return response
</code></pre>
<p>And the syntax error:</p>
<pre><code>Traceback (most recent call last):
File "/.../myenv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 393, in urlconf_module
return self._urlconf_module
AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module'
...
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/dcaled/work/portal-interface/portal/urls.py", line 12
handler404 = 'views.handler404'
^
SyntaxError: invalid syntax
</code></pre>
<p>Does anyone have an idea of what's going on?
Thanks.</p>
<p><strong>UPDATE:</strong>
After @Alasdair's suggestion, I made some changes and fixes. The error has stopped.</p>
<p>Now, my urls.py is like:</p>
<pre><code>handler404 = 'views.page_not_found'
urlpatterns = [
url(r'^$', views.home),]
</code></pre>
<p>But I still don't get my custom 404.html when accessing a non-existent page.
Instead, a default page is loaded, with this message:</p>
<p>"<strong>Not Found</strong> </p>
<p>The requested URL /url/404 not found on this server." </p>
<p>Also, my settings.py:</p>
<pre><code>DEBUG = False
ALLOWED_HOSTS = ['*']
TEMPLATE_DIRS = (os.path.join(BASE_DIR , 'portal/templates'),)
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>My project tree:</p>
<pre><code>âââ fb_dev
â  âââ __init__.py
â  âââ __pycache__
â  âââ settings.py
â  âââ urls.py
â  âââ wsgi.py
âââ manage.py
â
âââ portal
  âââ admin.py
  âââ forms.py
  âââ json
  âââ migrations
  âââ models.py
  âââ static
  âââ templates
  â  âââ 404.html
  â  âââ portal
  â  âââ base.html
  â  âââ home.html
 âââ templatetags
  âââ urls.py
  âââ views.py
</code></pre>
| 0 | 2016-10-11T20:03:37Z | 39,985,871 | <p>The <code>handler404</code> should be outside <code>urlpatterns</code>. If the view is called <code>page_not_found</code>, then it should refer to <code>page_not_found</code>, not <code>handler404</code>.</p>
<pre><code>handler404 = 'views.page_not_found'
urlpatterns = [
url(r'^$', views.home),
]
</code></pre>
<p>However, in your case, you do not need a custom 404 handler at all. Remove the <code>handler404</code> line completely, and the default <a href="https://docs.djangoproject.com/en/1.10/ref/views/#django.views.defaults.page_not_found" rel="nofollow"><code>page_not_found</code></a> view will render your <code>404.html</code> template.</p>
| 1 | 2016-10-11T20:09:46Z | [
"python",
"django"
] |
How to use gluLookAt in PyOpenGL? | 39,985,804 | <p>I'm trying to learn PyOpenGL, but I'm not sure how to use gluLookAt to move the camera. I've got an image displaying, but I think I might be missing something that allows me to use gluLookAt? Incoming wall o'text, I'm not sure where the problem might be. I've cut out the shaders and the texture code, because I don't <em>think</em> it's relevant, but if it is, I can post it.</p>
<pre><code>import sys
import ctypes
import numpy
from OpenGL import GL, GLU
from OpenGL.GL import shaders
from OpenGL.arrays import vbo
import pygame
from numpy import array
class OpenGLSprite():
def __init__(self,_sprite='',_vertexShader = None, _fragmentShader = None):
if not isinstance(_sprite, pygame.Surface):
self.sprite = pygame.image.load(_sprite)
else: self.sprite = _sprite
vertexData = numpy.array([
# X Y Z U, V
-1.0, -1.0, 0, 0.0, 0.0,
-1.0, 1.0, 0, 0.0, 1.0,
1.0, 1.0, 0, 1.0, 1.0,
1.0, 1.0, 0, 1.0, 1.0,
1.0, -1.0, 0, 1.0, 0.0,
-1.0, -1.0, 0, 0.0, 0.0,
], dtype=numpy.float32)
self.loadTexture()
self.buildShaders(_vertexShader, _fragmentShader)
self.VAO = GL.glGenVertexArrays(1)
GL.glBindVertexArray(self.VAO)
self.VBO = GL.glGenBuffers(1)
GL.glBindBuffer(GL.GL_ARRAY_BUFFER, self.VBO)
GL.glBufferData(GL.GL_ARRAY_BUFFER, vertexData.nbytes, vertexData,
GL.GL_STATIC_DRAW)
positionAttrib = GL.glGetAttribLocation(self.shaderProgram, 'position')
coordsAttrib = GL.glGetAttribLocation(self.shaderProgram, 'texCoords')
GL.glEnableVertexAttribArray(0)
GL.glEnableVertexAttribArray(1)
GL.glVertexAttribPointer(positionAttrib, 3, GL.GL_FLOAT, GL.GL_FALSE, 20,
None)
# the last parameter is a pointer
GL.glVertexAttribPointer(coordsAttrib, 2, GL.GL_FLOAT, GL.GL_TRUE, 20,
ctypes.c_void_p(12))
# load texture and assign texture unit for shaders
self.texUnitUniform = GL.glGetUniformLocation(self.shaderProgram, 'texUnit')
# Finished
GL.glBindBuffer(GL.GL_ARRAY_BUFFER, 0)
GL.glBindVertexArray(0)
def render(self):
GL.glClearColor(0, 0, 0, 1)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# active shader program
GL.glUseProgram(self.shaderProgram)
try:
# Activate texture
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, self.texture)
GL.glUniform1i(self.texUnitUniform, 0)
# Activate array
GL.glBindVertexArray(self.VAO)
# draw triangle
GL.glDrawArrays(GL.GL_TRIANGLES, 0, 6)
finally:
GL.glBindVertexArray(0)
GL.glUseProgram(0)
def main():
pygame.init()
screen = pygame.display.set_mode((640,480),pygame.OPENGL|pygame.DOUBLEBUF)
GL.glClearColor(0.5, 0.5, 0.5, 1.0)
camX = 0.0
camZ = 3.0
sprite = OpenGLSprite('logo-bluebg-square.png')
status = 0
while status == 0:
for event in pygame.event.get():
if event.type == pygame.QUIT:
status = 1
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
status = 1
if event.key == pygame.K_LEFT:
camX -= 0.1
if event.key == pygame.K_LEFT:
camX += 0.1
GLU.gluLookAt(camX,0.0,camZ,
0.0,0.0,0.0,
0.0,1.0,0.0)
sprite.render()
pygame.display.flip()
return 0
if __name__ == '__main__':
main()
</code></pre>
<p>The gluLookAt in the main game loop doesn't seem to do anything, no matter what parameters I modify. Changing camX with the keys isn't affecting anything, it just keeps looking at the surface straight on. From what I understand, the first three inputs to gluLookAt should be the location of the "Camera", correct? So if it's looking to the origin and the X position of the camera is moving, it should be rotating and moving the surface, right? Is there something special I have to do when setting up the object to allow it to work with gluLookAt? Do I have to do something after calling it to apply the transform to something? Does it need to be in the drawing code of the object, or the gameLoop?</p>
| 1 | 2016-10-11T20:05:57Z | 39,986,290 | <p>The GLU library is for use with the fixed-function pipeline. It doesn't work with modern OpenGL, when you're using vertex shaders (at least, not unless you're doing compatibility-profile stuff, but it looks like you're not doing that). Instead, you will create the equivalent matrix, and use it as part of one of your uniforms.</p>
<p>The GLM source code for the equivalent function is available on <a href="https://github.com/g-truc/glm/blob/f96bc5fd9d8ac6eb3fb6b3e5dbd396bcb26b8983/glm/gtc/matrix_transform.inl#L521" rel="nofollow">GitHub in glm/glm/gtc/matrix_transform.inl, in the function lookAtRH</a>. Use this function to create your matrix, multiply it by any other components of your view matrix, and set it as a uniform in your vertex shader.</p>
| 2 | 2016-10-11T20:35:37Z | [
"python",
"pyopengl",
"glu",
"glulookat"
] |
pandas merge on columns with different names and avoid duplicates | 39,985,861 | <p>How can I merge two pandas DataFrames on two columns with different names and keep one of the columns?</p>
<pre><code>df1 = pd.DataFrame({'UserName': [1,2,3], 'Col1':['a','b','c']})
df2 = pd.DataFrame({'UserID': [1,2,3], 'Col2':['d','e','f']})
pd.merge(df1, df2, left_on='UserName', right_on='UserID')
</code></pre>
<p>This provides a DataFrame like this</p>
<p><a href="https://i.stack.imgur.com/VxTe3.png" rel="nofollow"><img src="https://i.stack.imgur.com/VxTe3.png" alt="enter image description here"></a></p>
<p>But clearly I am merging on <code>UserName</code> and <code>UserID</code> so they are the same. I want it to look like this. Is there any clean ways to do this? </p>
<p><a href="https://i.stack.imgur.com/b6RmG.png" rel="nofollow"><img src="https://i.stack.imgur.com/b6RmG.png" alt="enter image description here"></a></p>
<p>The only ways I can think of are either renaming the columns to be the same before the merge, or dropping one of them after the merge. It would be nice if pandas automatically dropped one of them, or if I could do something like</p>
<pre><code>pd.merge(df1, df2, left_on='UserName', right_on='UserID', keep_column='left')
</code></pre>
| 1 | 2016-10-11T20:09:21Z | 39,985,966 | <p>There is nothing nicer built in: both columns are kept because in the more general cases, like left, right or outer joins, the two columns would carry additional information. Don't try to over-engineer your merge line; be explicit, as you suggest.</p>
<p>Solution 1:</p>
<pre><code>df2.columns = ['Col2', 'UserName']
pd.merge(df1, df2,on='UserName')
Out[67]:
Col1 UserName Col2
0 a 1 d
1 b 2 e
2 c 3 f
</code></pre>
<p>Solution 2:</p>
<pre><code>pd.merge(df1, df2, left_on='UserName', right_on='UserID').drop('UserID', axis=1)
Out[71]:
Col1 UserName Col2
0 a 1 d
1 b 2 e
2 c 3 f
</code></pre>
| 2 | 2016-10-11T20:15:38Z | [
"python",
"pandas",
"merge"
] |
pandas merge on columns with different names and avoid duplicates | 39,985,861 | <p>How can I merge two pandas DataFrames on two columns with different names and keep one of the columns?</p>
<pre><code>df1 = pd.DataFrame({'UserName': [1,2,3], 'Col1':['a','b','c']})
df2 = pd.DataFrame({'UserID': [1,2,3], 'Col2':['d','e','f']})
pd.merge(df1, df2, left_on='UserName', right_on='UserID')
</code></pre>
<p>This provides a DataFrame like this</p>
<p><a href="https://i.stack.imgur.com/VxTe3.png" rel="nofollow"><img src="https://i.stack.imgur.com/VxTe3.png" alt="enter image description here"></a></p>
<p>But clearly I am merging on <code>UserName</code> and <code>UserID</code> so they are the same. I want it to look like this. Is there any clean ways to do this? </p>
<p><a href="https://i.stack.imgur.com/b6RmG.png" rel="nofollow"><img src="https://i.stack.imgur.com/b6RmG.png" alt="enter image description here"></a></p>
<p>The only ways I can think of are either renaming the columns to be the same before the merge, or dropping one of them after the merge. It would be nice if pandas automatically dropped one of them, or if I could do something like</p>
<pre><code>pd.merge(df1, df2, left_on='UserName', right_on='UserID', keep_column='left')
</code></pre>
| 1 | 2016-10-11T20:09:21Z | 39,985,970 | <p>How about set the <code>UserID</code> as index and then join on index for the second data frame?</p>
<pre><code>pd.merge(df1, df2.set_index('UserID'), left_on='UserName', right_index=True)
# Col1 UserName Col2
# 0 a 1 d
# 1 b 2 e
# 2 c 3 f
</code></pre>
| 2 | 2016-10-11T20:15:50Z | [
"python",
"pandas",
"merge"
] |
Python: POST request not working? | 39,986,019 | <p>Making a simple POST request to Firebase. For some reason, it's not working. cURL with the same data is working, no issues. Any ideas?</p>
<p>Code below:</p>
<pre><code>import requests
r = requests.post("https://testapp-f55e1.firebaseio.com/test.json", data={"location":{"altitude":"200","latitude":"23.2", "longitude":"44.32"},"polution":{"pm10":"11","pm2":"123"}})
logging.debug(r)
</code></pre>
<p>It starts to work, but nothing happens. </p>
<pre><code>INFO:Posting to https://testapp-f55e1.firebaseio.com/test.json
</code></pre>
<p>The request doesn't reach Firebase.</p>
<p>If I do a curl request with the same URL, it works like a charm. Any ideas?</p>
| 0 | 2016-10-11T20:19:47Z | 39,986,195 | <p>It expects <em>JSON</em>, so replace <em>data=</em> with <em>json=</em>; requests will then call <em>json.dumps</em> and set the headers for you:</p>
<pre><code>In [6]: import requests
...: r = requests.post("https://testapp-f55e1.firebaseio.com/test.json", json
...: ={"location":{"altitude":"200","latitude":"23.2", "longitude":"44.32"},"
...: polution":{"pm10":"11","pm2":"123"}})
...: print(r)
...: print(r.json())
...:
<Response [200]>
{'name': '-KTpRAvBqP4Ra-FSXtKO'}
</code></pre>
<p>The output from using data= was giving you a clue:</p>
<pre><code>In [7]: import requests
...: r = requests.post("https://testapp-f55e1.firebaseio.com/test.json", data
...: ={"location":{"altitude":"200","latitude":"23.2", "longitude":"44.32"},"
...: polution":{"pm10":"11","pm2":"123"}})
...: print(r)
...: print(r.json())
...:
<Response [400]>
{'error': "Invalid data; couldn't parse JSON object, array, or value. Perhaps you're using invalid characters in your key names."}
</code></pre>
| 2 | 2016-10-11T20:30:21Z | [
"python",
"curl",
"firebase",
"python-requests"
] |
Django template {%url%} concatenates to existing url | 39,986,025 | <p>The problem occurred after moving the application to another server.</p>
<p>For example, the URL is <a href="http://example.com/path/otherpath" rel="nofollow">http://example.com/path/otherpath</a>,
and I want to log out: </p>
<pre><code><a href="{% url 'logout' %}">Logout</a>
</code></pre>
<p>it goes to: <a href="http://example.com/path/otherpath/logout" rel="nofollow">http://example.com/path/otherpath/logout</a></p>
<p>urls.py line:</p>
<pre><code>url(r'^logout/$', 'logout', name='logout'),
</code></pre>
| 0 | 2016-10-11T20:20:06Z | 39,986,333 | <p>The problem was in the fastcgi_params file.
It works with these settings:</p>
<pre><code> fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
</code></pre>
| 0 | 2016-10-11T20:38:49Z | [
"python",
"nginx",
"django-templates",
"django-urls",
"django-1.6"
] |
Converting days since epoch to date | 39,986,041 | <p>How can one convert a serial date number, representing the number of days since epoch (1970), to the corresponding date string? I have seen multiple posts showing how to go from string to date number, but I haven't been able to find any posts on how to do the reverse.</p>
<p>For example, <code>15951</code> corresponds to <code>"2013-09-02"</code>.</p>
<pre><code>>>> import datetime
>>> (datetime.datetime(2013, 9, 2) - datetime.datetime(1970,1,1)).days + 1
15951
</code></pre>
<p>(The <code>+ 1</code> because whatever generated these date numbers followed the convention that Jan 1, 1970 = 1.)</p>
<p>TL;DR: Looking for something to do the following:</p>
<pre><code>>>> serial_date_to_string(15951) # arg is number of days since 1970
"2013-09-02"
</code></pre>
<p>This is different from <a href="http://stackoverflow.com/questions/12400256/python-converting-epoch-time-into-the-datetime">Python: Converting Epoch time into the datetime</a> because I am starting with days since 1970. I am not sure if you can just multiply by 86,400 due to leap seconds, etc.</p>
| 0 | 2016-10-11T20:21:01Z | 39,988,256 | <p>Use the <code>datetime</code> package as follows:</p>
<pre><code>import datetime
def serial_date_to_string(srl_no):
new_date = datetime.datetime(1970,1,1,0,0) + datetime.timedelta(srl_no - 1)
return new_date.strftime("%Y-%m-%d")
</code></pre>
<p>This is a function which returns the string as required.</p>
<p>So:</p>
<pre><code>serial_date_to_string(15951)
</code></pre>
<p>Returns</p>
<pre><code>>> "2013-09-02"
</code></pre>
| 1 | 2016-10-11T23:22:23Z | [
"python",
"date",
"datetime",
"epoch"
] |
Python Pandas dataframe division error: operation not ' 'safe' ' | 39,986,053 | <p>I am trying to normalize some columns of a Pandas DataFrame in Python to their sum. I have the following the DataFrame:</p>
<pre><code>import pandas as pd
l_a_2015 = ['Farh','Rob_Sens','Pressure','Septic',10.0,45.,52.,72.51]
l_a_2010 = ['Water_Column','Log','Humid','Top_Tank',58.64,35.42,10.,30.]
df = pd.DataFrame([l_a_2010,l_a_2015],columns=['Output_A','Tonnes_Rem',
'Log_Act_All','Readout','A1','A2','A3','A4'])
</code></pre>
<p>I would like to normalize the columns <code>A1</code>,<code>A2</code>,<code>A3</code>,<code>A4</code> to their sum as shown <a href="http://stackoverflow.com/questions/18594469/normalizing-a-pandas-dataframe-by-row">here</a> - divide each element on a row by the sum of 4 elements.</p>
<p>The first part of this appears to work fine - I get the sum of the last 4 columns, on each row, with this:</p>
<pre><code>x,y = df.sum(axis=1).tolist()
</code></pre>
<p>So, the list <code>[x,y]</code> gives me the sum of the first and second rows (last 4 columns). However, when I try to <strong>divide all DataFrame entries on each row by the sum of that row</strong>, I run into problems:</p>
<pre><code>for b,n in enumerate([x,y]):
for f,elem in enumerate(list(df)[4:]):
df.iloc[b,f] = (df.iloc[b,f]/n)*100.
</code></pre>
<p>I get the following error:</p>
<pre><code>[Traceback (most recent call last):134.06, 179.50999999999999]
File "C:\test.py", line 13, in <module>
df.iloc[b,f] = (df.iloc[b,f]/n)*100.
TypeError: ufunc 'divide' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>When I use <code>print df.dtypes</code> I am getting <code>float64</code> for all the columns so I am not sure why the division is not safe.</p>
<p>Is there a way to fix this?</p>
| 1 | 2016-10-11T20:21:36Z | 39,986,199 | <p>try this:</p>
<pre><code>In [5]: df
Out[5]:
Output_A Tonnes_Rem Log_Act_All Readout A1 A2 A3 A4
0 Water_Column Log Humid Top_Tank 58.64 35.42 10.0 30.00
1 Farh Rob_Sens Pressure Septic 10.00 45.00 52.0 72.51
In [8]: cols = df.select_dtypes(include=['number']).columns.tolist()
In [9]: cols
Out[9]: ['A1', 'A2', 'A3', 'A4']
</code></pre>
<p>let's create a view with numeric columns only:</p>
<pre><code>In [10]: v = df[cols]
In [13]: df[cols] = v.div(v.sum(axis=1), 0)
In [14]: df
Out[14]:
Output_A Tonnes_Rem Log_Act_All Readout A1 A2 A3 A4
0 Water_Column Log Humid Top_Tank 0.437416 0.264210 0.074593 0.223780
1 Farh Rob_Sens Pressure Septic 0.055707 0.250682 0.289677 0.403933
</code></pre>
<p>an alternative way to select <code>A*</code> columns:</p>
<pre><code>In [18]: df.filter(regex='^A\d+')
Out[18]:
A1 A2 A3 A4
0 0.437416 0.264210 0.074593 0.223780
1 0.055707 0.250682 0.289677 0.403933
In [19]: df.filter(regex='^A\d+').columns
Out[19]: Index(['A1', 'A2', 'A3', 'A4'], dtype='object')
</code></pre>
| 2 | 2016-10-11T20:30:41Z | [
"python",
"pandas",
"dataframe",
"divide"
] |
how does .isdigit() work in this case? | 39,986,198 | <p>When I was looking for a solution to the problem "count digits in a given string containing both letters and digits", there was one using the built-in function .isdigit(). Here it is:</p>
<pre><code>def count_numbers1(a):
return sum(int(x) for x in a if x.isdigit())
</code></pre>
<p>It works nicely, but I cannot work out how it works. I have read that <code>.isdigit()</code> returns true if there is at least one digit in a string, false otherwise. </p>
<p>And one more question: how does the function "take out" the digits from the string and convert them to integers, and how does it skip the letters? Why doesn't <code>int(x)</code> produce an error when <code>x</code> is a letter? Such as:</p>
<pre><code>>>> int('a')
Traceback (most recent call last):
File "<pyshell#77>", line 1, in <module>
int('a')
ValueError: invalid literal for int() with base 10: 'a'
</code></pre>
| -1 | 2016-10-11T20:30:34Z | 39,986,280 | <p>First of all, the function doesn't <em>count</em> digits in a string. It <em>sums up</em> the digits in a string. Secondly, <code>str.isdigit()</code> only returns true if <strong>all</strong> characters in a string are digits, not just one of the characters. From the <a href="https://docs.python.org/3/library/stdtypes.html#str.isdigit" rel="nofollow"><code>str.isdigit()</code> documentation</a>:</p>
<blockquote>
<p>Return true if all characters in the string are digits and there is at least one character, false otherwise.</p>
</blockquote>
<p>This means <code>'1a'.isdigit()</code> is false, because there is a non-digit character in that string. Iteration over a string produces 1-character strings, so there is always exactly one character in your function loop.</p>
<p>So, <code>int()</code> is never called on any non-digit, because the <a href="https://docs.python.org/3/tutorial/classes.html#generator-expressions" rel="nofollow">generator expression</a> filters out any character that is not a digit.</p>
<p>You can see what happens if you use a simple <code>for</code> loop instead:</p>
<pre><code>>>> string = 'foo 42 bar 8 1'
>>> for character in string:
... if character.isdigit():
... print(character)
...
4
2
8
1
</code></pre>
<p>Because <code>str.isdigit()</code> only returns true for strings (here consisting of just one character each) contains <em>only</em> digits.</p>
<p>Instead of a <code>for</code> loop, you could use a <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> to produce a list:</p>
<pre><code>>>> [c for c in string if c.isdigit()]
['4', '2', '8', '1']
</code></pre>
<p>Now it is easy to add that <code>int()</code> call and see the difference:</p>
<pre><code>>>> [int(c) for c in string if c.isdigit()]
[4, 2, 8, 1]
</code></pre>
<p>Because only digits are passed through, <code>int()</code> always works, it is never called on a letter.</p>
<p>Your function then uses <code>sum()</code> on those values, so for my sample string, adding up 4 + 2 + 8 + 1 is 15:</p>
<pre><code>>>> sum(int(c) for c in string if c.isdigit())
15
</code></pre>
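<p>Note that if the goal really were to <em>count</em> the digits rather than add them up, the comprehension would count ones instead of converting:</p>
<pre><code>>>> sum(1 for c in string if c.isdigit())
4
</code></pre>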
| 6 | 2016-10-11T20:34:58Z | [
"python",
"string",
"sum",
"digits"
] |
Create a query with Django | 39,986,205 | <p>Hi everyone I and try to do a query with the params pass by URL in my case the uRL is like</p>
<pre><code>http://127.0.0.1:8000/api/cpuProjects/cpp/es
http://127.0.0.1:8000/api/cpuProjects/cpp,ad/es
</code></pre>
<p>My code to create the query is like this </p>
<pre><code>def findElem(request, **kwargs):
projects_name = str(kwargs['project_name']).split(',')
status = str(kwargs['status'])
list_project = tuple(projects_name)
print(list_project)
query = "SELECT * FROM proj_cpus WHERE project in '%s'" % projects_name
print(query)
result = run_query(query)
</code></pre>
<p>The first URL produces this query:</p>
<pre><code>SELECT * FROM proj_cpus WHERE project in '['cpp']'
</code></pre>
<p>and the second one should produce a query like this:</p>
<pre><code>SELECT * FROM proj_cpus WHERE project in '['cpp', 'ad']'
</code></pre>
<p>In this case, when I execute the query, it returns a syntax error; yes, I know the [] is not correct.</p>
<p>So I converted my params into a tuple,
so now the query looks like the following</p>
<p>and the error is: </p>
<pre><code>SELECT * FROM proj_cpus WHERE project in ('cpp')
SELECT * FROM proj_cpus WHERE project in ('cpp', 'ad')
not all arguments converted during string formatting
</code></pre>
<p>What is the best way to create the query?</p>
<p>Thanks in advance</p>
| 0 | 2016-10-11T20:31:11Z | 39,986,352 | <p>I'm sorry to say it, but passing your variables directly into a query is dangerous in every language. Try something more like this; Django will take care of escaping your arguments properly for you, otherwise you open yourself up to SQL injection:</p>
<pre><code>from foo.models import CPUSModel
projects_name = ... whatever you did to get a tuple ...
results = CPUSModel.objects.raw('SELECT * FROM proj_cpus WHERE project in %s', [tuple(projects_name),])
list(results)
</code></pre>
<p>For more questions about django raw queries, you can check <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/db/sql/</a></p>
| 2 | 2016-10-11T20:39:50Z | [
"python",
"django"
] |
Create a query with Django | 39,986,205 | <p>Hi everyone I and try to do a query with the params pass by URL in my case the uRL is like</p>
<pre><code>http://127.0.0.1:8000/api/cpuProjects/cpp/es
http://127.0.0.1:8000/api/cpuProjects/cpp,ad/es
</code></pre>
<p>My code to create the query is like this </p>
<pre><code>def findElem(request, **kwargs):
projects_name = str(kwargs['project_name']).split(',')
status = str(kwargs['status'])
list_project = tuple(projects_name)
print(list_project)
query = "SELECT * FROM proj_cpus WHERE project in '%s'" % projects_name
print(query)
result = run_query(query)
</code></pre>
<p>The first URL produces this query:</p>
<pre><code>SELECT * FROM proj_cpus WHERE project in '['cpp']'
</code></pre>
<p>and the second one should produce a query like this:</p>
<pre><code>SELECT * FROM proj_cpus WHERE project in '['cpp', 'ad']'
</code></pre>
<p>In this case, when I execute the query, it returns a syntax error; yes, I know the [] is not correct.</p>
<p>So I converted my params into a tuple,
so now the query looks like the following</p>
<p>and the error is: </p>
<pre><code>SELECT * FROM proj_cpus WHERE project in ('cpp')
SELECT * FROM proj_cpus WHERE project in ('cpp', 'ad')
not all arguments converted during string formatting
</code></pre>
<p>What is the best way to create the query?</p>
<p>Thanks in advance</p>
| 0 | 2016-10-11T20:31:11Z | 39,986,527 | <p>The best thing to do is to use Django's ORM API to make queries:</p>
<pre><code>from foo.models import ProjCpu
...
projects_name = ...
...
ProjCpu.objects.filter(project__in=projects_name)
</code></pre>
<p>You can read everything about it in the Django documentation.</p>
<p>Basically That's all you need to know, but if you want to know more about the error you got, you are welcome to keep reading.</p>
<p>The error you got was caused by wrong usage of string formatting.
To pass multiple arguments to string formatting you use a tuple, like so:</p>
<pre><code>print("the %s, the %s and the %s" % ("good", "bad", "ugly"))
</code></pre>
<p>Since you supplied a tuple to the string formatting, python tried to format the items in the tuple as separate, multiple arguments. And because you specified only one "%s" in the string, there was an error.</p>
<p>In order to supply a tuple as the sole argument you must put it as the only member of another tuple:</p>
<pre><code>my_tuple = (1, 2, 3)
print("tuple: %s" % (my_tuple,))
</code></pre>
<p>or just use <code>.format</code>:</p>
<pre><code>print("tuple: {}".format(my_tuple))
</code></pre>
<p>In your case doing <code>query = "SELECT * FROM proj_cpus WHERE project in '%s'" % (projects_name,)</code> will no longer raise an error. But as mentioned before, just use Django's ORM for queries.</p>
| 1 | 2016-10-11T20:52:54Z | [
"python",
"django"
] |
Django: django.test TestCase self.assertTemplateUsed | 39,986,243 | <p>Working on <a href="http://hellowebapp.com" rel="nofollow">Hello Web App</a>.</p>
<p>Please help with TestCase: test_edit_thing.</p>
<p>Here's the error details:</p>
<p>The error in case of '/things/django-book/edit/':</p>
<pre><code># AssertionError: No templates used to render the response
response = self.client.get('/things/django-book/edit/')
self.assertTemplateUsed(response, 'things/edit_thing.html')
</code></pre>
<p>The error in case of '/things/django-book/':</p>
<pre><code># AssertionError: Template 'things/edit_thing.html' was not a template used to render the response. Actual template(s) used: things/thing_detail.html, base.html
response = self.client.get('/things/django-book/')
self.assertTemplateUsed(response, 'things/edit_thing.html')
</code></pre>
<p>Code Snippet:</p>
<pre><code>from django.contrib.auth.models import AnonymousUser, User
from django.test import TestCase, RequestFactory
from collection.models import Thing
from .views import *
class SimpleTest(TestCase):
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
self.user = User.objects.create_user(username='Foo', password='barbaz')
self.thing = Thing.objects.create(name="Django Book",
description="Learn how to build your first Django web app.",
slug="django-book",
user=self.user)
def test_index(self):
request = self.factory.get('/index/')
response = index(request)
print(response.status_code)
self.assertEqual(response.status_code, 200)
response = self.client.get('/')
self.assertTemplateUsed(response, 'base.html')
def test_thing_detail(self):
django_book = Thing.objects.get(name="Django Book")
request = self.factory.get('/things/django-book/')
response = thing_detail(request, django_book.slug)
print(response.status_code)
self.assertEqual(response.status_code, 200)
response = self.client.get('/things/django-book/')
self.assertTemplateUsed(response, 'things/thing_detail.html')
def test_edit_thing(self):
django_book = Thing.objects.get(name="Django Book")
request = self.factory.get('/things/django-book/edit/')
request.user = django_book.user
response = edit_thing(request, django_book.slug)
print(response.status_code)
self.assertEqual(response.status_code, 200)
# AssertionError: No templates used to render the response
#response = self.client.get('/things/django-book/edit/')
# AssertionError: Template 'things/edit_thing.html' was not a template used to render the response. Actual template(s) used: things/thing_detail.html, base.html
#response = self.client.get('/things/django-book/')
#self.assertTemplateUsed(response, 'things/edit_thing.html')
# Right?
response = self.client.get('/things/django-book/')
self.assertTemplateUsed(response, 'things/thing_detail.html')
</code></pre>
<p><a href="https://github.com/hellowebapp/hellowebapp-code/blob/master/collection/views.py" rel="nofollow">views.py</a></p>
<pre><code>from django.contrib.auth.decorators import login_required
from django.http import Http404
from django.shortcuts import render, redirect
from django.template.defaultfilters import slugify
from collection.forms import ThingForm
from collection.models import Thing
def index(request):
things = Thing.objects.all()
return render(request, 'index.html', {
'things': things,
})
def thing_detail(request, slug):
# grab the object...
thing = Thing.objects.get(slug=slug)
# and pass to the template
return render(request, 'things/thing_detail.html', {
'thing': thing,
})
@login_required
def edit_thing(request, slug):
# grab the object...
thing = Thing.objects.get(slug=slug)
# grab the current logged in user and make sure they're the owner of the thing
if thing.user != request.user:
raise Http404
# set the form we're using...
form_class = ThingForm
# if we're coming to this view from a submitted form,
if request.method == 'POST':
# grab the data from the submitted form
form = form_class(data=request.POST, instance=thing)
if form.is_valid():
# save the new data
form.save()
return redirect('thing_detail', slug=thing.slug)
# otherwise just create the form
else:
form = form_class(instance=thing)
# and render the template
return render(request, 'things/edit_thing.html', {
'thing': thing,
'form': form,
})
def create_thing(request):
form_class = ThingForm
# if we're coming from a submitted form, do this
if request.method == 'POST':
# grab the data from the submitted form and apply to the form
form = form_class(request.POST)
if form.is_valid():
# create an instance but do not save yet
thing = form.save(commit=False)
# set the additional details
thing.user = request.user
thing.slug = slugify(thing.name)
# save the object
thing.save()
# redirect to our newly created thing
return redirect('thing_detail', slug=thing.slug)
# otherwise just create the form
else:
form = form_class()
return render(request, 'things/create_thing.html', {
'form': form,
})
def browse_by_name(request, initial=None):
if initial:
things = Thing.objects.filter(
name__istartswith=initial).order_by('name')
else:
things = Thing.objects.all().order_by('name')
return render(request, 'search/search.html', {
'things': things,
'initial': initial,
})
</code></pre>
<p>Thanks</p>
| 0 | 2016-10-11T20:32:43Z | 39,986,769 | <p>So the problem you were having is that your user wasn't authenticated, and if the user is not the same as the user of the thing, you raise Http404.</p>
<p>It's in your <code>edit_thing</code> view:</p>
<pre><code>if thing.user != request.user:
raise Http404
</code></pre>
<p>So you need to be authenticated, and that can be done with <a href="https://docs.djangoproject.com/en/1.10/topics/testing/tools/#django.test.Client.login" rel="nofollow"><code>login</code></a>:</p>
<pre><code>def test_edit_thing(self):
django_book = Thing.objects.get(name="Django Book")
request = self.factory.get('/things/django-book/edit/')
request.user = django_book.user
response = edit_thing(request, django_book.slug)
print(response.status_code)
self.assertEqual(response.status_code, 200)
self.client.login(username='Foo', password='barbaz')
response = self.client.get('/things/django-book/edit/')
self.assertTemplateUsed(response, 'things/edit_thing.html')
</code></pre>
| 0 | 2016-10-11T21:09:03Z | [
"python",
"django",
"python-2.7",
"django-templates",
"django-testing"
] |
POSTing and GETing with grequests | 39,986,254 | <p>I'm using grequests to scrape websites faster. However, I also need to log in to the website. </p>
<p>Before (just using requests) I could do: </p>
<p>where <code>headers</code> is my <code>User-Agent</code>. </p>
<pre><code>with requests.Session() as s:
s.headers.update(headers)
s.post(loginURL, files = data)
s.get(scrapeURL)
</code></pre>
<p>Using <code>grequests</code> I've only been able to pass <code>headers</code> by doing: </p>
<pre><code>rs = (grequests.get(u, headers=header) for u in urls)
response = grequests.map(rs)
</code></pre>
<p>Is there any way to do a <code>POST</code> at the same time so I can log in? The login URL is different from the URL(s) I'm scraping. </p>
| 1 | 2016-10-11T20:33:13Z | 39,986,308 | <p>First log in with the session, then pass it explicitly to your grequests like this:</p>
<pre><code>requests = []
for url in urls:
request = grequests.AsyncRequest(
method='GET',
url=url,
session=session,
)
    requests.append(request)

# then send them all concurrently, as in the question
responses = grequests.map(requests)
</code></pre>
| 2 | 2016-10-11T20:37:09Z | [
"python",
"python-3.x",
"python-requests",
"grequests"
] |
POSTing and GETing with grequests | 39,986,254 | <p>I'm using grequests to scrape websites faster. However, I also need to log in to the website. </p>
<p>Before (just using requests) I could do: </p>
<p>where <code>headers</code> is my <code>User-Agent</code>. </p>
<pre><code>with requests.Session() as s:
s.headers.update(headers)
s.post(loginURL, files = data)
s.get(scrapeURL)
</code></pre>
<p>Using <code>grequests</code> I've only been able to pass <code>headers</code> by doing: </p>
<pre><code>rs = (grequests.get(u, headers=header) for u in urls)
response = grequests.map(rs)
</code></pre>
<p>Is there any way to do a <code>POST</code> at the same time so I can log in? The login URL is different from the URL(s) I'm scraping. </p>
| 1 | 2016-10-11T20:33:13Z | 39,986,310 | <p>You can pass in the <em>Session</em> object exactly the same way as the <em>headers</em>:</p>
<pre><code>with requests.Session() as s:
s.headers.update(headers)
s.post(loginURL, files = data)
s.get(scrapeURL)
rs = (grequests.get(u, headers=header, session=s) for u in urls)
response = grequests.map(rs)
</code></pre>
| 2 | 2016-10-11T20:37:19Z | [
"python",
"python-3.x",
"python-requests",
"grequests"
] |
Serialize Haystack SearchQuerySet | 39,986,317 | <p>I have some Django queries dumped to files for delayed execution, so I pass <code>sql_with_params</code> as a parameter and later run a <code>raw</code> query in the delayed task.</p>
<p>I have migrated all queries to Haystack, so I want to do the same with <code>SearchQuerySet</code>.</p>
<p>Is there any way to get the raw_query of an already constructed SearchQuerySet?</p>
<p>PS: I am using Elasticsearch</p>
| 0 | 2016-10-11T20:37:53Z | 39,986,564 | <p>Sure, here's one way that unfortunately requires a bit of plumbing. You can create a custom search engine and set its query to your own query definition inheriting from <code>ElasticsearchSearchQuery</code>:</p>
<pre><code>from haystack.backends.elasticsearch_backend import ElasticsearchSearchEngine, ElasticsearchSearchQuery
class ExtendedElasticsearchSearchQuery(ElasticsearchSearchQuery):
def build_query(self):
raw_query = super(ExtendedElasticsearchSearchQuery, self).build_query()
# TODO: Do something with raw query
return raw_query
class ExtendedElasticsearchSearchEngine(ElasticsearchSearchEngine):
query = ExtendedElasticsearchSearchQuery
</code></pre>
<p>and reference that from your settings:</p>
<pre><code>HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'myapp.mymodule.ExtendedElasticsearchSearchEngine',
'URL': 'http://localhost:9200/',
'INDEX_NAME': 'haystack'
},
}
</code></pre>
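<p>If you just want to inspect the raw query of an already constructed <code>SearchQuerySet</code> without wiring up a custom engine, a quick sketch that relies on Haystack's internal <code>query</code> object (the same one subclassed above) is:</p>
<pre><code>from haystack.query import SearchQuerySet

sqs = SearchQuerySet().filter(content='django')
raw_query = sqs.query.build_query()  # the backend-specific query for this SearchQuerySet
</code></pre>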
| 0 | 2016-10-11T20:54:58Z | [
"python",
"django",
"elasticsearch",
"django-haystack"
] |
How do I open a tab delimited text file in pandas so that I can then use it for exploratory data analysis? | 39,986,386 | <p>I have a tab delimited file with data about some movies. Some movie titles are enclosed in a double quote. How can I read the file into a pandas dataframe and how can I remove the double quote from the movie titles? </p>
<p>I know how to do it in R and I know how to remove the double quotes. However, is there an equivalent function in pandas to do that? But most importantly how do I load it into a pandas dataframe?</p>
 | 0 | 2016-10-11T20:42:29Z | 39,986,488 | <p>First you'll want to import pandas and read the file with a tab separator:</p>
<pre><code>import pandas

df = pandas.read_csv("file.txt", sep="\t")
</code></pre>
<p>Get rid of the double quotes in the title column with:</p>
<pre><code>df['columnwithquotes'] = df['columnwithquotes'].apply(lambda x: x.replace('"', ''))
</code></pre>
| 0 | 2016-10-11T20:50:27Z | [
"python",
"pandas"
] |
How do I open a tab delimited text file in pandas so that I can then use it for exploratory data analysis? | 39,986,386 | <p>I have a tab delimited file with data about some movies. Some movie titles are enclosed in a double quote. How can I read the file into a pandas dataframe and how can I remove the double quote from the movie titles? </p>
<p>I know how to do it in R and I know how to remove the double quotes. However, is there an equivalent function in pandas to do that? But most importantly how do I load it into a pandas dataframe?</p>
| 0 | 2016-10-11T20:42:29Z | 39,986,823 | <p>You can use <code>read_table</code> as its <code>quotechar</code> parameter is set to <code>'"'</code> by default and will so remove the double quotes.</p>
<pre><code>import pandas as pd
from io import StringIO
the_data = """
A B C D
ABC 2016-6-9 0:00 95 "foo foo"
ABC 2016-6-10 0:00 0 "bar bar"
"""
df = pd.read_table(StringIO(the_data))
print(df)
# A B C D
# 0 ABC 2016-6-9 0:00 95 foo foo
# 1 ABC 2016-6-10 0:00 0 bar bar
</code></pre>
| 0 | 2016-10-11T21:12:36Z | [
"python",
"pandas"
] |
How do I open a tab delimited text file in pandas so that I can then use it for exploratory data analysis? | 39,986,386 | <p>I have a tab delimited file with data about some movies. Some movie titles are enclosed in a double quote. How can I read the file into a pandas dataframe and how can I remove the double quote from the movie titles? </p>
<p>I know how to do it in R and I know how to remove the double quotes. However, is there an equivalent function in pandas to do that? But most importantly how do I load it into a pandas dataframe?</p>
| 0 | 2016-10-11T20:42:29Z | 39,987,002 | <p>First you can read tab delimited files using either <code>read_table</code> or <code>read_csv</code>. The former uses tab delimiter by default, for the latter you need to specify it:</p>
<pre><code>import pandas as pd
df = pd.read_csv('yourfile.txt', sep='\t')
</code></pre>
<p>Or:</p>
<pre><code>import pandas as pd
df = pd.read_table('yourfile.txt')
</code></pre>
<p>If you are receiving encoding errors it is because <code>read_table</code> doesn't understand the text encoding of the file. You can solve this by specifying the encoding directly, for example for UTF8:</p>
<pre><code>import pandas as pd
df = pd.read_table('yourfile.txt', encoding='utf8')
</code></pre>
<p>If your file is using a <a href="https://docs.python.org/2.4/lib/standard-encodings.html" rel="nofollow">different encoding</a>, you will need to specify that instead.</p>
| 0 | 2016-10-11T21:25:06Z | [
"python",
"pandas"
] |
Typical coin change program in python that asks for specific amounts of each coin | 39,986,520 | <p>Given a number "x" and a sorted array of coins "coinset", write a function that returns the amounts for each coin in the coinset that sums up to X or indicate an error if there is no way to make change for that x with the given coinset. For example, with x=7 and a coinset of [1,5,10,25]a valid answer would be {1: 7} or {1: 2, 5: 1}. With x = 3 and a coinset of [2,4] it should indicate an error.</p>
<p>I know of this code for coin change</p>
<pre><code>def change(n, coins_available, coins_so_far):
# n is target
result = []
if sum(coins_so_far) == n:
yield coins_so_far
elif sum(coins_so_far) > n:
pass
elif coins_available == []:
pass
else:
# multiple occurences of the same coin
for c in change(n, coins_available[:], coins_so_far+[coins_available[0]]):
yield c
for c in change(n, coins_available[1:], coins_so_far):
yield c
n = 5
coins = [1,5,10,25]
solutions = [s for s in change(n, coins, [])]
for s in solutions:
print(s)
</code></pre>
<p>But what if you can only use two parameters (n and coins_available). How could one possibly condense this. This is the only coin change code I've seen that actually shows the amounts for each coin. </p>
| -1 | 2016-10-11T20:52:45Z | 39,986,802 | <p>As a concept, change <strong>coins_so_far</strong> to <strong>coins_this_call</strong>.</p>
<p>Your recursion steps change to something of this ilk; although it's not complete, I hope you see the idea.</p>
<pre><code>for c in change(n-sum(coins_this_call), coins_available[:]):
yield coins_this_call.append(c)
</code></pre>
<p>You locally keep the count of coins at the current denomination -- don't pass the whole load through the recursions. When you recur, instead of passing the original amount as well as the coins already used, simply pass the <em>remaining</em> problem. When that returns with solutions, append the current level's solution, returning the augmented list to whatever called <em>this</em> instance.</p>
<p>Is that enough hint to get you moving?</p>
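<p>For reference, a minimal two-parameter sketch of that idea (just an illustration of the recursion, yielding lists of coins rather than count dictionaries):</p>
<pre><code>def change(n, coins_available):
    if n == 0:
        yield []            # exact change reached
    elif n < 0 or not coins_available:
        return              # overshot, or no coins left to try
    else:
        first = coins_available[0]
        # use the first coin (at least once more)
        for rest in change(n - first, coins_available):
            yield [first] + rest
        # or skip the first coin entirely
        for rest in change(n, coins_available[1:]):
            yield rest
</code></pre>
<p>Each yielded list can be turned into a <code>{coin: count}</code> dictionary with <code>collections.Counter</code>, and an empty result indicates that no change can be made.</p>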
| 0 | 2016-10-11T21:11:13Z | [
"python",
"algorithm",
"python-2.7",
"data-structures"
] |
Which Python version used in Django project? 2 or 3? | 39,986,615 | <p>When I open someone's Django project, it takes me a while to work out which Python version it uses. I try to look for print or print() statements, but some Django projects contain both forms.
How can I tell at a glance which version of Python a Django project uses? </p>
 | -2 | 2016-10-11T20:59:26Z | 39,986,799 | <p>It depends on which Python version the project was installed with; really, that's covered in the Django docs:
<a href="https://docs.djangoproject.com/en/1.8/intro/install/" rel="nofollow">the installation guide</a> lists which Python versions each Django release supports.</p>
| -2 | 2016-10-11T21:11:07Z | [
"python",
"django",
"version"
] |
Which Python version used in Django project? 2 or 3? | 39,986,615 | <p>When I open someone's Django project, it takes me a while to work out which Python version it uses. I try to look for print or print() statements, but some Django projects contain both forms.
How can I tell at a glance which version of Python a Django project uses? </p>
| -2 | 2016-10-11T20:59:26Z | 39,986,982 | <p>This is a list of the syntax that isn't backwards compatible with python 2. Use it to determine their version.</p>
<p><a href="https://docs.python.org/3/whatsnew/3.0.html" rel="nofollow">Whatâs New In Python 3.0</a></p>
| 0 | 2016-10-11T21:23:44Z | [
"python",
"django",
"version"
] |
what is the required order for S3 element values when making a Post request? | 39,986,653 | <p>I'm trying to upload a file to S3 by doing :</p>
<pre><code>r_response = requests.post(presigned_post["url"], json=presigned_post["fields"], files=files)
</code></pre>
<p>but I'm getting the following error:</p>
<blockquote>
<p>Bucket POST must contain a field named 'key'. If it is specified, please check the order of the fields.</p>
</blockquote>
<p>But I'm definitely including a <code>key</code> value. One other answer I saw recommended using an <code>OrderedDict</code>, which I'm trying to do, but looking through the S3 documentation below, I don't see where it specifies a required order for the key/value data when making the request. </p>
<p><a href="http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTForms.html" rel="nofollow">http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTForms.html</a>
<a href="http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html" rel="nofollow">http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html</a></p>
<p>Anyone have any advice?</p>
<p>Boto3 returns a dictionary with the element values in the following order: <code>x-amz-signature</code>, <code>x-amz-algorithm</code>, <code>key</code>, <code>x-amz-credential</code>, <code>policy</code>, and <code>x-amz-date</code> and I'm just using the same dictionary.</p>
<pre><code>def get_signed_request(title, type, track_id, file):
S3_BUCKET = os.environ.get('S3_BUCKET')
file_name = title
file_type = type
region = 'us-east-1'
s3 = boto3.client('s3', region_name=region, config=Config(signature_version='s3v4'))
presigned_post = s3.generate_presigned_post(
Bucket = S3_BUCKET,
Key = file_name
)
files = {'file': file}
r_response = requests.post(presigned_post["url"], json=presigned_post["fields"], files=files)
</code></pre>
<p>Printing the contents of <code>presigned_post</code> shows the key:</p>
<pre><code> {'fields': {'x-amz-signature': '26eff5417d0d11a25dd294b059a088e2be37a97f14713962f4240c9f4e33febb', 'x-amz-algorithm': 'AWS4-HMAC-SHA256', 'key': u'sound.m4a', 'x-amz-credential': u'<AWSAccessID>/20161011/us-east-1/s3/aws4_request', 'policy': u'eyJjb25kaXRpb25zIjogW3siYnVja2V0IjogImZ1dHVyZWZpbGVzIn0sIHsia2V5IjogInNvdW5kLm00YSJ9LCB7IngtYW16LWFsZ29yaXRobSI6ICJBV1M0LUhNQUMtU0hBMjU2In0sIHsieC1hbXotY3JlZGVudGlhbCI6ICJBS0lBSTdLRktCTkJTNEM0VktKQS8yMDE2MTAxMS91cy1lYXN0LTEvczMvYXdzNF9yZXF1ZXN0In0sIHsieC1hbXotZGF0ZSI6ICIyMDE2MTAxMVQyMDM4NDlaIn1dLCAiZXhwaXJhdGlvbiI6ICIyMDE2LTEwLTExVDIxOjM4OjQ5WiJ9', 'x-amz-date': '20161011T203849Z'}, 'url': u'https://s3.amazonaws.com/bucketname'}
</code></pre>
| 0 | 2016-10-11T21:01:50Z | 39,988,770 | <p>I was originally doing:</p>
<pre><code>r_response = requests.post(presigned_post["url"], json=presigned_post["fields"], files=files)
</code></pre>
<p>I changed the <code>json</code> to <code>data</code> and it worked. It seems that when <code>files=</code> is supplied, <code>requests</code> builds the multipart body from <code>files</code> and <code>data</code> only, so the fields passed via <code>json=</code> never reached S3, which is why it complained about the missing <code>key</code> field:</p>
<pre><code>r_response = requests.post(presigned_post["url"], data=presigned_post["fields"], files=files)
</code></pre>
<p>Unfortunately, I was dealt another error:</p>
<p><code><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message></code></p>
| 0 | 2016-10-12T00:32:54Z | [
"python",
"amazon-s3",
"boto3"
] |
f(x).subs() substitution in sympy (python) | 39,986,674 | <p>I defined symbols a and f. I'm expecting to use "a(3).subs(a,f)" to get f(3), but instead I got a(3). What's wrong with it?</p>
<pre><code>a, f = symbols('a f')
a(3).subs(a,f)
</code></pre>
| 0 | 2016-10-11T21:02:48Z | 39,986,773 | <p>You've defined <code>f</code> as a function and then you replaced it with the return value from <code>symbols()</code>.</p>
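<p>A minimal sketch of what you probably want instead: declare them as undefined functions rather than plain symbols, then swap one for the other:</p>
<pre><code>import sympy as sp

a = sp.Function('a')   # undefined function, not a Symbol
f = sp.Function('f')

expr = a(3)
print(expr.replace(a, f))   # f(3)
</code></pre>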
| 1 | 2016-10-11T21:09:07Z | [
"python",
"sympy"
] |
f(x).subs() substitution in sympy (python) | 39,986,674 | <p>I defined symbols a and f. I'm expecting to use "a(3).subs(a,f)" to get f(3), but instead I got a(3). What's wrong with it?</p>
<pre><code>a, f = symbols('a f')
a(3).subs(a,f)
</code></pre>
| 0 | 2016-10-11T21:02:48Z | 39,993,374 | <p>You can use <code>replace</code> to change the function. Here are some examples. </p>
<pre><code>import sympy as sp
f = sp.Function('f')
g = sp.Function('g')
x,y = sp.symbols('x, y')
f(x).replace(f, g)
</code></pre>
<blockquote>
<p><code>g(x)</code></p>
</blockquote>
<pre><code>f(x).replace(f, sp.sin)
</code></pre>
<blockquote>
<p><code>sin(x)</code></p>
</blockquote>
<pre><code>f(x,y).replace(f, lambda *args: sp.exp(-5 * args[0] + args[1] ))
</code></pre>
<blockquote>
<p><code>exp(-5*x + y)</code></p>
</blockquote>
<p>The <a href="http://docs.sympy.org/latest/modules/core.html#sympy.core.basic.Basic.replace" rel="nofollow">documentation</a> provides an extensive list of additional examples.</p>
| 0 | 2016-10-12T07:56:28Z | [
"python",
"sympy"
] |
Need Help for Quadratic formula on python | 39,986,707 | <p>I just started learning Python in school, here is my code for the quadratic formula solver. Problem is on line 4.</p>
<pre><code>a=int(input('a= ')) # A-stvis mnishvnelobis micema
b=int(input('b= ')) # B-stvis mnishvnelobis micema
c=int(input('c= ')) # C-stvis mnishvenlobis micema
int(a)*(x2)+int(b)*x+c=0
d=(-b2)-4*a*c
x1=-b+(d**(1/2))
x2=-b-(d**(1/2))
</code></pre>
| 0 | 2016-10-11T21:05:03Z | 39,986,789 | <pre><code>from math import sqrt
a = int(input('a= ')) # A-stvis mnishvnelobis micema
b = int(input('b= ')) # B-stvis mnishvnelobis micema
c = int(input('c= ')) # C-stvis mnishvenlobis micema
d = b**2 - 4*a*c           # discriminant; sqrt(d) raises ValueError if d < 0 (no real roots)
x1 = (-b - sqrt(d))/(2*a)
x2 = (-b + sqrt(d))/(2*a)
print("x1 =", x1)
print("x2 =", x2)
</code></pre>
<p>Your equation is not needed, and python doesn't understand it. You can comment it if you want.</p>
<p>try and use the square root (<code>sqrt</code>) instead of exponentiation (<code>**</code>)</p>
| 1 | 2016-10-11T21:10:06Z | [
"python",
"math",
"formula",
"quadratic"
] |
Search Specific Keyword From a File and add description associated to it to different file with Python | 39,986,781 | <p>Below text file shows the Description of Channel Name & the Channel Event</p>
<p><strong><em>desc.txt file:</em></strong></p>
<pre><code>Channel Name: CBS
Event Name: FIFA World Cup 2018 Qualifying
Channel Name: BEINSPORTS
Event Name: NFL
</code></pre>
<p>Below Python code only looks for the First "Event Name" and associated description. </p>
<pre><code>flist = open('/tmp/desc.txt').readlines()
for line in flist:
if line.startswith("Event Name:"):
eventname = line[12:-1]
file2 = open('/tmp/test_live.txt','w')
file2.write(eventname + "\n")
</code></pre>
<p>What I want is the output below, but what I am currently getting in the file is just
"FIFA World Cup 2018 Qualifying". </p>
<p><strong><em>test_live.txt</em></strong></p>
<pre><code>FIFA World Cup 2018 Qualifying
NFL
</code></pre>
| 0 | 2016-10-11T21:09:36Z | 39,986,896 | <p>Notice that you're opening the file in write (<code>'w'</code>) mode so the entire content of the file test_live.txt is wiped with every iteration. Use append (<code>'a'</code>) mode instead.</p>
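<p>Applied to the snippet from the question, that is essentially a one-character change (plus closing the handle), for example:</p>
<pre><code>flist = open('/tmp/desc.txt').readlines()
for line in flist:
    if line.startswith("Event Name:"):
        eventname = line[12:-1]
        file2 = open('/tmp/test_live.txt', 'a')  # 'a' appends; 'w' truncates on every open
        file2.write(eventname + "\n")
        file2.close()
</code></pre>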
| 0 | 2016-10-11T21:18:13Z | [
"python",
"file",
"python-3.x"
] |
Search Specific Keyword From a File and add description associated to it to different file with Python | 39,986,781 | <p>Below text file shows the Description of Channel Name & the Channel Event</p>
<p><strong><em>desc.txt file:</em></strong></p>
<pre><code>Channel Name: CBS
Event Name: FIFA World Cup 2018 Qualifying
Channel Name: BEINSPORTS
Event Name: NFL
</code></pre>
<p>Below Python code only looks for the First "Event Name" and associated description. </p>
<pre><code>flist = open('/tmp/desc.txt').readlines()
for line in flist:
if line.startswith("Event Name:"):
eventname = line[12:-1]
file2 = open('/tmp/test_live.txt','w')
file2.write(eventname + "\n")
</code></pre>
<p>What I want is the output below, but what I am currently getting in the file is just
"FIFA World Cup 2018 Qualifying". </p>
<p><strong><em>test_live.txt</em></strong></p>
<pre><code>FIFA World Cup 2018 Qualifying
NFL
</code></pre>
| 0 | 2016-10-11T21:09:36Z | 39,986,898 | <p>You keep opening the same file and writing one line to it. You have two options: You can either open it once outside the loop (more efficient), and close it when exiting the loop (you should close your other <code>open</code> statement too). Or you can open it with <code>'a'</code> instead of <code>'w'</code> to <em>append</em> to the file.
But I would just open it before going into the <code>for</code> loop.
Also your loop should not be indented under the <code>flist</code> line unless you use <code>with</code>... </p>
<p>Note: I think typically you should use <code>.rstrip()</code> instead of truncating the <code>\n</code> with <code>[:-1]</code> The last line might not have a newline at the end, and you would lose that character.</p>
<pre><code>with open('/tmp/desc.txt') as flist:
file2 = open('/tmp/test_live.txt','w')
for line in flist:
if line.startswith("Event Name:"):
eventname = line[12:-1]
file2.write(eventname + "\n")
file2.close()
</code></pre>
| 1 | 2016-10-11T21:18:17Z | [
"python",
"file",
"python-3.x"
] |
Search Specific Keyword From a File and add description associated to it to different file with Python | 39,986,781 | <p>Below text file shows the Description of Channel Name & the Channel Event</p>
<p><strong><em>desc.txt file:</em></strong></p>
<pre><code>Channel Name: CBS
Event Name: FIFA World Cup 2018 Qualifying
Channel Name: BEINSPORTS
Event Name: NFL
</code></pre>
<p>Below Python code only looks for the First "Event Name" and associated description. </p>
<pre><code>flist = open('/tmp/desc.txt').readlines()
for line in flist:
if line.startswith("Event Name:"):
eventname = line[12:-1]
file2 = open('/tmp/test_live.txt','w')
file2.write(eventname + "\n")
</code></pre>
<p>What I want is the output below, but what I am currently getting in the file is just
"FIFA World Cup 2018 Qualifying". </p>
<p><strong><em>test_live.txt</em></strong></p>
<pre><code>FIFA World Cup 2018 Qualifying
NFL
</code></pre>
| 0 | 2016-10-11T21:09:36Z | 39,986,998 | <p>What the previous posts indicated with the implemented code.</p>
<pre><code>flist = open('/tmp/desc.txt', 'r').readlines()
file2 = open('/tmp/test_live.txt','a')
for line in flist:
if line.startswith("Event Name:"):
eventname = line[12:-1]
file2.write(eventname + "\n")
#Close Open File handlers
flist.close()
file2.close()
</code></pre>
| 0 | 2016-10-11T21:24:48Z | [
"python",
"file",
"python-3.x"
] |
How to limit the size of pandas queries on HDF5 so it doesn't go over RAM limit? | 39,986,786 | <p>Let's say I have a pandas Dataframe</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
df
Column1 Column2
0 0.189086 -0.093137
1 0.621479 1.551653
2 1.631438 -1.635403
3 0.473935 1.941249
4 1.904851 -0.195161
5 0.236945 -0.288274
6 -0.473348 0.403882
7 0.953940 1.718043
8 -0.289416 0.790983
9 -0.884789 -1.584088
........
</code></pre>
<p>An example of a query is <code>df.query('Column1 > Column2')</code></p>
<p>Let's say you wanted to limit the size of this query's result, so the object wasn't so large. Is there a "pandas" way to accomplish this? </p>
<p>My question is primarily for querying an HDF5 object with pandas. An HDF5 object could be far larger than RAM, and therefore queries could be larger than RAM. </p>
<pre><code># file1.h5 contains only one field_table/key/HDF5 group called 'df'
store = pd.HDFStore('file1.h5')
# the following query could be too large
df = store.select('df',columns=['column1', 'column2'], where=['column1==5'])
</code></pre>
<p>Is there a pandas/Pythonic way to stop users from executing queries that surpass a certain size? </p>
| 6 | 2016-10-11T21:10:02Z | 39,986,894 | <p>Here is a small demonstration of how to use the <code>chunksize</code> parameter when calling <code>HDFStore.select()</code>:</p>
<pre><code>for chunk in store.select('df', columns=['column1', 'column2'],
where='column1==5', chunksize=10**6):
# process `chunk` DF
</code></pre>
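<p>If you want a hard cap instead of chunked iteration, <code>select()</code> also accepts <code>start</code>/<code>stop</code> row bounds (a sketch; note that the <code>where</code> condition is then applied only within that row window of the stored table):</p>
<pre><code>df = store.select('df', columns=['column1', 'column2'],
                  where='column1==5', start=0, stop=10**6)
</code></pre>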
| 3 | 2016-10-11T21:17:59Z | [
"python",
"pandas",
"dataframe",
"hdf5",
"pytables"
] |
How to check, whether exception was raised in the current scope? | 39,986,820 | <p>I use the following code to call an arbitrary callable <code>f()</code> with the appropriate number of parameters:</p>
<pre><code>try:
res = f(arg1)
except TypeError:
res = f(arg1, arg2)
</code></pre>
<p>If <code>f()</code> is a two parameter function, calling it with just one parameter raises <code>TypeError</code>, so the function can be called properly in the <code>except</code> branch.</p>
<p>The problem is when <code>f()</code> is a one-parameter function and the exception is raised in the body of <code>f()</code> (possibly because of a bad call to some function), for example:</p>
<pre><code>def f(arg):
map(arg) # raises TypeError
</code></pre>
<p>The control flow goes to the <code>except</code> branch because of the internal error of <code>f()</code>. Of course calling <code>f()</code> with two arguments raises a new <code>TypeError</code>. Then instead of traceback to the original error I get traceback to the call of <code>f()</code> with two parameters, which is much less helpful when debugging.</p>
<p>How can my code recognize exceptions raised not in the current scope to reraise them?</p>
<p>I want to write the code like this:</p>
<pre><code>try:
res = f(arg1)
except TypeError:
if exceptionRaisedNotInTheTryBlockScope(): # <-- subject of the question
raise
res = f(arg1, arg2)
</code></pre>
<p>I know I can use a workaround by adding <code>exc_info = sys.exc_info()</code> in the <code>except</code> block.</p>
<p>One of the assumptions is that I have no control over <code>f()</code> since it is given by the user of my module. Also its <code>__name__</code> attribute may be other than <code>'f'</code>. An internal exception may be raised by a bad recursive call to <code>f()</code>.
The workaround is unsuitable since it complicates debugging by the author of <code>f()</code>.</p>
| 0 | 2016-10-11T21:12:09Z | 40,005,349 | <p>You can capture the exception object and examine it.</p>
<pre><code>try:
res = f(arg1)
except TypeError as e:
if "f() missing 1 required positional argument" in e.args[0]:
res = f(arg1, arg2)
else:
raise
</code></pre>
<p>Frankly, though, not going the extra length to classify the exception should work fine as you should be getting both the original traceback and the secondary traceback if the error originated inside f() -- debugging should not be a problem.</p>
<p>Also, if you have control over f() you can make second argument optional and not have to second-guess it:</p>
<pre><code>def f(a, b=None):
pass
</code></pre>
<p>Now you can call it either way.</p>
<p>How about this:</p>
<pre><code>import inspect
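# note: on Python 3, inspect.getargspec() is deprecated; inspect.signature() is the modern replacement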
if len(inspect.getargspec(f).args) == 1:
res = f(arg1)
else:
res = f(arg1, arg2)
</code></pre>
| 0 | 2016-10-12T17:59:42Z | [
"python",
"python-2.7",
"python-3.x",
"debugging",
"exception-handling"
] |
How to check, whether exception was raised in the current scope? | 39,986,820 | <p>I use the following code to call an arbitrary callable <code>f()</code> with the appropriate number of parameters:</p>
<pre><code>try:
res = f(arg1)
except TypeError:
res = f(arg1, arg2)
</code></pre>
<p>If <code>f()</code> is a two parameter function, calling it with just one parameter raises <code>TypeError</code>, so the function can be called properly in the <code>except</code> branch.</p>
<p>The problem is when <code>f()</code> is one-parameter function and the exception is raisen in the body of <code>f()</code> (possibly because of a bad call to some function), for example:</p>
<pre><code>def f(arg):
map(arg) # raises TypeError
</code></pre>
<p>The control flow goes to the <code>except</code> branch because of the internal error of <code>f()</code>. Of course calling <code>f()</code> with two arguments raises a new <code>TypeError</code>. Then instead of traceback to the original error I get traceback to the call of <code>f()</code> with two parameters, which is much less helpful when debugging.</p>
<p>How can my code recognize exceptions raised not in the current scope to reraise them?</p>
<p>I want to write the code like this:</p>
<pre><code>try:
res = f(arg1)
except TypeError:
if exceptionRaisedNotInTheTryBlockScope(): # <-- subject of the question
raise
res = f(arg1, arg2)
</code></pre>
<p>I know I can use a workaround by adding <code>exc_info = sys.exc_info()</code> in the <code>except</code> block.</p>
<p>One of the assumptions is that I have no control over <code>f()</code> since it is given by the user of my module. Also its <code>__name__</code> attribute may be other than <code>'f'</code>. An internal exception may be raised by a bad recursive call to <code>f()</code>.
The workaround is unsuitable since it complicates debugging by the author of <code>f()</code>.</p>
| 0 | 2016-10-11T21:12:09Z | 40,044,648 | <p>Finally figured it out:</p>
<pre><code>def exceptionRaisedNotInTheTryBlockScope():
return sys.exc_info()[2].tb_next is not None
</code></pre>
<p><code>sys.exc_info()</code> returns a 3-element <code>tuple</code>. Its last element is the traceback of the last exception. If the traceback object is the only one in the traceback chain, then the exception has been raisen in the scope of the <code>try</code> block (<a href="https://docs.python.org/2/reference/datamodel.html" rel="nofollow">https://docs.python.org/2/reference/datamodel.html</a>).</p>
<p>According to <a href="https://docs.python.org/2/library/sys.html#sys.exc_info" rel="nofollow">https://docs.python.org/2/library/sys.html#sys.exc_info</a>, it should be avoided to store the traceback value.</p>
| 0 | 2016-10-14T13:40:11Z | [
"python",
"python-2.7",
"python-3.x",
"debugging",
"exception-handling"
] |
Is there a way to check if a string is a valid filter for a django queryset? | 39,986,829 | <p>I'm trying to add some functionality to give a user the ability to filter a paginated queryset in Django via URL get parameters, and have got this successfully working:</p>
<pre><code>for f in self.request.GET.getlist('f'):
try:
k,v = f.split(':', 1)
queryset = queryset.filter(**{k:v})
except:
pass
</code></pre>
<p>However, I am hoping to do so in a way that doesn't use <code>try/except</code> blocks. Is there a standard way in django to check if a string is a valid filter parameter?</p>
<p>For example something like:</p>
<pre><code>my_str = "bad_string_not_in_database"
if some_queryset.is_valid_filter_string(my_str):
some_queryset.filter(**{my_str:100})
</code></pre>
| 0 | 2016-10-11T21:13:07Z | 39,990,884 | <p>The short answer is no, but there are other options.</p>
<p>Django does not provide, nor make it easy to create, the kind of validation function you're asking about. There are not just fields and forward relationships that you can filter on, but also reverse relationships, for which a <code>related_name</code> or a <code>related_query_name</code> on a field in a completely different model might be the <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ForeignKey.related_name" rel="nofollow">valid way</a> to filter your querysets. And there are various filtering mechanisms, like <code>iexact</code>, <code>startswith</code>, <code>regex</code>, etc., that are valid as postfixes to those relationship names. So, to validate everything correctly, you would need to replicate a lot of Django's internal parsing code and that would be a big mistake.</p>
<p>If you just want to filter by this model's fields and forward relationships, you can use <code>hasattr(SomeModel, my_str)</code>, but that won't always work correctly (your model has other attributes besides fields, such as methods and properties).</p>
<p>Instead of doing a blanket <code>except: pass</code>, you can at least catch the specific error that will be thrown when an invalid string is used in the <a href="https://docs.djangoproject.com/en/1.10/topics/db/queries/#field-lookups" rel="nofollow">filtering kwargs</a> (for an unknown field or lookup this is <code>django.core.exceptions.FieldError</code>). You can also return a 400 error to let the client know that their request was not valid, instead of silently continuing with the un-filtered queryset.</p>
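<p>A minimal sketch of that idea, built on the loop from the question (assuming it lives in something like <code>get_queryset</code>, where raising <code>SuspiciousOperation</code> is an easy way to end up with a 400 response):</p>
<pre><code>from django.core.exceptions import FieldError, SuspiciousOperation

for f in self.request.GET.getlist('f'):
    try:
        k, v = f.split(':', 1)
        queryset = queryset.filter(**{k: v})
    except (ValueError, FieldError):
        raise SuspiciousOperation('invalid filter: %s' % f)
</code></pre>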
<p>My preferred solution would be to outsource this kind of boilerplate, generalizable logic to a library, such as <a href="https://github.com/AltSchool/dynamic-rest#field-based-filtering" rel="nofollow">dynamic-rest</a>.</p>
| 0 | 2016-10-12T05:06:50Z | [
"python",
"django",
"django-queryset",
"django-orm"
] |
Is there a way to check if a string is a valid filter for a django queryset? | 39,986,829 | <p>I'm trying to add some functionality to give a user the ability to filter a paginated queryset in Django via URL get parameters, and have got this successfully working:</p>
<pre><code>for f in self.request.GET.getlist('f'):
try:
k,v = f.split(':', 1)
queryset = queryset.filter(**{k:v})
except:
pass
</code></pre>
<p>However, I am hoping to do so in a way that doesn't use <code>try/except</code> blocks. Is there a standard way in django to check if a string is a valid filter parameter?</p>
<p>For example something like:</p>
<pre><code>my_str = "bad_string_not_in_database"
if some_queryset.is_valid_filter_string(my_str):
some_queryset.filter(**{my_str:100})
</code></pre>
| 0 | 2016-10-11T21:13:07Z | 40,004,003 | <p>You can start by looking at the field names:</p>
<pre><code>qs.model._meta.get_all_field_names()
</code></pre>
<p>You are also probably going to want to work with the extensions such as <code>field__icontains</code>, <code>field__gte</code> etc. So more work will be required. </p>
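<p>For example, a sketch that validates only the leading field name (everything before a lookup suffix such as <code>__icontains</code>); note that <code>get_all_field_names()</code> was removed in Django 1.10, where <code>get_fields()</code> is the replacement:</p>
<pre><code>valid_names = {field.name for field in qs.model._meta.get_fields()}

k, v = f.split(':', 1)
if k.split('__')[0] in valid_names:
    qs = qs.filter(**{k: v})
</code></pre>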
<p><em>Disclaimer</em>: <code>try</code>/<code>except</code> is the far superior way. I don't know why you want to dismiss this method. </p>
| 1 | 2016-10-12T16:44:35Z | [
"python",
"django",
"django-queryset",
"django-orm"
] |
python exceptions with same name but different subclassing | 39,986,849 | <p>I am reading the <a href="https://docs.python.org/2/library/exceptions.html" rel="nofollow">python doc</a> and it mentions that </p>
<blockquote>
<p>Two exception classes that are not related via subclassing are never equivalent, even if they have the same name.</p>
</blockquote>
<p>I'm not sure why it is possible to have two exception classes with the same name but different subclassing. Shouldn't some kind of error/warning be generated in that case?</p>
 | 0 | 2016-10-11T21:14:39Z | 39,986,955 | <p>Exceptions are just specific types of classes. Class names are simply what they are defined as. Forbidding classes to have the same name would break lots of coding schemes. One such example actually works on exceptions: programs that need to be backwards compatible with python2.6 will often override <code>subprocess.CalledProcessError</code> to conform to the python2.7/3.X interface.</p>
<p>How can you have two exceptions of the same name but different subclassing? You are for example free to do the following:</p>
<pre><code>class ExceptoPatronum(KeyError):
pass
KExcept = ExceptoPatronum
class ExceptoPatronum(OSError):
pass
OExcept = ExceptoPatronum
</code></pre>
<p>The classes are named the same but neither equal nor instances of each other:</p>
<pre><code>print(KExcept.__name__)
print(OExcept.__name__)
print(KExcept == OExcept, KExcept is OExcept)
</code></pre>
<p>This is a (contrived) example that runs even with just one file. However, imagine you have two separate modules which each define their own custom class with the same name, let's say <code>ResourceUnavailable</code>.</p>
<p>As long as they are separate, why should users be warned about such internals? If another module relies on both, would you require it to replace them? It would be a nightmare to track such name collisions.</p>
| 1 | 2016-10-11T21:21:55Z | [
"python",
"exception"
] |
spark dataframe read text file without headers | 39,986,893 | <p>I have a text file with no headers. How can I read it using the Spark DataFrame API and specify the headers? Is there a way to specify my schema?</p>
<pre><code>sample_data = spark.read.option("header", "false").text(sample)

print "Data size is {}".format(sample_data.count())
print type(sample_data)
print sample_data.take(2)
</code></pre>
| -2 | 2016-10-11T21:17:56Z | 39,987,744 | <p>First, save your file as csv. You can specify the schema:</p>
<pre><code>from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType

schema = StructType([
    StructField("column1", StringType(), True),
    StructField("column2", DoubleType(), True),
    StructField("column3", IntegerType(), True)])
</code></pre>
<p>And so on.
If you're using spark 2.0 +:</p>
<pre><code>spark.read.csv(
"file.csv", header=True, schema=schema
)
</code></pre>
<p>If you're using spark < 2.0:</p>
<pre><code>sales = sqlContext.read.format('com.databricks.spark.csv')\
.options(header='true', delimiter='whatever youre using as delimiter')\
.load('file.csv', schema = schema)
</code></pre>
| 0 | 2016-10-11T22:26:40Z | [
"python",
"apache-spark",
"dataframe",
"pyspark"
] |
Making a box in python with controlled inputs from user | 39,986,902 | <p>I am trying to make a box where the user inputs the width, height, what kind of symbol the box should be made out of and the fill(inside the box). I am a new python coder, so any suggestions would be great, but the more novice level responses the better so i may learn and not skip into far advance techniques.</p>
<pre><code> def main():
width = print(int("Please enter the width of the box: "))
height = print(int("Please enter the height of the box: "))
symbol = print("Please enter the symbol for the box outline: ")
fill = print("Please enter the symbol for the box fill: ")
for a in range(width):
for b in range(height):
if i in #some condition here
print(symbol)
else:
print(fill)
main()
</code></pre>
<p>My projected input should be:</p>
<pre><code>width: 4
height: 4
symbol: #
fill:1
####
#11#
#11#
####
</code></pre>
| -1 | 2016-10-11T21:18:38Z | 39,987,042 | <p>Use <code>input("Enter number")</code> to get input from the user. You should first loop on height then on width. To print with no new-line use <code>end=""</code> as a parameter to <code>print</code>. You used <code>i</code> instead of <code>b</code> and <code>a</code>. That's about it I think. Next time ask more specific questions.</p>
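<p>Putting those points together, a sketch built directly on the code from the question:</p>
<pre><code>def main():
    width = int(input("Please enter the width of the box: "))
    height = int(input("Please enter the height of the box: "))
    symbol = input("Please enter the symbol for the box outline: ")
    fill = input("Please enter the symbol for the box fill: ")

    for row in range(height):            # rows first...
        for col in range(width):         # ...then columns within each row
            if row in (0, height - 1) or col in (0, width - 1):
                print(symbol, end="")    # border
            else:
                print(fill, end="")      # interior
        print()                          # newline at the end of each row

main()
</code></pre>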
| 0 | 2016-10-11T21:27:50Z | [
"python"
] |
Making a box in python with controlled inputs from user | 39,986,902 | <p>I am trying to make a box where the user inputs the width, height, what kind of symbol the box should be made out of and the fill(inside the box). I am a new python coder, so any suggestions would be great, but the more novice level responses the better so i may learn and not skip into far advance techniques.</p>
<pre><code> def main():
width = print(int("Please enter the width of the box: "))
height = print(int("Please enter the height of the box: "))
symbol = print("Please enter the symbol for the box outline: ")
fill = print("Please enter the symbol for the box fill: ")
for a in range(width):
for b in range(height):
if i in #some condition here
print(symbol)
else:
print(fill)
main()
</code></pre>
<p>My projected input should be:</p>
<pre><code>width: 4
height: 4
symbol: #
fill:1
####
#11#
#11#
####
</code></pre>
 | -1 | 2016-10-11T21:18:38Z | 39,987,107 | <pre><code>def main():
    # input is your friend here (wrap it in int() for the numeric values)
    width = int(input("Please enter the width of the box: "))
    #width = print(int("Please enter the width of the box: "))
    # input etc..
    height = int(input("Please enter the height of the box: "))
    symbol = input("Please enter the symbol for the box outline: ")
    fill = input("Please enter the symbol for the box fill: ")
    #since you'll be printing rows of text you should probably flip these loops
    for row in range(height):
    #for a in range(width):
        for col in range(width):
        #for b in range(height):
            # i ??? huh where did this come from ?
            #if i in [0, width-1] or [0, height-1]:
            # descriptive variables can only help
            if row in [0,height-1] or col in [0,width-1]:
                print(symbol, end="")   # end="" keeps each row on one line
            else:
                print(fill, end="")
        print()                         # newline after finishing a row
</code></pre>
| 2 | 2016-10-11T21:32:41Z | [
"python"
] |
Making a box in python with controlled inputs from user | 39,986,902 | <p>I am trying to make a box where the user inputs the width, height, what kind of symbol the box should be made out of and the fill(inside the box). I am a new python coder, so any suggestions would be great, but the more novice level responses the better so i may learn and not skip into far advance techniques.</p>
<pre><code> def main():
width = print(int("Please enter the width of the box: "))
height = print(int("Please enter the height of the box: "))
symbol = print("Please enter the symbol for the box outline: ")
fill = print("Please enter the symbol for the box fill: ")
for a in range(width):
for b in range(height):
if i in #some condition here
print(symbol)
else:
print(fill)
main()
</code></pre>
<p>My projected input should be:</p>
<pre><code>width: 4
height: 4
symbol: #
fill:1
####
#11#
#11#
####
</code></pre>
| -1 | 2016-10-11T21:18:38Z | 39,987,997 | <pre><code>def main():
width = int(input("Please enter the width of the box: "))
height = int(input("Please enter the height of the box: "))
symbol = input("Please enter the symbol for the box outline: ")
fill = input("Please enter the symbol for the box fill: ")
dictionary = []
for row in range(height):
for col in range(width):
if row in [0, height-1] or col in [0, width-1]:
dictionary.append(symbol)
else:
dictionary.append(fill)
def slice_per(source, step):
return [source[i::step] for i in range(step)]
sliced = slice_per(dictionary, width)
for x in range(len(sliced)):
print("".join(sliced[x]), end="\n")
main()
</code></pre>
<p>Output - 5, 5, #, 0</p>
<pre><code>#####
#000#
#000#
#000#
#####
</code></pre>
| 0 | 2016-10-11T22:53:02Z | [
"python"
] |
Invalid Syntax error python 3.5.2 | 39,986,911 | <p>I'm getting syntax error in this particular line.</p>
<pre><code>print "Sense %i:" %(i),
</code></pre>
<p><strong>Full code:</strong></p>
<pre><code>for i in range(len(meas)):
p = sense(p, meas[i])
r = [format(j,'.3f') for j in p]
print "Sense %i:" % (i),
print r,
entropy(p)
p = move(p, mov[i])
r = [format(j,'.3f') for j in p]
print "Move %i:" % (i),
print r,
entropy(p)
print
</code></pre>
| 0 | 2016-10-11T21:19:14Z | 39,987,024 | <p>Try this:</p>
<pre><code>print ("Sense %s:" %i)
</code></pre>
<p>Will work just fine</p>
| 0 | 2016-10-11T21:26:16Z | [
"python",
"python-3.x",
"syntax"
] |
Invalid Syntax error python 3.5.2 | 39,986,911 | <p>I'm getting syntax error in this particular line.</p>
<pre><code>print "Sense %i:" %(i),
</code></pre>
<p><strong>Full code:</strong></p>
<pre><code>for i in range(len(meas)):
p = sense(p, meas[i])
r = [format(j,'.3f') for j in p]
print "Sense %i:" % (i),
print r,
entropy(p)
p = move(p, mov[i])
r = [format(j,'.3f') for j in p]
print "Move %i:" % (i),
print r,
entropy(p)
print
</code></pre>
| 0 | 2016-10-11T21:19:14Z | 39,987,030 | <p>In python 3, print is a function, you have to use brackets: <code>print("Sense %i:" %(i))</code></p>
| 0 | 2016-10-11T21:26:40Z | [
"python",
"python-3.x",
"syntax"
] |
Invalid Syntax error python 3.5.2 | 39,986,911 | <p>I'm getting syntax error in this particular line.</p>
<pre><code>print "Sense %i:" %(i),
</code></pre>
<p><strong>Full code:</strong></p>
<pre><code>for i in range(len(meas)):
p = sense(p, meas[i])
r = [format(j,'.3f') for j in p]
print "Sense %i:" % (i),
print r,
entropy(p)
p = move(p, mov[i])
r = [format(j,'.3f') for j in p]
print "Move %i:" % (i),
print r,
entropy(p)
print
</code></pre>
| 0 | 2016-10-11T21:19:14Z | 39,987,093 | <p>Several things:</p>
<ul>
<li>In Python 3, <code>print</code> is a function, no more a statement like it was with Python 2. So, you need to add parenthesis to call the function,</li>
<li>the <code>%</code> operator used for string formatting is discouraged in new code. With Python 3, you should prefer the <code>format</code> method (or the <code>format</code> function),</li>
<li>the <code>%</code> operator usually takes a <code>tuple</code> as its second operand: the expression "(i)" is not a <code>tuple</code> but just a parenthesised value. In Python, a one-element <code>tuple</code> needs a trailing comma, like this: "(i,)".</li>
<li>use the keyword argument <code>end=""</code> to replace the newline by an empty string (but I'm not sure this is what you want)</li>
</ul>
<p>So, you can replace your code by:</p>
<pre><code>print("Sense {}:".format(i), end="")
</code></pre>
<p><strong>EDIT: add code from comment</strong></p>
<p>Your code should be converted in Python 3 like bellow:</p>
<pre><code>for i in range(len(meas)):
p = sense(p, meas[i])
r = [format(j,'.3f') for j in p]
print("Sense {0}:".format(i), end="")
print(r, end="")
entropy(p)
p = move(p, mov[i])
r = [format(j,'.3f') for j in p]
print("Move {0}:".format(i), end="")
print(r, end="")
entropy(p)
print()
</code></pre>
| 1 | 2016-10-11T21:31:25Z | [
"python",
"python-3.x",
"syntax"
] |
Pandas Multiple columns same name | 39,986,925 | <p>I am creating a <code>dataframe</code> from a <code>csv</code>. I have gone through the docs, multiple <code>SO</code> posts and links, as I have just started with <code>Pandas</code>, but didn't get it. The csv has multiple columns with the same name, say <code>a</code>.</p>
<p>So after forming the <code>dataframe</code>, when I do <code>df['a']</code>, which value will it return? It does not return all values.</p>
<p>Also, only one of the values will have a string, the rest will be <code>None</code>. How can I get that column?</p>
| 4 | 2016-10-11T21:19:43Z | 39,986,959 | <p>the relevant parameter is <code>mangle_dupe_cols</code></p>
<p>from the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">docs</a></p>
<blockquote>
<pre><code>mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X'
</code></pre>
</blockquote>
<p>by default, all of your <code>'a'</code> columns get named <code>'a.0'...'a.N'</code> as specified above.</p>
<p>if you used <code>mangle_dupe_cols=False</code>, importing this <code>csv</code> would produce an error.</p>
<p>you can get all of your columns with </p>
<pre><code>df.filter(like='a')
</code></pre>
<hr>
<p><strong><em>demonstration</em></strong></p>
<pre><code>from StringIO import StringIO
import pandas as pd
txt = """a, a, a, b, c, d
1, 2, 3, 4, 5, 6
7, 8, 9, 10, 11, 12"""
df = pd.read_csv(StringIO(txt), skipinitialspace=True)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/iQhUw.png" rel="nofollow"><img src="https://i.stack.imgur.com/iQhUw.png" alt="enter image description here"></a></p>
<pre><code>df.filter(like='a')
</code></pre>
<p><a href="https://i.stack.imgur.com/1jhmQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/1jhmQ.png" alt="enter image description here"></a></p>
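<p>If, as in your case, only one of those duplicate columns holds a value per row and the rest are <code>None</code>, a quick sketch to collapse them into a single column is to back-fill across the columns and keep the first one:</p>
<pre><code>dup = df.filter(like='a')
collapsed = dup.bfill(axis=1).iloc[:, 0]   # first non-null value per row across the 'a' columns
</code></pre>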
| 5 | 2016-10-11T21:22:15Z | [
"python",
"python-2.7",
"csv",
"pandas"
] |
Variables reset with every recursive call so function doesn't work. (Python) | 39,986,949 | <p>Code: </p>
<p>(counter and storage are reset every time the function is called recursively)</p>
<pre><code>def bin_to_dec(b):
'''Takes a string b that represents a binary number and uses recursion
to convert the number from binary to decimal.'''
counter = -1
storage = 0
if b == 0:
return '0'
elif b == 1:
return '1'
else:
if b[-1] == 1:
counter += 1
storage += 2**counter
return storage + bin_to_dec(b[:-1])
else:
counter += 1
return bin_to_dec(b[:-1])
</code></pre>
<p>So I am writing a function that converts binary numbers to decimal numbers but every time the function is called recursively the variables counter and storage are reset. I must use recursion and can't use anything I haven't learned yet such as map or key. </p>
| -1 | 2016-10-11T21:21:40Z | 39,987,215 | <p>Anything you want to be preserved through function calls should not be local to the function. There are many ways to preserve values between calls. The recommended way in your case is to pass it as a parameter to the function.</p>
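<p>A sketch of that approach for your function, carrying the state as (default) parameters; note that the last character has to be compared against the string <code>'1'</code>, not the integer <code>1</code>:</p>
<pre><code>def bin_to_dec(b, counter=0, storage=0):
    if b == '':                  # all bits consumed
        return storage
    if b[-1] == '1':             # compare against the character, not the int
        storage += 2 ** counter
    return bin_to_dec(b[:-1], counter + 1, storage)

print(bin_to_dec('1011'))  # 11
</code></pre>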
| 0 | 2016-10-11T21:40:44Z | [
"python"
] |
How to distribute Python libraries to users without internet | 39,986,952 | <p>Package managers like conda, pip and their online repositories make distributing packages easy and robust. But I am looking for ways to distribute to users that want to install and run my library on machines that are deliberately disconnected from internet for security purposes.</p>
<p>I am to assume these computers don't have Python or any other packages or package managers like conda installed. I am also looking for recommended workflows for bundling my dependencies with the package as well.</p>
| 0 | 2016-10-11T21:21:47Z | 39,987,141 | <p>There are many options:</p>
<ol>
<li>Create a pip repository in the offline network.</li>
<li>Deploy your project with its dependencies. Use setuptools to create a <code>setup.py</code> file for easy installation.</li>
<li>Use py2exe to create an executable instead of a python program.</li>
</ol>
| 0 | 2016-10-11T21:34:51Z | [
"python",
"anaconda",
"software-distribution"
] |
How to distribute Python libraries to users without internet | 39,986,952 | <p>Package managers like conda, pip and their online repositories make distributing packages easy and robust. But I am looking for ways to distribute to users that want to install and run my library on machines that are deliberately disconnected from internet for security purposes.</p>
<p>I am to assume these computers don't have Python or any other packages or package managers like conda installed. I am also looking for recommended workflows for bundling my dependencies with the package as well.</p>
| 0 | 2016-10-11T21:21:47Z | 39,987,208 | <p>Here are the steps:</p>
<ol>
<li>Create a virtual environment for the project. This link will help you create virtual environments <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a></li>
<li>Add all the libraries that you want to use to this environment on your system.</li>
<li>Then zip the virtual environment folder and send it over. They can just use the virtual environment that you sent. </li>
</ol>
| -1 | 2016-10-11T21:40:11Z | [
"python",
"anaconda",
"software-distribution"
] |
Graceful interrupt of EventLoop.sock_accept() | 39,986,978 | <p>Consider the following code, how to stop execution in <code>listen()</code>? it seems to hang after <code>sock.close()</code> being called. No exceptions are raised</p>
<pre><code>#!/usr/bin/env python3.5
import asyncio, socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 8080))
sock.listen()
sock.setblocking(0)
async def listen():
print('listen')
try:
while True:
await asyncio.get_event_loop().sock_accept(sock)
print('accepted')
except:
print('exc')
async def stop():
await asyncio.sleep(1)
sock.close()
print('stopped')
asyncio.ensure_future(listen())
asyncio.ensure_future(stop())
asyncio.get_event_loop().run_forever()
</code></pre>
| 0 | 2016-10-11T21:23:27Z | 39,994,877 | <p>Closing the socket or removing the file descriptor using <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.remove_reader" rel="nofollow">loop.remove_reader</a> does not notify <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.sock_accept" rel="nofollow">loop.sock_accept</a>.</p>
<p>Either cancel it explicitly:</p>
<pre><code># Listening
accept_future = asyncio.ensure_future(loop.sock_accept(sock))
await accept_future
[...]
# Closing
loop.remove_reader(sock.fileno())
sock.close()
accept_future.cancel()
</code></pre>
<p>or use higher-level coroutines such as <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.create_server" rel="nofollow">loop.create_server</a> or <a href="https://docs.python.org/3/library/asyncio-stream.html#asyncio.start_server" rel="nofollow">asyncio.start_server</a>:</p>
<pre><code># Using a protocol factory
server = await loop.create_server(protocol_factory, sock=sock)
# Using callback and streams
server = await asyncio.start_server(callback, sock=sock)
</code></pre>
| 1 | 2016-10-12T09:15:37Z | [
"python",
"sockets",
"python-asyncio"
] |
Extract the string after title in BeautifulSoup | 39,986,997 | <p>html result is <code><div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]"></code></p>
<p>How do I pull out <code>"XIAMEN [CN]"</code> right after <code>title</code>. I tried <code>find_all('title')</code> but that does not return a match. Nor can I call any from of <code>siblings</code> to traverse my way down the result. I couldn't even get <code>find(text='XIAMEN [CN]')</code> to return anything. </p>
| 1 | 2016-10-11T21:24:47Z | 39,987,083 | <pre><code>from bs4 import BeautifulSoup
myHTML = 'what you posted above'
soup = BeautifulSoup(myHTML, "html5lib")
title = soup.find('div')['title']
</code></pre>
<p>We're just searching for <code><div></code> tags here, you'll probably want to be more specific in vivo.</p>
| 0 | 2016-10-11T21:30:39Z | [
"python",
"html",
"beautifulsoup"
] |
Extract the string after title in BeautifulSoup | 39,986,997 | <p>html result is <code><div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]"></code></p>
<p>How do I pull out <code>"XIAMEN [CN]"</code> right after <code>title</code>. I tried <code>find_all('title')</code> but that does not return a match. Nor can I call any from of <code>siblings</code> to traverse my way down the result. I couldn't even get <code>find(text='XIAMEN [CN]')</code> to return anything. </p>
| 1 | 2016-10-11T21:24:47Z | 39,987,238 | <p>Slightly safer way than the other answer</p>
<pre><code>from bs4 import BeautifulSoup
myHTML = 'what you posted above'
soup = BeautifulSoup(myHTML, "html5lib")
div = soup.find('div')
title = div.get('title', '') # safe way to check for the title, incase the div doesn't contain it
</code></pre>
| 0 | 2016-10-11T21:42:42Z | [
"python",
"html",
"beautifulsoup"
] |
Extract the string after title in BeautifulSoup | 39,986,997 | <p>html result is <code><div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]"></code></p>
<p>How do I pull out <code>"XIAMEN [CN]"</code> right after <code>title</code>. I tried <code>find_all('title')</code> but that does not return a match. Nor can I call any from of <code>siblings</code> to traverse my way down the result. I couldn't even get <code>find(text='XIAMEN [CN]')</code> to return anything. </p>
| 1 | 2016-10-11T21:24:47Z | 39,987,712 | <p>You should use the class or some attribute to select the div, calling <code>find("div")</code> would select the first div on the page, also <em>title</em> is an attribute not a tag so you need to access the <em>title attribute</em> once you locate the tag. A few of examples of how to be specific and extract the <em>attribute</em>:</p>
<pre><code>html = """<div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]">"""
soup = BeautifulSoup(html, "html.parser")
# use the css classes
print(soup.find("div", class_="font-160 line-110")["title"])
# use an attribute value
print(soup.find("div", {"data-container": ".snippet container"})["title"])
</code></pre>
<p>If there is only one div with an attribute, look for the div setting <em>title=True</em>:</p>
<pre><code>soup.find("div", title=True)
</code></pre>
<p>You can also combine the steps, i.e the class and one or more attributes.</p>
| 1 | 2016-10-11T22:24:10Z | [
"python",
"html",
"beautifulsoup"
] |
Creating dynamic variables for XML Parsing | 39,987,001 | <p>I'm incredibly new at this, and I've tried searching but nothing I've found has been able to work for me.</p>
<p>I have xml data that looks like this</p>
<pre><code><datainfo>
<data>
<info State="1" Reason="x" Start="01/01/2016 00:00:00.000" End="01/01/2016 02:00:00.000"></info>
<info State="1" Reason="y" Start="01/01/2016 02:00:00.000" End="01/01/2016 02:01:00.000">
<moreinfo Start="01/01/2016 02:00:00.000" End="01/01/2016 02:00:30.000"/>
<moreinfo Start="01/01/2016 02:00:30.000" End="01/01/2016 02:01:00.000"/>
</info>
<info State="2" Start="01/01/2016 02:01:00.000" End="01/01/2016 02:10:00.000"></info>
...
</data>
</datainfo>
</code></pre>
<p>I want find how much time was spent in State {1,2,...} for reason {x,y,...} on a specific day and have that print to a .csv format to be latter read in excel. </p>
<p>The issue I'm having is I can't use static variables because there are hundreds of different states for hundreds of different reasons, and they change constantly. </p>
<p>If I'm not clear please tell me, I am brand new to this and really appreciate any and all help.</p>
<p>Edit: Here is what I currently have, hopefully this will clear up what I'm trying to do.</p>
<pre><code>from datetime import datetime
from lxml import etree as ET
def parseXML(file):
handler = open(file, "r")
tree = ET.parse(handler)
info_list = tree.xpath('//info')
root = tree.getroot()
dictionary = {}
info_len = len(info_list)
for i in range(info_len):
info=root[0][0][i]
info_attribs = info.attrib
end = info_attribs[u'End']
start = info_attribs[u'Start']
FMT = '%m/%d/%Y %H:%M:%S.%f'
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
t_dif = (tdelta.total_seconds()) / 60
try:
dictionary[info_attribs[u'State'] + status_attribs[u'Reason']] = t_dif
except:
continue
</code></pre>
<p>I'm trying to iterate through each line, find the State and the reason, then add them to a dictionary. If the entry already exists for that state and reason, I want to add it to the current value.</p>
<p>Let me know if I should provide more info!</p>
<p>Edit #2:</p>
<p>The output I'm looking for would be in the form of a .csv, stuctured like this:</p>
<pre><code>State - Reason, [Total time spent in State 1 for x reason]
</code></pre>
| 2 | 2016-10-11T21:25:06Z | 39,987,458 | <p>This is assuming you have your xml parsed into an array of arrays</p>
<pre><code>import csv
# This is assuming you have your xml parsed into an array of arrays [['state', 'reason'], ['state', 'reason']]
# example of array format
data = [['1', 'x'], ['1', 'y'], ['2', 'z']]
with open("output.csv", "w") as f:
writer = csv.writer(f)
writer.writerows(data)
</code></pre>
| 0 | 2016-10-11T21:59:54Z | [
"python",
"xml",
"python-3.x",
"parsing",
"lxml"
] |
Creating dynamic variables for XML Parsing | 39,987,001 | <p>I'm incredibly new at this, and I've tried searching but nothing I've found has been able to work for me.</p>
<p>I have xml data that looks like this</p>
<pre><code><datainfo>
<data>
<info State="1" Reason="x" Start="01/01/2016 00:00:00.000" End="01/01/2016 02:00:00.000"></info>
<info State="1" Reason="y" Start="01/01/2016 02:00:00.000" End="01/01/2016 02:01:00.000">
<moreinfo Start="01/01/2016 02:00:00.000" End="01/01/2016 02:00:30.000"/>
<moreinfo Start="01/01/2016 02:00:30.000" End="01/01/2016 02:01:00.000"/>
</info>
<info State="2" Start="01/01/2016 02:01:00.000" End="01/01/2016 02:10:00.000"></info>
...
</data>
</datainfo>
</code></pre>
<p>I want find how much time was spent in State {1,2,...} for reason {x,y,...} on a specific day and have that print to a .csv format to be latter read in excel. </p>
<p>The issue I'm having is I can't use static variables because there are hundreds of different states for hundreds of different reasons, and they change constantly. </p>
<p>If I'm not clear please tell me, I am brand new to this and really appreciate any and all help.</p>
<p>Edit: Here is what I currently have, hopefully this will clear up what I'm trying to do.</p>
<pre><code>from datetime import datetime
from lxml import etree as ET
def parseXML(file):
handler = open(file, "r")
tree = ET.parse(handler)
info_list = tree.xpath('//info')
root = tree.getroot()
dictionary = {}
info_len = len(info_list)
for i in range(info_len):
info=root[0][0][i]
info_attribs = info.attrib
end = info_attribs[u'End']
start = info_attribs[u'Start']
FMT = '%m/%d/%Y %H:%M:%S.%f'
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
t_dif = (tdelta.total_seconds()) / 60
try:
dictionary[info_attribs[u'State'] + status_attribs[u'Reason']] = t_dif
except:
continue
</code></pre>
<p>I'm trying to iterate through each line, find the State and the reason, then add them to a dictionary. If the entry already exists for that state and reason, I want to add it to the current value.</p>
<p>Let me know if I should provide more info!</p>
<p>Edit #2:</p>
<p>The output I'm looking for would be in the form of a .csv, structured like this:</p>
<pre><code>State - Reason, [Total time spent in State 1 for x reason]
</code></pre>
| 2 | 2016-10-11T21:25:06Z | 39,988,022 | <p>You can use a <em>defaultdict</em> with lists as values to handle recurring keys. You can also filter the info nodes with an <em>xpath</em> that only finds the nodes which have both of the <em>attributes</em> you want, so there is no need for any except:</p>
<pre><code>x = """<datainfo>
<data>
<info State="1" Reason="x" Start="01/01/2016 00:00:00.000" End="01/01/2016 02:00:00.000"></info>
<info State="1" Reason="y" Start="01/01/2016 02:00:00.000" End="01/01/2016 02:01:00.000">
<moreinfo Start="01/01/2016 02:00:00.000" End="01/01/2016 02:00:30.000"/>
<moreinfo Start="01/01/2016 02:00:30.000" End="01/01/2016 02:01:00.000"/>
</info>
<info State="2" Start="01/01/2016 02:01:00.000" End="01/01/2016 02:10:00.000"></info>
</data>
</datainfo>"""
from collections import defaultdict
import lxml.etree as et
from datetime import datetime
FMT = '%m/%d/%Y %H:%M:%S.%f'
tree = et.fromstring(x)
d = defaultdict(list)
for node in tree.xpath("//data/info[@Reason and @State]"):
state = node.attrib["State"]
reason = node.attrib["Reason"]
end = node.attrib["End"]
start = node.attrib[u'Start']
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    d[state, reason].append(tdelta.total_seconds() / 60)
print(d)
</code></pre>
<p>How you want the data to look for recurring keys determines how you write the csv; if you want one row per value:</p>
<pre><code>import csv
with open("out.csv", "w") as f:
wr = csv.writer(f)
for k,v in d.items():
for val in v:
            wr.writerow(list(k) + [val])
</code></pre>
<p>If you actually want to sum:</p>
<pre><code>d = defaultdict(float)
for node in tree.xpath("//data/info[@Reason and @State]"):
state = node.attrib["State"]
reason = node.attrib["Reason"]
end = node.attrib["End"]
start = node.attrib[u'Start']
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
d[state, reason] += (tdelta.total_seconds()) / 60
</code></pre>
<p>Then:</p>
<pre><code>import csv
with open("out.csv", "w") as f:
wr = csv.writer(f)
wr.writerows(d.items())
</code></pre>
| 1 | 2016-10-11T22:55:11Z | [
"python",
"xml",
"python-3.x",
"parsing",
"lxml"
] |
Natural language date differences in python | 39,987,036 | <p>I have a dynamically generated pandas dataframe with this structure:</p>
<pre><code>name,Events,Last,Elapsed
10.0.0.103,11230,2016-10-11 23:16:45,0 days 00:00:08.708000000
10.0.0.24,14088,2016-10-11 23:16:52,0 days 00:00:01.708000000
</code></pre>
<p>This details the number of events per IP address (name), when the last event was (Last), and how much time has elapsed since that event (Elapsed). The Elapsed column is generated with datetime using the following code:</p>
<pre><code>dfTotalS['Elapsed'] = datetime.datetime.now() - dfTotalS['Last']
</code></pre>
<p>I need the Elapsed column to be in 'natural language', for example:</p>
<pre><code>0 days 00:00:01.708000000 => 'less than 5 seconds ago'
3 days 00:02:22.708000000 => 'over 3 days ago'
</code></pre>
<p>I have played around with dateutil without much success. What is the best way of going about this?</p>
| 0 | 2016-10-11T21:27:20Z | 40,000,835 | <p>Thanks to @Boud for getting me started.</p>
<p>CSV:</p>
<pre><code>name,Events,Last,Elapsed
10.0.0.103,11230,2016-10-11 23:16:45,0 days 00:00:08.708000000
10.0.0.24,14088,2016-10-11 23:16:52,0 days 00:00:01.708000000
</code></pre>
<p>Using texttime.py (<a href="http://code.activestate.com/recipes/498062-nicely-readable-timedelta" rel="nofollow">http://code.activestate.com/recipes/498062-nicely-readable-timedelta</a>)</p>
<pre><code>import texttime
dfTotal['Elapsed'] = dfTotal['Elapsed'].apply(lambda x: texttime.stringify(x)) + ' ago'
print dfTotal
</code></pre>
<p>returns</p>
<pre><code>name,Events,Last,Elapsed
10.0.0.103,11230,2016-10-11 23:16:45,8 seconds ago
10.0.0.24,14088,2016-10-11 23:16:52,one second ago
</code></pre>
| 0 | 2016-10-12T14:10:04Z | [
"python",
"pandas",
"python-datetime"
] |
Plotting a dataframe as both a 'hist' and 'kde' on the same plot | 39,987,071 | <p>I have a pandas <code>dataframe</code> with user information. I would like to plot the age of users as both <code>kind='kde'</code> and <code>kind='hist'</code> on the same plot. At the moment I am only able to produce two separate plots. The dataframe resembles:</p>
<pre><code>member_df=
user_id Age
1 23
2 34
3 63
4 18
5 53
...
</code></pre>
<p>using </p>
<pre><code>ax1 = plt.subplot2grid((2,3), (0,0))
member_df.Age.plot(kind='kde', xlim=[16, 100])
ax1.set_xlabel('Age')
ax2 = plt.subplot2grid((2,3), (0,1))
member_df.Age.plot(kind='hist', bins=40)
ax2.set_xlabel('Age')
ax3 = ...
</code></pre>
<p>I understand that <code>kind='hist'</code> will give me frequencies on the y-axis whereas <code>kind='kde'</code> will give a density estimate, but is there a way to combine both and have the y-axis be represented by the frequencies?</p>
| 4 | 2016-10-11T21:29:35Z | 39,987,117 | <p><code>pd.DataFrame.plot()</code> returns the <code>ax</code> it is plotting to. You can reuse this for other plots.</p>
<p>Try:</p>
<pre><code>ax = member_df.Age.plot(kind='kde')
member_df.Age.plot(kind='hist', bins=40, ax=ax)
ax.set_xlabel('Age')
</code></pre>
<p><strong><em>example</em></strong><br>
I plot <code>hist</code> first to put it in the background<br>
Also, I put <code>kde</code> on <code>secondary_y</code> axis </p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed([3,1415])
df = pd.DataFrame(np.random.randn(100, 2), columns=list('ab'))
ax = df.a.plot(kind='hist')
df.a.plot(kind='kde', ax=ax, secondary_y=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/3k5kQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/3k5kQ.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>response to comment</em></strong><br>
using <code>subplot2grid</code>. just reuse <code>ax1</code></p>
<pre><code>import pandas as pd
import numpy as np
ax1 = plt.subplot2grid((2,3), (0,0))
np.random.seed([3,1415])
df = pd.DataFrame(np.random.randn(100, 2), columns=list('ab'))
df.a.plot(kind='hist', ax=ax1)
df.a.plot(kind='kde', ax=ax1, secondary_y=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/PPiRH.png" rel="nofollow"><img src="https://i.stack.imgur.com/PPiRH.png" alt="enter image description here"></a></p>
| 6 | 2016-10-11T21:33:28Z | [
"python",
"pandas",
"matplotlib",
"plot"
] |
simultaneous fitting python parameter sharing | 39,987,105 | <p>I have six datasets, and I wish to fit all six simultaneously, with two parameters common between the six datasets and one to be fit separately.</p>
<p>I'm planning to fit a simple ax**2+bx+c polynomial to the datasets, where a and b are shared between the six datasets and the offset, c, is not shared between them.</p>
<p>Therefore I'm fitting a common slope between the datasets but with a variable offset.</p>
<p>I'm fully competent in fitting them individually, however as the slopes are similar between each dataset, the error on the offset, c, would be greatly improved using simultaneous fitting.</p>
<p>I typically fit using scipy.optimize.curve_fit.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
def func(x,a,b,c):
return (a*(x**2)+b*x+c)
def fit(x,y,yerr):
popt, pcov = curve_fit(func,x,y,p0=[-0.6,5,-12],sigma=yerr)
chi=np.sum( ((func(x, *popt) - y) / yerr)**2)
redchi=(chi-1)/len(y)
return popt,pcov,redchi,len(y)
</code></pre>
<p>I'm handling 6 sets of x, xerr, y, yerr;
len(x) and len(y) are different for each set.</p>
<p>I understand I have to concatenate the datasets and fit them this way.</p>
<p>If anyone can offer any advice or help, I'm sure it would be beneficial for both me and the community.</p>
| 0 | 2016-10-11T21:32:37Z | 39,996,509 | <p>One possibility is to change the function to be fitted so that each data set has its own "a" and "b" parameter with a common "c", similar to this crude code snippet:</p>
<pre><code>def func(x,a1,b1,a2,b2,a3,b3,a4,b4,a5,b5,a6,b6, c):
if x in data_set_1:
return (a1*(x**2)+b1*x+c)
if x in data_set_2:
return (a2*(x**2)+b2*x+c)
if x in data_set_3:
return (a3*(x**2)+b3*x+c)
if x in data_set_4:
return (a4*(x**2)+b4*x+c)
if x in data_set_5:
return (a5*(x**2)+b5*x+c)
if x in data_set_6:
return (a6*(x**2)+b6*x+c)
raise Exception('Data outside fitting range') # just in case
</code></pre>
| 0 | 2016-10-12T10:37:49Z | [
"python",
"curve-fitting",
"least-squares",
"data-fitting"
] |
simultaneous fitting python parameter sharing | 39,987,105 | <p>I have six datasets, and I wish to fit all six simultaneously, with two parameters common between the six datasets and one to be fit separately.</p>
<p>I'm planning to fit a simple ax**2+bx+c polynomial to the datasets, where a and b are shared between the six datasets and the offset, c, is not shared between them.</p>
<p>Therefore I'm fitting a common slope between the datasets but with a variable offset.</p>
<p>I'm fully competent in fitting them individually, however as the slopes are similar between each dataset, the error on the offset, c, would be greatly improved using simultaneous fitting.</p>
<p>I typically fit using scipy.optimize.curve_fit.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
def func(x,a,b,c):
return (a*(x**2)+b*x+c)
def fit(x,y,yerr):
popt, pcov = curve_fit(func,x,y,p0=[-0.6,5,-12],sigma=yerr)
chi=np.sum( ((func(x, *popt) - y) / yerr)**2)
redchi=(chi-1)/len(y)
return popt,pcov,redchi,len(y)
</code></pre>
<p>I'm handling 6 sets of x, xerr, y, yerr;
len(x) and len(y) are different for each set.</p>
<p>I understand I have to concatenate the datasets and fit them this way.</p>
<p>If anyone can offer any advice or help, I'm sure it would be beneficial for both me and the community.</p>
| 0 | 2016-10-11T21:32:37Z | 39,998,622 | <p>Because I had similar fitting problems, I made <a href="http://symfit.readthedocs.io/en/latest/fitting_types.html#global-fitting" rel="nofollow"><code>symfit</code></a> to deal with this kind of scenario. So I'm sorry for shamelessly suggesting my own package but I think it would be very helpful for you. It wraps curve fit but provides a symbolic interface to make things easier.</p>
<p>Your problem could be solved like this:
</p>
<pre><code>from symfit import variables, parameters, Fit
xs = variables('x_1, x_2, x_3, x_4, x_5, x_6')
ys = variables('y_1, y_2, y_3, y_4, y_5, y_6')
a, b = parameters('a, b')
cs = parameters(', '.join('c_{}'.format(i) for i in range(6)))
model_dict = {
y: a * x**2 + b * x + c
for x, y, c in zip(xs, ys, cs)
}
fit = Fit(model_dict, x_1=x1_data, x_2=x2_data, ..., y_1=y1_data, ..., sigma_y_1=y1_err, sigma_y_2=y2_err, ...)
fit_result = fit.execute()
print(fit_result)
</code></pre>
<p>Check out the docs for more:
<a href="http://symfit.readthedocs.io/en/latest/fitting_types.html#global-fitting" rel="nofollow">http://symfit.readthedocs.io/en/latest/fitting_types.html#global-fitting</a></p>
<p>p.s. to give initial guesses to your parameters, each <code>Parameter</code> object comes with a <code>.value</code> attribute which holds the initial guess. So for example, <code>a.value = -0.6</code>.</p>
<p><strong>Edit:</strong>
Apparently <code>symfit</code> currently needs all the datasets to be the same shape. As a workaround you could define the following object:</p>
<pre><code>from symfit import Minimize
class GlobalLeastSquares(Minimize):
def __init__(self, model, *args, **kwargs):
try:
super(GlobalLeastSquares, self).__init__(model, *args, **kwargs)
except TypeError:
# Minimize currently enforces scalar functions, let's ignore that
self.constraints = []
def error_func(self, p, data):
"""
Least Squares optimalization.
:param p: array of floats for the parameters.
:param data: data to be provided to ``Variable``'s.
"""
evaluated_func = self.model(*(list(data) + list(p)))
chi2 = 0
for component_name, component_data in evaluated_func._asdict().items():
ydata = self.dependent_data[component_name]
sigma_data = self.sigma_data['sigma_{}'.format(component_name)]
chi2 += np.sum((component_data - ydata)**2/sigma_data**2)
return chi2
eval_jacobian = False
</code></pre>
<p>Use this object instead of the default <code>Fit</code>, so change</p>
<pre><code>fit = Fit(model_dict, x_1=x1_data, x_2=x2_data, ..., y_1=y1_data, ..., sigma_y_1=y1_err, sigma_y_2=y2_err, ...)
</code></pre>
<p>to</p>
<pre><code>fit = GlobalLeastSquares(model_dict, x_1=x1_data, x_2=x2_data, ..., y_1=y1_data, ..., sigma_y_1=y1_err, sigma_y_2=y2_err, ...)
</code></pre>
<p>I will make sure this functionality is wrapped into <code>Fit</code> in the next version of <code>symfit</code> (and massively generalized and cleaned up).</p>
| 0 | 2016-10-12T12:26:39Z | [
"python",
"curve-fitting",
"least-squares",
"data-fitting"
] |
simultaneous fitting python parameter sharing | 39,987,105 | <p>I have six datasets, and I wish to fit all six simultaneously, with two parameters common between the six datasets and one to be fit separately.</p>
<p>I'm planning to fit a simple ax**2+bx+c polynomial to the datasets, where a and b are shared between the six datasets and the offset, c, is not shared between them.</p>
<p>Therefore I'm fitting a common slope between the datasets but with a variable offset.</p>
<p>I'm fully competent in fitting them individually, however as the slopes are similar between each dataset, the error on the offset, c, would be greatly improved using simultaneous fitting.</p>
<p>I typically fit using scipy.optimize.curve_fit.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
def func(x,a,b,c):
return (a*(x**2)+b*x+c)
def fit(x,y,yerr):
popt, pcov = curve_fit(func,x,y,p0=[-0.6,5,-12],sigma=yerr)
chi=np.sum( ((func(x, *popt) - y) / yerr)**2)
redchi=(chi-1)/len(y)
return popt,pcov,redchi,len(y)
</code></pre>
<p>I'm handling 6 sets of x, xerr, y, yerr;
len(x) and len(y) are different for each set.</p>
<p>I understand I have to concatenate the datasets and fit them this way.</p>
<p>If anyone can offer any advice or help, I'm sure it would be beneficial for both me and the community.</p>
| 0 | 2016-10-11T21:32:37Z | 40,020,042 | <p>Thanks for all the suggestions, I seem to have found a way to fit them simultaneously with a,b and c1,c2,c3,c4,c5,c6 as the parameters, where a and b are shared.</p>
<p>Below is the code I used in the end:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
x=[vt,bt,ut,w1t,m2t,w2t]
y=[vmag,bmag,umag,w1mag,m2mag,w2mag]
xerr=[vterr,uterr,bterr,w1terr,m2terr,w2terr]
yerr=[vmagerr,umagerr,bmagerr,w1magerr,m2magerr,w2magerr]
def poly(x_, a, b, c1, c2, c3, c4, c5, c6):
#all this is just to split x_data into the original parts of x
l= len(x[0])
l1= len(x[1])
l2= len(x[2])
l3= len(x[3])
l4= len(x[4])
l5= len(x[5])
s=l+l1
s1=l2+s
s2=l3+s1
s3=l4+s2
s4=l5+s3
    y_model = np.hstack([
a*x_[:l]**2 + b*x_[:l] +c1,
a*x_[l:(s)]**2 + b*x_[l:(s)] +c2,
a*x_[(s):(s1)]**2 + b*x_[(s):(s1)] +c3,
a*x_[(s1):(s2)]**2 + b*x_[(s1):(s2)] +c4,
a*x_[(s2):(s3)]**2 + b*x_[(s2):(s3)] +c5,
a*x_[(s3):(s4)]**2 + b*x_[(s3):(s4)] +c6
])
    print y_model
    return y_model
x_data = np.hstack([x[0],x[1],x[2],x[3],x[4],x[5]])
y_data = np.hstack([y[0],y[1],y[2],y[3],y[4],y[5]])
(a, b, c1, c2, c3, c4, c5, c6), _ = curve_fit(poly, x_data, y_data)
</code></pre>
<p>Apologies if this is awful coding! I'm very rough with my approach! However, it certainly does the job well! </p>
<p>Below is my resulting fit.</p>
<p><a href="https://i.stack.imgur.com/ShrfK.png" rel="nofollow">Fitted results from simultaneous fitting with shared parameters</a> </p>
| 0 | 2016-10-13T11:44:25Z | [
"python",
"curve-fitting",
"least-squares",
"data-fitting"
] |
Django - ManagementForm data is missing or has been tampered with | 39,987,145 | <p>I've been racking my brain over this problem for the past few days and I've read numerous other questions regarding the same error but they all seem to be different cases (not including management form, forgetting to update TOTAL_FORMS, etc etc) and do not resolve my problem. I have a page which could contain multiple formsets in a single HTML form. When I am posting the data back to the server, it fails on the is_valid() check for the formsets with the error in the title. I am new to web development and Django so please forgive me if I made a silly mistake or am taking an approach that will not work. </p>
<pre><code>def purchase(request):
return generic_form_view(request, "inventory_tracking/add_purchases.html",
"Successfully added purchases for %s.",
PurchaseForm,
[formset_factory(PurchaseForm.LiquorForm),
formset_factory(PurchaseForm.NonLiquorForm)])
def generic_form_view(request, template, success_message, ParentForm, FormSets):
if request.method == 'POST':
request_params = copy(request.POST)
parent_form = ParentForm(request_params)
formsets = list(map(lambda form_set: form_set(request_params), FormSets))
if parent_form.is_valid(): # This works.
for formset in formsets:
if formset.is_valid(): # Fails here.
</code></pre>
<p>Here is a snippet from my template:</p>
<pre><code> <form action="{% block form_action %}{% endblock %}" method="post">
{% csrf_token %}
<div class="row">
<div class="row">
<div class=" well well-lg">
<div class="row">
{{ parent_form.management_form }}
{% for field in parent_form %}
<div class="col-lg-6">
<div class="form-group">
<label class="control-label">{{ field.label }}</label>
{{ field }}
</div>
</div>
{% endfor %}
</div>
</div>
</div>
</div>
<div class="row">
{% for formset in formsets %}
{{ formset.management_form }}
<div class="row">
<div class="well well-lg">
{% for form in formset %}
<div id="{{ form.prefix }}" class="row">
...
</code></pre>
<p>I've been trying to debug this and I noticed something a little interesting but since I am not too familiar with Django it could be a red herring. In the POST, I see the management_form data for the formsets I am creating but I do not see the management_form data for the parent formset (in this case PurchaseForm). However the parent_form is passing validation and the other formsets are not. </p>
| 0 | 2016-10-11T21:35:03Z | 39,989,296 | <p>I expected this to be a silly problem and I turned out to be right! When my generic_form_view method creates the formsets on the GET request I was adding a prefix, as the documentation mentions, but I was not adding a prefix when recreating the formsets on the POST.</p>
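<p>For anyone hitting the same thing, here is a minimal sketch of the fix (the <code>'liquor'</code> prefix is just an illustrative name):</p>
<pre><code>LiquorFormSet = formset_factory(PurchaseForm.LiquorForm)
# GET: build the empty formset with a prefix
formset = LiquorFormSet(prefix='liquor')
# POST: rebuild it with the *same* prefix, otherwise the ManagementForm
# fields cannot be matched up and you get the error from the title
formset = LiquorFormSet(request.POST, prefix='liquor')
</code></pre>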
| 0 | 2016-10-12T01:49:02Z | [
"python",
"django",
"django-forms"
] |
ImportError: cannot import name BayesianGaussianMixture | 39,987,374 | <p>I am trying to use the Bayesian mixture model in <code>sklearn</code> but I get the following error when I try.</p>
<pre><code>>>> from sklearn.mixture import BayesianGaussianMixture
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name BayesianGaussianMixture
</code></pre>
<p>I am on <code>python 2.7.11</code>. I saw the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.mixture.BayesianGaussianMixture.html" rel="nofollow">documentation</a> and I checked that the spelling is correct. What should I do to import it?</p>
| 0 | 2016-10-11T21:53:25Z | 39,987,425 | <p>Update scikit-learn to 0.18; in <a href="http://scikit-learn.org/0.17/modules/classes.html#module-sklearn.mixture" rel="nofollow">previous versions</a> the closest thing was called <code>VBGMM</code> (Variational Bayesian Gaussian Mixture Model) - a somewhat different method, but the closest you will get before 0.18.</p>
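<p>A quick sketch of the upgrade-and-import path, assuming you install with pip:</p>
<pre><code># in the shell: pip install --upgrade scikit-learn
from sklearn.mixture import BayesianGaussianMixture
bgm = BayesianGaussianMixture(n_components=2)
</code></pre>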
| 1 | 2016-10-11T21:57:13Z | [
"python",
"machine-learning",
"scikit-learn",
"cluster-analysis"
] |
Python write of scraping data to csv file | 39,987,551 | <p>I wrote simple code which scrapes data from a website, but I'm struggling to save all rows to a csv file. The finished script saves only one row - the last occurrence in the loop.</p>
<pre><code>def get_single_item_data(item_url):
f= csv.writer(open("scrpe.csv", "wb"))
f.writerow(["Title", "Company", "Price_netto"])
source_code = requests.get(item_url)
soup = BeautifulSoup(source_code.content, "html.parser")
for item_name in soup.find_all('div', attrs={"id" :'main-container'}):
title = item_name.find('h1').text
prodDesc_class = item_name.find('div', class_='productDesc')
company = prodDesc_class.find('p').text
company = company.strip()
price_netto = item_name.find('div', class_="netto").text
price_netto = price_netto.strip()
#print title, company, ,price_netto
f.writerow([title.encode("utf-8"), company, price_netto, ])
</code></pre>
<p>It is important to save the data into the corresponding columns.</p>
| -1 | 2016-10-11T22:07:26Z | 39,988,009 | <p>@PadraicCunningham This is my whole script:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import csv
url_klocki = "http://selgros24.pl/Dla-dzieci/Zabawki/Klocki-pc1121.html"
r = requests.get(url_klocki)
soup = BeautifulSoup(r.content, "html.parser")
def main_spider(max_page):
page = 1
while page <= max_page:
url = "http://selgros24.pl/Dla-dzieci/Zabawki/Klocki-pc1121.html"
source_code = requests.get(url)
soup = BeautifulSoup(source_code.content, "html.parser")
for link in soup.find_all('article', class_='small-product'):
url = "http://www.selgros24.pl"
a = link.findAll('a')[0].get('href')
href = url + a
#print href
get_single_item_data(href)
page +=1
def get_single_item_data(item_url):
f= csv.writer(open("scrpe.csv", "wb"))
f.writerow(["Title", "Comapny", "Price_netto"])
source_code = requests.get(item_url)
soup = BeautifulSoup(source_code.content, "html.parser")
for item_name in soup.find_all('div', attrs={"id" :'main-container'}):
title = item_name.find('h1').text
prodDesc_class = item_name.find('div', class_='productDesc')
company = prodDesc_class.find('p').text
company = company.strip()
price_netto = item_name.find('div', class_="netto").text
price_netto = price_netto.strip()
print title, company, price_netto
f.writerow([title.encode("utf-8"), company, price_netto])
main_spider(1)
</code></pre>
| 0 | 2016-10-11T22:53:53Z | [
"python",
"csv"
] |
Python write of scraping data to csv file | 39,987,551 | <p>I wrote simple code which scrapes data from a website, but I'm struggling to save all rows to a csv file. The finished script saves only one row - the last occurrence in the loop.</p>
<pre><code>def get_single_item_data(item_url):
f= csv.writer(open("scrpe.csv", "wb"))
f.writerow(["Title", "Company", "Price_netto"])
source_code = requests.get(item_url)
soup = BeautifulSoup(source_code.content, "html.parser")
for item_name in soup.find_all('div', attrs={"id" :'main-container'}):
title = item_name.find('h1').text
prodDesc_class = item_name.find('div', class_='productDesc')
company = prodDesc_class.find('p').text
company = company.strip()
price_netto = item_name.find('div', class_="netto").text
price_netto = price_netto.strip()
#print title, company, ,price_netto
f.writerow([title.encode("utf-8"), company, price_netto, ])
</code></pre>
<p>It is important to save the data into the corresponding columns.</p>
| -1 | 2016-10-11T22:07:26Z | 39,988,071 | <p>The problem is that you are re-opening the output file in write mode ("wb") inside <code>get_single_item_data</code>, so every call truncates the file and only the row from the last call survives.
You want to open the file (and write the header) once, and pass the open file or writer into <code>get_single_item_data</code> so that multiple rows get written.</p>
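<p>A minimal sketch of that fix (the scraping itself is elided; <code>scrape_item</code> is a hypothetical helper standing in for your existing parsing code):</p>
<pre><code>import csv
def get_single_item_data(item_url, writer):
    # scrape the page exactly as before, then write one row with the shared writer
    title, company, price_netto = scrape_item(item_url)  # hypothetical helper
    writer.writerow([title.encode("utf-8"), company, price_netto])
def main_spider(item_urls):
    # open the output file once and write the header once
    with open("scrpe.csv", "wb") as out_file:
        writer = csv.writer(out_file)
        writer.writerow(["Title", "Company", "Price_netto"])
        for href in item_urls:
            get_single_item_data(href, writer)
</code></pre>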
| 0 | 2016-10-11T23:01:41Z | [
"python",
"csv"
] |
Calculating similarity "score" between multiple dictionaries | 39,987,596 | <p>I have a reference dictionary, "dictA", and I need to compare it (calculate the similarity between keys and values) to n dictionaries that are generated on the spot. Each dictionary has the same length. Let's say for the sake of the discussion that the number of dictionaries to compare it with is 3: dictB, dictC, dictD.</p>
<p>Here is what dictA looks like:</p>
<pre><code>dictA={'1':"U", '2':"D", '3':"D", '4':"U", '5':"U",'6':"U"}
</code></pre>
<p>Here is what dictB, dictC and dictD look like:</p>
<pre><code>dictB={'1':"U", '2':"U", '3':"D", '4':"D", '5':"U",'6':"D"}
dictC={'1':"U", '2':"U", '3':"U", '4':"D", '5':"U",'6':"D"}
dictD={'1':"D", '2':"U", '3':"U", '4':"U", '5':"D",'6':"D"}
</code></pre>
<p>I have a solution, but just for the option of two dictionaries:</p>
<pre><code>sharedValue = set(dictA.items()) & set(dictD.items())
dictLength = len(dictA)
scoreOfSimilarity = len(sharedValue)
similarity = scoreOfSimilarity/dictLength
</code></pre>
<p>My question is:
How can I iterate through n dictionaries, with dictA being the primary dictionary that I compare the others with? The goal is to get a "similarity" value for each dictionary I iterate through against the primary dictionary. </p>
<p>Thanks for your help.</p>
| 4 | 2016-10-11T22:11:51Z | 39,987,767 | <p>Here's a general structure -- assuming that you can generate the dictionaries individually, using each before generating the next. This sounds like what you might want. calculate_similarity would be a function containing your "I have a solution" code above.</p>
<pre><code>reference = {'1':"U", '2':"D", '3':"D", '4':"U", '5':"U",'6':"U"}
while True:
on_the_spot = generate_dictionary()
if on_the_spot is None:
break
calculate_similarity(reference, on_the_spot)
</code></pre>
<p>If you need to iterate through dictionaries already generated, then you have to have them in an iterable Python structure. As you generate them, create a list of dictionaries:</p>
<pre><code>victim_list = [
{'1':"U", '2':"U", '3':"D", '4':"D", '5':"U",'6':"D"},
{'1':"U", '2':"U", '3':"U", '4':"D", '5':"U",'6':"D"},
{'1':"D", '2':"U", '3':"U", '4':"U", '5':"D",'6':"D"}
]
for on_the_spot in victim_list:
# Proceed as above
</code></pre>
<p>Are you familiar with the Python construct <em>generator</em>? It's like a function that returns its value with a <strong>yield</strong>, not a <strong>return</strong>. If so, use that instead of the above list.</p>
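<p>For example, a sketch of such a generator (the body is just a stand-in for however you actually build each dictionary):</p>
<pre><code>def victim_generator(n):
    for i in range(n):
        # build or load the next dictionary here
        yield {'1': "U", '2': "D", '3': "D", '4': "U", '5': "U", '6': "U"}
for on_the_spot in victim_generator(3):
    calculate_similarity(reference, on_the_spot)
</code></pre>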
| 1 | 2016-10-11T22:29:04Z | [
"python",
"python-3.x"
] |
Calculating similarity "score" between multiple dictionaries | 39,987,596 | <p>I have a reference dictionary, "dictA", and I need to compare it (calculate the similarity between keys and values) to n dictionaries that are generated on the spot. Each dictionary has the same length. Let's say for the sake of the discussion that the number of dictionaries to compare it with is 3: dictB, dictC, dictD.</p>
<p>Here is what dictA looks like:</p>
<pre><code>dictA={'1':"U", '2':"D", '3':"D", '4':"U", '5':"U",'6':"U"}
</code></pre>
<p>Here is what dictB, dictC and dictD look like:</p>
<pre><code>dictB={'1':"U", '2':"U", '3':"D", '4':"D", '5':"U",'6':"D"}
dictC={'1':"U", '2':"U", '3':"U", '4':"D", '5':"U",'6':"D"}
dictD={'1':"D", '2':"U", '3':"U", '4':"U", '5':"D",'6':"D"}
</code></pre>
<p>I have a solution, but just for the option of two dictionaries:</p>
<pre><code>sharedValue = set(dictA.items()) & set(dictD.items())
dictLength = len(dictA)
scoreOfSimilarity = len(sharedValue)
similarity = scoreOfSimilarity/dictLength
</code></pre>
<p>My question is:
How can I iterate through n dictionaries, with dictA being the primary dictionary that I compare the others with? The goal is to get a "similarity" value for each dictionary I iterate through against the primary dictionary. </p>
<p>Thanks for your help.</p>
| 4 | 2016-10-11T22:11:51Z | 39,987,842 | <p>If you stick your solution in a function, you can call it by name for any two dicts. Also, if you curry the function by breaking up the arguments across nested functions, you can partially apply the first dict to get back a function that just wants the second (or you could use <code>functools.partial</code>), which makes it easy to map:</p>
<pre><code>def similarity (a):
def _ (b):
sharedValue = set(a.items()) & set(b.items())
dictLength = len(a)
scoreOfSimilarity = len(sharedValue)
return scoreOfSimilarity/dictLength
return _
</code></pre>
<p>Aside: the above can also be written as a single expression via nested lambdas:</p>
<pre><code>similarity = lambda a: lambda b: len(set(a.items()) & set(b.items())) / len(a)
</code></pre>
<p>Now you can get the similarity between dictA and the remainder with a map:</p>
<pre><code>otherDicts = [dictB, dictC, dictD]
scores = map(similarity(dictA), otherDicts)
</code></pre>
<p>Now you can use <code>min()</code> (or <code>max()</code>, or whatever) to get the best from the scores list:</p>
<pre><code>winner = min(scores)
</code></pre>
<p>Warning: I have not tested any of the above.</p>
| 0 | 2016-10-11T22:36:26Z | [
"python",
"python-3.x"
] |
Calculating similarity "score" between multiple dictionaries | 39,987,596 | <p>I have a reference dictionary, "dictA", and I need to compare it (calculate the similarity between keys and values) to n dictionaries that are generated on the spot. Each dictionary has the same length. Let's say for the sake of the discussion that the number of dictionaries to compare it with is 3: dictB, dictC, dictD.</p>
<p>Here is what dictA looks like:</p>
<pre><code>dictA={'1':"U", '2':"D", '3':"D", '4':"U", '5':"U",'6':"U"}
</code></pre>
<p>Here is what dictB, dictC and dictD look like:</p>
<pre><code>dictB={'1':"U", '2':"U", '3':"D", '4':"D", '5':"U",'6':"D"}
dictC={'1':"U", '2':"U", '3':"U", '4':"D", '5':"U",'6':"D"}
dictD={'1':"D", '2':"U", '3':"U", '4':"U", '5':"D",'6':"D"}
</code></pre>
<p>I have a solution, but just for the option of two dictionaries:</p>
<pre><code>sharedValue = set(dictA.items()) & set(dictD.items())
dictLength = len(dictA)
scoreOfSimilarity = len(sharedValue)
similarity = scoreOfSimilarity/dictLength
</code></pre>
<p>My question is:
How can I iterate through n dictionaries, with dictA being the primary dictionary that I compare the others with? The goal is to get a "similarity" value for each dictionary I iterate through against the primary dictionary. </p>
<p>Thanks for your help.</p>
| 4 | 2016-10-11T22:11:51Z | 40,000,523 | <p>Thanks to everyone for participating in the answers. Here is the result that does what I need:</p>
<pre><code>def compareTwoDictionaries(self, absolute, reference, listOfDictionaries):
#look only for absolute fit, yes or no
if (absolute == True):
similarity = reference == listOfDictionaries
else:
#return items that are the same between two dictionaries
shared_items = set(reference.items()) & set(listOfDictionaries.items())
#return the length of the dictionary for further calculation of %
dictLength = len(reference)
#return the length of shared_items for further calculation of %
scoreOfSimilarity = len(shared_items)
#return final score: similarity
similarity = scoreOfSimilarity/dictLength
return similarity
</code></pre>
<p>Here is the call of the function</p>
<pre><code>for dict in victim_list:
output = oandaConnectorCalls.compareTwoDictionaries(False, reference, dict)
</code></pre>
<p>"Reference" dict and "victim_list" dict are used as described above.</p>
| 0 | 2016-10-12T13:55:55Z | [
"python",
"python-3.x"
] |
Calculating similarity "score" between multiple dictionaries | 39,987,596 | <p>I have a reference dictionary, "dictA", and I need to compare it (calculate the similarity between keys and values) to n dictionaries that are generated on the spot. Each dictionary has the same length. Let's say for the sake of the discussion that the number of dictionaries to compare it with is 3: dictB, dictC, dictD.</p>
<p>Here is what dictA looks like:</p>
<pre><code>dictA={'1':"U", '2':"D", '3':"D", '4':"U", '5':"U",'6':"U"}
</code></pre>
<p>Here is what dictB, dictC and dictD look like:</p>
<pre><code>dictB={'1':"U", '2':"U", '3':"D", '4':"D", '5':"U",'6':"D"}
dictC={'1':"U", '2':"U", '3':"U", '4':"D", '5':"U",'6':"D"}
dictD={'1':"D", '2':"U", '3':"U", '4':"U", '5':"D",'6':"D"}
</code></pre>
<p>I have a solution, but just for the option of two dictionaries:</p>
<pre><code>sharedValue = set(dictA.items()) & set(dictD.items())
dictLength = len(dictA)
scoreOfSimilarity = len(sharedValue)
similarity = scoreOfSimilarity/dictLength
</code></pre>
<p>My question is:
How can I iterate through n dictionaries, with dictA being the primary dictionary that I compare the others with? The goal is to get a "similarity" value for each dictionary I iterate through against the primary dictionary. </p>
<p>Thanks for your help.</p>
| 4 | 2016-10-11T22:11:51Z | 40,004,171 | <p>Based on your problem setup, there appears to be no alternative to looping through the input list of dictionaries. However, there is a multiprocessing trick that can be applied here. </p>
<p>Here is your input:</p>
<pre><code>dict_a = {'1': "U", '2': "D", '3': "D", '4': "U", '5': "U", '6': "U"}
dict_b = {'1': "U", '2': "U", '3': "D", '4': "D", '5': "U", '6': "D"}
dict_c = {'1': "U", '2': "U", '3': "U", '4': "D", '5': "U", '6': "D"}
dict_d = {'1': "D", '2': "U", '3': "U", '4': "U", '5': "D", '6': "D"}
other_dicts = [dict_b, dict_c, dict_d]
</code></pre>
<p>I have included @gary_fixler's map technique as <code>similarity1</code>, in addition to the <code>similarity2</code> function that I will use for the loop technique.</p>
<pre><code>def similarity1(a):
def _(b):
shared_value = set(a.items()) & set(b.items())
dict_length = len(a)
score_of_similarity = len(shared_value)
return score_of_similarity / dict_length
return _
def similarity2(c):
a, b = c
shared_value = set(a.items()) & set(b.items())
dict_length = len(a)
score_of_similarity = len(shared_value)
return score_of_similarity / dict_length
</code></pre>
<p>We are evaluating 3 techniques here: <br />
(1) @gary_fixler's map <br />
(2) simple loop through the list of dicts<br />
(3) multiprocessing the list of dicts</p>
<p>Here are the execution statements:</p>
<pre><code>import itertools
import multiprocessing
print(list(map(similarity1(dict_a), other_dicts)))
print([similarity2((dict_a, dict_v)) for dict_v in other_dicts])
max_processes = int(multiprocessing.cpu_count()/2-1)
pool = multiprocessing.Pool(processes=max_processes)
print([x for x in pool.map(similarity2, zip(itertools.repeat(dict_a), other_dicts))])
</code></pre>
<p>You will find that all 3 techniques produce the same result:</p>
<pre><code>[0.5, 0.3333333333333333, 0.16666666666666666]
[0.5, 0.3333333333333333, 0.16666666666666666]
[0.5, 0.3333333333333333, 0.16666666666666666]
</code></pre>
<p>Note that, for multiprocessing, you have <code>multiprocessing.cpu_count()/2</code> cores (with each core having hyper-threading). Assuming that you have nothing else running on your system, and your program has no I/O or synchronization needs (as is the case for our problem), you will often get optimum performance with <code>multiprocessing.cpu_count()/2-1</code> processes, the <code>-1</code> being for the parent process.</p>
<p>Now, to time the 3 techniques:</p>
<pre><code>import timeit
print(timeit.timeit("list(map(similarity1(dict_a), other_dicts))",
setup="from __main__ import similarity1, dict_a, other_dicts",
number=10000))
print(timeit.timeit("[similarity2((dict_a, dict_v)) for dict_v in other_dicts]",
setup="from __main__ import similarity2, dict_a, other_dicts",
number=10000))
print(timeit.timeit("[x for x in pool.map(similarity2, zip(itertools.repeat(dict_a), other_dicts))]",
setup="from __main__ import similarity2, dict_a, other_dicts, pool",
number=10000))
</code></pre>
<p>This produces the following results on my laptop:</p>
<pre><code>0.07092539698351175
0.06757041101809591
1.6528456939850003
</code></pre>
<p>You can see that the basic loop technique performs the best. The multiprocessing was significantly worse than the other 2 techniques, because of the overhead of creating processes and passing data back and forth. This does not mean that multiprocessing is not useful here. Quite the contrary. Look at the results for a larger number of input dictionaries:</p>
<pre><code>for _ in range(7):
other_dicts.extend(other_dicts)
</code></pre>
<p>This extends the dictionary list to 384 items. Here are the timing results for this input:</p>
<pre><code>7.934810006991029
8.184540337068029
7.466550623998046
</code></pre>
<p>For any larger set of input dictionaries, the multiprocessing technique becomes the most optimum. </p>
| 1 | 2016-10-12T16:53:15Z | [
"python",
"python-3.x"
] |
How to use Mac OS X NSEvents within wxPython application? | 39,987,661 | <p>I am writing an application that has to react to system wide keypresses on Mac OS X.</p>
<p>So I found some key logger examples that should work and hit a wall, because all examples are based on NSSharedApplication() and PyObjC AppHelper.runEventLoop() while my application is written in wxPython.</p>
<p>Here I post a modification of the simplest example from <a href="https://github.com/ljos" rel="nofollow">https://github.com/ljos</a>
that I thought should work. But it does not.</p>
<pre><code>from AppKit import *
import wx
class AppDelegate(NSObject):
def applicationDidFinishLaunching_(self, aNotification):
NSEvent.addGlobalMonitorForEventsMatchingMask_handler_(NSKeyDownMask, handler)
def handler(event):
print (u"%@", event)
app = wx.App()
delegate = AppDelegate.alloc().init()
NSApp().setDelegate_(delegate)
app.MainLoop()
</code></pre>
<p>It is obvious that the MainLoop() doesn't catch the delegated NSEvents.</p>
<p>After app = wx.App() the NSApp() is returned correctly. So why doesn't this work? How do I make it work?</p>
| 0 | 2016-10-11T22:17:27Z | 40,044,548 | <p>As nobody answered, I went searching around from a different angle.</p>
<p>So I discovered that the Quartz module can be used to tap keyboard and mouse events. No custom loop is needed, therefore wx.App() and wx.App.MainLoop() aren't getting in the way.</p>
<p>I also found a nice package named pynput that does just that for me, sparing me plenty of time. Quartz is pretty complicated, with a lot of cryptic names for functions and constants, but it does a good job.</p>
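<p>A minimal sketch of what that looks like with pynput (the wx window code is omitted; the listener runs in its own thread, so it does not block wx.App.MainLoop()):</p>
<pre><code>import wx
from pynput import keyboard
def on_press(key):
    print(key)  # fires for system-wide key presses
listener = keyboard.Listener(on_press=on_press)
listener.start()  # non-blocking, runs in a background thread
app = wx.App()
# ... create your frames here ...
app.MainLoop()
</code></pre>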
| 0 | 2016-10-14T13:36:15Z | [
"python",
"osx",
"cocoa",
"wxpython",
"pyobjc"
] |
Python scraping via xml prints empty brackets | 39,987,672 | <p>I am trying to extract just a few characters from a website with lxml, by building a tree and then using xpath. I've tried using Google Chrome to obtain the correct xpath, yet it prints empty brackets.</p>
<pre><code> #imports
from lxml import html
import requests
#get magicseaweed Scripps report
msScrippsPage = requests.get("""http://magicseaweed.com/Scripps-Pier-
La-Jolla-Surf-Report/296/.html""")
#make tree from site
msScrippsTree = html.fromstring(msScrippsPage.content)
#get wave size
msScrippsWave = msScrippsTree.xpath("""/html/body/div[2]/div[5]/div/div[1]/div[2]/div[2]/div/div[2]/div[1]/div/div[1]/div/div/div/div/div[1]/div/div[2]/ul[1]/li[1]/text()""")
print 'ms SCripps: ', msScrippsWave
</code></pre>
<p>The output to terminal is 'msScripps: [ ]'</p>
| 1 | 2016-10-11T22:19:06Z | 39,987,911 | <p>You shouldn't use a line break in your url. When the url is on one line, your xpath works.</p>
<pre><code>msScrippsPage = requests.get("""http://magicseaweed.com/Scripps-Pier-La-Jolla-Surf-Report/296/.html""")
print msScrippsPage.content
[' 0.4-0.6', ' ']
########################################
url = """http://magicseaweed.com/Scripps-Pier-
La-Jolla-Surf-Report/296/.html"""
print url
'http://magicseaweed.com/Scripps-Pier-\n La-Jolla-Surf-Report/296/.html'
</code></pre>
<p>Edit: Add full example</p>
<pre><code>from lxml import html
import requests
msScrippsPage = requests.get("""http://magicseaweed.com/Scripps-Pier-La-Jolla-Surf-Report/296/.html""")
msScrippsTree = html.fromstring(msScrippsPage.content)
msScrippsWave = msScrippsTree.xpath("""/html/body/div[2]/div[5]/div/div[1]/div[2]/div[2]/div/div[2]/div[1]/div/div[1]/div/div/div/div/div[1]/div/div[2]/ul[1]/li[1]/text()""")
print 'ms SCripps: ', msScrippsWave
</code></pre>
| 2 | 2016-10-11T22:45:19Z | [
"python",
"xml",
"xpath",
"web-scraping"
] |
How to display custom python QTextEdit effects like underline in txt or rtf format? | 39,987,702 | <p>I'm having a problem where, when I save the text from QTextEdit as txt or rtf, it doesn't save things like underline and font size. Here's the code:</p>
<pre><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'untitled.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
import gi
import signal
gi.require_version('Gtk', '3.0')
import sys
import dbus
import pygtk
import gi
import signal
from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import pyqtSlot
from PyQt4.QtCore import QUrl
from PyQt4.QtWebKit import QWebView
import sip
sip.setapi('QString', 2)
sip.setapi('QVariant', 2)
gi.require_version('Notify', '0.7')
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(679, 600)
self.underlined = False
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.saveButton = QtGui.QPushButton(self.centralwidget)
self.saveButton.setGeometry(QtCore.QRect(10, 0, 88, 28))
self.saveButton.setObjectName(_fromUtf8("pushButton"))
self.textEdit = QtGui.QTextEdit(self.centralwidget)
self.textEdit.setGeometry(QtCore.QRect(0, 30, 681, 800))
self.textEdit.setObjectName(_fromUtf8("textEdit"))
self.fontButton = QtGui.QPushButton(self.centralwidget)
self.fontButton.setGeometry(QtCore.QRect(400, 0, 88, 28))
self.fontButton.setObjectName(_fromUtf8("fontButton"))
self.fontSize = QtGui.QLineEdit(self.centralwidget)
self.fontSize.setGeometry(QtCore.QRect(100, 0, 28, 28))
self.fontSize.setObjectName(_fromUtf8("fontEdit"))
self.fontSize.returnPressed.connect(self.fontButton.click)
self.underlineButton = QtGui.QPushButton(self.centralwidget)
self.underlineButton.setGeometry(QtCore.QRect(130, 0, 28, 28))
self.underlineButton.setObjectName(_fromUtf8("underlineButton"))
self.disableUnderlineButton = QtGui.QPushButton(self.centralwidget)
self.disableUnderlineButton.setGeometry(QtCore.QRect(160, 0, 28, 28))
self.disableUnderlineButton.setObjectName(_fromUtf8("disableUnderlineButton"))
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 679, 24))
self.menubar.setObjectName(_fromUtf8("menubar"))
self.menuTest = QtGui.QMenu(self.menubar)
self.menuTest.setObjectName(_fromUtf8("menuTest"))
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtGui.QStatusBar(MainWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
MainWindow.setStatusBar(self.statusbar)
self.menubar.addAction(self.menuTest.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def save(self):
with open('log.rtf', 'w') as yourFile:
yourFile.write(str(self.textEdit.toPlainText()))
def saveFont(self):
self.textEdit.setFontPointSize(int(self.fontSize.text()))
def underline(self):
self.textEdit.setFontUnderline(True)
def disableUnderline(self):
self.textEdit.setFontUnderline(False)
def commander(self):
save(self)
self.textEdit.setHtml('<u>hi</u>')
self.saveButton.clicked.connect(lambda: save(self))
self.fontButton.clicked.connect(lambda: saveFont(self))
self.underlineButton.clicked.connect(lambda: underline(self))
self.disableUnderlineButton.clicked.connect(lambda: disableUnderline(self))
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
self.saveButton.setText(_translate("MainWindow", "Save text", None))
self.fontButton.setText(_translate("MainWindow", "Save Font", None))
self.menuTest.setTitle(_translate("MainWindow", "test", None))
self.underlineButton.setText(_translate("MainWindow", "Uon", None))
self.disableUnderlineButton.setText(_translate("MainWindow", "Uoff", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
MainWindow = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>I've attempted to fix this with HTML, and other formats but couldn't get it to work. </p>
| -1 | 2016-10-11T22:22:14Z | 39,989,011 | <p>It's because you're converting the text to <em>plain text</em> with <code>toPlainText</code>, which doesn't contain any formatting information.</p>
<pre><code>yourFile.write(str(self.textEdit.toPlainText()))
</code></pre>
<p>If you want to maintain the formatting, you need to use <code>toHtml</code>. </p>
<pre><code>yourFile.write(str(self.textEdit.toHtml()))
</code></pre>
<p>Be aware that this isn't the same thing as <code>rtf</code>. It's not even entirely standard HTML and it will likely display a bit differently if you try to look at it in another HTML viewer besides the <code>QTextEdit</code>. In my experience, the HTML generated from the <code>QTextEdit</code>'s is pretty ugly, and only really works well if you plan on only displaying it inside <code>QTextEdits</code>.</p>
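<p>Applied to the <code>save</code> method from the question, a minimal sketch (just swapping in <code>toHtml</code> and an .html extension) would be:</p>
<pre><code>def save(self):
    # writes the rich text (underline, font size, ...) instead of plain text
    with open('log.html', 'w') as yourFile:
        yourFile.write(str(self.textEdit.toHtml()))
</code></pre>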
| 1 | 2016-10-12T01:07:33Z | [
"python",
"pyqt",
"pyqt4"
] |
Python - Convert dictionary into list with length based on values | 39,987,708 | <p>I have a dictionary</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
</code></pre>
<p>I want to convert it to a list that holds the keys of the dictionary. Each key should be repeated as many times as its associated value.</p>
<p>I've written this code that does the job:</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
l = []
for i in d:
for j in range(d[i]):
l.append(i)
l.sort()
print(l)
</code></pre>
<p>Output:</p>
<pre><code>[1, 1, 1, 5, 5, 5, 5, 5, 5, 10, 10]
</code></pre>
<p>But I would like it to be a list comprehension. How can this be done?</p>
| 3 | 2016-10-11T22:23:24Z | 39,987,735 | <p>One approach is to use <a href="https://docs.python.org/3.6/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools.chain</code></a> to glue sublists together</p>
<pre><code>>>> list(itertools.chain(*[[k]*v for k, v in d.items()]))
[1, 1, 1, 10, 10, 5, 5, 5, 5, 5, 5]
</code></pre>
<p>Or if you are dealing with a very large dictionary, then you could avoid constructing the sub lists with <a href="https://docs.python.org/3.6/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools.chain.from_iterable</code></a> and <a href="https://docs.python.org/3.6/library/itertools.html#itertools.repeat" rel="nofollow"><code>itertools.repeat</code></a></p>
<pre><code>>>> list(itertools.chain.from_iterable(itertools.repeat(k, v) for k, v in d.items()))
[1, 1, 1, 10, 10, 5, 5, 5, 5, 5, 5]
</code></pre>
<p>Comparative timings for a very large dictionary with using a list comprehension that uses two loops:</p>
<pre><code>>>> d = {i: i for i in range(100)}
>>> %timeit list(itertools.chain.from_iterable(itertools.repeat(k, v) for k, v in d.items()))
10000 loops, best of 3: 55.6 µs per loop
>>> %timeit [k for k, v in d.items() for _ in range(v)]
10000 loops, best of 3: 119 µs per loop
</code></pre>
<p>It's not clear whether you need the output sorted (your example code does sort it), but if so simply presort <code>d.items()</code>:</p>
<pre><code># same as previous examples, but we sort d.items()
list(itertools.chain(*[[k]*v for k, v in sorted(d.items())]))
</code></pre>
| 1 | 2016-10-11T22:26:13Z | [
"python",
"list",
"dictionary"
] |
Python - Convert dictionary into list with length based on values | 39,987,708 | <p>I have a dictionary</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
</code></pre>
<p>I want to convert it to a list that holds the keys of the dictionary. Each key should be repeated as many times as its associated value.</p>
<p>I've written this code that does the job:</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
l = []
for i in d:
for j in range(d[i]):
l.append(i)
l.sort()
print(l)
</code></pre>
<p>Output:</p>
<pre><code>[1, 1, 1, 5, 5, 5, 5, 5, 5, 10, 10]
</code></pre>
<p>But I would like it to be a list comprehension. How can this be done?</p>
| 3 | 2016-10-11T22:23:24Z | 39,987,749 | <p><code>[k for k,v in d.items() for _ in range(v)]</code>
... I guess...</p>
<p>if you want it sorted you can do</p>
<p><code>[k for k,v in sorted(d.items()) for _ in range(v)]</code></p>
| 1 | 2016-10-11T22:27:01Z | [
"python",
"list",
"dictionary"
] |
Python - Convert dictionary into list with length based on values | 39,987,708 | <p>I have a dictionary</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
</code></pre>
<p>I want to convert it to a list that holds the keys of the dictionary. Each key should be repeated as many times as its associated value.</p>
<p>I've written this code that does the job:</p>
<pre><code>d = {1: 3, 5: 6, 10: 2}
l = []
for i in d:
for j in range(d[i]):
l.append(i)
l.sort()
print(l)
</code></pre>
<p>Output:</p>
<pre><code>[1, 1, 1, 5, 5, 5, 5, 5, 5, 10, 10]
</code></pre>
<p>But I would like it to be a list comprehension. How can this be done?</p>
| 3 | 2016-10-11T22:23:24Z | 39,987,754 | <p>You can do it using a list comprehension:</p>
<pre><code>[i for i in d for j in range(d[i])]
</code></pre>
<p>yields:</p>
<pre><code>[1, 1, 1, 10, 10, 5, 5, 5, 5, 5, 5]
</code></pre>
<p>You can sort it again to get the list you were looking for.</p>
| 2 | 2016-10-11T22:27:29Z | [
"python",
"list",
"dictionary"
] |
Why does BeautifulSoup work the second time parsing, but not the first | 39,987,731 | <p>This is the <code>ResultSet</code> of running <code>soup[0].find_all('div', {'class':'font-160 line-110'})</code>: </p>
<pre><code>[<div class="font-160 line-110" data-container=".snippet-container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]">
<a class="no-underline group-ib color-inherit"
href="/en/ais/details/ports/959">
<span class="text-default">CN</span><span class="text-default text-darker">XMN
</span>
</a>
</div>]
</code></pre>
<p>In an attempt to pull out <code>XIAMEN [CN]</code> after <code>title</code> I could not use <code>a[0].find('div')['title']</code> (where <code>a</code> is the above <code>BeautifulSoup ResultSet</code>). However, if I copy and paste that HTML as a new string, say, </p>
<pre><code>b = '''<div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]">'''
</code></pre>
<p>Then do: </p>
<pre><code>>>soup = BeautifulSoup(b, 'html.parser')
>>soup.find('div')['title']
>>XIAMEN [CN] #prints contents of title
</code></pre>
<p>Why do I have to reSoup the Soup? Why doesn't this work on my first search?</p>
<p>Edit, origin of <code>soup</code>:</p>
<p>I have a list of <code>urls</code> that I'm going through via <code>grequests</code>. One of the things I'm looking for is the <code>title</code> that contains <code>XIAMEN [CN]</code>. </p>
<p>So <code>soup</code> was created when I did </p>
<pre><code>soup = []
for i in range(2): #number of pages parsed
    rawSoup = BeautifulSoup(response[i].content, 'html.parser')
    souporigin = rawSoup.find_all('div', {'class': 'bg-default bg-white no-snippet-hide'})
soup.append(souporigin)
</code></pre>
<p>The urls are</p>
<pre><code>[
'http://www.marinetraffic.com/en/ais/details/ships/shipid:564352/imo:9643752/mmsi:511228000/vessel:DE%20MI',
'http://www.marinetraffic.com/en/ais/details/ships/shipid:3780155/imo:9712395/mmsi:477588800/vessel:SITC%20GUANGXI?cb=2267'
]
</code></pre>
| 2 | 2016-10-11T22:25:49Z | 39,988,337 | <p>You are using the wrong selection.</p>
<p>The selection <code>soup[0].find_all('div', {'class':'font-160 line-110'})</code> finds the <code><div></code>, and you can even see the <code><div></code> when you print it. But when you add <code>.find()</code> it starts searching <strong>inside</strong> that <code><div></code> - so <code>.find('div')</code> tries to find a new <code>div</code> inside the current <code>div</code>.</p>
<p>You need </p>
<pre><code>a[0]['title']
</code></pre>
<hr>
<p>When you create a new soup, the main/root element is not <code>div</code> but <code>[document]</code>, and <code>div</code> is its child (<code>div</code> is inside the main "tag"), so you can use <code>find('div')</code>.</p>
<pre><code>>>> a[0].name
div
>>> soup = BeautifulSoup(b, 'html.parser')
>>> soup.name
[document]
</code></pre>
| 0 | 2016-10-11T23:33:55Z | [
"python",
"html",
"beautifulsoup"
] |
Why does BeautifulSoup work the second time parsing, but not the first | 39,987,731 | <p>This is the <code>ResultSet</code> of running <code>soup[0].find_all('div', {'class':'font-160 line-110'})</code>: </p>
<pre><code>[<div class="font-160 line-110" data-container=".snippet-container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]">
<a class="no-underline group-ib color-inherit"
href="/en/ais/details/ports/959">
<span class="text-default">CN</span><span class="text-default text-darker">XMN
</span>
</a>
</div>]
</code></pre>
<p>In an attempt to pull out <code>XIAMEN [CN]</code> after <code>title</code> I could not use <code>a[0].find('div')['title']</code> (where <code>a</code> is the above <code>BeautifulSoup ResultSet</code>). However, if I copy and paste that HTML as a new string, say, </p>
<pre><code>b = '''<div class="font-160 line-110" data-container=".snippet container" data-html="true" data-placement="top" data-template='&lt;div class="tooltip infowin-tooltip" role="tooltip"&gt;&lt;div class="tooltip-arrow"&gt;&lt;div class="tooltip-arrow-inner"&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="tooltip-inner" style="text-align: left"&gt;&lt;/div&gt;&lt;/div&gt;' data-toggle="tooltip" title="XIAMEN [CN]">'''
</code></pre>
<p>Then do: </p>
<pre><code>>>soup = BeautifulSoup(b, 'html.parser')
>>soup.find('div')['title']
>>XIAMEN [CN] #prints contents of title
</code></pre>
<p>Why do I have to reSoup the Soup? Why doesn't this work on my first search?</p>
<p>Edit, origin of <code>soup</code>:</p>
<p>I have a list of <code>urls</code> that I'm going through via <code>grequests</code>. One of the things I'm looking for is the <code>title</code> that contains <code>XIAMEN [CN]</code>. </p>
<p>So <code>soup</code> was created when I did </p>
<pre><code>soup = []
for i in range(2): #number of pages parsed
    rawSoup = BeautifulSoup(response[i].content, 'html.parser')
    souporigin = rawSoup.find_all('div', {'class': 'bg-default bg-white no-snippet-hide'})
soup.append(souporigin)
</code></pre>
<p>The urls are</p>
<pre><code>[
'http://www.marinetraffic.com/en/ais/details/ships/shipid:564352/imo:9643752/mmsi:511228000/vessel:DE%20MI',
'http://www.marinetraffic.com/en/ais/details/ships/shipid:3780155/imo:9712395/mmsi:477588800/vessel:SITC%20GUANGXI?cb=2267'
]
</code></pre>
| 2 | 2016-10-11T22:25:49Z | 39,998,044 | <p>I found out the problem occurred when I set up my BeautifulSoup. I created a list of partial search results and then had to iterate over the list to search it again. I fixed this by just searching for what I wanted in one line: </p>
<p>I changed:</p>
<pre><code>soup = []
for i in range(2): #number of pages parsed
    rawSoup = BeautifulSoup(response[i].content, 'html.parser')
    souporigin = rawSoup.find_all('div', {'class': 'bg-default bg-white no-snippet-hide'})
soup.append(souporigin)
</code></pre>
<p>to: </p>
<pre><code> a = soup.find("div", class_='font-160 line-110')["title"]
</code></pre>
<p>I run this search as soon as I create my soup, which removes a lot of redundancy in the code -- I had been creating lists of <code>ResultSets</code> and then having to use <code>find</code> on them for new fields. </p>
| 1 | 2016-10-12T11:58:45Z | [
"python",
"html",
"beautifulsoup"
] |
How to read values from a text file with columns? | 39,987,756 | <p>I'm trying to read data from a file that is organized in columns (one per year) and rows (one per named value, e.g. the number of times I opened Firefox in 2015). I know how to read a specific line, but I'm having trouble reading the different columns. The columns are 10 characters wide but only hold values like -0.5 and 1.2, i.e. a single decimal place. Like this:</p>
<pre><code>-3.5 1.0 -2.9 6.8
</code></pre>
<p>How would I read these values, given that I have to compare them as floats to other data? Would I slice the line? If so, I haven't found a way to do it.</p>
| 0 | 2016-10-11T22:27:52Z | 39,987,782 | <p>If you know how many columns you are going to have, you can split on whitespace and coerce each piece to a float:</p>
<pre><code>[float(i) for i in "-3.5 1.0 -2.9 6.8".split()]
</code></pre>
<p>Yields:</p>
<pre><code>[-3.5, 1.0, -2.9, 6.8]
</code></pre>
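<p>Applied to a whole file, this could look like the sketch below (assuming a hypothetical <code>data.txt</code> laid out as in the question):</p>
<pre><code>rows = []
with open('data.txt') as f:
    for line in f:
        rows.append([float(v) for v in line.split()])
# rows[0][1] is the second column of the first line, as a float
</code></pre>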
| 0 | 2016-10-11T22:30:43Z | [
"python",
"slice"
] |
How to read values from a text file with columns? | 39,987,756 | <p>I'm trying to read data from a file that is organized in columns (data for each year) and lines (the names of the values, e.g. the number of times I opened Firefox in 2015). Now, I know how to read a specific line, but I'm having trouble reading the individual columns. They are 10 spaces wide but only hold values like -0.5 and 1.2, i.e. just one decimal place. Like this:</p>
<pre><code>-3.5 1.0 -2.9 6.8
</code></pre>
<p>How would I do it to read these values since I have to compare them as floats to other data? Would I slice it? If so I haven't found a way to do so.</p>
| 0 | 2016-10-11T22:27:52Z | 39,987,803 | <p>Once you've got your line in a string, the simplest would be to split the string on the whitespace.</p>
<pre><code>values = line.split()
</code></pre>
<p>Then iterate through that list and cast each value to a float.</p>
<pre><code>float_values = [float(v) for v in values]
</code></pre>
| 0 | 2016-10-11T22:32:40Z | [
"python",
"slice"
] |
Serving two Django sites with WSGI results in internal server error for only one of the sites | 39,987,844 | <p>I have a Django site and a Django CMS site which I am serving from the same ubuntu 14.04 server running MySQL 5.6, Apache2 2.4.7 and Django 1.8 via mod_wsgi version 4.5.7 using virtual hosts. Locally (on my Linux PC) I have managed to accomplish this with both sites working perfectly and hence decided to migrate to the server.</p>
<p>After the migration, taking care that everything has the same version, the situation is such that the Django site is working properly while the Django CMS site is giving me an internal server error 500. These are my virtual hosts' .conf files, and the wsgi.py file for the broken site.</p>
<pre><code># wsgi.py
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bsg.settings")
application = get_wsgi_application()
# Django_cms_site.conf
<VirtualHost *:80>
ServerName site.com
ServerAdmin [email protected]
Alias /static/ /home/bsg/cms/static/
Alias /media/ /home/bsg/cms/media/
WSGIScriptAlias / /home/bsg/cms/bsg/wsgi.py
WSGIDaemonProcess bsgcms python-path=/home/hicklin/bsg/cms:/home/venv-bsg/lib/python2.7/site-packages
WSGIProcessGroup bsgcms
<Directory /home/bsg/cms>
Require all granted
</Directory>
LogLevel warn
ErrorLog /var/log/apache2/cms-error.log
CustomLog /var/log/apache2/cms-access.log combined
</VirtualHost>
# Django_site.conf
<VirtualHost *:80>
ServerName django.site.com
ServerAdmin [email protected]
Alias /static/ /home/bsg/admin/site/static/
WSGIScriptAlias / /home/bsg/admin/site/wsgi.py
WSGIDaemonProcess bsgadmin python-path=/home/bsg/admin:/home/hicklin/venv-bsg/lib/python2.7/site-packages
WSGIProcessGroup bsgadmin
<Directory /home/bsg/admin>
Require all granted
</Directory>
LogLevel warn
ErrorLog /var/log/apache2/admin-error.log
CustomLog /var/log/apache2/admin-access.log combined
</VirtualHost>
</code></pre>
<p>As can be noted, I am using the same virtual environment for both sites. The relevant logs give the following error.</p>
<pre><code>[Tue Oct 11 22:39:43.416901 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] mod_wsgi (pid=19799): Target WSGI script '/home/bsg/cms/bsg/wsgi.py' cannot be loaded as Python module.
[Tue Oct 11 22:39:43.416942 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] mod_wsgi (pid=19799): Exception occurred processing WSGI script '/home/bsg/cms/bsg/wsgi.py'.
[Tue Oct 11 22:39:43.416977 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] Traceback (most recent call last):
[Tue Oct 11 22:39:43.417014 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/bsg/cms/bsg/wsgi.py", line 16, in <module>
[Tue Oct 11 22:39:43.417067 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] application = get_wsgi_application()
[Tue Oct 11 22:39:43.417093 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
[Tue Oct 11 22:39:43.417134 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] django.setup()
[Tue Oct 11 22:39:43.417158 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
[Tue Oct 11 22:39:43.417194 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] apps.populate(settings.INSTALLED_APPS)
[Tue Oct 11 22:39:43.417217 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate
[Tue Oct 11 22:39:43.417255 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] raise RuntimeError("populate() isn't reentrant")
[Tue Oct 11 22:39:43.417289 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] RuntimeError: populate() isn't reentrant
</code></pre>
<p>After modifying the wsgi.py to get the <em>real error</em> as suggested by Dirk Eschler in <a href="http://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant">the thread</a>, the error in /var/log/apache2/cms-error.log changes to</p>
<pre><code>[Tue Oct 11 21:36:06.087723 2016] [wsgi:error] [pid 21584] handling WSGI exception
[Tue Oct 11 21:36:06.087811 2016] [wsgi:error] [pid 21584] Traceback (most recent call last):
[Tue Oct 11 21:36:06.087854 2016] [wsgi:error] [pid 21584] File "/home/bsg/cms/bsg/wsgi.py", line 9, in <module>
[Tue Oct 11 21:36:06.087975 2016] [wsgi:error] [pid 21584] application = get_wsgi_application()
[Tue Oct 11 21:36:06.087996 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
[Tue Oct 11 21:36:06.088067 2016] [wsgi:error] [pid 21584] django.setup()
[Tue Oct 11 21:36:06.088093 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/__init__.py", line 17, in setup
[Tue Oct 11 21:36:06.088164 2016] [wsgi:error] [pid 21584] configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
[Tue Oct 11 21:36:06.088190 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
[Tue Oct 11 21:36:06.088366 2016] [wsgi:error] [pid 21584] self._setup(name)
[Tue Oct 11 21:36:06.088388 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/conf/__init__.py", line 42, in _setup
[Tue Oct 11 21:36:06.088417 2016] [wsgi:error] [pid 21584] % (desc, ENVIRONMENT_VARIABLE))
[Tue Oct 11 21:36:06.088442 2016] [wsgi:error] [pid 21584] ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
[Tue Oct 11 21:36:08.591950 2016] [wsgi:error] [pid 21584] [remote 77.71.226.73:3851] mod_wsgi (pid=21584): Target WSGI script '/home/bsg/cms/bsg/wsgi.py' does not contain WSGI application 'application'.
</code></pre>
<p>Please note that I have managed to run, access and work the cms site via python manage.py runserver and it works without a hitch. Following the latest error did not result in anything fruitful yet. Any help or hints are greatly appreciated.</p>
| 0 | 2016-10-11T22:36:39Z | 39,988,240 | <p>This is caused by your database or some other resource needed during initialisation of Django not being available, or Django initialisation otherwise failing in some other way the first time. Back in time the initialisation of Django was reentrant and could be called a second time if the first time it failed. This is no longer the case so if initialisation fails the first time, you are forced to restart the process.</p>
<p>As you are using a recent version of mod_wsgi, you should be able to add an additional option to <code>WSGIDaemonProcess</code> to deal with this issue.</p>
<pre><code>startup-timeout=15
</code></pre>
<p>What will happen is that if mod_wsgi can't successfully load the WSGI script file within 15 seconds of the first attempt to do so, then the mod_wsgi daemon process will be automatically restarted. It will keep doing this if it keeps failing because of some service required during initialisation not being available.</p>
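<p>For example, applied to the CMS daemon process from the question, the directive might look like this (a sketch; only the <code>startup-timeout</code> option is new, the rest is copied from the original configuration):</p>
<pre><code>WSGIDaemonProcess bsgcms startup-timeout=15 python-path=/home/hicklin/bsg/cms:/home/venv-bsg/lib/python2.7/site-packages
</code></pre>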
<p>BTW, there should have been an earlier error message in the error log than that which gave the real reason why initialisation failed. That message you quote is from the subsequent failures after the first. I don't trust that the other message you give is correct, so go back and look for the first message after Apache restart.</p>
<p>Finally, best practice is not to use:</p>
<pre><code>WSGIDaemonProcess bsgadmin python-path=/home/bsg/admin:/home/hicklin/venv-bsg/lib/python2.7/site-packages
</code></pre>
<p>but:</p>
<pre><code>WSGIDaemonProcess bsgadmin python-home=/home/hicklin/venv-bsg python-path=/home/bsg/admin
</code></pre>
<p>That is, use <code>python-home</code> option to specify location of Python virtual environment, do not add <code>site-packages</code> to <code>python-path</code>. See:</p>
<ul>
<li><a href="http://blog.dscpl.com.au/2014/09/using-python-virtual-environments-with.html" rel="nofollow">http://blog.dscpl.com.au/2014/09/using-python-virtual-environments-with.html</a></li>
</ul>
| 1 | 2016-10-11T23:20:02Z | [
"python",
"django",
"mod-wsgi",
"django-cms"
] |
Serving two Django sites with WSGI results in internal server error for only one of the sites | 39,987,844 | <p>I have a Django site and a Django CMS site which I am serving from the same ubuntu 14.04 server running MySQL 5.6, Apache2 2.4.7 and Django 1.8 via mod_wsgi version 4.5.7 using virtual hosts. Locally (on my Linux PC) I have managed to accomplish this with both sites working perfectly and hence decided to migrate to the server.</p>
<p>After the migration, taking care that everything has the same version, the situation is such that the Django site is working properly while the Django CMS site is giving me an internal server error 500. These are my virtual hosts' .conf files, and the wsgi.py file for the broken site.</p>
<pre><code># wsgi.py
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bsg.settings")
application = get_wsgi_application()
# Django_cms_site.conf
<VirtualHost *:80>
ServerName site.com
ServerAdmin [email protected]
Alias /static/ /home/bsg/cms/static/
Alias /media/ /home/bsg/cms/media/
WSGIScriptAlias / /home/bsg/cms/bsg/wsgi.py
WSGIDaemonProcess bsgcms python-path=/home/hicklin/bsg/cms:/home/venv-bsg/lib/python2.7/site-packages
WSGIProcessGroup bsgcms
<Directory /home/bsg/cms>
Require all granted
</Directory>
LogLevel warn
ErrorLog /var/log/apache2/cms-error.log
CustomLog /var/log/apache2/cms-access.log combined
</VirtualHost>
# Django_site.conf
<VirtualHost *:80>
ServerName django.site.com
ServerAdmin [email protected]
Alias /static/ /home/bsg/admin/site/static/
WSGIScriptAlias / /home/bsg/admin/site/wsgi.py
WSGIDaemonProcess bsgadmin python-path=/home/bsg/admin:/home/hicklin/venv-bsg/lib/python2.7/site-packages
WSGIProcessGroup bsgadmin
<Directory /home/bsg/admin>
Require all granted
</Directory>
LogLevel warn
ErrorLog /var/log/apache2/admin-error.log
CustomLog /var/log/apache2/admin-access.log combined
</VirtualHost>
</code></pre>
<p>As can be noted, I am using the same virtual environment for both sites. The relevant logs give the following error.</p>
<pre><code>[Tue Oct 11 22:39:43.416901 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] mod_wsgi (pid=19799): Target WSGI script '/home/bsg/cms/bsg/wsgi.py' cannot be loaded as Python module.
[Tue Oct 11 22:39:43.416942 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] mod_wsgi (pid=19799): Exception occurred processing WSGI script '/home/bsg/cms/bsg/wsgi.py'.
[Tue Oct 11 22:39:43.416977 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] Traceback (most recent call last):
[Tue Oct 11 22:39:43.417014 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/bsg/cms/bsg/wsgi.py", line 16, in <module>
[Tue Oct 11 22:39:43.417067 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] application = get_wsgi_application()
[Tue Oct 11 22:39:43.417093 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
[Tue Oct 11 22:39:43.417134 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] django.setup()
[Tue Oct 11 22:39:43.417158 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
[Tue Oct 11 22:39:43.417194 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] apps.populate(settings.INSTALLED_APPS)
[Tue Oct 11 22:39:43.417217 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] File "/home/venv-bsg/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate
[Tue Oct 11 22:39:43.417255 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] raise RuntimeError("populate() isn't reentrant")
[Tue Oct 11 22:39:43.417289 2016] [wsgi:error] [pid 19799] [remote 77.71.226.73:64984] RuntimeError: populate() isn't reentrant
</code></pre>
<p>After modifying the wsgi.py to get the <em>real error</em> as suggested by Dirk Eschler in <a href="http://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant">the thread</a>, the error in /var/log/apache2/cms-error.log changes to</p>
<pre><code>[Tue Oct 11 21:36:06.087723 2016] [wsgi:error] [pid 21584] handling WSGI exception
[Tue Oct 11 21:36:06.087811 2016] [wsgi:error] [pid 21584] Traceback (most recent call last):
[Tue Oct 11 21:36:06.087854 2016] [wsgi:error] [pid 21584] File "/home/bsg/cms/bsg/wsgi.py", line 9, in <module>
[Tue Oct 11 21:36:06.087975 2016] [wsgi:error] [pid 21584] application = get_wsgi_application()
[Tue Oct 11 21:36:06.087996 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
[Tue Oct 11 21:36:06.088067 2016] [wsgi:error] [pid 21584] django.setup()
[Tue Oct 11 21:36:06.088093 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/__init__.py", line 17, in setup
[Tue Oct 11 21:36:06.088164 2016] [wsgi:error] [pid 21584] configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
[Tue Oct 11 21:36:06.088190 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
[Tue Oct 11 21:36:06.088366 2016] [wsgi:error] [pid 21584] self._setup(name)
[Tue Oct 11 21:36:06.088388 2016] [wsgi:error] [pid 21584] File "/home/venv-bsg/lib/python2.7/site-packages/django/conf/__init__.py", line 42, in _setup
[Tue Oct 11 21:36:06.088417 2016] [wsgi:error] [pid 21584] % (desc, ENVIRONMENT_VARIABLE))
[Tue Oct 11 21:36:06.088442 2016] [wsgi:error] [pid 21584] ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
[Tue Oct 11 21:36:08.591950 2016] [wsgi:error] [pid 21584] [remote 77.71.226.73:3851] mod_wsgi (pid=21584): Target WSGI script '/home/bsg/cms/bsg/wsgi.py' does not contain WSGI application 'application'.
</code></pre>
<p>Please note that I have managed to run, access and work the cms site via python manage.py runserver and it works without a hitch. Following the latest error did not result in anything fruitful yet. Any help or hints are greatly appreciated.</p>
| 0 | 2016-10-11T22:36:39Z | 40,065,477 | <p>Adding <code>startup-timeout=15</code> to <code>WSGIDaemonProcess</code> in the virtual hosts config file did not solve my issue.</p>
<p>As for the "earlier error message in the error log" with the modified wsgi.py to get the real error: the only errors present in /var/log/apache2/cms-error.log seem to be related to the fact that the "Target WSGI script does not contain WSGI application 'application'", as indicated in the question. Inspection of the apache2 error log during restart and subsequent attempts at loading the site showed the typical output.</p>
<pre><code>[Wed Oct 12 16:16:01.372050 2016] [mpm_prefork:notice] [pid 23564] AH00169: caught SIGTERM, shutting down
[Wed Oct 12 16:16:02.292367 2016] [mpm_prefork:notice] [pid 1169] AH00163: Apache/2.4.7 (Ubuntu) mod_wsgi/4.5.7 Python/2.7 PHP/5.5.9-1ubuntu4.20 configured -- resuming normal operations
[Wed Oct 12 16:16:02.292455 2016] [core:notice] [pid 1169] AH00094: Command line: '/usr/sbin/apache2'
</code></pre>
<p>Finally, what worked was to format the server and start all over again. mod_wsgi is now working perfectly with both sites. I have also upgraded to MySQL 5.7 and Ubuntu 16.04.</p>
<p>I would like to point out that the final suggestion about how to set the <code>python-path</code> and <code>python-home</code> options did not work on the old system; however, it worked without any issues on the new system. This leads me to think that I somehow installed a modified mod_wsgi, or that it was corrupted during installation. I am not sure of this.</p>
| 0 | 2016-10-16T00:04:22Z | [
"python",
"django",
"mod-wsgi",
"django-cms"
] |
python lists in a list | 39,987,852 | <p>I have a simple question. I know how to 'bring out' an element from a list (I don't know how to say it besides 'bring out'). For example,</p>
<pre><code>Alist = [ 1, 2, 3, 4 ]
</code></pre>
<p>Then,</p>
<pre><code>Alist[0] = 1
Alist[1] = 2
</code></pre>
<p>But, what if</p>
<pre><code>Blist = [[1, 2, 3 ,4], [5, 6, 7]]
Blist[0] = [1, 2, 3, 4]
Blist[1] = [5, 6, 7]
</code></pre>
<p>I can 'bring out' the whole [5,6,7] by calling Blist[1].</p>
<p>My question is, how do I bring out a specific number in the inner list, let's say the number 5? Hope this makes sense.</p>
| 0 | 2016-10-11T22:37:38Z | 39,987,871 | <p><strong>Blist[1]</strong> is a list itself. To get the head element, use an index on <strong>Blist[1]</strong></p>
<pre><code>Blist[1][0]
</code></pre>
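<p>For the example above this gives:</p>
<pre><code>>>> Blist = [[1, 2, 3, 4], [5, 6, 7]]
>>> Blist[1]
[5, 6, 7]
>>> Blist[1][0]
5
</code></pre>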
| 2 | 2016-10-11T22:39:36Z | [
"python",
"python-3.x"
] |
python lists in a list | 39,987,852 | <p>I have a simple question. I know how to 'bring out' an element from a list (I don't know how to say it besides 'bring out'). For example,</p>
<pre><code>Alist = [ 1, 2, 3, 4 ]
</code></pre>
<p>Then,</p>
<pre><code>Alist[0] = 1
Alist[1] = 2
</code></pre>
<p>But, what if</p>
<pre><code>Blist = [[1, 2, 3 ,4], [5, 6, 7]]
Blist[0] = [1, 2, 3, 4]
Blist[1] = [5, 6, 7]
</code></pre>
<p>I can 'bring out' the whole [5,6,7] by calling Blist[1].</p>
<p>My question is, how do I bring out a specific number in the inner list, let's say the number 5? Hope this makes sense.</p>
| 0 | 2016-10-11T22:37:38Z | 39,987,883 | <p>You know that <code>B[1]</code> gets you a reference to the second list in <code>B</code>.</p>
<pre><code>lst = B[1]
</code></pre>
<p>You can index that result again to get another element</p>
<pre><code>lst[0]
</code></pre>
<p>However you can of course do this more easily in one line</p>
<pre><code>B[1][0]
</code></pre>
| 3 | 2016-10-11T22:40:57Z | [
"python",
"python-3.x"
] |
Update Pandas Cells based on Column Values and Other Columns | 39,987,860 | <p>I am looking to update many columns based on the values in one column; this is easy with a loop but takes far too long for my application when there are many columns and many rows. What is the most elegant way to get the desired counts for each letter?</p>
<p>Desired Output:</p>
<pre><code> Things count_A count_B count_C count_D
['A','B','C'] 1 1 1 0
['A','A','A'] 3 0 0 0
['B','A'] 1 1 0 0
['D','D'] 0 0 0 2
</code></pre>
| 3 | 2016-10-11T22:38:13Z | 39,987,904 | <p><strong><em>option 1</em></strong><br>
<code>apply</code> + <code>value_counts</code></p>
<pre><code>s = pd.Series([list('ABC'), list('AAA'), list('BA'), list('DD')], name='Things')
pd.concat([s, s.apply(lambda x: pd.Series(x).value_counts()).fillna(0)], axis=1)
</code></pre>
<p><a href="https://i.stack.imgur.com/isZYF.png" rel="nofollow"><img src="https://i.stack.imgur.com/isZYF.png" alt="enter image description here"></a></p>
<p><strong><em>option 2</em></strong><br>
use <code>pd.DataFrame(s.tolist())</code> + <code>stack</code> / <code>groupby</code> / <code>unstack</code></p>
<pre><code>pd.concat([s,
pd.DataFrame(s.tolist()).stack() \
.groupby(level=0).value_counts() \
.unstack(fill_value=0)],
axis=1)
</code></pre>
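<p>To get the <code>count_A</code> to <code>count_D</code> column names shown in the desired output, either option can be finished with <code>add_prefix</code>; for example, building on option 1 (a sketch):</p>
<pre><code>counts = s.apply(lambda x: pd.Series(x).value_counts()).fillna(0).astype(int)
pd.concat([s, counts.add_prefix('count_')], axis=1)
</code></pre>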
| 1 | 2016-10-11T22:44:04Z | [
"python",
"pandas",
"apply"
] |
Update Pandas Cells based on Column Values and Other Columns | 39,987,860 | <p>I am looking to update many columns based on the values in one column; this is easy with a loop but takes far too long for my application when there are many columns and many rows. What is the most elegant way to get the desired counts for each letter?</p>
<p>Desired Output:</p>
<pre><code> Things count_A count_B count_C count_D
['A','B','C'] 1 1 1 0
['A','A','A'] 3 0 0 0
['B','A'] 1 1 0 0
['D','D'] 0 0 0 2
</code></pre>
| 3 | 2016-10-11T22:38:13Z | 39,988,409 | <p>The most elegant is definitely the CountVectorizer from sklearn. </p>
<p>I'll show you how it works first, then I'll do everything in one line, so you can see how elegant it is. </p>
<h3>First, we'll do it step by step:</h3>
<p>let's create some data</p>
<pre><code>raw = ['ABC', 'AAA', 'BA', 'DD']
things = [list(s) for s in raw]
</code></pre>
<p>Then read in some packages and initialize count vectorizer</p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
cv = CountVectorizer(tokenizer=lambda doc: doc, lowercase=False)
</code></pre>
<p>Next we generate a matrix of counts</p>
<pre><code>matrix = cv.fit_transform(things)
names = ["count_"+n for n in cv.get_feature_names()]
</code></pre>
<p>And save as a data frame</p>
<pre><code>df = pd.DataFrame(data=matrix.toarray(), columns=names, index=raw)
</code></pre>
<p>Generating a data frame like this: </p>
<pre><code> count_A count_B count_C count_D
ABC 1 1 1 0
AAA 3 0 0 0
BA 1 1 0 0
DD 0 0 0 2
</code></pre>
<h3>Elegant version:</h3>
<p>Everything above in one line</p>
<pre><code>df = pd.DataFrame(data=cv.fit_transform(things).toarray(), columns=["count_"+n for n in cv.get_feature_names()], index=raw)
</code></pre>
<h3>Timing:</h3>
<p>You mentioned that you're working with a rather large dataset, so I used the %%timeit function to give a time estimate. </p>
<p>Previous response by @piRSquared (which otherwise looks very good!) </p>
<pre><code>pd.concat([s, s.apply(lambda x: pd.Series(x).value_counts()).fillna(0)], axis=1)
</code></pre>
<p><code>100 loops, best of 3: 3.27 ms per loop</code></p>
<p>My answer:</p>
<pre><code>pd.DataFrame(data=cv.fit_transform(things).toarray(), columns=["count_"+n for n in cv.get_feature_names()], index=raw)
</code></pre>
<p><code>1000 loops, best of 3: 1.08 ms per loop</code></p>
<p>According to my testing, <em>CountVectorizer</em> is about 3x faster. </p>
| 3 | 2016-10-11T23:41:32Z | [
"python",
"pandas",
"apply"
] |
Count duplicates in list and assign the sum into list | 39,987,877 | <p>I have a list with duplicate strings:</p>
<pre><code>lst = ["abc", "abc", "omg", "what", "abc", "omg"]
</code></pre>
<p>and I would like to produce:</p>
<pre><code>lst = ["3 abc", "2 omg", "what"]
</code></pre>
<p>so basically count duplicates, remove duplicates and add the sum to the beginning of the string.</p>
<p>This is how I do it right now:</p>
<pre><code>from collections import Counter
list2=[]
for i in lst:
y = dict(Counter(i))
have = list(accumulate(y.items())) # creating [("omg", 3), ...]
for tpl in have: #
join_list = []
if tpl[1] > 1:
join_list.append(str(tpl[1])+" "+tpl[0])
else:
join_list.append(tpl[0])
list2.append(', '.join(join_list))
</code></pre>
<p>Is there an easier way to obtain the desired result in Python?</p>
| 3 | 2016-10-11T22:40:20Z | 39,987,937 | <p>It seems you are needlessly complicating things. Here is a very Pythonic approach:</p>
<pre><code>>>> import collections
>>> class OrderedCounter(collections.Counter, collections.OrderedDict):
... pass
...
>>> lst = ["abc", "abc", "omg", "what", "abc", "omg"]
>>> counts = OrderedCounter(lst)
>>> counts
OrderedCounter({'abc': 3, 'omg': 2, 'what': 1})
>>> ["{} {}".format(v,k) if v > 1 else k for k,v in counts.items()]
['3 abc', '2 omg', 'what']
>>>
</code></pre>
| 5 | 2016-10-11T22:47:09Z | [
"python"
] |
Count duplicates in list and assign the sum into list | 39,987,877 | <p>I have a list with duplicate strings:</p>
<pre><code>lst = ["abc", "abc", "omg", "what", "abc", "omg"]
</code></pre>
<p>and I would like to produce:</p>
<pre><code>lst = ["3 abc", "2 omg", "what"]
</code></pre>
<p>so basically count duplicates, remove duplicates and add the sum to the beginning of the string.</p>
<p>This is how I do it right now:</p>
<pre><code>from collections import Counter
list2=[]
for i in lst:
y = dict(Counter(i))
have = list(accumulate(y.items())) # creating [("omg", 3), ...]
for tpl in have: #
join_list = []
if tpl[1] > 1:
join_list.append(str(tpl[1])+" "+tpl[0])
else:
join_list.append(tpl[0])
list2.append(', '.join(join_list))
</code></pre>
<p>Is there an easier way to obtain the desired result in Python?</p>
| 3 | 2016-10-11T22:40:20Z | 39,987,952 | <p>You've properly used the Counter type to accumulate the needed values. Now, it's just a matter of a more Pythonic way to generate the results. Most of all, pull the initialization out of the loop, or you'll lose all but the last entry.</p>
<pre><code>list2 = []
for tpl in have:
    count = "" if tpl[1] == 1 else str(tpl[1])+" "  # a count of 1 gets no prefix
list2.append(count + tpl[0])
</code></pre>
<p>Now, to throw all of that into a list comprehension:</p>
<pre><code>list2 = [ ("" if tpl[1] == 1 else str(tpl[1])+" ") + tpl[0] \
          for tpl in have]
</code></pre>
| 1 | 2016-10-11T22:48:10Z | [
"python"
] |
Count duplicates in list and assign the sum into list | 39,987,877 | <p>I have a list with duplicate strings:</p>
<pre><code>lst = ["abc", "abc", "omg", "what", "abc", "omg"]
</code></pre>
<p>and I would like to produce:</p>
<pre><code>lst = ["3 abc", "2 omg", "what"]
</code></pre>
<p>so basically count duplicates, remove duplicates and add the sum to the beginning of the string.</p>
<p>This is how I do it right now:</p>
<pre><code>from collections import Counter
list2=[]
for i in lst:
y = dict(Counter(i))
have = list(accumulate(y.items())) # creating [("omg", 3), ...]
for tpl in have: #
join_list = []
if tpl[1] > 1:
join_list.append(str(tpl[1])+" "+tpl[0])
else:
join_list.append(tpl[0])
list2.append(', '.join(join_list))
</code></pre>
<p>Is there an easier way to obtain the desired result in Python?</p>
| 3 | 2016-10-11T22:40:20Z | 39,988,073 | <p>Try this:</p>
<pre><code>lst = ["abc", "abc", "omg", "what", "abc", "omg"]
l = [lst.count(i) for i in lst] # Count number of duplicates
d = dict(zip(lst, l)) # Convert to dictionary
lst = [str(d[i])+' '+i if d[i]>1 else i for i in d] # Convert to list of strings
</code></pre>
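<p>A quick check of the result (note that the final comprehension iterates over a plain dict, so in Python 2 the output order is not guaranteed to match the original list):</p>
<pre><code>print lst  # e.g. ['3 abc', '2 omg', 'what'] (order may vary)
</code></pre>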
| 1 | 2016-10-11T23:01:50Z | [
"python"
] |
Count duplicates in list and assign the sum into list | 39,987,877 | <p>I have a list with duplicate strings:</p>
<pre><code>lst = ["abc", "abc", "omg", "what", "abc", "omg"]
</code></pre>
<p>and I would like to produce:</p>
<pre><code>lst = ["3 abc", "2 omg", "what"]
</code></pre>
<p>so basically count duplicates, remove duplicates and add the sum to the beginning of the string.</p>
<p>This is how I do it right now:</p>
<pre><code>from collections import Counter
list2=[]
for i in lst:
y = dict(Counter(i))
have = list(accumulate(y.items())) # creating [("omg", 3), ...]
for tpl in have: #
join_list = []
if tpl[1] > 1:
join_list.append(str(tpl[1])+" "+tpl[0])
else:
join_list.append(tpl[0])
list2.append(', '.join(join_list))
</code></pre>
<p>Is there an easier way to obtain the desired result in Python?</p>
| 3 | 2016-10-11T22:40:20Z | 39,988,461 | <p>Another possible solution with comments to help...</p>
<pre><code>import operator
#list
lst = ["abc", "abc", "omg", "what", "abc", "omg"]
#dictionary
countDic = {}
#iterate lst to populate dictionary: {'what': 1, 'abc': 3, 'omg': 2}
for i in lst:
if i in countDic:
countDic[i] += 1
else:
countDic[i] = 1
#clean list
lst = []
#convert dictionary to a list of (word, count) pairs sorted by key: [('abc', 3), ('omg', 2), ('what', 1)]
sortedLst = sorted(countDic.items(), key=operator.itemgetter(0))
#iterate sorted list to populate list
for k in sortedLst:
if k[1] != 1:
lst.append(str(k[1]) + " " + k[0])
else:
lst.append(k[0])
#result
print lst
</code></pre>
<p>Output:</p>
<pre><code>['3 abc', '2 omg', 'what']
</code></pre>
| 1 | 2016-10-11T23:47:27Z | [
"python"
] |
Python equivalent of SQL: SELECT w/ MAX() and GROUP BY | 39,987,920 | <p>I have data like this:</p>
<pre><code>df = pd.DataFrame( {
'ID': [1,1,2,3,3,3,4],
'SOME_NUM': [8,10,2,4,0,5,1]
} );
df
ID SOME_NUM
0 1 8
1 1 10
2 2 2
3 3 4
4 3 0
5 3 5
6 4 1
</code></pre>
<p>And I want to group by the ID column while retaining the maximum value of SOME_NUM as a separate column. This would be easy in SQL:</p>
<pre><code>SELECT ID,
MAX(SOME_NUM)
FROM DF
GROUP BY ID;
</code></pre>
<p>But I'm having trouble finding the equivalent Python code. Seems like this should be easy. Anyone have a solution?</p>
<p>Desired result:</p>
<pre><code> new_df
ID SOME_NUM
0 1 10
1 2 2
2 3 5
6 4 1
</code></pre>
| 2 | 2016-10-11T22:46:07Z | 39,987,983 | <p>Seeing as how you are using Pandas... use the groupby functionality baked in</p>
<pre><code>df.groupby("ID").max()
</code></pre>
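<p>If you want <code>ID</code> back as a regular column (as in the desired result) rather than as the index, one of these should do it:</p>
<pre><code>df.groupby("ID", as_index=False)["SOME_NUM"].max()
# or
df.groupby("ID").max().reset_index()
</code></pre>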
| 3 | 2016-10-11T22:51:40Z | [
"python",
"sql",
"pandas",
"group-by",
"max"
] |
What is the origin/etymology of Matplotlib's symlog (a.k.a. symmetrical log) scale? | 39,988,048 | <p>This is not really a programming question. Rather a historical one...</p>
<hr>
<p>I am wondering about Matplotlib's <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.yscale" rel="nofollow"><em>symlog</em></a>
or "symmetrical log" scale:</p>
<ul>
<li>Is it a Matplotlib invention?</li>
<li>Has anyone seen a similar feature in another plotting tool?
A math text book?
Elsewhere?</li>
</ul>
<hr>
<p>For completeness, and as the documentation is a little bit on the short side:</p>
<p>In essence, <em>symlog</em> gives a linear scale below a certain threshold and a log scale above it. This allows plotting a wide range of numbers (as does a log scale), including negative numbers and zero (which is not possible with a conventional log scale).</p>
<p>There are some examples <a href="http://matplotlib.org/examples/pylab_examples/symlog_demo.html" rel="nofollow">here</a> and <a href="http://stackoverflow.com/questions/3305865/what-is-the-difference-between-log-and-symlog/3513150#3513150">here</a>.</p>
<hr>
<p>As suggested by @Paul, I went ahead and asked the <a href="https://github.com/mdboom" rel="nofollow">original author</a> of the Matplotlib implementation. He "didn't invent the concept" but "believe[s] it was implemented on a user request". He couldn't find a reference in the Matplotlib mailing list, though.</p>
<p>Can anyone point to such a reference? It might be very insightful.</p>
| -2 | 2016-10-11T22:59:09Z | 39,988,706 | <p>1) You would have to ask <a href="https://github.com/mdboom" rel="nofollow">mdboom</a>, who appears to have authored the relevant class (according to git blame).</p>
<p>2) The relevant class is <code>SymmetricalLogScale</code>.</p>
<p>Matplotlib has a github, and has been under version control for some time, so these questions are easily checked by reading the source + git blame.</p>
| -1 | 2016-10-12T00:23:08Z | [
"python",
"matplotlib",
"plot",
"scale",
"logarithm"
] |
Find subarray that sums to given value in Python | 39,988,052 | <p>If I am given an array [1,2,3,4] and a value 8, I want to return the subarray [1,3,4]. I have a bug in my code and am not sure how to fix it since I am new to recursion. I have my Python code below. I am getting back the value [3,4] to print, which is obviously not the correct answer. How do I get my first element in the array?</p>
<pre><code>def main():
s = 0
a = [1,2,3,4] # given array
sa = [] # sub-array
w = 8 # given weight
d = False
d, sa = checkForWeight(a,w,s,d,sa)
print sa
def checkForWeight(a,w,s,d,sa):
l = len(a)
s += a[0]
sa.append(a[0])
if s == w:
d = True
return d, sa
else:
try:
d, sa = checkForWeight(a[1:],w,s,d,sa)
if d != True:
d, sa = checkForWeight(a[2:],w,s,d,sa)
else:
return d, sa
except:
sa = [] # i put this here because I want to erase the incorrect array
return d, sa
</code></pre>
| 0 | 2016-10-11T22:59:43Z | 39,988,197 | <p>Do you need <strong>any</strong> subarray that matches your sum? or <strong>all</strong> subarrays? (Or the shortest, or the longest?) The proper answer is going to be highly dependent on this.</p>
<p>BTW, this is a variant of the knapsack problem: <a href="https://en.wikipedia.org/wiki/Knapsack_problem" rel="nofollow">https://en.wikipedia.org/wiki/Knapsack_problem</a></p>
<p>Also, your recursive strategy appears to be factorial in complexity. (If it were for a code test, this alone would likely fail the applicant.) I'd highly suggest a dynamic programming approach.</p>
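<p>For illustration, here is a minimal sketch of that dynamic-programming idea (my own addition, assuming non-negative integers and that any one matching subset is enough):</p>
<pre><code>def subset_with_sum(a, w):
    # reachable maps each achievable sum to one subset (as a tuple) that produces it
    reachable = {0: ()}
    for x in a:
        for s, subset in list(reachable.items()):
            if s + x <= w and s + x not in reachable:
                reachable[s + x] = subset + (x,)
    return reachable.get(w)  # None if no subset sums to w
print subset_with_sum([1, 2, 3, 4], 8)  # prints a subset summing to 8, e.g. (1, 3, 4)
</code></pre>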
<p><strong>EDIT</strong></p>
<p>If you need all possible, you're looking at an NP problem. I'd recommend focusing on ease of implementation/maintenance rather than absolute performance to show off your skills. For example:</p>
<pre><code>import itertools
def find_all_subsets_that_sum(elements, total):
for i in range(len(elements)):
for possible in itertools.combinations(elements, i+1):
if sum(possible)==total:
yield possible
print list(find_all_subsets_that_sum([1,2,3,4,5,6,7,8,9,10], 10))
</code></pre>
<p>It is not the absolute fastest (you could do a lot of pruning in a self-rolled recursive solution), but it'll be the same big-O as whatever more complicated solution you come up with. (All solutions will be dominated by the O(n choose n/2) term.) Very few interview candidates will respond with something like:</p>
<blockquote>
<p>This is not as fast as it can be, but it's within spitting distance of the fastest, and would likely be the best ROI on developer hours, in both implementation and maintenance. Unless of course the data set we're parsing is huge, in which case i would recommend relaxing the requirements to returning a heuristic of some number of solutions that could be calculated with a O(n^2) dynamic programming solution."</p>
</blockquote>
<p>And you can use that to stand out.</p>
| 1 | 2016-10-11T23:14:40Z | [
"python",
"recursion",
"sub-array"
] |
Find subarray that sums to given value in Python | 39,988,052 | <p>If I am given an array [1,2,3,4] and a value 8, I want to return the subarray [1,3,4]. I have a bug in my code and am not sure how to fix it since I am new to recursion. I have my Python code below. I am getting back the value [3,4] to print, which is obviously not the correct answer. How do I get my first element in the array?</p>
<pre><code>def main():
s = 0
a = [1,2,3,4] # given array
sa = [] # sub-array
w = 8 # given weight
d = False
d, sa = checkForWeight(a,w,s,d,sa)
print sa
def checkForWeight(a,w,s,d,sa):
l = len(a)
s += a[0]
sa.append(a[0])
if s == w:
d = True
return d, sa
else:
try:
d, sa = checkForWeight(a[1:],w,s,d,sa)
if d != True:
d, sa = checkForWeight(a[2:],w,s,d,sa)
else:
return d, sa
except:
sa = [] # i put this here because I want to erase the incorrect array
return d, sa
</code></pre>
| 0 | 2016-10-11T22:59:43Z | 39,988,518 | <p>The immediate problem is that you append a[0] to sa at the top of the function, but later destroy that with the return value from a sub-call. To patch this, add a clause before you return the final result:</p>
<pre><code> except:
sa = [] # i put this here because I want to erase the incorrect array
if d:
sa = [a[0]] + sa
print "Leave: drop", d,sa
return d, sa
</code></pre>
<p>I do recommend that you follow the recommendations in the comments: instead of passing so much stuff around, concentrate on local control of partial solutions.</p>
<p>Try a solution two ways: with and without the current element. Your recursive calls will look something like this:</p>
<pre><code>sa = checkForWeight(a[1:], w)       # Solutions without first element
-- and --
sa = checkForWeight(a[1:], w-a[0])  # Solutions using first element
</code></pre>
<p>You don't have to return a success flag; if <strong>sa</strong> is None or empty, the call failed to find a solution. If it succeeded in the <strong>w-a[0]</strong> call, then you also need to prepend each solution in <strong>sa</strong> with <strong>a[0]</strong>.</p>
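<p>A complete version of that scheme might look like this (my own sketch, keeping the question's function name but dropping the extra bookkeeping arguments):</p>
<pre><code>def checkForWeight(a, w):
    if w == 0:
        return []                  # success: nothing left to pick
    if not a or w < 0:
        return None                # failure on this branch
    with_first = checkForWeight(a[1:], w - a[0])
    if with_first is not None:
        return [a[0]] + with_first
    return checkForWeight(a[1:], w)
print checkForWeight([1, 2, 3, 4], 8)   # [1, 3, 4]
</code></pre>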
<p>Does this get you moving?</p>
| 1 | 2016-10-11T23:55:40Z | [
"python",
"recursion",
"sub-array"
] |
Find subarray that sums to given value in Python | 39,988,052 | <p>If I am given an array [1,2,3,4] and a value 8, I want to return the subarray [1,3,4]. I have a bug in my code and am not sure how to fix it since I am new to recursion. I have my Python code below. I am getting back the value [3,4] to print, which is obviously not the correct answer. How do I get my first element in the array?</p>
<pre><code>def main():
s = 0
a = [1,2,3,4] # given array
sa = [] # sub-array
w = 8 # given weight
d = False
d, sa = checkForWeight(a,w,s,d,sa)
print sa
def checkForWeight(a,w,s,d,sa):
l = len(a)
s += a[0]
sa.append(a[0])
if s == w:
d = True
return d, sa
else:
try:
d, sa = checkForWeight(a[1:],w,s,d,sa)
if d != True:
d, sa = checkForWeight(a[2:],w,s,d,sa)
else:
return d, sa
except:
sa = [] # i put this here because I want to erase the incorrect array
return d, sa
</code></pre>
| 0 | 2016-10-11T22:59:43Z | 39,988,549 | <p>I made a recursive solution that works, hope it helps:</p>
<pre><code>def main():
success, solution = WeightChecker((1,2,3,4)).check(8)
print solution
class WeightChecker(object):
def __init__(self, to_check):
self._to_check = to_check
def check(self, weight):
return self._check((), 0, weight)
def _check(self, current_solution, index_to_check, remaining_weight):
if remaining_weight == 0:
return True, current_solution
if index_to_check == len(self._to_check):
return False, ()
current_check = self._to_check[index_to_check]
success, solution = self._check(current_solution + (current_check, ), index_to_check + 1, remaining_weight - current_check)
if not success:
success, solution = self._check(current_solution, index_to_check + 1, remaining_weight)
return success, solution
</code></pre>
<p>(The dynamic programming approach is better as keredson suggested)</p>
| 1 | 2016-10-12T00:00:32Z | [
"python",
"recursion",
"sub-array"
] |
Implementing softmax regression | 39,988,084 | <p>I am trying to make a neural network using softmax regression. I am using the following regression formula:</p>
<p><a href="https://i.stack.imgur.com/oaAL7.png" rel="nofollow"><img src="https://i.stack.imgur.com/oaAL7.png" alt="enter image description here"></a></p>
<p>Let's say I have an input of 1000x100. In other words, let's say I have 1000 images, each of dimensions 10x10. Now, let's say the images are images of letters from A, B, C, D, E, F, G, H, I, J and I'm trying to predict this. My design is the following: to have 100 inputs (each image) and 10 outputs.</p>
<p>I have the following doubts. Given that n is a superscript in x^n, with regard to the numerator, should I perform the dot product of w (w = weights whose dimensions are 10x100 - 10 representing the number of outputs and 100 representing the number of inputs) and a single x (a single image), or all the images combined (1000x100)? I am coding in Python, and if I do the dot product of w and x^T (10x100 dot 100x1000), then I am not sure how I can make that an exponent. I am using numpy. I am having a hard time wrapping my mind around how these matrices can be raised as an exponent.</p>
| 2 | 2016-10-11T23:03:12Z | 39,988,320 | <p>If you are training Neural Networks, it might be worthwhile to check the <a href="http://deeplearning.net/tutorial/mlp.html" rel="nofollow">Theano</a> library. It features various output activation functions like <em>tanh</em>, <em>softmax</em>, etc. and allows training of neural networks on the GPU.</p>
<p>Also, x^n is the output of the last layer in the above formula, not the input raised to some exponent. You can't put a matrix in an exponent.</p>
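<p>As an illustration of the batched computation in NumPy (my own sketch, not from Theano; the exponential is applied element-wise to the score matrix, never to the matrix as a power):</p>
<pre><code>import numpy as np
X = np.random.rand(1000, 100)   # 1000 images, each flattened to 100 features
W = np.random.rand(10, 100)     # weights: 10 classes x 100 features
b = np.zeros(10)                # one bias per class
scores = X.dot(W.T) + b                       # shape (1000, 10)
scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
exp_scores = np.exp(scores)                   # element-wise exponential
probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)  # each row sums to 1
</code></pre>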
<p>You should check more about softmax regression. <a href="http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/" rel="nofollow">This</a> might be of help.</p>
| 1 | 2016-10-11T23:31:28Z | [
"python",
"neural-network",
"softmax"
] |