title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How can I solve this regular expression in Python? | 39,991,485 | <p>I would like to construct a regular expression pattern for the following string, and use Python to extract:</p>
<pre><code>str = "hello w0rld how 34 ar3 44 you\n welcome 200 stack000verflow\n"
</code></pre>
<p>What I want to do is extract the <strong>independent</strong> number values and add them, which should give 278. A preliminary Python code is:</p>
<pre><code>import re
x = re.findall('([0-9]+)', str)
</code></pre>
<p>The problem with the above code is that numbers within a char substring like 'ar3' would show up. Any idea how to solve this?</p>
| 0 | 2016-10-12T06:02:42Z | 39,993,094 | <p>The solutions posted so far only work (if at all) for numbers that are preceded and followed by whitespace. They will fail if a number occurs at the very start or end of the string, or if a number appears at the end of a sentence, for example. This can be avoided using <a href="http://www.regular-expressions.info/wordboundaries.html" rel="nofollow">word boundary anchors</a>:</p>
<pre><code>s = "100 bottles of beer on the wall (ignore the 1000s!), now 99, now only 98"
s = re.findall(r"\b\d+\b", s) # \b matches at the start/end of an alphanumeric sequence
print(sum(map(int, s)))
</code></pre>
<p>Result: <code>297</code></p>
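<p>As a quick check (not part of the original answer), applying the boundary-anchored pattern to the exact string from the question gives the expected total of 278, since digits embedded inside words are skipped:</p>

```python
import re

s = "hello w0rld how 34 ar3 44 you\n welcome 200 stack000verflow\n"
nums = re.findall(r"\b\d+\b", s)  # only standalone digit runs match
print(nums)                  # ['34', '44', '200']
print(sum(map(int, nums)))   # 278
```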
| 0 | 2016-10-12T07:40:59Z | [
"python",
"regex"
] |
Data Compression Model | 39,991,511 | <p>I am working on an exercise where we are to model Data Compression via a list. Say we were given a list: <code>[4,4,4,4,4,4,2,9,9,9,9,9,5,5,4,4]</code>
We should apply run-length encoding and get a new list <code>[4,6,2,1,9,5,5,2,4,2]</code>, where each integer is followed by its count: six 4's, one 2, five 9's, two 5's, and finally two 4's. </p>
<p>So far I have the following code, however I am hitting a semantic error, and not sure how to fix it:</p>
<pre><code> def string_compression(List):
newlist=[]
counter=0
x=0
for elm in List:
prev_item= List[x-1]
current_item=List[x]
if prev_item == current_item:
counter+=1
else:
newlist+=[current_item]+[counter]
counter=0
</code></pre>
<p>P.S I am still a beginner, so I apologize if it is a 'dumb' question! I would really appreciate some help.</p>
| 0 | 2016-10-12T06:04:30Z | 39,991,625 | <p>Your code is really confusing with all these counters. What you need to do is implement the algorithm as it is defined. You need just a single index <code>i</code> which keeps track at what position of the list you are currently at and at each step compare the number to the previous number.</p>
<ul>
<li>if they are equal, then increment the counter</li>
<li>if they are not equal then add the new pair of the tracked number with the count and reset the counter to 0 and set the new tracking number to the current number.</li>
</ul>
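<p>For reference, a minimal sketch of the steps described above (one possible implementation, not the original poster's code):</p>

```python
def rle_encode(values):
    if not values:
        return []
    encoded = []
    current, count = values[0], 1   # the first element starts the first run
    for v in values[1:]:
        if v == current:
            count += 1              # same run: just count it
        else:
            encoded += [current, count]   # run ended: emit (value, count)
            current, count = v, 1
    encoded += [current, count]     # emit the final run
    return encoded

print(rle_encode([4, 4, 4, 4, 4, 4, 2, 9, 9, 9, 9, 9, 5, 5, 4, 4]))
# [4, 6, 2, 1, 9, 5, 5, 2, 4, 2]
```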
| 0 | 2016-10-12T06:12:23Z | [
"python",
"encoding",
"compression"
] |
Data Compression Model | 39,991,511 | <p>I am working on an exercise where we are to model Data Compression via a list. Say we were given a list: <code>[4,4,4,4,4,4,2,9,9,9,9,9,5,5,4,4]</code>
We should apply run-length encoding and get a new list <code>[4,6,2,1,9,5,5,2,4,2]</code>, where each integer is followed by its count: six 4's, one 2, five 9's, two 5's, and finally two 4's. </p>
<p>So far I have the following code, however I am hitting a semantic error, and not sure how to fix it:</p>
<pre><code> def string_compression(List):
newlist=[]
counter=0
x=0
for elm in List:
prev_item= List[x-1]
current_item=List[x]
if prev_item == current_item:
counter+=1
else:
newlist+=[current_item]+[counter]
counter=0
</code></pre>
<p>P.S I am still a beginner, so I apologize if it is a 'dumb' question! I would really appreciate some help.</p>
| 0 | 2016-10-12T06:04:30Z | 39,993,618 | <p>You're on the right track, but this is easier with an index-based loop:</p>
<pre><code>def rle_encode(ls):
# Special case: the empty list.
if not ls:
return []
result = []
# Count the first element in the list, whatever that is.
count = 1
# Loop from 1; we're considering the first element as counted already.
# This is safe because we know that the list isn't empty.
for i in range(1, len(ls)):
if ls[i] == ls[i - 1]:
count += 1
else:
# Store the last run.
result.append(ls[i - 1])
result.append(count)
# Count the current number.
count = 1
# Add the last run since we didn't get a chance to in the loop.
result.append(ls[-1])
result.append(count)
return result
</code></pre>
| 0 | 2016-10-12T08:10:21Z | [
"python",
"encoding",
"compression"
] |
Python: display a subarray with sum divisible by k, how do I solve it with a hashtable? | 39,991,540 | <p>I am looking for a <strong>Python</strong> solution.</p>
<p>For example, for A=[2, -3, 5, 4, 3, -1, 7] and k = 3 there is a subarray such as {-3, 5, 4}; for k = 5 there is the subarray {-3, 5, 4, 3, -1, 7}, etc.</p>
| -1 | 2016-10-12T06:06:20Z | 39,991,891 | <p>This solution tracks prefix sums modulo <code>k</code>: whenever two prefix sums leave the same remainder, the subarray between those positions sums to a multiple of <code>k</code>. Try tracing through it yourself.</p>
<pre><code>def solve(a, k):
tbl = {0 : -1}
sum = 0
n = len(a)
for i in xrange(n):
sum = (sum + a[i]) % k
if sum in tbl:
key = tbl[sum]
result = a[key+1: i+1]
return result
tbl[sum] = i
return []
</code></pre>
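<p>A spot check of the idea (a self-contained sketch using <code>range</code> so it also runs on Python 3; note that for k = 3 it returns <code>[-3]</code>, the <em>first</em> qualifying subarray it finds, which is also a valid answer):</p>

```python
def find_divisible_subarray(a, k):
    # first_seen maps a prefix-sum remainder to the index where it first
    # occurred; remainder 0 "occurs" at index -1, before the array starts
    first_seen = {0: -1}
    prefix_mod = 0
    for i, x in enumerate(a):
        prefix_mod = (prefix_mod + x) % k
        if prefix_mod in first_seen:
            # equal remainders: the slice between them sums to a multiple of k
            return a[first_seen[prefix_mod] + 1 : i + 1]
        first_seen[prefix_mod] = i
    return []

print(find_divisible_subarray([2, -3, 5, 4, 3, -1, 7], 3))  # [-3]
print(find_divisible_subarray([2, -3, 5, 4, 3, -1, 7], 5))  # [5]
```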
| 0 | 2016-10-12T06:31:56Z | [
"python",
"hashtable"
] |
Output is blank | 39,991,565 | <pre><code>def load():
name=0
count=0
totalpr=0
name=input("Enter stock name OR -999 to Quit: ")
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("Enter stock name OR -999 to Quit: ")
def calc():
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def print():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
print("Total Profit is $", format(totalpr, '10,.2f'))
def main():
load()
calc()
print()
</code></pre>
<p>I want to write the main() function to call the functions above it. </p>
<p>However, when I run the program, the output is blank - nothing - there is no error given to elucidate the problem. </p>
<p>What am I doing wrong? </p>
| 0 | 2016-10-12T06:08:04Z | 39,991,621 | <p>To run a Python module as a program, structure it as shown below. In your program, <code>main</code> is just like the other functions and won't be executed automatically.</p>
<pre><code>if __name__ == '__main__':
load()
calc()
print()
</code></pre>
<p>Here we check whether the module name is <code>__main__</code>, and if so we call the other functions. <code>__name__</code> is set to <code>'__main__'</code> only when we run the module as the main program. </p>
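<p>A stripped-down illustration of the guard (a hypothetical file <code>demo.py</code>):</p>

```python
def main():
    return "running as a script"

if __name__ == '__main__':
    # True when executed as `python demo.py`,
    # False when pulled in via `import demo`
    print(main())
```

<p>Importing <code>demo</code> from another module defines <code>main</code> but prints nothing, because <code>__name__</code> is then <code>"demo"</code> rather than <code>"__main__"</code>.</p>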
| 0 | 2016-10-12T06:12:10Z | [
"python",
"function",
"python-3.x",
"calling-convention"
] |
Output is blank | 39,991,565 | <pre><code>def load():
name=0
count=0
totalpr=0
name=input("Enter stock name OR -999 to Quit: ")
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("Enter stock name OR -999 to Quit: ")
def calc():
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def print():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
print("Total Profit is $", format(totalpr, '10,.2f'))
def main():
load()
calc()
print()
</code></pre>
<p>I want to write the main() function to call the functions above it. </p>
<p>However, when I run the program, the output is blank - nothing - there is no error given to elucidate the problem. </p>
<p>What am I doing wrong? </p>
| 0 | 2016-10-12T06:08:04Z | 39,991,683 | <p>You are not calling <code>main()</code>. Also rename your <code>print()</code> function, since it shadows the built-in <code>print</code>; here I have renamed it to <code>fprint()</code>:</p>
<pre><code>def load():
name=0
count=0
totalpr=0
name=input("Enter stock name OR -999 to Quit: ")
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("Enter stock name OR -999 to Quit: ")
def calc():
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def fprint():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
print("Total Profit is $", format(totalpr, '10,.2f'))
def main():
load()
calc()
fprint()
main()
</code></pre>
<p>edit: changed function name of print()</p>
| 0 | 2016-10-12T06:17:19Z | [
"python",
"function",
"python-3.x",
"calling-convention"
] |
Output is blank | 39,991,565 | <pre><code>def load():
name=0
count=0
totalpr=0
name=input("Enter stock name OR -999 to Quit: ")
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("Enter stock name OR -999 to Quit: ")
def calc():
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def print():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
print("Total Profit is $", format(totalpr, '10,.2f'))
def main():
load()
calc()
print()
</code></pre>
<p>I want to write the main() function to call the functions above it. </p>
<p>However, when I run the program, the output is blank - nothing - there is no error given to elucidate the problem. </p>
<p>What am I doing wrong? </p>
| 0 | 2016-10-12T06:08:04Z | 39,991,860 | <p>Along with the problem of shadowing the built-in <code>print</code> function, you have a scope problem too: you must make the variables defined in each function <code>global</code> so that they can be accessed in the other functions.</p>
<pre><code>def load():
global name
global count
global shares
global pp
global sp
global commission
name=input("Enter stock name OR -999 to Quit: ")
count =0
while name != '-999':
count=count+1
shares=int(input("Enter number of shares: "))
pp=float(input("Enter purchase price: "))
sp=float(input("Enter selling price: "))
commission=float(input("Enter commission: "))
name=input("Enter stock name OR -999 to Quit: ")
def calc():
global amount_paid
global amount_sold
global profit_loss
global commission_paid_sale
global commission_paid_purchase
global totalpr
totalpr=0
amount_paid=shares*pp
commission_paid_purchase=amount_paid*commission
amount_sold=shares*sp
commission_paid_sale=amount_sold*commission
profit_loss=(amount_sold - commission_paid_sale) -(amount_paid + commission_paid_purchase)
totalpr=totalpr+profit_loss
def display():
print("\nStock Name:", name)
print("Amount paid for the stock: $", format(amount_paid, '10,.2f'))
print("Commission paid on the purchase: $", format(commission_paid_purchase, '10,.2f'))
print("Amount the stock sold for: $", format(amount_sold, '10,.2f'))
print("Commission paid on the sale: $", format(commission_paid_sale, '10,.2f'))
print("Profit (or loss if negative): $", format(profit_loss, '10,.2f'))
print("Total Profit is $", format(totalpr, '10,.2f'))
def main():
load()
calc()
display()
main()
</code></pre>
| 1 | 2016-10-12T06:29:15Z | [
"python",
"function",
"python-3.x",
"calling-convention"
] |
Why does Python not have an awgn function like MATLAB does? | 39,991,590 | <p>I want to add noise to my synthetic data set in Python. I am used to MATLAB, but I noticed that neither Python nor NumPy has an <code>awgn</code> function (which, to my understanding, can automatically determine the signal-to-noise ratio when adding Gaussian noise; I guess this is how it makes sure it doesn't accidentally destroy the signal by adding noise with too high a variance... not sure if there are other advantages of using it...). </p>
<p>I was wondering: why is there no <code>awgn</code> function to add noise in Python as there is in MATLAB? Is there a profound reason, or do developers just not think it's necessary? If so, what is the reason?</p>
| -8 | 2016-10-12T06:10:02Z | 39,992,624 | <p>As per your link, <code>awgn</code> is part of the MATLAB Communications toolbox, <a href="https://www.mathworks.com/help/comm/ref/awgn.html" rel="nofollow">https://www.mathworks.com/help/comm/ref/awgn.html</a>. It isn't part of the basic MATLAB package.</p>
<p>Similarly, in Python you'll have to go looking at some third party package, something involving signals, communications. <code>numpy</code> is the package that implements MATLAB like arrays, and many specialized tasks are implemented in <code>scipy</code>, or even in an addon <code>scikit</code>.</p>
<p>So the proper question isn't <code>why doesn't Python have it?</code> but rather <code>is there some Python package that implements it?</code></p>
<p>And as Mahdi commented, there is a function of that name in <a href="https://github.com/veeresht/CommPy/blob/master/commpy/channels.py" rel="nofollow">https://github.com/veeresht/CommPy/blob/master/commpy/channels.py</a></p>
<p>It doesn't look very complicated, so others might have implemented it as well.</p>
<p>As this link shows, <a href="http://stackoverflow.com/questions/14058340/adding-noise-to-a-signal-in-python">adding noise to a signal in python</a>, <code>numpy</code> has a <code>np.random.normal</code>, which will do most of the work.</p>
<p>Python / Numpy / Scipy is not supported by an international company. Functionality like this gets added by individuals working on their own projects. I wouldn't be surprised if the MATLAB Communications package has its origin in some third-party project many years ago. </p>
<p>The <code>scipy.signal</code> docs have an example of adding normal noise to a test case:</p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/tutorial/signal.html#periodogram-measurements" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/tutorial/signal.html#periodogram-measurements</a></p>
<pre><code>fs = 10e3
N = 1e5
amp = 2*np.sqrt(2)
freq = 1270.0
noise_power = 0.001 * fs / 2
time = np.arange(N) / fs
x = amp*np.sin(2*np.pi*freq*time)
x += np.random.normal(scale=np.sqrt(noise_power), size=time.shape)
</code></pre>
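<p>For completeness, here is a sketch of what a MATLAB-style <code>awgn(..., 'measured')</code> could look like on top of NumPy; the function name and signature here are my own invention, not an existing API:</p>

```python
import numpy as np

def awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has roughly the given SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    signal = np.asarray(signal, dtype=float)
    sig_power = np.mean(signal ** 2)                  # measure the signal power
    noise_power = sig_power / (10 ** (snr_db / 10.0)) # 20 dB -> power ratio 100
    noise = rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
    return signal + noise

t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = awgn(clean, snr_db=20)   # sine wave plus noise at ~20 dB SNR
```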
| 6 | 2016-10-12T07:15:24Z | [
"python",
"matlab",
"numpy",
"machine-learning",
"statistics"
] |
How to get the reference of one variable of one class in another class in python? | 39,991,676 | <p>I have a class C1, and another class C2 which takes one instance of C1 as a variable. If I want to get the variable of C1 in C2, I have to use <code>self.c1.variable</code>. How can I get a reference of the variable of C1 in C2 so that I can get it directly?</p>
<pre><code>class C1():
def __init__(self,a):
self.variable = a
class C2():
def __init__(self, c1):
self.c1 = c1
def print_variable(self):
print self.c1.variable
c1 = C1(1)
c2 = C2(c1)
c2.print_variable()
</code></pre>
| 0 | 2016-10-12T06:16:37Z | 39,991,734 | <p>You can use <code>@property</code> decorator to achieve that:</p>
<pre><code>class C2():
def __init__(self, c1):
self.c1 = c1
@property
def variable(self):
return self.c1.variable
</code></pre>
<p>In case you want to modify the <code>variable</code> in C2 instance:</p>
<pre><code> @variable.setter
def variable(self, value):
self.c1.variable = value
</code></pre>
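<p>Putting the pieces together, a quick demonstration of the read-through and write-through behaviour (using <code>print()</code> so it runs on Python 2 and 3):</p>

```python
class C1(object):
    def __init__(self, a):
        self.variable = a

class C2(object):
    def __init__(self, c1):
        self.c1 = c1

    @property
    def variable(self):
        return self.c1.variable       # reads delegate to the wrapped C1

    @variable.setter
    def variable(self, value):
        self.c1.variable = value      # writes go through to the wrapped C1

c1 = C1(1)
c2 = C2(c1)
print(c2.variable)   # 1
c2.variable = 5
print(c1.variable)   # 5
```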
| 0 | 2016-10-12T06:20:25Z | [
"python",
"class",
"reference"
] |
Pygame pressing key to move a Rect | 39,991,867 | <p>I am trying to keep pressing a key and move the square automatically. I've tried to change <code>pygame.key.get.pressed()</code> to <code>pygame.key.get.focused()</code>, but still nothing.</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((400,300))
pygame.display.set_caption("shield hacking")
JogoAtivo = True
GAME_BEGIN = False
# Speed in pixels per frame
x_speed = 0
y_speed = 0
cordX = 10
cordY = 100
def desenha():
screen.fill((0, 0, 0))
quadrado = pygame.draw.rect(screen, (255, 0, 0), (cordX, cordY ,50, 52))
pygame.display.flip();
while JogoAtivo:
for evento in pygame.event.get():
print(evento)
#verifica se o evento que veio eh para fechar a janela
pressed_keys = pygame.key.get_pressed()
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
if pressed_keys[pygame.K_SPACE]:
print('GAME BEGIN')
GAME_BEGIN = True
desenha();
if pressed_keys[pygame.K_LEFT] and GAME_BEGIN:
speedX=-3
cordX+=speedX
desenha()
if pressed_keys[pygame.K_RIGHT] and GAME_BEGIN:
speedX=3
cordX+=speedX
desenha()
</code></pre>
<p>UPDATED THE CODE, BUT STILL THE SAME PROBLEMS (included KEYDOWN event).</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((400,300))
pygame.display.set_caption("shield hacking")
JogoAtivo = True
GAME_BEGIN = False
# Speed in pixels per frame
x_speed = 0
y_speed = 0
cordX = 10
cordY = 100
def desenha():
screen.fill((0, 0, 0))
quadrado = pygame.draw.rect(screen, (255, 0, 0), (cordX, cordY ,50, 52))
pygame.display.flip();
while JogoAtivo:
for evento in pygame.event.get():
print(evento)
#verifica se o evento que veio eh para fechar a janela
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_SPACE:
print('GAME BEGIN')
GAME_BEGIN = True
desenha();
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_LEFT:
speedX=-3
cordX+=speedX
desenha()
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_RIGHT:
speedX=3
cordX+=speedX
desenha()
</code></pre>
| 0 | 2016-10-12T06:29:32Z | 39,992,198 | <p>You can't use <code>get_pressed()</code> inside the <code>for evento</code> loop, because while you keep a key pressed the key doesn't generate events, so <code>pygame.event.get()</code> returns an empty list and the <code>for</code> loop does nothing.</p>
<p>When you start pressing a key, the system generates a single <code>KEYDOWN</code> event; when you release it, the system generates a single <code>KEYUP</code> event; but the system doesn't generate <code>KEYDOWN</code> events between those two moments (while you keep the key pressed).</p>
<p>You have to use <code>get_pressed()</code> (and the related code) after the <code>for</code> loop.</p>
<pre><code>for evento in pygame.event.get():
print(evento)
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
# after for loop
pressed_keys = pygame.key.get_pressed()
if pressed_keys[pygame.K_SPACE]:
print('GAME BEGIN')
GAME_BEGIN = True
desenha();
if pressed_keys[pygame.K_LEFT] and GAME_BEGIN:
speedX=-3
cordX+=speedX
desenha()
if pressed_keys[pygame.K_RIGHT] and GAME_BEGIN:
speedX=3
cordX+=speedX
desenha()
</code></pre>
<p>or (more or less)</p>
<pre><code>for evento in pygame.event.get():
print(evento)
#verifica se o evento que veio eh para fechar a janela
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
elif evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_SPACE:
print('GAME BEGIN')
GAME_BEGIN = True
elif evento.key == pygame.K_LEFT:
speedX = -3
elif evento.key == pygame.K_RIGHT:
speedX = 3
elif evento.type == pygame.KEYUP:
if evento.key in (pygame.K_LEFT, pygame.K_RIGHT):
speedX = 0
# after loop
if GAME_BEGIN:
cordX += speedX
desenha()
</code></pre>
| 0 | 2016-10-12T06:51:06Z | [
"python",
"keyboard",
"pygame"
] |
Pygame pressing key to move a Rect | 39,991,867 | <p>I am trying to keep pressing a key and move the square automatically. I've tried to change <code>pygame.key.get.pressed()</code> to <code>pygame.key.get.focused()</code>, but still nothing.</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((400,300))
pygame.display.set_caption("shield hacking")
JogoAtivo = True
GAME_BEGIN = False
# Speed in pixels per frame
x_speed = 0
y_speed = 0
cordX = 10
cordY = 100
def desenha():
screen.fill((0, 0, 0))
quadrado = pygame.draw.rect(screen, (255, 0, 0), (cordX, cordY ,50, 52))
pygame.display.flip();
while JogoAtivo:
for evento in pygame.event.get():
print(evento)
#verifica se o evento que veio eh para fechar a janela
pressed_keys = pygame.key.get_pressed()
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
if pressed_keys[pygame.K_SPACE]:
print('GAME BEGIN')
GAME_BEGIN = True
desenha();
if pressed_keys[pygame.K_LEFT] and GAME_BEGIN:
speedX=-3
cordX+=speedX
desenha()
if pressed_keys[pygame.K_RIGHT] and GAME_BEGIN:
speedX=3
cordX+=speedX
desenha()
</code></pre>
<p>UPDATED THE CODE, BUT STILL THE SAME PROBLEMS (included KEYDOWN event).</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((400,300))
pygame.display.set_caption("shield hacking")
JogoAtivo = True
GAME_BEGIN = False
# Speed in pixels per frame
x_speed = 0
y_speed = 0
cordX = 10
cordY = 100
def desenha():
screen.fill((0, 0, 0))
quadrado = pygame.draw.rect(screen, (255, 0, 0), (cordX, cordY ,50, 52))
pygame.display.flip();
while JogoAtivo:
for evento in pygame.event.get():
print(evento)
#verifica se o evento que veio eh para fechar a janela
if evento.type == pygame.QUIT:
JogoAtivo = False
pygame.quit();
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_SPACE:
print('GAME BEGIN')
GAME_BEGIN = True
desenha();
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_LEFT:
speedX=-3
cordX+=speedX
desenha()
if evento.type == pygame.KEYDOWN:
if evento.key == pygame.K_RIGHT:
speedX=3
cordX+=speedX
desenha()
</code></pre>
| 0 | 2016-10-12T06:29:32Z | 40,117,388 | <p>From what I understand (correct me if I'm wrong, I may not be quite understanding what you asked), but it looks like what you are trying to do is press a key, keep the key held down, and have pygame continually process KEYDOWN events as you are holding the key down. In pygame, this does not work, but you <em>can</em> handle holding a key down in a different way. Just think: if you start holding a key down, it creates a KEYDOWN event. When you let go of that key, it generates a KEYUP event. So therefore, the key is being held down <em>after</em> you push the key down and <em>before</em> you let it go. The following code explains the concept through example:</p>
<pre><code>import pygame, sys, time
pygame.init()
screen = pygame.display.set_mode([640, 480])
a_pressed = False
while 1:
time.sleep(.2)
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_a:
a_pressed = True
if event.type == pygame.KEYUP:
if event.key == pygame.K_a:
a_pressed = False
if a_pressed == True:
print '"A" is currently being pressed down.'
</code></pre>
<p>I haven't been able to test the code (I'm on a school computer), but this should print the above message while you are holding the "A" key, at approximately 5 times per second (the <code>time.sleep</code> call is a lazy way of limiting the frame rate). The same applies not only to other keys, but also to dragging with the mouse.</p>
| 0 | 2016-10-18T20:21:03Z | [
"python",
"keyboard",
"pygame"
] |
Reducing time complexity of contiguous subarray | 39,991,921 | <p>I was wondering how I could reduce the time complexity of this algorithm.
It calculates the length of the longest contiguous subarray whose elements sum to at most the integer k.</p>
<p>a = an array of integers</p>
<p>k = max integer</p>
<p>ex: a = [1,2,3], k= 3</p>
<p>possible subarrays = [1],[1,2]</p>
<p>length of the max subarray = 2</p>
<pre><code> sys.setrecursionlimit(20000)
def maxLength(a, k):
#a = [1,2,3]
#k = 4
current_highest = 0
no_bigger = len(a)-1
for i in xrange(len(a)): #0 in [0,1,2]
current_sum = a[i]
sub_total = 1
for j in xrange(len(a)):
if current_sum <= k and ((i+sub_total)<=no_bigger) and (k>=(current_sum + a[i+sub_total])):
current_sum += a[i+sub_total]
sub_total += 1
else:
break
if sub_total > current_highest:
current_highest = sub_total
return current_highest
</code></pre>
| 0 | 2016-10-12T06:33:59Z | 39,992,681 | <p>You can use the <code>sliding window</code> algorithm for this. Note that it relies on the elements being non-negative, so that the window sum can only grow as the window extends.</p>
<p>Start at <code>index 0</code> and accumulate the sum of the subarray as you move forward. When the sum exceeds <code>k</code>, drop elements from the front of the window until the sum is at most <code>k</code> again, then continue extending on the right.</p>
<p>Find the Python code below:</p>
<pre><code>def max_length(a,k):
s = 0
m_len = 0
i,j=0,0
l = len(a)
while i<l:
if s<=k and m_len<(j-i):
m_len = j-i
print i,j,s
if s<=k and j<l:
s+=a[j]
j+=1
else:
s-=a[i]
i+=1
return m_len
a = [1,2,3]
k = 3
print max_length(a,k)
</code></pre>
<p>OUTPUT:</p>
<pre><code>2
</code></pre>
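<p>The same window logic written so it also runs on Python 3 (again assuming non-negative elements, since the window sum must shrink monotonically as the left edge advances):</p>

```python
def max_length(a, k):
    s = 0          # sum of the current window a[i:j]
    best = 0
    i = j = 0
    n = len(a)
    while i < n:
        if s <= k and best < j - i:
            best = j - i
        if s <= k and j < n:
            s += a[j]   # extend the window on the right
            j += 1
        else:
            s -= a[i]   # shrink the window on the left
            i += 1
    return best

print(max_length([1, 2, 3], 3))  # 2
```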
| 0 | 2016-10-12T07:19:10Z | [
"python",
"algorithm",
"optimization"
] |
Splitting list items and adding value to them in python | 39,992,009 | <p>I want to split list items then add values to them. To do this I am required to take the first sentence; split it into a list; use the <code>isdigit()</code> to determine if the list element is a digit then add 1 to the element; join the new list elements together using <code>join()</code>. It needs to be done using the for loop with enumerate. </p>
<p>This is my code :</p>
<pre><code>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
printer = a.split(" ")
print printer
if printer.isdigit():
</code></pre>
| 0 | 2016-10-12T06:38:43Z | 39,992,205 | <p>Looks like you want something like this<br>
I have replaced <code>li[0]</code> and the other variables with the string "Some_Value" because I did not know the values of those variables</p>
<pre><code>a="You would like to visit " + "Some_Value" +" as city 1 and " + "Some_Value" + " as city 2 and "+ "Some_Value" + " as city 3 on your trip"
a = a.split(" ")
for index, word in enumerate(a):
    if word.isdigit():
        a[index] = str(int(word) + 1)
print " ".join(a)
</code></pre>
<p><strong>OP</strong><br>
<code>You would like to visit Some_Value as city 2 and Some_Value as city 3 and Some_Value as city 4 on your trip</code></p>
| 1 | 2016-10-12T06:51:16Z | [
"python"
] |
Splitting list items and adding value to them in python | 39,992,009 | <p>I want to split list items then add values to them. To do this I am required to take the first sentence; split it into a list; use the <code>isdigit()</code> to determine if the list element is a digit then add 1 to the element; join the new list elements together using <code>join()</code>. It needs to be done using the for loop with enumerate. </p>
<p>This is my code :</p>
<pre><code>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
printer = a.split(" ")
print printer
if printer.isdigit():
</code></pre>
| 0 | 2016-10-12T06:38:43Z | 39,992,754 | <p>Here is another way to look at solution <em>(with comments added)</em></p>
<pre><code>li = ["New York", "London", "Tokyo"] #This is an example list for li
a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
printer = a.split(" ")
print printer
new_printer = []
for word in printer:
if word.isdigit():
word = str(int(word) + 1) #this increments word by 1. first we have to convert the string value of word to number (int) and then add one (+ 1), and then convert it back to a string (str) and save it back to word
new_printer.append(word) # this adds word (changed or not) at the end of new_printer
end_result = " ".join(new_printer) #this joins all the words in new_printer and places a space between them
print end_result
</code></pre>
<p>Good luck!</p>
| 0 | 2016-10-12T07:23:49Z | [
"python"
] |
Splitting list items and adding value to them in python | 39,992,009 | <p>I want to split list items then add values to them. To do this I am required to take the first sentence; split it into a list; use the <code>isdigit()</code> to determine if the list element is a digit then add 1 to the element; join the new list elements together using <code>join()</code>. It needs to be done using the for loop with enumerate. </p>
<p>This is my code :</p>
<pre><code>a="You would like to visit "+li[0]+" as city 1 and " +li[1]+ " as city 2 and "+li[2]+" as city 3 on your trip"
print a
printer = a.split(" ")
print printer
if printer.isdigit():
</code></pre>
| 0 | 2016-10-12T06:38:43Z | 39,993,388 | <pre><code>' '.join([ str( int(i)+1 ) if i.isdigit() else i for i in a.split() ] )
</code></pre>
| 0 | 2016-10-12T07:57:19Z | [
"python"
] |
How to add a named keyword argument to a function signature | 39,992,169 | <p>Can I use a mixin class to add a named keyword to the signature of a function in the base? At the moment I can't work out how to avoid overwriting the base function's signature:</p>
<pre><code>from inspect import signature
class Base(object):
def __init__(self, foo='bar', **kwargs):
pass
class Mixin(Base):
def __init__(self, foo2='bar2', **kwargs):
super(Mixin, self).__init__(**kwargs)
class P(Mixin, Base):
pass
print(signature(P.__init__))
# Output: (self, foo2='bar2', **kwargs)
# Desired output: (self, foo='bar', foo2='bar2', **kwargs)
</code></pre>
<p><strong>Edit</strong>: Thanks for the answers so far. Unfortunately, I actually need to add the named parameter to the function signature while also keeping the named parameters from the original base function signature (and these will vary depending on the base), because the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L207" rel="nofollow"><code>signature</code> is used elsewhere for introspection to extract the parameter names</a>. Is this going to be possible in the <code>__init__</code> method?</p>
<p>Here is a (horrible) partial solution, which changes the signature on instances, but not on the class itself, also it's missing <code>**kwargs</code> for some reason:</p>
<pre><code>class Mixin(Base):
def __init__(self, foo2='bar2', **kwargs):
super(Mixin, self).__init__(**kwargs)
sig = signature(super(Mixin, self).__init__)
    params = {k: v.default for k, v in sig.parameters.items() if v.default is not v.empty}
params['foo2'] = 'bar2'
argstring = ",".join("{}='{}'".format(k,v) for k,v in params.items())
exec("def new_init({}, **kwargs): self.__init__(**kwargs)".format(argstring))
self.__init__ = new_init
class P(Mixin, Base):
pass
p = P()
print(signature(p.__init__))
# (foo2='bar2', foo='bar')
</code></pre>
| 0 | 2016-10-12T06:49:22Z | 39,993,173 | <p>When subclassing, you can either extend or override the methods of the superclass, but you can't* modify them directly. Overiding is achieved by simply writing a new method of the same name; extension is achieved by overriding the method and then calling the superclass's method as a part of your replacement.</p>
<p>Here are two classes, the second of which overrides one method and extends another:</p>
<pre><code>class A:
def method1(self):
print("Method A.1 called")
def method2(self, arg1):
print("Method A.2 called with argument", arg1)
class B(A):
def method1(self):
print("Method B.1 called")
def method2(self, arg1, arg2="keyword arg"):
        print("Method B.2 called with arg1=", arg1, "arg2=", arg2)
        super().method2(arg1)
        print("Extended method complete")
</code></pre>
<p>Using this in an interactive context we see the following:</p>
<pre><code>>>> b = B()
>>> b.method1()
Method B.1 called
>>> b.method2("first", arg2="second")
Method B.2 called with arg1= first arg2= second
Method A.2 called with argument first
Extended method complete
</code></pre>
<p>* Technically it's possible in Python, but the code would be ugly and anyway any changes to the superclass would be seen by everything that <em>used</em> the superclass.</p>
| 0 | 2016-10-12T07:45:12Z | [
"python",
"oop"
] |
How to add a named keyword argument to a function signature | 39,992,169 | <p>Can I use a mixin class to add a named keyword to the signature of a function in the base? At the moment I can't work out how to avoid overwriting the base function's signature:</p>
<pre><code>from inspect import signature
class Base(object):
def __init__(self, foo='bar', **kwargs):
pass
class Mixin(Base):
def __init__(self, foo2='bar2', **kwargs):
super(Mixin, self).__init__(**kwargs)
class P(Mixin, Base):
pass
print(signature(P.__init__))
# Output: (self, foo2='bar2', **kwargs)
# Desired output: (self, foo='bar', foo2='bar2', **kwargs)
</code></pre>
<p><strong>Edit</strong> Thanks for the answers so far. Unfortunately I actually need to add the named parameter to the function signature, while also keeping the named parameters from the original base function signature (and these will vary depending on the base). The reason is that the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L207" rel="nofollow"><code>signature</code> is used elsewhere for introspection to extract the parameter names</a>. Is this going to be possible in the <code>__init__</code> method?</p>
<p>Here is a (horrible) partial solution, which changes the signature on instances, but not on the class itself, also it's missing <code>**kwargs</code> for some reason:</p>
<pre><code>class Mixin(Base):
def __init__(self, foo2='bar2', **kwargs):
super(Mixin, self).__init__(**kwargs)
sig = signature(super(Mixin, self).__init__)
        params = {k: v.default for k, v in sig.parameters.items() if v.default is not v.empty}
params['foo2'] = 'bar2'
argstring = ",".join("{}='{}'".format(k,v) for k,v in params.items())
exec("def new_init({}, **kwargs): self.__init__(**kwargs)".format(argstring))
self.__init__ = new_init
class P(Mixin, Base):
pass
p = P()
print(signature(p.__init__))
# (foo2='bar2', foo='bar')
</code></pre>
| 0 | 2016-10-12T06:49:22Z | 39,997,698 | <p>@jonrsharpe answered this in the comment, but if you want to require a parameter you could do this as a dynamic check in the mixin's method:</p>
<pre><code>class Base(object):
def __init__(self, foo='bar', **kwargs):
pass
class Mixin(base):
def __init__(self, **kwargs):
kwargs.pop('required_param')
kwargs.pop('optional_param', None)
super(Mixin, self).__init__(**kwargs)
class P(Mixin, Base):
pass
</code></pre>
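If the goal is specifically to make <code>inspect.signature</code> report the merged parameters (as scikit-learn-style introspection needs), one Python 3 sketch is to build a combined <code>inspect.Signature</code> from the MRO and attach it via the <code>__signature__</code> attribute, which <code>inspect.signature</code> honours. <code>merge_init_signature</code> is an invented helper, not anything from scikit-learn, and note that it mutates the function actually found as <code>P.__init__</code> (here <code>Mixin.__init__</code>):

```python
import inspect

class Base(object):
    def __init__(self, foo='bar', **kwargs):
        pass

class Mixin(Base):
    def __init__(self, foo2='bar2', **kwargs):
        super(Mixin, self).__init__(**kwargs)

class P(Mixin, Base):
    pass

def merge_init_signature(cls):
    """Attach a merged signature collected from every __init__ in the MRO."""
    params = {}
    for klass in reversed(cls.__mro__):
        if klass is object:
            continue
        init = klass.__dict__.get('__init__')
        if init is None:
            continue
        for name, p in inspect.signature(init).parameters.items():
            if p.kind in (inspect.Parameter.VAR_POSITIONAL,
                          inspect.Parameter.VAR_KEYWORD):
                continue
            params[name] = p
    ordered = [params.pop('self')] + list(params.values())
    ordered.append(inspect.Parameter('kwargs', inspect.Parameter.VAR_KEYWORD))
    # note: this mutates the function found as cls.__init__ (here Mixin.__init__)
    cls.__init__.__signature__ = inspect.Signature(ordered)

merge_init_signature(P)
print(inspect.signature(P.__init__))  # (self, foo='bar', foo2='bar2', **kwargs)
```

Calling the class is unaffected; only what introspection reports changes.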
| 0 | 2016-10-12T11:40:42Z | [
"python",
"oop"
] |
WindowsError: [Error 5] Access is denied in Flask | 39,992,222 | <p>I am trying to move an uploaded file to a specific folder in my Windows system and it gives me WindowsError: [Error 5] Access is denied error. The solutions I happen to see for such problems are run python as Administrator from cmd line. I am not sure if that is possible since it's a web app and i am using the default flask server for development purpose .</p>
<p>my code is</p>
<pre><code>@app.route('/test',methods=['POST'])
def test():
import os
    if not os.path.exists("test"):
os.makedirs("test")
f = open('test/abc.txt', 'w+')
f.close()
</code></pre>
| 0 | 2016-10-12T06:52:17Z | 39,993,019 | <p>I had been running the application directly from Pycharm, which doesn't run it in administrator mode</p>
<p>I tried running it using command prompt as administrator and it worked for me.</p>
| 0 | 2016-10-12T07:37:13Z | [
"python",
"flask"
] |
WindowsError: [Error 5] Access is denied in Flask | 39,992,222 | <p>I am trying to move an uploaded file to a specific folder in my Windows system and it gives me WindowsError: [Error 5] Access is denied error. The solutions I happen to see for such problems are run python as Administrator from cmd line. I am not sure if that is possible since it's a web app and i am using the default flask server for development purpose .</p>
<p>my code is</p>
<pre><code>@app.route('/test',methods=['POST'])
def test():
import os
    if not os.path.exists("test"):
os.makedirs("test")
f = open('test/abc.txt', 'w+')
f.close()
</code></pre>
| 0 | 2016-10-12T06:52:17Z | 39,996,819 | <p>Running the application 'directly in pycharm' is the equivlant of running it on the command prompt, but with a few caveats. <em>Personally I don't like running python in pycharm as I find it can cause errors.</em></p>
<p>Ideally you don't want to run as administrator, but you might find you have a couple of problems when it comes to windows. Firstly are you sure the Access Denied is from the file, and not from the trying to bind the app to port 80 (also note other issues with trying to bind on windows such as Skype taking over port 80)</p>
<p>If the problem is caused by the mkdir, make sure your user has permissions on the parent folder, not just the folder its creating. You are right to be wary of running as admin. Generally speaking you should create users per service and run as that, but it can be a pain during development (also, in 'production' you would want to run something like uwsgi or similar to act as a python process manager).</p>
<p>The other thing to note is that where you are running from - if you're running from your Desktop folder, I have noticed that this can also have strange permission problems for applications - but I'm assuming you're in a user 'workbench' folder of some kind.</p>
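To separate "permission denied" from other failures during development, the directory creation can be wrapped so the error names the offending path. This is a generic sketch, not Flask-specific; <code>ensure_dir</code> is an invented helper:

```python
import errno
import os

def ensure_dir(path):
    """Create path (and parents) if missing; surface permission problems clearly."""
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            return  # already there, nothing to do
        if exc.errno in (errno.EACCES, errno.EPERM):
            raise OSError("no permission to create %r; check which user "
                          "the server process runs as" % path)
        raise

# the view could then call ensure_dir('test') before opening 'test/abc.txt'
```

This also makes the creation idempotent, so re-running the view does not raise when the folder already exists.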
| 0 | 2016-10-12T10:54:56Z | [
"python",
"flask"
] |
Python not recognised as an internal or external command in windows 7 | 39,992,339 | <p>I have installed python 2.7.11 from <strong><a href="https://www.python.org/downloads/release/python-2711/" rel="nofollow">this</a></strong> link and then restarted my system. However when I go to cmd and run <code>python --version</code>. It gives me an error that </p>
<blockquote>
<p>python not recognized as an internal or external command.</p>
</blockquote>
<p>So I try to manually add it to my <code>Path variable</code> I see my python being installed at <code>C:\Python27</code> so I add <code>someotherpath;C:\Python27</code> to path variable and reopened <code>windows cmd</code>. But it still gives me the same error.</p>
<p>Is there some other way to get over with this problem.</p>
<p>Thanks</p>
| 0 | 2016-10-12T06:59:06Z | 39,992,394 | <p>Changes in PATH variable do not affect already open programs. Close your command line (or powershell) window and reopen it in order to use new PATH variable.</p>
| 1 | 2016-10-12T07:02:17Z | [
"python",
"cmd",
"path"
] |
Python not recognised as an internal or external command in windows 7 | 39,992,339 | <p>I have installed python 2.7.11 from <strong><a href="https://www.python.org/downloads/release/python-2711/" rel="nofollow">this</a></strong> link and then restarted my system. However when I go to cmd and run <code>python --version</code>. It gives me an error that </p>
<blockquote>
<p>python not recognized as an internal or external command.</p>
</blockquote>
<p>So I try to manually add it to my <code>Path variable</code> I see my python being installed at <code>C:\Python27</code> so I add <code>someotherpath;C:\Python27</code> to path variable and reopened <code>windows cmd</code>. But it still gives me the same error.</p>
<p>Is there some other way to get over with this problem.</p>
<p>Thanks</p>
| 0 | 2016-10-12T06:59:06Z | 39,992,468 | <p>Please run the following command in the command prompt.</p>
<blockquote>
<p><code>echo %PATH%</code></p>
<p>It should include whatever path you have set manually. Otherwise, open a new Command Prompt and try the same command, then run <code>python</code>.</p>
</blockquote>
<p>If it is still not working after that:</p>
<p>Please check whether <code>python.exe</code> is actually present in <em>C:\Python27</em>.</p>
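The same check can be done from Python itself; <code>dirs_containing</code> below is a hypothetical helper that lists which PATH entries actually contain a given executable:

```python
import os

def dirs_containing(executable, path=None):
    """Return the PATH entries that contain the named executable file."""
    if path is None:
        path = os.environ.get('PATH', '')
    return [d for d in path.split(os.pathsep)
            if d and os.path.isfile(os.path.join(d, executable))]

# On Windows, dirs_containing('python.exe') should list C:\Python27
# once the PATH change has taken effect in a fresh prompt.
```

An empty result means the directory either is not on PATH or does not contain the executable.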
| 2 | 2016-10-12T07:06:03Z | [
"python",
"cmd",
"path"
] |
Python not recognised as an internal or external command in windows 7 | 39,992,339 | <p>I have installed python 2.7.11 from <strong><a href="https://www.python.org/downloads/release/python-2711/" rel="nofollow">this</a></strong> link and then restarted my system. However when I go to cmd and run <code>python --version</code>. It gives me an error that </p>
<blockquote>
<p>python not recognized as an internal or external command.</p>
</blockquote>
<p>So I try to manually add it to my <code>Path variable</code> I see my python being installed at <code>C:\Python27</code> so I add <code>someotherpath;C:\Python27</code> to path variable and reopened <code>windows cmd</code>. But it still gives me the same error.</p>
<p>Is there some other way to get over with this problem.</p>
<p>Thanks</p>
| 0 | 2016-10-12T06:59:06Z | 39,993,065 | <p>Easiest way to fix this is to reinstall Python and check "Add to Path" button during the installation.</p>
| 0 | 2016-10-12T07:39:44Z | [
"python",
"cmd",
"path"
] |
Python not recognised as an internal or external command in windows 7 | 39,992,339 | <p>I have installed python 2.7.11 from <strong><a href="https://www.python.org/downloads/release/python-2711/" rel="nofollow">this</a></strong> link and then restarted my system. However when I go to cmd and run <code>python --version</code>. It gives me an error that </p>
<blockquote>
<p>python not recognized as an internal or external command.</p>
</blockquote>
<p>So I try to manually add it to my <code>Path variable</code> I see my python being installed at <code>C:\Python27</code> so I add <code>someotherpath;C:\Python27</code> to path variable and reopened <code>windows cmd</code>. But it still gives me the same error.</p>
<p>Is there some other way to get over with this problem.</p>
<p>Thanks</p>
| 0 | 2016-10-12T06:59:06Z | 39,993,424 | <p>Finally after lot of searching I found that python comes with a script for adding to path variable. So I just ran the script with</p>
<pre><code>c:\python27\tools\scripts\win_add2path.py
</code></pre>
<p>and it works now.</p>
| 0 | 2016-10-12T07:59:01Z | [
"python",
"cmd",
"path"
] |
to_datetime Value Error: at least that [year, month, day] must be specified Pandas | 39,992,411 | <p>I am reading from two different CSVs each having date values in their columns. After read_csv I want to convert the data to datetime with the to_datetime method. The formats of the dates in each CSV are slightly different, and although the differences are noted and specified in the to_datetime format argument, the one converts fine, while the other returns the following value error.</p>
<pre><code>ValueError: to assemble mappings requires at least that [year, month, day] be sp
ecified: [day,month,year] is missing
</code></pre>
<p>first dte.head()</p>
<pre><code>0 10/14/2016 10/17/2016 10/19/2016 8/9/2016 10/17/2016 7/20/2016
1 7/15/2016 7/18/2016 7/20/2016 6/7/2016 7/18/2016 4/19/2016
2 4/15/2016 4/14/2016 4/18/2016 3/15/2016 4/18/2016 1/14/2016
3 1/15/2016 1/19/2016 1/19/2016 10/19/2015 1/19/2016 10/13/2015
4 10/15/2015 10/14/2015 10/19/2015 7/23/2015 10/14/2015 7/15/2015
</code></pre>
<p>this dataframe converts fine using the following code:</p>
<pre><code>dte = pd.to_datetime(dte, infer_datetime_format=True)
</code></pre>
<p>or </p>
<pre><code>dte = pd.to_datetime(dte[x], format='%m/%d/%Y')
</code></pre>
<p>the second dtd.head()</p>
<pre><code>0 2004-01-02 2004-01-02 2004-01-09 2004-01-16 2004-01-23 2004-01-30
1 2004-01-05 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
2 2004-01-06 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
3 2004-01-07 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
4 2004-01-08 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
</code></pre>
<p>this csv doesn't convert using either: </p>
<pre><code>dtd = pd.to_datetime(dtd, infer_datetime_format=True)
</code></pre>
<p>or </p>
<pre><code>dtd = pd.to_datetime(dtd, format='%Y-%m-%d')
</code></pre>
<p>It returns the value error above. Interestingly, however, using the parse_dates and infer_datetime_format as arguments of the read_csv method work fine. What is going on here? </p>
| 2 | 2016-10-12T07:02:54Z | 39,992,492 | <p>For me works <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>apply</code></a> function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a>:</p>
<pre><code>print (dtd)
1 2 3 4 5 6
0
0 2004-01-02 2004-01-02 2004-01-09 2004-01-16 2004-01-23 2004-01-30
1 2004-01-05 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
2 2004-01-06 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
3 2004-01-07 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
4 2004-01-08 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
dtd = dtd.apply(pd.to_datetime)
print (dtd)
1 2 3 4 5 6
0
0 2004-01-02 2004-01-02 2004-01-09 2004-01-16 2004-01-23 2004-01-30
1 2004-01-05 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
2 2004-01-06 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
3 2004-01-07 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
4 2004-01-08 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
</code></pre>
| 2 | 2016-10-12T07:08:02Z | [
"python",
"csv",
"pandas"
] |
to_datetime Value Error: at least that [year, month, day] must be specified Pandas | 39,992,411 | <p>I am reading from two different CSVs each having date values in their columns. After read_csv I want to convert the data to datetime with the to_datetime method. The formats of the dates in each CSV are slightly different, and although the differences are noted and specified in the to_datetime format argument, the one converts fine, while the other returns the following value error.</p>
<pre><code>ValueError: to assemble mappings requires at least that [year, month, day] be sp
ecified: [day,month,year] is missing
</code></pre>
<p>first dte.head()</p>
<pre><code>0 10/14/2016 10/17/2016 10/19/2016 8/9/2016 10/17/2016 7/20/2016
1 7/15/2016 7/18/2016 7/20/2016 6/7/2016 7/18/2016 4/19/2016
2 4/15/2016 4/14/2016 4/18/2016 3/15/2016 4/18/2016 1/14/2016
3 1/15/2016 1/19/2016 1/19/2016 10/19/2015 1/19/2016 10/13/2015
4 10/15/2015 10/14/2015 10/19/2015 7/23/2015 10/14/2015 7/15/2015
</code></pre>
<p>this dataframe converts fine using the following code:</p>
<pre><code>dte = pd.to_datetime(dte, infer_datetime_format=True)
</code></pre>
<p>or </p>
<pre><code>dte = pd.to_datetime(dte[x], format='%m/%d/%Y')
</code></pre>
<p>the second dtd.head()</p>
<pre><code>0 2004-01-02 2004-01-02 2004-01-09 2004-01-16 2004-01-23 2004-01-30
1 2004-01-05 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
2 2004-01-06 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
3 2004-01-07 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
4 2004-01-08 2004-01-09 2004-01-16 2004-01-23 2004-01-30 2004-02-06
</code></pre>
<p>this csv doesn't convert using either: </p>
<pre><code>dtd = pd.to_datetime(dtd, infer_datetime_format=True)
</code></pre>
<p>or </p>
<pre><code>dtd = pd.to_datetime(dtd, format='%Y-%m-%d')
</code></pre>
<p>It returns the value error above. Interestingly, however, using the parse_dates and infer_datetime_format as arguments of the read_csv method work fine. What is going on here? </p>
| 2 | 2016-10-12T07:02:54Z | 39,992,541 | <p>You can <code>stack</code> / <code>pd.to_datetime</code> / <code>unstack</code></p>
<pre><code>pd.to_datetime(dte.stack()).unstack()
</code></pre>
<p><a href="https://i.stack.imgur.com/V9daD.png" rel="nofollow"><img src="https://i.stack.imgur.com/V9daD.png" alt="enter image description here"></a></p>
<p><strong><em>explanation</em></strong><br>
<code>pd.to_datetime</code> works on a string, list, or <code>pd.Series</code>. <code>dte</code> is a <code>pd.DataFrame</code>, which is why you are having issues. <code>dte.stack()</code> produces a <code>pd.Series</code> where all rows are stacked on top of each other. In this stacked form, because it is a <code>pd.Series</code>, a vectorized <code>pd.to_datetime</code> can work on it. The subsequent <code>unstack</code> simply reverses the initial <code>stack</code> to recover the original shape of <code>dte</code>.</p>
| 2 | 2016-10-12T07:10:34Z | [
"python",
"csv",
"pandas"
] |
Saving images with pixels in django | 39,992,420 | <p>I want to save images with specific pixels, when someone uploads image on django models it will be resized and then save according to id. I want them to be saved in path product/medium/id. I have tried defining path in it which saves image but not on the path I want. </p>
<p><strong>Here is my models.py</strong></p>
<pre><code>class Product(models.Model):
product_name = models.CharField(max_length=100)
product_description = models.TextField(default=None, blank=False, null=False)
product_short_description = models.TextField(default=None,blank=False,null=False,max_length=120)
product_manufacturer = models.CharField(choices=MANUFACTURER,max_length=20,default=None,blank=True)
product_material = models.CharField(choices=MATERIALS,max_length=20,default=None,blank=True)
No_of_days_for_delivery = models.IntegerField(default=0)
product_medium = models.ImageField(upload_to='product/id/medium',null=True,blank=True)
def save(self, *args, **kwargs):
self.slug = slugify(self.product_name)
super(Product, self).save(*args, **kwargs)
</code></pre>
<p>Now on this, I want to resize the image get it's id and save in path <code>product/medium/id/image.jpg</code></p>
| 1 | 2016-10-12T07:03:29Z | 39,993,056 | <pre><code>from PIL import Image
import StringIO
import os
from django.core.files.uploadedfile import InMemoryUploadedFile
class Product(models.Model):
pass # your model description
def save(self, *args, **kwargs):
"""Override model save."""
if self.product_medium:
            img = Image.open(self.product_medium)  # the model's ImageField
size = (500, 600) # new size
image = img.resize(size, Image.ANTIALIAS) #transformation
try:
path, full_name = os.path.split(self.product_medium.name)
name, ext = os.path.splitext(full_name)
ext = ext[1:]
except ValueError:
return super(Product, self).save(*args, **kwargs)
thumb_io = StringIO.StringIO()
if ext == 'jpg':
ext = 'jpeg'
image.save(thumb_io, ext)
# Add the in-memory file to special Django class.
resized_file = InMemoryUploadedFile(
thumb_io,
None,
name,
'image/jpeg',
thumb_io.len,
None)
# Saving image_thumb to particular field.
self.product_medium.save(name, resized_file, save=False)
super(Product, self).save(*args, **kwargs)
</code></pre>
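Independent of Django, the resizing step itself can be exercised in isolation with Pillow. The sizes below are arbitrary stand-ins for an uploaded image:

```python
import io
from PIL import Image

img = Image.new('RGB', (1200, 900), color='white')  # stand-in for the upload
resized = img.resize((500, 600))  # a resample filter such as Image.LANCZOS can be passed

buf = io.BytesIO()
resized.save(buf, format='JPEG')
print(resized.size, len(buf.getvalue()) > 0)  # (500, 600) True
```

The in-memory buffer mirrors what the answer feeds into <code>InMemoryUploadedFile</code>, so the transform can be verified before wiring it into the model's <code>save</code>.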
| 0 | 2016-10-12T07:39:05Z | [
"python",
"django",
"image",
"file-upload",
"image-resizing"
] |
How to rename (exposed in API) filter field name using django-filters? | 39,992,515 | <p>As the question states - I'm trying to rename the filter field name exposed in my API.</p>
<p>I have the following models:</p>
<pre><code>class Championship(Model):
...
class Group(Model):
championship = ForeignKey(Championship, ...)
class Match(Model):
group = ForeignKey(Group, ...)
</code></pre>
<p>I have exposed all of these models in REST API. I've defined <code>filter_fields</code> for the <code>Match</code> model:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter_fields = ['group__championship']
...
</code></pre>
<p>This way, I can filter for specific championship's matches (tested and working):</p>
<blockquote>
<p>curl /api/matches/?group__championship=1</p>
</blockquote>
<p>Is is possible to use some kind of alias for the exposed filter so I can use the following:</p>
<blockquote>
<p>curl /api/matches/?championship=1</p>
</blockquote>
<p>where <code>championship</code> in this case will be an alias for <code>group__championship</code>?</p>
<p><code>pip freeze</code> returns:</p>
<pre><code>django-filter==0.15.2
(...)
</code></pre>
<p>I've also tried implementing custom <code>FilterSet</code> with <code>ModelChoiceFilter</code> and custom lookup method:</p>
<pre><code>class MatchFilterSet(FilterSet):
championship = ModelChoiceFilter(method='filter_championship')
def filter_championship(self, queryset, name, value):
return queryset.filter(group__championship=value)
class Meta:
model = Match
fields = ['championship']
</code></pre>
<p>With view:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter = MatchFilterSet
(...)
</code></pre>
<p>But with no luck. The <code>filter_championship</code> method was even never called.</p>
| 1 | 2016-10-12T07:09:03Z | 39,992,704 | <p>You need to provide the model field as the <code>name</code> argument in django-filter, together with the appropriate filter field type. I am assuming you are trying to filter by championship id.</p>
<pre><code>class MatchFilterSet(FilterSet):
championship = django_filters.NumberFilter(name='group__championship_id')
class Meta:
model = Match
fields = ['championship']
</code></pre>
| 2 | 2016-10-12T07:20:25Z | [
"python",
"django",
"rest",
"django-filter"
] |
How to rename (exposed in API) filter field name using django-filters? | 39,992,515 | <p>As the question states - I'm trying to rename the filter field name exposed in my API.</p>
<p>I have the following models:</p>
<pre><code>class Championship(Model):
...
class Group(Model):
championship = ForeignKey(Championship, ...)
class Match(Model):
group = ForeignKey(Group, ...)
</code></pre>
<p>I have exposed all of these models in REST API. I've defined <code>filter_fields</code> for the <code>Match</code> model:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter_fields = ['group__championship']
...
</code></pre>
<p>This way, I can filter for specific championship's matches (tested and working):</p>
<blockquote>
<p>curl /api/matches/?group__championship=1</p>
</blockquote>
<p>Is is possible to use some kind of alias for the exposed filter so I can use the following:</p>
<blockquote>
<p>curl /api/matches/?championship=1</p>
</blockquote>
<p>where <code>championship</code> in this case will be an alias for <code>group__championship</code>?</p>
<p><code>pip freeze</code> returns:</p>
<pre><code>django-filter==0.15.2
(...)
</code></pre>
<p>I've also tried implementing custom <code>FilterSet</code> with <code>ModelChoiceFilter</code> and custom lookup method:</p>
<pre><code>class MatchFilterSet(FilterSet):
championship = ModelChoiceFilter(method='filter_championship')
def filter_championship(self, queryset, name, value):
return queryset.filter(group__championship=value)
class Meta:
model = Match
fields = ['championship']
</code></pre>
<p>With view:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter = MatchFilterSet
(...)
</code></pre>
<p>But with no luck. The <code>filter_championship</code> method was even never called.</p>
| 1 | 2016-10-12T07:09:03Z | 39,993,081 | <p>After <a href="http://stackoverflow.com/users/3805509/naresh">Naresh</a> response I have figured out the source of error.</p>
<p>It was the implementation of the model's view:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter = MatchFilterSet
(...)
</code></pre>
<p>For <code>django-filter</code> it should be <code>filter_class</code> rather than <code>filter</code>, so the correct implementation is:</p>
<pre><code>class MatchViewSet(ModelViewSet):
filter_class = MatchFilterSet
(...)
</code></pre>
<p>Also, I've changed the implementation of the model's filter to be more like Naresh suggested:</p>
<pre><code>class MatchFilterSet(FilterSet):
championship = NumberFilter(name='group__championship')
class Meta:
model = Match
fields = ['championship']
</code></pre>
<p>The difference between the above and Naresh's version is the lack of the <code>_id</code> part, which is not necessary.</p>
<p>After these changes everything works fine.</p>
| 0 | 2016-10-12T07:40:31Z | [
"python",
"django",
"rest",
"django-filter"
] |
How to display specific digit in pandas dataframe | 39,992,653 | <p>I have dataframe like below</p>
<pre><code> month
0 1
1 2
2 3
3 10
4 11
</code></pre>
<p>for example,I would like to display this dataframe in 2 digit like this</p>
<pre><code> month
0 01
1 02
2 03
3 10
4 11
</code></pre>
<p>I tried many method but didn't work well. How can I get this result?</p>
| 3 | 2016-10-12T07:17:00Z | 39,992,665 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.zfill.html"><code>str.zfill</code></a>:</p>
<pre><code>print (df['month'].astype(str).str.zfill(2))
0 01
1 02
2 03
3 10
4 11
Name: month, dtype: object
</code></pre>
| 5 | 2016-10-12T07:18:00Z | [
"python",
"pandas",
"dataframe"
] |
How to display specific digit in pandas dataframe | 39,992,653 | <p>I have dataframe like below</p>
<pre><code> month
0 1
1 2
2 3
3 10
4 11
</code></pre>
<p>for example,I would like to display this dataframe in 2 digit like this</p>
<pre><code> month
0 01
1 02
2 03
3 10
4 11
</code></pre>
<p>I tried many method but didn't work well. How can I get this result?</p>
| 3 | 2016-10-12T07:17:00Z | 39,992,806 | <p>I'd still pick @jezrael's answer over this, but I like this answer too</p>
<pre><code>df.month.apply('{:02d}'.format)
0 01
1 02
2 03
3 10
4 11
Name: month, dtype: object
</code></pre>
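Outside pandas, both spellings rest on plain string formatting and agree on the question's data:

```python
values = [1, 2, 3, 10, 11]
via_zfill = [str(v).zfill(2) for v in values]
via_format = ['{:02d}'.format(v) for v in values]
print(via_zfill)  # ['01', '02', '03', '10', '11']
```

Note that either way the result is a column of strings (dtype <code>object</code>), since zero-padded integers cannot be stored as numbers.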
| 2 | 2016-10-12T07:27:10Z | [
"python",
"pandas",
"dataframe"
] |
Calculate MRR in Python Pandas dataframe | 39,992,771 | <p>I have a Pandas dataframe with the following columns</p>
<p><code>date | months | price</code></p>
<p>I calculate some basic BI metrics. I did the Net Revenue by grouping the dataframe on date and sum the price:</p>
<p><code>df = df[["Date", "Price"]].groupby(df['Date'])["Price"].sum().reset_index()</code></p>
<p>Now, I want to find the MRR, which is similar to the Net Revenue, but in case the column months have more than 1 month, the price should be "moved" equally to the next months. And also, it is grouped by month and not day.</p>
<p>For example, if I am on January 2016 and I have a row with 3 months and price 30$, I should add 10$ to January, 10$ to February and 10$ to March.</p>
<p>My first idea was to iterate through the dataframe, keep track of the months and the amount of price I should "move" on next months and create a new dataframe manually.</p>
<p>But, first, is there any Pythonic way in Pandas to do it?</p>
<p><strong>Data to reproduce a dataframe:</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame({'date': ['01-01-2016', '05-01-2016', '10-01-2016','04-02-2016'],
'months': [1, 3, 1, 6],
'price': [40, 60, 20, 60]})
</code></pre>
<p>Desired result:</p>
<pre><code>Date | MRR
January 2016 | 80
February 2016 | 30
March 2016 | 30
April 2016 | 10
May 2016 | 10
June 2016 | 10
July 2016 | 10
</code></pre>
<p>And the results are calculated like this for each row (one term per input row; the 60$ three-month row starting 05-01-2016 contributes 20$ to each of January, February, and March):</p>
<pre><code>January 2016 = 40 + 20 + 20 + 0
February 2016 = 0 + 20 + 0 + 10
March 2016 = 0 + 20 + 0 + 10
April 2016 = 0 + 0 + 0 + 10
May 2016 = 0 + 0 + 0 + 10
June 2016 = 0 + 0 + 0 + 10
July 2016 = 0 + 0 + 0 + 10
</code></pre>
| 2 | 2016-10-12T07:24:51Z | 39,993,747 | <p>I don't know any way around using a loop. However, I can suggest a way to make the code pretty clean and efficient.</p>
<p>First, let's load the example data you supplied in the question text:</p>
<pre><code>df = pd.DataFrame({'date': ['01-01-2016', '05-01-2016', '10-01-2016','04-02-2016'],
'months': [1, 3, 1, 6],
'price': [40, 60, 20, 60]})
</code></pre>
<p>In order to use Panda's date functionality (e.g. grouping by month), we will use the <code>date</code> column as index. A <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html" rel="nofollow"><code>DateTimeIndex</code></a> in fact:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'], format='%d-%m-%Y')
df = df.set_index('date')
</code></pre>
<p>Now, it's really easy to, for example, view a month-by-month summary, by using the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow">resample</a> function that works like the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">groupby</a> function you already know, but uses time-periods:</p>
<pre><code>df.resample('M').sum()
</code></pre>
<p>Now to "spread out" rows where the <code>months</code> column is > 1 over multiple months. My approach here is to generate a new <code>DataFrame</code> for each row:</p>
<pre><code>dfs = []
for date, values in df.iterrows():
months, price = values
dfs.append(
pd.DataFrame(
# Compute the price for each month, and repeat this value
data={'price': [price / months] * months},
# The index is a date range for the requested number of months
index=pd.date_range(date, periods=months, freq='M')
)
)
</code></pre>
<p>Now we can just concatenate the list of <code>DataFrame</code>s, resample to months and take the sum:</p>
<pre><code>pd.concat(dfs).resample('M').sum()
</code></pre>
<p>Output:</p>
<pre><code> price
2016-01-31 80
2016-02-29 30
2016-03-31 30
2016-04-30 10
2016-05-31 10
2016-06-30 10
2016-07-31 10
</code></pre>
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/timeseries.html</a> for all the cool things Panda's can do regarding time. For example, to exactly produce your desired output you could do this:</p>
<pre><code>output.index = output.index.strftime('%B %Y')
</code></pre>
<p>Which results in this:</p>
<pre><code> price
January 2016 80
February 2016 30
March 2016 30
April 2016 10
May 2016 10
June 2016 10
July 2016 10
</code></pre>
| 1 | 2016-10-12T08:19:19Z | [
"python",
"pandas",
"dataframe",
"business-intelligence"
] |
why it is not entering to "for loop" | 39,992,899 | <pre><code>page = requests.get('http://anywebsite/anysearch/')
tree = html.fromstring(page.content)
lists = tree.xpath('.//div[@class="normal-view"]')
print "lists"
for i in lists:
print "1"
title = i.xpath('.//div[@class="post-entry"/h1//a/@href]//text()')
print title,"2"
print "3"
</code></pre>
<p>I have added the print statements ("lists", "1", "2", "3") to check whether the program is entering the loop or not.</p>
<p>The output I am getting is</p>
<pre><code>lists
3
[Finished in 0.3s]
</code></pre>
| -2 | 2016-10-12T07:31:24Z | 39,996,037 | <p>The following Python 2 code successfully prints the title of the film under review using the URL you provided.</p>
<pre><code>from lxml import etree
parser = etree.HTMLParser()
tree = etree.parse("http://boxofficeindia.co.in/review-mirzya/", parser)
title = tree.xpath("string(//h1)")
print title
</code></pre>
<p>Executing this gives:</p>
<pre><code>> python ~/test.py
Review: Mirzya
</code></pre>
<p>If this is not what you are looking for, please be more specific in your question.</p>
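For completeness: the expression in the question, <code>.//div[@class="post-entry"/h1//a/@href]//text()</code>, is not valid XPath (a location step cannot follow the string literal inside the predicate), so even a matching page would raise <code>XPathEvalError</code> once the loop body ran. A corrected form, demonstrated on an inline snippet rather than the live site:

```python
from lxml import etree

html = ('<div class="normal-view"><div class="post-entry">'
        '<h1><a href="/review-mirzya/">Review: Mirzya</a></h1></div></div>')
tree = etree.fromstring(html, etree.HTMLParser())

titles = tree.xpath('.//div[@class="post-entry"]/h1//a/text()')
hrefs = tree.xpath('.//div[@class="post-entry"]/h1//a/@href')
print(titles, hrefs)  # ['Review: Mirzya'] ['/review-mirzya/']
```

The empty-loop symptom in the question additionally means no <code>div</code> with class <code>normal-view</code> matched on the fetched page, which is worth checking in the raw HTML.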
| 0 | 2016-10-12T10:12:40Z | [
"python",
"python-2.7",
"xpath",
"python-2.x"
] |
Check if two nodes are on the same path in constant time for a DAG | 39,992,937 | <p>I have a DAG (directed acyclic graph) which is represented by an edge list e.g.</p>
<pre><code>edges = [('a','b'),('a','c'),('b','d')]
</code></pre>
<p>will give the graph </p>
<pre><code>a -> b -> d
|
v
c
</code></pre>
<p>I'm doing many operations where I have to check if two nodes are on the same path (<code>b</code> and <code>d</code> are on the same path whereas <code>b</code> and <code>c</code> are not from example above) which in turn is slowing down my program. I was hoping I could somehow traverse the graph once and save all paths so I can check this in constant time.</p>
<p>Is this speed-up possible and how would I go about to implement this in python?</p>
<p>Edit:
Note that I need to check for both directions, so if I have a pair of nodes <code>(a,b)</code> I need to check for both <code>a</code> to <code>b</code> and <code>b</code> to <code>a</code>.</p>
| 2 | 2016-10-12T07:32:50Z | 39,994,424 | <p>You actually want to find the <a href="https://en.wikipedia.org/wiki/Transitive_closure" rel="nofollow">transitive closure</a> of the graph. </p>
<blockquote>
<p>In computer science, the concept of transitive closure can be thought
of as constructing a data structure that makes it possible to answer
reachability questions. That is, can one get from node a to node d in
one or more hops?</p>
</blockquote>
<p>There are multiple ways of finding the transitive closure of the graph. The simplest way is using the <a href="https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm" rel="nofollow">floyd-warshall</a> algorithm <em>O(|V|<sup>3</sup>)</em>. Explained <a href="http://www.geeksforgeeks.org/transitive-closure-of-a-graph/" rel="nofollow">here</a>. </p>
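<p>For reference, the boolean Floyd-Warshall variant is only a few lines (the node list and names below are illustrative, reusing the question's edges):</p>

```python
def transitive_closure_fw(nodes, edges):
    # reach[u][v] is True when v is reachable from u in one or more hops
    reach = {u: {v: False for v in nodes} for u in nodes}
    for u, v in edges:
        reach[u][v] = True
    for k in nodes:              # allow k as an intermediate node
        for i in nodes:
            for j in nodes:
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

reach = transitive_closure_fw(['a', 'b', 'c', 'd'], [('a', 'b'), ('a', 'c'), ('b', 'd')])
print(reach['b']['d'] or reach['d']['b'])  # True:  b and d lie on a common path
print(reach['b']['c'] or reach['c']['b'])  # False: b and c do not
```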
<p>Another way is to perform a DFS from each node and mark all the nodes you visit as reachable from the current node. Explained <a href="http://www.geeksforgeeks.org/transitive-closure-of-a-graph-using-dfs/" rel="nofollow">here</a>.</p>
<p>There is also a method that works only on DAGs. First perform a topological sorting on your DAG. Then work backward in the topological sorted list and <code>OR</code> the transitive closure of the children of the current node together, to get the transitive closure of the current node. Explained <a href="http://cs.stackexchange.com/questions/7231/efficient-algorithm-for-retrieving-the-transitive-closure-of-a-directed-acyclic">here</a>.</p>
<p>Below is the Python implementation of the DFS based method:</p>
<pre><code>def create_adj_dict_from_edges(edges):
adj_dict = {}
for e in edges:
for u in e:
if u not in adj_dict:
adj_dict[u] = []
for e in edges:
adj_dict[e[0]].append(e[1])
return adj_dict
def transitive_closure_from_adj_dict(adj_dict):
def dfs(node, visited):
if node not in adj_dict:
return
for next in adj_dict[node]:
if next in visited:
continue
visited.add(next)
dfs(next,visited)
reachable = {}
for node in adj_dict:
        visited = set([node])  # not set(node): that would split a multi-character name into characters
dfs(node,visited)
reachable[node] = visited
return reachable
def main():
edges = [('a','b'),('a','c'),('b','d')]
adj_dict = create_adj_dict_from_edges(edges)
tc = transitive_closure_from_adj_dict(adj_dict)
print tc
# is there a path from (b to d) or (d to b)
print 'd' in tc['b'] or 'b' in tc['d']
# is there a path from (b to c) or (c to b)
print 'c' in tc['b'] or 'b' in tc['c']
if __name__ == "__main__":
main()
</code></pre>
<p><strong>output</strong></p>
<pre><code>{'a': set(['a', 'c', 'b', 'd']), 'c': set(['c']), 'b': set(['b', 'd']), 'd': set(['d'])}
True
False
</code></pre>
| 3 | 2016-10-12T08:55:04Z | [
"python",
"algorithm",
"graph-algorithm",
"directed-acyclic-graphs",
"graph-traversal"
] |
Questions on pandas moving average | 39,992,985 | <p>I am a beginner in Python and pandas, and I am having difficulty building a volatility-adjusted moving average, so I need your help.</p>
<p>A volatility-adjusted moving average is a kind of moving average whose averaging period is not static but dynamically adjusted according to volatility.</p>
<p>What I'd like to code is,</p>
<ol>
<li>Get stock data from yahoo finance (monthly close)</li>
<li>Multiply the monthly volatility by some constant --> use the result as the dynamic moving-average period</li>
<li>Calculate dynamic moving average</li>
</ol>
<p>I've tried the code below, but it fails. I don't know what the problem is. If you can spot the problem, or have a better code suggestion, please let me know.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas_datareader.data as web
def price(stock, start):
price = web.DataReader(name=stock, data_source='yahoo', start=start)['Adj Close']
price = price / price[0]
a = price.resample('M').last().to_frame()
a.columns = ['price']
return a
a = price('SPY','2000-01-01')
a['volperiod'] = round(a.rolling(12).std()*100)*2
for i in range(len(a.index)):
k = a['price'].rolling(int(a['volperiod'][i])).mean()
a['ma'][i] = k[i]
print(a)
</code></pre>
| 2 | 2016-10-12T07:35:23Z | 39,993,814 | <p>First of all, you need to calculate <code>pct_change</code> on <code>price</code> to get the <code>volatility</code> of the <code>returns</code>.</p>
<p><strong><em>my solution</em></strong></p>
<pre><code>import pandas as pd
import pandas_datareader.data as web

def price(stock, start):
price = web.DataReader(name=stock, data_source='yahoo', start=start)['Adj Close']
return price.div(price.iat[0]).resample('M').last().to_frame('price')
a = price('SPY','2000-01-01')
v = a.pct_change().rolling(12).std().dropna().mul(200).astype(int)
def dyna_mean(x):
end = a.index.get_loc(x.name)
start = end - x.price
return a.price.iloc[start:end].mean()
pd.concat([a.price, v.price, v.apply(dyna_mean, axis=1)],
axis=1, keys=['price', 'vol', 'mean'])
</code></pre>
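<p>The dynamic-window idea in <code>dyna_mean</code> can be exercised offline on synthetic data; the prices and per-row window lengths below are made up:</p>

```python
import pandas as pd

prices = pd.Series([1.00, 1.10, 1.05, 1.20, 1.15, 1.30], name='price')
windows = pd.Series([2, 2, 3, 2, 3, 4])  # hypothetical volatility-driven window lengths

def dyn_mean(i):
    start = max(i - windows[i], 0)
    return prices.iloc[start:i].mean()   # trailing mean, excluding row i itself

ma = pd.Series([dyn_mean(i) for i in range(len(prices))])
print(ma.round(3).tolist())
```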
| 2 | 2016-10-12T08:22:29Z | [
"python",
"pandas"
] |
maximum recursion depth exceeded getter Django | 39,993,144 | <p>I have a class with a custom getter for specific attributes:</p>
<pre><code>class Item(a, b, c, d, models.Model):
title = Field()
description = Field()
a = Field()
_custom_getter = ['title','description']
def __getattribute__(self, name):
if name in self.custom_getter:
return 'xxx'
else:
return super(Item, self).__getattribute__(name)
</code></pre>
<p>This code raises a <code>RuntimeError</code>: <code>maximum recursion depth exceeded while calling a Python object</code>,
but when I use this piece of code:</p>
<pre><code>class Item(a, b, c, d, models.Model):
title = Field()
description = Field()
a = Field()
def __getattribute__(self, name):
custom_getter = ['title','description']
if name in custom_getter:
return 'xxx'
else:
return super(Item, self).__getattribute__(name)
</code></pre>
<p>everything works as I want. Where is my mistake in the first piece of code?</p>
| 1 | 2016-10-12T07:43:53Z | 39,993,276 | <p>Because <code>__getattribute__</code> gets called for every attribute access, including <code>self.custom_getter</code> inside the method itself, the lookup recurses. To avoid re-entering it, look the list up on the class rather than through <code>self</code>. Read more: <a href="http://stackoverflow.com/questions/371753/python-using-getattribute-method#371833">Python - Using __getattribute__ method</a></p>
<pre><code>class Item(a, b, c, d, models.Model):
    title = Field()
    description = Field()
    a = Field()

    custom_getter = ['title', 'description']

    def __getattribute__(self, name):
        if name in Item.custom_getter:  # class-level lookup, does not re-enter __getattribute__
            return 'xxx'
        else:
            return super(Item, self).__getattribute__(name)
</code></pre>
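<p>Stripped of the Django machinery, the non-recursive pattern can be demonstrated with a plain class (a sketch; the <code>price</code> attribute is invented):</p>

```python
class Demo(object):
    custom_getter = ['title', 'description']

    def __getattribute__(self, name):
        # Look the list up on the class object, not through self.<attr>,
        # so this method is not re-entered.
        if name in type(self).custom_getter:
            return 'xxx'
        return super(Demo, self).__getattribute__(name)

demo = Demo()
demo.price = 42
print(demo.title)   # intercepted, prints xxx
print(demo.price)   # normal lookup, prints 42
```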
| 3 | 2016-10-12T07:50:43Z | [
"python",
"django",
"recursion",
"django-models",
"attributes"
] |
I need to cut strings from a line (XML file) and save them in an array | 39,992,937 | <p>I have another question regarding Python. I need to save a specific string from an XML file (the string appears several times in the file) in a list.</p>
<pre><code>def parse_xml(self):
file_ = open("ErrorReactions_TestReport_20160831_165153.xml", "r")
for line in file_:
line.rstrip()
if "result_str" in line:
if line == "Skipped":
"count how much test cases are skipped"
elif line == "Failed"
"count how much test cases failed
else:
"count how much test cases passed"
</code></pre>
<p>This is my code. My problem is that I need to extract the string behind the <code>result_str</code> attribute and check whether it matches "Skipped" or "Failed". How do I save that string in a variable?
The lines look like this:</p>
<p>< verdict time="1472654306.7" result_str="Passed" result="2">Generator run successfully< /verdict>
< verdict time="1472654306.7" result_str="Skipped" result="0" final="True">Testgenerator not active< /verdict></p>
| 0 | 2016-10-12T07:44:00Z | 39,993,398 | <p>Regex is one of many ways of doing this. I'm sure there are excellent tried and tested XML parsers out there though. This code will print <code>Passed</code>:</p>
<pre><code>import re
line='''< verdict time="1472654306.7" result_str="Passed" result="2">Generator run successfully< /verdict> < verdict time="1472654306.7" result_str="Skipped" result="0" final="True">Testgenerator not active< /verdict>'''
p = re.compile(r'.*?result_str="(.*?)"')
match = p.match(line)
print(match.group(1))
</code></pre>
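<p>Since the report is XML, a parser can also do the matching and counting directly. A sketch with the stdlib <code>ElementTree</code>; the <code>report</code> wrapper element is invented to make the fragment well-formed:</p>

```python
import xml.etree.ElementTree as ET
from collections import Counter

report = """<report>
  <verdict time="1472654306.7" result_str="Passed" result="2">Generator run successfully</verdict>
  <verdict time="1472654306.7" result_str="Skipped" result="0" final="True">Testgenerator not active</verdict>
</report>"""

# Count every verdict by its result_str attribute
counts = Counter(v.get('result_str') for v in ET.fromstring(report).iter('verdict'))
print(counts['Skipped'], counts['Failed'], counts['Passed'])  # 1 0 1
```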
| 0 | 2016-10-12T07:57:43Z | [
"python",
"xml",
"string",
"list",
"split"
] |
vlookup between 2 Pandas dataframes | 39,993,238 | <p>I have 2 pandas Dataframes as follows.</p>
<p>DF1:</p>
<pre><code>Security ISIN
ABC I1
DEF I2
JHK I3
LMN I4
OPQ I5
</code></pre>
<p>and DF2:</p>
<pre><code>ISIN Value
I2 100
I3 200
I5 300
</code></pre>
<p>I would like to end up with a third dataframe looking like this:</p>
<p>DF3:</p>
<pre><code>Security Value
DEF 100
JHK 200
OPQ 300
</code></pre>
| 2 | 2016-10-12T07:48:39Z | 39,993,273 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>; by default it performs an inner join, so <code>how='inner'</code> can be omitted, and since there is only one common column in both <code>DataFrames</code>, you can also omit the parameter <code>on='ISIN'</code>:</p>
<pre><code>df3 = pd.merge(df1, df2)
#remove column ISIN
df3.drop('ISIN', axis=1, inplace=True)
print (df3)
Security Value
0 DEF 100
1 JHK 200
2 OPQ 300
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> column <code>ISIN</code> by <code>Series</code> from <code>df1</code>:</p>
<pre><code>print (df1.set_index('ISIN')['Security'])
ISIN
I1 ABC
I2 DEF
I3 JHK
I4 LMN
I5 OPQ
Name: Security, dtype: object
#create new df by copy of df2
df3 = df2.copy()
df3['Security'] = df3.ISIN.map(df1.set_index('ISIN')['Security'])
#remove column ISIN
df3.drop('ISIN', axis=1, inplace=True)
#change order of columns
df3 = df3[['Security','Value']]
print (df3)
Security Value
0 DEF 100
1 JHK 200
2 OPQ 300
</code></pre>
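<p>If every security should be kept even when it has no match, like VLOOKUP returning <code>#N/A</code>, a left join does that (a sketch on the same sample data):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Security': ['ABC', 'DEF', 'JHK', 'LMN', 'OPQ'],
                    'ISIN': ['I1', 'I2', 'I3', 'I4', 'I5']})
df2 = pd.DataFrame({'ISIN': ['I2', 'I3', 'I5'], 'Value': [100, 200, 300]})

df3 = pd.merge(df1, df2, how='left').drop('ISIN', axis=1)
print(df3)  # unmatched securities get NaN in Value
```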
| 2 | 2016-10-12T07:50:21Z | [
"python",
"pandas"
] |
vlookup between 2 Pandas dataframes | 39,993,238 | <p>I have 2 pandas Dataframes as follows.</p>
<p>DF1:</p>
<pre><code>Security ISIN
ABC I1
DEF I2
JHK I3
LMN I4
OPQ I5
</code></pre>
<p>and DF2:</p>
<pre><code>ISIN Value
I2 100
I3 200
I5 300
</code></pre>
<p>I would like to end up with a third dataframe looking like this:</p>
<p>DF3:</p>
<pre><code>Security Value
DEF 100
JHK 200
OPQ 300
</code></pre>
| 2 | 2016-10-12T07:48:39Z | 39,993,574 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>pd.merge</code></a> to automatically do an inner join on <code>ISIN</code>. The following line of code should get you going:</p>
<pre><code>df3 = pd.merge(df1, df2)[['Security', 'Value']]
</code></pre>
<p>Which results in <code>df3</code>:</p>
<pre><code> Security Value
0 DEF 100
1 JHK 200
2 OPQ 300
</code></pre>
<p>The fully reproducible code sample looks like:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'Security': ['ABC', 'DEF', 'JHK', 'LMN', 'OPQ'],
'ISIN' : ['I1', 'I2', 'I3', 'I4', 'I5']
})
df2 = pd.DataFrame({
'Value': [100, 200, 300],
'ISIN' : ['I2', 'I3', 'I5']
})
df3 = pd.merge(df1, df2)[['Security', 'Value']]
print(df3)
</code></pre>
| 1 | 2016-10-12T08:07:28Z | [
"python",
"pandas"
] |
Sony QX1 Contents Transfer - Internal Server Error 500 | 39,993,338 | <p>I have been trying to use the Sony Camera Remote SDK for controlling a Sony QX1 using python (using <a href="https://github.com/Bloodevil/sony_camera_api" rel="nofollow">https://github.com/Bloodevil/sony_camera_api</a>) </p>
<p>I managed to take pictures, set all the modes etc. Everything works. When I take a picture, a URL is returned of where it is stored eg. <a href="http://192.168.122.1:8080/postview/memory/DCIM/101MSDCF/DSC01111.JPG" rel="nofollow">http://192.168.122.1:8080/postview/memory/DCIM/101MSDCF/DSC01111.JPG</a></p>
<p>But when I try to get the picture in Contents Transfer mode, I get an Internal Server Error 500 ? If I take a picture without SD card, I get a different kind of URL but then I am able to get the picture.</p>
<p>Any thoughts?</p>
<p>Thanks a lot</p>
| 0 | 2016-10-12T07:54:41Z | 40,112,704 | <p>I have found the solution: the mode of the camera was "Contents Transfer".
Apparently, you can only access the images when the camera is in "Remote Shooting" mode. </p>
| 0 | 2016-10-18T15:47:55Z | [
"python",
"sony"
] |
Tensorflow gradients: without automatic implicit sum | 39,993,377 | <p>In TensorFlow, suppose one has two tensors <code>x</code> and <code>y</code> and one wants the gradients of <code>y</code> with respect to <code>x</code> using <code>tf.gradients(y,x)</code>. Then what one actually gets is:</p>
<pre><code>gradient[n,m] = sum_ij d y[i,j]/ d x[n,m]
</code></pre>
<p>There is a sum over the indices of y, is there a way to avoid this implicit sum? To get the whole gradient tensor <code>gradient[i,j,n,m]</code>?</p>
| 2 | 2016-10-12T07:56:46Z | 40,003,139 | <p>There isn't a way. TensorFlow 0.11 <code>tf.gradients</code> implements standard reverse-mode AD which gives the derivative of a scalar quantity. You'd need to call <code>tf.gradients</code> for each <code>y[i,j]</code> separately</p>
| 1 | 2016-10-12T15:59:28Z | [
"python",
"tensorflow",
"gradients"
] |
Tensorflow gradients: without automatic implicit sum | 39,993,377 | <p>In TensorFlow, suppose one has two tensors <code>x</code> and <code>y</code> and one wants the gradients of <code>y</code> with respect to <code>x</code> using <code>tf.gradients(y,x)</code>. Then what one actually gets is:</p>
<pre><code>gradient[n,m] = sum_ij d y[i,j]/ d x[n,m]
</code></pre>
<p>There is a sum over the indices of y, is there a way to avoid this implicit sum? To get the whole gradient tensor <code>gradient[i,j,n,m]</code>?</p>
| 2 | 2016-10-12T07:56:46Z | 40,005,908 | <p>Here is my workaround: taking the derivative of each component (as also mentioned by @Yaroslav) and then packing them all together again, for the case of rank-2 tensors (matrices):</p>
<pre><code>import tensorflow as tf
def twodtensor2list(tensor,m,n):
s = [[tf.slice(tensor,[j,i],[1,1]) for i in range(n)] for j in range(m)]
fs = []
for l in s:
fs.extend(l)
return fs
def grads_all_comp(y, shapey, x, shapex):
yl = twodtensor2list(y,shapey[0],shapey[1])
grads = [tf.gradients(yle,x)[0] for yle in yl]
gradsp = tf.pack(grads)
gradst = tf.reshape(gradsp,shape=(shapey[0],shapey[1],shapex[0],shapex[1]))
return gradst
</code></pre>
<p>Now <code>grads_all_comp(y, shapey, x, shapex)</code> will output the rank-4 tensor in the desired format. It is a very inefficient approach because everything needs to be sliced up and repacked together, so if someone finds a better way I would be very interested to see it.</p>
| 3 | 2016-10-12T18:31:36Z | [
"python",
"tensorflow",
"gradients"
] |
Can't load Django static files in view | 39,993,410 | <p>I'm new to Django and Python. I want to load static files from the project-level static folder. The problem is I can't, although I can still load the static files of an app.
This is the structure of my project:</p>
<pre><code>Yaas
__init__.py
settings.py
urls.py
wsgi.py
templates
base.html
static
style.css
auction
templates
auction
index.html
static
auction
style.css
__init__.py
apps.py
models.py
urls.py
views.py
</code></pre>
<p>The thing is, it works with the static file of the auction app, like this in base.html:</p>
<pre><code>{% load static %}
<link rel="stylesheet" type="text/css" href="{% static 'auction/style.css' %}" />
</code></pre>
<p>But it returns 404 with the project static file:</p>
<pre><code>{% load static %}
<link rel="stylesheet" type="text/css" href="{% static 'style.css' %}" />
</code></pre>
<p>settings.py</p>
<pre><code>STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "static")
</code></pre>
<p>I tried other solutions and the Django documentation, but it still didn't work. I am using Django 1.10.</p>
| -1 | 2016-10-12T07:58:26Z | 39,994,149 | <p>I made this work by adding the following:</p>
<pre><code>STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]
</code></pre>
<p>From the <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#configuring-static-files" rel="nofollow">docs</a> (after point 4):</p>
<blockquote>
<p>Your project will probably also have static assets that aren't tied to
a particular app. In addition to using a static/ directory inside your
apps, you can define a list of directories (STATICFILES_DIRS) in your
settings file where Django will also look for static files.</p>
</blockquote>
<p>Please note that the <code>STATIC_ROOT</code> directory, where Django will output all collected static files found in each <code><app>/static</code> and <code>STATICFILES_DIRS</code> directories, must be different.</p>
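<p>Putting the two settings together, a settings.py sketch that keeps the directories distinct might look like this (names are illustrative; <code>BASE_DIR</code> stands in for the one a default settings.py defines):</p>

```python
import os

BASE_DIR = os.getcwd()  # stand-in for the BASE_DIR a default settings.py defines

STATIC_URL = "/static/"
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]   # where Django *looks* for extra sources
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")     # where collectstatic *writes*; must differ
```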
<p>Hope that helps!</p>
| 0 | 2016-10-12T08:40:09Z | [
"python",
"django"
] |
Similar errors in MultiProcessing. Mismatch number of arguments to function | 39,993,436 | <p>I couldn't find a better way to describe the error I'm facing, but this error seems to come up every time I try to apply multiprocessing to a loop call.</p>
<p>I've used both sklearn.externals.joblib as well as multiprocessing.Process, but the errors are similar though different.</p>
<p><strong>Original loop to which I want to apply multiprocessing, where one iteration is executed in a single thread/process</strong></p>
<pre><code>for dd in final_col_dates:
idx1 = final_col_dates.tolist().index(dd)
dataObj = GetPrevDataByDate(d1, a, dd, self.start_hour_of_day)
data2 = dataObj.fit()
dataObj = GetAppointmentControlsSchedule(data2, idx1, d, final_col_dates_mod, dd, self.DC, frgt_typ_filter)
data3 = dataObj.fit()
if idx1 > 0:
data3['APPT_SCHD_ARVL_D_{}'.format(idx1)] = np.nan
iter += 1
days_out_vars.append(data3)
</code></pre>
<p>To implement the above code snippet with multiprocessing, I created a method containing the above code, minus the <strong>for loop</strong>.</p>
<p>Using Joblib, the following is my code snippet.</p>
<pre><code>Parallel(n_jobs=2)(
delayed(self.ParallelLoopTest)(dd, final_col_dates, d1, a, d, final_col_dates_mod, iter, return_list)
for dd in final_col_dates)
</code></pre>
<p>The variable <strong>return_list</strong> is a shared variable which is appended to inside the method <code>ParallelLoopTest</code>. It is declared as:</p>
<pre><code>manager = Manager()
return_list = manager.list()
</code></pre>
<p>Using the above code snippet, I face the following error:</p>
<pre><code>Process SpawnPoolWorker-3:
Traceback (most recent call last):
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
File "C:\Users\dkanhar\Anaconda3\lib\site-packages\sklearn\externals\joblib\pool.py", line 359, in get
return recv()
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\connection.py", line 251, in recv
return ForkingPickler.loads(buf.getbuffer())
TypeError: function takes at most 0 arguments (1 given)
</code></pre>
<p>I also tried the multiprocessing module to execute the above-mentioned code, and still faced a similar error. The following code was used to run it with the multiprocessing module:</p>
<pre><code>for dd in final_col_dates:
# multiprocessing.Pipe(False)
p = multiprocessing.Process(target=self.ParallelLoopTest, args=(dd, final_col_dates, d1, a, d, final_col_dates_mod, iter, return_list))
jobs.append(p)
p.start()
for proc in jobs:
proc.join()
</code></pre>
<p>And I get the following error traceback:</p>
<pre><code>File "<string>", line 1, in <module>
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
TypeError: function takes at most 0 arguments (1 given)
Traceback (most recent call last):
File "E:/Projects/Predictive Inbound Cartoon Estimation-MLO/Python/dataprep/DataPrep.py", line 457, in <module>
print(obj.fit())
File "E:/Projects/Predictive Inbound Cartoon Estimation-MLO/Python/dataprep/DataPrep.py", line 39, in fit
return self.__driver__()
File "E:/Projects/Predictive Inbound Cartoon Estimation-MLO/Python/dataprep/DataPrep.py", line 52, in __driver__
final = self.process_()
File "E:/Projects/Predictive Inbound Cartoon Estimation-MLO/Python/dataprep/DataPrep.py", line 135, in process_
sch_dat = self.inline_apply_(all_dates_schd, d1, d2, a)
File "E:/Projects/Predictive Inbound Cartoon Estimation-MLO/Python/dataprep/DataPrep.py", line 297, in inline_apply_
p.start()
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\dkanhar\Anaconda3\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>So I tried commenting out the line <strong>multiprocessing.Pipe(False)</strong>, thinking the problem was maybe caused by using Pipe, but even with it disabled the problem persists and I face the same error.</p>
<p>If it is of any help, the following is my method <code>ParallelLoopTest</code>:</p>
<pre><code>def ParallelLoopTest(self, dd, final_col_dates, d1, a, d, final_col_dates_mod, iter, days_out_vars):
idx1 = final_col_dates.tolist().index(dd)
dataObj = GetPrevDataByDate(d1, a, dd, self.start_hour_of_day)
data2 = dataObj.fit()
dataObj = GetAppointmentControlsSchedule(data2, idx1, d, final_col_dates_mod, dd, self.DC, frgt_typ_filter)
data3 = dataObj.fit()
if idx1 > 0:
data3['APPT_SCHD_ARVL_D_{}'.format(idx1)] = np.nan
print("Iter ", iter)
iter += 1
days_out_vars.append(data3)
</code></pre>
<p>The reason I said similar errors is that if you look at the tracebacks of both errors, they both contain a similar error line:</p>
<p><strong>TypeError: function takes at most 0 arguments (1 given)</strong> while loading from pickle, and I don't know why it is happening.</p>
<p>Also note that I've successfully used both of these modules in other projects before and never faced an issue, so I don't know why this problem is coming up now, or what exactly it means.</p>
<p>Any help would be really appreciated, as I've been trying to debug this for 3 days.</p>
<p>Thanks</p>
<p><strong>Edit 1 after last answer</strong></p>
<p>After the answer, I tried the following:
added the decorator <strong>@staticmethod</strong>, removed <code>self</code>, and called the method using <code>DataPrep.ParallelLoopTest(args)</code>.</p>
<p>I also moved the method out of the class <code>DataPrep</code> and called it simply as <code>ParallelLoopTest(args)</code>,</p>
<p>but in both cases the error remains the same.</p>
<p>PS: I tried using joblib in both cases,
so neither of the solutions worked.</p>
<p>New method definition:</p>
<pre><code>def ParallelLoopTest(dd, final_col_dates, d1, a, d, final_col_dates_mod, iter, days_out_vars, DC, start_hour):
idx1 = final_col_dates.tolist().index(dd)
dataObj = GetPrevDataByDate(d1, a, dd, start_hour_of_day)
data2 = dataObj.fit()
dataObj = GetAppointmentControlsSchedule(data2, idx1, d, final_col_dates_mod, dd, DC, frgt_typ_filter)
data3 = dataObj.fit()
if idx1 > 0:
data3['APPT_SCHD_ARVL_D_{}'.format(idx1)] = np.nan
print("Iter ", iter)
iter += 1
days_out_vars.append(data3)
</code></pre>
<p>Edit 2:</p>
<p>I was facing the error because Python was unable to pickle some large DataFrames. I had 2 DataFrames in my parameters/arguments, one around 20MB and the other 200MB in pickle format. But that shouldn't be an issue, right? We should be able to pass a pandas DataFrame. Correct me if I'm wrong.</p>
<p>Also, my workaround was to save the DataFrame as CSV with a random name before the method call, pass the file name, and read the CSV back, but that is a slow process as it involves reading huge CSV files. Any suggestions?</p>
| 2 | 2016-10-12T07:59:30Z | 40,022,304 | <p>You actually get the exact same error in both cases, but as you use a <code>Pool</code> in one example (<code>joblib</code>) and a <code>Process</code> in the other, you don't get the same failure/traceback in your main thread, because they do not handle the worker's failure the same way.<br>
In both cases, your process seems to fail to unpickle your child job in the new <code>Process</code>. The <code>Pool</code> gives you back the unpickling error, whereas using <code>Process</code>, you get a failure because when the subprocess dies from this unpickling error, it closes the pipe used by the main thread to write data, causing an error in the main process.</p>
<p>My first idea would be that the error is caused because you try to pickle an instance method, whereas you should use a static method here (using an instance method does not seem right, as the object is not shared between processes).<br>
Use the decorator <code>@staticmethod</code> before you declare <code>ParallelLoopTest</code> and remove the <code>self</code> argument.</p>
<p>EDIT:
Another possibility is that one of the arguments <code>dd, final_col_dates, d1, a, d, final_col_dates_mod, iter, return_list</code> cannot be unpickled. Apparently, it comes from <code>pandas.DataFrame</code>.<br>
I do not see any reason why the unpickling fails in this case, but I don't know <code>pandas</code> that well.<br>
One workaround is to dump the data to a temporary file. You can look at this link <a href="http://matthewrocklin.com/blog/work/2015/03/16/Fast-Serialization" rel="nofollow">here</a> for efficient serialization of a <code>pandas.DataFrame</code>. Another solution is to use the <code>DataFrame.to_pickle</code> method and <code>pandas.read_pickle</code> to dump/retrieve it to/from a file.</p>
<p>Note that it would be better to compare <code>joblib.Parallel</code> with <code>multiprocessing.Pool</code> and not with <code>multiprocessing.Process</code>.</p>
| 0 | 2016-10-13T13:24:28Z | [
"python",
"multiprocessing",
"pickle",
"joblib"
] |
Why does pandas dataframe indexing change axis depending on index type? | 39,993,460 | <p>When you index into a pandas DataFrame using a list of ints, it returns columns.</p>
<p>e.g. <code>df[[0, 1, 2]]</code> returns the first three columns.</p>
<p>Why does indexing with a boolean vector return a list of rows?</p>
<p>e.g. <code>df[[True, False, True]]</code> returns the first and third rows. (and errors out if there aren't 3 rows.)</p>
<p>Why? Shouldn't it return the first and third columns?</p>
<p>Thanks!</p>
| 2 | 2016-10-12T08:00:53Z | 39,993,488 | <p>Because when you use:</p>
<pre><code>df[[True, False, True]]
</code></pre>
<p>it is treated as <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> by the mask:</p>
<pre><code>[True, False, True]
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'C':[7,8,9]})
print (df)
A B C
0 1 4 7
1 2 5 8
2 3 6 9
print (df[[True, False, True]])
A B C
0 1 4 7
2 3 6 9
</code></pre>
<p>The boolean mask is the same as:</p>
<pre><code>print (df.B != 5)
0 True
1 False
2 True
Name: B, dtype: bool
print (df[df.B != 5])
A B C
0 1 4 7
2 3 6 9
</code></pre>
| 2 | 2016-10-12T08:02:15Z | [
"python",
"pandas"
] |
Why does pandas dataframe indexing change axis depending on index type? | 39,993,460 | <p>When you index into a pandas DataFrame using a list of ints, it returns columns.</p>
<p>e.g. <code>df[[0, 1, 2]]</code> returns the first three columns.</p>
<p>Why does indexing with a boolean vector return a list of rows?</p>
<p>e.g. <code>df[[True, False, True]]</code> returns the first and third rows. (and errors out if there aren't 3 rows.)</p>
<p>Why? Shouldn't it return the first and third columns?</p>
<p>Thanks!</p>
| 2 | 2016-10-12T08:00:53Z | 39,994,038 | <p>There are very specific slicing accessors to target rows and columns in specific ways.</p>
<p><a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1751/indexing-and-selecting-data/5962/mixed-position-and-label-based-selection#t=201610120828283711024">Mixed position and label based selection</a></p>
<p><a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1751/indexing-and-selecting-data/5732/selecting-by-position#t=201610120828283711024">Selecting by position</a></p>
<p><a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1751/indexing-and-selecting-data/5684/selection-by-label#t=201610120828283711024">Selection By Label</a></p>
<ul>
<li><code>loc[]</code>, <code>at[]</code>, and <code>get_value()</code> take row and column labels and return the appropriate slice</li>
<li><code>iloc[]</code> and <code>iat[]</code> take row and column positions and return the appropriate slice</li>
</ul>
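<p>The accessors above can be sketched on a toy frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})

cols = df.iloc[:, [0, 2]]           # columns by position: A and C
rows = df.loc[[True, False, True]]  # a boolean mask always selects rows
cell = df.loc[1, 'B']               # row label 1, column label 'B' -> 5
```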
<p>What you are seeing is the result of <code>pandas</code> trying to infer what you are trying to do. As you have noticed, this is inconsistent at times. In fact, it is more pronounced than just what you've highlighted... but I won't go into that now.</p>
<p>See also</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow">pandas docs</a></p>
<blockquote>
<pre><code>However, when an axis is integer based,
ONLY label based access and not positional access is supported.
Thus, in such cases, it's usually better to be explicit and use .iloc or .loc.
</code></pre>
</blockquote>
| 2 | 2016-10-12T08:34:15Z | [
"python",
"pandas"
] |
How do I memoize this LIS python2.7 algorithm properly? | 39,993,507 | <p>I'm practicing Dynamic Programming and I am writing the Longest Increasing Subsequence problem.</p>
<p>I have the DP solution:</p>
<pre><code>def longest_subsequence(lst, lis=[], mem={}):
if not lst:
return lis
if tuple(lst) not in mem.keys():
if not lis or lst[0] > lis[-1]:
mem[tuple(lst)] = max([longest_subsequence(lst[1:], lis+[lst[0]], mem), longest_subsequence(lst[1:], lis, mem)], key=len)
else:
mem[tuple(lst)] = longest_subsequence(lst[1:], lis, mem)
return mem[tuple(lst)]
</code></pre>
<p>And a non-memoized version</p>
<pre><code>def longest_subsequence(lst, lis=[]):
if not lst:
return lis
if not lis or lst[0] > lis[-1]:
result = max([longest_subsequence(lst[1:], lis+[lst[0]]), longest_subsequence(lst[1:], lis)], key=len)
else:
result = longest_subsequence(lst[1:], lis)
return result
</code></pre>
<p>However, the two functions have different behaviours. For example, the test case <code>longest_subsequence([10,9,2,5,3,7,101,18])</code> fails for the memoized version. </p>
<pre><code>>>> longest_subsequence([10,9,2,5,3,7,101,18])
[10, 101]
</code></pre>
<p>The non-memoized version is fully correct however (although much slower). </p>
<pre><code>>>> longest_subsequence([10,9,2,5,3,7,101,18])
[2, 5, 7, 101]
</code></pre>
<p>What am I doing wrong?</p>
<p><strong>EDIT:</strong> Tempux's answer fails on the following:</p>
<pre><code>>>> longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
[3, 5, 6, 7, 12]
</code></pre>
<p>whereas the solution from the non-memoized version is:</p>
<pre><code>>>> longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
[3, 4, 5, 6, 7, 12]
</code></pre>
| 3 | 2016-10-12T08:03:02Z | 39,993,904 | <p>Your state depends on both <code>lst</code> and the previous item you have picked, but you are only keying the memo on <code>lst</code>. That is why you are getting incorrect results. To fix it, you just have to add the previous item to your memoized state.</p>
<pre><code>def longest_subsequence(lst, prev=None, mem={}):
if not lst:
return []
if (tuple(lst),prev) not in mem:
        if prev is None or lst[0] > prev:  # "is None" rather than "not prev", so a previous item of 0 is handled correctly
mem[(tuple(lst),prev)] = max([[lst[0]]+longest_subsequence(lst[1:], lst[0]), longest_subsequence(lst[1:], prev)], key=len)
else:
mem[(tuple(lst),prev)] = longest_subsequence(lst[1:], prev)
return mem[(tuple(lst),prev)]
print longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
</code></pre>
<p>Note that using <code>tuple(lst)</code> as your dynamic state is not a very good idea. You can simply use the index of the item in the list that you are checking instead of the whole list:</p>
<pre><code>def longest_subsequence(lst, index=0, prev=None, mem={}):
if index>=len(lst):
return []
if (index,prev) not in mem:
if not prev or lst[index] > prev:
mem[(index,prev)] = max([[lst[index]]+longest_subsequence(lst, index+1, lst[index]), longest_subsequence(lst, index+1, prev)], key=len)
else:
mem[(index,prev)] = longest_subsequence(lst,index+1, prev)
return mem[(index,prev)]
print longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
</code></pre>
<p>For more efficient approaches you can check <a href="http://stackoverflow.com/questions/2631726/how-to-determine-the-longest-increasing-subsequence-using-dynamic-programming">this</a> question.</p>
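<p>As a sketch of one of those faster approaches — the O(n log n) patience-sorting method using <code>bisect</code>. Note this version returns only the <em>length</em> of the longest increasing subsequence, not the subsequence itself:</p>

```python
from bisect import bisect_left

def lis_length(lst):
    # tails[k] holds the smallest possible tail value of an increasing
    # subsequence of length k+1 found so far
    tails = []
    for x in lst:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence seen so far
        else:
            tails[i] = x      # x is a smaller tail for a subsequence of length i+1
    return len(tails)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))         # 4
print(lis_length([3, 5, 6, 2, 5, 4, 19, 5, 6, 7, 12]))  # 6
```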
| 4 | 2016-10-12T08:27:32Z | [
"python",
"algorithm",
"dynamic-programming",
"memoization"
] |
How do I memoize this LIS python2.7 algorithm properly? | 39,993,507 | <p>I'm practicing Dynamic Programming and I am working on the Longest Increasing Subsequence problem.</p>
<p>I have the DP solution:</p>
<pre><code>def longest_subsequence(lst, lis=[], mem={}):
if not lst:
return lis
if tuple(lst) not in mem.keys():
if not lis or lst[0] > lis[-1]:
mem[tuple(lst)] = max([longest_subsequence(lst[1:], lis+[lst[0]], mem), longest_subsequence(lst[1:], lis, mem)], key=len)
else:
mem[tuple(lst)] = longest_subsequence(lst[1:], lis, mem)
return mem[tuple(lst)]
</code></pre>
<p>And a non-memoized version</p>
<pre><code>def longest_subsequence(lst, lis=[]):
if not lst:
return lis
if not lis or lst[0] > lis[-1]:
result = max([longest_subsequence(lst[1:], lis+[lst[0]]), longest_subsequence(lst[1:], lis)], key=len)
else:
result = longest_subsequence(lst[1:], lis)
return result
</code></pre>
<p>However, the two functions have different behaviours. For example, the test case <code>longest_subsequence([10,9,2,5,3,7,101,18])</code> fails for the memoized version. </p>
<pre><code>>>> longest_subsequence([10,9,2,5,3,7,101,18])
[10, 101]
</code></pre>
<p>The non-memoized version is fully correct however (although much slower). </p>
<pre><code>>>> longest_subsequence([10,9,2,5,3,7,101,18])
[2, 5, 7, 101]
</code></pre>
<p>What am I doing wrong?</p>
<p><strong>EDIT:</strong> Tempux's answer fails on the following:</p>
<pre><code>>>> longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
[3, 5, 6, 7, 12]
</code></pre>
<p>where the solution from the non-memoized version is:</p>
<pre><code>>>> longest_subsequence([3,5,6,2,5,4,19,5,6,7,12])
[3, 4, 5, 6, 7, 12]
</code></pre>
| 3 | 2016-10-12T08:03:02Z | 40,008,192 | <p>So I had just discovered that Tempux's answer did not work for all cases.</p>
<p>I went back and thought about encapsulating the entire state in the memoization dictionary, and thus added <code>tuple(lis)</code> as part of the key. Also, the <code>lst</code> index trick may not be as easy to implement since I am slicing <code>lst</code> through the recursion, hence why I am using <code>tuple()</code> for my keys.</p>
<p>The reasoning behind what I did is that multiple <code>lis</code> may have the same <code>[-1]</code> value. So, with this new state, the code is:</p>
<pre><code>def longest_subsequence(lst, lis=[],mem={}):
if not lst:
return lis
if (tuple(lst),tuple(lis)) not in mem:
if not lis or lst[0] > lis[-1]:
mem[(tuple(lst),tuple(lis))] = max([longest_subsequence(lst[1:], lis+[lst[0]]), longest_subsequence(lst[1:], lis)], key=len)
else:
mem[(tuple(lst),tuple(lis))] = longest_subsequence(lst[1:], lis)
return mem[(tuple(lst),tuple(lis))]
</code></pre>
<p>This works for all cases I have tested so far.</p>
| 0 | 2016-10-12T20:54:53Z | [
"python",
"algorithm",
"dynamic-programming",
"memoization"
] |
Cython efficient 'for' loop for a given list instead of range(N) | 39,993,593 | <p>I'm coding in Cython (Python 2.7) and I'm dealing with 'for' loops. As long as I use the standard <code>for i in range(N)</code>, I get clean code: no yellow warnings in the cythonized <code>code.html</code>.</p>
<p>When I create a list of integers (as range(N) is, isn't it?), for example:</p>
<pre><code>cdef long [:] lista = np.array(list(nx.node_connected_component(Graph, v)))
</code></pre>
<p>which gives me the list of all indices of the nodes in the connected component of <code>v</code> in the graph <code>Graph</code>. I got a yellow warning when I try to define <code>for i in lista:</code>:</p>
<pre><code> __pyx_t_1 = __pyx_memoryview_fromslice(__pyx_v_lista, 1, (PyObject *(*)(char *)) __pyx_memview_get_long, (int (*)(char *, PyObject *)) __pyx_memview_set_long, 0);; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
__pyx_t_6 = __pyx_t_1; __Pyx_INCREF(__pyx_t_6); __pyx_t_15 = 0;
__pyx_t_17 = NULL;
} else {
__pyx_t_15 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_17 = Py_TYPE(__pyx_t_6)->tp_iternext; if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 151, __pyx_L1_error)
}
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
for (;;) {
if (likely(!__pyx_t_17)) {
if (likely(PyList_CheckExact(__pyx_t_6))) {
if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_6)) break;
#if CYTHON_COMPILING_IN_CPYTHON
__pyx_t_1 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_15); __Pyx_INCREF(__pyx_t_1); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 151, __pyx_L1_error)
#else
__pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
#endif
} else {
if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_6)) break;
#if CYTHON_COMPILING_IN_CPYTHON
__pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_15); __Pyx_INCREF(__pyx_t_1); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 151, __pyx_L1_error)
#else
__pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
#endif
}
} else {
__pyx_t_1 = __pyx_t_17(__pyx_t_6);
if (unlikely(!__pyx_t_1)) {
PyObject* exc_type = PyErr_Occurred();
if (exc_type) {
if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
else __PYX_ERR(0, 151, __pyx_L1_error)
}
break;
}
__Pyx_GOTREF(__pyx_t_1);
}
__pyx_t_2 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v_i = __pyx_t_2;
/* ⦠*/
}
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
</code></pre>
<p>The code obviously works, but since I need to use these loops quite often, I would like to know how to implement them correctly.</p>
<p>What is the correct assignment for <code>lista</code>?</p>
| 0 | 2016-10-12T08:08:48Z | 40,005,818 | <p>You're best off just looping over it by index</p>
<pre><code>cdef Py_ssize_t i
cdef long val
with cython.boundscheck(False): # these two lines are optional but may increase speed
with cython.wraparound(False):
for i in range(lista.shape[0]):
val = lista[i]
</code></pre>
<p>Another optimization you could probably make is to define <code>lista</code> as</p>
<pre><code>cdef long [::1] lista
</code></pre>
<p>which states that it's continuous in memory.</p>
<p>(My initial reading of the question made me think it was about the conversion from a <code>set</code>: <code>np.array(list(nx.node_connected_component(Graph, v)))</code>. I don't think this is what you're asking about, but in case it was, I can't see a way to speed up this line.)</p>
| 0 | 2016-10-12T18:26:28Z | [
"python",
"python-2.7",
"optimization",
"cython"
] |
Replicating results with different artificial neural network frameworks (ffnet, tensorflow) | 39,993,598 | <p>I'm trying to model a technical process (a number of nonlinear equations) with artificial neural networks. The function has a number of inputs and a number of outputs (e.g. 50 inputs, 150 outputs - all floats).</p>
<p>I have tried the <a href="http://ffnet.sourceforge.net/overview.html" rel="nofollow">python library ffnet</a> (wrapper for a fortran library) with great success. The errors for a certain dataset are well below 0.2%.</p>
<p>It is using a fully connected graph and these additional parameters. </p>
<pre><code>Basic assumptions and limitations:
Network has feed-forward architecture.
Input units have identity activation function, all other units have sigmoid activation function.
Provided data are automatically normalized, both input and output, with a linear mapping to the range (0.15, 0.85). Each input and output is treated separately (i.e. linear map is unique for each input and output).
Function minimized during training is a sum of squared errors of each output for each training pattern.
</code></pre>
<p>I am using one input layer, one hidden layer (size: 2/3 of input vector + size of output vector) and an output layer. I'm using the scipy conjugate gradient optimizer.</p>
<p>The downside of ffnet is the long training time and the lack of GPU support. Therefore I want to switch to a different framework and have chosen <strong>Keras with TensorFlow</strong> as the backend.</p>
<p>I have tried to model the previous configuration:</p>
<pre><code>model = Sequential()
model.add(Dense(n_hidden, input_dim=n_in))
model.add(BatchNormalization())
model.add(Dense(n_hidden))
model.add(Activation('sigmoid'))
model.add(Dense(n_out))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='mean_squared_error',
optimizer='Adamax',
metrics=['accuracy'])
</code></pre>
<p>However, the results are far worse: the error is up to 0.5% with a few thousand (!) epochs of training. The ffnet training was automatically canceled at 292 epochs. Furthermore, the differences between the network response and the validation target are not centered around 0, but mostly negative.
I have tried all optimizers and different loss functions. I have also skipped the BatchNormalization and normalized the data manually in the same way that ffnet does it. Nothing helps.</p>
<p><strong>Does anyone have a suggestion to obtain better results with keras?</strong></p>
| 0 | 2016-10-12T08:09:11Z | 39,996,371 | <p>I understand you are trying to re-train the same architecture from scratch, with a different library. The first fundamental issue to keep in mind here is that neural nets <a href="http://stackoverflow.com/q/24161525/777285">are not necessarily reproducible</a>, when weights are initialized randomly.</p>
<p>For example, here is the default constructor parameter for <code>Dense</code> in Keras:</p>
<pre><code>init='glorot_uniform'
</code></pre>
<p>But even before trying to evaluate the convergence of Keras optimizations, I would recommend trying to port the weights for which you got good results from ffnet into your Keras model. You can do so either with the <em>kwarg</em> <code>weights=</code> of <code>Dense(...)</code> for each layer, or globally at the end with <code>model.set_weights(...)</code>.</p>
<p>Using the same weights <strong>must</strong> yield the exact same result between the two libs. Unless you run into some floating point rounding issues. I believe that as long as porting the weights is not consistent, working on the optimization is unlikely to help.</p>
| 0 | 2016-10-12T10:30:12Z | [
"python",
"neural-network",
"tensorflow",
"keras"
] |
python re expression confusion | 39,993,628 | <p>When reading book: web scraping with python, the re expression confused me,</p>
<blockquote>
<pre><code>webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
</code></pre>
</blockquote>
<p>And a link in usually looks like:</p>
<pre><code><a href="/view/Afghanistan-1">
</code></pre>
<p>My confusion is that:</p>
<ol>
<li><p>Since <code>[^>]</code> means no <code>></code>, why it followed by a <code>+</code>? This <code>+</code> seems useless.</p></li>
<li><p>The confusion is that <code>(.*?)</code> , since <code>*</code> means repeat 0 or more times, why it needs <code>?</code> to repeat <code>*</code> again?</p></li>
</ol>
| -1 | 2016-10-12T08:11:16Z | 39,994,124 | <ol>
<li><p><code>[^>]+</code> matches one or more characters that are not <code>></code>, i.e. any other attributes and their corresponding values inside the tag; the <code>+</code> requires at least one such character before <code>href</code>.</p></li>
<li><p><code>*?</code> matches between zero and unlimited times, as few times as possible, expanding as needed, so it will only capture the text that comes before the <em>next</em> <code>["\']</code>.</p></li>
</ol>
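<p>A quick self-contained check of the pattern (the sample anchors here are my own, not from the book):</p>

```python
import re

webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
# one anchor with an extra attribute, one uppercase anchor with single quotes
html = '<a class="nav" href="/view/Afghanistan-1">Afghanistan</a> <A HREF=\'/view/Albania-2\'>'
print(webpage_regex.findall(html))  # ['/view/Afghanistan-1', '/view/Albania-2']
```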
| 0 | 2016-10-12T08:38:55Z | [
"python",
"python-2.7"
] |
How to not import imported submodules in a Python module? | 39,993,663 | <p>In my Python modules I often use sub-modules such as <code>datetime</code>. The issue is that these modules become accessible from outside: </p>
<pre><code># module foo
import datetime
def foosay(a):
print "Foo say: %s" % a
</code></pre>
<p>From IPython:</p>
<pre><code>import foo
foo.datetime.datetime.now()
</code></pre>
<p>I would like to know how to properly hide the sub-modules that are the internal business of <code>foo</code>.</p>
<p>I naively thought about <code>import datetime as _datetime</code> or even <code>import datetime as __datetime</code>, but this is not a very pleasant solution. I've also read about <code>__all__</code>, but it only concerns what is imported using <code>from foo import *</code>. </p>
| 1 | 2016-10-12T08:13:40Z | 39,993,835 | <p>You can do the import datetime within the function that uses it in module foo:</p>
<pre><code>def foodate():
import datetime
print datetime.datetime.now()
def foosay(a):
print "Foo say: %s" % a
</code></pre>
<p>Now importing foo will not import datetime.</p>
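<p>A small self-contained way to verify this behaviour (using a dynamically built module in place of a real <code>foo.py</code>, purely for illustration):</p>

```python
import types

# build a throwaway module equivalent to foo.py with a function-local import
mod = types.ModuleType('foo')
exec("""
def foodate():
    import datetime
    return datetime.datetime.now()
""", mod.__dict__)

assert not hasattr(mod, 'datetime')  # datetime is not exposed on the module
print(mod.foodate())                 # but the function can still use it
```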
<p><strong>EDIT:</strong> You can also import just the names you need instead of binding the whole module name. Note that <code>now</code> is a method of the <code>datetime</code> class, so you import the class rather than the method:</p>
<pre><code>from datetime import datetime
print datetime.now()
</code></pre>
| 1 | 2016-10-12T08:23:47Z | [
"python",
"import",
"module",
"visibility"
] |
user.is_authenticated always returns False for inactive users on template | 39,993,714 | <p>In my template, <code>login.html</code>, I have:</p>
<pre><code>{% if form.errors %}
{% if user.is_authenticated %}
<div class="alert alert-warning"><center>Your account doesn't have access to this utility.</center></div>
{% else %}
<div class="alert alert-warning"><center>Incorrect username or password!</center></div>
{% endif %}
{% endif %}
</code></pre>
<p>What I am trying to do is, if after form submission, user is inactive then display a different error message and if user is simply not authenticated then display Incorrect username password error message. This doesn't work. It always displays "Incorrect username or password!" in both the cases. However inside a view, user.is_authenticated returns <code>True</code> even for inactive users.</p>
<p>I there any other wway to get this done? I also tried</p>
<pre><code>{% if 'inactive' in form.errors %}
</code></pre>
<p>But this doesn't work either, even though when I print <code>form.errors</code>, it shows the text "This account is inactive" for inactive users.</p>
<p>EDIT:
For view, I am simply using django's login view inside a custom login view</p>
<p>views.py:</p>
<pre><code>from django.contrib.auth.views import login, logout
from django.shortcuts import render, redirect
def custom_login(request, **kwargs):
if request.user.is_authenticated():
return redirect('/homepage/')
else:
return login(request, **kwargs)
</code></pre>
| 1 | 2016-10-12T08:17:01Z | 39,994,258 | <p>There isn't any point checking <code>{% if user.is_authenticated %}</code> in your login template. If the user is authenticated, then your <code>custom_login</code> view would have redirected them to the homepage.</p>
<p>If the account is inactive, then the form will be invalid and the user will not be logged in. The forms's errors will look like:</p>
<pre><code>{'__all__': [u'This account is inactive.']}
</code></pre>
<p>Therefore checking <code>{% if 'inactive' in form.errors %}</code> will not work, because the error is stored with the key <code>__all__</code>, not <code>inactive</code>.</p>
<p>You could do <code>{% if 'This account is inactive.' in form.non_field_errors %}</code> but this is very fragile, and would break if Django ever changed the text of the error message for inactive users. </p>
<p>It would be better to display the actual errors, rather than trying to find out what sort of error it is in the template. The easiest way to display non-field errors is to include:</p>
<pre><code>{{ form.non_field_errors }}
</code></pre>
<p>Or, if you need more control:</p>
<pre><code>{% for error in form.non_field_errors %}
{{ error }}
{% endfor %}
</code></pre>
<p>If you need to change the error message for inactive users, you can subclass the authentication form, then use that in your login view.</p>
<pre><code>my_error_messages = AuthenticationForm.error_messages.copy()
my_error_messages['inactive'] = 'My custom message'
class MyAuthenticationForm(AuthenticationForm):
error_messages = my_error_messages
</code></pre>
| 2 | 2016-10-12T08:45:07Z | [
"python",
"django",
"django-templates",
"django-authentication"
] |
user.is_authenticated always returns False for inactive users on template | 39,993,714 | <p>In my template, <code>login.html</code>, I have:</p>
<pre><code>{% if form.errors %}
{% if user.is_authenticated %}
<div class="alert alert-warning"><center>Your account doesn't have access to this utility.</center></div>
{% else %}
<div class="alert alert-warning"><center>Incorrect username or password!</center></div>
{% endif %}
{% endif %}
</code></pre>
<p>What I am trying to do is, if after form submission, user is inactive then display a different error message and if user is simply not authenticated then display Incorrect username password error message. This doesn't work. It always displays "Incorrect username or password!" in both the cases. However inside a view, user.is_authenticated returns <code>True</code> even for inactive users.</p>
<p>Is there any other way to get this done? I also tried</p>
<pre><code>{% if 'inactive' in form.errors %}
</code></pre>
<p>But this doesn't work either, even though when I print <code>form.errors</code>, it shows the text "This account is inactive" for inactive users.</p>
<p>EDIT:
For view, I am simply using django's login view inside a custom login view</p>
<p>views.py:</p>
<pre><code>from django.contrib.auth.views import login, logout
from django.shortcuts import render, redirect
def custom_login(request, **kwargs):
if request.user.is_authenticated():
return redirect('/homepage/')
else:
return login(request, **kwargs)
</code></pre>
| 1 | 2016-10-12T08:17:01Z | 39,994,473 | <p>Just to complement Alasdair's very sensible answer, if you <strong>really</strong> want to explicitly check whether the user exists but is inactive, you can use <code>AuthenticationForm.get_user()</code>, i.e.:</p>
<pre><code>{% if form.errors %}
{% with form.get_user as user %}
{% if user %}
{# the user is inactive #}
{% else %}
{# no user matching username/password #}
{% endif %}
{% endwith %}
{% endif %}
</code></pre>
<p>This is assuming you're using the default <code>django.contrib.auth.forms.AuthenticationForm</code> of course - you can use your own and override the <code>confirm_login_allowed()</code> to implement your own policy.</p>
| 1 | 2016-10-12T08:57:03Z | [
"python",
"django",
"django-templates",
"django-authentication"
] |
Python: machine learning without imputing missing data | 39,993,738 | <p>I am currently working with a rather particular dataset: it has about 1000 columns and 1M rows, but about 90% of the values are NaN.
This is not because the records are bad, but because the data represent measurements made on individuals, and only about 100 features are relevant for each individual. As such, imputing missing values would completely destroy the information in the data. </p>
<p>It is not easy either to just group together individuals that have the same features and only consider the column relevant to each subgroup, as this would actually yield extremely small groups for each set of columns (almost any combination of filled in columns is possible for a given individual). </p>
<p>The issue is that scikit-learn's dimension reduction methods cannot handle missing values. Is there a package that does, or should I use a different method and skip dimension reduction?</p>
| 2 | 2016-10-12T08:18:54Z | 40,019,480 | <p>You can use gradient boosting packages, which handle missing values and are ideal for your case. Since you asked for packages: gbm in R and xgboost in Python can be used. If you want to know how missing values are handled automatically in xgboost, go through section 3.4 of <a href="https://arxiv.org/pdf/1603.02754v3.pdf" rel="nofollow">this paper</a> to get an insight.</p>
| 1 | 2016-10-13T11:16:17Z | [
"python",
"machine-learning",
"pca",
"missing-data"
] |
How to create a list of 3 or 4 columns of a DataFrame in Pandas when we have 20 to 50 columns? | 39,993,744 | <p>When we create a list using the code below, we get a list of all the columns, but I want to get a list of only 3 to 5 specific columns.</p>
<p>col_list= list(df)</p>
| 3 | 2016-10-12T08:19:08Z | 39,993,766 | <p>Use slicing of <code>list</code>:</p>
<pre><code>df[df.columns[2:5]]
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'C':[7,8,9],
'D':[1,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df)
A B C D E F
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
print (df.columns[2:5])
Index(['C', 'D', 'E'], dtype='object')
print (df[df.columns[2:5]])
C D E
0 7 1 5
1 8 3 3
2 9 5 6
</code></pre>
<p>Another solution:</p>
<pre><code>col_list= list(df)
print (col_list[2:5])
['C', 'D', 'E']
print (df[col_list[2:5]])
C D E
0 7 1 5
1 8 3 3
2 9 5 6
</code></pre>
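<p>If you already know the column names you want (rather than their positions), you can also pass a list of names directly — a small addition to the answer above:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9],
                   'D': [1, 3, 5], 'E': [5, 3, 6], 'F': [7, 4, 3]})
subset = df[['C', 'D', 'E']]   # select specific columns by name
print(list(subset.columns))    # ['C', 'D', 'E']
```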
| 2 | 2016-10-12T08:20:09Z | [
"python",
"pandas",
"data-science"
] |
python preserving output csv file column order | 39,993,900 | <p>The issue is common: when I import a CSV file, process it, and finally output it, the order of the columns in the output CSV file may differ from the original one. For instance:</p>
<pre><code>dct={}
dct['a']=[1,2,3,4]
dct['b']=[5,6,7,8]
dct['c']=[9,10,11,12]
header = dct.keys()
rows=pd.DataFrame(dct).to_dict('records')
with open('outTest.csv', 'wb') as f:
f.write(','.join(header))
f.write('\n')
for data in rows:
f.write(",".join(str(data[h]) for h in header))
f.write('\n')
</code></pre>
<p>the original csv file is like:</p>
<pre><code>a,c,b
1,9,5
2,10,6
3,11,7
4,12,8
</code></pre>
<p>while I'd like to fixed the order of the column like the output:</p>
<pre><code>a,b,c
1,5,9
2,6,10
3,7,11
4,8,12
</code></pre>
<p>The answers I found are mostly related to <code>pandas</code>; I wonder if we can solve this in another way.</p>
<p>Any help is appreciated, thank you.</p>
| 0 | 2016-10-12T08:27:18Z | 39,993,960 | <p>Instead of <code>dct={}</code> just do this:</p>
<pre><code>from collections import OrderedDict
dct = OrderedDict()
</code></pre>
<p>The keys will be ordered in the same order you define them.</p>
<p>Comparative test:</p>
<pre><code>from collections import OrderedDict
dct = OrderedDict()
dct['a']=[1,2,3,4]
dct['b']=[5,6,7,8]
dct['c']=[9,10,11,12]
stddct = dict(dct) # create a standard dictionary
print(stddct.keys()) # "wrong" order
print(dct.keys()) # deterministic order
</code></pre>
<p>result:</p>
<pre><code>['a', 'c', 'b']
['a', 'b', 'c']
</code></pre>
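<p>With an <code>OrderedDict</code> you can also let the standard-library <code>csv</code> module handle the joining and line endings instead of writing rows by hand (a sketch in Python 3 syntax — my own rewrite of the loop from the question, writing to a <code>StringIO</code> for demonstration):</p>

```python
import csv
import io
from collections import OrderedDict

dct = OrderedDict()
dct['a'] = [1, 2, 3, 4]
dct['b'] = [5, 6, 7, 8]
dct['c'] = [9, 10, 11, 12]

header = list(dct.keys())  # ['a', 'b', 'c'] -- insertion order is kept
buf = io.StringIO()        # use open('outTest.csv', 'w', newline='') for a real file
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(zip(*(dct[h] for h in header)))  # transpose columns into rows
print(buf.getvalue())
```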
| 3 | 2016-10-12T08:30:55Z | [
"python",
"csv",
"order"
] |
python preserving output csv file column order | 39,993,900 | <p>The issue is common: when I import a CSV file, process it, and finally output it, the order of the columns in the output CSV file may differ from the original one. For instance:</p>
<pre><code>dct={}
dct['a']=[1,2,3,4]
dct['b']=[5,6,7,8]
dct['c']=[9,10,11,12]
header = dct.keys()
rows=pd.DataFrame(dct).to_dict('records')
with open('outTest.csv', 'wb') as f:
f.write(','.join(header))
f.write('\n')
for data in rows:
f.write(",".join(str(data[h]) for h in header))
f.write('\n')
</code></pre>
<p>the original csv file is like:</p>
<pre><code>a,c,b
1,9,5
2,10,6
3,11,7
4,12,8
</code></pre>
<p>while I'd like to fixed the order of the column like the output:</p>
<pre><code>a,b,c
1,5,9
2,6,10
3,7,11
4,8,12
</code></pre>
<p>The answers I found are mostly related to <code>pandas</code>; I wonder if we can solve this in another way.</p>
<p>Any help is appreciated, thank you.</p>
| 0 | 2016-10-12T08:27:18Z | 39,994,021 | <p>Try using <code>OrderedDict</code> instead of <code>dictionary</code>. <code>OrderedDict</code> is part of <code>collections</code> module.</p>
<p>Docs: <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow">link</a></p>
| 1 | 2016-10-12T08:33:10Z | [
"python",
"csv",
"order"
] |
What's the best way to separate each stone in an image with many stones (some above the others) | 39,993,910 | <p>I need to make a study of the floor of a lake, and have to calculate the number and average size of the stones on the floor.</p>
<p>I have been studying morphological procedures to solve it in OpenCV but still need to find a way to try to separate the stones and make a binary image that precisely shows theirs contours.</p>
<p>I still didn't take the pictures, but the images will be something like the following:</p>
<p><a href="https://i.stack.imgur.com/tgeAf.png" rel="nofollow"><img src="https://i.stack.imgur.com/tgeAf.png" alt="enter image description here"></a></p>
<p>What is the best algorithm to separate each stone and get their contours?
And if this kind of image is too complex to properly separate the stones, what is the best method to have a size x number estimation? </p>
| -1 | 2016-10-12T08:27:53Z | 39,994,108 | <p>Watershed is probably what will give you an approximation to the solution.</p>
<p>Another thing you could do is label images with the number of flakes and particles you can count, feed them to a deep neural net, and let it find out how to count the flakes. If you have enough training cases you may end up with a good result in the output.</p>
<p>Information on watershed can be found at this link:
<a href="http://www.pyimagesearch.com/2015/11/02/watershed-opencv/" rel="nofollow">http://www.pyimagesearch.com/2015/11/02/watershed-opencv/</a></p>
| 0 | 2016-10-12T08:38:11Z | [
"python",
"opencv",
"image-processing",
"image-segmentation"
] |
What's the best way to separate each stone in an image with many stones (some above the others) | 39,993,910 | <p>I need to make a study of the floor of a lake, and have to calculate the number and average size of the stones on the floor.</p>
<p>I have been studying morphological procedures to solve it in OpenCV but still need to find a way to try to separate the stones and make a binary image that precisely shows theirs contours.</p>
<p>I still didn't take the pictures, but the images will be something like the following:</p>
<p><a href="https://i.stack.imgur.com/tgeAf.png" rel="nofollow"><img src="https://i.stack.imgur.com/tgeAf.png" alt="enter image description here"></a></p>
<p>What is the best algorithm to separate each stone and get their contours?
And if this kind of image is too complex to properly separate the stones, what is the best method to have a size x number estimation? </p>
| -1 | 2016-10-12T08:27:53Z | 40,010,811 | <p>You should start with the "basic" contour detection based on the Canny filter:
<a href="http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html" rel="nofollow">http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html</a>
This simple method will not solve your problem, but no simple method will. Still, you have to start with something to understand the complexity of your problem. </p>
| 0 | 2016-10-13T01:21:03Z | [
"python",
"opencv",
"image-processing",
"image-segmentation"
] |
Match a substring in Pandas with str.extract method | 39,993,921 | <p>I have a string looking like:</p>
<pre><code>29818-218705-61709-2
</code></pre>
<p>I want to extract the second-to-last field, the 5-digit number between the two dashes:</p>
<pre><code>61709
</code></pre>
<p>each string is contained in a pandas series:</p>
<p>I came up with:</p>
<pre><code>df.id.str.extract(r'[.-]([0-9]{5})[.-]?')
</code></pre>
<p>but it extracts the first 5 digits group.</p>
<p>How can I match the one I want?</p>
| 2 | 2016-10-12T08:28:35Z | 39,993,970 | <p>You may use</p>
<pre><code>[.-]([0-9]{5})[.-][0-9]+$
</code></pre>
<p>See <a href="https://regex101.com/r/UHzTOq/1" rel="nofollow">this regex demo</a></p>
<p><strong>Details</strong>:</p>
<ul>
<li><code>[.-]</code> - a <code>.</code> or <code>-</code> separator</li>
<li><code>([0-9]{5})</code> - Group 1 capturing 5 digits</li>
<li><code>[.-]</code> - again a separator</li>
<li><code>[0-9]+</code> -1+ digits</li>
<li><code>$</code> - end of string.</li>
</ul>
<p>Thanks to the <code>$</code> anchor, the digit groups that are at the end are matched.</p>
<p>An alternative is to leverage backtracking with:</p>
<pre><code>^.*[.-]([0-9]{5})[.-]
</code></pre>
<p>See <a href="https://regex101.com/r/UHzTOq/3" rel="nofollow">this demo</a></p>
<p>The <code>^.*</code> will match any 0+ chars other than linebreak symbols from the start of the string, as many as possible, so the last <code>-|.</code>+<code>5 digits</code>+<code>-|.</code> are matched.</p>
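<p>Plugging the anchored pattern back into the pandas call from the question would look like this (a sketch with a second made-up id for illustration; <code>expand=False</code> makes <code>extract</code> return a Series):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': ['29818-218705-61709-2', '11111-22222-33333-4']})
out = df.id.str.extract(r'[.-]([0-9]{5})[.-][0-9]+$', expand=False)
print(out.tolist())  # ['61709', '33333']
```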
| 0 | 2016-10-12T08:31:25Z | [
"python",
"regex",
"string",
"pandas"
] |
Match a substring in Pandas with str.extract method | 39,993,921 | <p>I have a string looking like:</p>
<pre><code>29818-218705-61709-2
</code></pre>
<p>I want to extract the second-to-last field, the 5-digit number between the two dashes:</p>
<pre><code>61709
</code></pre>
<p>each string is contained in a pandas series:</p>
<p>I came up with:</p>
<pre><code>df.id.str.extract(r'[.-]([0-9]{5})[.-]?')
</code></pre>
<p>but it extracts the first 5 digits group.</p>
<p>How can I match the one I want?</p>
| 2 | 2016-10-12T08:28:35Z | 39,994,193 | <p>you can use <code>split</code></p>
<pre><code>df.id.str.split('-').str[-2]
</code></pre>
<hr>
<p><strong><em>demo</em></strong> </p>
<pre><code>df = pd.DataFrame(dict(id=['29818-218705-61709-2'] * 1000))
df.id.str.split('-').str[-2].head()
0 61709
1 61709
2 61709
3 61709
4 61709
Name: id, dtype: object
</code></pre>
| 2 | 2016-10-12T08:42:30Z | [
"python",
"regex",
"string",
"pandas"
] |
Match a substring in Pandas with str.extract method | 39,993,921 | <p>I have a string looking like:</p>
<pre><code>29818-218705-61709-2
</code></pre>
<p>I want to extract the second-to-last field, the 5-digit number between the two dashes:</p>
<pre><code>61709
</code></pre>
<p>each string is contained in a pandas series:</p>
<p>I came up with:</p>
<pre><code>df.id.str.extract(r'[.-]([0-9]{5})[.-]?')
</code></pre>
<p>but it extracts the first 5 digits group.</p>
<p>How can I match the one I want?</p>
| 2 | 2016-10-12T08:28:35Z | 39,994,257 | <p>You can try:</p>
<pre><code>>>> s = "29818-218705-61709-2 "
>>> s.split("-")[2]
'61709'
</code></pre>
| 1 | 2016-10-12T08:45:06Z | [
"python",
"regex",
"string",
"pandas"
] |
Edit HTML, inserting CSS reference | 39,993,959 | <p>It's possible to create HTML page from a CSV file, with the following:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv('../data.csv',delimiter=';', engine='python')
df.to_html('csv.html')
</code></pre>
<p>I would like to make that HTML respect some CSS present in <code>csv.css</code>. One way to do this is to manually edit the <code>csv.html</code> <code>head</code>, inserting:</p>
<pre><code><head><link rel="stylesheet" type="text/css" href="csv.css"></head>
</code></pre>
<p>Instead of doing it manually, how one can get there programmatically (using Python)?</p>
| 0 | 2016-10-12T08:30:45Z | 39,994,259 | <p>The <code>to_html</code> method does not output an entire HTML document. Instead, it just creates a single <code>table</code> element.</p>
<p>If you want to include CSS, you have to create additional HTML elements, and insert them yourself before writing the data. One of the simplest ways is as follows:</p>
<pre><code>with open('test.html', 'w') as fobj:
fobj.write('<html><head><link rel="stylesheet" href="test.css"></head><body>')
df.to_html(fobj)
fobj.write('</body></html>')
</code></pre>
<p>The first argument of <code>to_html</code> has to be a file-like object: so it can be either a file object as in the above example or a <a href="https://docs.python.org/3/library/io.html#io.StringIO" rel="nofollow"><code>StringIO</code></a>.</p>
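<p>Since <code>to_html</code> simply returns the table as a string when called without a buffer, the wrapping can also be done by plain concatenation (a sketch):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# df.to_html() with no argument returns the <table> markup as a string
page = (
    '<html><head>'
    '<link rel="stylesheet" type="text/css" href="csv.css">'
    '</head><body>'
    + df.to_html()
    + '</body></html>'
)
print(page[:40])
```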
| 1 | 2016-10-12T08:45:15Z | [
"python",
"html",
"python-2.7",
"pandas"
] |
How to include __build_class__ when creating a module in the python C API | 39,994,010 | <p>I am trying to use the <a href="https://docs.python.org/3/c-api/" rel="nofollow">Python 3.5 C API</a> to execute some code that includes constructing a class. Specifically this:</p>
<pre><code>class MyClass:
def test(self):
print('test')
MyClass().test()
</code></pre>
<p>The problem I have is that it errors like this:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: __build_class__ not found
</code></pre>
<p>So somehow I need my module to include <code>__build_class__</code>, but I am not sure how (I guess that I would also miss other things you get by default when using <a href="https://www.python.org/" rel="nofollow">Python</a> too) - is there a way to include all this built-in stuff in my module?</p>
<p>Here is my code so far:</p>
<pre><code>#include <Python.h>
int main(void)
{
int ret = 0;
PyObject *pValue, *pModule, *pGlobal, *pLocal;
Py_Initialize();
pGlobal = PyDict_New();
pModule = PyModule_New("mymod");
pLocal = PyModule_GetDict(pModule);
pValue = PyRun_String(
"class MyClass:\n\tdef test(self):\n\t\tprint('test')\n\nMyClass().test()",
Py_file_input,
pGlobal,
pLocal);
if (pValue == NULL) {
if (PyErr_Occurred()) {
PyErr_Print();
}
ret = 1;
} else {
Py_DECREF(pValue);
}
Py_Finalize();
return ret;
}
</code></pre>
<p>so <code>pValue</code> is <code>NULL</code> and it is calling <code>PyErr_Print</code>.</p>
| 0 | 2016-10-12T08:32:46Z | 39,995,510 | <p>There are (at least) two ways to solve this, it seems...</p>
<h2>Way 1</h2>
<p>Instead of:</p>
<pre><code>pGlobal = PyDict_New();
</code></pre>
<p>You can import the <code>__main__</code> module and get its globals dictionary like this:</p>
<pre><code>pGlobal = PyModule_GetDict(PyImport_AddModule("__main__"));
</code></pre>
<p>This way is described like so:</p>
<blockquote>
  <p>BUT PyEval_GetGlobals will return null if it is not called from within
Python. This will never be the case when extending Python, but when Python
is embedded, it may happen. This is because PyRun_* define the global
scope, so if you're not somehow inside a PyRun_ thing (e.g. module called
from python called from embedder), there are no globals. </p>
<p>In an embedded-python situation, if you decide that all of your PyRun_*
calls are going to use <code>__main__</code> as the global namespace,
PyModule_GetDict(PyImport_AddModule("<code>__main__</code>")) will do it.</p>
</blockquote>
<p>Which I got from the question <a href="http://www.gossamer-threads.com/lists/python/python/8946#8946" rel="nofollow">embedding</a> I found over on this <a href="http://www.gossamer-threads.com/lists/python/python/" rel="nofollow">Python list</a>.</p>
<h2>Way 2</h2>
<p>Or as an alternative, which I personally prefer to importing the main module (and found <a href="http://stackoverflow.com/a/10684099/1039947">here</a>), you can do this to populate the new dictionary you created with the built-in stuff which includes <code>__build_class__</code>:</p>
<pre><code>pGlobal = PyDict_New();
PyDict_SetItemString(pGlobal, "__builtins__", PyEval_GetBuiltins());
</code></pre>
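<p>You can reproduce both the failure and the fix from pure Python with <code>exec</code>, which mirrors what the embedded interpreter does with the globals dict (a sketch; an explicitly empty <code>__builtins__</code> stands in for the bare <code>PyDict_New()</code> dictionary):</p>

```python
import builtins

code = (
    "class MyClass:\n"
    "    def test(self):\n"
    "        return 'test'\n"
    "result = MyClass().test()\n"
)

# An empty __builtins__ reproduces the embedded failure:
err = None
try:
    exec(code, {'__builtins__': {}})
except NameError as e:
    err = e                      # __build_class__ not found

# Seeding the dict, like PyEval_GetBuiltins() does in Way 2, fixes it:
g = {'__builtins__': builtins.__dict__}
exec(code, g)
print(err, '->', g['result'])
```

<p>The seeding line is exactly what the <code>PyDict_SetItemString</code> call above does on the C side.</p>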
| 0 | 2016-10-12T09:45:57Z | [
"python",
"c",
"python-3.x",
"python-c-api"
] |
How could you refactor current_user in User.query.all() in Flask-Security? | 39,994,099 | <p>Index page should be shown only to logged in users and redirect other users to landing page.</p>
<pre><code>@app.route('/')
def index():
if current_user in User.query.all():
return render_template('index.html')
else:
return render_template('landing.html')
</code></pre>
<p>So how could you refactor <code>current_user in User.query.all()</code> part? Should I customize the <code>@login_required</code> somehow? How have others dealt with this problem?</p>
| 0 | 2016-10-12T08:37:35Z | 39,998,142 | <p>Use <code>current_user.is_authenticated</code> property. e.g </p>
<pre><code>if current_user.is_authenticated:
return render_template('index.html')
else:
return render_template('landing.html')
</code></pre>
| 3 | 2016-10-12T12:03:15Z | [
"python",
"flask",
"flask-security"
] |
Pandas: using groupby if values in columns are dictionaries | 39,994,110 | <p>I have dataframe</p>
<pre><code>category dictionary
moto {'motocycle':10, 'buy':8, 'motocompetition':7}
shopping {'buy':200, 'order':20, 'sale':30}
IT {'iphone':214, 'phone':1053, 'computer':809}
shopping {'zara':23, 'sale':18, 'sell':20}
IT {'lenovo':200, 'iphone':300, 'mac':200}
</code></pre>
<p>I need groupby category and as result concatenate dictionaries and choose 3 keys with the biggest values. And next get dataframe, where at the column <code>category</code> I have unique category, and in the column <code>data</code> I have list with keys.</p>
<p>I know that I can use <code>Counter</code> to concatenate dicts, but I don't know how to do that per category.
Desired output:</p>
<pre><code>category data
moto ['motocycle', 'buy', 'motocompetition']
shopping ['buy', 'sale', 'zara']
IT ['phone', 'computer', 'iphone']
</code></pre>
| 2 | 2016-10-12T08:38:11Z | 39,994,459 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with custom function with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow"><code>nlargest</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.tolist.html" rel="nofollow"><code>Index.tolist</code></a>:</p>
<pre><code>df = pd.DataFrame({
'category':['moto','shopping','IT','shopping','IT'],
'dictionary':
[{'motocycle':10, 'buy':8, 'motocompetition':7},
{'buy':200, 'order':20, 'sale':30},
{'iphone':214, 'phone':1053, 'computer':809},
{'zara':23, 'sale':18, 'sell':20},
{'lenovo':200, 'iphone':300, 'mac':200}]})
print (df)
category dictionary
0 moto {'motocycle': 10, 'buy': 8, 'motocompetition': 7}
1 shopping {'sale': 30, 'buy': 200, 'order': 20}
2 IT {'phone': 1053, 'computer': 809, 'iphone': 214}
3 shopping {'sell': 20, 'zara': 23, 'sale': 18}
4 IT {'lenovo': 200, 'mac': 200, 'iphone': 300}
import collections
import functools
import operator
def f(x):
#some possible solution for sum values of dict
#http://stackoverflow.com/a/3491086/2901002
return pd.Series(functools.reduce(operator.add, map(collections.Counter, x)))
.nlargest(3).index.tolist()
print (df.groupby('category')['dictionary'].apply(f).reset_index())
category dictionary
0 IT [phone, computer, iphone]
1 moto [motocycle, buy, motocompetition]
2 shopping [buy, sale, zara]
</code></pre>
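<p>The same idea can also be written with <code>Counter.most_common</code>, which avoids building an intermediate <code>Series</code> per group (a sketch):</p>

```python
import collections
import pandas as pd

df = pd.DataFrame({
    'category': ['moto', 'shopping', 'IT', 'shopping', 'IT'],
    'dictionary': [
        {'motocycle': 10, 'buy': 8, 'motocompetition': 7},
        {'buy': 200, 'order': 20, 'sale': 30},
        {'iphone': 214, 'phone': 1053, 'computer': 809},
        {'zara': 23, 'sale': 18, 'sell': 20},
        {'lenovo': 200, 'iphone': 300, 'mac': 200}]})

def top3(dicts):
    # add up all the dicts of one group, then keep the 3 largest keys
    total = sum(map(collections.Counter, dicts), collections.Counter())
    return [k for k, v in total.most_common(3)]

res = df.groupby('category')['dictionary'].apply(top3)
print(res)
```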
| 3 | 2016-10-12T08:56:28Z | [
"python",
"pandas",
"dictionary"
] |
Pandas: using groupby if values in columns are dictionaries | 39,994,110 | <p>I have dataframe</p>
<pre><code>category dictionary
moto {'motocycle':10, 'buy':8, 'motocompetition':7}
shopping {'buy':200, 'order':20, 'sale':30}
IT {'iphone':214, 'phone':1053, 'computer':809}
shopping {'zara':23, 'sale':18, 'sell':20}
IT {'lenovo':200, 'iphone':300, 'mac':200}
</code></pre>
<p>I need groupby category and as result concatenate dictionaries and choose 3 keys with the biggest values. And next get dataframe, where at the column <code>category</code> I have unique category, and in the column <code>data</code> I have list with keys.</p>
<p>I know that I can use <code>Counter</code> to concatenate dicts, but I don't know how to do that per category.
Desired output:</p>
<pre><code>category data
moto ['motocycle', 'buy', 'motocompetition']
shopping ['buy', 'sale', 'zara']
IT ['phone', 'computer', 'iphone']
</code></pre>
| 2 | 2016-10-12T08:38:11Z | 39,994,485 | <pre><code>df = pd.DataFrame(dict(category=['moto', 'shopping', 'IT', 'shopping', 'IT'],
dictionary=[
dict(motorcycle=10, buy=8, motocompetition=7),
dict(buy=200, order=20, sale=30),
dict(iphone=214, phone=1053, computer=809),
dict(zara=23, sale=18, sell=20),
dict(lenovo=200, iphone=300, mac=200),
]))
def top3(x):
return x.dropna().sort_values().tail(3)[::-1].index.tolist()
df.dictionary.apply(pd.Series).groupby(df.category).sum().apply(top3, axis=1)
category
IT [phone, computer, iphone]
moto [motorcycle, buy, motocompetition]
shopping [buy, sale, zara]
dtype: object
</code></pre>
| 2 | 2016-10-12T08:57:27Z | [
"python",
"pandas",
"dictionary"
] |
What happen after recursion found base condition? | 39,994,167 | <p>I am trying to understand recursion , I am following stackoverflow answer <a href="http://stackoverflow.com/questions/717725/understanding-recursion">Understanding recursion</a> but i am not getting what i am looking. until i have learned this : <code>" if recursion is in program then below block of code will not execute until recursion found its base condition, once it found its base condition then that recursion will never execute again and now below block of code will execute without recursion "</code> is am getting right ?</p>
<p>Here is a program :</p>
<pre><code>def countdown(n):
print(n)
if n == 0:
return
countdown(n - 1) # the recursive call
countdown(4)
</code></pre>
<p>I tried to take help of python visualized code , so my confusion is first recursive function will call function again and again until it found base condition ,ok i understand this part ,like in this program it found base condtion if <code>n==0</code> so now n is 0 and return value is 0 , i understand this :</p>
<p><a href="https://i.stack.imgur.com/5UpOT.png" rel="nofollow"><img src="https://i.stack.imgur.com/5UpOT.png" alt="enter image description here"></a></p>
<p>but what happen after ? why it return result in reverse order after found base condition ? this is my confusion , why its going reverse ? its like first its unpacking values then its packing old values ? from where n=1 came ?? how ??</p>
<p><a href="https://i.stack.imgur.com/O9Exu.png" rel="nofollow"><img src="https://i.stack.imgur.com/O9Exu.png" alt="enter image description here"></a></p>
<p>now n=2 how ? how its going reverse ???? whats happening here ?</p>
<p><a href="https://i.stack.imgur.com/IO7fe.png" rel="nofollow"><img src="https://i.stack.imgur.com/IO7fe.png" alt="enter image description here"></a></p>
<p>now n=3 then n=4 and then at last it goes to first where it started , whats happening here , my big confusion is what happen after base condition found </p>
<p>like in this program :</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>when it found base condition that if list is empty return result so after it found base condition why its printing result it reverse ? </p>
<p><a href="https://i.stack.imgur.com/plb7T.png" rel="nofollow"><img src="https://i.stack.imgur.com/plb7T.png" alt="enter image description here"></a></p>
| 0 | 2016-10-12T08:41:09Z | 39,994,500 | <p>The recursion is quite easy to understand: in your example the function calls itself multiple times, but with different values:</p>
<p>F(4) -> F(3) -> F(2) -> F(1) -> F(0). When you reach the last one (after printing 4 3 2 1), the function prints 0 and then simply returns, so the deepest call ends. Execution now goes back to the point in the stack where F(1) was left. This is important: the whole of F(1) is not executed again; the interpreter resumes exactly where that call stopped. In your case that means "do nothing" and return again, back to the point in the stack where F(2) was, and so on, "do nothing" and return, until F(4) does the same. If you add an extra <code>if</code> like below, you will see the "Happy Ending" printed at the very end, which is completely normal: when control comes back to F(4) you can still check for a special case; in all the other calls the <code>if</code> will not match. :-)</p>
<p>Hope this solves your doubts! If not, please let me know.</p>
<pre><code>def countdown(n):
print(n)
if n == 0:
return
countdown(n - 1) # the recursive call
if (n==4):
print("Happy Ending")
countdown(4)
</code></pre>
| -1 | 2016-10-12T08:58:05Z | [
"python",
"recursion"
] |
What happen after recursion found base condition? | 39,994,167 | <p>I am trying to understand recursion , I am following stackoverflow answer <a href="http://stackoverflow.com/questions/717725/understanding-recursion">Understanding recursion</a> but i am not getting what i am looking. until i have learned this : <code>" if recursion is in program then below block of code will not execute until recursion found its base condition, once it found its base condition then that recursion will never execute again and now below block of code will execute without recursion "</code> is am getting right ?</p>
<p>Here is a program :</p>
<pre><code>def countdown(n):
print(n)
if n == 0:
return
countdown(n - 1) # the recursive call
countdown(4)
</code></pre>
<p>I tried to take help of python visualized code , so my confusion is first recursive function will call function again and again until it found base condition ,ok i understand this part ,like in this program it found base condtion if <code>n==0</code> so now n is 0 and return value is 0 , i understand this :</p>
<p><a href="https://i.stack.imgur.com/5UpOT.png" rel="nofollow"><img src="https://i.stack.imgur.com/5UpOT.png" alt="enter image description here"></a></p>
<p>but what happen after ? why it return result in reverse order after found base condition ? this is my confusion , why its going reverse ? its like first its unpacking values then its packing old values ? from where n=1 came ?? how ??</p>
<p><a href="https://i.stack.imgur.com/O9Exu.png" rel="nofollow"><img src="https://i.stack.imgur.com/O9Exu.png" alt="enter image description here"></a></p>
<p>now n=2 how ? how its going reverse ???? whats happening here ?</p>
<p><a href="https://i.stack.imgur.com/IO7fe.png" rel="nofollow"><img src="https://i.stack.imgur.com/IO7fe.png" alt="enter image description here"></a></p>
<p>now n=3 then n=4 and then at last it goes to first where it started , whats happening here , my big confusion is what happen after base condition found </p>
<p>like in this program :</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>when it found base condition that if list is empty return result so after it found base condition why its printing result it reverse ? </p>
<p><a href="https://i.stack.imgur.com/plb7T.png" rel="nofollow"><img src="https://i.stack.imgur.com/plb7T.png" alt="enter image description here"></a></p>
| 0 | 2016-10-12T08:41:09Z | 39,997,613 | <p>Sometimes the most basic information is important. What it appears you may <em>not</em> yet have realised is that Python creates a <strong>new (local) namespace</strong> for each function call. Not only that, but the function you use as an example doesn't even really take much advantage of recursion. But consider this simple function:</p>
<pre><code>def factorial(n):
if n==1:
return 1
else:
return n * factorial(n-1)
</code></pre>
<p>When you make the call <code>factorial(2)</code> a namespace is created in which the name <code>n</code> is bound to the value 2, and execution of the function code starts. Since <code>n</code> isn't 1 it then has to compute its return value, during which it executes the call <code>factorial(1)</code>, so the interpreter creates a new namespace in which the name <code>n</code> is bound to the value 1, and execution of the function code starts. This time, <code>n</code> <em>is</em> 1, and so it returns the value 1 (after destroying the namespace).</p>
<p>This completes the computation of the first call's multiplication operands, so it now multiplies <code>n</code> (which in this original namespace is 2) by <code>factorial(1)</code> (which it now knows is 1) to get 2, which it then returns (after destroying its namespace).</p>
<p>Does this tediously detailed description help at all? The idea of recursion isn't an easy one to understand.</p>
| 2 | 2016-10-12T11:36:05Z | [
"python",
"recursion"
] |
What happen after recursion found base condition? | 39,994,167 | <p>I am trying to understand recursion , I am following stackoverflow answer <a href="http://stackoverflow.com/questions/717725/understanding-recursion">Understanding recursion</a> but i am not getting what i am looking. until i have learned this : <code>" if recursion is in program then below block of code will not execute until recursion found its base condition, once it found its base condition then that recursion will never execute again and now below block of code will execute without recursion "</code> is am getting right ?</p>
<p>Here is a program :</p>
<pre><code>def countdown(n):
print(n)
if n == 0:
return
countdown(n - 1) # the recursive call
countdown(4)
</code></pre>
<p>I tried to take help of python visualized code , so my confusion is first recursive function will call function again and again until it found base condition ,ok i understand this part ,like in this program it found base condtion if <code>n==0</code> so now n is 0 and return value is 0 , i understand this :</p>
<p><a href="https://i.stack.imgur.com/5UpOT.png" rel="nofollow"><img src="https://i.stack.imgur.com/5UpOT.png" alt="enter image description here"></a></p>
<p>but what happen after ? why it return result in reverse order after found base condition ? this is my confusion , why its going reverse ? its like first its unpacking values then its packing old values ? from where n=1 came ?? how ??</p>
<p><a href="https://i.stack.imgur.com/O9Exu.png" rel="nofollow"><img src="https://i.stack.imgur.com/O9Exu.png" alt="enter image description here"></a></p>
<p>now n=2 how ? how its going reverse ???? whats happening here ?</p>
<p><a href="https://i.stack.imgur.com/IO7fe.png" rel="nofollow"><img src="https://i.stack.imgur.com/IO7fe.png" alt="enter image description here"></a></p>
<p>now n=3 then n=4 and then at last it goes to first where it started , whats happening here , my big confusion is what happen after base condition found </p>
<p>like in this program :</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>when it found base condition that if list is empty return result so after it found base condition why its printing result it reverse ? </p>
<p><a href="https://i.stack.imgur.com/plb7T.png" rel="nofollow"><img src="https://i.stack.imgur.com/plb7T.png" alt="enter image description here"></a></p>
| 0 | 2016-10-12T08:41:09Z | 39,999,929 | <p>What you have is simply a <em>call stack</em>. Whenever a function calls another function, that new call is pushed onto the call stack.</p>
<pre><code>def a():
b()
def b():
c()
def c():
pass
a()
</code></pre>
<p><code>a</code> calls <code>b</code>, <code>b</code> calls <code>c</code>. You now have a call stack of <code>a</code> â <code>b</code> â <code>c</code>. Once <code>c</code> returns, the stack will be unwound in the reverse order: <code>c</code> will get discarded from the top, then <code>b</code>, finally <code>a</code>.</p>
<p>This is more apparent when there's something <em>after</em> the function call:</p>
<pre><code>def a():
b()
print('foo')
def b():
pass
a()
</code></pre>
<p><code>a</code> will get executed and call <code>b</code>. Now you have a call stack of <code>a</code> â <code>b</code>. <code>b</code> finishes, popping off the stack. You now have a call stack of just <code>a</code>. Then <code>'foo'</code> will be output. Then the stack unwinds completely.</p>
<p>This is no different with a recursive function, only that instead of three different functions, you're calling the same function. You'll have a call stack of <code>countdown</code> â <code>countdown</code> â <code>countdown</code> â â¦, or <code>countdown(4)</code> â <code>countdown(3)</code> â <code>countdown(2)</code> â â¦</p>
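<p>You can watch the winding and unwinding happen by logging each call of the OP's <code>countdown</code> with an indent proportional to the stack depth (a small sketch):</p>

```python
log = []

def countdown(n, depth=0):
    log.append('  ' * depth + 'enter %d' % n)
    if n > 0:
        countdown(n - 1, depth + 1)   # push a new frame onto the call stack
    log.append('  ' * depth + 'leave %d' % n)  # runs while the stack unwinds

countdown(3)
print('\n'.join(log))
```

<p>The "enter" lines appear for 3, 2, 1, 0 on the way down, and the "leave" lines come back in the reverse order — exactly the behaviour the question is asking about.</p>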
<p>BTW, the call stack is what you see as <em>stack trace</em> whenever you see Python's default error output:</p>
<pre><code>Traceback (most recent call last):
File â¦, line â¦, in <module>
File â¦, line â¦, in a
File â¦, line â¦, in b
File â¦, line â¦, in c
SomeError: some message
</code></pre>
<p>It tells you what functions were called in what order to get to the place the error occurred.</p>
| 1 | 2016-10-12T13:29:35Z | [
"python",
"recursion"
] |
What happen after recursion found base condition? | 39,994,167 | <p>I am trying to understand recursion , I am following stackoverflow answer <a href="http://stackoverflow.com/questions/717725/understanding-recursion">Understanding recursion</a> but i am not getting what i am looking. until i have learned this : <code>" if recursion is in program then below block of code will not execute until recursion found its base condition, once it found its base condition then that recursion will never execute again and now below block of code will execute without recursion "</code> is am getting right ?</p>
<p>Here is a program :</p>
<pre><code>def countdown(n):
print(n)
if n == 0:
return
countdown(n - 1) # the recursive call
countdown(4)
</code></pre>
<p>I tried to take help of python visualized code , so my confusion is first recursive function will call function again and again until it found base condition ,ok i understand this part ,like in this program it found base condtion if <code>n==0</code> so now n is 0 and return value is 0 , i understand this :</p>
<p><a href="https://i.stack.imgur.com/5UpOT.png" rel="nofollow"><img src="https://i.stack.imgur.com/5UpOT.png" alt="enter image description here"></a></p>
<p>but what happen after ? why it return result in reverse order after found base condition ? this is my confusion , why its going reverse ? its like first its unpacking values then its packing old values ? from where n=1 came ?? how ??</p>
<p><a href="https://i.stack.imgur.com/O9Exu.png" rel="nofollow"><img src="https://i.stack.imgur.com/O9Exu.png" alt="enter image description here"></a></p>
<p>now n=2 how ? how its going reverse ???? whats happening here ?</p>
<p><a href="https://i.stack.imgur.com/IO7fe.png" rel="nofollow"><img src="https://i.stack.imgur.com/IO7fe.png" alt="enter image description here"></a></p>
<p>now n=3 then n=4 and then at last it goes to first where it started , whats happening here , my big confusion is what happen after base condition found </p>
<p>like in this program :</p>
<pre><code>def listSum(arr, result):
    if not arr:
return result
print("print final", listSum(arr[1:], result + arr[0]))
print("print A:", arr[1:])
print("print B:", result + arr[0])
listSum([1, 3, 4, 5, 6], 0)
</code></pre>
<p>when it found base condition that if list is empty return result so after it found base condition why its printing result it reverse ? </p>
<p><a href="https://i.stack.imgur.com/plb7T.png" rel="nofollow"><img src="https://i.stack.imgur.com/plb7T.png" alt="enter image description here"></a></p>
| 0 | 2016-10-12T08:41:09Z | 40,050,211 | <p>@deceze has already made the points clear, but I would like to add a few more:</p>
<p>To understand this, you have to understand the fundamentals of the stack; I have <a href="http://stackoverflow.com/a/40049954/5904928">answered here</a> in detail.</p>
<blockquote>
  <p>After the recursion finds its base condition, what executes next depends on
  whether it is tail recursion or head recursion.</p>
</blockquote>
<p>If it is tail recursion, all of the code executes before the recursive call.</p>
<p>If it is head recursion, the rest of the code below the recursive call
executes after it returns.</p>
<blockquote>
  <p>Once the recursion reaches its base case, the stack starts returning the
  values it stored during the recursive calls.</p>
</blockquote>
| 1 | 2016-10-14T18:54:03Z | [
"python",
"recursion"
] |
what is the purpose of solid_i18n_patterns? | 39,994,217 | <p>i am new to <strong>python/django</strong> and i just want to know the purpose of below function/code (<code>solid_i18n_patterns</code>). </p>
<pre><code>from django.conf.urls import url
from solid_i18n.urls import solid_i18n_patterns
urlpatterns =
solid_i18n_patterns(<appname.views>,<urlpattern>,<anotherurlpattern>.....)
+
solid_i18n_patterns(<anotherappname.views>,<urlpattern>,<anotherurlpattern>.....)
</code></pre>
<p>what is the purpose of <code>solid_i18n_patterns</code> and its arguments?</p>
| -2 | 2016-10-12T08:43:32Z | 39,994,836 | <p>solid_i18n is a helper package for internationalization.
Suppose you have an English website and you want to serve it also in French. By default, If you make your site bilingual, you should specify language code at the beginning of your URLs:</p>
<pre><code>www.example.com/en/* - serves English
www.example.com/fr/* - serves French
</code></pre>
<p>With solid_i18n_patterns, you can serve your default language without language code in URL.</p>
<pre><code>www.example.com/* - serves English (note no /en/ in URL)
www.example.com/fr/* - serves French
</code></pre>
| 1 | 2016-10-12T09:13:54Z | [
"python",
"django",
"url-pattern"
] |
Sorting Bigram by number of occurrence NLTK | 39,994,312 | <p>I am currently running this code for search for bigram for entire of my text processing.</p>
<p>Variable alltext is really long text (over 1 million words)</p>
<p>I ran this code to extract bigram</p>
<pre><code>from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
import re
tokenizer = RegexpTokenizer(r'([A-za-z]{2,})')
tokens = tokenizer.tokenize(alltext)
stopwords_list = stopwords.words('english')
tokens = [word for word in tokens if word not in stopwords.words('english')]
finder = BigramCollocationFinder.from_words(tokens, window_size = 2)
bigram_measures = nltk.collocations.BigramAssocMeasures()
for k,v in finder.ngram_fd.items():
print k,v
</code></pre>
<p>The code above searches for the frequency occurrence for possible bigrams.</p>
<p>The code prints me lots of bigrams and its number of occurrence.</p>
<p>The output is similar to this.</p>
<pre><code>(('upper', 'front'), 1)
(('pad', 'Teething'), 1)
(('shoulder', 'strap'), 1)
(('outer', 'breathable'), 1)
(('memory', 'foam'), 1)
(('shields', 'inner'), 1)
(('The', 'garment'), 2)
......
type(finder.ngram_fd.items()) is a list.
</code></pre>
<p>How can I sort the frequencies from highest to lowest number of occurrences? My desired result would be:</p>
<pre><code>(('The', 'garment'), 2)
(('upper', 'front'), 1)
(('pad', 'Teething'), 1)
(('shoulder', 'strap'), 1)
(('outer', 'breathable'), 1)
(('memory', 'foam'), 1)
(('shields', 'inner'), 1)
</code></pre>
<p>Thank you very much, I am quite new to nltk and text processing so my explanation would not be as clear.</p>
| 2 | 2016-10-12T08:48:51Z | 39,994,755 | <p>It looks like <code>finder.ngram_fd</code> is a dictionary. In that case, in Python 3 the <code>items()</code> method does not return a list, so you'll have to cast it to one.</p>
<p>Once you have a list, you can simply use the <code>key=</code> parameter of the <a href="https://docs.python.org/3/library/stdtypes.html#list.sort" rel="nofollow"><code>sort()</code></a> method, which specifies what we're sorting against:</p>
<pre><code>ngram = list(finder.ngram_fd.items())
ngram.sort(key=lambda item: item[-1], reverse=True)
</code></pre>
<p>You have to add <code>reverse=True</code> because otherwise the results would be in ascending order. Note that this will sort the list <strong>in place</strong>. This is best when you want to avoid copying. If instead you wish to obtain a new list, just use the <code>sorted()</code> built-in function with the same arguments.</p>
<p>Alternatively, you can replace the lambda with <a href="https://docs.python.org/3/library/operator.html#operator.itemgetter" rel="nofollow"><code>operator.itemgetter</code></a> module, which does the same thing:</p>
<pre><code>ngram.sort(key=operator.itemgetter(-1), reverse=True)
</code></pre>
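<p>For example, with a small stand-in for <code>finder.ngram_fd</code> (note that in recent NLTK versions <code>ngram_fd</code> is a <code>FreqDist</code>, a <code>Counter</code> subclass, so it may already provide <code>most_common()</code> for the same job):</p>

```python
import operator

# stand-in for finder.ngram_fd
ngram_fd = {
    ('upper', 'front'): 1,
    ('The', 'garment'): 2,
    ('memory', 'foam'): 1,
}

# sort the (bigram, count) pairs by count, highest first
ngram = sorted(ngram_fd.items(), key=operator.itemgetter(-1), reverse=True)
for pair, count in ngram:
    print(pair, count)
```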
| 2 | 2016-10-12T09:10:08Z | [
"python",
"nltk"
] |
Create n strings in Python | 39,994,332 | <p>I get a number (n) from user.
Just</p>
<pre><code>n = int(input())
</code></pre>
<p>After that, I have to create <strong>n</strong> strings and get their values from user.</p>
<pre><code>i = 0;
while (i < n):
word = input() # so here is my problem:
# i don't know how to create n different strings
i += 1
</code></pre>
<p>How to create <strong>n</strong> strings?</p>
| 1 | 2016-10-12T08:49:57Z | 39,994,385 | <p>You need to use a list, like this:</p>
<pre><code>n = int(input())
i = 0
words = []
while ( i < n ):
word = input()
words.append(word)
i += 1
</code></pre>
<p>Also, this loop is better created as a for loop:</p>
<pre><code>n = int(input())
words = []
for i in range(n):
words.append(input())
</code></pre>
| 3 | 2016-10-12T08:52:46Z | [
"python"
] |
Create n strings in Python | 39,994,332 | <p>I get a number (n) from user.
Just</p>
<pre><code>n = int(input())
</code></pre>
<p>After that, I have to create <strong>n</strong> strings and get their values from user.</p>
<pre><code>i = 0;
while (i < n):
word = input() # so here is my problem:
# i don't know how to create n different strings
i += 1
</code></pre>
<p>How to create <strong>n</strong> strings?</p>
| 1 | 2016-10-12T08:49:57Z | 39,994,400 | <p>Try this (python 3):</p>
<pre><code>n = int(input())
s = []
for i in range(n):
s.append(str(input()))
</code></pre>
<p>The list s will contain all the n strings.</p>
| 2 | 2016-10-12T08:53:37Z | [
"python"
] |
Create n strings in Python | 39,994,332 | <p>I get a number (n) from user.
Just</p>
<pre><code>n = int(input())
</code></pre>
<p>After that, I have to create <strong>n</strong> strings and get their values from user.</p>
<pre><code>i = 0;
while (i < n):
word = input() # so here is my problem:
# i don't know how to create n different strings
i += 1
</code></pre>
<p>How to create <strong>n</strong> strings?</p>
| 1 | 2016-10-12T08:49:57Z | 39,994,585 | <p>If you are aware of <a href="http://www.secnetix.de/olli/Python/list_comprehensions.hawk" rel="nofollow">list comprehensions</a>, you can do this in a single line</p>
<pre><code>s = [str(input()) for i in range(int(input()))]
</code></pre>
<p>int(input()) - This gets the input on the number of strings. Then the for loop is run for the input number of iterations and str(input()) is called and the input is automatically appended to the list 's'. </p>
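<p>With the reads stubbed out, the pattern is easy to check non-interactively (a sketch; <code>fake_input</code> just stands in for <code>input()</code>):</p>

```python
answers = iter(['3', 'foo', 'bar', 'baz'])

def fake_input():
    # stand-in for input(), so the logic can run without a user
    return next(answers)

n = int(fake_input())
words = [fake_input() for _ in range(n)]
print(words)
```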
| 2 | 2016-10-12T09:02:35Z | [
"python"
] |
How to convert huge binary data into ASCII format? | 39,994,357 | <p>I want to read a file which contains huge binary data. I want to convert this binary data into ASCII format. At the time of start, I want to read 2 bytes which indicates size of message, message is ahead of size. After reading this whole message, again repeat same action, 2 bytes for size of message and then actual message.</p>
<p><strong>code to print input data-</strong></p>
<pre><code>import sys

with open("abc.dat", "rb") as f:
byte = f.read(1)
i = 0
while byte:
i += 1
print byte+' ',
byte = f.read(1)
        if i == 80:
sys.exit()
</code></pre>
<p><strong>Input Data(80 bytes)-</strong> </p>
<pre><code> O T C _ A _ R C V R P V � W � w / � � � ' � � & �
</code></pre>
<p>edit1 -
<strong>output using hexdump -n200 otc_a_primary_1003_0600.dat command:</strong></p>
<pre><code>0000000 4f03 4354 415f 525f 5643 0052 0000 0000
0000010 0000 0000 0000 0000 0000 0000 0000 0000
0000020 0000 0000 0000 0000 5650 57f2 0000 0000
0000030 77d1 0002 0000 0000 902f 0004 0000 0000
0000040 a2bd 1027 0000 0000 d695 e826 2e0b 3e11
0000050 aa55 0300 f332 0000 0046 0000 0000 0000
0000060 5650 57f2 0000 0000 22f8 0a6c 0000 0000
0000070 3030 3030 3730 3435 5135 0000 0000 0100
0000080 bdb4 0100 3000 5131 5a45 1420 077a 9c11
0000090 3591 1416 077a 9c11 dc8d 00c0 0000 0000
00000a0 0000 4300 5241 2020 7f0c 0700 ed0d 0700
00000b0 2052 2020 2030 aa55 0300 f332 0000 0046
00000c0 0000 0000 0000 5650
00000c8
</code></pre>
<p>I'm using python's <a href="https://docs.python.org/2/library/struct.html" rel="nofollow">struct</a> module. python version - python 2.7.6</p>
<p><strong>program code-</strong></p>
<pre><code>import struct
msg_len = struct.unpack('h', f.read(2))[0]
msg_data = struct.unpack_from('s', f.read(msg_len))[0]
print msg_data
</code></pre>
<p>But I'm not able to see the actual message; only a single character is printed on the console. How can I read such a binary file's messages in an appropriate manner?</p>
| 2 | 2016-10-12T08:51:26Z | 39,994,615 | <p>from the <a href="https://docs.python.org/2/library/struct.html" rel="nofollow">docs</a>:</p>
<blockquote>
<p>For the 's' format character, the count is interpreted as the size of the string, not a repeat count like for the other format characters; for example, '10s' means a single 10-byte string, while '10c' means 10 characters. If a count is not given, it defaults to 1. For packing, the string is truncated or padded with null bytes as appropriate to make it fit. For unpacking, the resulting string always has exactly the specified number of bytes. As a special case, '0s' means a single, empty string (while '0c' means 0 characters).</p>
</blockquote>
<p><code>'s'</code> should be modified to <code>str(msg_len)+'s'</code>. It seems like a good idea to check that <code>msg_len</code> is sensible in advance.</p>
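<p>A minimal self-contained sketch of the fixed unpacking (the payload bytes here are made up, and little-endian byte order is assumed):</p>

```python
import struct

# Build a frame like the asker's file: a 2-byte length, then the payload.
payload = b"OTC_A_RCVR"
frame = struct.pack('<H', len(payload)) + payload

# Read the 2-byte length first, then unpack exactly that many bytes.
# The count must prefix 's'; a bare 's' returns only one byte.
msg_len = struct.unpack('<H', frame[:2])[0]
msg_data = struct.unpack(str(msg_len) + 's', frame[2:2 + msg_len])[0]
print(msg_data)  # b'OTC_A_RCVR'
```

<p>Since the payload is already a plain byte string, <code>f.read(msg_len)</code> alone would work just as well here; <code>unpack</code> only adds value when the message has internal structure.</p>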
| 1 | 2016-10-12T09:03:46Z | [
"python",
"struct",
"python-2.x",
"binascii"
] |
How to convert huge binary data into ASCII format? | 39,994,357 | <p>I want to read a file which contains huge binary data and convert this binary data into ASCII format. At the start, I want to read 2 bytes which indicate the size of the message; the message itself follows the size. After reading the whole message, I repeat the same action: 2 bytes for the size of the next message and then the actual message.</p>
<p><strong>code to print input data-</strong></p>
<pre><code>with open("abc.dat", "rb") as f:
byte = f.read(1)
i = 0
while byte:
i += 1
print byte+' ',
byte = f.read(1)
if i is 80:
sys.exit()
</code></pre>
<p><strong>Input Data(80 bytes)-</strong> </p>
<pre><code> O T C _ A _ R C V R P V � W � w / � � � ' � � & �
</code></pre>
<p>edit1-
<strong>output using hexdump -n200 otc_a_primary_1003_0600.dat command-</strong></p>
<pre><code>0000000 4f03 4354 415f 525f 5643 0052 0000 0000
0000010 0000 0000 0000 0000 0000 0000 0000 0000
0000020 0000 0000 0000 0000 5650 57f2 0000 0000
0000030 77d1 0002 0000 0000 902f 0004 0000 0000
0000040 a2bd 1027 0000 0000 d695 e826 2e0b 3e11
0000050 aa55 0300 f332 0000 0046 0000 0000 0000
0000060 5650 57f2 0000 0000 22f8 0a6c 0000 0000
0000070 3030 3030 3730 3435 5135 0000 0000 0100
0000080 bdb4 0100 3000 5131 5a45 1420 077a 9c11
0000090 3591 1416 077a 9c11 dc8d 00c0 0000 0000
00000a0 0000 4300 5241 2020 7f0c 0700 ed0d 0700
00000b0 2052 2020 2030 aa55 0300 f332 0000 0046
00000c0 0000 0000 0000 5650
00000c8
</code></pre>
<p>I'm using python's <a href="https://docs.python.org/2/library/struct.html" rel="nofollow">struct</a> module. python version - python 2.7.6</p>
<p><strong>program code-</strong></p>
<pre><code>import struct
msg_len = struct.unpack('h', f.read(2))[0]
msg_data = struct.unpack_from('s', f.read(msg_len))[0]
print msg_data
</code></pre>
<p>But I'm not able to see the actual message; only a single character is printed on the console. How can I read such a binary file's messages in an appropriate manner?</p>
| 2 | 2016-10-12T08:51:26Z | 39,995,465 | <p>It depends on how your two byte length is stored in the data, for example, if the first two bytes of your file (as hex) were <code>00 01</code> does this mean a message following is <code>1</code> byte long or <code>256</code> bytes long? This is referred to as either big or little endian format. Try both of the following, one should give more meaningful results, it is designed to read the data in message length chunks:</p>
<p><strong>Big endian format</strong></p>
<pre><code>import struct
with open('test.bin', 'rb') as f_input:
length = f_input.read(2)
while len(length) == 2:
print f_input.read(struct.unpack(">H", length)[0])
length = f_input.read(2)
</code></pre>
<p><strong>Little endian format</strong></p>
<pre><code>import struct
with open('test.bin', 'rb') as f_input:
length = f_input.read(2)
while len(length) == 2:
print f_input.read(struct.unpack("<H", length)[0])
length = f_input.read(2)
</code></pre>
<p>The actual data will need further processing. The <code>H</code> tells struct to process the 2 bytes as an <code>unsigned short</code> (i.e. the value can never be considered to be negative).</p>
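<p>A quick way to see the difference, using the example bytes <code>00 01</code> from above:</p>

```python
import struct

raw = b'\x00\x01'
big = struct.unpack('>H', raw)[0]     # big-endian reading
little = struct.unpack('<H', raw)[0]  # little-endian reading
print(big, little)  # 1 256
```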
<p>Something else to consider is that sometimes the length includes itself, so a length of 2 could mean an empty message. </p>
| 2 | 2016-10-12T09:44:06Z | [
"python",
"struct",
"python-2.x",
"binascii"
] |
How to convert huge binary data into ASCII format? | 39,994,357 | <p>I want to read a file which contains huge binary data and convert this binary data into ASCII format. At the start, I want to read 2 bytes which indicate the size of the message; the message itself follows the size. After reading the whole message, I repeat the same action: 2 bytes for the size of the next message and then the actual message.</p>
<p><strong>code to print input data-</strong></p>
<pre><code>with open("abc.dat", "rb") as f:
byte = f.read(1)
i = 0
while byte:
i += 1
print byte+' ',
byte = f.read(1)
if i is 80:
sys.exit()
</code></pre>
<p><strong>Input Data(80 bytes)-</strong> </p>
<pre><code> O T C _ A _ R C V R P V � W � w / � � � ' � � & �
</code></pre>
<p>edit1-
<strong>output using hexdump -n200 otc_a_primary_1003_0600.dat command-</strong></p>
<pre><code>0000000 4f03 4354 415f 525f 5643 0052 0000 0000
0000010 0000 0000 0000 0000 0000 0000 0000 0000
0000020 0000 0000 0000 0000 5650 57f2 0000 0000
0000030 77d1 0002 0000 0000 902f 0004 0000 0000
0000040 a2bd 1027 0000 0000 d695 e826 2e0b 3e11
0000050 aa55 0300 f332 0000 0046 0000 0000 0000
0000060 5650 57f2 0000 0000 22f8 0a6c 0000 0000
0000070 3030 3030 3730 3435 5135 0000 0000 0100
0000080 bdb4 0100 3000 5131 5a45 1420 077a 9c11
0000090 3591 1416 077a 9c11 dc8d 00c0 0000 0000
00000a0 0000 4300 5241 2020 7f0c 0700 ed0d 0700
00000b0 2052 2020 2030 aa55 0300 f332 0000 0046
00000c0 0000 0000 0000 5650
00000c8
</code></pre>
<p>I'm using python's <a href="https://docs.python.org/2/library/struct.html" rel="nofollow">struct</a> module. python version - python 2.7.6</p>
<p><strong>program code-</strong></p>
<pre><code>import struct
msg_len = struct.unpack('h', f.read(2))[0]
msg_data = struct.unpack_from('s', f.read(msg_len))[0]
print msg_data
</code></pre>
<p>But I'm not able to see the actual message; only a single character is printed on the console. How can I read such a binary file's messages in an appropriate manner?</p>
| 2 | 2016-10-12T08:51:26Z | 39,996,499 | <p>Try:</p>
<pre><code>import struct
with open('abc.dat', 'rb') as f:
while True:
try:
msg_len = struct.unpack('h', f.read(2))[0] # assume native byte order
msg_data = f.read(msg_len) # just read 'msg_len' bytes
print repr(msg_data)
        except struct.error:
            # not enough bytes left: reached EOF (or truncated data)
            break
</code></pre>
| 0 | 2016-10-12T10:37:28Z | [
"python",
"struct",
"python-2.x",
"binascii"
] |
python decorators and static methods | 39,994,468 | <p>I want to add a decorator to my Python static method, like the following:</p>
<pre><code>class AdminPanelModel(db.Model):
id = db.Column('id', db.Integer, primary_key=True)
visible = db.Column(db.Boolean,)
def decor_init(func):
def func_wrapper(*kargs, **kwargs):
for l in model.all(): #internal logic
pass
return func(*kargs, **kwargs)
return func_wrapper
@staticmethod
@decor_init
def all_newscollection_at_adminpanel():
pass
</code></pre>
<p>I tried adding <code>@staticmethod</code> to my decorator and tried to make the function call like the following: <code>func(AdminPanelModel, *kargs, **kwargs)</code>, but no luck; I'm still stuck with the following error message:</p>
<pre><code>TypeError: unbound method func_wrapper() must be called with AdminPanelModel instance as first argument (got nothing instead)
</code></pre>
<p>Is there any way I can achieve that? I am aware that there may be other ways to do the same work, but it's more of an educational question rather than just getting things done somehow.</p>
| 0 | 2016-10-12T08:56:57Z | 39,994,743 | <p>You need to use a <code>classmethod</code> instead:</p>
<pre><code>@classmethod
@decor_init
def all_newscollection_at_adminpanel(cls):
pass
</code></pre>
<p>The call is the same, but <code>classmethod</code>s implicitly receive the class as the first argument, which will then also be passed into the decorated function</p>
<p>to see the difference:</p>
<pre><code>class AdminPanelModel(object):
def decor_init(func):
def func_wrapper(*kargs, **kwargs):
print kargs # The first element should be of type class AdminPanelModel
return func(*kargs, **kwargs)
return func_wrapper
@staticmethod
@decor_init
def staticm():
pass
@classmethod
@decor_init
def classm(cls):
pass
</code></pre>
<p>they yield</p>
<pre><code>AdminPanelModel.staticm()
>>> () # empty, which is reason for error
AdminPanelModel.classm()
>>> (<class '__main__.AdminPanelModel'>,) # class instance as first parameter
</code></pre>
| 2 | 2016-10-12T09:09:41Z | [
"python",
"sqlalchemy"
] |
iterate through classes in selenium python | 39,994,489 | <p>** I'm running this on a popup page, therefore I can't simply use the "entry" class, as it clashes with the original page.</p>
<p>I want to iterate through classes to pick out the text, from the "entry" class</p>
<pre><code>from selenium import webdriver
driver=webdriver.Firefox()
</code></pre>
<p><a href="https://i.stack.imgur.com/Llbhu.png" rel="nofollow"><img src="https://i.stack.imgur.com/Llbhu.png" alt="enter image description here"></a></p>
<p>When I pick the XPath of this element from Chrome, it comes out like this.</p>
<p>But this isn't working with <code>driver.find_element_by_xpath("/html/body/span/div[1]/div/div[3]")</code></p>
<p>The one below is working, but it gives me the date, heading, etc. I just want the text:</p>
<pre><code>driver.find_element_by_class_name("ui_overlay").text
</code></pre>
| 1 | 2016-10-12T08:57:39Z | 39,994,611 | <p>Consider using this:</p>
<pre><code>for div in driver.find_elements_by_class_name("entry"):
do_something(div.text)
</code></pre>
<p>After the edited question: this must then be solved using XPath.</p>
<pre><code>for div in driver.find_elements_by_xpath("//span/div/div/div[@class='entry']"):
do_something(div)
</code></pre>
<p>If you provide the webpage you're using, I can use an even more precise XPath.</p>
| 0 | 2016-10-12T09:03:29Z | [
"python",
"selenium",
"web-scraping"
] |
iterate through classes in selenium python | 39,994,489 | <p>** I'm running this on a popup page, therefore I can't simply use the "entry" class, as it clashes with the original page.</p>
<p>I want to iterate through classes to pick out the text, from the "entry" class</p>
<pre><code>from selenium import webdriver
driver=webdriver.Firefox()
</code></pre>
<p><a href="https://i.stack.imgur.com/Llbhu.png" rel="nofollow"><img src="https://i.stack.imgur.com/Llbhu.png" alt="enter image description here"></a></p>
<p>When I pick the xpath of this elemnt from chrome , its coming like this</p>
<p>But this isnt working with <code>driver.find_element_by_xpath(/html/body/span/div[1]/div/div[3])</code></p>
<p>But below one is working, but its giving me date, heading etc. But I just want the text</p>
<pre><code>driver.find_element_by_class_name("ui_overlay").text
</code></pre>
| 1 | 2016-10-12T08:57:39Z | 39,994,983 | <p>Try this xpath it will locate correctly : </p>
<pre><code>".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']"
</code></pre>
<p>Example use:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome("C:\Jars\chromedriver.exe")
driver.maximize_window()
url="https://www.tripadvisor.com/Airline_Review-d8729164-Reviews-Cheap-Flights-TAP-Portugal#REVIEWS"
driver.get(url)
wait = WebDriverWait(driver, 10)
langselction = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "span.sprite-date_picker-triangle")))
langselction.click()
driver.find_element_by_xpath("//div[@class='languageList']//li[normalize-space(.)='Portuguese first']").click()
gt= driver.find_elements(By.CSS_SELECTOR,".googleTranslation>.link")
for i in gt:
i.click()
time.sleep(2)
x = driver.find_element(By.XPATH, ".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']")
print x.text
driver.find_element_by_class_name("ui_close_x").click()
time.sleep(2)
</code></pre>
<p>It will print the corresponding text</p>
| 1 | 2016-10-12T09:20:52Z | [
"python",
"selenium",
"web-scraping"
] |
Python TCP Server sending message to all clients | 39,994,533 | <p>I have been trying to make a TCP python server over my LAN, but I've constantly run into issues with this project. My question is: Is it possible to send a message (over TCP) to multiple clients from 1 server? (I.e. client-1 sends a message "Hello world" and it shows the message on all other clients [clients-2, clients-3]). Heres my code for the server so far:</p>
<pre><code>import socket, time, sys
import threading
TCP_IP = input("Host IP: ")
TCP_PORT = int(input("Host Port: "))
BUFFER_SIZE = 1024
def createNewThread(function):
threading.Thread(target=function).start()
def Listening():
try:
while True:
s.listen(1)
conn,addr = s.accept()
threading.Thread(target=Listening).start()
print("User joined with IP %s" % (addr[0]))
while 1:
data = conn.recv(BUFFER_SIZE)
if not data: break
conn.send(addr[0].encode("utf-8") + b': ' + data)
conn.close()
except ConnectionResetError as e:
print("Connection was closed: ", e)
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP,TCP_PORT))
print("-----Server started-----")
Listening()
except socket.error as e:
print("Socket error occured. More info: ", e)
</code></pre>
<p>And heres my code for client:</p>
<pre><code>import socket, sys, time
TCP_IP = input("Connect to Local IP: ")
TCP_PORT = int(input("Connect to Local Port: "))
BUFFER_SIZE = 1024
running = True
while running == True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("Connecting...")
s.connect((TCP_IP,TCP_PORT))
print("Connected!")
while True:
MESSAGE = input("Message: ")
if MESSAGE == "exit":
s.close()
raise SystemExit
s.send(MESSAGE.encode('ascii'))
data = s.recv(BUFFER_SIZE)
print(data.decode("utf-8"))
running = False
time.sleep(20)
except:
print(sys.exc_info()[0])
time.sleep(1)
</code></pre>
<p>Thanks in advance to any answers!</p>
<p><strong>Edit:</strong>
I want the ouput to look something like this:</p>
<pre><code>User3's IP: Message they sent
User1's IP: Message they sent
Message: What do you want to send?
</code></pre>
| 0 | 2016-10-12T09:00:12Z | 39,994,796 | <p>Your code is... very weird. First thing is that you create a new thread on <code>accept</code> but you send listener to that thread instead of the client. Thus your threads never die and you have a memory and cpu leak. It's even worse: in your code the number of threads is equal to the number of <strong>all</strong> clients that have <strong>ever</strong> been connected to the server. This is bad.</p>
<p>Try this on the server side:</p>
<pre><code>def client(conn):
while True:
data = conn.recv(BUFFER_SIZE)
if not data:
break
conn.send(data) # simple ping
def listener():
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(5)
while True:
conn, addr = s.accept()
threading.Thread(target=client, args=(conn,)).start()
if __name__ == '__main__':
listener()
</code></pre>
<p>The code is shorter, simpler and there are no memory/cpu leaks.</p>
<p>Now as for sending data to all clients. You have to keep track of them. You can achieve this by keeping a global dict of clients:</p>
<pre><code>CLIENTS = {}
</code></pre>
<p>Now in listener you do:</p>
<pre><code>def listener():
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP,TCP_PORT))
s.listen(5)
while True:
conn, addr = s.accept()
# register client
CLIENTS[conn.fileno()] = conn
threading.Thread(target=client, args=(conn,)).start()
</code></pre>
<p>and in client:</p>
<pre><code>def client(conn):
while True:
data = conn.recv(BUFFER_SIZE)
if not data:
break
# broadcast
for client in CLIENTS.values():
client.send(data)
# the connection is closed: unregister
del CLIENTS[conn.fileno()]
</code></pre>
<p>There's one little problem with that code (well, actually there are several, e.g. error handling). What happens if some client unregisters while we loop through the <code>CLIENTS</code> dictionary? Python will throw an exception. A simple solution is to lock the dictionary on inserts, deletions and iterations.</p>
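<p>One way to sketch that locking (the helper names here are made up):</p>

```python
import threading

CLIENTS = {}
CLIENTS_LOCK = threading.Lock()

def register(conn):
    with CLIENTS_LOCK:
        CLIENTS[conn.fileno()] = conn

def unregister(conn):
    with CLIENTS_LOCK:
        CLIENTS.pop(conn.fileno(), None)

def broadcast(data):
    with CLIENTS_LOCK:
        targets = list(CLIENTS.values())  # snapshot, so nobody mutates while we send
    for client in targets:
        client.send(data)
```

<p>Sending outside the lock keeps a slow client from blocking registration; the snapshot means a client that disconnects mid-broadcast may still receive (or fail) one last send, which the send error handling has to tolerate.</p>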
<p>Also there's a race condition if some other socket reuses previous <code>fileno()</code>. In that case you may want to generate ids for sockets manually (preferably by wrapping <code>socket</code> object with a custom class).</p>
<p>Note that it is possible to use <code>set</code> instead of <code>dict</code>. However you will eventually need a dict since at some point you would want to send msg to a specific client (identified by some id).</p>
| 1 | 2016-10-12T09:12:34Z | [
"python",
"sockets",
"tcp"
] |
Matrix Form Representation of Results | 39,994,660 | <p>I have 7 csv files, each of which has a list of words. I have taken all the words from the 7 CSVs and put them in a new file called Total_Words_list.</p>
<p>The issue is that I need an output in the following matrix form:</p>
<pre><code> APPLE BALL CAT DOG....
A 0 1 1 0
B 1 1 0 1
C 1 1 1 0
</code></pre>
<p>Here the words from the main list form the rows and the 7 file names form the columns. If a word is present in file A the cell is 1, else 0, and so on. This goes on for all 7 csv files in a single run, and I get the above result.</p>
<p>I am not sure how to approach the issue.</p>
| 1 | 2016-10-12T09:05:55Z | 39,995,060 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> for concating all <code>DataFrames</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get_dummies.html" rel="nofollow"><code>str.get_dummies</code></a>. Last need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by index (<code>level=0</code>) with aggregating <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html" rel="nofollow"><code>sum</code></a>:</p>
<pre><code>import pandas as pd
import numpy as np
import io
temp=u"""CAT;BALL
"""
#after testing replace io.StringIO(temp) to filename
df1 = pd.read_csv(io.StringIO(temp), sep=";", index_col=None, header=None)
print (df1)
temp=u"""DOG;BALL;APPLE
"""
#after testing replace io.StringIO(temp) to filename
df2 = pd.read_csv(io.StringIO(temp), sep=";", index_col=None, header=None)
print (df2)
temp=u"""DOG;BALL;APPLE;CAT
"""
#after testing replace io.StringIO(temp) to filename
df3 = pd.read_csv(io.StringIO(temp), sep=";", index_col=None, header=None)
print (df3)
df = pd.concat([df1,df2,df3], keys=['A','B','C'])
df.reset_index(1, drop=True, inplace=True)
print (df)
0 1 2 3
A CAT BALL NaN NaN
B DOG BALL APPLE NaN
C DOG BALL APPLE CAT
</code></pre>
<pre><code>print (df.stack().reset_index(1, drop=True).str.get_dummies())
APPLE BALL CAT DOG
A 0 0 1 0
A 0 1 0 0
B 0 0 0 1
B 0 1 0 0
B 1 0 0 0
C 0 0 0 1
C 0 1 0 0
C 1 0 0 0
C 0 0 1 0
print (df.stack().reset_index(1, drop=True).str.get_dummies().groupby(level=0).sum())
APPLE BALL CAT DOG
A 0 1 1 0
B 1 1 0 1
C 1 1 1 1
</code></pre>
<hr>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow"><code>pandas.get_dummies</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by columns (<code>level=0</code>, axis=1) with aggregating <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html" rel="nofollow"><code>sum</code></a>:</p>
<pre><code>print (pd.get_dummies(df, dummy_na=False, prefix='', prefix_sep='')
.groupby(level=0, axis=1).sum())
APPLE BALL CAT DOG
A 0 1 1 0
B 1 1 0 1
C 1 1 1 1
</code></pre>
<p>EDIT by comment:</p>
<p>Another approach is to get <code>dummies</code> from each dataframe separately and then <code>concat</code> the outputs:</p>
<pre><code>df11 = pd.get_dummies(df1, dummy_na=False, prefix='', prefix_sep='')
.groupby(level=0, axis=1).sum()
#print (df11)
df21 = pd.get_dummies(df2, dummy_na=False, prefix='', prefix_sep='')
.groupby(level=0, axis=1).sum()
#print (df21)
df31 = pd.get_dummies(df3, dummy_na=False, prefix='', prefix_sep='')
.groupby(level=0, axis=1).sum()
#print (df31)
df = pd.concat([df11,df21,df31], keys=['A','B','C']).fillna(0).astype(int)
df.reset_index(1, drop=True, inplace=True)
print (df)
APPLE BALL CAT DOG
A 0 1 1 0
B 1 1 0 1
C 1 1 1 1
</code></pre>
| 4 | 2016-10-12T09:25:10Z | [
"python",
"csv",
"pandas"
] |
Edit table programmatically to fit content | 39,994,686 | <p>It's possible to create an HTML page from a CSV file with the following:</p>
<pre><code>import pandas as pd
df = pd.read_csv('../data.csv',delimiter=';', engine='python')
df.to_html('csv.html')
</code></pre>
<p>The column widths of this table seem to respect the header (column title) size, but for some columns the content is larger than the column title and gets wrapped to the next line. This happens with the following CSV, for the multi-word cells (<code>aaaaaaaaa aaaaaaaaa</code>):</p>
<pre><code>Name1;Name2;Name3;Name4;Name5
1;aaaaaaaaa aaaaaaaaa;b;aaaaaaaaa aaaaaaaaa;aaaaaaaaa aaaaaaaaa
2;aaaaaaaaa aaaaaaaaa;b;aaaaaaaaa aaaaaaaaa;aaaaaaaaa aaaaaaaaa
3;aaaaaaaaa aaaaaaaaa;b;aaaaaaaaa aaaaaaaaa;aaaaaaaaa aaaaaaaaa
</code></pre>
<p>I would like to make the column widths large enough to fit the content (avoid word wrap). How can I get there programmatically (using Python)?</p>
| 2 | 2016-10-12T09:07:16Z | 39,995,375 | <p>Answer is based on <a href="http://stackoverflow.com/questions/39993959/edit-html-inserting-css-reference">this</a>.</p>
<pre><code>import pandas as pd
filename = 'csv.html'
df = pd.read_csv('../data.csv',delimiter=';', engine='python')
html_begin = '\
<meta charset="UTF-8">\n\
<html>\n\
<head><link rel="stylesheet" type="text/css" href="csv.css"></head>\n\
<body>\n\n'
html_end = '\n\
</body>\n\
</html>\n'
with open(filename, 'w') as f:
f.write(html_begin)
df.to_html(f)
f.write(html_end)
</code></pre>
<p>And <code>csv.css</code> is like:</p>
<pre><code>table {
border: .5px solid lightgray;
border-collapse: collapse;
width: 100%;
}
th {
border: .5px solid lightgray;
text-align: center;
}
td {
border: .5px solid lightgray;
text-align: center;
white-space:nowrap
}
</code></pre>
<p>Alternatively (a better alternative, I'd say), one can avoid the need for CSS and do everything via <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow">Pandas Style</a>, like:</p>
<pre><code>import pandas as pd
filename = 'csv_style.html'
df = pd.read_csv('../data.csv',delimiter=';', engine='python')
style = df.style
style.set_properties(**{'text-align': 'center',
'white-space': 'nowrap'})
with open(filename, 'w') as f:
f.write(style.render())
</code></pre>
| 1 | 2016-10-12T09:40:27Z | [
"python",
"html",
"css",
"python-2.7",
"pandas"
] |
Python child class constructor argument | 39,994,689 | <p>I have a parent class whose constructor has three arguments; now I want to have a child class whose constructor only has two arguments, but it's telling me it has to be given three arguments when I try to create the child object.</p>
<pre><code>class Parent(Exception):
    def __init__(self, a, b):
        ...
        super(Parent, self).__init__(a, b)
class Child(Parent):
    def __init__(self, b):
        super(Child, self).__init__(123, b)
# somewhere in the code I have:
raise Child("BAD_INPUT")
</code></pre>
<p>What I'm trying to do is instantiate a Child object with only one argument, then in the constructor of that Child object call Parent's constructor and pass in two arguments, one of which is hard-coded (123).</p>
<p>Error I got:
<code>TypeError: __init__() takes exactly 3 arguments (2 given)</code></p>
| 0 | 2016-10-12T09:07:30Z | 39,994,858 | <p>you should be able to use: </p>
<pre><code>class Parent(Exception):
def __init__(self, a, b):
self.a = a
self.b = b
class Child(Parent):
a = 123
def __init__(self, b):
        super(Child, self).__init__(self.a, b)  # Python 2 compatible form of super()
</code></pre>
| 0 | 2016-10-12T09:14:51Z | [
"python",
"inheritance",
"constructor"
] |
python equivalent to PHP Post | 39,994,694 | <p>I am working with a legacy system; the previous programmer is gone.<br>
Here is the test he left and I have no idea how to imitate his test using <code>python</code>. My background is <code>iOS, Android, Java, Python, Django, C/C++, PLSQL, SQL</code> but no PHP at all</p>
<p>Here is <code>test.php</code></p>
<pre><code>var tHost = "10.1.10.123";
var tExiname = "CloudHM TEST123";
var tIncname = "INC012345";
var tHWname = "aa 1111 0";
var tHwattr = "all";
var tStatus = 0;
var tCteatedat = "2016-05-23 12:20:42";
var d = new Date,
tUpdateat = [d.getFullYear(),
d.getMonth()+1,
d.getDate()].join('-')+' '+
[d.getHours(),
d.getMinutes(),
d.getSeconds()].join(':');
var arr = [{ host: tHost, host_name: tExiname, component_type: tHWname, component_status: tStatus, incident_created: tUpdateat }];
var arr2 =JSON.stringify(arr)
$.ajax({
url: 'http://customer.beenets.net/api/cloudhm/api.php' ,
type: 'POST',
data: { params: arr2 },
success: function(msg) {
//ShowOutput(msg);
alert(JSON.stringify(arr, null, 4));
}
})
</code></pre>
<p>I tried this. The response is <code>200</code>, but the PHP server reads no payload:</p>
<pre><code>notification_data = [{
"host": i.host,
"host_name": i.host_name,
"incident_created": i.incident_created,
"component_type": i.component_type,
"component_status": i.component_status
}]
response = requests.post(NOC_BEENETS_URL, data=json.dumps(notification_data))
</code></pre>
<p>Then I try put <code>params</code> key in front of it</p>
<pre><code>notification_data = [{
"params":{
"host": i.host,
"host_name": i.host_name,
"incident_created": i.incident_created,
"component_type": i.component_type,
"component_status": i.component_status
}
}]
response = requests.post(NOC_BEENETS_URL, data=json.dumps(notification_data))
</code></pre>
<p>The server returns 200 and reads no payload again.<br></p>
<p>Any help would be appreciated<br>
Best regards<br>
Sarit</p>
| 1 | 2016-10-12T09:07:43Z | 39,994,842 | <p>Edited: the jQuery call sends the JSON string as a form field named <code>params</code> (<code>data: { params: arr2 }</code>), so the Python side must send form data with a <code>params</code> key rather than a raw JSON body:</p>
<pre><code>notification_data = [{
"host": i.host,
"host_name": i.host_name,
"incident_created": i.incident_created,
"component_type": i.component_type,
"component_status": i.component_status
}]
r = requests.post(NOC_BEENETS_URL, data = {'params': json.dumps(notification_data)})
</code></pre>
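<p>For reference, a standard-library-only sketch of what the wire body looks like when the data goes out as a form field (the sample values are made up); this matches what the jQuery <code>data: { params: arr2 }</code> call produces:</p>

```python
import json
try:
    from urllib import urlencode           # Python 2
except ImportError:
    from urllib.parse import urlencode     # Python 3

notification_data = [{"host": "10.1.10.123", "component_status": 0}]
body = urlencode({'params': json.dumps(notification_data)})
print(body[:30])  # params=%5B%7B...
```

<p>Sending <code>data=json.dumps(...)</code> instead puts the raw JSON in the request body, so PHP's <code>$_POST['params']</code> stays empty.</p>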
| 2 | 2016-10-12T09:14:08Z | [
"php",
"python"
] |
Difference between cv2.findNonZero and Numpy.NonZero | 39,994,831 | <p>Silly question here.</p>
<p>I want to find the locations of the pixels from some black and white images and found this two functions from Numpy library and OpenCV.</p>
<p>The example I found on the internet (<a href="http://docs.opencv.org/trunk/d1/d32/tutorial_py_contour_properties.html" rel="nofollow">http://docs.opencv.org/trunk/d1/d32/tutorial_py_contour_properties.html</a>):</p>
<pre><code> mask = np.zeros(imgray.shape,np.uint8)
cv2.drawContours(mask,[cnt],0,255,-1)
pixelpoints = np.transpose(np.nonzero(mask))
pixelpointsCV2 = cv2.findNonZero(mask)
</code></pre>
<p>Which states </p>
<blockquote>
<p>Numpy gives coordinates in <strong>(row, column)</strong> format, while OpenCV gives coordinates in <strong>(x,y)</strong> format. So basically the answers will be interchanged. Note that, row = x and column = y.</p>
</blockquote>
<p>Based on my understanding of English, isn't their explanation wrong? Shouldn't it be:</p>
<blockquote>
<p>Numpy gives coordinates in (row, column) format, while OpenCV gives coordinates in <strong>(y,x)</strong> or <strong>(column, row)</strong> format.</p>
</blockquote>
<p>My questions are:</p>
<ol>
<li><p>Does numpy return <strong>(row,col)</strong>/<strong>(x,y)</strong> and OpenCV <strong>(y,x)</strong> where row=x, col=y? Although IMHO it should be row=y, col=x?</p></li>
<li><p>Which one is more computationally efficient, in terms of time & resources?</p></li>
</ol>
<p>Maybe I am not getting this simple thing right due to being a non-native English speaker.</p>
| 0 | 2016-10-12T09:13:39Z | 39,996,246 | <p>There is an error in the documentation:</p>
<blockquote>
<p>Numpy gives coordinates in (row, column) format, while OpenCV gives coordinates in (x,y) format. So basically the answers will be interchanged. <s>Note that, row = x and column = y.</s> <strong>Note that, row = y and column = x.</strong></p>
</blockquote>
<p>So, regarding your questions:</p>
<ol>
<li>numpy returns <code>(row,col) = (y,x)</code>, and OpenCV returns <code>(x,y) = (col,row)</code></li>
<li><p>You need to scan the whole matrix and retrieve some points. I don't think there will be any significant difference in performance (<em>should be tested!</em>). </p>
<p><s>Since you're using Python, probably it's better to use Python facilities, e.g. numpy.</s> </p></li>
</ol>
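<p>The (row, column) order on the NumPy side can be checked with a tiny mask:</p>

```python
import numpy as np

mask = np.zeros((3, 4), dtype=np.uint8)
mask[1, 2] = 1  # row 1, column 2

rows, cols = np.nonzero(mask)
print(rows, cols)  # [1] [2] -> (row, col), i.e. (y, x)
```

<p><code>cv2.findNonZero</code> would report the same point as <code>(2, 1)</code>, i.e. in (x, y) order.</p>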
<p>Runtime test comparing these two versions -</p>
<pre><code>In [86]: mask = (np.random.rand(128,128)>0.5).astype(np.uint8)
In [87]: %timeit cv2.findNonZero(mask)
10000 loops, best of 3: 97.4 µs per loop
In [88]: %timeit np.nonzero(mask)
1000 loops, best of 3: 297 µs per loop
In [89]: mask = (np.random.rand(512,512)>0.5).astype(np.uint8)
In [90]: %timeit cv2.findNonZero(mask)
1000 loops, best of 3: 1.65 ms per loop
In [91]: %timeit np.nonzero(mask)
100 loops, best of 3: 4.8 ms per loop
In [92]: mask = (np.random.rand(1024,1024)>0.5).astype(np.uint8)
In [93]: %timeit cv2.findNonZero(mask)
100 loops, best of 3: 6.75 ms per loop
In [94]: %timeit np.nonzero(mask)
100 loops, best of 3: 19.4 ms per loop
</code></pre>
<p>Thus, it seems using OpenCV results in something around <code>3x</code> speedup over the NumPy counterpart across varying datasizes.</p>
| 2 | 2016-10-12T10:23:59Z | [
"python",
"opencv",
"numpy"
] |
get the name of most recent file in linux - PYTHON | 39,994,863 | <p>How do I get the name of the most recent file from a particular directory in Python?</p>
<p>I used this</p>
<pre><code>import os
import glob
def get_latest_file(path, *paths):
"""Returns the name of the latest (most recent) file
of the joined path(s)"""
fullpath = os.path.join(path, *paths)
print fullpath
list_of_files = glob.glob(fullpath)
if not list_of_files:
return None
latest_file = max(list_of_files, key=os.path.getctime)
_, filename = os.path.split(latest_file)
return filename
if __name__ == "__main__":
print get_latest_file('ocr', 'uploads', '*.png')
</code></pre>
<p><a href="http://codereview.stackexchange.com/questions/120494/finding-the-latest-file-in-a-folder">Source</a> </p>
<p>But I want the code to return the name of the most recent file without specifying the file extension.
So let's say there are jpg, jpeg, png and gif files.</p>
<p>I want the snippet to cover them all.</p>
<p>Any inputs?</p>
| 0 | 2016-10-12T09:15:01Z | 39,995,466 | <p>with your last line you just retrieve the files with .png extension</p>
<pre><code>get_latest_file('ocr', 'uploads', '*.png')
</code></pre>
<p>if you want to retrieve all files regardless of the extension, you just need to change the extension specification in your code to <code>glob.glob('*')</code>. This will retrieve all files in your directory. If you still need the files to have some extension, but it doesn't matter which, you could also retrieve them with <code>glob.glob('*.*')</code>.</p>
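<p>For illustration, a self-contained sketch of this (the directory and file names here are made up, and <code>os.utime</code> is used only to force distinct, deterministic modification times):</p>

```python
import glob
import os
import tempfile

d = tempfile.mkdtemp()
for i, name in enumerate(["a.png", "b.jpg", "c.gif"]):
    path = os.path.join(d, name)
    open(path, "w").close()
    os.utime(path, (1000 + i, 1000 + i))  # make c.gif the newest file

files = glob.glob(os.path.join(d, "*"))   # '*' matches any extension
latest = max(files, key=os.path.getmtime)
print(os.path.basename(latest))           # c.gif
```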
| 2 | 2016-10-12T09:44:07Z | [
"python"
] |
get the name of most recent file in linux - PYTHON | 39,994,863 | <p>I want to get the name of the most recent file from a particular directory in python?</p>
<p>I used this</p>
<pre><code>import os
import glob
def get_latest_file(path, *paths):
"""Returns the name of the latest (most recent) file
of the joined path(s)"""
fullpath = os.path.join(path, *paths)
print fullpath
list_of_files = glob.glob(fullpath)
if not list_of_files:
return None
latest_file = max(list_of_files, key=os.path.getctime)
_, filename = os.path.split(latest_file)
return filename
if __name__ == "__main__":
print get_latest_file('ocr', 'uploads', '*.png')
</code></pre>
<p><a href="http://codereview.stackexchange.com/questions/120494/finding-the-latest-file-in-a-folder">Source</a> </p>
<p>But I want the code to return name of the most recent file without specifying extension of file.
So let's say if there are jpg, jpeg, png, gif</p>
<p>I want the snippet to cover them all.</p>
<p>Any inputs?</p>
| 0 | 2016-10-12T09:15:01Z | 39,995,583 | <p>If you don't care about the extension a simple os.walk iteration will do. You can extend this to filter extensions if you need.</p>
<pre><code>import os
all_files = {}
root = 'C:\workspace\werkzeug-master'
for r, d, files in os.walk(root):
for f in files:
        fp = os.path.join(r, f)  # r already contains root, so joining root again is redundant
all_files[os.path.getmtime(fp)] = fp
keys = all_files.keys()
keys.sort(reverse = True)
print all_files[keys[0]]
</code></pre>
| 0 | 2016-10-12T09:49:24Z | [
"python"
] |
Just heard of Jupyter - is it possible to use Javascript and keep it in the cloud? | 39,994,971 | <p>I heard of Jupyter last night and was using it with Python last night. Looks like a great notebook for coding, something I have been searching for, but I'm unsure if I can use JavaScript with it? It looks like there are npm packages, but I would assume that would then stop me from saving it all in the cloud and having access across multiple machines..?</p>
| 0 | 2016-10-12T09:20:09Z | 39,995,115 | <p>IJavascript is an npm package that implements a Javascript kernel for the Jupyter notebook (formerly known as IPython notebook). A Jupyter notebook combines the creation of rich-text documents (including equations, graphs and videos) with the execution of code in a number of programming languages (including Javascript). You may find usage instructions and examples at <a href="https://github.com/n-riesco/ijavascript" rel="nofollow">https://github.com/n-riesco/ijavascript</a>.</p>
| 1 | 2016-10-12T09:27:51Z | [
"javascript",
"python",
"node.js",
"jupyter",
"jupyter-notebook"
] |
Python decorator logger | 39,995,207 | <p>I have the following code:</p>
<pre><code>def log(func):
def wrapper(*args, **kwargs):
func_str = func.__name__
args_str = ', '.join(args)
kwargs_str = ', '.join([':'.join([str(j) for j in i]) for i in kwargs.iteritems()])
with open('log.txt', 'w') as f:
f.write(func_str)
f.write(args_str)
f.write(kwargs_str)
return func(*args, **kwargs)
return wrapper()
@log
def example(a, b):
print('example')
</code></pre>
<p>However, even without calling any function, I still get the error: </p>
<pre><code>TypeError: example() takes exactly 2 arguments (0 given)
</code></pre>
<p>Can someone explain to me why this happens, because it seems the function is called, but I don't understand why.</p>
| 1 | 2016-10-12T09:32:09Z | 39,995,248 | <p>Because you are calling it here:</p>
<pre><code>return wrapper()
</code></pre>
<p>It should be:</p>
<pre><code>return wrapper
</code></pre>
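<p>For illustration, a minimal working sketch with that fix applied; it records calls in an in-memory list instead of writing <code>log.txt</code>, purely to keep the example self-contained:</p>

```python
import functools

def log(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # record the call; the original appended this info to log.txt
        wrapper.calls.append((func.__name__, args, kwargs))
        return func(*args, **kwargs)
    wrapper.calls = []
    return wrapper  # return the function object itself -- do not call it

@log
def example(a, b):
    return a + b

print(example(1, 2))  # 3
print(example.calls)  # [('example', (1, 2), {})]
```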
| 4 | 2016-10-12T09:33:54Z | [
"python",
"decorator"
] |
Python decorator logger | 39,995,207 | <p>I have the following code:</p>
<pre><code>def log(func):
def wrapper(*args, **kwargs):
func_str = func.__name__
args_str = ', '.join(args)
kwargs_str = ', '.join([':'.join([str(j) for j in i]) for i in kwargs.iteritems()])
with open('log.txt', 'w') as f:
f.write(func_str)
f.write(args_str)
f.write(kwargs_str)
return func(*args, **kwargs)
return wrapper()
@log
def example(a, b):
print('example')
</code></pre>
<p>However, even without calling any function, I still get the error: </p>
<pre><code>TypeError: example() takes exactly 2 arguments (0 given)
</code></pre>
<p>Can someone explain to me why this happens, because it seems the function is called, but I don't understand why.</p>
| 1 | 2016-10-12T09:32:09Z | 39,995,253 | <p>You should return the <code>wrapper</code> function without calling it:</p>
<pre><code>return wrapper
</code></pre>
<p>Calling it implies the call to <code>wrapper</code> has to be evaluated, which you're however calling with the wrong signature.</p>
| 3 | 2016-10-12T09:34:02Z | [
"python",
"decorator"
] |
Error with given arguments and numpy - Python | 39,995,225 | <p>I have the following father class and method:</p>
<pre><code>import SubImage
import numpy as np
from scipy import misc
import random
class Image():
# Class constructor
def __init__(self):
self.__image = np.empty(0)
self.__rows = 0
self.__cols = 0
self.__rows_pixels = 0
self.__cols_pixels = 0
self.__rows_quotient = 0.0
self.__cols_quotient = 0.0
self.__create_image()
self.__subimages = np.empty((self.__rows, self.__cols))
def __create_subimages(self):
i = 0
j = 0
while i != self.__rows_quotient * self.__rows:
print (i+j)
sub_image = SubImage(self.__image[i:i + self.__rows_quotient, j:j + self.__cols_quotient], i + j)
if j == self.__cols_quotient * (self.__cols - 1):
j = 0
i += self.__rows_quotient
else:
j += self.__cols_quotient
</code></pre>
<p>And the following subclass which is supposed to be a child from the class above:</p>
<pre><code>import Image
class SubImage(Image):
def __init__(self, image, position):
self.__position = position
self.__image = image
</code></pre>
<p>My problem is that when creating a SubImage instance in the __create_subimages method I get the following error:</p>
<pre><code>File "/home/mitolete/PycharmProjects/myprojectSubImage.py", line 3, in <module>
class SubImage(Image):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
</code></pre>
<p>I don't get why it says I'm giving 3 arguments, I'm giving 2 which is the subimage (a numpy array) and an integer.</p>
<p>WHy is this?</p>
<p>Regards and thanks.</p>
| 0 | 2016-10-12T09:32:47Z | 39,995,679 | <p>If you're importing <code>SubImage</code> from another file, i.e. a module, you will have to reference that in the import. In this case, assuming <code>SubImage</code> is in a file called <code>SubImage.py</code>, the import should be</p>
<pre><code>from SubImage import SubImage
</code></pre>
<p>so that <code>SubImage</code> now refers to the class <code>SubImage</code> in <code>SubImage.py</code>. This is also the case for <code>Image</code> in <code>Image.py</code>.</p>
<p>However, I don't think there's a need to do this given how closely related the two classes are. I'd put them in the same file and avoid the circular import.</p>
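<p>A stripped-down sketch of that single-module layout (attribute names simplified from the question, and using a Python-2-compatible <code>super</code> call to match the interpreter in the traceback):</p>

```python
class Image(object):
    def __init__(self):
        self._image = None

class SubImage(Image):
    def __init__(self, image, position):
        super(SubImage, self).__init__()  # works on Python 2 and 3
        self._image = image
        self._position = position

s = SubImage([[1, 2], [3, 4]], 0)
print(s._position)  # 0
print(s._image)     # [[1, 2], [3, 4]]
```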
| 0 | 2016-10-12T09:54:08Z | [
"python",
"numpy",
"arguments"
] |
Error with given arguments and numpy - Python | 39,995,225 | <p>I have the following father class and method:</p>
<pre><code>import SubImage
import numpy as np
from scipy import misc
import random
class Image():
# Class constructor
def __init__(self):
self.__image = np.empty(0)
self.__rows = 0
self.__cols = 0
self.__rows_pixels = 0
self.__cols_pixels = 0
self.__rows_quotient = 0.0
self.__cols_quotient = 0.0
self.__create_image()
self.__subimages = np.empty((self.__rows, self.__cols))
def __create_subimages(self):
i = 0
j = 0
while i != self.__rows_quotient * self.__rows:
print (i+j)
sub_image = SubImage(self.__image[i:i + self.__rows_quotient, j:j + self.__cols_quotient], i + j)
if j == self.__cols_quotient * (self.__cols - 1):
j = 0
i += self.__rows_quotient
else:
j += self.__cols_quotient
</code></pre>
<p>And the following subclass which is supposed to be a child from the class above:</p>
<pre><code>import Image
class SubImage(Image):
def __init__(self, image, position):
self.__position = position
self.__image = image
</code></pre>
<p>My problem is that when creating a SubImage instance in the __create_subimages method I get the following error:</p>
<pre><code>File "/home/mitolete/PycharmProjects/myprojectSubImage.py", line 3, in <module>
class SubImage(Image):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
</code></pre>
<p>I don't get why it says I'm giving 3 arguments, I'm giving 2 which is the subimage (a numpy array) and an integer.</p>
<p>WHy is this?</p>
<p>Regards and thanks.</p>
| 0 | 2016-10-12T09:32:47Z | 39,996,023 | <p>Your main problem is the way you import both Image and Subimage into each other. </p>
<p>Subimage should be imported this way:</p>
<pre><code>from myprojectSubImage import SubImage
</code></pre>
<p>Image should be imported this way:</p>
<pre><code>from FILENAME import Image
</code></pre>
<p>that being said, the mutual import seems like bad practice. you should probably either merge the Image and SubImage file, or move the 'create_subimages' function to another file. </p>
| 0 | 2016-10-12T10:12:11Z | [
"python",
"numpy",
"arguments"
] |
How to delete data from a RethinkDB database using its changefeed | 39,995,308 | <p>I'm working on a 'controller' for a database which continuously accrues data, but only uses recent data, defined as less than 3 days old. As soon as data becomes more than 3 days old, I'd like to dump it to a JSON file and remove it from the database.</p>
<p>To simulate this, I've done the following. The 'controller' program <code>rethinkdb_monitor.py</code> is</p>
<pre><code>import json
import rethinkdb as r
import pytz
from datetime import datetime, timedelta
# The database and table are assumed to have been previously created
database_name = "sensor_db"
table_name = "sensor_data"
port_offset = 1 # To avoid interference of this testing program with the main program, all ports are initialized at an offset of 1 from the default ports using "rethinkdb --port_offset 1" at the command line.
conn = r.connect("localhost", 28015 + port_offset)
current_time = datetime.utcnow().replace(tzinfo=pytz.utc) # Current time include timezone (assumed UTC)
retention_period = timedelta(days=3) # Period of time during which data is retained on the main server
expiry_time = current_time - retention_period # Age of data which is removed from the main server
data_to_archive = r.db(database_name).table(table_name).filter(r.row['timestamp'] < expiry_time)
output_file = "archived_sensor_data.json"
with open(output_file, 'a') as f:
for change in data_to_archive.changes().run(conn, time_format="raw"): # The time_format="raw" option is passed to prevent a "RqlTzinfo object is not JSON serializable" error when dumping
print change
json.dump(change['new_val'], f) # Since the main database we are reading from is append-only, the 'old_val' of the change is always None and we are interested in the 'new_val' only
f.write("\n") # Separate entries by a new line
</code></pre>
<p>Prior to running this program, I started up RethinkDB using</p>
<pre><code>rethinkdb --port_offset 1
</code></pre>
<p>at the command line, and used the web interface at <code>localhost:8081</code> to create a database called <code>sensor_db</code> with a table called <code>sensor_data</code> (see below).</p>
<p><a href="https://i.stack.imgur.com/HaAPR.png" rel="nofollow"><img src="https://i.stack.imgur.com/HaAPR.png" alt="enter image description here"></a></p>
<p>Once <code>rethinkdb_monitor.py</code> is running and waiting for changes, I run a script <code>rethinkdb_add_data.py</code> which generates synthetic data:</p>
<pre><code>import random
import faker
from datetime import datetime, timedelta
import pytz
import rethinkdb as r
class RandomData(object):
def __init__(self, seed=None):
self._seed = seed
self._random = random.Random()
self._random.seed(seed)
self.fake = faker.Faker()
self.fake.random.seed(seed)
def __getattr__(self, x):
return getattr(self._random, x)
def name(self):
return self.fake.name()
def datetime(self, start=None, end=None):
if start is None:
start = datetime(2000, 1, 1, tzinfo=pytz.utc) # Jan 1st 2000
if end is None:
end = datetime.utcnow().replace(tzinfo=pytz.utc)
if isinstance(end, datetime):
dt = end - start
elif isinstance(end, timedelta):
dt = end
assert isinstance(dt, timedelta)
random_dt = timedelta(microseconds=self._random.randrange(int(dt.total_seconds() * (10 ** 6))))
return start + random_dt
# Rethinkdb has been started at a port offset of 1 using the "--port_offset 1" argument.
port_offset = 1
conn = r.connect("localhost", 28015 + port_offset).repl()
rd = RandomData(seed=0) # Instantiate and seed a random data generator
# The database and table have been previously created (e.g. through the web interface at localhost:8081)
database_name = "sensor_db"
table_name = "sensor_data"
# Generate random data with timestamps uniformly distributed over the past 6 days
random_data_time_interval = timedelta(days=6)
start_random_data = datetime.utcnow().replace(tzinfo=pytz.utc) - random_data_time_interval
for _ in range(5):
entry = {"name": rd.name(), "timestamp": rd.datetime(start=start_random_data)}
r.db(database_name).table(table_name).insert(entry).run()
</code></pre>
<p>After interrupting <code>rethinkdb_monitor.py</code> with Cntrl+C, the <code>archived_sensor_data.json</code> file contains the data to be archived:</p>
<pre><code>{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475963599.347}, "id": "be2b5fd7-28df-48ee-b744-99856643265a", "name": "Elizabeth Woods"}
{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475879797.486}, "id": "36d69236-f710-481b-82b6-4a62a1aae36c", "name": "Susan Wagner"}
</code></pre>
<p>What I am still struggling with, however, is how to subsequently remove this data from the DB. The command syntax of <code>delete</code> seems to be such that it can be called on a table or selection, but the <code>change</code> obtained through the changefeed is simply a dictionary.</p>
<p>How could I use the changefeed to continuously delete data from the database?</p>
| 0 | 2016-10-12T09:36:54Z | 39,996,157 | <p>I used the fact that each <code>change</code> contains the ID of the corresponding document in the database, and created a selection using <code>get</code> with this ID:</p>
<pre><code>with open(output_file, 'a') as f:
for change in data_to_archive.changes().run(conn, time_format="raw"): # The time_format="raw" option is passed to prevent a "RqlTzinfo object is not JSON serializable" error when dumping
print change
if change['new_val'] is not None: # If the change is not a deletion
json.dump(change['new_val'], f) # Since the main database we are reading from is append-only, the 'old_val' of the change is always None and we are interested in the 'new_val' only
f.write("\n") # Separate entries by a new line
ID_to_delete = change['new_val']['id'] # Get the ID of the data to be deleted from the database
r.db(database_name).table(table_name).get(ID_to_delete).delete().run(conn)
</code></pre>
<p>The deletions will themselves be registered as changes, but I've used the <code>if change['new_val'] is not None</code> statement to filter these out.</p>
| 0 | 2016-10-12T10:18:57Z | [
"python",
"rethinkdb"
] |
Python numpy array integer indexed flat slice assignment | 39,995,309 | <p>Was experimenting with numpy and found this strange behavior.
This code works ok:</p>
<pre><code>>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a[:, 1].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>But why this code doesn't change to -1 elements of 0 and 2 column?</p>
<pre><code>>>> a[:, [0, 2]].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>And how to write the code so that would change to -1 elements of 0 and 2 columns like this?</p>
<p>UPD: use of <code>flat</code> or smt similar is necessarily in my example</p>
<p>UPD2: I made example in question basing on this code:</p>
<pre><code>img = imread(img_name)
xor_mask = np.zeros_like(img, dtype=np.bool)
# msg_bits looks like array([ True, False, False, ..., False, False, True], dtype=bool)
xor_mask[:, :, channel].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
</code></pre>
<p>And after assignment to xor mask with channel == 0 or 1 or 2 code works ok, but if channel == [1,2] or smt like this, assignment does not happen </p>
| 0 | 2016-10-12T09:36:56Z | 39,995,368 | <p>In the first example, <code>a[:, 1]</code> is a basic slice, so <strike>Python</strike> Numpy doesn't create a new object: it returns a view, and assigning to the flattened view is like assigning to the actual slice. But <code>a[:, [0, 2]]</code> uses integer (fancy) indexing, which always returns a copy, so the assignment only modifies that copy and <code>a</code> stays unchanged.</p>
<p>also you don't need to flatten your slice to add to it:</p>
<pre><code>In [5]: a[:, [0, 2]] += 100
In [6]: a
Out[6]:
array([[101, 2, 103],
[104, 5, 106]])
</code></pre>
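<p>The view-vs-copy distinction can be made explicit with <code>np.shares_memory</code> (available since NumPy 1.11); note also that assigning through the fancy-index expression itself does write into the array:</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
basic = a[:, 1]        # basic slicing -> view of a
fancy = a[:, [0, 2]]   # integer (fancy) indexing -> copy of the data
print(np.shares_memory(a, basic))  # True
print(np.shares_memory(a, fancy))  # False

# direct indexed assignment writes through to a
a[:, [0, 2]] = -1
print(a)  # [[-1  2 -1]
          #  [-1  5 -1]]
```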
| 2 | 2016-10-12T09:40:10Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"slice"
] |
Python numpy array integer indexed flat slice assignment | 39,995,309 | <p>Was experimenting with numpy and found this strange behavior.
This code works ok:</p>
<pre><code>>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a[:, 1].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>But why this code doesn't change to -1 elements of 0 and 2 column?</p>
<pre><code>>>> a[:, [0, 2]].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>And how to write the code so that would change to -1 elements of 0 and 2 columns like this?</p>
<p>UPD: use of <code>flat</code> or smt similar is necessarily in my example</p>
<p>UPD2: I made example in question basing on this code:</p>
<pre><code>img = imread(img_name)
xor_mask = np.zeros_like(img, dtype=np.bool)
# msg_bits looks like array([ True, False, False, ..., False, False, True], dtype=bool)
xor_mask[:, :, channel].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
</code></pre>
<p>And after assignment to xor mask with channel == 0 or 1 or 2 code works ok, but if channel == [1,2] or smt like this, assignment does not happen </p>
| 0 | 2016-10-12T09:36:56Z | 39,995,832 | <p>You could just remove the <code>flat[:]</code> from <code>a[:, [0, 2]].flat[:] += 100</code>:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a[:, 1].flat[:] += 100
>>> a
array([[ 1, 102, 3],
[ 4, 105, 6]])
>>> a[:, [0, 2]] += 100
>>> a
array([[101, 102, 103],
[104, 105, 106]])
</code></pre>
<p>But you say it is necessary... Can't you just <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow"><code>reshape</code></a> whatever you are trying to add to the initial array instead of using <code>flat</code>?</p>
<p>The second index call makes a copy of the array while the first returns a reference to it:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> b = a[:,1].flat
>>> b[0] += 100
>>> a
array([[ 1, 102, 3],
[ 4, 5, 6]])
>>> b =a[:,[0,2]].flat
>>> b[0]
1
>>> b[0] += 100
>>> a
array([[ 1, 102, 3],
[ 4, 5, 6]])
>>> b[:]
array([101, 3, 4, 6])
</code></pre>
<p>It appears that when the elements you wish to iterate upon in a <code>flat</code> manner are not adjacent, numpy makes an iterator over a copy of the array.</p>
| 0 | 2016-10-12T10:02:41Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"slice"
] |
Python numpy array integer indexed flat slice assignment | 39,995,309 | <p>Was experimenting with numpy and found this strange behavior.
This code works ok:</p>
<pre><code>>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a[:, 1].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>But why this code doesn't change to -1 elements of 0 and 2 column?</p>
<pre><code>>>> a[:, [0, 2]].flat[:] = np.array([-1, -1])
>>> a
array([[ 1, -1, 3],
[ 4, -1, 6]])
</code></pre>
<p>And how to write the code so that would change to -1 elements of 0 and 2 columns like this?</p>
<p>UPD: use of <code>flat</code> or smt similar is necessarily in my example</p>
<p>UPD2: I made example in question basing on this code:</p>
<pre><code>img = imread(img_name)
xor_mask = np.zeros_like(img, dtype=np.bool)
# msg_bits looks like array([ True, False, False, ..., False, False, True], dtype=bool)
xor_mask[:, :, channel].flat[:len(msg_bits)] = np.ones_like(msg_bits, dtype=np.bool)
</code></pre>
<p>And after assignment to xor mask with channel == 0 or 1 or 2 code works ok, but if channel == [1,2] or smt like this, assignment does not happen </p>
| 0 | 2016-10-12T09:36:56Z | 39,996,210 | <p>As others have pointed out, <code>.flat</code> may create a copy of the original vector, so any updates to it would be lost. But <code>flat</code>tening a 1D slice is fine, so you can use a <code>for</code> loop to update multiple indexes.</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]])
a[:, 1].flat = np.array([-1, -1])
print a
# Use for loop to avoid copies
for idx in [0, 2]:
a[:, idx].flat = np.array([-1, -1])
print a
</code></pre>
<p>Note that you don't need to use <code>flat[:]</code>: just <code>flat</code> is enough (and probably more efficient).</p>
| 1 | 2016-10-12T10:21:40Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"slice"
] |
access docker container in kubernetes | 39,995,335 | <p>I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:</p>
<pre><code>$ docker run -p 33333:8080 foo
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
</code></pre>
<p>I can test it with:</p>
<pre><code>$ nc -v localhost 33333
connection succeeded!
</code></pre>
<p>However when I deploy it in Kubernetes it doesn't work.
Here is the manifest file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: foo-pod
namespace: foo
labels:
name: foo-pod
spec:
containers:
- name: foo
image: bar/foo:latest
ports:
- containerPort: 8080
</code></pre>
<p>and</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: foo-service
namespace: foo
spec:
type: NodePort
ports:
- port: 8080
    nodePort: 33333
selector:
name: foo-pod
</code></pre>
<p>Deployed with:</p>
<pre><code>$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused
</code></pre>
<p>I don't understand why I cannot access it...</p>
| 0 | 2016-10-12T09:38:23Z | 40,000,085 | <p>The problem was that the application was listening on IP <code>127.0.0.1</code>.
It needs to listen on <code>0.0.0.0</code> to work in kubernetes. A change in the application code did the trick.</p>
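<p>For an app like Flask (whose development server prints a startup banner like the one in the question), the change is just <code>app.run(host='0.0.0.0', port=8080)</code>. The difference between the two bind addresses can be sketched with the standard library alone:</p>

```python
import socket

# 127.0.0.1 accepts only loopback traffic from inside the container;
# 0.0.0.0 binds every interface, so traffic forwarded by kubernetes
# (e.g. through the NodePort) can actually reach the process
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))         # port 0: let the OS pick a free port
host, port = s.getsockname()
print(host)       # 0.0.0.0
print(port > 0)   # True
s.close()
```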
| 1 | 2016-10-12T13:35:54Z | [
"python",
"docker",
"kubernetes"
] |