Preventing double-output in python FileHandler when log paths overlap
Question: The following code results in the same log message being output twice:
log1 = logging.getLogger('foo')
log1.addHandler(logging.FileHandler('log.txt'))
log2 = logging.getLogger('foo.bar')
log2.addHandler(logging.FileHandler('log.txt'))
log2.warn("test message")
I realize that this is because 'foo.bar' matches both the 'foo' and 'foo.bar'
paths, so both loggers get the message. My question is: is there any way to
prevent this behaviour other than making sure I never have two loggers
pointing to the same file in the same log path?
Answer: You can tell `log2` [not to propagate
messages](http://docs.python.org/2/library/logging.html#logging.Logger.propagate)
to the handlers of ancestor loggers:
log2.propagate = False
* * *
import logging
log1 = logging.getLogger('foo')
log1.addHandler(logging.FileHandler('log.txt'))
log2 = logging.getLogger('foo.bar')
log2.addHandler(logging.FileHandler('log.txt'))
log2.propagate = False
log2.warn("test message")
writes `test message` only once in `log.txt`.
|
Format Python Decimal object to a specified precision
Question: I've spent countless hours researching, reading, testing, and ultimately
confused and dismayed at Python's Decimal object's lack of the most
fundamental concept: Formatting a Decimal's output to a string.
Let's assume we have some strings or Decimal objects with the following
values:
0.0008
11.1111
222.2222
3333.3333
1234.5678
The goal is to simply set the Decimal's precision to the second decimal place.
Eg, `11.1111` would be formatted as `11.11`, and `1234.5678` as `1234.57`.
I envision code similar to the following:
import decimal

decimals = [
    decimal.Decimal('0.0008'),
    decimal.Decimal('11.1111'),
    decimal.Decimal('222.2222'),
    decimal.Decimal('3333.3333'),
    decimal.Decimal('1234.5678'),
]

for dec in decimals:
    print dec.as_string(precision=2, rounding=ROUND_HALF_UP)
The resulting output would be:
0.00
11.11
222.22
3333.33
1234.57
Obviously we cannot make use of the Decimal's context's precision, because
this takes into consideration the TOTAL number of digits, not just decimal
precision.
I'm also not interested in converting the Decimal to a float to output its
value. The ENTIRE reason behind Decimal is to avoid storing and running
calculations on floats.
What other solutions are there? I understand there are many other similar
questions on stack overflow, but none of them have I found to resolve the
underlying issue I am inquiring of.
Thanks much!
Answer: Just use [string
formatting](http://docs.python.org/2/library/stdtypes.html#str.format) or the
[`format()` function](http://docs.python.org/2/library/functions.html#format):
>>> for dec in decimals:
...     print format(dec, '.2f')
...
0.00
11.11
222.22
3333.33
1234.57
`decimal.Decimal` supports the same [format
specifications](http://docs.python.org/2/library/string.html#format-
specification-mini-language) as floats do, so you can use exponent, fixed
point, general, number or percentage formatting as needed.
This is the official and pythonic method of formatting decimals; the `Decimal`
class implements the `.__format__()` method to handle such formatting
efficiently.
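If you specifically want the question's `ROUND_HALF_UP` behaviour while staying inside `Decimal`, the `quantize()` method is the usual tool; a quick sketch with one of the question's values:
>>> import decimal
>>> decimal.Decimal('1234.5678').quantize(
...     decimal.Decimal('0.01'), rounding=decimal.ROUND_HALF_UP)
Decimal('1234.57')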
|
Python: How to find more than one pathway in a recursive loop when multiple child nodes refer back to the parent?
Question: I'm using recursion to find the path from some point A to some point D. I'm
transversing a graph to find the pathways.
Lets say:
Graph = {'A':['route1','route2'], 'B':['route1','route2','route3','route4'],
         'C':['route3','route4'], 'D':['route4']}
Accessible through:
A -> route1, route2
B -> route2, route3, route4
C -> route3, route4
There are two solutions in this path from A -> D:
route1 -> route2 -> route4
route1 -> route2 -> route3 -> route4
Since point A and point B both have route1 and route2, there is an infinite
loop, so I add a check whenever I visit a node (0 or 1 values).
However, with the check I only get one solution back: route1 -> route2 ->
route4, and not the other possible solution.
Here is the actual code (routes are substituted by reactions):
def find_all_paths(graph, start, end, addReaction, passed={}, reaction=[], path=[]):
    passOver = passed
    path = path + [start]
    reaction = reaction + [addReaction]
    if start == end:
        return [reaction]
    if not graph.has_key(start):
        return []
    paths = []
    reactions = []
    for x in range(len(graph[start])):
        for y in range(len(graph)):
            for z in range(len(graph.values()[y])):
                if (graph[start][x] == graph.values()[y][z]):
                    if passOver.values()[y][z] < 161:
                        passOver.values()[y][z] = passOver.values()[y][z] + 1
                        if (graph.keys()[y] not in path):
                            newpaths = find_all_paths(graph, (graph.keys()[y]), end, graph.values()[y][z], passOver, reaction, path)
                            for newpath in newpaths:
                                reactions.append(newpath)
    return reactions
Here is the method call (`dic_passOver` is a dictionary that keeps track of
whether the nodes have been visited):
    solution = find_all_paths(graph, 'M_glc_DASH_D_c', 'M_pyr_c', 'begin', dic_passOver)
My problem seems to be that once a route is visited, it can no longer be
accessed, so other possible solutions are not found. I accounted for this by
adding a recursion cap of 161, at which point all the possible routes are
found for my specific data.
    if passOver.values()[y][z] < 161:
        passOver.values()[y][z] = passOver.values()[y][z] + 1
However, this seems highly inefficient, and most of my data will be graphs with
indexes in their thousands. In addition, I won't know the number of allowed
node visits needed to find all routes; the number 161 was figured out manually.
Answer: Well, I can't understand your representation of the graph. But this is a
generic algorithm you can use for finding all paths which avoids infinite
loops.
First you need to represent your graph as a dictionary which maps nodes to a
set of nodes they are connected to. Example:
graph = {'A':{'B','C'}, 'B':{'D'}, 'C':{'D'}}
That means that from `A` you can go to `B` and `C`. From `B` you can go to `D`
and from `C` you can go to `D`. We're assuming the links are one-way. If you
want them to be two way just add links for going both ways.
If you represent your graph in that way, you can use the below function to
find all paths:
def find_all_paths(start, end, graph, visited=None):
    if visited is None:
        visited = set()
    visited = visited | {start}  # copy, so sibling branches don't share one visited set
    for node in graph[start]:
        if node in visited:
            continue
        if node == end:
            yield [start, end]
        else:
            for path in find_all_paths(node, end, graph, visited):
                yield [start] + path
Example usage:
>>> graph = {'A':{'B','C'}, 'B':{'D'}, 'C':{'D'}}
>>> for path in find_all_paths('A','D', graph):
...     print path
...
['A', 'C', 'D']
['A', 'B', 'D']
>>>
**Edit to take into account comments clarifying graph representation**
Below is a function to transform your graph representation(assuming I
understood it correctly and that routes are bi-directional) to the one used in
the algorithm above
def change_graph_representation(graph):
    reverse_graph = {}
    for node, links in graph.items():
        for link in links:
            if link not in reverse_graph:
                reverse_graph[link] = set()
            reverse_graph[link].add(node)
    result = {}
    for node, links in graph.items():
        adj = set()
        for link in links:
            adj |= reverse_graph[link]
        adj -= {node}
        result[node] = adj
    return result
If it is important that you find the path in terms of the links, not the nodes
traversed you can preserve this information like so:
def change_graph_representation(graph):
    reverse_graph = {}
    for node, links in graph.items():
        for link in links:
            if link not in reverse_graph:
                reverse_graph[link] = set()
            reverse_graph[link].add(node)
    result = {}
    for node, links in graph.items():
        adj = {}
        for link in links:
            for n in reverse_graph[link]:
                adj[n] = link
        del(adj[node])
        result[node] = adj
    return result
And use this modified search:
def find_all_paths(start, end, graph, visited=None):
    if visited is None:
        visited = set()
    visited = visited | {start}  # copy, as above
    for node, link in graph[start].items():
        if node in visited:
            continue
        if node == end:
            yield [link]
        else:
            for path in find_all_paths(node, end, graph, visited):
                yield [link] + path
That will give you paths in terms of links to follow instead of nodes to
traverse. Hope this helps :)
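As a quick usage sketch (again assuming the routes are bi-directional), the link-preserving converter and the link-based search compose like this:
>>> Graph = {'A':['route1','route2'], 'B':['route1','route2','route3','route4'],
...          'C':['route3','route4'], 'D':['route4']}
>>> node_graph = change_graph_representation(Graph)
>>> for path in find_all_paths('A', 'D', node_graph):
...     print path
which prints each loop-free sequence of routes from `A` to `D`.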
|
Python: file.readline adding a space at the end of the line
Question: I am using python to read a flat, space-padded text file. Part of the
validation of the text file is that each line is expected to be a specific
length, including the space padding.
When I use the following code, python ends up giving me a line with an extra
character. E.g. I expect all the rows in fileX to have 143 characters. Python
though will read this as 144 characters and thus say the file is invalid. If I
do the same in VB.NET, I get the correct 143 characters.
Why is **Python's** readline function adding an extra character? (Using python
3.2)
import io

myfile = open("file_path", "r")
while True:
    line = myfile.readline()
    if not line:
        break
    print(len(line)) #This prints 144 characters
**VB.NET** gives the correct length of 143 characters.
Using objStreamReader As StreamReader = New StreamReader(myFilePath)
    While objStreamReader.EndOfStream = False
        line = objStreamReader.ReadLine
        len(line) 'This returns the correct length of 143.
Using `line.strip()` will not be the right mechanism because I might get rid of
useful spaces. Remember the file is space-padded up to a maximum given length.
Answer: `objStreamReader.ReadLine` [chops off the terminating
newline](http://msdn.microsoft.com/en-
us/library/system.io.streamreader.readline.aspx), whereas Python's
`file.readline` [keeps
it](http://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-
objects).
If your file was opened in text mode (and unless you explicitly specified
otherwise, it was), the line ending will always be either nothing (last line
only) or exactly one `\n`, and you can safely chop it off with `rstrip('\n')`.
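A minimal sketch of that fix applied to the question's loop (the file path is the question's placeholder):

myfile = open("file_path", "r")
for line in myfile:
    line = line.rstrip('\n')  # removes the trailing newline only, never the padding spaces
    print(len(line))          # now 143, as expected
myfile.close()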
|
How can I send a signal from a python program?
Question: I have this code which listens to USR1 signals
import signal
import os
import time

def receive_signal(signum, stack):
    print 'Received:', signum

signal.signal(signal.SIGUSR1, receive_signal)
signal.signal(signal.SIGUSR2, receive_signal)

print 'My PID is:', os.getpid()

while True:
    print 'Waiting...'
    time.sleep(3)
This works when I send signals with `kill -USR1 pid`
But how can I send the same signal from within the above python script so that
after 10 seconds it automatically sends `USR1` and also receives it, without
me having to open two terminals to check it?
Answer: You can use os.kill():
os.kill(os.getpid(), signal.SIGUSR1)
Put this anywhere in your code that you want to send the signal from.
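For the concrete case in the question - sending `USR1` to yourself after ten seconds - a minimal single-terminal sketch reusing the question's handler:

import os
import signal
import time

def receive_signal(signum, stack):
    print 'Received:', signum

signal.signal(signal.SIGUSR1, receive_signal)
print 'My PID is:', os.getpid()

time.sleep(10)                        # wait ten seconds...
os.kill(os.getpid(), signal.SIGUSR1)  # ...then deliver USR1 to ourselves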
|
How to check if the sitemap contain some urls
Question: I am new to python and django. I am trying to fix a sitemap creator,
and one of its bugs is that it can create an empty sitemap, meaning the sitemap
does not really have any urls in it.
example:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
</urlset>
I would like to know the best approach to check whether the sitemap has an
empty urlset tag or not.
Thanks
Answer: You could use
[`ElementTree`](http://docs.python.org/2/library/xml.etree.elementtree.html)
to parse xml:
from xml.etree import ElementTree as etree

# the url elements live in the sitemap namespace declared on <urlset>
NS = '{http://www.sitemaps.org/schemas/sitemap/0.9}'

urlset = etree.fromstring(xml)
if urlset.find(NS + 'url') is None:
    print("sitemap has no urls")
|
How to make a python program for multiplying elements of two lists?
Question: I want to make a python program which inputs two lists of numbers from user,
and outputs a list which satisfies some specific conditions. If I input a list
`[1,2,3]`, this means the polynomial that it represents is: `3x^2+2x+1`. Or,
in other words I have to multiply two polynomials.
For example, `L1=[1,2,3,1]` and `L2=[1,2]` then output list (L3) should be
`[1,4,7,7,2]`.
I tried to make a code which later turned out to be trash.
L1 = list(input("Enter the first list : "))
L2 = list(input("Enter the second list : "))
L3 = []
a = L1
b = L2

def multiply(L1, L2, a, b, L3, k, i, x):
    if k <= min(len(a), len(b)):
        if k - i == 0:
            x = x + L1.pop(i)*L2.pop(k-i)
            return multiply(a, b, a, b, L3, k, i+1, x)
        elif k - i > 0:
            x = x + L1.pop(i)*L2.pop(k-i) + L2.pop(i)*L1.pop(k-i)
            return multiply(a, b, a, b, L3, k, i+1, x)
        else:
            L3.append(x)
            return multiply(a, b, a, b, L3, k+1, 0, 0)
    else:
        return L3

print multiply(L1, L2, a, b, L3, 0, 0, 0)
Any assistance is greatly appreciated.
Answer: Two mistakes: first, `pop()` actually removes items from the list, which is
probably not what you want (there are only three lists in the whole program).
Then `if k <= ...` should probably be `if k < ...` because indices count from
0. With these two fixes the program still outputs nonsense. I'll leave you the
task of figuring it out: try replacing the last line with
    import pdb; pdb.run("multiply(L1,L2,a,b,L3,0,0,0)")
then go step by step until something is wrong (`s` to do a step, `p VARNAME`
to print a variable).
|
Behavior of exec function in Python 2 and Python 3
Question: Following code gives different output in `Python2` and in `Python3`:
from sys import version
print(version)

def execute(a, st):
    b = 42
    exec("b = {}\nprint('b:', b)".format(st))
    print(b)

a = 1.
execute(a, "1.E6*a")
`Python2` prints:
2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]
('b:', 1000000.0)
1000000.0
`Python3` prints:
3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)]
b: 1000000.0
42
Why does `Python2` bind the variable `b` inside the `execute` function to the
values in the string of the `exec` function, while `Python3` doesn't do this?
How can I achieve the behavior of `Python2` in `Python3`? I already tried to
pass dictionaries for globals and locals to `exec` function in `Python3`, but
nothing worked so far.
**\--- EDIT ---**
After reading Martijn's answer I analyzed this further with `Python3`. In the
following example I pass the `locals()` dictionary as `d` to `exec`, but
`d['b']` prints something else than just printing `b`.
from sys import version
print(version)

def execute(a, st):
    b = 42
    d = locals()
    exec("b = {}\nprint('b:', b)".format(st), globals(), d)
    print(b) # This prints 42
    print(d['b']) # This prints 1000000.0
    print(id(d) == id(locals())) # This prints True

a = 1.
execute(a, "1.E6*a")
3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)]
b: 1000000.0
42
1000000.0
True
The comparison of the ids of `d` and `locals()` shows that they are the same
object. But under these conditions `b` should be the same as `d['b']`. What is
wrong in my example?
Answer: There is a big difference between `exec` in Python 2 and `exec()` in Python 3.
You are treating `exec` as a function, but it really is a _statement_ in
Python 2.
Because of this difference, you cannot change local variables in function
scope in Python 3 using `exec`, even though it was possible in Python 2. Not
even previously declared variables.
`locals()` only reflects local variables in one direction. The following never
worked in either 2 or 3:
def foo():
a = 'spam'
locals()['a'] = 'ham'
print(a) # prints 'spam'
In Python 2, using the `exec` statement meant the compiler knew to switch off
the local scope optimizations (switching from `LOAD_FAST` to `LOAD_NAME` for
example, to look up variables in both the local and global scopes). With
`exec()` being a function, that option is no longer available and function
scopes are now _always_ optimized.
Moreover, in Python 2, the `exec` statement explicitly copies all variables
found in `locals()` back to the function locals using `PyFrame_LocalsToFast`,
but only if no _globals_ and _locals_ parameters were supplied.
The proper work-around is to use a new namespace (a dictionary) for your
`exec()` call:
def execute(a, st):
    namespace = {'a': a}  # the exec'd code uses `a`, so put it in the namespace
    exec("b = {}\nprint('b:', b)".format(st), namespace)
    print(namespace['b'])
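With `a` placed in the namespace, `execute(1., "1.E6*a")` should then print `b: 1000000.0` followed by `1000000.0` on Python 3 as well.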
|
Python Mock Process for Unit Testing
Question: **Background:**
I am currently writing a process monitoring tool (Windows and Linux) in Python
and implementing unit test coverage. The process monitor hooks into the
Windows API function [EnumProcesses](http://msdn.microsoft.com/en-
us/library/windows/desktop/ms682629%28v=vs.85%29.aspx) on Windows and monitors
the /proc directory on Linux to find current processes. The process names and
process IDs are then written to a log which is accessible to the unit tests.
**Question:**
When I unit test the monitoring behavior I need a process to start and
terminate. I would love if there would be a (cross-platform?) way to start and
terminate a fake system process that I could uniquely name (and track its
creation in a unit test).
**Initial ideas:**
* I could use [subprocess.Popen()](http://docs.python.org/2/library/subprocess.html#popen-objects) to open any system process but this runs into some issues. The unit tests could falsely pass if the process I'm using to test is run by the system as well. Also, the unit tests are run from the command line and any Linux process I can think of suspends the terminal (nano, etc.).
* I could start a process and track it by its process ID but I'm not exactly sure how to do this without suspending the terminal.
These are just thoughts and observations from initial testing and I would love
it if someone could prove me wrong on either of these points.
I am using Python 2.6.6.
**Edit:**
Get all Linux process IDs:
try:
    processDirectories = os.listdir(self.PROCESS_DIRECTORY)
except IOError:
    return []
return [pid for pid in processDirectories if pid.isdigit()]
Get all Windows process IDs:
import ctypes, ctypes.wintypes

Psapi = ctypes.WinDLL('Psapi.dll')
EnumProcesses = self.Psapi.EnumProcesses
EnumProcesses.restype = ctypes.wintypes.BOOL

count = 50
while True:
    # Build arguments to EnumProcesses
    processIds = (ctypes.wintypes.DWORD*count)()
    size = ctypes.sizeof(processIds)
    bytes_returned = ctypes.wintypes.DWORD()
    # Call enum processes to find all processes
    if self.EnumProcesses(ctypes.byref(processIds), size, ctypes.byref(bytes_returned)):
        if bytes_returned.value < size:
            return processIds
        else:
            # We weren't able to get all the processes so double our size and try again
            count *= 2
    else:
        print "EnumProcesses failed"
        sys.exit()
[Windows code is from here](http://stackoverflow.com/questions/6980246/how-
can-i-find-a-process-by-name-and-kill-using-ctypes)
Answer: **edit: this answer is getting long :), but some of my original answer still
applies, so I leave it in :)**
Your code is not so different from my original answer. Some of my ideas still
apply.
When you are writing Unit Test, you want to only test _your_ logic. When you
use code that interacts with the operating system, you usually want to mock
that part out. The reason being that you don't have much control over the
output of those libraries, as you found out. So it's easier to mock those
calls.
In this case, there are two libraries that are interacting with the system:
`os.listdir` and `EnumProcesses`. Since you didn't write them, we can easily
fake them to return what we need, which in this case is a list.
But wait, in your comment you mentioned:
> "The issue I'm having with it however is that it really doesn't test that my
> code is seeing new processes on the system but rather that the code is
> correctly monitoring new items in a list."
The thing is, we don't _need_ to test the code that actually **monitors the
processes on the system** , because it's a third party code. What we _need_ to
test is that your **code logic handles the returned processes**. Because
that's the code you wrote. The reason why we are testing over a list is
because that's what your logic is doing. `os.listdir` and `EnumProcesses`
return a list of pids (numeric strings and integers, respectively) and your
code acts on that list.
I'm assuming your code is inside a Class (you are using `self` in your code).
I'm also assuming that they are isolated inside their own methods (you are
using `return`). So this will be sort of what I suggested originally, except
with actual code :) Idk if they are in the same class or different classes,
but it doesn't really matter.
## Linux method
Now, testing your Linux process function is not that difficult. You can patch
`os.listdir` to return a list of pids.
def getLinuxProcess(self):
    try:
        processDirectories = os.listdir(self.PROCESS_DIRECTORY)
    except IOError:
        return []
    return [pid for pid in processDirectories if pid.isdigit()]
Now for the test.
import unittest
from fudge import patched_context
import os
import LinuxProcessClass # class that contains getLinuxProcess method

def raise_ioerror(*args):
    # a lambda can't contain a raise statement, so use a small helper
    raise IOError

def test_LinuxProcess(self):
    """Test the logic of our getLinuxProcess.

    We patch os.listdir and return our own list, because os.listdir
    returns a list. We do this so that we can control the output
    (we test *our* logic, not a built-in library's functionality).
    """
    # Test we can parse our pids
    fakeProcessIds = ['1', '2', '3']
    with patched_context(os, 'listdir', lambda x: fakeProcessIds):
        myClass = LinuxProcessClass()
        ....
        result = myClass.getLinuxProcess()
        expected = [1, 2, 3]
        self.assertEqual(result, expected)

    # Test we can handle IOError
    with patched_context(os, 'listdir', raise_ioerror):
        myClass = LinuxProcessClass()
        ....
        result = myClass.getLinuxProcess()
        expected = []
        self.assertEqual(result, expected)

    # Test we only get pids
    fakeProcessIds = ['1', '2', '3', 'do', 'not', 'parse']
    .....
## Windows method
Testing your Window's method is a little trickier. What I would do is the
following:
def prepareWindowsObjects(self):
    """Create and set up objects needed to get the windows process."""
    ...
    Psapi = ctypes.WinDLL('Psapi.dll')
    EnumProcesses = self.Psapi.EnumProcesses
    EnumProcesses.restype = ctypes.wintypes.BOOL
    self.EnumProcesses = EnumProcesses
    ...

def getWindowsProcess(self):
    count = 50
    while True:
        .... # Build arguments to EnumProcesses and call enum process
        if self.EnumProcesses(ctypes.byref(processIds),...
        ..
        else:
            return []
I separated the code into two methods to make it easier to read (I believe you
are already doing this). Here is the tricky part, `EnumProcesses` is using
pointers and they are not easy to play with. Another thing is, that I don't
know how to work with pointers in Python, so I couldn't tell you of an easy
way to mock that out =P
What I _can_ tell you is to simply not test it. Your logic there is very
minimal. Besides increasing the size of `count`, everything else in that
function is creating the space `EnumProcesses` pointers will use. Maybe you
can add a limit to the count size but other than that, this method is short
and sweet. It returns the windows processes and nothing more. Just what I was
asking for in my original comment :)
So leave that method alone. Don't test it. Make sure though, that anything
that uses `getWindowsProcess` and `getLinuxProcess` gets mocked out as per my
original suggestion.
Hopefully this makes more sense :) If it doesn't let me know and maybe we can
have a chat session or do a video call or something.
**original answer**
I'm not exactly sure how to do what you are asking, but whenever I need to
test code that depends on some outside force (external libraries, popen or in
this case processes) I mock out those parts.
Now, I don't know how your code is structured, but maybe you can do something
like this:
def getWindowsProcesses(self, ...):
    '''Call Windows API function EnumProcesses and
    return the list of processes
    '''
    # ... call EnumProcesses ...
    return listOfProcesses

def getLinuxProcesses(self, ...):
    '''Look in /proc dir and return list of processes'''
    # ... look in /proc ...
    return listOfProcesses
These two methods **only do one thing** , get the list of processes. For
Windows, it might just be a call to that API and for Linux just reading the
/proc dir. That's all, nothing more. The logic for handling the processes will
go somewhere else. This makes these methods extremely easy to mock out since
their implementations are just API calls that return a list.
Your code can then easy call them:
def getProcesses(...):
    '''Get the processes running.'''
    isLinux = # ... logic for determining OS ...
    if isLinux:
        processes = getLinuxProcesses(...)
    else:
        processes = getWindowsProcesses(...)
    # ... do something with processes, write to log file, etc ...
In your test, you can then use a mocking library such as
[Fudge](http://farmdev.com/projects/fudge/api/fudge.html). You mock out these
two methods to return what you **expect** them to return.
This way you'll be testing _your_ logic since you can control what the result
will be.
from fudge import patched_context
...

def test_getProcesses(self, ...):
    monitor = MonitorTool(..)

    # Patch the method that gets the processes. Whenever it gets called, return
    # our predetermined list.
    originalProcesses = [....pids...]
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic is right ...

    # Let's "add" some new processes and test that our logic realizes new
    # processes were added.
    newProcesses = [...]
    updatedProcesses = originalProcesses + newProcesses
    with patched_context(monitor, "getLinuxProcesses", lambda x: updatedProcesses):
        monitor.getProcesses()
        # ... assert logic caught new processes ...

    # Let's "kill" our new processes and test that our logic can handle it
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic caught processes were 'killed' ...
Keep in mind that if you test your code this way, you won't get 100% code
coverage (since your mocked methods won't be run), but this is fine. You're
testing your code and not third party's, which is what matters.
Hopefully this might be able to help you. I know it doesn't answer your
question, but maybe you can use this to figure out the best way to test your
code.
|
Set a point on the image to act as rotate point Pygame.transform.rotate()
Question: I am a newbie to both python and pygame. I want to rotate a rectangle about a
point other than its center. My code so far is:
import pygame
pygame.init()

w = 640
h = 480
degree = 45

screen = pygame.display.set_mode((w, h))
surf = pygame.Surface((25, 100))
surf.fill((255, 255, 255))
surf.set_colorkey((255, 0, 0))
bigger = pygame.Rect(0, 0, 25, 100)
pygame.draw.rect(surf, (100, 0, 0), bigger)
rotatedSurf = pygame.transform.rotate(surf, degree)
screen.blit(rotatedSurf, (400, 300))

running = True
while running:
    event = pygame.event.poll()
    if event.type == pygame.QUIT:
        running = False
    pygame.display.flip()
I can change the degree to get a different rotation, but the rotation is about
the center. I want to set a point other than the center of the rectangle as
the rotation point.
Please help. Thanks
Answer: I think you have to make a function of your own for that.
If you make a Vector class it's much easier.
Maybe something like (pseudocode: it assumes a custom Vector class and that
you track each surface's blit position):

def rotate(surf, angle, pos):
    surf = pygame.transform.rotate(surf, angle)  # rotate() returns a new surface
    rel_pos = surf.blit_pos.sub(pos)
    new_rel_pos = rel_pos.set_angle(rel_pos.get_angle() + angle)
    surf.blit_pos = pos.add(new_rel_pos)
    return surf

So there you have it. The only thing you have to do is the Vector class with
the methods `add()`, `sub()`, `get_angle()` and `set_angle()`. If you are
struggling, just google for help.
In the end you'll end up with a nice Vector class that you can expand and use
in other projects.
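For a concrete pygame recipe, the usual trick is to rotate the surface and then re-derive its blit rect from the pivot point. Here is a sketch, assuming a pygame version that ships `pygame.math.Vector2` (the pivot and offset values are made up for illustration):

def rotate_around(surf, angle, pivot, offset):
    # Rotate surf by angle (degrees) about the fixed screen point pivot.
    # offset is a pygame.math.Vector2 from the pivot to the image centre.
    rotated = pygame.transform.rotate(surf, angle)  # rotate() returns a new surface
    rotated_offset = offset.rotate(-angle)          # screen y points down, hence -angle
    rect = rotated.get_rect(center=(pivot[0] + rotated_offset.x,
                                    pivot[1] + rotated_offset.y))
    return rotated, rect

# usage sketch: pivot at (400, 300), image centre 50 px below it
pivot = (400, 300)
offset = pygame.math.Vector2(0, 50)
image, rect = rotate_around(surf, degree, pivot, offset)
screen.blit(image, rect)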
|
How to test Django-CMS plugins packaged as a reusable app
Question: I have followed the procedure in the [Django
docs](https://docs.djangoproject.com/en/dev/intro/reusable-apps/) to make some
Django-CMS plugins reusable, and the [Hitchhiker's guide to
packaging](http://guide.python-distribute.org/quickstart.html) to put them on
pypi, so they are available via
pip install cmsplugin-rt
This installs them somewhere sensible that python can find them. Of course I
have my development directory somewhere else.
But when I add some tests to this package, I get the error:
django.core.exceptions.ImproperlyConfigured: App with label cmsplugin_rt could not be found
Some more explanation is needed. As the package is not part of a Django
project, i.e. there is no `manage.py` file to run tests with, I followed the
advice [here at stackoverflow](http://stackoverflow.com/questions/3841725/how-
to-launch-tests-for-django-reusable-app) and added `runtests.py` to the
`tests` directory. Specifically in this file I put:
import os, sys
from django.conf import settings

DIRNAME = os.path.dirname(__file__)

settings.configure(DEBUG=True,
    DATABASES={
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
        }
    },
    CMS_TEMPLATES = ( ('template_for_tests.html', 'Test template'), ),
    CMS_MODERATOR = False,
    CMS_PERMISSION = False,
    TEMPLATE_CONTEXT_PROCESSORS = (
        'django.contrib.auth.context_processors.auth',
        'django.core.context_processors.i18n',
        'django.core.context_processors.request',
        'django.core.context_processors.media',
        'django.core.context_processors.static',
        'cms.context_processors.media',
        'sekizai.context_processors.sekizai',
    ),
    INSTALLED_APPS = (
        'cmsplugin_rt.button',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.admin',
        'django.contrib.sites',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'south',
        'cms',
        'mptt',
        'menus',
        'sekizai',
    ),
)

from django.test.simple import DjangoTestSuiteRunner

test_runner = DjangoTestSuiteRunner(verbosity=2)
failures = test_runner.run_tests(['cmsplugin_rt', ])
if failures:
    sys.exit(failures)
So, as I mentioned, when I execute `python runtests.py` I get the error:
django.core.exceptions.ImproperlyConfigured: App with label cmsplugin_rt could not be found
Where am I going wrong? Or is there a better way to do this? I'd love to get
some tests in there!
Thanks!
(If this did run, would the tests run using my development version of the
package, or would they pull in the version from the pip install? Would I need
to `pip uninstall cmsplugin-rt` before each run?)
Here is my directory structure - I have several plugins in the one package,
which may be part of the problem. I put the `tests` directory as you see here,
though I have also tried it one level up.
cmsplugin-rt/
--README.txt
--LICENSE.txt
--MANIFEST.in
--setup.py
--cmsplugin_rt/
----__init__.py
----models.py
----button/
------__init__.py
------models.py
------cms_plugins.py
------templates/
----(other plugins)/
----tests/
------runtests.py
------mytests.py
To be safe I also put an empty `models.py` at the top level (following the
advice [here](http://stackoverflow.com/questions/6483636/how-to-test-django-
application-placed-in-subfolder)), but I'm not sure it makes any difference.
Answer: For posterity here is my work-around to the South migration problem I
mentioned in my first comment. It's not pretty, so I would love any
suggestions on how to improve it.
The process to add a new field to the `cmsplugin_rt.button` model is:
1. Before making any edits, copy `site-packages/cmsplugin_rt/button` into a dummy Django-CMS project as an app called `button`
2. Delete this new app's `button/migrations/` directory
3. Add `button` to the dummy project `settings.py`'s INSTALLED_APPS
4. Run `./manage.py schemamigration --init button`, so the dummy project's understanding of the database is aligned with the current model (before any changes are made)
5. Run `./manage.py migrate button`, to update the dummy project's database
6. Edit the button's `model.py` file in the dummy project to add the extra field, and make any other changes you require.
7. Run `./manage.py schemamigration --auto button`, to generate the migration code. This will be in `button/migrations/0002_auto__...`
8. This file is what you need to put in your package, but it will have the wrong number in the front if the plugin had more than just the `0001_initial.py` migration file in it originally. Copy it with the correct number into your package development directory. Also copy any model, cms_plugin, template and other changes you have made.
|
converters option in numpy genfromtxt not accepting -ve indexing of columns
Question: I want to load only the last few columns of a text file, with some evaluation.
I used `numpy.genfromtxt` with the argument `converters={-1: func, -2: func}`,
but it is not working. On the other hand, if I give forward indexing like
`converters={56: func, 57: func}`, it works correctly.
Why doesn't the `converters` argument support Python's backward indexing? Is
there any way to do this if I know only the index of the column from the end?
Answer: Using `numpy.loadtxt` it works, and you can use the `converters` parameter to
define your functions. Having a `tmp.txt` file with:
11,12,13,14,15,16,17,18,19
21,22,23,24,25,26,27,28,29
31,32,33,34,35,36,37,38,39
41,42,43,44,45,46,47,48,49
51,52,53,54,55,56,57,58,59
You can load the selected columns with (also choosing the order in which you
want them stacked):
import numpy as np
print np.loadtxt('tmp.txt',delimiter=',',usecols=(-2,-1))
#[[ 18. 19.]
# [ 28. 29.]
# [ 38. 39.]
# [ 48. 49.]
# [ 58. 59.]]
print np.loadtxt('tmp.txt',delimiter=',',usecols=(-1,-2),converters={-1: lambda x: float(x)+100})
#[[ 119. 18.]
# [ 129. 28.]
# [ 139. 38.]
# [ 149. 48.]
# [ 159. 58.]]
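If you do need `genfromtxt` itself, one workaround is to convert the negative index into a positive one by counting the columns first (a sketch, reusing the `float(x)+100` converter from above):

import numpy as np

with open('tmp.txt') as f:
    ncols = len(f.readline().split(','))  # number of columns on the first line

data = np.genfromtxt('tmp.txt', delimiter=',',
                     converters={ncols - 1: lambda x: float(x) + 100})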
|
Regex does not match but seems to be correct
Question: I have a very weird problem: the same regex matches in several online regex
testers, but not in my local python 3.3 instance.
Using the same regex matches in several online services, but not in my local
python 3.3 instance.
re.search("ajaxHandler\('(?P<fp>[A-Z0-9]+)",rawdata).group("fp")
where rawdata is
<select name="F4542661421192HPAUS" onchange="liftAjax.lift_ajaxHandler('F4542661421185WLRZY=' + encodeURIComponent(this.value), null, null, null)">[... blabla ...]</select>
Any idea what's going wrong?
Answer: I can't reproduce this:
Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AM
D64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> rawdata="""<select name="F4542661421192HPAUS" onchange="liftAjax.lift_ajaxHandler('F4542661421185WLRZY=' + encodeURIComponent(this.value), null, null, null)">[... blabla ...]</select>"""
>>> re.search("ajaxHandler\('(?P<fp>[A-Z0-9]+)",rawdata).group("fp")
'F4542661421185WLRZY'
|
Flask, Heroku and Github Dependencies/File Structure
Question: Quite a beginner at the whole flask/heroku/github business, but been using
python for several years now and had experience with tortoise SVN. I have been
following the tutorial on how to push code to heroku at this link
<https://devcenter.heroku.com/articles/python> and after much tinkering I
managed to get my web app uploaded. However I have definitely missed
something.
Currently within the project I have a file structure to organize different
processes (for example webservice calls and database handling), these are then
imported into the main app by a code of the sort:
## Webservices
dirname, filename = os.path.split(os.path.abspath(__file__))
WSdirname = dirname + '\\WebServices\\'
sys.path.append(WSdirname)
import WebservicesModule as WSmodule # Module resides in "WebSerivices" folder
Which implies files are stored in a structure like
AppFolder\
    app.py
    WebServices\
        WebservicesModules.py
        ...
    Database\
        DatabaseModules.py
        ...
Locally this works. However once pushed by git to heroku it would seem that
the code cannot access the `WebservicesModule` module. Giving an error in the
form
> Import error: no module named WebservicesModule.
To explain why I have this file structure; as there will be a large number of
webservices required it is easier to have them contained within the same
folder. Similarly for the database operations and so forth.
My question is this: is my code bad practice, meaning Heroku doesn't allow it?
Or has GitHub not uploaded my files to Heroku, hence the code not being able to
find them (despite them being in the file structure of the master directory)?
Or is there some issue I don't know about? Do I need to declare these modules
as dependencies in requirements.txt, despite importing them in the code?
Cheers for any help you can provide :)!
Answer: The issue is that locally you are developing on Windows, while Heroku's slugs
use some variant of Linux. `\` is the directory separator on Windows but
**`/`** is the directory separator on Linux.
Rather than hard-coding `\WebServices\` use `os.path.join` to join your path
and subdirectories:
WSdirname = os.path.join(dirname, "WebServices")
That will ensure that no matter _what_ platform you deploy to, the correct
directory separator is used.
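Applied to the snippet from the question, that becomes:

import os, sys

dirname, filename = os.path.split(os.path.abspath(__file__))
WSdirname = os.path.join(dirname, 'WebServices')  # portable across Windows and Linux
sys.path.append(WSdirname)
import WebservicesModule as WSmodule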
|
py.test run tests in specific testSuite
Question: I'm new to py.test. So far I like what I see and want to integrate it to our
CI process.
Currently we use a different kind of parameterization scheme for our tests
which I will explain briefly:
* instead of parameterizing per-test, we parameterize per class
* say `params` is a tuple of tuples, each representing a different set of parameters.
* we create for each such tuple a different instance of some `TestCaseWithParameters`, which is a `unittest.TestCase` class. Something like this:
>
>     for test_parameters in params:
>         parameterized_test_suite.addTest(ParametrizedTestCase.parametrize(TestCaseWithParameters, param=test_parameters))
>
* Each of these classes is injected with `self.params` and runs all tests functions it with those different params.
* This means that if we have hundreds of tuples in `params` and `TestSomethingWithParameters` has dozens of tests, there are _a lot_ of tests in total.
My question: How would I go about translating this to py.test?
I've read [this](http://holgerkrekel.net/2009/05/13/parametrizing-python-
tests-generalized/) article about the `pytest_generate_tests` hook, but it
seems it injects dependency per test function, and I need it per TestCase...
The simplest way would be to tell py.test to run the specific
`parameterized_test_suite` I already create, but I did not find a way to do
so...
A different way would be to do a similar dependency-injection at the TestCase
class level, but I have not found a way to do that either.
Answer: You can easily parametrize whole classes using the `@pytest.mark.parametrize`
marker:
import pytest

@pytest.mark.parametrize('n', [0, 1])
class TestFoo:

    def test_42(self, n):
        assert n == 42

    def test_7(self, n):
        assert n == 7
See the [documentation on the parameterize
marker](http://pytest.org/latest/parametrize.html#pytest-mark-parametrize-
parametrizing-test-functions) for details on how to pass in multiple arguments
etc. And also have a look at how to [apply markers to classes and
modules](http://pytest.org/latest/example/markers.html#marking-whole-classes-
or-modules) for more information on this.
|
Anybody know MATLAB and Python? (Code conversion MATLAB>Python)
Question: I am trying to re-write this MATLAB program in Python. I haven't succeeded in
getting the same Python output, yet. But my attempt is given beneath the
MATLAB code. The code does not need any extra files/information to run. So
this should run OK on your MATLAB. And, if all goes to plan, Python also...
Summary of what the code does: Performs an integral taking in arguments `eV`
and `t`, for an array of variables `eV`. There are also more complicated
things considering a substitution for `E`. But, those familiar with both codes
should be able to follow.
Please feel free to ask if you have any questions, and many thanks for any
potential help/hints/solutions.
MATLAB CODE:
**Main.m**
clear all %Remove items from MATLAB workspace and reset MuPAD engine
clc %Clear command window
clf %Clear figure window
global d1 d2 T %Declare global variables
T = 0.02; %Temperature value (K)
d1 = 1; %Energy gap in electrode 1.
d2 = 0.5; %Energy gap in electrode 2.
small = 1e-9;
eV_values = linspace(0, 2.5, 2e3); %Row vector of 2e3 points linearly spaced between 0 and 0.25. These are the voltage values.
current = zeros(size(eV_values)); %Zeros creates array all of zeros. size gives size of dataset array.
tic %Start clock to measure performance
for x = 1:numel(eV_values) %numel gives number of elements in array ev_values).
    eV = eV_values(x);
    clc %Clear command window
    disp(x) %Display array
    current(x) = quad(@(t)integrand(t, eV), -1 + small, 1 - small);
end
toc %End clock
clf %Clear figure window
figure(1) %Create graphics object
hold on %Retain current graph when adding new graphs/Delay evaluation.
box on %Display the boundary of the current axes.
plot(eV_values, real(current), 'b')
plot(eV_values, imag(current), 'r')
title('S-S')
xlabel('eV/\Delta')
ylabel('I(eV)')
**Integrand.m**
function x = integrand(t, eV) %Declare function name and inputs
global d1 d2 T %Declare global variable
E = t./(1 - t.^2); %Variable substitution
x = abs(E)./sqrt(E.^2 - d1^2).*abs(E + eV)./sqrt((E + eV).^2 - d2^2).*...
(1./(1 + exp(E./T)) - 1./(1 + exp((E + eV)./T)));
x = x.*heaviside(E.^2 - d1^2).*heaviside((E + eV).^2 - d2^2);
x = x.*(1 + t.^2)./(1 - t.^2).^2;
%heaviside step function
PYTHON CODE:
from numpy import *
import pylab as pl
import array
from scipy import integrate

T = 0.02 # Global variable - Temperature (K)
d1 = 1 # Global variable - Energy gap in electrode 1.
d2 = 0.5 # Global variable - Energy gap in electrode 2.
small = 1e-9
eV_values = linspace(0.0, 2.5, num=10)

def heaviside( x ):
    # Return 0 for x<0, 1 for x>0, 0.5 for x=0. #
    if x == 0:
        return 0.5
    return 0 if x < 0 else 1

def integrand( t, eV ):
    #print(" t: %s, eV: %s" % (t, eV))
    E = t / ( 1 - t*t ) # E substitution.
    x1 = ( abs( E ) / sqrt( E*E - d1*d1 ) ) * ( abs( E + eV ) / ( sqrt( ( E + eV )**2 - d2*d2 ) ) ) * (1/(1 + exp(E/T)) - 1/(1 + exp((E + eV)/T)))
    x2 = x1*(heaviside( E*E - d1*d1 )*heaviside( (E + eV)**2 - d2*d2) )
    x = x2*( ( 1 + t*t ) / ( 1 - t*t )**2 )
    return x

current = []
for eV in eV_values:
    integral, err = integrate.quad( integrand, ( -1 + small ), ( 1 - small ), args=(eV, ) )
    # print( eV, integral )
    print( eV, integral, err)
    current.append( integral )

#print( 'current values')
print( current )

#pl.plot(eV_values,current,'b')
#pl.plot(eV_values,imag(current),'r')
#pl.title('S-S')
#pl.xlabel(r'eV/$\Delta$')
#pl.ylabel('I(eV)')
#pl.show()
Notable problems:
* The MATLAB code considers imaginary/real current values in quad. Python code currently doesn't, but ought to for obtaining the same output.
* The current Python code outputs `nan` values for the `integral` and `err`. Again, this may be down to the program not considering imaginary and real values in the integral:

In [3]: run IV.py
IV.py:22: RuntimeWarning: invalid value encountered in sqrt
  x1 = (abs(E)/sqrt(E*E - d1*d1)) * (abs(E + eV)/(sqrt((E + eV)**2 - d2*d2))) * (1/(1 + exp(E/T)) - 1/(1 + exp((E + eV)/T)))
IV.py:22: RuntimeWarning: overflow encountered in exp
  x1 = (abs(E)/sqrt(E*E - d1*d1)) * (abs(E + eV)/(sqrt((E + eV)**2 - d2*d2))) * (1/(1 + exp(E/T)) - 1/(1 + exp((E + eV)/T)))
(0.0, nan, nan)
(0.27777777777777779, nan, nan)
(0.55555555555555558, nan, nan)
(0.83333333333333337, nan, nan)
(1.1111111111111112, nan, nan)
(1.3888888888888888, nan, nan)
(1.6666666666666667, nan, nan)
(1.9444444444444446, nan, nan)
(2.2222222222222223, nan, nan)
(2.5, nan, nan)
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
Answer: `numpy.sqrt` can't deal with negative values; use `numpy.emath.sqrt` instead.
And because floating-point arithmetic is not exact, it's better to return the
real part of x:
    return x.real
In addition, calculations with numpy scalars are slow. It's better to use the
Python standard modules `math` & `cmath` to do the calculations in `integrand()`.
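A sketch of the fixed `integrand()` along those lines, keeping the question's globals `d1`, `d2`, `T` and its `heaviside()` (numpy is kept here for brevity, even though `math`/`cmath` would be faster):

import numpy as np

def integrand(t, eV):
    E = t / (1 - t*t)
    # emath.sqrt returns complex values for negative arguments instead of nan
    x = (abs(E) / np.emath.sqrt(E*E - d1*d1)
         * abs(E + eV) / np.emath.sqrt((E + eV)**2 - d2*d2)
         * (1/(1 + np.exp(E/T)) - 1/(1 + np.exp((E + eV)/T))))
    x = x * heaviside(E*E - d1*d1) * heaviside((E + eV)**2 - d2*d2)
    x = x * (1 + t*t) / (1 - t*t)**2
    return x.real  # quad integrates a real-valued function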
|
Wrapper for libeay32.dll: how to import macro?
Question: I'm writing a small wrapper for OpenSSL `libeay32.dll` in Python. For the
majority of functions it is possible to import them as follows:
self.Function_Name = self._dll.Function_Name
self.Function_Name.restype = ctypes.c_int #for example
self.Function_Name.argtypes = [list of ctypes arguments]
Unfortunately I'm not able to import this way any macros:
`X509_get_notAfter`, `X509_get_notBefore` etc.
Any ideas, how to do it with `ctypes`?
Answer: You can't import macros. What you're importing are functions from a DLL.
Macros aren't exported from the DLL, so there's nothing to import.
Fortunately, most macros are very simple, so you can just reimplement them in
Python.
Or, alternatively, create a wrapper DLL in C that defines a function for each
macro, compile and link that, and import the wrapper functions with `ctypes`.
Or you may want to use [`Cython`](http://www.cython.org) or some other
technology instead of `ctypes`. The way they generally work is to generate and
compile C code that wraps the C library and exports Python types and
functions, and it's generally as easy to export a C macro as a Python function
as it is to export a C function.
Or, simplest of all… does [PyOpenSSL](https://pypi.python.org/pypi/pyOpenSSL)
already wrap everything you need?
|
Circular module dependency in python
Question: I have two modules, baselib.Database and baselib.Application. In
baselib.Application, I have
import baselib.Database

APP = None

class BaseApplication():
    def __init__(dbClass = baselib.Database.GenericDb...):
        global APP
        this.dbClass = dbClass
        APP = this
    etc...
In baselib.Database, I have
import baselib.Application

def getDB(dbClass = baselib.Application.APP.dbClass):
    etc...

class GenericDB():
    def __init__(self, args):
        etc...
The problem is, when I import either of these modules, I get
    AttributeError: 'module' object has no attribute (modulename)
The problem seems to stem from the fact that the default arguments are
evaluated during the import; if I replace getDB with
    def getDB(dbClass=None):
        dbClass = dbClass or baselib.Application.APP.dbClass
Is this the best/most pythonic way to do this, or should I just avoid the
circular dependency entirely and combine the two modules into one file? I'd
really like to keep them separate because a large part of my codebase is
dependent on them.
Answer: I think this <http://effbot.org/zone/import-confusion.htm#circular-imports>
may answer your question. Basically, `import` is a statement that runs at module
load time, so your default arguments are evaluated while the other module is
still only partially initialized. Circular module dependencies are best avoided.
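Besides the `None`-default trick you already found, the other common fix is to defer the import into the function body, so nothing from the partner module is evaluated at import time. A sketch against the question's layout:

# baselib/Database.py
def getDB(dbClass=None):
    from baselib import Application  # deferred until call time
    if dbClass is None:
        dbClass = Application.APP.dbClass
    ...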
|
How to load user code?
Question: I have a program which automatically generates a data structure from user-
provided JSON code. I also want to provide an option to allow users to write
their own function to generate this data structure programmatically. Is there
a way for Python to load an arbitrary module by path and return that module's
namespace so I can call the user's function from my program?
Example, I want something like the following:
def make(custom):
    if not custom:
        return helper('example.json')
    else:
        return load('path/to/user-script.py').make() # this line here
Answer: You can import a named `.py` file as if it were a module by using
`imp.load_source`; see [this
question](http://stackoverflow.com/questions/67631/how-to-import-a-module-
given-the-full-path).
This is usually a pretty weird thing to do, but loading user hooks that don't
live in the Python module hierarchy seem like an okay excuse. :)
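A sketch of that applied to the question's `make()` (the module name 'user_script' is an arbitrary label):

import imp

def make(custom):
    if not custom:
        return helper('example.json')
    else:
        user_module = imp.load_source('user_script', 'path/to/user-script.py')
        return user_module.make()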
* * *
[This Python bug](http://bugs.python.org/issue14551) claims that `load_source`
is obsolete in Python 3, and advises the somewhat more cumbersome and less
obvious invocation:
importlib.SourceFileLoader(name, path).load_module(name)
|
PIL and vectorbased graphics
Question: I run into several problems when I try to open EPS- or SVG-Images with PIL.
Opening EPS
from PIL import Image
test = Image.open('test.eps')
ends in:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\Lib\site-packages\PIL\Image.py", line 1965, in open
    return factory(fp, filename)
  File "C:\Python27\Lib\site-packages\PIL\ImageFile.py", line 91, in __init__
    self._open()
  File "C:\Python27\Lib\site-packages\PIL\EpsImagePlugin.py", line 206, in _open
    raise IOError, "bad EPS header"
IOError: bad EPS header
Also opening SVG ends in `IOError: cannot identify image file`.
The problem is I have to support both formats in my application. Converting to
other formats is no alternative. I'm on Windows 7, Python 2.7.2 and PIL 1.1.7.
I uploaded both images: [EPS](http://bit.ly/VckNjH) and
[SVG](http://bit.ly/XcsIxA).
Answer: There are alternatives to PIL, but alternatives to PIL are not what you want -
there is no library I know of that would transparently open a vector-based
drawing and treat it just as any other image, short of opening a web browser
and grabbing its render.
For dealing with SVG, there is a recipe using Cairo - which can also handle
a lot of other formats, if a bit more difficult to deal with than the PIL API -
I think Cairo can also handle EPS - so, you can probably develop your app with
pycairo - or pycairo + PIL in the worst case.
The recipe for rendering SVGs is in the answer to: [Convert SVG to PNG in
Python](http://stackoverflow.com/questions/6589358/convert-svg-to-png-in-
python/6599172#6599172)
(note that you don't have to "convert the file to PNG" - the recipe shows how
you render to a cairo surface, which can be displayed, saved to a file, and so
on)
|
How to pass python variable to html variable?
Question: I need to read a url link from a text file in python as a variable, and use it
in html. The text file "file.txt" contains only one line,
"<http://188.xxx.xxx.xx:8878>". This line should be saved in the variable
"link", then the contents of this variable should be used in the html, so that
the link is opened when I click on the button image "go_online.png". I tried
to change my code as follows but it doesn't work! Any help please?
#!/usr/bin/python
import cherrypy
import os.path
from auth import AuthController, require, member_of, name_is

class Server(object):

    _cp_config = {
        'tools.sessions.on': True,
        'tools.auth.on': True
    }
    auth = AuthController()

    @cherrypy.expose
    @require()
    def index(self):
        f = open("file.txt", "r")
        link = f.read()
        print link
        f.close()
        html = """
        <html>
        <script language="javascript" type="text/javascript">
        var var_link = '{{ link }}';
        </script>
        <body>
        <p>{htmlText}
        <p>
        <a href={{ var_link }} ><img src="images/go_online.png"></a>
        </body>
        </html>
        """
        myText = ''
        myText = "Hellow World"
        return html.format(htmlText=myText)
    index.exposed = True

#configuration
conf = {
    'global' : {
        'server.socket_host': '0.0.0.0', #0.0.0.0 or specific IP
        'server.socket_port': 8085 #server port
    },
    '/images': { #images served as static files
        'tools.staticdir.on': True,
        'tools.staticdir.dir': os.path.abspath('/home/ubuntu/webserver/images')
    }
}

cherrypy.quickstart(Server(), config=conf)
Answer: First off, I'm not sure the javascript part makes any sense; just leave it
out. Also, you're opening a **p** tag but not closing it. I'm not sure what
your templating engine is, but you could just fill in the variables in pure
Python. Also, make sure to put quotes around your link. So your code should be
something like:
class Server(object):

    _cp_config = {
        'tools.sessions.on': True,
        'tools.auth.on': True
    }
    auth = AuthController()

    @cherrypy.expose
    @require()
    def index(self):
        f = open("file.txt", "r")
        link = f.read()
        f.close()
        myText = "Hello World"
        html = """
        <html>
        <body>
        <p>%s</p>
        <a href="%s" ><img src="images/go_online.png"></a>
        </body>
        </html>
        """ % (myText, link)
        return html
    index.exposed = True
(btw, the %s tokens are string placeholders that will be populated with the
variables in %(firstString, secondString) at the end of the multi-line
string.)
|
Compare 2 .csv files with Python then output results
Question: I'm fairly new at programming and I am trying to write a python program that
will compare 2 .csv files by specific columns and check for additions,
removals, and modifications. The .csv files are both in the following format,
contain the same number of columns, and use BillingNumber as the key:
BillingNumber,CustomerName,IsActive,IsCreditHold,IsPayScan,City,State
"2","CHARLIE RYAN","Yes","No","Yes","Reading","PA"
"3","INSURANCE BILLS","","","","",""
"4","AAA","","","","",""
I need to compare only columns 0, 1, 2, and 4. I have tried many different
ways to accomplish this but I haven't had any luck. I understand that I can
load them into dictionaries using `csv.DictReader` or `csv.reader`, but after
that I get stuck. I'm not sure exactly where or how to start after loading
them into memory.
I tried this previously:
import time

old_lines = set((line.strip() for line in open(r'Old/file1.csv', 'r+')))
file_new = open(r'New/file2.csv', 'r+')

choice = 0
choice = int( input('\nPlease choose your result format.\nEnter 1 for .txt, 2 for .csv or 3 for .json\n') )
time.sleep(1)
print(".")
time.sleep(1)
print("..")
time.sleep(1)
print("...")
time.sleep(1)
print("....")
time.sleep(1)
print('Done! Check "Different" folder for results.\n')

if choice == 1:
    file_diff = open(r'Different/diff.txt', 'w')
elif choice == 2:
    file_diff = open(r'Different/diff.csv', 'w')
elif choice == 3:
    file_diff = open(r'Different/diff.json', "w")
else:
    print ("You MUST enter 1, 2 or 3")
    exit()

for line in file_new:
    if line.strip() not in old_lines:
        file_diff.write("** ERROR! Entry "+ line + "** Does not match previous file\n\n")

file_new.close()
file_diff.close()
It doesn't work properly because if there is an additional line, or one is
missing, it logs everything after that line as different. Also it compares the
whole line which is not what I want to do. This was basically just a starting
point and although it kind of worked, it isn't specific enough for what I
need. I'm really just looking for a good place to start. Thanks!
Answer: I think you were on the right track using the csv module. Since
'BillingNumber' is a unique key, I would create one dict for the "old" billing
file, and another for the "new" billing file:
import csv

def make_billing_dict(csv_dict_reader):
    bdict = {}
    for entry in csv_dict_reader:
        key = entry['BillingNumber']
        bdict[key] = entry
    return bdict

with open('old.csv') as csv_file:
    old = csv.DictReader(csv_file)
    old_bills = make_billing_dict(old)
That results in this data structure for `old_bills`:
{'2': {'BillingNumber': '2',
       'City': 'Reading',
       'CustomerName': 'CHARLIE RYAN',
       'IsActive': 'Yes',
       'IsCreditHold': 'No',
       'IsPayScan': 'Yes',
       'State': 'PA'},
 '3': {'BillingNumber': '3',
       'City': '',
       'CustomerName': 'INSURANCE BILLS',
       'IsActive': '',
       'IsCreditHold': '',
       'IsPayScan': '',
       'State': ''},
 '4': {'BillingNumber': '4',
       'City': '',
       'CustomerName': 'AAA',
       'IsActive': '',
       'IsCreditHold': '',
       'IsPayScan': '',
       'State': ''}}
Once you create the same data structure for the "new" billing file, you can
easily find the differences:
# Keys that are in old_bills, but not new_bills
print set(old_bills.keys()) - set(new_bills.keys())
# Keys that are in new_bills, but not old_bills
print set(new_bills.keys()) - set(old_bills.keys())
# Compare columns for same billing records
# Will print True or False
print old_bills['2']['CustomerName'] == new_bills['2']['CustomerName']
print old_bills['2']['IsActive'] == new_bills['2']['IsActive']
Obviously, you wouldn't write a separate print statement for each comparison.
I'm just demonstrating how to use the data structures to find differences.
Next, you should write a function to loop through all possible BillingNumbers
and check for differences between old and new...but I'll leave that part for
you.
|
Starting out in Python Development: Coin-Tossing Loops
Question: Afternoon all,
I cross your paths as someone looking to teach himself programming. As such,
I've started with Python. As a disclaimer, I have searched the question for
some examples of Python coin-tosses but I've not really understood any of the
code that previous askers have come up with.
**My task** My educational material has asked me to come up with an
application that flips the virtual coin 100 times and then prints the results.
My idea was to use an infinite loop, break out when the coin toss reaches 100
and then print the results.
I've not quite achieved this and I'm not spotting the error in what I've
written. It seems to achieve the 100 flips but then only prints out 50 of
either Heads / Tails; thus my error is somewhere in the counting logic?
Any explanation (bearing in mind I'm a beginner, not a moron!) would get both
my gratitude and my upvote.
Thanks for reading
# Exercise 3.2
# Heads and Tails coin flip

#import random
import random

#declare variables
heads = 0
tails = 0
cointoss = 0
coinresult = random.randint(1,2)

#start the loop
while True:
    cointoss +=1
    #end the loop if cointoss is greater than 100
    if cointoss > 100:
        break
    if coinresult == 1:
        heads +=1
        cointoss +=1
    elif coinresult == 2:
        tails +=1
        cointoss +=1

print("Heads came up", heads, "times")
print("Tails came up", tails, "times")
Answer: You're incrementing `cointoss` twice per loop.
while True:
    cointoss +=1 # You already incremented here, therefore...
    if cointoss > 100:
        break
    if coinresult == 1:
        heads +=1
        cointoss +=1 # ...get rid of this...
    elif coinresult == 2:
        tails +=1
        cointoss +=1 # ...and this.
Also, as was pointed out, you should be getting a new `coinresult` somewhere
inside the loop.
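Putting both fixes together - a single increment per toss and a fresh `coinresult` each pass - might look like this sketch, where `range(100)` replaces the manual counter:

import random

heads = 0
tails = 0

for cointoss in range(100):            # exactly 100 tosses
    coinresult = random.randint(1, 2)  # flip a new coin every time
    if coinresult == 1:
        heads += 1
    else:
        tails += 1

print("Heads came up", heads, "times")
print("Tails came up", tails, "times")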
|
Python v2.6 rounding up with decimals (currency)?
Question: So I am very new to Python and have a question about rounding up.
product_price = '79.98'
subtotal = Decimal(product_price)
cal_tax = '0.0825'
tax_conv = Decimal(cal_tax)
tax_total = subtotal * tax_conv
total_tax = round(tax_total, 2)
print total_tax
self.assertEqual(total_tax, '6.60')
total_tax returns `6.6`, and the assert fails:
AssertionError: u'6.60' != 6.5999999999999996
I am sure there is a newbie mistake I am making. I would like for 'total_tax'
in this instance to round to 6.60 passing my assertion. I would be grateful
for any tips anyone has to offer. Thanks again!
Answer: Do not use `round` on decimals, `round` returns `float`. Use `quantize`:
total_tax = tax_total.quantize(Decimal('0.01'))
Then compare it with another `Decimal`:
self.assertEqual(total_tax, Decimal('6.60'))
If you want to round strictly up, quantize like this:
from decimal import Decimal, ROUND_UP
total_tax = tax_total.quantize(Decimal('0.01'), rounding=ROUND_UP)
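Putting it together with the numbers from the question (a quick sketch; `ROUND_HALF_UP` gives conventional currency rounding):
from decimal import Decimal, ROUND_HALF_UP
tax_total = Decimal('79.98') * Decimal('0.0825')   # Decimal('6.598350')
total_tax = tax_total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print total_tax   # 6.60
assert total_tax == Decimal('6.60')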
|
Get a JSON object in python
Question: Usually my webservice built with Bottle returns JSON, which works fine.
But I have an exception handler that needs to call a local function.
Here is what I tried to do:
import json
def getData():
return json.dumps({'data': someData })
def function():
try:
# Fail
except:
print getData()
print type(getData())
json.load(getData())
So it prints:
{"data": "myData"}
<type 'str'>
[...]
AttributeError: 'str' object has no attribute 'read'
So `json.dumps` gives me a string. How can I use it as JSON?
Answer: `json.load` loads JSON from a file object.
`json.loads` loads from a string. This is what you want.
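So for the snippet above, a one-line sketch:
import json
data = json.loads(getData())   # parses the JSON string back into a dict
print data['data']             # prints the value stored under 'data'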
|
Error "sqlserver_ado isn't an available database backend" (PyISAPIe on IIS)
Question: I'm having problems connecting my Django project to SQL Server 2008 when using
IIS to serve Django and [django-mssql](http://django-
mssql.readthedocs.org/en/latest/index.html) to handle transactions. I am using
IIS 7 and [64 bit ActivePython
2.7](http://www.activestate.com/activepython/downloads).
Here is my list of installed packages:
Django==1.4.5
distribute==0.6.19
django-mssql==1.2
pypm==1.3.4
pythonselect==1.3
pywin32==214
virtualenv==1.6.1
wsgiref==0.1.2
And here is the last bit of the stack trace:
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 40, in
backend = load_backend(connection.settings_dict['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 92, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\utils.py", line 51, in load_backend
raise ImproperlyConfigured(error_msg)
django.core.exceptions.ImproperlyConfigured: 'sqlserver_ado' isn't an available database backend.
Try using django.db.backends.sqlserver_ado instead.
Error was: DLL load failed: The specified module could not be found.
If I add the `sqlserver_ado` folder to `C:\Python27\Lib\site-
packages\django\db\backends` and change my database settings in `settings.py`
from `'ENGINE': 'sqlserver_ado',` to `'ENGINE':
'django.db.backends.sqlserver_ado',`, then I get a slightly different stack
trace.
File "C:\Python27\lib\site-packages\django\db\backends\sqlserver_ado\base.py", line 6, in
import dbapi as Database
File "C:\Python27\lib\site-packages\django\db\backends\sqlserver_ado\dbapi.py", line 49, in
import pythoncom
File "C:\Python27\lib\site-packages\pythoncom.py", line 2, in
import pywintypes
File "C:\Python27\lib\site-packages\win32\lib\pywintypes.py", line 124, in
__import_pywin32_system_module__("pywintypes", globals())
File "C:\Python27\lib\site-packages\win32\lib\pywintypes.py", line 64, in __import_pywin32_system_module__
import _win32sysloader
ImportError: DLL load failed: The specified module could not be found.
If I connect to a sqlite database instead of SQL Server, the application works
fine.
If I run the project using the development server, connecting to SQL Server
works fine.
So it seems the problem is the combination of IIS / PyISAPIe and django_mssql.
Several other questions have mentioned similar issues. Each of these was
solved by somehow getting the Python DLLs onto the system path. I tried both
(checking the path and copying the files into `c:\python2.7`), but I get the
same error.
* [Django error in Apache 2.2: Database backend fails in production but successful in development](http://stackoverflow.com/questions/13965251/django-error-in-apache-2-2-database-backend-fails-in-production-but-successful)
* [pywintypes27.dll not found using Apache, Django, pywin32, Python2.7 and mod_wsgi](http://stackoverflow.com/questions/7755626/pywintypes27-dll-not-found-using-apache-django-pywin32-python2-7-and-mod-wsgi)
* <http://code.google.com/p/django-mssql/issues/detail?id=107>
For a last bit of info, here is `sys.path` for the development server version
and the IIS / PyISAPIe version.
Development (works):
C:\Users\Administrator\Desktop\django test C:\Python27\python27.zip C:\Python27\DLLs C:\Python27\lib C:\Python27\lib\plat-win C:\Python27\lib\lib-tk C:\Python27 C:\Users\Administrator\AppData\Roaming\Python\Python27\site-packages C:\Python27\lib\site-packages C:\Python27\lib\site-packages\win32 C:\Python27\lib\site-packages\win32\lib C:\Python27\lib\site-packages\Pythonwin C:\Python27\lib\site-packages\setuptools-0.6c11-py2.7.egg-info
IIS (fails):
C:\PyISAPIe C:\Windows\system32\python27.zip C:\Python27\Lib C:\Python27\DLLs C:\Python27\Lib\lib-tk c:\windows\system32\inetsrv C:\Python27 C:\Python27\lib\site-packages C:\Python27\lib\site-packages\win32 C:\Python27\lib\site-packages\win32\lib C:\Python27\lib\site-packages\Pythonwin C:\Python27\lib\site-packages\setuptools-0.6c11-py2.7.egg-info c:\inetpub\PyApp
Any tips or suggestions of where to go from here would be appreciated. I'm
going to try out normal (i.e. non-Active) Python next to see if that makes a
difference.
Answer: Installing 64 bit python from scratch and following the advice
[here](http://stackoverflow.com/questions/7755626/pywintypes27-dll-not-found-
using-apache-django-pywin32-python2-7-and-mod-wsgi) worked. The problem must
have been some goofiness with Active Python.
There was one thing I did notice that may be helpful.
* With a normal installation of python and pywin32 (using the executables from the linked sites), `C:\Python27\Lib\site-packages` contained a folder named `pywin32_system32` which contained the DLLs that needed to be copied to `C:\Python27` to solve the problem.
* With the Active Python installation, this directory did not exist.
I also noticed that the directories that are there for both installation
methods (`win32`, `win32com`, and `win32comext`) contain slightly different
files.
I hope this saves someone else some pain in the future.
|
Find and replace everything between two placeholders with the contents of a variable
Question: Aloha, I have been trying to figure out how to replace/insert text strings
between two placeholders.
#start
REPLACE ANYTHING IN HERE
#end
Originally I was trying to do this with BASH via sed, but hit a road-block
when I tried to pass a variable to sed.
sed -n -i '/#start/{p;:a;N;/#end/!ba;s/.*\n/hello\n/};p' file.txt
Returns
#start
hello
#end
but no joy when I try
sed -n -i '/#start/{p;:a;N;/#end/!ba;s/.*\n/$replace_var\n/};p' file.txt
or
sed -n -i "/#start/{p;:a;N;/#end/!ba;s/.*\n/$replace_var\n/};p" file.txt
I've been at this for hours, and have searched around but have not found a
solution. I'm open to trying Python or another language, or maybe awk.
I'm kind of new in this realm, so any useful information would be appreciated.
Thanks in advance.
This is what I went with in the end. It's a script that, in conjunction with
cron, updates my /etc/hosts.deny file with the latest published SSH
blocklist.
import re
import urllib2
hosts_deny = open('/etc/hosts.deny','r+')
hosts_deny_text = hosts_deny.read()
blockedHosts = urllib2.urlopen('http://www.openbl.org/lists/hosts.deny').read()
place = re.compile('(?<=#start)(\r?\n)'
'(.*?)'
'(?=\r?\n#end)',re.DOTALL)#DOTALL enables '.' to also include
#a new line
hosts_deny_text = re.sub(place, '\n'+ blockedHosts, hosts_deny_text)
hosts_deny.seek(0)
hosts_deny.write(hosts_deny_text)
hosts_deny.truncate()  # drop leftover bytes in case the new text is shorter
hosts_deny.close()
Answer: This does seem to do what you want:
sed -ie "/#start/,/#end/{/#start/b;/#end/b;s/.*/$replace_var/;}" file.txt
The inner `/#start/b`and `/#end/b` skip those lines, otherwise you'd replace
them as well.
|
Initialize/Create/Populate a Dict of a Dict of a Dict in Python
Question: I have used dictionaries in python before but I am still new to python. This
time I am using a dictionary of a dictionary of a dictionary... i.e., a three
layer dict, and wanted to check before programming it.
I want to store all the data in this three-layer dict, and was wondering
what would be a nice pythonic way to initialize it, and then read a file and
write to such a data structure.
The dictionary I want is of the following type:
{'geneid':
{'transcript_id':
{col_name1:col_value1, col_name2:col_value2}
}
}
The data is of this type:
geneid\ttx_id\tcolname1\tcolname2\n
hello\tNR432\t4.5\t6.7
bye\tNR439\t4.5\t6.7
Any ideas on how to do this in a good way?
Thanks!
Answer: First, let's start with the [`csv`](http://docs.python.org/2/library/csv.html)
module to handle parsing the lines:
import csv
with open('mydata.txt', 'rb') as f:
for row in csv.DictReader(f, delimiter='\t'):
print row
This will print:
{'geneid': 'hello', 'tx_id': 'NR432', 'colname1': '4.5', 'colname2': '6.7'}
{'geneid': 'bye', 'tx_id': 'NR439', 'colname1': '4.5', 'colname2': '6.7'}
So, now you just need to reorganize that into your preferred structure. This
is almost trivial, except that you have to deal with the fact that the first
time you see a given `geneid` you have to create a new empty `dict` for it,
and likewise for the first time you see a given `tx_id` within a `geneid`. You
can solve that with
[`setdefault`](http://docs.python.org/2/library/stdtypes.html#dict.setdefault):
import csv
genes = {}
with open('mydata.txt', 'rb') as f:
for row in csv.DictReader(f, delimiter='\t'):
gene = genes.setdefault(row['geneid'], {})
transcript = gene.setdefault(row['tx_id'], {})
transcript['colname1'] = row['colname1']
transcript['colname2'] = row['colname2']
You can make this a bit more readable with
[`defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict):
import csv
from collections import defaultdict
from functools import partial
genes = defaultdict(partial(defaultdict, dict))
with open('mydata.txt', 'rb') as f:
for row in csv.DictReader(f, delimiter='\t'):
genes[row['geneid']][row['tx_id']]['colname1'] = row['colname1']
genes[row['geneid']][row['tx_id']]['colname2'] = row['colname2']
The trick here is that the top-level `dict` is a special one that returns a new
`defaultdict(dict)` whenever it first sees a new key… and that `defaultdict` it
returns in turn creates an empty `dict` for each new key of its own. The only
hard part is that `defaultdict` takes a function that returns the right kind of
object, and a function that returns a `defaultdict(dict)` has to be written with
a `partial`, `lambda`, or an explicit function. (There are recipes on ActiveState and modules on PyPI that
will give you an even more general version of this that creates new
dictionaries as needed all the way down, if you want.)
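Once built, lookups read naturally; a quick check against the sample data:
print genes['hello']['NR432']['colname1']   # -> '4.5'
print genes['bye']['NR439']['colname2']     # -> '6.7'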
|
Python CGI Issues
Question: I am building a simple web app (posted part of it yesterday) but I am
struggling with a portion:
1) Request a text file to upload 2) Save the uploaded file to a directory
I am using python and cgi for this. cgi is working as confirmed with a simple
test.cgi file.
Here is my current code for request_input.cgi:
#!/usr/bin/python
import cgi
print "Content-type: text/html\r\n\r\n"
print '<html>'
print '<body>'
print '<form enctype="multipart/form-data" action="save_input.cgi" method="post">'
print '<p>File: <input type="file" name="filename" /></p>'
print '<p><input type="submit" value="Upload" /></p>'
print '</form>'
print '</body>'
print '</html>'
Now when I tail the apache error log I get the following errors: "(2)No such
file or directory: exec of '/var/www/ipcheck/request_input.cgi' failed,
referer: [http://192.168.3.77/ipcheck/?C=M;O=A]" "Premature end of script
headers: request_input.cgi, referer [http://192.168.3.77/ipcheck/?C=M;O=A]"
Any help would be awesome! Thanks a lot
Answer: dos2unix resolved the issue. The script was built on Windows and moved to
Linux, so the DOS line endings broke the shebang line. Thanks for the help.
|
Python: toggle a button (adding more buttons)
Question: This is just the initial code for what will be an array of buttons affecting
each other. I can't seem to understand why I keep getting this definition
error!
from tkinter import *
import tkinter.messagebox
from tkinter import ttk
def changeImage(Num):
global buttonOn
global buttonOff
if Num == 1:
if button1(image) == buttonOn:
button1.config(image=buttonOff)
else:
button1.config(image=buttonOn)
root = Tk()
root.geometry('155x190')
root.title("Tile Turner")
buttonOn = PhotoImage(file="buttonPic.gif")
buttonOff = PhotoImage(file="buttonPic2.gif")
button1 = Button(image=buttonOn, width=20, height=20, command=changeImage(1))
buttonQuit = Button(text="Quit", width=10, height=0, command=root.destroy)
app.grid(column=0, row=0)
button1.grid(column=2, row = 3)
buttonQuit.grid(column=3, row = 10, columnspan = 4)
root.mainloop()
My definition error is in button1:
Traceback (most recent call last):
File "C:/Users/Jimmy/Desktop/COS 2013/Game1/small", line 23, in <module>
button1 = Button(image=buttonOn, width=20, height=20, command=changeImage(1))
File "C:/Users/Jimmy/Desktop/COS 2013/Game1/small", line 10, in changeImage
if button1(image) == buttonOn:
NameError: global name 'button1' is not defined
Any help at all would be appreciated!
Answer: In this line,
button1 = Button(image=buttonOn, width=20, height=20, command=changeImage(1))
You call the function `changeImage` passing in `1` as the argument. That
function then gets evaluated and the _result_ (`None` in this case) gets
passed to the `command=...` default argument of the `Button` constructor. Of
course, this causes you to get the `NameError` because you call `changeImage`
before you actually pass it to the Button constructor -- i.e. `button1`
doesn't exist yet because it is waiting for the `changeImage` function to
finish before it can continue on with constructing the `Button` instance.
You want something like:
button1 = Button(...,command=lambda:changeImage(1))
This creates a new function which when called simply calls `changeImage` with
the proper argument.
To elaborate a little further on lambda, the above statement is more or less
shorthand for
def temp_function():
return changeImage(1)
button1 = Button(...,command=temp_function)
|
I am trying to find and replace the values in the text file
Question: I have a text file containing values (e.g. 0.2803739 0.280314). I would like to
replace the values in the text file in such a way that the highest value will be
replaced by the lowest value, and so on. E.g., if the text file contains values from 1
to 10, then value 10 should be replaced by value 1, value 9 should be replaced
by value 2, value 8 should be replaced by value 3, and so on. In the following script the
3rd "for loop" is getting ignored.
import fileinput
text_file = r"E:\Jagu\test123.txt"
f1 = open(text_file, 'r')
unique_v = set(f1.read().split())
a = list(unique_v)
new_list1= list(a)
new_list1.sort(reverse = True )
new_list2 = new_list1
new_list2.sort(reverse = False )
l = 0
m = len (new_list2)
m = m-1
f2 = open(text_file + ".tmp", 'w')
for j in new_list1:
c = new_list1 [l]
l = l + 1
for k in new_list2:
d = new_list2[m]
m = m - 1
for line in f1:
f2.write(line.replace(c,d))
print "replaced : " + str(c) + " with : " + str(d)
f1.close()
f2.close()
Hope the explanation is helpful to understand my issue. I am a beginner in
Python programming. Any help would be appreciated.
Answer: Issue 1:
new_list2 = new_list1
This will make `new_list2` and `new_list1` point to the same list. You need
new_list2 = list(a)
or
new_list2 = new_list[:]
Issue 2:
You cannot do
for line in f1:
after you have `read()` from `f1`.
Do
f1.seek(0)
l = f1.readlines()
for line in l:
|
Daemonizing a python script with python-daemon - socket trouble
Question: I'm trying to daemonize some code, but I'm having some trouble.
If I call the code with tklogger(), it runs just fine. However, if I call it
in the daemon context, I get the following trace:
Traceback (most recent call last):
File "./tklogger.py", line 103, in <module>
tklogger()
File "./tklogger.py", line 41, in tklogger
conn, addr = s.accept()
File "/usr/lib/python2.6/socket.py", line 195, in accept
sock, addr = self._sock.accept()
socket.error: [Errno 9] Bad file descriptor
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
My code is as follows:
#!/usr/bin/python
# tklogger, a TK10X GPS tracking device logger
import socket
import time
import daemon
HOST = '' # Bind to all interfaces
PORT = 9000 # Arbitrary non-privileged port
IMEI = '359710040656622' # Device IMEI
REQUEST_DATA = 1 # Do we want to request data?
INTERVAL = 30 # How often do we want updates?
LOGDIR = '/var/log/tklogger/' # Where shall we log?
# END CONFIG
# Establish socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # allow re-use of the address
s.bind((HOST, PORT))
s.listen(1)
# Open log files
logger = open(LOGDIR + 'tklogger.log', 'a')
deviceLog = open(LOGDIR + IMEI + '.csv', 'a')
def sendTracker(DATA):
conn.send(DATA)
log("\t<< " + DATA)
def log(DATA):
#print (DATA)
logger.write(DATA + '\n')
logger.flush()
def tklogger():
# Accept connections as they come
while 1:
global conn
conn, addr = s.accept()
strNow = time.strftime("(%d/%m/%Y %H:%M:%S)", time.localtime(time.time()))
log(strNow + ' Accepted connection from ' + addr[0])
# Fetch data from the socket
while 1:
data = conn.recv(1024)
if not data: break
data = data.rstrip()
log("\t>> " + data)
# Check for logon & send data request
if data == '##,imei:' + IMEI + ',A;':
sendTracker('LOAD')
if REQUEST_DATA:
time.sleep(5)
request = '**,imei:' + IMEI + ',C,' + str(INTERVAL) + 's'
sendTracker(request)
# Check for heartbeat
if data == IMEI + ';':
sendTracker('ON')
# Parse actual data
if data[:20] == 'imei:' + IMEI:
# Split into fields
# id, mode, dateTime, ??, LBS??, ??, fixType??, lat, N/S, lon, E/W, speed?, bearing?
fields = data.split(',');
if fields[6] == 'A':
# Hopefully we have the protocol right...
try:
# Convert to degress decimal.
latDeg = round(float(fields[7][:2]) + (float(fields[7][2:]) / 60.0), 5)
lonDeg = round(float(fields[9][:3]) + (float(fields[9][3:]) / 60.0), 5)
if fields[8] == 'S': latDeg = -latDeg
if fields[10] == 'W': lonDeg = -lonDeg
# Date & time
msgDate = fields[2][4:6] + '/' + fields[2][2:4] + '/' + fields[2][:2]
msgTime = fields[2][6:8] + ':' + fields[2][8:]
# Speed
speed = round(1.852 * float(fields[11]), 2)
# Bearing
bearing = float(fields[12].rstrip(';'))
# Log the device data
deviceLog.write(msgDate + ',' + msgTime + ',' + str(latDeg) + ',' + str(lonDeg) + ',' + str(speed) + ',' + str(bearing) + '\n')
deviceLog.flush()
# Just in case something goes wrong though
except: pass
conn.close()
strNow = time.strftime("(%d/%m/%Y %H:%M:%S)", time.localtime(time.time()))
log(strNow + ' Connection from ' + addr[0] + ' closed')
with daemon.DaemonContext(stderr = logger):
tklogger()
Suggestions would be appreciated!
Answer: The act of daemonizing kills all existing sockets. Therefore, you must open
your socket `s` after daemonization (inside the DaemonContext).
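A minimal sketch of the fix (this assumes python-daemon's `files_preserve` option to keep the already-open log files alive across the fork; alternatively, open those inside the context as well):
def tklogger():
    # Create the socket *after* daemonization; the DaemonContext closes
    # file descriptors (including sockets) opened before it starts.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((HOST, PORT))
    s.listen(1)
    while 1:
        conn, addr = s.accept()
        # ... handle the connection exactly as in the original loop ...
        conn.close()
# files_preserve keeps the already-open log file handles across daemonization
with daemon.DaemonContext(stderr=logger, files_preserve=[logger, deviceLog]):
    tklogger()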
|
python script advanced scheduling
Question: I'm trying to do something really complicated. Using a Windows box, I'm trying to
get a script to run every half-hour, Mon-Fri, 9:00am-7:00pm, skipping
certain dates I define as "holidays". I would love for Python to run this
script itself. I've looked into 'apscheduler', but can't seem to find the right
options I need to do this. If I can't do this through Python, what other
solutions can I look at?
By the way, as of right now, I'm running Python 3.3, but am willing to
downgrade if necessary.
Answer: decorate your job-functions to skip the special days:
from datetime import date
def not_on(dates):
    def decor(fn):
        def wrapper(*args, **kwargs):
            # check the date at *call* time, not at decoration time,
            # so a long-running scheduler still skips future holidays
            if date.today() in dates:
                return
            return fn(*args, **kwargs)
        return wrapper
    return decor
@not_on( ( date(2013, 3, 1), ) )
def job():
    print("yeah")
Then just schedule your jobs for the regular dates and you're done. If the job is
called on a special day, the decorator will simply skip execution.
Just keep using `apscheduler`.
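The recurring Mon-Fri/half-hour part is then just a cron-style job; a sketch assuming the APScheduler 2.x `add_cron_job` API:
from apscheduler.scheduler import Scheduler
sched = Scheduler()
sched.start()
# every half hour, Mon-Fri, 9:00 through 18:30 ...
sched.add_cron_job(job, day_of_week='mon-fri', hour='9-18', minute='0,30')
# ... plus the final 19:00 run
sched.add_cron_job(job, day_of_week='mon-fri', hour=19, minute=0)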
|
Pure Python with Cython decorators: How to get attribute access at module level
Question: I would like to write pure Python with Cython decorators, but when I
rename my none.py to none.pyx I get an error. To work around this issue I
need to wrap each decorated function with a plain Python definition without the
decorator. I wonder why...
Here is the module none.pyx (if you rename it to none.py, you will have no issue
at all):
import cython
@cython.cfunc
@cython.returns(cython.double)
@cython.locals(n=cython.int,i=cython.int,r=cython.int)
def ccrange(n):
r=0
for i in range(n):
r+=i
return r
def crange(n): return ccrange(n)
and the python test file test_none.py:
import pyximport; pyximport.install()
import none
n=10000
print ">>pure python call>>",none.crange(n)
print ">>cython call>>",none.ccrange(n)
Result with none.pyx:
> > pure python call>> 49995000.0 cython call>> Traceback (most recent call
> last): File "C:\Users\damien\python4d\bacoland\test_none.py", line 6, in
> print ">>cython call>>",none.ccrange(n)
> AttributeError: 'module' object has no attribute 'ccrange'
Renaming none.pyx to none.py gives:
> > pure python call>> 49995000 cython call>> 49995000
Thanks for Help! Have a NiceDay :-)
EDIT: Avoiding the decorator @cython.cfunc breaks the speed advantage of
cython... Consider the following code with and without @cython.cfunc:
@cython.cfunc
@cython.returns(cython.double)
@cython.locals(n=cython.int)
def fibo_c(n):
if n == 0 or n == 1:
return n
return fibo_c(n-2) + fibo_c(n-1)
Answer: `@cython.cfunc` decorator is an equivalent to `cdef`ing a function (see
[here](http://docs.cython.org/src/tutorial/pure.html?highlight=cfunc#extension-
types-and-cdef-functions) for details), so this function is accessible only
within `C` code. So, to make it accessible from Python get rid of
`@cython.cfunc`.
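Regarding the EDIT: to keep the fast C calling convention while still exposing the function to Python, `@cython.ccall` (the pure-Python spelling of `cpdef`, assuming a Cython version that supports it) may be what you want:
import cython
@cython.ccall                      # pure-Python equivalent of cpdef
@cython.returns(cython.double)
@cython.locals(n=cython.int)
def fibo_c(n):
    if n == 0 or n == 1:
        return n
    return fibo_c(n - 2) + fibo_c(n - 1)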
|
How to make another list of the duplicate entries in existing list using python?
Question: I had a list like:
l = [[(3,4)], [(3,7)], [(3,8)]]
I used the `chain()` function to flatten the list, so now I have a list like:
l2 = [3,4,3,7,3,8]
I want to separate the duplicate items into another list:
l3 = [3,3,3]
l4 = [4,7,8]
I used the `set()` function, but it destroyed the duplicate items and resulted
in:
l3 = [4,7,8]
but I want to obtain both of them separately
Answer: You'd have to do some kind of count. Using the [`collections.Counter()`
class](http://docs.python.org/2/library/collections.html#collections.Counter)
would make that easy:
from collections import Counter
counts = Counter(main_list)
duplicate_list, unique_list = [], []
for entry in main_list:
if counts[entry] > 1:
duplicate_list.append(entry)
else:
unique_list.append(entry)
`counts` is a multi-set or bag; a mapping of entry to it's count in
`main_list`. The above example preserves the ordering of `main_list`.
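The same idea as a compact sketch with list comprehensions, checked against the data from the question:
from collections import Counter
main_list = [3, 4, 3, 7, 3, 8]
counts = Counter(main_list)
duplicate_list = [x for x in main_list if counts[x] > 1]   # [3, 3, 3]
unique_list = [x for x in main_list if counts[x] == 1]     # [4, 7, 8]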
|
Webkit under Windows with PyQt doesn't get remote resources via xhr
Question: I would like to write a Qt application which uses Webkit as its gui to get
data from a server and display it. I got it working under Linux and OS X
without problems, but under Windows the XMLHttpRequest always returns status 0
and I don't know why. Here is the pyqt code I use:
import sys, os
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
web.page().settings().setAttribute(QWebSettings.LocalContentCanAccessRemoteUrls, True)
path = os.path.abspath(os.path.join(os.path.dirname(__file__), 'index.html'))
url = "file://localhost/" + path
web.load(QUrl(url))
web.show()
sys.exit(app.exec_())
and here is html HTML/JS I use to test it:
<!DOCTYPE html>
<title>TEST</title>
<h1>TEST</h1>
<div id="test"></div>
<script type="text/javascript">
function t(text) { document.getElementById("test").innerHTML = text }
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
if(this.status != 0)
t(this.responseText)
else
t("Status is 0")
}
xhr.open("GET", "https://jeena.net/")
xhr.send()
</script>
On Linux it opens a new window with a WebKit view in it, loads the local
index.html file into it and renders it, which shows the TEST headline. After
that it runs the XMLHttpRequest code to get a website's content and set it with
innerHTML into the prepared div.
On windows it loads and shows the title but then when it runs the xhr code the
status is always just 0 and it never changes, no matter what I do.
As far as I understand `LocalContentCanAccessRemoteUrls` should make it
possible for the xhr to get that content from the remote website even on
windows, any idea why this is not working? I am using Qt version 4.9.6 on my
windows machine and python v2.7.
Answer: I think there are two simple things to try for this problem.
My first thought is that it could be due to a cross-domain request. It seems that
there is no easy way to disable cross-domain protection in QtWebKit. I got the
information from this Stack Overflow question:
[QtWebkit Same-Origin-
policy](http://stackoverflow.com/questions/8090462/qtwebkit-same-origin-
policy)
As stated in the accepted answer:
"By default, Qt doesn't expose method to disable / whitelist the same origin
policy. Extended the same (qwebsecurityorigin.cpp) and able to get it
working."
But since you've got everything working on Linux and Mac, the above may not be
the cause.
Another possibility is that you don't have OpenSSL enabled with your Qt on Windows.
I noticed you are requesting an **https** page, which requires
OpenSSL. You can point the request at an **http** page to quickly test this
possibility.
|
How do I embed an IPython Interpreter into an application running in an IPython Qt Console
Question: There are a few topics on this, but none with a satisfactory answer.
I have a python application running in an IPython qt console
<http://ipython.org/ipython-doc/dev/interactive/qtconsole.html>
When I encounter an error, I'd like to be able to interact with the code at
that point.
try:
raise Exception()
except Exception as e:
try: # use exception trick to pick up the current frame
raise None
except:
frame = sys.exc_info()[2].tb_frame.f_back
namespace = frame.f_globals.copy()
namespace.update(frame.f_locals)
import IPython
IPython.embed_kernel(local_ns=namespace)
I would think this would work, but I get an error:
RuntimeError: threads can only be started once
Answer: I just use this:
from IPython import embed; embed()
works better than anything else for me :)
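Applied to the error-handling case from the question (a minimal sketch; `embed()` picks up the locals of the frame it is called from, so no frame gymnastics are needed):
try:
    raise Exception()
except Exception as e:
    from IPython import embed; embed()   # interactive shell; e and all locals are in scope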
|
SQLAlchemy+pymysql Error: sqlalchemy.util.queue.Empty
Question: Trying to run Python 3.2, SQLAlchemy 0.8 and MySQL 5.2 on Ubuntu using Eclipse,
but I keep getting the error below. I am using the pymysql (pymysql3, actually)
engine.
**module monitor**
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import declarative_base
Engine = create_engine('mysql+pymysql://user:mypass@localhost/mydb')
Base = declarative_base(Engine)
Metadata = MetaData(bind=Engine)
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Table, Column, Integer
Session = sessionmaker(bind=Engine)
session = Session()
class Student(Base):
__table__ = Table('student_name', Metadata,
Column('id', Integer, primary_key=True),
autoload=True)
With that, when I run the module it throws the error as indicated below. What
am I doing wrong?
Traceback (most recent call last):
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 757, in _do_get
return self._pool.get(wait, self._timeout)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/util/queue.py", line 166, in get
raise Empty
sqlalchemy.util.queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lukik/workspace/upark/src/monitor.py", line 12, in <module>
class Parking(Base):
File "/home/lukik/workspace/upark/src/monitor.py", line 15, in Parking
autoload=True)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/schema.py", line 333, in __new__
table._init(name, metadata, *args, **kw)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/schema.py", line 397, in _init
self._autoload(metadata, autoload_with, include_columns)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/schema.py", line 425, in _autoload
self, include_columns, exclude_columns
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/base.py", line 1604, in run_callable
with self.contextual_connect() as conn:
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/base.py", line 1671, in contextual_connect
self.pool.connect(),
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 272, in connect
return _ConnectionFairy(self).checkout()
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 425, in __init__
rec = self._connection_record = pool._do_get()
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 777, in _do_get
con = self._create_connection()
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 225, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/pool.py", line 322, in __init__
exec_once(self.connection, self)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/event.py", line 381, in exec_once
self(*args, **kw)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/event.py", line 398, in __call__
fn(*args, **kw)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/strategies.py", line 168, in first_connect
dialect.initialize(c)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/dialects/mysql/base.py", line 2052, in initialize
default.DefaultDialect.initialize(self, connection)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/default.py", line 172, in initialize
self._get_default_schema_name(connection)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/dialects/mysql/base.py", line 2019, in _get_default_schema_name
return connection.execute('SELECT DATABASE()').scalar()
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/base.py", line 664, in execute
params)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/base.py", line 808, in _execute_text
statement, parameters
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/base.py", line 871, in _execute_context
context)
File "/usr/local/lib/python3.2/dist-packages/SQLAlchemy-0.8.0b2-py3.2.egg/sqlalchemy/engine/default.py", line 322, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.2/dist-packages/pymysql/cursors.py", line 105, in execute
query = query % escaped_args
TypeError: unsupported operand type(s) for %: 'bytes' and 'tuple'
Answer: The traceback ends with pymysql trying to %-format a `bytes` query with a tuple, which is a Python 3 incompatibility in that driver. `mysql-connector-python` and `oursql` work fine for me under py3k.
# How to install?
$ pip install mysql-connector-python
# Usage
Engine = create_engine('mysql+mysqlconnector://<USERNAME>:<PASSWD>@<HOSTNAME>:<PORT>/<DBNAME>')
|
Python 2.7 Unicode/IDLE confusion
Question: I've read a lot about Unicode and the various encodings/decodings in Python
2.7, but I'm still having trouble understanding why IDLE can't seem to print
the right string.
I have a unicode string:
>>> s = u"Hey I\u2019m Bob"
>>> print s
Hey I'm Bob
However, after I encode into UTF-8, I get:
>>> s.encode('utf-8')
'Hey I\xe2\x80\x99m Bob'
>>> print s.encode('utf-8')
Hey I’m Bob
I can't figure out why I can't print this correctly, but I can write
s.encode('utf-8') to a text file and it comes out correctly. Not sure if this
has something to do with IDLE?
Answer: It is because IDLE's output window doesn't support UTF-8, while printing a
Unicode string automatically tries to encode the string in the stdout
encoding. The following is from an IDLE console:
>>> s = u"Hey I\u2019m Bob"
>>> print s
Hey I’m Bob
>>> print s.encode('utf8')
Hey I’m Bob
>>> import sys
>>> sys.stdout.encoding
'cp1252'
>>> print s.encode('cp1252')
Hey I’m Bob
|
a strange issue when trying to analysis HTML with beautifulsoup
Question: I'm trying to write some Python code to gather music chart data from
official websites, but I run into trouble when gathering Billboard's data. I
chose BeautifulSoup to handle the HTML.
My environment: Python 2.7, BeautifulSoup 3.2.0.
First I parse the HTML:
>>> import BeautifulSoup, urllib2, re
>>> html = urllib2.urlopen('http://www.billboard.com/charts/hot-100?page=1').read()
>>> soup = BeautifulSoup.BeautifulSoup(html)
Then I try to extract the data I want, e.g. the artist name.
HTML:
<div class="listing chart_listing">
<article id="node-1491420" class="song_review no_category chart_albumTrack_detail no_divider">
<header>
<span class="chart_position position-down">11</span>
<h1>Ho Hey</h1>
<p class="chart_info">
<a href="/artist/418560/lumineers">The Lumineers</a> <br>
The Lumineers </p>
The artist name is The Lumineers:
>>> print str(soup.find("div", {"class" : re.compile(r'\bchart_listing')})\
... .find("p", {"class":"chart_info"}).a.string)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'find'
NoneType! It seems it can't find the data I want; maybe my rule is wrong, so
I try to match some basic tags instead.
>>> print str(soup.find("div"))
None
>>> print str(soup.find("a"))
None
>>> print str(soup.find("title"))
<title>The Hot 100 : Page 2 | Billboard</title>
>>> print str(soup)
......entire HTML.....
I'm confused: why can't it find basic tags like div and a? They are clearly there.
What's wrong with my code? Nothing goes wrong when I analyze
other charts with the same code.
Answer: This seems to be a Beautifulsoup 3 issue. If you prettify() the output:
from BeautifulSoup import BeautifulSoup as soup3
import urllib2, re
html = urllib2.urlopen('http://www.billboard.com/charts/hot-100?page=1').read()
soup = soup3(html)
print soup.prettify()
you can see at the end of the output:
<script type="text/javascript" src="//assets.pinterest.com/js/pinit.js"></script>
</body>
</html>
</script>
</head>
</html>
With two html end tags, it looks like BeautifulSoup3 is confused by the
Javascript stuff in this data.
If you use:
from bs4 import BeautifulSoup as soup4
import urllib2, re
html = urllib2.urlopen('http://www.billboard.com/charts/hot-100?page=1').read()
soup = soup4(html)
print str(soup.find("div", {"class" : re.compile(r'\bchart_listing')}).find("p", {"class":"chart_info"}).a.string)
You get `'The Lumineers'` as output.
If you cannot switch to bs4, I suggest you write out the html variable to a
file `out.txt`, then change the script to read from `in.txt`, copy the output file
to the input, and cut away chunks until you isolate the problem:
from BeautifulSoup import BeautifulSoup as soup3
import re
html = open('in.txt').read()
soup = soup3(html)
print str(soup.find("div", {"class" : re.compile(r'\bchart_listing')}).find("p", {"class":"chart_info"}).a.string)
My first guess was to remove the `<head> ... </head>` and that worked wonders.
After that you can solve that programmatically:
from BeautifulSoup import BeautifulSoup as soup3
import urllib2, re
htmlorg = urllib2.urlopen('http://www.billboard.com/charts/hot-100?page=1').read()
head_start = htmlorg.index('<head')
head_end = htmlorg.rindex('</head>')
head_end = htmlorg.index('>', head_end)
html = htmlorg[:head_start] + htmlorg[head_end+1:]
soup = soup3(html)
print str(soup.find("div", {"class" : re.compile(r'\bchart_listing')}).find("p", {"class":"chart_info"}).a.string)
|
Python : import module once for a whole package
Question: I'm currently coding an app which is basically structured this way:
main.py
\+ Package1
+--- Class1.py
+--- Apps
\+ Package2
+--- Class1.py
+--- Apps
So I have two questions. First, inside both packages, there are modules
needed by all Apps, e.g. re. Is there a way I can import a module for the
whole package at once, instead of importing it in every file that needs it?
And, as you can see, Class1 is used in both packages. Is there a good way to
share it between both packages to avoid code duplication?
Answer: I would strongly recommend against doing this: by separating the imports from
the module that uses the functionality, you make it more difficult to track
dependencies between modules.
If you really want to do it though, one option would be to create a new module
called `common_imports` (for example) and have it do the imports you are
after.
Then in your other modules, add the following:
from common_imports import *
This should give you all the public names from that module (including all the
imports).
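For example, a minimal sketch (the module name and contents are placeholders):
# common_imports.py
import re
import os
import sys
After `from common_imports import *`, the names `re`, `os` and `sys` are available in the importing module, since imported modules count as public names.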
|
Unable to login to django admin view with valid username and password
Question: I know this question has been asked several times before, but most of those
questions were asked long ago and the old answers did not work for me.
I have a django-nonrel based app which is using dbindexer as backend and
deployed on GAE. I am able to view homepage of my app which does not require
login.
But when I try to login to admin view, it gives "wrong username / password"
On my local development server, if I use "manage.py runserver", then I am able
to login on admin page. But If I run my app through GAE launcher, then I am
not able to login.
I gathered that the GAE launcher uses a different Django from "manage.py
runserver". So, how can I make GAE (on the launcher as well as on the deployment
server) use django-nonrel?
Other details:
app.yaml does NOT include "django" library.
settings.py
DATABASES['native'] = DATABASES['default']
DATABASES['default'] = {'ENGINE': 'dbindexer', 'TARGET': 'native'}
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'djangotoolbox',
'autoload',
'dbindexer',
# Uncomment the next line to enable the admin:
'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
'django.contrib.admindocs',
'djangoappengine',
'testapp',
)
urls.py
from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
url(r'^admin/', include(admin.site.urls)),
)
**UPDATE 1::**
As @dragonx pointed out, I need to run `python manage.py remote
createsuperuser` and create the user.
On the local server, when I run 'manage.py syncdb', it fills the database with
initial data, which also includes creating a superuser. I use
'initial_data.yaml' inside the 'fixtures' directory for this, and it is read
automatically by the syncdb command.
So, is there any way to run 'syncdb' on the server side? Somehow I assumed this
happened automatically at deployment, just as 'manage.py runserver' takes care of
things locally, and that I did not need to run it manually.
If I run `manage.py remote syncdb`, it blurts out the following error:
google.appengine.api.datastore_errors.NeedIndexError: no matching index found.
<<ed>>some stack trace<<ed>>
The suggested index for this query is:
- kind: django_content_type
properties:
- name: app_label
- name: name
**Update 2:**
Instead of using the `appcfg.py update site` command, if you use `python manage.py
deploy` from your app directory, it runs the fixtures on the remote server. I don't
know which tool is doing what.
`manage.py remote loaddata initdata.yaml` can also be used to initialize the
remote database.
But even after this, I still do not see the fixture data loaded in the admin
interface, i.e. it seems the database was not initialized, or maybe the admin view
is badly broken. But I'll save that for another question~
Answer: When you run `python manage.py runserver` it starts a local dev server on your
local machine. It has it's own dev datastore on your local machine. At some
point you created an admin user in your local database.
When you deploy on app engine, it runs your code on Google's servers. There's
a datastore there, but it doesn't share the data on your dev server. You'll
need to create an admin user in the production datastore too. Try:
`python manage.py remote createsuperuser`
|
Invalid Syntax error in Python Code I copied from the Internet
Question: One of my early courses in the University I attend, was some basic training in
Python 3 years ago. Now I was looking for a program that could help me resize
some Grid stuff and I found something that could help me in Python. I
reinstalled Python to my PC and found my old editor. However when I run the
code I get an invalid syntax error that I can't understand. This is the part
of the code that the error appears in:
def downsize(mode, cell_size, inpath, outpath):
from VolumeData import fileformats
try:
grid_data = fileformats.open_file(inpath)
except fileformats.Uknown_File_Type, e:
sys.stderr.write(str(e))
sys.exit(1)
reduced = Reduced_Grid(grid_data, mode, cell_size)
from VolumeData.netcdf.netcdf_grid import write_grid_as_netcdf
write_grid_as_netcdf(reduced, outpath)
The exact invalid syntax error is in the "except fileformats.Uknown_File_Type,
e:" line. Can you help me ?
Answer: If you are using Python 3.x, you cannot use `except
fileformats.Uknown_File_Type, e`. The comma works as an `as` statement (in the
`try`/`except` block), so you should replace it with: `except
fileformats.Uknown_File_Type as e`.
The comma works in Python 2.7, but not 3.x. However, the `as` should work for
both.
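Applied to the snippet from the question:
try:
    grid_data = fileformats.open_file(inpath)
except fileformats.Uknown_File_Type as e:   # 'as' works in Python 2.6+ and 3.x
    sys.stderr.write(str(e))
    sys.exit(1)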
Reference: [Handling errors in Python
3.3](http://docs.python.org/3.3/tutorial/errors.html#handling-exceptions)
|
Python code runs through cmd however not in IDLE..Why?
Question: Okay, so I just got my Leap Motion device and I'm trying to run the scripts.
When I press F5, the script loads, but it doesn't run the functions (it
initializes and loads everything).
But when I open it by double-clicking (through cmd) it works properly, how it's
supposed to.
Any idea why?
Here is an example code:
<http://pastebin.com/6Pu2DQ4n>
Answer: IDLE isn't executing the code in the `if __name__ == '__main__'` due to the
way it is expected to work.
Change the last two lines so that the `if` statement isn't there, and the
`main()` call is not indented:
# if __name__ == "__main__":
main()
Note, this _will_ mean that `main()` is executed _every_ time this module is
imported anywhere, but it should run in IDLE. (IDLE didn't run `main`
previously as it didn't trigger the `if`.)
PS Well done on getting hold of a LEAP! V. jealous >:)
|
Google App Engine doesn't find local python module
Question: For some reason when I uploaded my app engine project yesterday (before this,
everything worked fine), it can't find one of my .py files/modules. My
directory is as follows:
app_directory/
gaesessions/
__init__.py
lib/
httplib2/
__init__.py
other stuff
app.yaml
appengine_config.py
index.yaml
All other .py files/modules
For some reason I now get the following error:
import_string() failed for 'games.GetMyGames'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Original exception:
ImportError: cannot import name GameModel
Answer: I realized I had a circular import:
in File1.py
from File2 import class1
and in File2.py
from File1 import class3
I changed it to: in File1.py
import File2
and in File2.py
import File1
and I moved all of my class imports from File1 and File2 further down the
files and this solved my issue. Hopefully this helps someone else.
|
Python RLock IO-Bound?
Question: I have a set of CPU-bound processes that take any number of cores to 100%
utilization as long as their only synchronization is getting jobs out of a
Queue.
As soon as I add an RLock to avoid worst case scenarios when updating a
directory in the file system, CPU/core utilization drops to 60%, as if the
processes had become IO-bound.
What's the explanation?
This is not about overall speed. It is about CPU/core utilization, so Python
2/3, Cython, or PyPy should not matter.
**Update:** I gave a partial answer to my own question. The final solution for
my particular case consisted of modifying the way the file system was accessed
so no synchronization was needed (a "sort of" map/reduce).
Answer: It all depends on how `multiprocessing` has implemented `RLock`. I am aware
that multiprocessing _can_ work across hosts which implies that
synchronisation primitives may work across sockets. If that is true, it would
introduce a lot of (variable) latency.
So I did an experiment.
Here's a noddy example of `RLock` being used by more than one process (to
prevent any fast-path where all locks are within the same process):
#!/usr/bin/env python
import multiprocessing
from time import sleep
lock = multiprocessing.RLock()
def noop(myname):
# nonlocal lock
sleep(0.5)
print myname, "acquiring lock"
with lock:
print myname, "has lock"
sleep(0.5)
print myname, "released lock"
sProc1 = multiprocessing.Process(target=noop, args=('alice',))
sProc2 = multiprocessing.Process(target=noop, args=('bob',))
sProc1.start()
sProc2.start()
sProc1.join()
sProc2.join()
When this is run, its output looks something like this:
alice acquiring lock
alice has lock
bob acquiring lock
alice released lock
bob has lock
bob released lock
Great, so now run it with system call tracing via
[strace](http://en.wikipedia.org/wiki/Strace).
In the command below, the `-ff` option tells the tool to "follow `fork()`"
calls, i.e. trace any processes started by the main one. For reasons of
brevity I'm also using `-e trace=futex,write`, which filters output based on
conclusions I made before posting this. Normally you would run without the
`-e` option and use a text editor / `grep` to explore what happened after the
fact.
# strace -ff -e trace=futex,write ./traceme.py
futex(0x7fffeafe29bc, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7fb92ac6c700) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7fb92a8540b0, FUTEX_WAKE_PRIVATE, 2147483647) = 0
futex(0x7fb92aa7131c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(3, "\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 32) = 32
Process 25873 attached
Process 25874 attached
Process 25872 suspended
[pid 25873] write(1, "alice acquiring lock\n", 21alice acquiring lock
) = 21
[pid 25873] write(1, "alice has lock\n", 15alice has lock
) = 15
[pid 25874] write(1, "bob acquiring lock\n", 19bob acquiring lock
) = 19
[pid 25874] futex(0x7fb92ac91000, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 25873] futex(0x7fb92ac91000, FUTEX_WAKE, 1 <unfinished ...>
[pid 25874] <... futex resumed> ) = 0
[pid 25873] <... futex resumed> ) = 1
[pid 25874] write(1, "bob has lock\n", 13 <unfinished ...>
bob has lock
[pid 25873] write(1, "alice released lock\n", 20 <unfinished ...>
alice released lock
[pid 25874] <... write resumed> ) = 13
[pid 25873] <... write resumed> ) = 20
Process 25872 resumed
Process 25873 detached
[pid 25872] --- SIGCHLD (Child exited) @ 0 (0) ---
Process 25872 suspended
[pid 25874] write(1, "bob released lock\n", 18bob released lock
) = 18
Process 25872 resumed
Process 25874 detached
--- SIGCHLD (Child exited) @ 0 (0) ---
From the pattern of print (`write()`) messages and
[`futex`](http://en.wikipedia.org/wiki/Futex) calls which block and later
resume, it seems clear that `RLock` is implemented using `futex`, or "Fast
Userspace Mutex". As the name implies this is a good choice for
synchronisation.
When a process is blocked in a system-call like `futex` the process is
blocking on I/O for all intents and purposes.
All this implies that `multiprocessing.RLock` is efficient and doing what it
was designed to do. So if your application's performance is less than you
expect when using synchronisation, chances are that your algorithm is to
blame.
|
Python crash when downloading image as numpy array
Question: Why does the following code crash python? Is there an easier/better way to
download an image and convert it to a numpy array?
from pylab import *
from urllib import request
captcha=imread(request.urlopen('http://pastebin.com/etc/CaptchaSecurityImages.php?width=100&height=35&characters=4&b=123'))
Note that this causes the python interpreter to exit rather than just printing
a stack trace.
Answer: Several issues:
1. `imread` can not detect the image type and defaults to `png`.
2. matplotlib's [_png.read_png crashes on Python 3 with urllib.request object](https://github.com/matplotlib/matplotlib/pull/1650).
3. the `request.urlopen` object is missing a seek function and does not work with PIL (PIL is used by matplotlib to read non-png images).
This code works for me on win-amd64-py3.3:
from pylab import *
from urllib import request
from io import BytesIO
url = 'http://pastebin.com/etc/CaptchaSecurityImages.php?width=100&height=35&characters=4&b=123'
data = BytesIO(request.urlopen(url).read())
captcha = imread(data, format='jpg')
|
How to receive reference and pointer arguments in Python + SWIG?
Question: I have a C++ function that takes two arguments, as in the following
example.
void func(int& n, char** data)
{
*data = other_func1(); // returns a char array
n = other_func2(); // returns the length of the array
}
I can easily use this function in C or C++, but I do not know how I can call
it from a Python module generated with SWIG. I suppose that I will have to
write another C++ function which returns, for example, a std::pair. But if
possible, I would like to know a workaround on the Python side. Could anyone
help?
Answer: For quite a lot of cases (e.g. `int *n`) it would be sufficient to write:
%apply int *OUTPUT { int *n };
which uses some default typemaps that SWIG provides for output parameters.
(There is also INOUT and INPUT which are similar).
In this instance though we don't quite match any of the predefined cases, so
we need to do the same thing manually. That's basically two typemaps per
argument - an input typemap which creates something temporary for the actual
function call and uses that instead of some real input and an argout that
marshals the result back from the temporary to Python. In the case of Python
it makes sense to use a tuple for returning multiple arguments.
An example:
%module test
%typemap(in,numinputs=0) int& n (int temp) "$1 = &temp;"
%typemap(in,numinputs=0) char **data (char *temp) "$1 = &temp;"
%typemap(argout) char **data {
%append_output(PyString_FromString(*$1));
}
%typemap(argout) int& n {
%append_output(PyInt_FromLong(*$1));
}
%inline %{
void foo(int& n, char **data) {
static char str[] = "Hello world";
*data = str;
n = sizeof str;
}
%}
Points to note:
* The temporary variables (`int temp`, `char *temp`) automatically get renamed, which stops the apparent name clash.
* `%append_output` is a SWIG macro that expands to add something to the back of the `$result` tuple in Python.
* If your function `foo` dynamically allocated memory, you'd need to handle that; the freearg typemap is often useful if the in typemap needs to dynamically allocate memory.
This was sufficient to allow me to compile and run it like:
import test
len,str = test.foo()
print len
print str
|
Trying to verify SHA1 message signature using Python. What am I doing wrong?
Question: I'm attempting to verify the SHA1 signature of a message by downloading a
certificate from a website and extracting its public key. There's a few bits
of sample code elsewhere on SO
([here](http://stackoverflow.com/questions/544433/how-do-you-verify-an-rsa-
sha1-signature-in-python) and
[here](http://stackoverflow.com/questions/5663084/m2crypto-signature-
algorithm)), however I haven't yet figured out what I'm doing wrong.
import requests
from M2Crypto import BIO, RSA, EVP, X509
def verify_message(cert_url, msg, sig):
cert_text = requests.get(cert_url, verify=True)
cert = X509.load_cert_string(cert_text.content)
pubkey = cert.get_pubkey()
sig = sig.decode('base64')
# Write a few files to disk for debugging purposes
f = open("sig", "wb")
f.write(sig)
f.close()
f = open("msg", "w")
f.write(msg)
f.close()
f = open("mypubkey.pem", "w")
f.write(pubkey.get_rsa().as_pem())
f.close()
pubkey.reset_context(md='sha1')
pubkey.verify_init()
pubkey.verify_update(msg)
assert pubkey.verify_final(sig) == 1
This gives me the following assertion error:
File "/tmp/test.py", line 71, in verify_message
assert pubkey.verify_final(sig) == 1
AssertionError
However, if I use `openssl` from the command line along with the files
generated from the above Python script, it works fine:
[jamie@test5 tmp]$ openssl dgst -sha1 -verify mypubkey.pem -signature sig msg
Verified OK
I've hit a brick wall here; any suggestions would be greatly appreciated.
Thanks!
Answer: Your code works properly (see <https://gist.github.com/kalloc/5106808>), so I
suspect something else is wrong here, such as the exact bytes of the message or
signature being passed in.
|
Python CSV read-> write; remove and replace PLUS: end of line is JSON format
Question: I am having problems getting my Python script to do what I want. It does not
appear to be modifying my file.
I want to:
1. Read in a *.csv file that has the following format PropertyName::PropertyValue,…,PropertyName::PropertyValue,{ExtPropertyName::ExtPropertyValue},…,{ExtPropertyName:: ExtPropertyValue}
2. I want to remove PropertyName:: and leave behid just a column of the PropertyValue
3. I want to add a header line
I was trying to step through replacing the :: values with a comma, but can't
seem to get this to work:
fin = csv.reader(open('infile', 'rb'), delimiter=',')
fout = open('outfile', 'w')
for row in fin:
fout.write(','.join(','.join(item.split()) for item in row) + '::')
fout.close()
Any advice, whether on my first step problem, or to a bigger picture
resolution is always appreciated. Thanks.
UPDATE/EDIT asked for by a person nice enough to review for me!
Here is the first line of the *.csv file (INPUT)
InnerDiameterOrWidth::0.1,InnerHeight::0.1,Length2dCenterToCenter::44.6743867864386,Length3dCenterToCenter::44.6768028159989,Length2dToInsideEdge::44.2678260053526,Length3dToInsideEdge::44.2717800813466,Length2dToOutsideEdge::44.6743867864386,Length3dToOutsideEdge::44.6768028159989,MinimumCover::0,MaximumCover::0,StartConnection::ImmxGisUtilityNetworkCommon.Connection,
In a perfect world here is what I would like my text file to look like
(OUTPUT)
InnerDiameterOrWidth, InnerHeight, Length2dCenterToCenter,,,,,,,,,,,
0.1,0.1,44.6743867864386
so one header line and the values in columns.
**UPDATED** JSON Info
The end of each line has JSON formatted text:
{StartPoint::7858.35924983374[%2C]1703.69341358077[%2C]-3.075},{EndPoint::7822.85045874375[%2C]1730.80294308742[%2C]-3.53962362760298}
Which I need to split into X Y Z and X Y Z with headers.
Answer: Maybe something like this (assuming that each line has the same keys, and in
the same order):
import csv
with open("diam.csv", "rb") as fin, open("diam_out.csv", "wb") as fout:
reader = csv.reader(fin)
writer = csv.writer(fout)
for i, line in enumerate(reader):
split = [item.split("::") for item in line if item.strip()]
if not split: # blank line
continue
keys, vals = zip(*split)
if i == 0:
# first line: write header
writer.writerow(keys)
writer.writerow(vals)
which produces
localhost-2:coding $ cat diam_out.csv
InnerDiameterOrWidth,InnerHeight,Length2dCenterToCenter,Length3dCenterToCenter,Length2dToInsideEdge,Length3dToInsideEdge,Length2dToOutsideEdge,Length3dToOutsideEdge,MinimumCover,MaximumCover,StartConnection
0.1,0.1,44.6743867864386,44.6768028159989,44.2678260053526,44.2717800813466,44.6743867864386,44.6768028159989,0,0,ImmxGisUtilityNetworkCommon.Connection
I think most of that code should make sense, except maybe the `zip(*split)`
trick: that basically transposes a sequence, i.e.
>>> s = [['a','1'],['b','2']]
>>> zip(*s)
[('a', 'b'), ('1', '2')]
so that the elements are now grouped together by their index (the first ones
are all together, the second, etc.)
|
Python search one million strings in a file and count occurrences of each string
Question: This is mostly about finding the fastest way to do it. I have a file1 which contains about one million strings (length 6-40), one per line. I want to search for each of them in another file2, which contains about 80,000 strings, and count occurrences (if a small string is found in one string multiple times, the occurrence count for that string is still 1). If anyone is interested in comparing performance, there is a link to download file1 and file2:
dropbox.com/sh/oj62918p83h8kus/sY2WejWmhu?m
What I am doing now is to construct a dictionary for file2, using string IDs as keys and strings as values (because the strings in file2 have duplicate values, only the string ID is unique). My code is:
for line in file1:
substring=line[:-1].split("\t")
for ID in dictionary.keys():
bigstring=dictionary[ID]
IDlist=[]
if bigstring.find(substring)!=-1:
IDlist.append(ID)
output.write("%s\t%s\n" % (substring,str(len(IDlist))))
My code will take hours to finish. Can anyone suggest a faster way to do it? Both file1 and file2 are just around 50M, and my PC has 8G of memory; you can use as much memory as you need to make it faster. Any method that can finish in one hour is acceptable :)
Here is a performance comparison after I tried some of the suggestions from the comments below; in each case the code comes first, then its run time.
Some improvements suggested by Mark Amery and others:
import sys
from Bio import SeqIO
#first I load strings in file2 to a dictionary called var_seq,
var_seq={}
handle=SeqIO.parse(file2,'fasta')
for record in handle:
var_seq[record.id]=str(record.seq)
print len(var_seq) # prints 76827, which is the right number; loading file2 into var_seq takes about 1 second, so don't focus here to improve performance
output=open(outputfilename,'w')
icount=0
input1=open(file1,'r')
for line in input1:
icount+=1
row=line[:-1].split("\t")
ensp=row[0] #ensp is just peptides iD
peptide=row[1] #peptides is the substrings i want to search in file2
num=0
for ID,bigstring in var_seq.iteritems():
if peptide in bigstring:
num+=1
newline="%s\t%s\t%s\n" % (ensp,peptide,str(num))
output.write(newline)
if icount%1000==0:
break
input1.close()
handle.close()
output.close()
It takes 1m4s to finish, an improvement of 20s over my old version.
#######NEXT METHOD suggested by entropy
from collections import defaultdict
var_seq=defaultdict(int)
handle=SeqIO.parse(file2,'fasta')
for record in handle:
var_seq[str(record.seq)]+=1
print len(var_seq) # prints 59502; duplicates are removed, but their occurrence counts are stored as values
handle.close()
output=open(outputfilename,'w')
icount=0
with open(file1) as fd:
for line in fd:
icount+=1
row=line[:-1].split("\t")
ensp=row[0]
peptide=row[1]
num=0
for varseq,num_occurrences in var_seq.items():
if peptide in varseq:
num+=num_occurrences
newline="%s\t%s\t%s\n" % (ensp,peptide,str(num))
output.write(newline)
if icount%1000==0:
break
output.close()
This one takes 1m10s, which is not faster as I expected even though it avoids searching duplicates; I don't understand why.
The haystack-and-needle method suggested by Mark Amery turned out to be the fastest. The problem with this method is that the counting result for all substrings is 0, which I don't understand yet.
Here is the code in which I implemented his method.
class Node(object):
def __init__(self):
self.words = set()
self.links = {}
base = Node()
def search_haystack_tree(needle):
current_node = base
for char in needle:
try:
current_node = current_node.links[char]
except KeyError:
return 0
return len(current_node.words)
input1=open(file1,'r')
needles={}
for line in input1:
row=line[:-1].split("\t")
needles[row[1]]=row[0]
print len(needles)
handle=SeqIO.parse(file2,'fasta')
haystacks={}
for record in handle:
haystacks[record.id]=str(record.seq)
print len(haystacks)
for haystack_id, haystack in haystacks.iteritems(): #should be the same as enumerate(list)
for i in xrange(len(haystack)):
current_node = base
for char in haystack[i:]:
current_node = current_node.links.setdefault(char, Node())
current_node.words.add(haystack_id)
icount=0
output=open(outputfilename,'w')
for needle in needles:
icount+=1
count = search_haystack_tree(needle)
newline="%s\t%s\t%s\n" % (needles[needle],needle,str(count))
output.write(newline)
if icount%1000==0:
break
input1.close()
handle.close()
output.close()
It takes only 0m11s to finish, which is much faster than the other methods. However, I don't know whether it is my mistake that makes all the counting results come out as 0, or whether there is a flaw in Mark's method.
Answer: Your code doesn't seem like it works (are you sure you didn't just quote it from memory instead of pasting the actual code?).
For example, this line:
substring=line[:-1].split("\t")
will cause `substring` to be a list. But later you do:
if bigstring.find(substring)!=-1:
That would cause an error if you call `str.find(list)`.
In any case, you are building lists uselessly in your innermost loop. This:
IDlist=[]
if bigstring.find(substring)!=-1:
IDlist.append(ID)
#later
len(IDlist)
That uselessly allocates and frees lists, which causes memory thrashing and bogs everything down.
This is code that should work and uses more efficient means to do the
counting:
from collections import defaultdict

dictionary = defaultdict(int)
with open(file2) as fd:
    for line in fd:
        for s in line.split("\t"):
            dictionary[s.strip()] += 1

with open(file1) as fd:
    for line in fd:
        for substring in line.split('\t'):
            count = 0
            for bigstring, num_occurrences in dictionary.items():
                if substring in bigstring:
                    count += num_occurrences
            print substring, count
PS: I am assuming that you have multiple words per line that are tab-split
because you do `line.split("\t")` at some point. If that is wrong, it should
be easy to revise the code.
PPS: If this ends up being too slow for your use (you'd have to try it, but my guess is it should run in ~10min given the number of strings you said you have), you'll have to use suffix trees as one of the comments suggested.
Edit: Amended the code so that it handles multiple occurrences of the same
string in `file2` without negatively affecting performance
Edit 2: Trading maximum space for time.
Below is code that will consume quite a bit of memory and take a while to
build the dictionary. However, once that's done, each search out of the
million strings to search for should complete in the time it takes for a
single hashtable lookup, that is `O(1)`.
Note that I have added some statements to log the time each step of the process takes. Keep those so you know which part of the time goes where. This matters a lot since you are testing with only 1000 strings: if 90% of the cost is the build time rather than the search time, then when you test with 1M strings you will still only pay that cost once, so it won't matter.
Also note that I have amended my code to parse file1 and file2 as you do, so
you should be able to just plug this in and test it:
from Bio import SeqIO
from collections import defaultdict
from datetime import datetime

def all_substrings(s):
    result = set()
    for length in range(1, len(s)+1):
        for offset in range(len(s)-length+1):
            result.add(s[offset:offset+length])
    return result

print "Building dictionary...."
build_start = datetime.now()

dictionary = defaultdict(int)
handle = SeqIO.parse(file2, 'fasta')
for record in handle:
    for sub in all_substrings(str(record.seq).strip()):
        dictionary[sub] += 1

build_end = datetime.now()
print "Dictionary built in: %gs" % (build_end - build_start).total_seconds()

print "Searching...\n"
search_start = datetime.now()

with open(file1) as fd:
    for line in fd:
        substring = line.strip().split("\t")[1]
        count = dictionary[substring]
        print substring, count

search_end = datetime.now()
print "Search done in: %gs" % (search_end - search_start).total_seconds()
|
Python, count down timer that doesn't sleep
Question: I am new to Python and I am trying to make a countdown timer on a button click. I would like this countdown timer to start its countdown and place the current countdown value in the text area. Also, I need the rest of the application to not sleep while this countdown is running. So far it will output the countdown in the console but will freeze the rest of the application. Can someone point me in the right direction?
from Tkinter import *
import time
import threading
import thread
class App:
def __init__(self, master):
frame = Frame(master)
frame.pack()
self.getvalue = Button(frame, text="Get the Text Area", command=self.thevalue)
self.getvalue.pack(side=LEFT)
self.text_area = Entry()
self.text_area.pack(side=RIGHT)
def thevalue(self):
print "In the value"
try:
t = threading.Thread(target=self.print_time("I am in print_time"))
t.daemon = True
t.start()
except:
print "Error: unable to start thread"
def print_time(self,bleh):
print bleh
print "The text area value is %s" % self.text_area.get()
boom=5
while boom >0:
time.sleep(1)
self.text_area.delete(0, END)
self.text_area.insert(0, boom)
print(boom)
boom -=1
root = Tk()
app = App(root)
root.mainloop()
Answer:
threading.Thread(target=self.print_time("I am in print_time"))
This will not do what you want it to do. What happens here is that the
function `self.print_time` is called and its return value is then passed to
the constructor of `threading.Thread`.
You need to create the thread like this:
t = threading.Thread(target=self.print_time, args=("I am in print_time",))
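As a side note, Tkinter widgets are not thread-safe, so even a correctly constructed worker thread that touches `self.text_area` can misbehave. A thread-free sketch of the same countdown, using Tk's own `after` scheduler (the `countdown` method is hypothetical, not part of the original class):

def countdown(self, remaining=5):
    self.text_area.delete(0, END)
    self.text_area.insert(0, remaining)
    if remaining > 0:
        # re-schedule ourselves on the Tk event loop; no sleep, no freeze
        self.text_area.after(1000, self.countdown, remaining - 1)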
|
Python/BeautifulSoup Parsing HTML Fractions
Question: **Questions**
1. Why is the output in the final two cases BOTH unicode, but in one case it shows the fraction, and in the other it shows some other code representing the fraction?
2. What is the cleanest way for me to go from the fraction to a decimal (-1.75)?
**Background**
I am using `BeautifulSoup` and `Python` to read some `HTML`. The `HTML` contains fractions. Below is the Python code I am using to test this problem, and the resulting output. In the below code I have
print type(c[0])
print c[0]
print type(c[0].get_text())
print c[0].get_text()
print type(re.split(" ", c[0].get_text())[0])
print re.split(" ", c[0].get_text())
and this outputs:
<class 'bs4.element.Tag'>
<b>-1¾ -101</b>
<type 'unicode'>
-1¾ -101
<type 'unicode'>
[u'-1\xbe\xa0-101']
Answer: Let's get the easy part of your question out of the way first:
When you print a list, the `repr` of the contents is used to represent the
items in the list. So since
re.split(" ", c[0].get_text())
is a list, the print statement prints the
[repr](http://docs.python.org/2/library/repr.html#module-repr) of the
`unicode` element in the list.
In [63]: x = u'-1\xbe\xa0-101'
In [64]: print(x)
-1¾ -101
In [65]: repr(x)
Out[65]: "u'-1\\xbe\\xa0-101'"
* * *
Now for the interesting part: Some unicode code points have names. For
example,
In [60]: import unicodedata as ud
In [61]: ud.name(u'\xbe')
Out[61]: 'VULGAR FRACTION THREE QUARTERS'
In fact, we can search through all the unicode characters for those with names
which match the pattern `'FRACTION (\w+) (\w+)'`:
import unicodedata as ud
import re
numerator = {
'ONE':1,
'TWO':2,
'THREE':3,
'FOUR':4,
'FIVE':5,
'SIX':6,
'SEVEN':7,
'EIGHT':8,
'NINE':9,
'ZERO':0,
}
denominator = {
'QUARTER':4,
'HALF':2,
'SEVENTH':7,
'NINTH':9,
'THIRD':3,
'FIFTH':5,
'SIXTH':6,
'EIGHTH':8,
'SIXTEENTH':16
}
fraction = {}
for num in range(0x110000):
s = unichr(num)
try:
name = ud.name(s)
except ValueError:
continue
match = re.search('FRACTION ({n}) ({d})'.format(
n = '|'.join(numerator.keys()),
d = '|'.join(denominator.keys()),
) , name)
if match:
fraction[num] = unicode(
float(numerator[match.group(1)])/denominator[match.group(2)]).lstrip('0')
print(fraction)
Thus we now have a `dict` named `fraction` which maps unicode code points to
`unicode` decimal representations of the fractions.
{8585: u'.0', 43056: u'.25', 43057: u'.5', 43058: u'.75', 43059: u'.0625', 43060: u'.125', 43061: u'.1875', 188: u'.25', 189: u'.5', 190: u'.75', 8528: u'.142857142857', 8529: u'.111111111111', 8531: u'.333333333333', 8532: u'.666666666667', 8533: u'.2', 8534: u'.4', 8535: u'.6', 8536: u'.8', 8537: u'.166666666667', 8538: u'.833333333333', 8539: u'.125', 8540: u'.375', 8541: u'.625', 8542: u'.875', 69245: u'.333333333333', 3443: u'.25', 3444: u'.5', 3445: u'.75', 69243: u'.5', 69244: u'.25', 11517: u'.5', 69246: u'.666666666667'}
Now you can translate `u'-1\xbe\xa0-101'` like this:
text = u'-1\xbe\xa0-101'
print(text.translate(fraction))
yields
-1.75 -101
* * *
So the short answer is:
fraction = {8585: u'.0', 43056: u'.25', 43057: u'.5', 43058: u'.75', 43059: u'.0625', 43060: u'.125', 43061: u'.1875', 188: u'.25', 189: u'.5', 190: u'.75', 8528: u'.142857142857', 8529: u'.111111111111', 8531: u'.333333333333', 8532: u'.666666666667', 8533: u'.2', 8534: u'.4', 8535: u'.6', 8536: u'.8', 8537: u'.166666666667', 8538: u'.833333333333', 8539: u'.125', 8540: u'.375', 8541: u'.625', 8542: u'.875', 69245: u'.333333333333', 3443: u'.25', 3444: u'.5', 3445: u'.75', 69243: u'.5', 69244: u'.25', 11517: u'.5', 69246: u'.666666666667'}
text = c[0].get_text()
text = text.translate(fraction)
parts = map(float, text.split())
print(parts)
yields
[-1.75, -101.0]
Note that in the future it is possible that more fractions are assigned
unicode code points. It is also possible that the name of the unicode code
point does not match the pattern `'FRACTION ({n}) ({d})'` that I used to
generate the `fraction` dict. So my solution is somewhat fragile and may need
to be updated in the future.
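As a less fragile alternative, `unicodedata.numeric` returns the numeric value that the Unicode database assigns to a code point, with no name parsing at all:

>>> import unicodedata
>>> unicodedata.numeric(u'\xbe')
0.75

It raises a ValueError for characters without a numeric value, so you would still need to apply it selectively.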
|
Minimum Weight Triangulation Taking Forever
Question: So I've been working on a program in Python that finds the minimum weight triangulation of a convex polygon. This means that it finds the weight (the sum of all the triangle perimeters), as well as the list of chords (lines going through the polygon that break it up into triangles, not the boundaries). I was under the impression that I'm using the dynamic programming algorithm, however when I tried using a somewhat more complex polygon it takes forever (I'm not sure how long it takes because I haven't gotten it to finish). It works fine with a 10-sided polygon, however I'm trying 25 and that's what is making it stall. My teacher gave me the polygons so I assume that the 25-sided one is supposed to work as well.
Since this algorithm is supposed to be O(n^3), the 25-sided polygon should take roughly (25/10)^3 = 15.625 times longer to calculate; however, it's taking way longer, seeing that the 10-sided one seems instantaneous. Could you guys look at my
algorithm and tell me if I'm doing some sort of n operation in there that I'm
not realizing? I can't see anything I'm doing, except maybe the last part
where I get rid of the duplicates by turning the list into a set, however in
my program I put a trace after the decomp before the conversion happens, and
it's not even reaching that point.
Here's my code, if you guys need anymore info just please ask. Something in
there is making it take longer than O(n^3) and I need to find it so I can trim
it out.
#!/usr/bin/python
import math
def cost(v):
ab = math.sqrt(((v[0][0] - v[1][0])**2) + ((v[0][1] - v[1][1])**2))
bc = math.sqrt(((v[1][0] - v[2][0])**2) + ((v[1][1] - v[2][1])**2))
ac = math.sqrt(((v[0][0] - v[2][0])**2) + ((v[0][1] - v[2][1])**2))
return ab + bc + ac
def triang_to_chord(t, n):
if t[1] == t[0] + 1:
# a and b
if t[2] == t[1] + 1:
# single
# b and c
return ((t[0], t[2]), )
elif t[2] == n-1 and t[0] == 0:
# single
# c and a
return ((t[1], t[2]), )
else:
# double
return ((t[0], t[2]), (t[1], t[2]))
elif t[2] == t[1] + 1:
# b and c
if t[0] == 0 and t[2] == n-1:
#single
# c and a
return ((t[0], t[1]), )
else:
#double
return ((t[0], t[1]), (t[0], t[2]))
elif t[0] == 0 and t[2] == n-1:
# c and a
# double
return ((t[0], t[1]), (t[1], t[2]))
else:
# triple
return ((t[0], t[1]), (t[1], t[2]), (t[0], t[2]))
file_name = raw_input("Enter the polygon file name: ").rstrip()
file_obj = open(file_name)
vertices_raw = file_obj.read().split()
file_obj.close()
vertices = []
for i in range(len(vertices_raw)):
if i % 2 == 0:
vertices.append((float(vertices_raw[i]), float(vertices_raw[i+1])))
n = len(vertices)
def decomp(i, j):
if j <= i: return (0, [])
elif j == i+1: return (0, [])
cheap_chord = [float("infinity"), []]
old_cost = cheap_chord[0]
smallest_k = None
for k in range(i+1, j):
old_cost = cheap_chord[0]
itok = decomp(i, k)
ktoj = decomp(k, j)
cheap_chord[0] = min(cheap_chord[0], cost((vertices[i], vertices[j], vertices[k])) + itok[0] + ktoj[0])
if cheap_chord[0] < old_cost:
smallest_k = k
cheap_chord[1] = itok[1] + ktoj[1]
temp_chords = triang_to_chord(sorted((i, j, smallest_k)), n)
for c in temp_chords:
cheap_chord[1].append(c)
return cheap_chord
results = decomp(0, len(vertices) - 1)
chords = set(results[1])
print "Minimum sum of triangle perimeters = ", results[0]
print len(chords), "chords are:"
for c in chords:
print " ", c[0], " ", c[1]
**EDIT:** I'll add the polygons I'm using, again the first one is solved right
away, while the second one has been running for about 10 minutes so far.
FIRST ONE:
202.1177 93.5606
177.3577 159.5286
138.2164 194.8717
73.9028 189.3758
17.8465 165.4303
2.4919 92.5714
21.9581 45.3453
72.9884 3.1700
133.3893 -0.3667
184.0190 38.2951
SECOND ONE:
397.2494 204.0564
399.0927 245.7974
375.8121 295.3134
340.3170 338.5171
313.5651 369.6730
260.6411 384.6494
208.5188 398.7632
163.0483 394.1319
119.2140 387.0723
76.2607 352.6056
39.8635 319.8147
8.0842 273.5640
-1.4554 226.3238
8.6748 173.7644
20.8444 124.1080
34.3564 87.0327
72.7005 46.8978
117.8008 12.5129
162.9027 5.9481
210.7204 2.7835
266.0091 10.9997
309.2761 27.5857
351.2311 61.9199
377.3673 108.9847
390.0396 148.6748
Answer: It looks like you have an issue with inefficient recursion here.
...
def decomp(i, j):
...
for k in range(i+1, j):
...
itok = decomp(i, k)
ktoj = decomp(k, j)
...
...
You've run into the same kind of issue as a [naive recursive implementation of the Fibonacci Numbers](http://geeksonjava.com/interview/fibonacci.php), but the way this algorithm works, it'll probably be much worse on the run time. Assuming that is the only issue with your algorithm, you just need to use memoization to ensure that decomp is only calculated once for each unique input, as sketched below.
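A minimal sketch of that memoization, caching results in a dict keyed on the (i, j) pair (the elided middle is the existing function body):

memo = {}
def decomp(i, j):
    if (i, j) in memo:
        return memo[(i, j)]
    # ... existing body that computes cheap_chord ...
    memo[(i, j)] = cheap_chord
    return cheap_chord

With this change each (i, j) pair is computed once, so there are O(n^2) distinct calls, each doing O(n) work in its k loop, giving the expected O(n^3) total.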
The way to spot this issue is to print out the values of i, j and k as the triple (i,j,k). In order to obtain a runtime of O(N^3), you shouldn't see the exact same triple twice. However, the triple (22, 24, 23) appears at least twice (in the 25-sided polygon) and is the first such duplicate. That shows the algorithm is calculating the same thing multiple times, which is inefficient, and it pushes the running time well past O(N^3). I'll leave figuring out the algorithm's actual complexity to you as an exercise. Assuming there isn't something else wrong with it, the algorithm should eventually stop.
|
Python cannot import name <class>
Question: I've been wrestling most of the night trying to solve an import error.
This is a common issue, but no previous question quite answers my issue.
I am using PyDev (an Eclipse plugin) and the library Kivy (a Python library). I have a file structure set up like this:
<code>
__init__.py
main.py
engine.py
main_menu_widget.py
"code" is held within the eclipse folder "MyProject" but it's not a package so
I didn't include it.
The files look like this:
main.py
# main.py
from code.engine import Engine
class MotionApp(App):
# Ommited
engine.py
# engine.py
from code.main_menu_widget import MainMenuWidget
class Engine():
# Ommited
main_menu_widget.py
# main_menu_widget.py
from code.engine import Engine
class MainMenuWidget(Screen):
pass
The error I recieve, in full detail, is:
Traceback (most recent call last):
File "C:\MyProject\code\main.py", line 8, in <module>
from code.engine import Engine
File "C:\MyProject\code\engine.py", line 6, in <module>
from code.main_menu_widget import MainMenuWidget
File "C:\MyProject\code\main_menu_widget.py", line 3, in <module>
from code.engine import Engine
Any idea what I did wrong here? I just renamed my entire folder structure because I screwed up the module structure so badly, but I think I'm close to how it should look....
Answer: There seems to be a circular import: from `engine.py` you are importing `main_menu_widget`, while from `main_menu_widget` you are importing `engine`. That is clearly a circular import, which is not allowed by Python.
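One common way to break the cycle is to move one of the imports inside the function that actually needs it, so it only runs at call time; a minimal sketch (the `make_engine` method is hypothetical):

# main_menu_widget.py
class MainMenuWidget(Screen):
    def make_engine(self):
        from code.engine import Engine  # local import breaks the cycle
        return Engine()

Alternatively, restructure the code so that only one of the two modules imports the other.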
|
python - overloading several operators at once
Question: I have a custom class and I want to overload several arithmetic operators, and I wonder if there is a way to avoid having to write out the code for each one individually. I haven't been able to find any examples that don't explicitly overload each operator one-by-one.
class Foo(object):
a=0
def __init__(self, a):
self.a=a
def __add__(self, other):
#common logic here
return Foo(self.a+other.a)
def __sub__(self, other):
#common logic here
return Foo(self.a-other.a)
def __mul__(self, other):
#common logic here
return Foo(self.a*other.a)
#etc...
The logic is slightly more complex than this, but the common pattern is that
each operator overload method contains some identical code to check that the
operation is allowed, and then constructs an operation using the class
members. I want to reduce the redundant code. This works:
class Foo(object):
a=0
def __init__(self, a):
self.a=a
def operate(self, other, operator):
#common logic here
a = constructOperation(self.a, other.a, operator)
return Foo(a)
def __add__(self, other):
return self.operate(other, "+")
def __sub__(self, other):
return self.operate(other, "-")
def constructOperation(operand0, operand1, operator):
if operator=="+":
return operand0 + operand1
if operator=="-":
return operand0 - operand1
But it seems kind of silly to be constructing operations manually like that.
Does this approach make sense, or is there a better way here?
Answer: You can do it via reflection and higher order functions, though this may not
play well with inheritance.
import operator

def apply_a(func):
    def inner(self, other):
        return Foo(func(self.a, other.a))
    return inner

class Foo(object):
    def __init__(self, a=0):
        self.a = a

for name in ['__add__', '__mul__', '__sub__']:
    setattr(Foo, name, apply_a(getattr(operator, name)))
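For example, once the loop has run:

>>> x = Foo(2)
>>> y = Foo(3)
>>> (x + y).a
5
>>> (x * y).a
6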
|
How do I set the terminal foreground process group for a process I'm running under a pty?
Question: I've written a simple wrapper script for repeating commands when they fail
called [retry.py](https://github.com/stsquad/retry). However, as I want to see the output of the child command, I've had to pull some pty tricks. This works OK for programs like rsync, but others like scp apply an additional test for showing things like their progress meter.
The scp code has a test that is broadly:
getpgrp() == tcgetpgrp(STDOUT_FILENO);
This check fails when I run through the wrapper script, as you can see with my simple tty_test.c test case:
./tty_tests
isatty reports 1
pgrps are 13619 and 13619
and:
./retry.py -v -- ./tty_tests
command is ['./tty_tests']
isatty reports 1
pgrps are 13614 and -1
child finished: rc = 0
Ran command 1 times
I've tried using tcsetpgrp(), which ends up as an ioctl on the pty fds, but that results in -EINVAL for ptys. I'd prefer to keep using the Python subprocess machinery if at all possible; or is manually fork/execve'ing going to be required for this?
Answer: I believe you can pare your program down to this, if you don't need to provide
a whole new pty to the subprocess:
from argparse import ArgumentParser
import os
import signal
import subprocess
import itertools
import sys  # needed for sys.exit below

# your argumentparser stuff goes here

def become_tty_fg():
    os.setpgrp()
    hdlr = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
    tty = os.open('/dev/tty', os.O_RDWR)
    os.tcsetpgrp(tty, os.getpgrp())
    signal.signal(signal.SIGTTOU, hdlr)

if __name__ == "__main__":
    args = parser.parse_args()
    if args.verbose: print "command is %s" % (args.command)
    if args.invert and args.limit == None:
        sys.exit("You must define a limit if you have inverted the return code test")
    for run_count in itertools.count():
        return_code = subprocess.call(args.command, close_fds=True,
                                      preexec_fn=become_tty_fg)
        if args.test == True: break
        if run_count >= args.limit: break
        if args.invert and return_code != 0: break
        elif not args.invert and return_code == 0: break
    print "Ran command %d times" % (run_count)
The `setpgrp()` call creates a new process group in the same session, so that
the new process will receive any ctrl-c/ctrl-z/etc from the user, and your
retry script won't. Then the `tcsetpgrp()` makes the new process group be the
foreground one on the controlling tty. The new process gets a `SIGTTOU` when
that happens (because since the `setpgrp()`, it has been in a background
process group), which normally would make the process stop, so that's the
reason for ignoring `SIGTTOU`. We set the `SIGTTOU` handler back to whatever
it was before, to minimize the chance of the subprocess being confused by an
unexpected signal table.
Since the subprocess is now in the foreground group for the tty, its
tcgetpgrp() and getpgrp() will be the same, and isatty(1) will be true
(assuming the stdout it inherits from retry.py actually is a tty). You don't
need to proxy traffic between the subprocess and the tty, which lets you ditch
all the `select` event handling and fcntl-nonblocking-setting.
|
Print UTF-8 characters in cmd using python
Question:
# -*- coding: utf-8 -*-
print "ÆØÅ"
When running the above script in Windows 7 with python 2.7.3 using `cmd`,
`powershell` or `cygwin`, I get this output:
ÆØÅ
The file is a UTF-8 file and works fine in my text editor. How can I make it
print "ÆØÅ"?
Answer: Bit late to the party here, try
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
print "ÆØÅ"
|
Is there a python (scipy) function to determine parameters needed to obtain a target power?
Question: In R there is a very useful function that helps with determining parameters
for a two sided t-test in order to obtain a target statistical power.
The function is called `power.prop.test`.
<http://stat.ethz.ch/R-manual/R-patched/library/stats/html/power.prop.test.html>
You can call it using:
power.prop.test(p1 = .50, p2 = .75, power = .90)
And it will tell you n, the sample size needed to obtain this power. This is extremely useful in determining sample sizes for tests.
Is there a similar function in the scipy package?
Answer: I've managed to replicate the function using the below formula for n and the
inverse survival function `norm.isf` from scipy.stats

from scipy.stats import norm, zscore

def sample_power_probtest(p1, p2, power=0.8, sig=0.05):
    z = norm.isf([sig/2])  # two-sided t test
    zp = -1 * norm.isf([power])
    d = (p1-p2)
    s = 2*((p1+p2)/2)*(1-((p1+p2)/2))
    n = s * ((zp + z)**2) / (d**2)
    return int(round(n[0]))

def sample_power_difftest(d, s, power=0.8, sig=0.05):
    z = norm.isf([sig/2])
    zp = -1 * norm.isf([power])
    n = s * ((zp + z)**2) / (d**2)
    return int(round(n[0]))

if __name__ == '__main__':
    n = sample_power_probtest(0.1, 0.11, power=0.8, sig=0.05)
    print n  # 14752

    n = sample_power_difftest(0.1, 0.5, power=0.8, sig=0.05)
    print n  # 392
|
How to get the AST-tree instead of a list when parsing in Python with ANTLR?
Question: I took the simple ANTLR3 grammar [MicroXPath](http://www.antlr3.org/grammar/1210113624040/MicroXPath.g) and built a lexer and parser for Python. Then I wrote some simple test code:
import antlr3
from XPathLexer import XPathLexer
from XPathParser import XPathParser
def print_level_order(tree, indent):
print('{0}{1}'.format(' '*indent, tree.text, tree.getType()))
for child in tree.getChildren():
print_level_order(child, indent+1)
input = 'descendant::name[class/name[test="x"]="File"]'
char_stream = antlr3.ANTLRStringStream(input)
lexer = XPathLexer(char_stream)
tokens = antlr3.CommonTokenStream(lexer)
parser = XPathParser(tokens)
tree = parser.xPath().tree
print_level_order(tree, 0)
Result:
None
descendant
:
name
[
class
/
name
[
test
=
"x"
]
=
"File"
]
Where is the tree? This is a linear list! What am I doing wrong? Or can ANTLR not build a tree?
Answer: Only adding `output=AST;` to the `options{...}` is not enough: you'll have to
tell ANTLR which nodes/tokens to exclude from the AST (if any), and which
nodes/tokens you want to make the root of a (sub) tree. Not doing so results
in a flat tree, as you already observed.
Checkout this Q&A to find out how to create a hierarchy in your tree: [How to
output the AST built using
ANTLR?](http://stackoverflow.com/questions/4931346/how-to-output-the-ast-
built-using-antlr)
|
How to provide pre-compiled cython modules for 32 and 64 bits neatly?
Question: I have a python script using a cython module I wrote. I want to publish it,
and in order to save users the trouble of compiling the cython stuff
(especially complex on Windows), I want to provide pre-compiled extensions.
However, I will need one version for 32 bits and another for 64. I thought
about including the two files as mymodule32.pyd and mymodule64.pyd, and then,
mymodule.py doing the following:
if bits == 32:
from mymodule32 import *
elif bits == 64:
from mymodule64 import *
But this feels a little clumsy. What if the user decides to compile the module himself, producing mymodule.pyd?
Answer: My impression is that this is part of how you package your module and
publishing it on pypi, not about how you import it.
The import is supposed not to care about your architecture, is the module
installation and package that needs to know about this.
|
Is it Possible to Use Imported Class Methods in A Python Class Definition Without Running All Code In the Imported File?
Question: I am using the PyMOL molecular viewer as a subset of a larger program, and for
ease of reading am breaking up my files like so...
### command1ClassFile.py
class command1Class():
def command1(self):
print "do action 1, this requires pymol functions"
### command2ClassFile.py
class command2Class():
def command2(self):
print "do action 2, this also requires pymol functions"
### mainModule.py
import command1ClassFile, command2ClassFile
class commandsClass(command1Class, command2Class):
pass
class guiClass(parentClass, commandsClass):
def onLeftClick(self):
self.command1()
def onRightClick(self):
self.command2()
# this imports the module as well as launching the program, unfortunately
import pymol
pymol.finish_launching()
I can't just add "import pymol" to the beginning of the other files, because
that would launch the program multiple times. I can solve this by just using
one .py file, but that leads to an excessively large source file.
I did not catch anyone's interest on the PyMOL mailing list, so I was hoping
there was some other way around this. If not, is there a better way to break
up code? I am used to being spoiled by header files in C++, and the
architecture of Python projects is a bit difficult for me to handle properly.
EDIT: For different cases, is using multiple inheritance across files and
dummy compilation classes in this way a good way to structure python projects
with complicated methods?
Answer: If I understand the question correctly, that's what `if __name__ ==
'__main__'` is for.
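In other words, keep the launch call out of module top level so that importing the file never triggers it; a minimal sketch:

# mainModule.py
import pymol

def main():
    pymol.finish_launching()
    # ... rest of the program ...

if __name__ == '__main__':
    main()  # runs only when executed directly, never on import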
|
How to use python to extract data that is pushed to stdout?
Question: I'm trying to do some scripting, but one of the utilities I use prints a value to stdout, and I would like to assign that value to a variable.
The utility (candump) is constantly running and only prints to stdout when it receives data.
import threading
from subprocess import call
import time
class ThreadClass(threading.Thread):
def run(self):
call(["candump","can0"])
t = ThreadClass()
t.start()
time.sleep(1)
call(["cansend", "can0", "-i", "0x601", "0x40", "0xF6", "0x60", "0x01", "0x00", "0x00", "0x00", "0x00"])
It prints the following values, which I would like to use in my Python script:
<0x601> [8] 40 f6 60 01 00 00 00 00
<0x581> [8] 4b f6 60 01 96 08 00 00
The documentation on candump (what is dumping the data to stdout) is
[sparse](http://git.pengutronix.de/?p=tools/canutils.git;a=blob;f=man/candump.8;h=cd11f8906500705716d0733c1b411253125e5511;hb=HEAD)
Is there a way I can tap into stdout and snatch the data going to it?
Sorry if this is incredibly obvious... learning Linux bit by bit.
Answer: If you aren't expecting a ton of output or don't mind reading it in all at
once, you can use `subprocess.check_output`:
>>> import subprocess
>>> print subprocess.check_output(['ls', '/etc'])
adjtime
adobe
anacrontab
apparmor.d
arch-release
ati
at-spi2
avahi
axelrc
bash.bash_logout
If you do need to read it line-by-line, take a look at this question: [read
subprocess stdout line by
line](http://stackoverflow.com/questions/2804543/read-subprocess-stdout-line-
by-line)
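Since `candump` runs indefinitely, `check_output` would never return for it; a minimal sketch of the line-by-line approach from that link, applied here:

import subprocess

proc = subprocess.Popen(['candump', 'can0'], stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    # each line looks like: <0x581> [8] 4b f6 60 01 96 08 00 00
    print 'received:', line.strip()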
|
list became tuple for no reason. Bug or am I just too careless?
Question: I am in a quandary right now. This piece of code looks valid, but no matter how many times I try to change its syntax, it still gives me the same result. Basically, my problem is that even though I've created a nested-list n x n matrix, when I try to assign values to entries in a specific row, I get a TypeError, i.e.
TypeError: 'tuple' object does not support item assignment
I'm using Python 2.7. I assume that it is my fault and not a bug in Python. I need clarification. Please try the code and tell me whether it works on your computer, and if not, shed light on the problem if you can. Thanks in advance.
Here is the code
import sys
class Matrix:
def __init__(self, n):
"""create n x n matrix"""
self.matrix = [[0 for i in range(n)] for i in range(n)]
self.n = n
def SetRow(self, i, x):
"""convert all entries in ith row to x"""
for entry in range(self.n):
self.matrix[i][entry] = x
def SetCol(self, j, x):
"""convert all entries in jth column to x"""
self.matrix = zip(*self.matrix)
self.SetRow(j, x)
self.matrix = zip(*self.matrix)
def QueryRow(self, i):
"""print the sum of the ith row"""
print sum(self.matrix[i])
def QueryCol(self, j):
"""print the sum of the jth column"""
self.matrix = zip(*self.matrix)
x = sum(matrix[j])
self.matrix = zip(*self.matrix)
print x
mat = Matrix(256) # create 256 x 256 matrix
with open(sys.argv[1]) as file: # pass each line of file
for line in file:
ls = line.split(' ')
if len(ls) == 2:
query, a = ls
eval('mat.%s(%s)' %(query, a))
if len(ls) == 3:
query, a, b = ls
eval('mat.%s(%s, %s)' % (query, a, b))
File creator script here:
file = open('newfile', 'w')
file.write("""SetCol 32 20
SetRow 15 7
SetRow 16 31
QueryCol 32
SetCol 2 14
QueryRow 10
SetCol 14 0
QueryRow 15
SetRow 10 1
QueryCol 2""")
file.close()
Answer: `zip()` returns tuples:
>>> zip([1, 2], [3, 4])
[(1, 3), (2, 4)]
Map them back to lists:
self.matrix = map(list, zip(*self.matrix))
|
Python 3.3 cx_freeze weird error: 'NoneType' object has no attribute 'path'
Question: So, here's my problem.
I'm making a game in Pygame and Python 3.3, using Ubuntu 12.10. Fine. I'm
gonna bundle a bunch of Python scripts into one executable, then distribute
it. Also fine. I'm going with cx_freeze, because since I'm using Python 3 I
have no other options.
This is where my problem comes in. I've Googled around, but haven't seen
anything like it. My `setup.py` is as follows:
from cx_Freeze import setup, Executable
import sys
includes = ['sys', 'pygame.display', 'pygame.event', 'pygame.mixer', 'core', 'game']
build_options = {
'optimize' : 2,
'compressed': True,
'packages': ['pygame', 'core', 'game'],
'includes': includes,
'path': sys.path + ['core', 'game'],
}
executable = Executable('__init__.py',
copyDependentFiles=True,
targetDir='dist',
)
setup(name='Invasodado',
version='0.8',
description='wowza!',
options = {'build_exe': build_options},
executables=[executable])
My `__init__.py` is as follows:
from sys import argv
import pygame.display
import pygame.event
import pygame.mixer
pygame.mixer.init()
pygame.display.init()
pygame.font.init()
from core import gsm
#Omitted for brevity
The rest of my code (including the full `__init__.py`) can be found at
<https://github.com/CorundumGames/Invasodado>, in case it's relevant.
I get a long-ass stack trace, which can be found here
<http://pastebin.com/Aej05wGE> . The last 10 lines of it is this;
File "/usr/local/lib/python3.3/dist-packages/cx_Freeze/finder.py", line 421, in _RunHook
method(self, *args)
File "/usr/local/lib/python3.3/dist-packages/cx_Freeze/hooks.py", line 454, in load_scipy
finder.IncludePackage("scipy.lib")
File "/usr/local/lib/python3.3/dist-packages/cx_Freeze/finder.py", line 536, in IncludePackage
self._ImportAllSubModules(module, deferredImports)
File "/usr/local/lib/python3.3/dist-packages/cx_Freeze/finder.py", line 211, in _ImportAllSubModules
recursive)
File "/usr/local/lib/python3.3/dist-packages/cx_Freeze/finder.py", line 209, in _ImportAllSubModules
if subModule.path and recursive:
AttributeError: 'NoneType' object has no attribute 'path'
In case it's relevant, I'm using Pydev and Eclipse. Now, the last line stands
out because Googling it reveals nothing. I have no idea where `subModule`
could have become `None`, and I can't easily check because cx_freeze has shit
documentation.
I've never really used cx_freeze or distutils before, so I don't know what the
hell I'm doing! Any help would be greatly appreciated.
Answer: Having dug into this, it's a bug in cx_Freeze, that can only hit when you have
more than one Python version since [PEP
3149](http://docs.python.org/3/whatsnew/3.2.html#pep-3149-abi-version-tagged-
so-files) installed - i.e. it wouldn't have come up before 3.3.
I've filed a bug report for it:
<https://bitbucket.org/anthony_tuininga/cx_freeze/issue/22/error-when-
scanning-for-modules-with-a>
In the mean time, you can probably avoid the problem by using Python 3.2 for
now, because that's the default in Ubuntu 12.10. Python 3.3 will be the
default in 13.04.
|
Fade between images on screen using Python TKinter / imageTK
Question: I am a python newbie and have been making a somewhat odd slideshow script that
cycles through images and also sources a variable from another file to
'settle' on an image.
I'm sure my code is tragic. But it does work (see below)!
My question is: how would I make it fade between images, instead of the jerky jump it currently does, going momentarily to white and then to the next image? Is there a transitions module I should look at?
from Tkinter import *
import Image, ImageTk, random, string
class MyApp(Tk):
def __init__(self):
Tk.__init__(self)
fr = Frame(self)
fr.pack()
self.canvas = Canvas(fr, height = 400, width = 600)
self.canvas.pack()
self.old_label_image = None
self.position = 0
self.command = 0
self.oldcommand = 0
self.slideshow()
self.debug()
def debug(self):
self.QUIT = Button(self)
self.QUIT["text"] = "QUIT!" + str(self.command)
self.QUIT["fg"] = "red"
self.QUIT["command"] = self.quit
self.QUIT.pack({"side": "right"})
def slideshow (self):
if self.command != self.oldcommand:
self.after_cancel(self.huh)
# run through random between 2-5 changes
# then settle on command for 30 seconds
self.title("Title: PAUSE")
self.oldcommand = self.command
self.slideshow()
else:
file = str(self.position) + '.jpg'
image1 = Image.open(file)
self.tkpi = ImageTk.PhotoImage(image1)
label_image = Label(self, image=self.tkpi)
label_image.place(x=0,y=0,width=image1.size[0],height=image1.size[1])
self.title("Title: " + file)
if self.old_label_image is not None:
self.old_label_image.destroy()
self.old_label_image = label_image
# make this random instead of pregressional
if self.position is not 1:
self.position = self.position + 1
else:
self.position = 0
commandfile = open('command.txt', 'r')
self.command = string.atoi(commandfile.readline())
commandfile.close()
int = random.randint(2000, 5000)
self.huh = self.after(int, self.slideshow)
#self.after_cancel(huh) - works ! so maybe can do from below Fn?
if __name__ == "__main__":
root = MyApp()
root.mainloop()
Answer: This can be achieved using the blend function.
Image.blend(image1, image2, alpha) ⇒ image
Creates a new image by interpolating between the given images, using a
constant alpha. Both images must have the same size and mode.
out = image1 * (1.0 - alpha) + image2 * alpha
If the alpha is 0.0, a copy of the first image is returned. If the alpha is
1.0, a copy of the second image is returned. There are no restrictions on the
alpha value. If necessary, the result is clipped to fit into the allowed
output range.
So you could have something like this:
alpha = 0.0
while alpha < 1.0:
    blended = Image.blend(img1, img2, alpha)
    alpha = alpha + 0.01
    # ...show `blended` in the label, then update the display...

An example is here; I haven't had time to test this, but you get the idea -

from Tkinter import *
from PIL import Image, ImageTk
import time

root = Tk()
white = Image.open("white_248x.jpg")
black = Image.open("black_248x.jpg")

tk_img = ImageTk.PhotoImage(white)
image_label = Label(root, image=tk_img)
image_label.pack()

alpha = 0.0
while alpha < 1.0:
    blended = Image.blend(white, black, alpha)
    tk_img = ImageTk.PhotoImage(blended)  # keep a reference so it isn't garbage-collected
    image_label.configure(image=tk_img)
    image_label.update()
    time.sleep(0.1)                       # a real app would use root.after instead
    alpha = alpha + 0.01

root.mainloop()
|
Uniform Random Numbers
Question: I am trying to understand what this code does. I am going through some
examples about numpy and plotting and I can't figure out what `u` and `v` are.
I know `u` is an array of two arrays each with size 10000. What does
`v=u.max(axis=0)` do? Is the `max` function being invoked part of the standard
python library? When I plot the histogram I get a pdf defined by 2x as opposed
to a normal uniform distribution.
import numpy as np
import numpy.random as rand
import matplotlib.pyplot as plt
np.random.seed(123)
u=rand.uniform(0,1,[2,10000])
v=u.max(axis=0)
plt.figure()
plt.hist(v,100,normed=1,color='blue')
plt.ylim([0,2])
plt.show()
Answer: `u.max()`, or equivalently `np.max(u)`, will give you _the_ maximum value in
the array - i.e. a single value. It's the Numpy function here, not part of the
standard library. You often want to find the maximum value along a particular
axis/dimension and that's what is happening here.
U has shape `(2,10000)`, and `u.max(axis=0)` gives you the max along the `0`
axis, returning an array with shape `(10000,)`. If you did `u.max(axis=1)` you
would get an array with shape `(2,)`.
Simple illustration/example:
>>> a = np.array([[1,2],[3,4]])
>>> a
array([[1, 2],
[3, 4]])
>>> a.max(axis=0)
array([3, 4])
>>> a.max(axis=1)
array([2, 4])
>>> a.max()
4
|
How do I load and unload a Python module dynamically, disassemble and inspect it, but not execute init code or add it to sys.modules?
Question: I'm experimenting with disassembling Python modules into bytecodes.
Must I import a Python module statically or dynamically in order to
disassemble or inspect it? If not, what are the (pythonic, portable) ways to
do it?
I'd like to:
1. Load an available Python module's binary data into memory at runtime:
1. Without it appearing as an available module in `sys.modules`.
2. I don't want to execute any of the module's `__init__` code, or have it added to any namespace.
3. There should be no other side effects of loading the module. As far as the interpreter's concerned, it should just be a blob of data to be inspected.
2. Disassemble or otherwise inspect the module's classes, functions or data.
3. Unload the module when desired.
I've searched, and I see a number of methods of dynamic module importation
(which has the side effect of executing module `__init__` code or other inline
code, and insertion into sys.modules). But I'd rather not deal with those side
effects.
Is this possible? If so, what approaches are most portable/Pythonic?
Answer: I looked into this a bit, and one possible solution is to use the [pyclbr](http://docs.python.org/2/library/pyclbr.html) module. Its inspection gathers basic information about classes and functions, loading it into a dictionary for easy access. Here is a sample run:
>>> import pyclbr
>>> import sys
>>> info = pyclbr.readmodule_ex('inspect')
>>> info
{'formatargvalues': <pyclbr.Function object at 0x5083e28e50>, 'walktree': <pyclbr.Function object at 0x5083e28b50>, 'getinnerframes': <pyclbr.Function object at 0x5083e29050>, 'indentsize': <pyclbr.Function object at 0x5083e28710>, 'getmodulename': <pyclbr.Function object at 0x5083e28850>, 'formatannotation': <pyclbr.Function object at 0x5083e28d50>, 'ismemberdescriptor': <pyclbr.Function object at 0x5083e283d0>, 'iscode': <pyclbr.Function object at 0x5083e28550>, 'getsource': <pyclbr.Function object at 0x5083e28b10>, 'formatargspec': <pyclbr.Function object at 0x5083e28dd0>, 'getabsfile': <pyclbr.Function object at 0x5083e288d0>, 'getsourcelines': <pyclbr.Function object at 0x5083e28ad0>, '_getfullargs': <pyclbr.Function object at 0x5083e28c10>, 'isabstract': <pyclbr.Function object at 0x5083e28610>, 'isbuiltin': <pyclbr.Function object at 0x5083e28590>, 'getlineno': <pyclbr.Function object at 0x5083e28f10>, 'getcomments': <pyclbr.Function object at 0x5083e28990>, 'getgeneratorstate': <pyclbr.Function object at 0x5083e293d0>, 'getattr_static': <pyclbr.Function object at 0x5083e29390>, 'getframeinfo': <pyclbr.Function object at 0x5083e28ed0>, 'isgenerator': <pyclbr.Function object at 0x5083e28490>, '_static_getmro': <pyclbr.Function object at 0x5083e29190>, 'isframe': <pyclbr.Function object at 0x5083e28510>, 'getouterframes': <pyclbr.Function object at 0x5083e28f90>, 'getclasstree': <pyclbr.Function object at 0x5083e28b90>, 'getfile': <pyclbr.Function object at 0x5083e287d0>, '_shadowed_dict': <pyclbr.Function object at 0x5083e29310>, 'getargvalues': <pyclbr.Function object at 0x5083e28d10>, 'getmembers': <pyclbr.Function object at 0x5083e28650>, 'BlockFinder': <pyclbr.Class object at 0x5083e28a10>, 'isfunction': <pyclbr.Function object at 0x5083e28390>, 'getargspec': <pyclbr.Function object at 0x5083e28c50>, 'currentframe': <pyclbr.Function object at 0x5083e29090>, 'namedtuple': <pyclbr.Function object at 0x5083e1b150>, 'getmoduleinfo': <pyclbr.Function object at 0x5083e28810>, 'trace': <pyclbr.Function object at 0x5083e29110>, 'isclass': <pyclbr.Function object at 0x5083db8950>, '_is_type': <pyclbr.Function object at 0x5083e29290>, 'getcallargs': <pyclbr.Function object at 0x5083e28e90>, 'ismethoddescriptor': <pyclbr.Function object at 0x5083e28310>, 'isgeneratorfunction': <pyclbr.Function object at 0x5083e28450>, 'isroutine': <pyclbr.Function object at 0x5083e285d0>, 'getfullargspec': <pyclbr.Function object at 0x5083e28cd0>, 'getmro': <pyclbr.Function object at 0x5083e286d0>, 'getargs': <pyclbr.Function object at 0x5083e28bd0>, 'stack': <pyclbr.Function object at 0x5083e290d0>, 'getdoc': <pyclbr.Function object at 0x5083e28750>, 'findsource': <pyclbr.Function object at 0x5083e28950>, 'cleandoc': <pyclbr.Function object at 0x5083e28790>, '_check_class': <pyclbr.Function object at 0x5083e29250>, '_check_instance': <pyclbr.Function object at 0x5083e29210>, 'classify_class_attrs': <pyclbr.Function object at 0x5083e28690>, 'ismodule': <pyclbr.Function object at 0x5083db8910>, 'EndOfBlock': <pyclbr.Class object at 0x5083e289d0>, 'isdatadescriptor': <pyclbr.Function object at 0x5083e28350>, 'getmodule': <pyclbr.Function object at 0x5083e28910>, 'formatannotationrelativeto': <pyclbr.Function object at 0x5083e28d90>, 'getsourcefile': <pyclbr.Function object at 0x5083e28890>, 'ismethod': <pyclbr.Function object at 0x5083e282d0>, 'isgetsetdescriptor': <pyclbr.Function object at 0x5083e28410>, 'istraceback': <pyclbr.Function object at 0x5083e284d0>, 'getblock': <pyclbr.Function object at 
0x5083e28a50>}
>>> sys.modules['inspect']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'inspect'
Anything more advanced and you would have to start looking into accessing the
abstract syntax tree through the [ast
module](http://docs.python.org/2/library/ast.html#module-ast).
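For instance, a minimal sketch that lists a module's classes and functions without importing or executing anything (`mymodule.py` is a placeholder for your file):

import ast

with open('mymodule.py') as f:
    tree = ast.parse(f.read())

for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
        print node.name, 'at line', node.lineno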
|
Using Sublime Text 2 with Portable Python
Question: I have portable python and portable sublime text installed on a flash drive. I
edited the python-build file so that it would use portable python to run the
programs but it doesn't print anything into the sublime text window, it just
opens up a command prompt window which immediately closes if the program stops
or has an error. Is there any way to make the output pop up in Sublime Text?
Ideally, I would like to make this usable on all Windows computers so I can keep my workflow portable!
Thanks!
Answer: I ran into the same problem and after a bit of troubleshooting, here is my
solution:
1) Use the build system:
{
    "cmd": ["\\Portable Python 2.7.6.1\\App\\python.exe", "-u", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python"
}
This build will open your program in Python Portable. Instead of specifying
your flash drive letter, using "\" will go to the path relative to the root of
the current drive.
2) At the end of your code add the following line to prevent force closing:
os.system("pause")
Also, don't forget to import the "os" module:
import os
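If you want the pause to work outside Windows as well, `raw_input("Press Enter to exit...")` does the same job without needing the `os` import.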
|
Python unittest fails when it shouldn't
Question: I ran unit tests in the file below, and one of the test cases failed where it should not have failed. I got an unexpected result (an AssertionError): in TestFormatInitMethodArgs I intended to test whether `'"' == '"'`, but it tested `'"' == None` instead. It looks like the test in the second test case checks for equality using values not from its own setUp():
#!/usr/bin/env python
import csv
import unittest
class Format:
def __init__(self, file_path, header=False, flag='r', delimiter=',', quote_char=None):
self.file_path = file_path
self.header = header
self.flag = flag
self.delimiter = delimiter
self.quote_char = None
class TestFormatInitMethodDefaults(unittest.TestCase):
def setUp(self):
self.file_path = 'C:/Privatus/eurusd.csv'
self.header = False
self.flag = 'r'
self.delimiter = ','
self.quote_char = None
def test_attributes(self):
f = Format('C:/Privatus/eurusd.csv')
self.assertEqual(self.file_path, f.file_path)
self.assertEqual(self.header, f.header)
self.assertEqual(self.flag, f.flag)
self.assertEqual(self.delimiter, f.delimiter)
self.assertEqual(self.quote_char, f.quote_char)
class TestFormatInitMethodArgs(unittest.TestCase):
def setUp(self):
self.file_path = 'C:/Privatus/eurusd.csv'
self.header = True
self.flag = 'rb'
self.delimiter = ';'
self.quote_char = '"'
def test_args(self):
a = Format('C:/Privatus/eurusd.csv', header=True, flag='rb', delimiter=';', quote_char='"')
self.assertEqual(self.file_path, a.file_path)
self.assertEqual(self.header, a.header)
self.assertEqual(self.flag, a.flag)
self.assertEqual(self.delimiter, a.delimiter)
self.assertEqual(self.quote_char, a.quote_char)
if __name__ == '__main__':
unittest.main()
Test results:
F.
======================================================================
FAIL: test_args (__main__.TestFormatInitMethodArgs)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Privatus\repos\working\data.py", line 45, in test_args
self.assertEqual(self.quote_char, a.quote_char)
AssertionError: '"' != None
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=1)
What am I doing wrong?
Answer: You do not seem to set the quote_char attribute in your constructor (`__init__`).
Try `self.quote_char = quote_char` instead of `self.quote_char = None`:
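self.quote_char = quote_char   # was: self.quote_char = None

With that one-line change, both test cases pass.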
|
python program to accept a string from the command line and print all files matching that string within a folder
Question: How do I write a program to accept a string from the command line and print all filenames matching that string within a folder (including subfolders)? I'm looking for a pattern match.
Answer: You can use this technique
import os, fnmatch, sys

def all_files(root, patterns='*', single_level=False, yield_folders=False):
    # Expand patterns from semicolon-separated string to list
    patterns = patterns.split(';')
    for path, subdirs, files in os.walk(root):
        if yield_folders:
            files.extend(subdirs)
        files.sort()
        for name in files:
            for pattern in patterns:
                if fnmatch.fnmatch(name, pattern):
                    yield os.path.join(path, name)
                    break
        if single_level:
            break

user_definedpath, filepattern = sys.argv[1], sys.argv[2]

# Invoking all_files and putting the results into a list
#thefiles = list(all_files('/tmp', '*.py;*.htm;*.html'))
thefiles = list(all_files(user_definedpath, filepattern))
print thefiles
Now you can save this file as `sample.py` in, say, `/tmp/abc/sample.py`. Then you can execute it as `python /tmp/abc/sample.py "/tmp/xyz/" "*.py;*.txt"`.
|
Embarrassingly parallel tasks with IPython Parallel (or other package) depending on unpicklable objects
Question: I often hit problems where I want to do simple stuff over a set of many, many objects quickly. My natural choice is to use IPython Parallel for its simplicity, but often I have to deal with unpicklable objects. After trying for a few hours I usually resign myself to running my tasks overnight on a single computer, or doing something stupid like dividing things up semi-manually to run in multiple Python scripts.
To give a concrete example, suppose I want to delete all keys in a given S3 bucket.
What I'd normally do without thinking is:
import boto
from IPython.parallel import Client
connection = boto.connect_s3(awskey, awssec)
bucket = connection.get_bucket('mybucket')
client = Client()
loadbalancer = c.load_balanced_view()
keyList = list(bucket.list())
loadbalancer.map(lambda key: key.delete(), keyList)
The problem is that the `Key` object in `boto` is unpicklable (*). This occurs very often in different contexts for me. It's a problem also with multiprocessing, execnet, and all the other frameworks and libs I've tried (for obvious reasons: they all use the same pickler to serialize the objects).
Do you guys also have these problems? Is there a way I can serialize these more complex objects? Do I have to write my own pickler for these particular objects? If I do, how do I tell IPython Parallel to use it? How do I write a pickler?
Thanks!
* * *
(*) I'm aware that I can simply make a list of the key names and do something like this:
loadbalancer.map(lambda keyname: getKey(keyname).delete())
and define the `getKey` function in each engine of the IPython cluster. This
is just a particular instance of a more general problem that I find often.
Maybe it's a bad example, since it can be easily solved in another way.
Answer: `IPython` has a `use_dill` option, where if you have the `dill` serializer
installed, you can serialize most "unpicklable" objects.
[How can I use dill instead of pickle with
load_balanced_view](http://stackoverflow.com/questions/24042443/how-can-i-use-
dill-instead-of-pickle-with-load-balanced-view)
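A minimal sketch of enabling it (assuming an IPython version recent enough to ship `use_dill`, and the `dill` package installed):

from IPython.parallel import Client

client = Client()
client[:].use_dill()   # tell the client and all engines to pickle with dill
view = client.load_balanced_view()
view.map(lambda key: key.delete(), keyList)  # keyList as in the question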
|
how to use variable in python os.path.exists
Question: Here is my code
[root@04 ~]# python
Python 2.4.3 (#1, May 5 2011, 16:39:10)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os.path
>>> pid = open('/var/run/httpd.pid' , 'r').read()
>>> print pid
24154
>>> os.path.exists('/proc/',pid)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: exists() takes exactly 1 argument (2 given)
I even tried the following, but it's not working. How do I use the variable `pid` in the `os.path.exists` call?
>>> os.path.exists('/proc/'+pid)
False
>>>
**EDIT :**
if i type manually PID number it works
>>> print pid
24154
>>> os.path.exists('/proc/24154')
True
>>>
Answer: The problem is that `httpd.pid` doesn't contain just the number, but also a newline character.
strip trailing newlines, the `pid` variable contains a string like `"12345\n"`
and your code is testing whether `"/proc/12345\n"` exists.
To correct the problem, call
[`strip()`](http://docs.python.org/2/library/string.html#string.strip) on the
string you've read from the file:
os.path.exists(os.path.join('/proc', pid.strip()))
|
How to copy a file in Python?
Question: I need to copy a file specified by the user and make a copy of it (giving it a
name specified by the user). This is my code:
import copy
def main():
userfile = raw_input('Please enter the name of the input file.')
userfile2 = raw_input('Please enter the name of the output file.')
infile = open(userfile,'r')
file_contents = infile.read()
infile.close()
print(file_contents)
userfile2 = copy.copy(file_contents)
outfile = open(userfile2,'w+')
file_contents2 = outfile.read()
print(file_contents2)
main()
Something strange is happening here, as it doesn't print the contents of the
second file, outfile.
Answer: If you are reading outfile, why do you open it with `'w+'`? This truncates the
file.
Use `'r'`to read. See the
[link](http://docs.python.org/2.7/library/functions.html#open)
|
What does `import _preamble` do in Python?
Question: I notice the following at the top of Twisted's `twistd.py` script:
import os, sys
try:
import _preamble
except ImportError:
sys.exc_clear()
sys.path.insert(0, os.path.abspath(os.getcwd()))
What does `import _preamble` do? I can't seem to find any references to it on
the google-mage.
Answer: `_preamble` is a module like any other. In twisted's case, this is simply [the
module](http://twistedmatrix.com/trac/browser/trunk/bin/_preamble.py) that
sets up [`sys.path`](http://docs.python.org/3/library/sys.html#sys.path) so
that you can run twisted in development setups.
|
I'm making a Dungeons and Dragons style game in python, but I'm getting incorrect if returns
Question: I've worked out how to do most of this stuff in the past couple of days, but
that's all the experience I have, so this is probably simple. Anyway,
everything was going fine until I tried to complicate a few formulas, or at
least change the values they used. Here is what I'm working with.
class EnemyStats():
def Ename(self):
return #Not sure What i should put in these spots...
def EBaseDodge(self):
return
def EnemyEvasion(self):
return
def ENickRange(self):
return
def EBaseAttack(self):
return
def EWeaponAttack(self):
return
def EAttackRating(self):
return
def EnemyDefense(self):
return
def EAttackDamage(self):
return
def Damage(self):
return ((PAttackDamage-EnemyDefense)if (PAttackDamage-EnemyDefense>0) else 0);
def EDamage(self):
return ((EAttackDamage-PDefense) if ((EAttackDamage-PDefense)>0) else 0);
def LightAttackDamage(self):
return int(LightAttackDamage == (Damage * 0.72));
def HeavyAttackDamage(self):
return (Damage * 1.28);
def LightNicked(self):
return (LightAttackDamage/2);
def HeavyNicked(self):
return (HeavyAttackDamage/2);
def Nicked(self):
return (Damage/2)
def Estats(self):
Ename = raw_input('Target Enemy Name: ');
EBaseDodge = int(raw_input('Enter Enemy Dodge: '));
EnemyEvasion = int(raw_input('Enter Enemy Evasion: '));
ENickRange = (EBaseDodge + EnemyEvasion);
EBaseAttack = int(raw_input('Enter Enemy Base Attack: '));
EWeaponAttack = int(raw_input('Enter Enemy Weapon Attack(If N/A, 0): '));
EAttackRating = (EBaseAttack + EWeaponAttack);
EnemyDefense = int(raw_input('Enter Enemy Defense: '));
EAttackDamage = int(raw_input('Enter Enemy Attack Damage: '));
Damage = ((PAttackDamage-EnemyDefense)if (PAttackDamage-EnemyDefense>0) else 0);
EDamage = ((EAttackDamage-PDefense) if ((EAttackDamage-PDefense)>0) else 0);
LightAttackDamage = (Damage * 0.72);
HeavyAttackDamage = (Damage * 1.28);
LightNicked = (LightAttackDamage/2);
HeavyNicked = (HeavyAttackDamage/2);
Nicked = (Damage/2)
And then it is referenced by this.
def PLightAttackForm():#light attack
print 'Light attack!';
d = dice()
lightbase = d.LightAttack()
EE = EnemyStats()
if lightbase <= EE.EBaseDodge:
print 'You rolled', lightbase, ', Miss!', 0, 'Damage!'
elif lightbase > EE.ENickRange:
print 'You rolled', lightbase, ', Hit!', EE.LightAttackDamage, 'Damage!'
elif lightbase < EE.ENickRange:
print 'You rolled', lightbase, ', Nicked!', EE.LightNicked, 'Damage!'
else:
print lightbase
And everything runs with no errors, but I get this:
Light attack! You rolled 700 , Miss! 0 Damage! #Should be a Hit and damage
Normal Attack! You rolled 278 , Miss! 0 Damage! #should be hit and damage
Heavy Attack! You rolled 135 , Miss! 0 Damage!#should be a Nick and damage
I'm sure it's just something I don't really know, but if you could help me out
that would be amazing! Thank you!
Also, here is all the code I have written. Might have some redundancy... :D
import random
Name = raw_input("Enter Name: ")
#EName===
#EBaseDodge = int(raw_input("Enter Enemy Dodge: "))
PBaseDodge = int(input('Enter Base Dodge: '))
#EnemyEvasion = int(raw_input("Enter Enemy Evasion: "))
PEvasion = int(input('Enter Evasion: '))
#ENickRange = (EBaseDodge + EnemyEvasion)
PNickRange = (PBaseDodge + PEvasion)
PBaseAttack = int(raw_input("Enter Base Attack: "))
#EBaseAttack===
PWeaponAttack = int(raw_input("Enter Weapon Attack: "))
#EWeaponAttack===
PAttackRating = (PBaseAttack + PWeaponAttack)
#EAttackRating = (EBaseAttack + EWeaponAttack)
#EnemyDefense = int(raw_input("Enter Enemy Defense: "))
PDefense = int(input('Enter Defense: '))
PAttackDamage = int(raw_input("Enter Attack Damage: "))
#EAttackDamage===
#Damage = (PAttackDamage-EnemyDefense)
#EDamage = (EAttackDamage-PDefense)
#LightAttackDamage = (Damage * 0.72)
#HeavyAttackDamage = (Damage * 1.28)
#LightNicked = (LightAttackDamage/2)
#HeavyNicked = (HeavyAttackDamage/2)
#Nicked = Damage/2
class dice():
def NormalAttack(self):
return random.randint(1, PAttackRating);
def LightAttack(self):
return random.randint(1, (int(PAttackRating*1.25)));
def HeavyAttack(self):
return random.randint(1, (int(PAttackRating*0.75)));
#def att():
# d = dice()
# base = d.roll()
# if base <= a:
# print 'You rolled', base, ', Miss!', 0, 'Damage!'
# elif base > b:
# print 'You rolled', base, ', Hit!', Damage, 'Damage!'
# elif base < b:
# print 'You rolled', base, ', Nicked!', Nicked, 'Damage!'
# else:
# print base
####################NEW CODE#######################################
def Menu():
print '(1)Attack';
print '(2)Choose Enemy';
print '(3)Charge';
print '(4)Item';
def select():
choice = input('Enter Choice: ');
EE = EnemyStats()
if (choice == 1):
Attacktype();
elif (choice == 2):
EE.Estats();
elif (choice == 3):
Charge();
elif (choice == 4):
ItemSelection();
else:
print 'There are Numbers for a reason Nuub!',;
##############Enemy Stats############
class EnemyStats():
def Ename(self):
return
def EBaseDodge(self):
return
def EnemyEvasion(self):
return
def ENickRange(self):
return
def EBaseAttack(self):
return
def EWeaponAttack(self):
return
def EAttackRating(self):
return
def EnemyDefense(self):
return
def EAttackDamage(self):
return
def Damage(self):
return ((PAttackDamage-EnemyDefense)if (PAttackDamage-EnemyDefense>0) else 0);
def EDamage(self):
return ((EAttackDamage-PDefense) if ((EAttackDamage-PDefense)>0) else 0);
def LightAttackDamage(self):
return int(LightAttackDamage == (Damage * 0.72));
def HeavyAttackDamage(self):
return (Damage * 1.28);
def LightNicked(self):
return (LightAttackDamage/2);
def HeavyNicked(self):
return (HeavyAttackDamage/2);
def Nicked(self):
return (Damage/2)
def Estats(self):
Ename = raw_input('Target Enemy Name: ');
EBaseDodge = int(raw_input('Enter Enemy Dodge: '));
EnemyEvasion = int(raw_input('Enter Enemy Evasion: '));
ENickRange = (EBaseDodge + EnemyEvasion);
EBaseAttack = int(raw_input('Enter Enemy Base Attack: '));
EWeaponAttack = int(raw_input('Enter Enemy Weapon Attack(If N/A, 0): '));
EAttackRating = (EBaseAttack + EWeaponAttack);
EnemyDefense = int(raw_input('Enter Enemy Defense: '));
EAttackDamage = int(raw_input('Enter Enemy Attack Damage: '));
Damage = ((PAttackDamage-EnemyDefense)if (PAttackDamage-EnemyDefense>0) else 0);
EDamage = ((EAttackDamage-PDefense) if ((EAttackDamage-PDefense)>0) else 0);
LightAttackDamage = (Damage * 0.72);
HeavyAttackDamage = (Damage * 1.28);
LightNicked = (LightAttackDamage/2);
HeavyNicked = (HeavyAttackDamage/2);
Nicked = (Damage/2)
#Attacking
def Attacktype():
print '(1)LightAttack';
print '(2)NormalAttack';
print '(3)HeavyAttack';
print '(4)Use Dem Magicks';
print '(5)Menu(<<This is for nuublets)';
Attchoice = input('Enter Choice: ')
if (Attchoice == 1):
PLightAttackForm();
elif (Attchoice == 2):
PNormalAttackForm();
elif (Attchoice == 3):
PHeavyAttackForm();
elif (Attchoice == 4):
MagicMenu();
elif (Attchoice == 5):
Menu();
else:
print 'You wot M8?';
Menu();
def PLightAttackForm():#light attack
print 'Light attack!';
d = dice()
lightbase = d.LightAttack()
EE = EnemyStats()
if lightbase <= EE.EBaseDodge:
print 'You rolled', lightbase, ', Miss!', 0, 'Damage!'
elif lightbase > EE.ENickRange:
print 'You rolled', lightbase, ', Hit!', EE.LightAttackDamage, 'Damage!'
elif lightbase < EE.ENickRange:
print 'You rolled', lightbase, ', Nicked!', EE.LightNicked, 'Damage!'
else:
print lightbase
def PNormalAttackForm():#Normal attack
print 'Normal Attack!';
d = dice()
base = d.NormalAttack()
EE = EnemyStats()
if base <= EE.EBaseDodge:
print 'You rolled', base, ', Miss!', 0, 'Damage!'
elif base > EE.ENickRange:
print 'You rolled', base, ', Hit!', EE.Damage, 'Damage!'
elif base < EE.ENickRange:
print 'You rolled', base, ', Nicked!', EE.Nicked, 'Damage!'
else:
print base
def PHeavyAttackForm():#Heavy Attack
print 'Heavy Attack!';
d = dice()
heavybase = d.HeavyAttack()
EE = EnemyStats()
if heavybase <= EE.EBaseDodge:
print 'You rolled', heavybase, ', Miss!', 0, 'Damage!'
elif heavybase > EE.ENickRange:
print 'You rolled', heavybase, ', Hit!', EE.HeavyAttackDamage, 'Damage!'
elif heavybase < EE.ENickRange:
print 'You rolled', heavybase, ', Nicked!', EE.HeavyNicked, 'Damage!'
else:
print heavybase
def MagicMenu():#magic menu()
print 'Magic menu!';
##############Enemy Stats############
####################NEW CODE#######################################
Answer: As That1Guy said, you're referring to your methods without actually calling them. In Python,
functions are objects as well, and you can compare them to other objects. So
when you do:
if lightbase <= EE.EBaseDodge:
print 'You rolled', lightbase, ', Miss!', 0, 'Damage!'
elif lightbase > EE.ENickRange:
print 'You rolled', lightbase, ', Hit!', EE.LightAttackDamage, 'Damage!'
elif lightbase < EE.ENickRange:
print 'You rolled', lightbase, ', Nicked!', EE.LightNicked, 'Damage!'
else:
print lightbase
`lightbase <= EE.EBaseDodge` always evaluates to true (in CPython 2, numbers
compare as smaller than almost every other kind of object, including bound methods).
You need to add parentheses to the calls to the `EBase...` methods. You also need
to add return values to the declarations of the methods, so make your functions
look something like this (assuming you have a variable called base_dodge):
def EBaseDodge(self):
return base_dodge
and change your ifs to:
if lightbase <= EE.EBaseDodge():
...
I'm not sure why you are trying to use methods for all of those values in the
first place. It would make more sense if they were attributes.
Try removing all of the methods in `EnemyStats` and making your variables
instance variables:
class EnemyStats:
def Estats(self):
#Keep this method to set all of the stats and add self. before them like this
self.Ename = raw_input('Target Enemy Name: ');
self.EBaseDodge = int(raw_input('Enter Enemy Dodge: '));
self.EnemyEvasion = int(raw_input('Enter Enemy Evasion: '));
self.ENickRange = (self.EBaseDodge + self.EnemyEvasion);
....
Putting `self.` before all of your variables associates them with an instance of
your class (`EnemyStats`).
Then to create a new EnemyStats object and input the values:
EE = EnemyStats()
EE.Estats() # will prompt you for the values
After that you can reference your values like you had wanted to before:
if lightbase <= EE.EBaseDodge:
...
Just so you know, this is not the most organized/recommended way of doing
things.
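Putting the pieces together, a trimmed-down sketch of the attribute-based version (only the dodge-related stats are shown; the others follow the same pattern):
    class EnemyStats(object):
        def Estats(self):
            # prompt once; the numbers then live on the instance
            self.EBaseDodge = int(raw_input('Enter Enemy Dodge: '))
            self.EnemyEvasion = int(raw_input('Enter Enemy Evasion: '))
            self.ENickRange = self.EBaseDodge + self.EnemyEvasion

    EE = EnemyStats()
    EE.Estats()
    lightbase = 700
    if lightbase <= EE.EBaseDodge:
        print 'Miss!'
    elif lightbase > EE.ENickRange:
        print 'Hit!'
    else:
        print 'Nicked!'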
|
Permanently caching results of Python class generation
Question: I am doing dynamic class generation that **could be** statically determined at
"compile" time. The simple case that I have right now looks more or less like
this:
class Base(object):
def __init__(self, **kwargs):
self.do_something()
def ClassFactory(*args):
some_pre_processing()
class GenericChild(Base):
def __init__(self, **kwargs):
self.some_processing()
super(GenericChild, self).__init__(*args, **kwargs)
return GenericChild
Child1 = ClassFactory(1, 'Child_setting_value1')
Child2 = ClassFactory(2, 'Child_setting_value2')
Child3 = ClassFactory(3, 'Child_setting_value3')
On import, the Python interpreter seems to compile to bytecode, then execute
the file (thus generating `Child1`, `Child2`, and `Child3`) once per Python
instance.
**Is there a way to tell Python to compile the file, execute it once to unpack
the`Child` classes, then compile that into the `pyc` file, so that the
unpacking only happens once (even across successive executions of the Python
script)?**
I have other use cases that are more complicated and expansive, so simply
getting rid of the factory by hand-writing the `Child` classes is not really
an option. Also, I would like to avoid an extra preprocessor step if possible
(like using the C-style macros with the C preprocessor).
Answer: No, you'd have to generate Python source code instead, where those classes are
'baked' into the code itself.
Use some form of string templating where you generate Python source code, save
those to `.py` files, then bytecompile those.
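A minimal sketch of that approach, assuming `Base` lives in a module called `base` and reusing the three children from the question (all file and module names here are hypothetical):
    # generate_children.py -- run once to bake the classes out to children.py
    TEMPLATE = '''\
    class Child%(num)d(Base):
        def __init__(self, **kwargs):
            self.some_processing()
            super(Child%(num)d, self).__init__(%(num)d, %(setting)r, **kwargs)

    '''

    settings = [(1, 'Child_setting_value1'),
                (2, 'Child_setting_value2'),
                (3, 'Child_setting_value3')]

    with open('children.py', 'w') as out:
        out.write('from base import Base\n\n')
        for num, setting in settings:
            out.write(TEMPLATE % {'num': num, 'setting': setting})

    # the first 'import children' afterwards byte-compiles it to children.pyc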
However, the class generation happens only once on startup. Is it really that
great a cost to generate these?
|
Python cannot compare Tkinter Value
Question: I am trying to get the value of "Dragon On", which should start off as "On". I
want to compare it to "Off" but it does not recognize the change. The second
time I press the button it will set textvariable to Off, but the comparison
will not detect that it changed. I have also tried
if (self.e3.config('textvariable')[-1] == 'Off')
But it just continues on.
from Tkinter import *
import time
import datetime
import threading
import sys
import pdb;
class App:
dragon = 300
def __init__(self, master):
Label(master, text="Dragon Down").grid(row=2)
self.e3 = Button(master,command=self.dragon_callback,width=15,text="Dragon On",textvariable="On")
self.e4 = Entry(master)
self.e3.grid(row=2, column=1)
self.e4.grid(row=2, column=2)
def dragon_callback(self):
print "%s" % self.e3.config('textvariable')[-1]
# print sys._current_frames()
if self.e3.config('textvariable')[-1]:
print "inside here the variable is '%s' " % self.e3.config('textvariable')[-1]
try:
self.t2_stop = threading.Event()
t = threading.Thread(target=self.dragon_time,name="Dragon Thread",args=("eee",self.t2_stop))
t.daemon = True
t.start()
except:
print "Error: unable to start thread"
self.e3.config(textvariable='Off')
else:
print " i stopped"
self.t2_stop.set()
self.e4.delete(0, END)
self.e3.config(textvariable='True')
def dragon_time(self,bleh,stop_event):
#print "did i get the value %s" % self.text_area.get()
boom=300
while(not stop_event.is_set() and boom > 0):
#equivalent to time.sleep()
self.e4.delete(0, END)
self.e4.insert(0, boom)
time.sleep(1)
self.e4.delete(0, END)
self.e4.insert(0, boom)
boom -=1
pass
root = Tk()
root.title("Timer")
app = App(root)
root.mainloop()
Answer: To make that comparison you must convert the object to a string:
if (str(self.e3.config('textvariable')[-1]) == 'Off'):
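In the callback from the question, the test would look something like this (a sketch; only the comparison changes):
    def dragon_callback(self):
        # config() returns a tuple whose last element is a Tcl object;
        # str() converts it so == against a Python string behaves as expected
        if str(self.e3.config('textvariable')[-1]) == 'Off':
            self.t2_stop.set()
            self.e3.config(textvariable='On')
        else:
            pass  # start the timer thread as before, then set textvariable='Off'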
|
Convert python API example to PHP
Question: I need to connect to a RESTful API. The only example the company gave me to
connect to their API is an example in Python. I do not understand the language
but am comfortable with PHP. Is there a way I can do this with cURL and/or
PHP?
Here is the example in Python:
import requests
import hmac
import hashlib
import datetime as dt
import simplejson as json
import sys
tech_prefix = '' #the Account Tech Prefix
secret_key = '' #the API Key
#creating URI info
t = dt.datetime.utcnow().replace(microsecond=0)
timestamp = t.isoformat()
url_scheme = 'https'
net_location = 'api.thesite.com'
path = '/v1/available-tns/npas/'
method = 'GET'
ordered_query_params = ''
body = ''
body_md5 = ''
canonical_uri = url_scheme + "://" + net_location + path + "\n" + ordered_query_params
tokens = (
timestamp,
method,
body_md5,
canonical_uri
)
message_string = u'\n'.join(tokens).encode('utf-8')
signature = hmac.new(secret_key, message_string, digestmod=hashlib.sha1).hexdigest()
headers = {'X-Timestamp':timestamp}
request_url = url_scheme + '://' + net_location + path + '?' + ordered_query_params # append ordered query params here
request = requests.get(request_url,auth=(tech_prefix,signature),headers=headers)
print request
Answer: Yes, you can do this in PHP. This Python code doesn't use any special Python
libraries; it's just sending an HTTP request with specific headers and
specific auth info: an HMAC-SHA1 signature over the joined tokens, an
`X-Timestamp` header, and HTTP basic auth. Actually translating this code from
Python into PHP is outside the scope of a typical StackOverflow answer, though.
|
Python treat module name as 'NoneType'
Question: I have a piece of code that behaves strangely.
At the beginning, I import a module, which is a Python binding for a C
library.
try:
import pyccn
except:
print "ERROR: PyCCN is not found"
exit(1)
Later in my code, I use the pyccn module to do quite a lot of stuff, and it was
working as expected (almost). Now, after working correctly for quite a while,
it gives me the error:
Traceback (most recent call last):
File "./ndn-ls-keys.py", line 185, in upcall
if kind == pyccn.UPCALL_CONTENT_UNVERIFIED:
AttributeError: 'NoneType' object has no attribute 'UPCALL_CONTENT_UNVERIFIED'
So it says 'pyccn' is NoneType!! But it was working; I mean, the same function
that includes line 185 was called multiple times before the error happened. And
the error happens consistently. I didn't redefine 'pyccn'; I was just using
'pyccn.foo(), pyccn.bar(), etc.'.
P.S. The error happens at the end of my script. If I put a time.sleep(10)
there, then it happens after the sleeping...
Thanks!
Answer: I somehow solved this problem. Originally, immediately below the import for
PyCCN, I have two other imports:
import xml.etree.ElementTree as ET
import time
So the imports are global to this file.
Once I moved these two imports inside the function where they are actually
used, the problem went away!! The move is the only change and I don't know the
reason behind this fix.
Perhaps there are some conflicts between the imports due to some problem in the
pyccn module? (I assume the standard libraries are not responsible for this
problem.) Another mechanism that would fit the symptoms: at interpreter shutdown,
CPython 2 clears module globals by rebinding them to None, so a callback that
fires during teardown sees `pyccn` as None; the fact that the error happens at
the very end of the script (or after the sleep) matches that.
|
Keep trailing zeroes in python
Question: I am writing a class to represent money, and one issue I've been running into
is that `"1.50" != str(1.50)`. str(1.50) equals 1.5, and all of a sudden,
POOF, 45 cents have vanished and the amount is now 1 dollar and 5 cents, not
one dollar and 50 cents. Is there any way I could prevent str from doing this, or
am I doing something wrong? This is Python 2, BTW.
Answer: You can use the `format` method on strings to specify how many decimal places
you want to represent:
>>> "{:.2f}".format(1.5)
'1.50'
But even better would be to use the [`decimal
module`](http://docs.python.org/2/library/decimal.html) for representing
money, since representation issues with binary floats can give you slightly
off results if you're doing arithmetic. The documentation for that module
mentions some of those issues specifically - one of the most interesting ones
for money applications is:
>>> 0.1+0.1+0.1-0.3
5.551115123125783e-17
>>> from decimal import Decimal
>>> Decimal('.1') + Decimal('.1') + Decimal('.1') - Decimal('.3')
Decimal('0.0')
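`Decimal` also keeps trailing zeroes when you quantize to a fixed number of places, which is handy for money:
    >>> Decimal('1.5').quantize(Decimal('0.01'))
    Decimal('1.50')
    >>> str(Decimal('1.5').quantize(Decimal('0.01')))
    '1.50'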
|
Printing all the global methods in Python REPL
Question: In Python REPL
dir(str)
prints
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
i.e. all the available methods for strings.
How can I view all the global methods available, like "print", "list", "len",
etc.?
Answer:
>>> dir(__builtins__)
['ArithmeticError', 'AssertionError', 'AttributeError',
... # a whole bunch of other Errors
'abs', 'all', 'any',
... # other builtins
'type', 'vars', 'zip']
As @eryksun mentioned in his comment, this will only work in the `__main__`
module. If you want to do this in an imported module, use
`sorted(__builtins__)`.
|
Using relational model (keys) to make references to other objects, good or bad idea?
Question: In my previous job, most program processing relied on persistent data
stored in a DB.
So the DB data model drove the runtime programs' data structures. Thus it was
very convenient for us to use primary key values as references to other
objects.
**For example:**
1 - Consider a company that sells goods such as books to customers
via the Internet. We have three classes: Book, Order and Customer.
* The Book class contains various information about the book and also a unique identifier such as the ISBN number.
* Customer class contains all the data (and generally much more) the company needs to know in order to ship the books to their customers, such as the email address. Customer objects also have a unique persistent id that identifies them.
* Thus the Order class contains two relational references `int isbn;` (the book id) and `int customer_id;`
In this example case, Order class methods do not need to access customer data
or book data, so the Order class doesn't need to depend on them.
2 - If we now consider another class that is used to write and send the email
confirmation of an order:
class OrderMailer
{
// Customer index
std::map<int, Customer *> customers;
...
// we have a function that send email with low level parameters
void sendEmail(const std::string& mailAddress, const std::string& body);
// and we have another method that simply sends the email for a given order
void sendEmail(const Order& order);
};
The sendEmail(const Order& order) method will need to get the customer email
address, so it will need to get the object from its identifier.
That's why we have a map; the address is then accessed like this:
    const std::string& target = customers[order.customer_id]->emailAddress; // not-found test omitted for readability
That's the idea.
**The question part:**
I used this way of referencing for several years without asking myself "Is
it really a good idea?" because:
* object/record ids were the way to identify objects in the company
* such persistent ids were used everywhere (code, logs, discussions with other ITs)
* runtime data structures always reflected the DB data model (that may not be a solid argument, but it was very helpful when switching between the DB world and the runtime world (C++, Python, JS))
I'm no longer at that company, but I kept this way of programming when
dealing with persistent records, and I'm not sure I'm doing things right.
What have we done?
We used a logical way of referencing objects instead of using those provided
by the language: pointers or C++ references. It sounds really bad to me when said
like that.
Here is a list of pros and cons of this "method" from my point of view:
* Pros:
* If there is an underlying relational data model, the runtime data structures reflect the data model and things are easier to understand
* This avoids useless coupling of classes (in the example, Order does not depend on the Customer and Book classes)
* Unique identifiers can be strings that are very human-friendly to read
* Cons:
* Why not use the basic, fundamental features of the language, pointers and references, to do this? That sounds like a bad approach.
* We need to use dictionaries/indexes (maps) every time we want to access data of logically referenced objects
As said, I'm not sure using logical/relational references is a good thing. Are
there rules that can be applied in order to decide whether or not to use this
approach? I'd be happy to have your opinion on this.
Answer: Well, this is how I would do it: since this is C++ and we use standard
containers, why not use iterators? I added some printing code for easy testing
as well.
If you want iterator stability, change the vector to a list. For better lookup
operations, have a look at `boost::multi_index`
<http://www.boost.org/doc/libs/1_53_0/libs/multi_index/doc/index.html>.
`boost::multi_index` is also good to use if iterator stability is important.
It would probably also be easy to use `boost::serialization` if you want to
write your data structures to a file.
#include <string>
#include <iostream>
#include <vector>
struct Streamable
{
virtual ~Streamable() = default;
virtual std::ostream& print(std::ostream& ost) const = 0;
};
std::ostream& operator<<(std::ostream& ost, const Streamable& str)
{
return str.print(ost);
}
struct Customer : public Streamable
{
std::string name_m;
Customer(const std::string& name_):name_m(name_)
{}
std::ostream& print(std::ostream& ost) const
{
return ost<<name_m;
}
};
struct CustomerColl : public std::vector<Customer>,
public Streamable
{
CustomerColl(std::initializer_list<Customer> customers)
:std::vector<Customer>(customers)
{}
std::ostream& print(std::ostream& ost) const
{
ost<<"Customers:"<<std::endl;
for(const auto& c: *this)
{
ost<<c<<std::endl;
}
return ost;
}
};
struct Book : public Streamable
{
std::string title_m;
unsigned int isbn_m;
Book(const std::string& title_, const unsigned int& isbn_)
:title_m(title_),isbn_m(isbn_)
{}
std::ostream& print(std::ostream& ost) const
{
return ost<<title_m<<":{"<<isbn_m<<"}";
}
};
struct BookColl : public std::vector<Book>,
public Streamable
{
BookColl(std::initializer_list<Book> books)
:std::vector<Book>(books)
{}
std::ostream& print(std::ostream& ost) const
{
ost<<"Books:"<<std::endl;
for(const auto& b: *this)
{
ost<<b<<std::endl;
}
return ost;
}
};
struct Order : public Streamable
{
BookColl::const_iterator book_m;
CustomerColl::const_iterator customer_m;
Order(const BookColl::const_iterator& book_,
const CustomerColl::const_iterator& customer_)
:book_m(book_),customer_m(customer_)
{}
std::ostream& print(std::ostream& ost) const
{
return ost<<"["<<*customer_m<<"->"<<*book_m<<"]";
}
};
struct OrderColl : public std::vector<Order>,
public Streamable
{
OrderColl(std::initializer_list<Order> orders)
:std::vector<Order>(orders)
{}
std::ostream& print(std::ostream& ost) const
{
ost<<"Orders:"<<std::endl;
for(const auto& o: *this)
{
ost<<o<<std::endl;
}
return ost;
}
};
int main()
{
CustomerColl customers{{"Anna"},{"David"},{"Lisa"}};
BookColl books{{"C++",123},{"Java",234},{"Lisp",345}};
OrderColl orders{{books.begin(),customers.begin()},{books.begin(),customers.begin()+1},{books.end()-1,customers.begin()+2}};
std::cout<<customers<<std::endl;
std::cout<<books<<std::endl;
std::cout<<orders<<std::endl;
return 0;
}
|
How to JSON serialize hh:mm:ss in Python? How to query its type?
Question: I need to serialize a Python object into JSON and am having a hard time
converting time counters into a JSON-friendly form.
Say I have something like this:
01:20:24 # hh:mm:ss
which is a time counter I'm increasing while my script is running.
When done, I need to convert it to JSON.
I'm currently trying this:
dthandler = lambda obj: obj.isoformat() if isinstance(obj, time) else None
this_object["totaltime"] = json.dumps(this_object["totaltime"], default=dthandler)
but I get an error on `time` being not a valid `class, type, or tuple of
classes and types`
**Question:**
How do I serialize this? And is there a list of possible 'default-types' to
query against (Python newbie... sorely missing Javascript typeof)
Thanks!
Answer: This is not a JSON problem; your `time` reference is not what you think it is.
Make sure you have a `datetime.time` object there, and not the `time` module,
for example:
>>> import datetime
>>> import time
>>> ref = datetime.time(10, 20)
>>> isinstance(ref, time)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
>>> isinstance(ref, datetime.time)
True
If you use the _correct_ type to test against, things work fine:
>>> import json
>>> dthandler = lambda obj: obj.isoformat() if isinstance(obj, datetime.time) else None
>>> json.dumps(ref, default=dthandler)
'"10:20:00"'
Note that the documentation expects the handler to raise a `TypeError` instead
of returning `None`; that way unserializable objects are at least treated as
errors:
def dthandler(o):
try:
return o.isoformat()
except AttributeError:
raise TypeError
would be more Pythonic and correct.
|
How does python find a module file if the import statement only contains the filename?
Question: Everywhere I see Python code importing modules using `import sys` or `import
mymodule`
How does the interpreter find the correct file if no directory or path is
provided?
Answer: <http://docs.python.org/2/tutorial/modules.html#the-module-search-path>
> When a module named spam is imported, the interpreter first searches for a
> built-in module with that name. If not found, it then searches for a file
> named spam.py in a list of directories given by the variable sys.path.
> sys.path is initialized from these locations:
>
> * the directory containing the input script (or the current directory).
> * PYTHONPATH (a list of directory names, with the same syntax as the shell
> variable PATH).
> * the installation-dependent default.
>
For information on the "installation-specific default", see documentation on
[the `site` module](https://docs.python.org/3/library/site.html#module-site).
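A quick way to inspect the search path from the interpreter (the exact output will vary by installation):
    >>> import sys
    >>> sys.path[0]      # '' in an interactive session: search the current directory first
    ''
    >>> import string
    >>> string.__file__  # where the search actually found the module
    '/usr/lib/python2.7/string.pyc'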
|
What is the OAuth scope for the Google Translation API?
Question: Surely someone else is using the API. I've looked and searched, but I cannot
seem to find the correct value to place in the scope parameter when
authenticating:
I've looked at all these scope lists with no luck, and tried the OAuth 2.0 playground;
translation is not there.
[oauth playground v1](http://googlecodesamples.com/oauth_playground/)
[oauth playground v2](https://developers.google.com/oauthplayground/)
[oath supported
scopes](https://developers.google.com/gdata/articles/oauth#TokenScope)
[auth scopes](https://developers.google.com/gdata/faq#AuthScopes)
Any clues welcomed, thank you.
Error message:
Error: invalid_request
Missing required parameter: scope
Learn more
Request Details
**Update**
User Ezra explained that OAuth2 authentication is not needed for the
Translation API.
I got down this road by this path:
I was trying to make the sample code here work:
[translation api sample code](http://code.google.com/p/google-api-python-
client/source/browse/samples/translate/main.py)
And didn't have the apiclient.discovery module
from apiclient.discovery import build
I went off looking for that which landed me [here to this quick-start
configurator](https://developers.google.com/api-client-
library/python/start/installation) which gave me an autogenerated translation
api project [here](https://google-api-client-
libraries.appspot.com/quickstart/c/gen?api=translate&language=python&language_variant=stable&platform=appengine&version=v2):
This starter project which is supposed to be tailored for Translation API
includes a whole bunch of OAuth configuration and so I wound up asking the
question because of the error mentioned here
exception calling translation api: <HttpError 400 when requesting https://www.googleapis.com/language/translate/v2?q=zebra&source=en&alt=json&target=fr&key=MYSECRETKEYWENTHERE returned "Bad Request">
The code I'm using to make said call which errors out in this way is:
service = build('translate', 'v2',
developerKey='MYSECRETKEYWENTHERE')
result = service.translations().list(
source='en',
target=lang,
q='zebra'
).execute()
If I make the same call directly that the error complains about, it works ok
https://www.googleapis.com/language/translate/v2?key=MYSECRETKEYWENTHERE&q=zebra&target=fr&alt=json&source=en
**Updated Again**
Okay, I removed all the OAuth code from the sample project and then ran it
again and then finally noticed that I had a typo in my secret key... donk
Thanks for the answers!
Answer: According to [Google's
documentation](https://developers.google.com/gdata/faq#AuthScopes), you have
to look at the documentation for your specific API.
Update as per [this Google Group
question](https://groups.google.com/forum/?fromgroups=#!topic/google-ajax-
search-api/oVTXDbTKhF8):
"The Translate API (both v1 and v2) is an unauthenticated API, so you don't
need to use OAuth with it. Instead, for v2, you should use an API key, which
you can get here: <http://code.google.com/apis/console>"
|
Running out of cron.hourly won't import a Python module
Question: I have foo running out of cron.hourly. It's been chmod +x'd, and it runs fine.
My problem is it does not recognize Python modules as importable.
I have ~/Foo/src, and within that lies the original Python code that I turned
into an executable (main), as well as the other module I'm trying to import
(foobar). I have a **init**.py sitting there, empty, which should let either
module be imported. In fact, running my script with
python src/main.py
Everything works just fine and I don't get this error. When running
run-parts -v /etc/cron.hourly/main
I get an error as follows:
ImportError: No module named foobar
run-parts: /etc/cron.hourly//main exited with return code 1
The way that I'm importing foobar is
os.chdir("/home/ubuntu/Foo/src/")
import foobar
Again, this works when running from Python, but not when running my
executable. Why is this, and what can I change to avoid this?
Answer:
import sys
sys.path.append("/home/ubuntu/Foo/src")
import foobar
From the doc:
> sys.path
>
> A list of strings that specifies the search path for modules. Initialized
> from the environment variable PYTHONPATH, plus an installation-dependent
> default.
>
> As initialized upon program startup, the first item of this list, path[0],
> is the directory containing the script that was used to invoke the Python
> interpreter. If the script directory is not available (e.g. if the
> interpreter is invoked interactively or if the script is read from standard
> input), path[0] is the empty string, which directs Python to search modules
> in the current directory first. Notice that the script directory is inserted
> before the entries inserted as a result of PYTHONPATH.
>
> A program is free to modify this list for its own purposes.
|
Swig and Python - different object instantation
Question: I have a question regarding SWIG-wrapped objects generated on the Python side
and wrapped objects generated on the C++ side. Suppose I have the following
simple C++ class definitions
#include <vector>
class Sphere
{
public:
Sphere(){};
};
class Container
{
public:
Container() : data_(0) {};
void add() {
data_.push_back(Sphere());
}
Sphere & get(int i) { return data_[i]; }
std::vector<Sphere> data_;
};
and the following swig setup
%module engine
%{
#define SWIG_FILE_WITH_INIT
#include "sphere.h"
%}
// -------------------------------------------------------------------------
// Header files that should be parsed by SWIG
// -------------------------------------------------------------------------
%feature("pythonprepend") Sphere::Sphere() %{
print 'Hello'
%}
%include "sphere.h"
If I then do the following in Python
import engine
sphere_0 = engine.Sphere()
container = engine.Container()
container.add()
sphere_1 = container.get(0)
Then the first instantiation of the wrapped Sphere class does call the
`__init__` method of the Python wrapping interface ('Hello' is printed).
However, the second, where the instance is generated on the C++ side, does not
('Hello' is not printed).
Since my goal is to be able to add additional Python functionality to the
object upon its construction, I'd be pleased to hear if anybody has any
pointers for a correct approach to achieve this - for both of the above
instantiation scenarios.
Best regards,
Mads
Answer: I usually do things like this with explicit `pythoncode` blocks in the
interface file:
%pythoncode %{
def _special_python_member_function(self):
print "hello"
self.rotate() # some function of Sphere
Sphere.new_functionality=_special_python_member_function
%}
So you can add arbitrary Python functionality to the class, on top of what the
SWIG interface provides. You may want/need to `rename` some of the C
functionality out of the way, but this should get you all of the member
_functions_ you want.
I've never tried to remap `__init__` in this way, so I don't know how that
would behave. Assuming that it won't work, you won't be able to ensure that
the python objects have a given internal state (member variables) at
construction.
What you will be forced to do is do lazy evaluation:
def function_that_depends_on_python_specific_state(self, *args):
if not hasatttr( self, 'python_data'):
self.python_data = self.make_python_data() # construct the relevant data
pass # do work that involves the python specific data
and check for the existence of the Python-specific data. If there are just a
few cases of this, I'd just put it in the functions as above. However, if that
ends up being messy, you could modify `__getattr__` so that it constructs the
python-specific data members as they are accessed.
def _sphere_getattr(self, name):
if name=='python_data':
self.__dict__[name]=self.make_python_data()
return self.__dict__[name]
else:
raise AttributeError
Sphere.__getattr__ = _sphere_getattr
IMHO, in the limit where you have a large amount of new functionality, and
data that are independent of the underlying C implementation, you are in
effect asking "How can I make my Python Sphere class be a subclass of the C
Sphere class but keep them as the same type?"
|
How can I generate a random url of a certain length every time a page is created?
Question: In my python/pyramid app, I let users generate html pages which are stored in
an amazon s3 bucket. I want each page to have a separate path like
www.domain.com/2cxj4kl. I have figured out how to generate the random string
to put in the url, but I am more concerned with duplicates. how can I check
each of these strings against a list of existing strings so that nothing is
overwritten? Can I just put each string in a dictionary or array and check the
ever growing array/dict each time a new one is created? Are there issues with
continuing to grow such an object, and will it permanently exist in the app
memory somehow? How can I do this?
Answer: The approach of storing a list of existing identifiers in some storage and
comparing new identifiers with the list would work in a simple case, however,
this may become tricky if you have to store, say, billions of identifiers, or
if you want to generate them on more than one machine. This also complicates
things with storing the list, retrieving, comparing etc. Not to mention
locking - what if two users decide to create a page at exactly the same
second?
Universally Unique Identifiers (UUIDs) have a [very-very low chance of
collision](http://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates)
\- much lower than, say, a chance of our planet being swallowed by a black
hole in the next five minutes. So low that you can ignore it for any practical
purposes.
Python has a library called [uuid](http://docs.python.org/2/library/uuid.html)
to generate UUIDs
>>> import uuid
>>> # make a random UUID
>>> u = uuid.uuid4()
>>> u.hex
'f3db6f9a34ed48938a45113ac4b5f156'
The resulting string is 32 characters long, which may be too long for you.
Alternatively, you may just generate a random string like this:
''.join(random.choice(string.ascii_letters + string.digits) for x in range(12))
at 10-15 characters long it probably will be less random than a UUID, but
still the chance of a collision would be much lower than, say, the chance of a
janitor at an Amazon data center going mental, destroying your server with an
axe and setting the data center on fire :)
|
Make python send the enter key when using curl
Question: I'm making a Python script that curls the fantasy hockey scoreboard page,
calls a Perl script to regex-substitute out the team names and scores, and
displays it. The regex is all set up, but now I'm having trouble getting the
webpage. I notice when I do it on my computer, after I run the
curl -o /tmp/fantasyhockey.txt http://games.espn.go.com/fhl/scor...
command, pressing enter once will start it, and then pressing it again will
exit out and make my page. How do I force Python to press enter? For
curiosity's sake, why is it waiting?
EDIT: Here's the script. Not much to it.
import os
def main():
os.system("curl -o /tmp/fantasyhockey.txt http://games.espn.go.com/fhl/scoreboar\
d?leagueId=xxxxx&seasonId=2013")
unfix = open("/tmp/fantasyhockey.txt", "r").read().replace('\n', '')
outfile = open("/tmp/fantasyhockey2.txt", "w")
outfile.write(unfix)
outfile.close()
os.system("perl regex.pl < /tmp/fantasyhockey2.txt")
os.system("rm /tmp/fantasyhockey*")
main()
Answer: The `&` character in your URL has special meaning in the shell (it sends the
program to the background). This is why you have to hit enter.
To avoid this, quote the URL argument to `curl`, like this:
`curl -o /tmp/fantasyhockey.txt "http://...."`
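Alternatively, a sketch that sidesteps the shell entirely by passing `subprocess` an argument list, so `&` is never interpreted at all:
    import subprocess

    subprocess.call(['curl', '-o', '/tmp/fantasyhockey.txt',
                     'http://games.espn.go.com/fhl/scoreboard?leagueId=xxxxx&seasonId=2013'])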
|
Python flattening my tuple structure
Question: I am trying to have a hierarchical Python data structure with a map whose
values are tuples. In some cases the tuple will be of length 1. Python
intelligently flattens the structure whenever the tuple is of length 1.
Observe the example below, which can be run in a Python interpreter. In the
"another_scenario" I expected the length to be 1, but it has drilled one level
below and got the underlying steps. This totally screws up my tests because I
rely on that being a tuple of (command, function_list, function_list).
Question - Why does this happen? How do I ask Python not to flatten it?
import os
def run():
my_scenario = {
"scenario_name" :
( # Each scenario is a List of (command, function_list, function_list)
# function_list = one function OR tuple of functions
(
"command1",
(
os.path,
os.path.exists
),
None
),
(
"command2",
(
os.path,
os.path.exists
),
None
)
)
}
another_scenario = {
"scenario_name" :
(
(
"command1",
(
os.path,
os.path.exists
),
None
)
)
}
for name in my_scenario:
print "Full Scenario is %s" % str(my_scenario[name])
print "Length should be 2 -> %s" % len(my_scenario[name])
for name in another_scenario:
print "Full Scenario is %s" % str(another_scenario[name])
print "Length should be 1 -> %s" % len(another_scenario[name]) #Prints 3 as it drills one level down
if __name__ == "__main__":
run()
Answer: You need to add a comma:
another_scenario = {
"scenario_name":
(
(
"command1",
(
os.path,
os.path.exists
),
None
), # <- Note this comma
)
}
to make that a tuple, otherwise it is just an expression. 1-element tuples can
only be distinguished from expressions by the presence of a comma:
>>> (1)
1
>>> (1,)
(1,)
>>> type((1))
<type 'int'>
>>> type((1,))
<type 'tuple'>
In fact, it's the _comma_ that defines tuples, not the parenthesis:
>>> 1,
(1,)
>>> 1, 2
(1, 2)
The parenthesis are only needed when you need to define an _empty_ tuple:
>>> ()
()
|
Python and efficient looping of set intersections (using trees)
Question: Below are the distinct paths of attributes and values of a decision tree. If I
were to enumerate the tree of every combination, the tree would be huge.
So... each path of the tree holds all of the distinct attributes and values of a
leaf node.
Given a list of values to score, i.e. finding the node with the most common
elements, I use the code below.
What is the most insanely fast method of accomplishing this? The code below
works, but time is of the utmost importance, so much so that it would be worth
writing it in C and importing it into Python.
Would a tree structure be faster? If so, what structure? Would scipy.weave be
faster?
nodes = {}
nodes[1] = ['hod=1','hod=2','state=NY','state=LA']
nodes[2] = ['hod=3','hod=4','state=FL','state=NV']
nodes[3] = ['hod=5','hod=6','state=WY','state=HI']
nodes[4] = ['hod=5','hod=6']
score = ['hod=6','state=WY','dow=4']
score_size = len(score)
max_node = -1
max_len = -1
for node_id, node in nodes.iteritems():
this_node_interection_len = len(set(score).intersection(node))
if this_node_interection_len>max_len:
max_len = this_node_interection_len
max_node = node_id
#print node_id, len(set(score).intersection(node))
print 'max_node', max_node
Answer: It _might_ be faster to store the data in a heap, where
`len(set(score).intersection(node))` is the key value for each node. This way, building
the initial data structure would be a bit slower than making a flat dictionary
would be, but you could quickly retrieve the top several nodes rather than
just the node with the maximum score.
You should also look into using PyPy or something similar to optimize
performance.
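Before reaching for C, a cheap win in pure Python is to build the sets once instead of rebuilding `set(score)` on every iteration, and to let `max()` drive the loop (a sketch against the data from the question):
    score_set = frozenset(score)
    node_sets = dict((nid, frozenset(vals)) for nid, vals in nodes.iteritems())

    max_node = max(node_sets, key=lambda nid: len(score_set & node_sets[nid]))
    print 'max_node', max_node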
|
InterfaceError unknown type <class 'decimal.Decimal'> for arg 10
Question: I'm trying to fetch some data from my database. This works perfectly on my
local machine, but when deployed on Google App Engine it gives me an error:
> InterfaceError at /report/unit/D8500/WV_herverkoop/2013/0/10/ unknown type
> <class 'decimal.Decimal'> for arg 10
**full traceback**
InterfaceError at /report/unit/D8500/WV_herverkoop/2013/0/10/
unknown type <class 'decimal.Decimal'> for arg 10
Request Method: GET
Request URL: http://dw-services.appspot.com/report/unit/D8500/WV_herverkoop/2013/0/10/
Django Version: 1.4.3
Exception Type: InterfaceError
Exception Value:
unknown type <class 'decimal.Decimal'> for arg 10
Exception Location: /python27_runtime/python27_lib/versions/1/google/storage/speckle/python/api/rdbms.py in _AddBindVariablesToRequest, line 427
Python Executable: /python27_runtime/python27_dist/python
Python Version: 2.7.3
Python Path:
['/base/data/home/apps/s~dw-services/2.365787501016750085',
'/python27_runtime/python27_dist/lib/python27.zip',
'/python27_runtime/python27_dist/lib/python2.7',
'/python27_runtime/python27_dist/lib/python2.7/plat-linux2',
'/python27_runtime/python27_dist/lib/python2.7/lib-tk',
'/python27_runtime/python27_dist/lib/python2.7/lib-old',
'/python27_runtime/python27_dist/lib/python2.7/lib-dynload',
'/python27_runtime/python27_dist/lib/python2.7/site-packages',
'/python27_runtime/python27_lib/versions/1',
'/python27_runtime/python27_lib/versions/third_party/django-1.4',
'/python27_runtime/python27_lib/versions/third_party/webapp2-2.3',
'/python27_runtime/python27_lib/versions/third_party/webob-1.1.1',
'/python27_runtime/python27_lib/versions/third_party/yaml-3.10',
'/base/data/home/apps/s~dw-services/2.365787501016750085/..']
Server time: don, 7 Mrt 2013 13:58:28 +0000
Traceback:
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/core/handlers/base.py in get_response
response = callback(request, *callback_args, **callback_kwargs) ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/contrib/auth/decorators.py in _wrapped_view
return view_func(request, *args, **kwargs) ...
▶ Local vars
/base/data/home/apps/s~dw-services/2.365787501016750085/dewaelereports/views/reportView.py in showReportPerOffice
position = 0 # position in which we'll insert the region
region_commission = 0 # commission earned in a region
region_count = 0 # transactions in a region
region_avg = 0 # average in a region
counter = 0 # detect when it's the first zip in the region
for zip in region.zips.all():
results = zip.get_total_sales(office, start_week, end_week, year, unit_type) ...
total_count += results[0][0]
total_commission += results[0][1]
region_count += results[0][0]
region_commission += results[0][1]
zipList.append((zip.zip_name, results[0], results[1]))
if counter == 0:
▶ Local vars
/base/data/home/apps/s~dw-services/2.365787501016750085/dewaelereports/models.py in get_total_sales
return [calculate_results(zip_transactions, unit), zip_transactions] ...
▼ Local vars
Variable Value
office
<Office: D8500>
transactions
[<Transaction: k08071>, <Transaction: T8500-08157C>, <Transaction: D8500-11451>, <Transaction: D8500-12042H>, <Transaction: D8500-12143B>, <Transaction: T8500-09259>, <Transaction: T8500-10244a>, <Transaction: T8500-10277>, <Transaction: T8500-10277>, <Transaction: D8500-10345-A12>, <Transaction: T8500-10420>, <Transaction: T8500-10496>, <Transaction: T8500-11040H>, <Transaction: D8500-11048>, <Transaction: D8500-11650H>, <Transaction: D8500-11255B>, <Transaction: T8500-11325>, <Transaction: D8500-11343H>, <Transaction: T8500-11497>, <Transaction: D8500-11508B>, '...(remaining elements truncated)...']
start_week
u'0'
self
<Zip: 8500 - Kortrijk>
zip_transactions
[<Office_per_transaction: 2460>, <Office_per_transaction: 2413>, <Office_per_transaction: 775>, <Office_per_transaction: 2477>, <Office_per_transaction: 2414>, <Office_per_transaction: 2485>]
unit
<Unit_type: WV herverkoop>
year
u'2013'
office_transactions
[<Office_per_transaction: 196>, <Office_per_transaction: 1111>, <Office_per_transaction: 2460>, <Office_per_transaction: 2433>, <Office_per_transaction: 1105>, <Office_per_transaction: 1135>, <Office_per_transaction: 2413>, <Office_per_transaction: 775>, <Office_per_transaction: 3176>, <Office_per_transaction: 3444>, <Office_per_transaction: 2477>, <Office_per_transaction: 2414>, <Office_per_transaction: 1094>, <Office_per_transaction: 2485>]
end_week
u'10'
/base/data/home/apps/s~dw-services/2.365787501016750085/dewaelereports/models.py in calculate_results
if len(transaction_query) >= 1:
sum = 0
for tr in transaction_query:
sum += tr.transaction.commission_fix_out * (tr.office_percentage / 100)
if unit:
if unit.min_quantity:
count_transactions = len(transaction_query.filter(transaction__transaction_price_out__gt=str(unit.min_quantity))) ...
if not unit or not unit.min_quantity:
count_transactions = len(transaction_query)
avg = sum / len(transaction_query)
▼ Local vars
Variable Value
transaction_query
[<Office_per_transaction: 2460>, <Office_per_transaction: 2413>, <Office_per_transaction: 775>, <Office_per_transaction: 2477>, <Office_per_transaction: 2414>, <Office_per_transaction: 2485>]
count_transactions
0
sum
Decimal('61313.0300')
tr
<Office_per_transaction: 2485>
avg
0
unit
<Unit_type: WV herverkoop>
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/models/query.py in __len__
self._result_cache = list(self.iterator()) ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/models/query.py in iterator
for row in compiler.results_iter(): ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/models/sql/compiler.py in results_iter
for rows in self.execute_sql(MULTI): ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/models/sql/compiler.py in execute_sql
cursor.execute(sql, params) ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/backends/util.py in execute
return self.cursor.execute(sql, params) ...
▶ Local vars
/python27_runtime/python27_lib/versions/third_party/django-1.4/django/db/backends/mysql/base.py in execute
return self.cursor.execute(query, args) ...
▶ Local vars
/python27_runtime/python27_lib/versions/1/google/storage/speckle/python/api/rdbms.py in execute
request = sql_pb2.ExecRequest()
request.options.include_generated_keys = True
if args is not None:
if not hasattr(args, '__iter__'):
args = [args]
self._AddBindVariablesToRequest(
statement, args, request.bind_variable.add) ...
request.statement = _ConvertFormatToQmark(statement, args)
self._DoExec(request)
self._executed = request.statement
def executemany(self, statement, seq_of_args):
"""Prepares and executes a database operation for given parameter sequences.
▼ Local vars
Variable Value
self
<google.storage.speckle.python.api.rdbms.Cursor object at 0xfc8c0b10>
args
(1,
0,
10,
'2013-01-01 00:00:00',
'2013-12-31 23:59:59.99',
'R',
'S',
13,
2,
1459,
Decimal('75000'))
request
<google.storage.speckle.proto.sql_pb2.ExecRequest object at 0xfc8a6458>
statement
'SELECT `dewaelereports_office_per_transaction`.`id`, `dewaelereports_office_per_transaction`.`office_id`, `dewaelereports_office_per_transaction`.`transaction_id`, `dewaelereports_office_per_transaction`.`office_percentage` FROM `dewaelereports_office_per_transaction` INNER JOIN `dewaelereports_transaction` ON (`dewaelereports_office_per_transaction`.`transaction_id` = `dewaelereports_transaction`.`id`) WHERE (`dewaelereports_office_per_transaction`.`office_id` = %s AND `dewaelereports_transaction`.`transaction_end_week` BETWEEN %s and %s AND `dewaelereports_transaction`.`transaction_end_date` BETWEEN %s and %s AND `dewaelereports_transaction`.`transaction_end_status` IN (%s, %s) AND `dewaelereports_office_per_transaction`.`transaction_id` IN (SELECT U0.`transaction_id` FROM `dewaelereports_transaction_status` U0 WHERE U0.`status` = %s ) AND `dewaelereports_transaction`.`unit_type_id` = %s AND `dewaelereports_office_per_transaction`.`transaction_id` IN (SELECT U0.`id` FROM `dewaelereports_transaction` U0 INNER JOIN `dewaelereports_property` U1 ON (U0.`premises_id` = U1.`id`) WHERE U1.`property_zip_id` = %s ) AND `dewaelereports_transaction`.`transaction_price_out` > %s )'
/python27_runtime/python27_lib/versions/1/google/storage/speckle/python/api/rdbms.py in _AddBindVariablesToRequest
raise InterfaceError('unknown type %s for arg %d' % (type(arg), i)) ...
▼ Local vars
Variable Value
direction
1
i
10
bind_variable_factory
<bound method RepeatedCompositeFieldContainer.add of [<google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c14c8>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c17a0>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1960>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1998>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1f10>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1dc0>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c17d8>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1c70>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c15a8>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1d50>, <google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1ed8>]>
args
(1,
0,
10,
'2013-01-01 00:00:00',
'2013-12-31 23:59:59.99',
'R',
'S',
13,
2,
1459,
Decimal('75000'))
bv
<google.storage.speckle.proto.client_pb2.BindVariableProto object at 0xfc8c1ed8>
statement
'SELECT `dewaelereports_office_per_transaction`.`id`, `dewaelereports_office_per_transaction`.`office_id`, `dewaelereports_office_per_transaction`.`transaction_id`, `dewaelereports_office_per_transaction`.`office_percentage` FROM `dewaelereports_office_per_transaction` INNER JOIN `dewaelereports_transaction` ON (`dewaelereports_office_per_transaction`.`transaction_id` = `dewaelereports_transaction`.`id`) WHERE (`dewaelereports_office_per_transaction`.`office_id` = %s AND `dewaelereports_transaction`.`transaction_end_week` BETWEEN %s and %s AND `dewaelereports_transaction`.`transaction_end_date` BETWEEN %s and %s AND `dewaelereports_transaction`.`transaction_end_status` IN (%s, %s) AND `dewaelereports_office_per_transaction`.`transaction_id` IN (SELECT U0.`transaction_id` FROM `dewaelereports_transaction_status` U0 WHERE U0.`status` = %s ) AND `dewaelereports_transaction`.`unit_type_id` = %s AND `dewaelereports_office_per_transaction`.`transaction_id` IN (SELECT U0.`id` FROM `dewaelereports_transaction` U0 INNER JOIN `dewaelereports_property` U1 ON (U0.`premises_id` = U1.`id`) WHERE U1.`property_zip_id` = %s ) AND `dewaelereports_transaction`.`transaction_price_out` > %s )'
arg
Decimal('75000')
self
<google.storage.speckle.python.api.rdbms.Cursor object at 0xfc8c0b10>
Obviously it fails because it says that `unit.min_quantity` is a Decimal. But
I didn't declare it as a Decimal in my models:
class Unit_type(models.Model):
unit_name = models.CharField(max_length=145)
department = models.ForeignKey(Department)
min_quantity = models.IntegerField(blank=True, null=True)
class Meta:
verbose_name = 'unit type'
def __unicode__(self):
return self.unit_name
When I fill out the statement and execute it manually in Google Cloud SQL, it
does work. Can someone help me with this nasty issue?
Thanks
Answer: The Google Cloud SQL DBAPI currently does not support the `decimal.Decimal`
type, although you can monkey patch it to:
import decimal
from google.storage.speckle.proto import jdbc_type
from google.storage.speckle.python.api import converters
from google.storage.speckle.python.api import rdbms
from google.storage.speckle.python.api import rdbms_googleapi
rdbms._PYTHON_TYPE_TO_JDBC_TYPE[decimal.Decimal] = jdbc_type.DECIMAL
converters.conversions[decimal.Decimal] = converters.Any2Str
converters.conversions[jdbc_type.DECIMAL] = decimal.Decimal
There is an issue logged
[here](https://code.google.com/p/googlecloudsql/issues/detail?id=62) where a
Google representative provided the patch above and indicated they are working
on a fix for this.
|
How to convert a string into a localised date in django
Question: We are doing an AJAX call in Django, where a user enters a date and a number,
and the AJAX call looks up if there already is a document with that number and
date.
The application is internationalised and localised. The problem is how to
interpret the date sent by AJAX into a valid Python/Django date object. This
has to be done using the current user's locale, of course.
One solution I found does not work: [Django: how to get format date in
views?](http://stackoverflow.com/questions/8287883/django-how-to-get-format-
date-in-views)
`get_format()` returns a string (in our case `j-n-Y`), but `strftime()`
expects a format string in the form of `%j-%n-%Y`.
Why the Django format differs from the `strftime()` format beats me. FYI,
we're using Django 1.5 currently.
I think the problem is that in all examples I could find, the dates are
already date objects, and Python/Django just does formatting. What we need is
to convert the string into a date using the current locale, and THEN format
it. I was figuring this would be a standard problem, but all of the possible
solutions I found and tried don't seem to work...
Thanks,
Erik
Answer: Submitting a ticket to Django gave me a clue to the answer. You can convert a
specific type of data into an object by passing it through the corresponding
form field and calling `to_python()`. In my case, with a date, it would be like so:
    from django.forms.fields import DateField

    fld = DateField()
    dt = request.GET.get('date', '')
    parsed_date = fld.to_python(dt)  # a datetime.date, parsed per the active locale
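Building on that, a small helper of my own (not from the ticket) that
tolerates bad input; `DateField.to_python()` raises `ValidationError` when
the string matches none of the active locale's `DATE_INPUT_FORMATS`:

    from django.core.exceptions import ValidationError
    from django.forms.fields import DateField

    def parse_localized_date(value):
        """Parse a user-entered date string using the active locale's
        input formats; return None if it cannot be parsed."""
        try:
            return DateField(required=False).to_python(value)
        except ValidationError:
            return None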
Erik
|
Changing factor order in ggplot2 with Rpy2 in Python
Question: I'm trying to translate the following code into Rpy2 with no success:
neworder <- c("virginica","setosa","versicolor")
library("plyr")
iris2 <- arrange(transform(iris,
Species=factor(Species,levels=neworder)),Species)
This is meant to just change the `factor` order of a particular column, in
this case `Species`.
I don't want to use `plyr` and all that stuff in Rpy2, since I can just
modify the dataframe being plotted as a Python object. The following does not
work:
# start with Python df 'mydf' and convert to R df
# to get mydf_r. The column equivalent of Species here
# is "variable"
# ...
mydf_r.variable = r.factor(ro.StrVector(["a", "b", "c"]))
# call ggplot...
ggplot2.ggplot(mydf) + ...
This does not work. How can I get the equivalent of the R code? I.e. I have a
melted dataframe with several values of `variable` plotted as `c, b, a` and I
want to change the order to be `a, b, c` by changing the `factor` order of
`variable`. Thanks.
**edit** I was able to change the order with this code:
labels = robj.StrVector(tuple(["a", "b", "c"]))
variable_factor = r.factor(labels, levels=labels)
r_melted = r.transform(r_melted, **{"variable": variable_factor})
p = ggplot2.ggplot(r_melted) + \
ggplot2.geom_boxplot(aes_string(**{"x": "variable",
"y": "value"
"fill": "group"})) + \
ggplot2.scale_fill_manual(values=np.array(["#00BA38", "#F8766D"])) + \
ggplot2.coord_flip()
However, this breaks ggplot's ability to correctly make the boxplot and
color-code it by the `group` variable. If I remove the lines:
labels = robj.StrVector(tuple(["a", "b", "c"]))
variable_factor = r.factor(labels, levels=labels)
r_melted = r.transform(r_melted, **{"variable": variable_factor})
Then it all works correctly... all I want is to change the order in which the
`variable` values appear in the boxplot.
@lgautier: the solution you gave looks like what I want, but it does not work
for me here. I made a test case for it with the `iris` dataset:
**original plot**
    import os
    import pandas
iris = pandas.read_table(os.path.expanduser("~/iris.csv"),
sep=",")
iris["Species"] = iris["Name"]
r_melted = conversion_pydataframe(iris)
p = ggplot2.ggplot(r_melted) + \
ggplot2.geom_boxplot(aes_string(**{"x": "PetalLength",
"y": "PetalWidth",
"fill": "Species"})) + \
ggplot2.facet_grid(Formula("Species ~ .")) + \
ggplot2.coord_flip()
p.plot()
produces:

But if I add:
labels = robj.StrVector(tuple(["versicolor", "virginica", "setosa"]))
variable_i = r_melted.names.index("Species")
r_melted[variable_i] = robj.FactorVector(r_melted[variable_i],
levels=labels)
prior to plotting, I get:

I think this is because the names I use don't match exactly the `Species` name
values. It would be helpful if rpy2 raised an error when this happens. But in
any case, what if I want to overwrite the names of the factor? I.e. take the
first factor name and make it `x`, the second `y`, etc. and have it be
displayed in that order? Is the only way to do that to make a new column for
it with the correct name in the dataframe?
Answer: You need to change the levels of the factor used, either on-the-fly (first
example below), or in column for the data frame (second example).
If `labels` is a relatively short list the following will just work:
# r_melted is the one defined upstream of your code snippet,
# not the results of calling r.transform()
labels = robj.StrVector(tuple(["a", "b", "c"]))
p = ggplot2.ggplot(r_melted) + \
ggplot2.geom_boxplot(aes_string(**{"x": "factor(variable, levels = %s)" % labels,
"y": "value"
"fill": "group"})) + \
ggplot2.scale_fill_manual(values=np.array(["#00BA38", "#F8766D"])) + \
ggplot2.coord_flip()
If `labels` is larger (or no R code at all is wished):
# r_melted is the one defined upstream of your code snippet,
# not the results of calling r.transform()
from rpy2.robjects.vectors import FactorVector
variable_i = r_melted.names.index('variable')
    r_melted[variable_i] = FactorVector(r_melted[variable_i],
                                        levels=robj.StrVector(tuple(["a", "b", "c"])))
p = ggplot2.ggplot(r_melted) + \
ggplot2.geom_boxplot(aes_string(**{"x": "variable",
"y": "value"
"fill": "group"})) + \
ggplot2.scale_fill_manual(values=np.array(["#00BA38", "#F8766D"])) + \
ggplot2.coord_flip()
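As for the follow-up about renaming the levels while fixing their order: R's
`factor()` accepts both `levels` and `labels`, and can be called through
rpy2, so a separate column is not needed. A hedged sketch against the iris
example (the `levels` must match the existing values exactly, otherwise you
get the silent mismatch seen above):

    from rpy2.robjects import r, StrVector

    variable_i = r_melted.names.index("Species")
    # rows whose Species is "versicolor" are relabelled "x", and so on,
    # and the boxplot panels then follow the order x, y, z
    r_melted[variable_i] = r['factor'](r_melted[variable_i],
                                       levels=StrVector(["versicolor", "virginica", "setosa"]),
                                       labels=StrVector(["x", "y", "z"]))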
|
How do we get the output of a subprocess of a subprocess
Question: Could someone share a sample Python script that shows the output of a
subprocess (Java kicked off by file.bin) of a subprocess (the one kicking off
file.bin)?
The subprocess (java kicked off by file.bin) of a subprocess (kicking off a file.bin) looks like this below from a `ps -ef | grep java`
`rrr 26267 26266 0 16:05 pts/12 00:00:03
/tmp/install.dir.26267/Linux/resource/jre/bin/java com.rew.erg.REW
/tmp/install.dir.26267/temp.erg /tmp/env.properties.26267 "-i" "console"`
How do we hook up to the subprocess of another subprocess and perform
interaction with it like an expect or pexpect script?
Answer: There are many ways to do that, here is just an example:
    import subprocess

    # cmd is the shell command line to run, e.g. './file.bin -i console'
    try:
        # capture stdout and stderr of the whole command; output written by
        # child processes that inherit the pipe shows up here as well
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError as ex:
        output = ex.output
        ret = ex.returncode
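That only collects output once the command has finished. For interactive
control in the style of an expect script, pexpect fits better: because
file.bin and the Java process it spawns share the same pseudo-terminal, the
output of both is visible to the controlling script. A minimal sketch; the
path and the prompt text are assumptions you would adapt to your installer:

    import pexpect

    # spawn the installer under a pty; the Java child writes to the same pty
    child = pexpect.spawn('/path/to/file.bin -i console', timeout=120)
    child.expect('continue')   # hypothetical prompt text, adjust as needed
    child.sendline('')         # answer the prompt
    child.expect(pexpect.EOF)
    print(child.before)        # everything printed before the process exited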
|
Python: Using Excel CSV file to read only certain columns and rows
Question: I can read a CSV file, but how can I print only certain rows and columns
instead of reading the whole file?
Imagine this as if it were in Excel:
A B C D E
State |Heart Disease Rate| Stroke Death Rate | HIV Diagnosis Rate |Teen Birth Rate
Alabama 235.5 54.5 16.7 18.01
Alaska 147.9 44.3 3.2 N/A
Arizona 152.5 32.7 11.9 N/A
Arkansas 221.8 57.4 10.2 N/A
California 177.9 42.2 N/A N/A
Colorado 145.3 39 8.4 9.25
Here's what I have:
import csv
try:
risk = open('riskfactors.csv', 'r', encoding="windows-1252").read() #find the file
except:
while risk != "riskfactors.csv": # if the file cant be found if there is an error
print("Could not open", risk, "file")
risk = input("\nPlease try to open file again: ")
else:
with open("riskfactors.csv") as f:
reader = csv.reader(f, delimiter=' ', quotechar='|')
data = []
for row in reader:# Number of rows including the death rates
for col in (2,4): # The columns I want read B and D
data.append(row)
data.append(col)
for item in data:
print(item) #print the rows and columns
I need to read only columns B and D (along with the state names in column A),
with all statistics, like this:
A B D
State |Heart Disease Rate| HIV Diagnosis Rate |
Alabama 235.5 16.7
Alaska 147.9 3.2
Arizona 152.5 11.9
Arkansas 221.8 10.2
California 177.9 N/A
Colorado 145.3 8.4
### Edited
There are no errors.
Any ideas on how to tackle this? Nothing I try is working. Any help or
advice is much appreciated.
Answer: I hope you have heard about pandas for data analysis.
The following code will do the job of reading only certain columns; as for
reading only certain rows, you would have to explain more about what you need.
    import pandas

    # read only columns A, B and D (0-indexed positions 0, 1 and 3)
    io = pandas.read_csv('riskfactors.csv', sep=",", usecols=(0, 1, 3))
    print(io)
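If you would rather stick with the standard library `csv` module you already
imported, a sketch along the same lines (assuming the file is comma-separated
and Python 3, matching your `open()` call):

    import csv

    with open('riskfactors.csv', newline='', encoding='windows-1252') as f:
        for row in csv.reader(f):
            # keep State (0), Heart Disease Rate (1) and HIV Diagnosis Rate (3)
            print(row[0], row[1], row[3])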
|
How can I assign (assert) values to functions in Z3py?
Question: I would like to kindly ask how I can convert the following Z3 constraints
into Z3py (the Python API).
(declare-datatypes () ((S a b c d e f g)))
(declare-fun fun1 ( S ) Bool)
(declare-fun fun2 ( S S ) Bool)
(assert (forall ((x S)) (= (fun1 x)
(or
(= x a)
(= x b)
(= x c)
(= x d)
(= x e)
(= x f)
(= x g)
))))
(assert (forall ((y1 S) (y2 S)) (= (fun2 y1 y2)
(or
(and (= y1 a) (= y2 b))
(and (= y1 c) (= y2 d))
(and (= y2 e) (= y2 f))
))))
Answer: You can encode it in the following way:
from z3 import *
S, (a, b, c, d, e, f, g) = EnumSort('S', ['a', 'b', 'c', 'd', 'e', 'f', 'g'])
fun1 = Function('fun1', S, BoolSort())
fun2 = Function('fun2', S, S, BoolSort())
s = Solver()
x = Const('x', S)
    s.add(ForAll([x], fun1(x) == Or(x == a, x == b, x == c, x == d, x == e, x == f, x == g)))
y1, y2 = Consts('y1 y2', S)
s.add(ForAll([y1, y2], fun2(y1, y2) == Or(And(y1 == a, y2 == b), And(y1 == c, y2 == d), And(y1 == e, y2 == f))))
print(s.check())
print(s.model())
Note that `fun1` and `fun2` are essentially macros. So, we can avoid the
quantifiers and define them as:
def fun1(x):
        return Or(x == a, x == b, x == c, x == d, x == e, x == f, x == g)
def fun2(y1, y2):
return Or(And(y1 == a, y2 == b), And(y1 == c, y2 == d), And(y1 == e, y2 == f))
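As a quick sanity check of the macro versions (my own addition, not part of
the original answer):

    # fun2(a, b) matches one of the allowed pairs; fun2(a, c) matches none
    s = Solver()
    s.add(fun2(a, b), Not(fun2(a, c)))
    print(s.check())  # sat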
|
Confusing Error when Reading from a File in Python
Question: I'm having a problem opening the `names.txt` file. I have checked that I am in
the correct directory. Below is my code:
import os
print(os.getcwd())
def alpha_sort():
infile = open('names', 'r')
string = infile.read()
string = string.replace('"','')
name_list = string.split(',')
name_list.sort()
infile.close()
return 0
alpha_sort()
And the error I got:
> FileNotFoundError: [Errno 2] No such file or directory: 'names'
Any ideas on what I'm doing wrong?
Answer: You mention in your question body that the file is "names.txt", however your
code shows you trying to open a file called "names" (without the ".txt"
extension). (Extensions are part of filenames.)
Try this instead:
infile = open('names.txt', 'r')
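For what it's worth, a small sketch of the same function with the corrected
filename, using a `with` block so the file is closed even if an error occurs,
and returning the sorted list instead of 0 so the caller can use the result:

    def alpha_sort():
        with open('names.txt', 'r') as infile:
            name_list = infile.read().replace('"', '').split(',')
        name_list.sort()
        return name_list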
|
How do I trace a particular terminal command?
Question: In OpenStack, let's say, for example, I enter the following command to start
up an instance using the image myimage and flavor 1:
nova boot --image myimage --flavor 1 server1
How can I actually trace this command and get details such as which functions
are executed (since the code is in Python), which files it passes through
while executing, and so on?
Sorry, I'm a Linux noob; I'm trying to find a way to trace a few calls
without going through all the files by hand.
Is there a simple way to do this?
Answer: If you want to trace a regular Python program, take a look at the [Python
trace module](http://docs.python.org/2/library/trace.html).
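For a regular script, a minimal sketch (assuming a `main()` entry point of
your own) that prints each source line as it executes:

    import trace

    # count=False disables coverage counting; trace=True echoes lines as they run
    tracer = trace.Trace(count=False, trace=True)
    tracer.run('main()')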
However, I don't think you will find tracing useful to understand what
OpenStack is doing in the example you provided:
nova boot --image myimage --flavor 1 server1
OpenStack is not implemented as a single Python program. It is implemented as
a collection of Python programs that run as Linux services in separate
processes, and typically these processes are distributed across multiple
machines.
The `nova` program is just a small client program that makes requests against
an OpenStack endpoint over HTTP. When you do the request above, the following
services are involved. Note that most of the OpenStack "Services" are actually
implemented by multiple Linux "services" (aka daemons). These are the
OpenStack Services and Linux services/daemons involved when you do a `nova
boot`.
* Identity Service (keystone)
* _keystone_
* Compute Service (nova)
* _nova-api_
* _nova-scheduler_
* _nova-compute_
* _nova-network_ (if not using the new Network Service (quantum))
* Image Service (glance)
* _glance-api_
* _glance-registry_
Note that if the new Network service (quantum) were involved, there would be
even more services involved here.
OpenStack does inter-process communication using two mechanisms:
* HTTP (using REST API) for communication across OpenStack project boundaries (e.g., communication between the Compute service and the Image service)
* AMQP-based message queue (typically RabbitMQ, but could be Qpid or ZeroMQ) for communication across services within a single OpenStack project (e.g, communication between _nova-api_ and _nova-compute_)
_The services also share information via a database, but that isn't important
if you're interested in tracing the thread of control_.
For the example you gave with `nova boot`, note all of the interactions that
occur across services:
1. _nova_ client makes a request over HTTP against the Identity service (keystone), passing username and password and getting a token
2. _nova_ client makes a request over HTTP against the Compute service (_nova-api_) to [create a new server](http://docs.openstack.org/api/openstack-compute/2/content/CreateServers.html).
3. _nova-api_ makes a request over the message queue to _nova-scheduler_ to run an instance.
4. _nova-scheduler_ selects a compute host and makes a request over the message queue to _nova-compute_ on that host to boot a new virtual machine instance.
5. _nova-compute_ makes a request over the message queue to _nova-network_ to do network configuration for the new instance.
6. _nova-compute_ makes a request over HTTP against the Image Service (_glance-api_) for the virtual machine image file.
7. _glance-api_ makes a request over HTTP against _glance-registry_ to retrieve the file from the image backend.
If you wanted to generate a trace that encompasses all of the OpenStack code
involved, you would have to trace each service involved.
I'd recommend just reading the code rather than trying to do automated traces.
You can also look at the log files, since they contain a lot of debug
information. Take a look at the recently released [OpenStack Operations
Guide](http://docs.openstack.org/ops/) for some guidance on how to read the
log files.
|