Make Python wait until a file exists before continuing
Question: In my code, I write a file to my hard disk. After that, I need to import the
generated file and then continue processing it.
for i in xrange(10):
filename=generateFile()
# takes some time, I wish to freeze the program here
# and continue once the file is ready in the system
file=importFile(filename)
processFile(file)
If I run the code snippet in one go, most likely `file=importFile(filename)`
will complain that the file does not exist, since the generation takes some
time.
I used to manually run `filename=generateFile()` and wait before running
`file=importFile(filename)`.
Now that I'm using a `for` loop, I'm searching for an automatic way.
Answer: You could use `time.sleep` and I would expect that if you are loading a module
this way you would need to `reload` rather than `import` after the first
`import`.
However, unless the file is very large why not just generate the string and
then `eval` or `exec` it?
**Note** that since your file generation function is not being invoked in a
thread, it should block and only return when it thinks it has finished
writing. You can possibly improve things by ensuring that the file writer ends
with `outfile.flush()` then `outfile.close()`, but on some OSs there may still
be a window when the file is not actually available.
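If you do need to wait for a file produced by another process, a minimal polling helper along these lines works (the function name and timeout values here are illustrative, not from the question):

```python
import os
import time

def wait_for_file(path, timeout=30.0, poll=0.1):
    """Block until `path` exists, or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() > deadline:
            raise RuntimeError("timed out waiting for %s" % path)
        time.sleep(poll)
    return path
```

Note that `os.path.exists` only tells you the file has been created, not that the writer has finished with it; if the writer is another process, checking that the size has stopped changing, or having the writer create the file under a temporary name and rename it when done, is more robust.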
|
what exactly is passed as "address" in sendto for ipv6
Question: I am trying to send an **ICMPv6 ping packet**, running as root (Python 2.7
on Linux).
I understand that **sendto** takes a 2-tuple address struct in the IPv4 case
(and that works), and I know that IPv6 uses a 4-tuple struct. Still I can't
get it to work. It either results in an _"invalid argument"_ or
_"socket.gaierror: [Errno -2] Name or service not known"_.
Following is a bare-minimum example showing what I am attempting. I would even
be fine with getting it to work against localhost for IPv6, i.e. `::1`.
import socket
def main(dest_name):
#dest_addr = socket.gethostbyname(dest_name)
addrs = socket.getaddrinfo(dest_name, 0, socket.AF_INET6, 0, socket.SOL_IP)
print addrs
dest = addrs[2]
port = 33434 # just some random number because of icmp
icmp = socket.getprotobyname('ipv6-icmp')
#print icmp
send_socket = socket.socket(socket.AF_INET6, socket.SOCK_RAW, icmp)
print "sent to " + str(dest[4])
send_socket.sendto('', (str(dest[4]), port))
send_socket.close()
if __name__ == '__main__':
main('ipv6.google.com')
I actually tried each tuple from the addr list, but the result is the same.
**Update:**
I also tried alternatives for sendto's parameters, but it results in "invalid
argument" whether I use localhost or the Google IPv6 address:
send_socket.sendto('', dest[4])
**Update 2:**
For reference, the working IPv4 code follows (as asked in the comments):
def main(dest_name):
dest_addr = socket.gethostbyname(dest_name)
icmp = socket.getprotobyname('icmp')
send_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
print "sent to " + dest_name#str(dest[4])
send_socket.sendto('', (dest_addr, 0))
send_socket.close()
if __name__ == '__main__':
main('www.google.com')
**Update 3:**
When I run the v6 version with `dest[4]` as the address (just the tuple, NO
extra port), the following is the output on my machine (Mint 15), which
includes the printed address list:
sudo python test_v6.py
[(10, 1, 6, '', ('::1', 0, 0, 0)), (10, 2, 17, '', ('::1', 0, 0, 0)), (10, 3, 0, '', ('::1', 0, 0, 0))]
sent to ('::1', 0, 0, 0)
Traceback (most recent call last):
File "test_v6.py", line 18, in <module>
main('::1')
File "test_v6.py", line 14, in main
send_socket.sendto('', dest[4])
socket.error: [Errno 22] Invalid argument
I am not sure why it still produces "Invalid argument".
Answer: Your original problem was that bizarre things like a 2-tuple whose first
member is a Python string representation of the 4-tuple address are not even
close to valid ways to specify an address.
You can fix that by just using `dest[4]` itself—that is, the tuple you got
back as the sockaddr part of `getaddrinfo`—as the address. (As Sander
Steffann's answer explains, you're not exactly doing this cleanly. But in your
case, at least for `'::1'` or `'localhost'` with the other values you've
specified, you're going to get back the right values to use.) You should also
probably use `addrs[0]` rather than `addrs[2]`.
Anyway, in your **Update 3**, you appear to have done exactly that, and
you're getting `socket.error: [Errno 22] Invalid argument`. But there are
_two_ arguments to `sendto`, and it's the _other_ one that's invalid: `''` is
not a valid ICMP6 packet because it has no ICMP6 header.
You can test this pretty easily by first `connect`ing to `dest[4]`, which will
succeed, and then doing a plain `send`, which will fail with the same error.
For some reason, on Fedora 10 (ancient linux), the call seems to succeed
anyway. I don't know what goes out over the wire (if anything). But on Ubuntu
13.10 (current linux), it fails with `EINVAL`, exactly as it should. On OS X
10.7.5 and 10.9.0, it fails with `ENOBUFS`, which is bizarre. In all three
cases, if I split the `sendto` into a `connect` and a `send`, it's the `send`
that fails.
`'\x80\0\0\0\0\0\0\0'` is a valid ICMP6 packet (an Echo Request header with no
data). If I use that instead of your empty string, it now works on all four
machines.
(Of course I still get `ENETUNREACH` or `EHOSTUNREACH` when I try to hit
something on the Internet, because I don't have an IPv6-routable connection.)
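For reference, a sketch of building that Echo Request header with `struct` instead of a hand-written byte string (the function name is illustrative; sending it still requires a root-owned raw socket, and on an `IPPROTO_ICMPV6` raw socket the kernel fills in the checksum for you):

```python
import struct

def icmp6_echo_request(ident=0, seq=0, payload=b''):
    # ICMPv6 header: type=128 (Echo Request), code=0, checksum=0
    # (the kernel computes the checksum for ICMPv6 raw sockets),
    # then 16-bit identifier and sequence number.
    return struct.pack('!BBHHH', 128, 0, 0, ident, seq) + payload
```

Passing `icmp6_echo_request(1, 1)` instead of `''` to `sendto` gives the kernel a packet it will accept.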
|
wxpython : changing background / foreground image dynamically
Question: I am writing a GUI flow using wxPython which has 4 pages (or more). The way I
have approached it is creating 4 (or more) classes, with each class defining
its own static (background) and dynamic images/content. In my application I
programmatically create an instance of the required class and capture events
on that page. Based on the event triggered, the registered handler destroys
the current class and switches to the other class (page). So my code actually
creates X classes, with each class having its own methods to set
background/foreground content/images:
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
dc.Clear()
bmp = wx.Bitmap(self.image)
dc.DrawBitmap(bmp, 0, 0)
def buttonClick(self, evt):
parent = self.frame
self.Destroy()
DispatchState(parent, "admin1.png", 1)
The issue is that the second page does not come up on screen at all.
Below is my complete code. Note I have created 2 classes (MainPanel,
SecondPanel) that each create a screen on a panel in my application frame. It
then waits for an event. Once I get the desired event, I destroy the current
class and create an instance of the new class:
import wx
########################################################################
class SecondPanel(wx.Panel):
def __init__(self,parent, image, state):
wx.Panel.__init__(self, parent=parent)
self.state = state
self.image = image
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
self.frame = parent
sizer = wx.BoxSizer(wx.VERTICAL)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
panel=wx.Panel(self, -1)
self.buttonOne=wx.Image("image1.bmp", wx.BITMAP_TYPE_BMP).ConvertToBitmap()
self.button=wx.BitmapButton(self, -1, self.buttonOne, pos=(100,50))
self.button.Bind(wx.EVT_LEFT_DCLICK, self.buttonClick)
sizer.Add(self.button, 0, wx.ALL, 5)
hSizer.Add((1,1), 1, wx.EXPAND)
hSizer.Add(sizer, 0, wx.TOP, 100)
hSizer.Add((1,1), 0, wx.ALL, 75)
self.SetSizer(hSizer)
self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
def buttonClick(self, evt):
parent = self.frame
self.Destroy()
DispatchState(parent, "admin0.png", 0)
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
dc.Clear()
bmp = wx.Bitmap(self.image)
dc.DrawBitmap(bmp, 0, 0)
class MainPanel(wx.Panel):
def __init__(self,parent, image, state):
wx.Panel.__init__(self, parent=parent)
self.state = state
self.image = image
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
self.frame = parent
sizer = wx.BoxSizer(wx.VERTICAL)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
panel=wx.Panel(self, -1)
self.buttonOne=wx.Image("image0.bmp", wx.BITMAP_TYPE_BMP).ConvertToBitmap()
self.button=wx.BitmapButton(self, -1, self.buttonOne, pos=(100,50))
self.button.Bind(wx.EVT_LEFT_DCLICK, self.buttonClick)
sizer.Add(self.button, 0, wx.ALL, 5)
hSizer.Add((1,1), 1, wx.EXPAND)
hSizer.Add(sizer, 0, wx.TOP, 100)
hSizer.Add((1,1), 0, wx.ALL, 75)
self.SetSizer(hSizer)
self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
def buttonClick(self, evt):
parent = self.frame
self.Destroy()
DispatchState(parent, "admin1.png", 1)
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
dc.Clear()
bmp = wx.Bitmap(self.image)
dc.DrawBitmap(bmp, 0, 0)
class Main(wx.App):
def __init__(self, redirect=False, filename=None):
wx.App.__init__(self, redirect, filename)
self.frame = wx.Frame(None, size=(800, 480))
self.state = 0
self.image = 'admin0.png'
def DispatchState(frame, image, state):
if state == 0 :
print image
print state
MainPanel(frame, image, state)
if state == 1 :
print image
print state
SecondPanel(frame, image, state)
frame.Show()
if __name__ == "__main__":
app = Main()
DispatchState(app.frame,app.image, app.state)
app.MainLoop()
The reason I have selected this approach is that I can easily switch from one
state to another, so that I can switch to any screen/page. If tomorrow we need
to dynamically add/remove more pages, it can easily be done: I would need to
create the page (class) and add its state in the `DispatchState()` global
method.
But currently the second screen does not get rendered at all. Also, please
comment on my approach - is there any better way I can achieve this, what are
the things I should take care of, and what is erroneous in my code?
Answer: Here is one solution.
I had to create a `MyFrame` class to add a **sizer** which resizes the _Panel_
to the _Frame_ size.
I moved `DispatchState` into `MyFrame` as `ChangePanel` to make it more
object-oriented. Now the _Panel_ calls the _Frame_ method `ChangePanel`, and
the _Frame_ creates/destroys the panels.
Because `SecondPanel` and `MainPanel` are very similar, I made one `MyPanel`
class - to have less work removing my errors :) - see the **DRY** rule:
[Don't Repeat Yourself](http://en.wikipedia.org/wiki/Don%27t_Repeat_Yourself)
(_I attach my bitmaps so other users can run this code too._)
(_I use ball1.png, ball2.png in place of image0.bmp, image1.bmp._)
import wx
#######################################################################
class MyPanel(wx.Panel):
def __init__(self, parent, state, button_image, background_image):
wx.Panel.__init__(self, parent=parent)
print "(debug) MyPanel.__init__: state:", state
self.parent = parent
self.state = state
self.button_image = button_image
self.background_image = background_image
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
vsizer = wx.BoxSizer(wx.VERTICAL)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
#self.buttonOne=wx.Image("image1.bmp", wx.BITMAP_TYPE_BMP).ConvertToBitmap()
self.buttonImage = wx.Image(button_image, wx.BITMAP_TYPE_PNG).ConvertToBitmap()
self.button = wx.BitmapButton(self, -1, self.buttonImage, pos=(100,50))
self.button.Bind(wx.EVT_LEFT_DCLICK, self.buttonClick)
self.backgroundImage = wx.Bitmap(self.background_image)
vsizer.Add(self.button, 0, wx.ALL, 5)
hSizer.Add((1,1), 1, wx.EXPAND)
hSizer.Add(vsizer, 0, wx.TOP, 100)
hSizer.Add((1,1), 0, wx.ALL, 75)
self.SetSizer(hSizer)
self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
def buttonClick(self, evt):
print "(debug) MyPanel.buttonClick"
self.parent.ChangePanel()
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
dc.Clear()
dc.DrawBitmap(self.backgroundImage, 0, 0)
#######################################################################
class MyFrame(wx.Frame):
def __init__(self, size=(800,480)):
wx.Frame.__init__(self, None, size=size)
self.state = None
self.panel = None
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(self.sizer)
self.Show() # Show is used to show/hide window not to update content
self.ChangePanel()
#--------------------------
def ChangePanel(self):
print "(debug) MyFrame.ChangePanel: state:", self.state
if self.state is None or self.state == 1:
# change state
self.state = 0
# destroy old panel
if self.panel:
self.panel.Destroy()
# create new panel
self.panel = MyPanel(self, self.state, "ball1.png", "admin0.png")
# add to sizer
self.sizer.Add(self.panel, 1, wx.EXPAND)
elif self.state == 0 :
# change state
self.state = 1
# destroy old panel
if self.panel:
self.panel.Destroy()
# create new panel
self.panel = MyPanel(self, self.state, "ball2.png", "admin1.png")
# add to sizer
self.sizer.Add(self.panel, 1, wx.EXPAND)
else:
print "unkown state:", self.state
self.Layout() # refresh window content
#######################################################################
class Application(wx.App):
def __init__(self, redirect=False, filename=None):
wx.App.__init__(self, redirect, filename)
self.frame = MyFrame((800, 480))
def run(self):
self.MainLoop()
#######################################################################
if __name__ == "__main__":
Application().run()
(image attachments: ball1.png, ball2.png, admin0.png, admin1.png)
|
Multiple Django Databases - Map Model to Database in Same Application
Question: I have searched all over for a solution for this but have been unable to find
anything.
I have one Django project, one application, two models and two database. I
would like one model to speak and sync exclusively to one database. This is
what I've tried:
**Settings**
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': 'database_a', # Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': 'user',
'PASSWORD': 'xxxxxx',
'HOST': 'localhost', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '', # Set to empty string for default.
},
'applicationb_db': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'database_b',
'USER': 'user',
'PASSWORD': 'xxxxxx',
'HOST': 'localhost',
'PORT': '',
},
}
DATABASE_ROUTERS = ['fanmode4.router.ApiRouter']
**Models**
from django.db import models
class TestModelA(models.Model):
testid = models.CharField(max_length=200)
class Meta:
db_table = 'test_model_a'
class TestModelB(models.Model):
testid = models.CharField(max_length=200)
class Meta:
db_table = 'test_model_b'
app_label = 'application_b'
**Router**
class ApiRouter(object):
def db_for_read(self, model, **hints):
if model._meta.app_label == 'application_b':
return 'applicationb_db'
return None
def db_for_write(self, model, **hints):
if model._meta.app_label == 'application_b':
return 'applicationb_db'
return None
def allow_relation(self, obj1, obj2, **hints):
if obj1._meta.app_label == 'application_b' or \
obj2._meta.app_label == 'application_b':
return True
return None
def allow_syncdb(self, db, model):
if db == 'applicationb_db':
return model._meta.app_label == 'application_b'
elif model._meta.app_label == 'application_b':
return False
return None
The application name is "api". Basically, with this setup, if I sync the
database it will only sync to the default db. If I sync specifying the second
database (`python manage.py syncdb --database=applicationb_db`), it will not
sync anything to the second database.
I am simply trying to achieve the following:
* Everything for TestModelA goes to default database
* Everything for TestModelB goes to applicationb_db database
* Everything else goes to the default database
Answer: Instead of using `model._meta.app_label`, you can compare `model` itself to
check which model it is and return the appropriate DB.
You can update the router as:
from api.models import TestModelB  # the app is named "api" per the question

class ApiRouter(object):
    def db_for_read(self, model, **hints):
        if model == TestModelB:
            return 'applicationb_db'
        return None
    def db_for_write(self, model, **hints):
        if model == TestModelB:
            return 'applicationb_db'
        return None
    def allow_relation(self, obj1, obj2, **hints):
        # compare the objects here - `model` is not defined in this method
        if isinstance(obj1, TestModelB) or isinstance(obj2, TestModelB):
            return True
        return None
    def allow_syncdb(self, db, model):
        # TestModelB syncs only to applicationb_db; everything else only
        # to the other databases
        if db == 'applicationb_db':
            return model == TestModelB
        return model != TestModelB
|
Error in python package
Question: I started a script which retrieves a value from a JSON object using python but
am getting these errors
File "c:\Python33\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "c:\Python33\lib\json\decoder.py", line 352, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\Python33\lib\json\decoder.py", line 368, in raw_decode
obj, end = self.scan_once(s, idx)
my code is as follows:
#!/usr/bin/env python
import json
data=json.loads('{WARRANTY:"",ROOT_CATEGORYNAME:"Automobiles",}')
print data['ROOT_CATEGORYNAME']
Answer: What you have here may be a valid JavaScript literal, but that does NOT make
it valid JSON. In valid JSON all of the keys need to be quoted, and there
cannot be a trailing comma after the last element in an object or array.
In this case, the same information would look like this as JSON:
data=json.loads('{"WARRANTY":"","ROOT_CATEGORYNAME":"Automobiles"}')
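A quick check that the corrected, fully-quoted form parses and that the question's lookup then works:

```python
import json

# Keys quoted, trailing comma removed - now valid JSON.
raw = '{"WARRANTY": "", "ROOT_CATEGORYNAME": "Automobiles"}'
data = json.loads(raw)
print(data['ROOT_CATEGORYNAME'])  # Automobiles
```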
|
how to toggle event binding on tabs of notebook widget
Question: I have a simple application based on tkinter and ttk. I have a notebook
widget that creates a limited number of tabs, and the tabs are all the same
thing, but I need to do different actions on each one. When I press a button,
a tab with its own name is created and the event binding focuses on it. If I
select a tab that was created by a previous button press, the event binding
will not focus on it or on its child widgets, and this is the problem I need
to solve. Can I toggle event binding between tabs? Any suggestions?
I'm using Python 2.7.
Answer: See my answer to question [how to make instances of event for every single tab
on multi tab GUI tkinter( notebook
widget)](http://stackoverflow.com/a/19896049/1832058) to see working example.
I use class `MyTab` to create new tab with own events binding - so I can
create many identical tabs and every tab use own events binding. In example
tabs show different message when you change tab.
You didn't attach code in your question so I can't add a more detailed answer.
**EDIT:**
example from previous link + binding to frame:
* directly in `MyTab`: `self.bind("<Button-1>", self.clickFrame)` (left mouse button calls a function in `MyTab`)
* in `Application`: `tab.bind("<Button-3>", self.clickTab)` (right mouse button calls a function in `Application`)
code:
#!/usr/bin/env python
from Tkinter import *
import tkMessageBox
import ttk
#---------------------------------------------------------------------
class MyTab(Frame):
def __init__(self, root, name):
Frame.__init__(self, root)
self.root = root
self.name = name
self.entry = Entry(self)
self.entry.pack(side=TOP)
self.entry.bind('<FocusOut>', self.alert)
self.entry.bind('<Key>', self.printing)
self.bind("<Button-1>", self.clickFrame)
#-------------------------------
def alert(self, event):
print 'FocusOut event is working for ' + self.name + ' value: ' + self.entry.get()
#tkMessageBox.showinfo('alert', 'FocusOut event is working for ' + self.name + ' value: ' + self.entry.get())
#-------------------------------
def printing(self, event):
print event.keysym + ' for ' + self.name
#-------------------------------
def clickFrame(self, event):
print "MyTab: click at (" + str(event.x) + ", " + str(event.y) + ') for ' + self.name + " (parent name: " + self.root.tab(CURRENT)['text'] + ")"
#---------------------------------------------------------------------
class Application():
def __init__(self):
self.tabs = {'ky':1}
self.root = Tk()
self.root.minsize(300, 300)
self.root.geometry("1000x700")
self.notebook = ttk.Notebook(self.root, width=1000, height=650)
# self.all_tabs = []
self.addTab('tab1')
self.button = Button(self.root, text='generate', command=self.start_generating).pack(side=BOTTOM)
self.notebook.pack(side=TOP)
#-------------------------------
def addTab(self, name):
tab = MyTab(self.notebook, name)
tab.bind("<Button-3>", self.clickTab)
self.notebook.add(tab, text="X-"+name)
# self.all_tabs.append(tab)
#-------------------------------
def clickTab(self, event):
print "Application: click at (" + str(event.x) + ", " + str(event.y) + ') for ' + event.widget.name
#-------------------------------
def start_generating(self):
if self.tabs['ky'] < 4:
self.tabs['ky'] += 1
self.addTab('tab'+ str(self.tabs['ky']))
#-------------------------------
def run(self):
self.root.mainloop()
#----------------------------------------------------------------------
Application().run()
|
python itertools.permutations combinations
Question: I have this variable: `message = "Hello World"` and I built a function that
shuffles it:
def encrypt3(message,key):
random.seed(key)
l = range(len(message))
random.shuffle(l)
return "".join([message[x] for x in l])
This function just shuffles the message, so it could look like this, for
example: "Hrl llWodeo".
Now if I want to convert it back to the original message using
itertools.permutations, how can I do it? When I tried this: `print [x for x in
itertools.permutations(shuffledMsg)]`, the program closed with an error
because there are too many possibilities.
Answer: This is of course "unshuffle-able" as long as you know the original seed,
since we can simply re-run the shuffle to find out where each character was
moved to.
import random

def encrypt3(message, key):
    random.seed(key)
    l = list(range(len(message)))  # list() so shuffle also works on Python 3
    random.shuffle(l)
    return "".join([message[x] for x in l])
key = 'bob'
message = 'Hello World!'
print(encrypt3(message, key))
def unshuffle(message, key):
random.seed(key)
new_list = list(range(len(message)))
old_list = [None] * len(new_list)
random.shuffle(new_list)
for i, old_i in enumerate(new_list):
old_list[old_i] = message[i]
return ''.join(old_list)
print(unshuffle(encrypt3(message, key), key))
|
Pygame attribute, init()
Question: I'm trying to use Pygame with Python 3.3 on my Windows 8 laptop. Pygame
installed fine, and when I `import pygame` it imports fine as well. But when I
try to execute this small piece of code:
import pygame
pygame.init()
size=[700,500]
screen=pygame.display.set_mode(size)
I get this error:
Traceback (most recent call last):
File "C:\Users\name\documents\python\pygame_example.py", line 3, in <module>
pygame.init()
AttributeError: 'module' object has no attribute 'init'
I used `pygame-1.9.2a0-hg_56e0eadfc267.win32-py3.3` to install Pygame. Pygame
is installed in 'C:\PythonX' and Python 3.3 is installed in 'C:\Python33'. I
have looked at other people having the same or a similar problem, and their
solutions don't seem to fix the error. Have I done anything wrong when
installing Pygame? Or does it not support Windows 8?
Answer: You have a _directory_ named `pygame` in your path somewhere.
$ mkdir pygame # empty directory
$ python3.3
>>> import pygame
>>> pygame
<module 'pygame' (namespace)>
>>> pygame.__path__
_NamespacePath(['./pygame'])
Remove or rename this directory, it is masking the actual pygame package.
If you use `print(pygame.__path__)` it'll tell you where the directory was
found; in the above example it was found relative to the current directory
(`./`).
|
python beautiful soup how to json decode to dict
Question: I'm new to BeautifulSoup in Python and I'm trying to extract a dict from it.
I used BeautifulSoup to fetch JSON and got my `BeautifulSoup.BeautifulSoup`
variable `soup`.
I'm trying to get values out of `soup`, but when I do `result =
soup.findAll("bill")` I get an empty list `[]`. How can I extract from soup to
get the dict result of
{u'congress': 113,
u'number': 325,
u'title': u'A bill to ensure the complete and timely payment of the obligations of the United States Government until May 19, 2013, and for other purposes.',
u'type': u'hr'}
print type(soup)
print soup
=> result below
BeautifulSoup.BeautifulSoup
{
"bill": {
"congress": 113,
"number": 325,
"title": "A bill to ensure the complete and timely payment of the obligations of the United States Government until May 19, 2013, and for other purposes.",
"type": "hr"
},
"category": "passage",
"chamber": "s"
}
* * *
**UPDATE**
Here is how I got `soup`:
from BeautifulSoup import BeautifulSoup
import urllib2
url = urllib2.urlopen("https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json")
content = url.read()
soup = BeautifulSoup(content)
Answer: You could drop `BeautifulSoup` entirely - the URL returns JSON, not HTML:
import json
import urllib2
url = "https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json"
data = json.load(urllib2.urlopen(url))
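Once parsed, `data` is a plain dict, so the values shown in the question are just nested lookups. A self-contained check against the question's own snippet (with the long `title` string abbreviated here for brevity):

```python
import json

content = '''{
    "bill": {"congress": 113, "number": 325,
             "title": "A bill to ensure ...", "type": "hr"},
    "category": "passage",
    "chamber": "s"
}'''
data = json.loads(content)
print(data['bill']['congress'])  # 113
print(data['bill']['type'])      # hr
```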
|
Finding time complexity of a specific python algorithm
Question: I need to understand the time complexity of the following algorithm.
def complex(n):
l=[]
i=1
while i<n:
l=list(range(i))
i*=2
I've got to the point where I realized it runs `int(log(n, 2))` times over the
loop, but I'm having a hard time incorporating the `range(i)` into the final
expression. Any help appreciated, thank you.
Answer: You've already worked out that it runs `int(log(n, 2))` iterations. (You can
test that very easily by just adding a counter into the loop, and calling it
with, e.g., 1, 2, 4, 8, 16, 32, 64, etc., and seeing that the counter goes up
1 every time `n` doubles.)
Now you want to know how long the inside of the loop takes. Here, you'd need
to know the time complexity of the `range` and `list` functions. I can give
you the answers to those, and in fact you might be able to guess them, but you
can't really _prove_ that unless you start reading the source code to CPython.
So, let's test it with some simple timing:
import timeit

for i in range(20):
    n = 1 << i
    # fixed repeat count so the larger n values finish quickly
    t = timeit.timeit(lambda: list(range(n)), number=1000)
    print('{} takes {}'.format(n, t))
If you run this, you'll see that, once you get beyond around 32, doubling `n`
seems to double the time it takes. So, that means `list(range(n))` is O(n),
right?
Let's verify whether that makes sense. I don't know whether you're using
Python 2.x or 3.x, so I'll work it out both ways.
In 2.x: `range(n)` has to calculate `n` integers, and build a list `n` values
long. That seems like it ought to be O(n).
In 3.x: `range(n)` just returns an object that remembers the number `n`. That
ought to be O(1). But then we call `list` on that `range`, which has to
iterate the whole range, calculating all `n` integers, and building a list `n`
values long. So it's still O(n).
Put that back into your loop, and you have O(log n) times through the loop,
each one O(i) complexity. So, the total time is O(1) + O(2) + O(4) + O(…) +
O(n/4) + O(n/2) + O(n), with log(n) steps in the summation. In other words,
it's the sum of a [geometric
sequence](http://www.mathsisfun.com/algebra/sequences-sums-geometric.html).
And now you can solve the problem. (Or, if not, you're stuck on a new part,
which someone can answer for your very simply if you can't figure it out
yourself.)
* * *
You worked out that the sum is `-(1-2**log(n,2))`. That's not quite right,
because you wanted a closed range, not a half-open range, so it should be
`-(1-2**log(n+1,2))`. But that's probably my fault for not explaining it
clearly, and it doesn't matter too much, so let's go with your version first.
`2**log(n, 2)` is obviously `n`. (If you don't understand exponentiation and
logarithms well enough to understand why, you should find a tutorial on the
math, but meanwhile you can test it with a variety of different values of `n`
to convince yourself.)
Meanwhile, `-(1-x)` for any `x` is just `x-1`.
So, the sum is just `n-1`.
If you go back and use the correct `log(n+1, 2)` instead of `log(n, 2)`,
you'll get `2n-1`.
So, is that correct? Let's test with some actual numbers.
If `n = 16`, you get `1+2+4+8+16 = 31 = 2n-1`. If `n = 1024`, you get
`1+2+4+…+256+512+1024 = 2047 = 2n-1`. Any power-of-2 you throw at it, you get
exactly the right answer. For a non-power-of-2, like 1000, you get
`1+2+4+…+256+512+1000 = 2023`, which is not exactly `2n-1`, but it's always
within a factor of 2. (In fact, it's `n + 2**ceil(log(n, 2)) - 1`, or `n + m
- 1` where `m` is `n` rounded up to a power of 2.)
Anyway, `n-1`, `2n-1`, `n + 2**ceil(log(n, 2)) - 1`… those are all `O(n)`.
And you can go back and test this by timing the whole function with different
values of `n` and see that, beyond very small numbers, when you double `n` it
takes about twice as long.
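You can also check the sum empirically by counting the elements the loop actually builds (this mirrors the half-open loop in `complex(n)`, adding `i` per iteration instead of building the lists):

```python
def work_units(n):
    # total number of list elements created across all iterations of complex(n)
    total, i = 0, 1
    while i < n:
        total += i  # list(range(i)) builds i elements
        i *= 2
    return total

for n in (16, 1024, 1000):
    print(n, work_units(n))
```

For powers of two this prints exactly `n - 1` (the loop never runs with `i == n`), and for other `n` it stays within a factor of 2 of `n`: linear either way.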
|
Deleted __pycache__ and __init__.py
Question: * Django 1.6
* Ubuntu 12.04
* Python 3.2.3
Accidentally deleted a Django app's `__pycache__` folder & its `__init__.py`
file, and it crashed Django. When I run `python3 manage.py runserver`, it
instantly claims there's no module by the name _agepct_, even though the app's
directory exists and all files are in it (except the ones I deleted). I
emptied the trash so I can't get the files back. Is there any way to get the
app working again short of recreating the whole app from scratch?
Here's the traceback it spits out when I try to `runserver`:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.2/dist-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.2/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.2/dist-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python3.2/dist-packages/django/core/management/base.py", line 280, in execute
translation.activate('en-us')
File "/usr/local/lib/python3.2/dist-packages/django/utils/translation/__init__.py", line 130, in activate
return _trans.activate(language)
File "/usr/local/lib/python3.2/dist-packages/django/utils/translation/trans_real.py", line 188, in activate
_active.value = translation(language)
File "/usr/local/lib/python3.2/dist-packages/django/utils/translation/trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/usr/local/lib/python3.2/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch
app = import_module(appname)
File "/usr/lib/python3.2/importlib/__init__.py", line 124, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/lib/python3.2/importlib/_bootstrap.py", line 824, in _gcd_import
raise ImportError(_ERR_MSG.format(name))
ImportError: No module named agepct
Answer: The `__pycache__` directory is a cache for byte-compiled files and will be re-
built when you import the package.
The `__init__.py` file on the other hand is what makes a directory a module.
If it was initially empty, simply recreate it in the `agepct` directory for
the imports to work again. Note that `__init__.py` would _not_ have been
inside the `__pycache__` directory.
If `__init__.py` was more than just an empty 'make this a package' file, then
you have to recreate that code from scratch or restore the file from a backup.
|
For loop issue in python while using regex for pattern matching in DNA analysis
Question: Hey Stack Overflow, I am fairly new to Python and I have an issue with my for
loop that I can't quite seem to figure out.
I am trying to read into a FASTA file which has the following example text:
>seq1
AAACTACCGCGTTT
>seq2
AAACTGCAACTAGCGTTT
>seq3
AAACCGGAGTTACCTAGCGTTT
What I would like to do is read into my file and print the FASTA header (e.g.
header is >seq1), then I want to match two unique patterns (in this e.g. "AAA"
and "TTT") present in the DNA sequence and print the DNA sequence that is
between these two patterns.
So my will like my output to look like this:
>seq1
CTACCGCG
>seq2
CTGCAACTAGCG
>seq3
CCGGAGTTACCTAGCG
I have the following code:
import re
def find_seq(filename):
with open(filename) as file:
seq=''
for line in file:
header = re.search(r'^>\w+', line)
if(header):
print (header.group())
seq = seq.replace('\n','')
find_Lpattern = re.sub(r'.*AAA', '',seq)
find_Rpattern = re.sub(r'TTT.*', '',find_Lpattern)
if(find_Rpattern):
print (find_Rpattern)
seq = ''
else:
seq += line
filename = 'test.txt'
print(find_seq(filename))
I keep getting this as my output:
>seq1
>seq2
CTACCGCG
>seq3
CTGCAACTAGCG
Essentially my for loop skips over seq1 and then assigns the DNA sequence from
seq1 to seq2, and the iteration on my for loop is off. Could anyone please
point me in the right direction so I can fix this issue?
Answer: Even assuming your indentation is set in the way that would produce the
results you describe, your logic is off. You're printing the header before you
handle the accumulated `seq`.
When you read line 1 of your file, your `header` regexp matches. At that
point, `seq` is the empty string. It therefore prints the match, and runs your
replace and `re.sub` calls on the empty string.
Then it reads line 2, "AAACTACCGCGTTT", and appends that to `seq`.
Then it reads line 3, ">seq2". That matches your header regexp, so it prints
the header. Then it runs your replace and sub calls on `seq`, which is still
"AAACTACCGCGTTT" from line 2.
You need to move your `seq` handling to before you print the headers, and
consider what will happen when you run off the end of the file without finding
a final header - you will still have 'seq' contents that you want to parse and
print after your for loop has ended.
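A minimal corrected sketch along those lines, flushing `seq` before printing the next header and once more after the loop:

```python
import re

def find_seq(filename):
    # Flush the accumulated sequence *before* printing the next header,
    # and again after the loop so the final record is not lost.
    def emit(seq):
        seq = seq.replace('\n', '')
        trimmed = re.sub(r'TTT.*', '', re.sub(r'.*AAA', '', seq))
        if trimmed:
            print(trimmed)

    with open(filename) as f:
        seq = ''
        for line in f:
            header = re.search(r'^>\w+', line)
            if header:
                emit(seq)       # print the *previous* record's sequence
                seq = ''
                print(header.group())
            else:
                seq += line
        emit(seq)               # handle the last record after the file ends
```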
Or maybe look into the third-party Biopython library, which has the
[`SeqIO`](http://biopython.org/wiki/SeqIO) module to parse FASTA files.
|
plotting a range of data that satisfies conditions in another column in matplotlib python
Question: I'm new to python. I have an array with four columns. I want to plot columns 2
and 3, provided that column 1 satisfies a condition. If column 1 does not
satisfy this range, it is plotted in the next subplot. I have seen that using
the where function can do this - just not sure exactly how to go about it.
For example:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
data = np.array([[17., 18., 19., 20., 31., 46.],\
[1.52,2.5,2.55,2.56,2.53,2.54],\
[7.04,7.06,9.05,11.08,7.06,11.06],\
[0.,0.,0.,0.,4.,4.]])
First round and replace the second column:
dataRound = sp.round_(data,1)
data[:,1] = dataRound[:,1]
Then locate/plot the two different conditions:
if np.where(data[i]==1.5):
subplot(211)
plt.scatter(data[:,1],data[:,2])
elif np.where(data[i] ==2.5):
subplot(212)
plt.scatter(data[:,1], data[:,2])
Answer: Here is a code snippet that answers your question:
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0, 1000, 1000)
y = np.sin(0.1 * t)
ii = np.where(t > 100)
plt.plot(t[ii], y[ii])
Basically with `np.where` you generate the indexes that satisfy the logical
condition (t > 100) and then use that index array for slicing.
Update:
Is this what you need? Note that you have to index with [1,:] (row 1) instead
of [:,1] (column 1). Advice - print out what you are doing to be sure.
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
data = np.array([[17., 18., 19., 20., 31., 46.],\
[1.52,2.5,2.55,2.56,2.53,2.54],\
[7.04,7.06,9.05,11.08,7.06,11.06],\
[0.,0.,0.,0.,4.,4.]])
dataRound = sp.round_(data,1)
data[1,:] = dataRound[1,:]
ax1=plt.subplot(211)
ax2=plt.subplot(212)
ax1.scatter(data[1,data[1,:]<=1.5], data[2,data[1,:]<=1.5], color = 'g')
ax2.scatter(data[1,data[1,:]>=2.5], data[2,data[1,:]>=2.5], color = 'b')
if you want to plot the 2nd and 3rd columns, then:
ax1.scatter(data[2,data[1,:]<=1.5], data[3,data[1,:]<=1.5], color = 'g')
ax2.scatter(data[2,data[1,:]>=2.5], data[3,data[1,:]>=2.5], color = 'b')
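The key piece is the boolean mask. A numpy-only sketch of how the masks pick out matching columns (the resulting arrays are exactly what you would then hand to `ax1.scatter`/`ax2.scatter`):

```python
import numpy as np

data = np.array([[17., 18., 19., 20., 31., 46.],
                 [1.52, 2.5, 2.55, 2.56, 2.53, 2.54],
                 [7.04, 7.06, 9.05, 11.08, 7.06, 11.06],
                 [0., 0., 0., 0., 4., 4.]])

data[1, :] = np.round(data[1, :], 1)   # round the second row in place

low = data[1, :] <= 1.5    # boolean mask over the columns
high = data[1, :] >= 2.5

# Indexing a row with a boolean mask keeps only the matching columns.
x_low, y_low = data[2, low], data[3, low]
x_high, y_high = data[2, high], data[3, high]
```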
|
how to assign variable to module name in python function
Question: I have a set of modules, and I want to be able to call one of them within a
function based on an argument given to that function. I tried this, but it
doesn't work:
from my.set import modules
def get_modules(sub_mod):
variable = sub_mod
mod_object = modules.variable
function(mod_object)
I get:
AttributeError: 'module' object has no attribute 'variable'
It's not taking the argument I give it, which would be the name of a module
that exists under my.set.modules. so if I called the function
get_modules(name_of_mod_under_modules), I would like the line modules.variable
to be "modules.name_of_mod_under_modules" which I could then have as an object
passed to mod_object.
Answer: In your current code, you're looking for `modules.variable` which doesn't
exist, hence the error! That's not how you get an attribute of an object.
To achieve what you wanted, use the `getattr` function.
mod_object = getattr(modules, variable)
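For illustration with a real package (`os.path` standing in for `my.set.modules`, which only exists in your project):

```python
import os

def get_module_attr(parent, name):
    # getattr resolves the attribute from its *string* name at runtime,
    # which is what the literal "modules.variable" lookup cannot do.
    return getattr(parent, name)

mod_object = get_module_attr(os, "path")
print(mod_object.__name__)
```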
|
Python mocking- understand the concept and necessity
Question: **First of all, I really want to understand WHY I should use mocking (the
Python Mock library).**
What's the difference between quickly making a small object for the need of
testing something and using a mock object?
What are the advantages of mocking over other methods (I would really need
some "live" examples to understand this one)? Is mocking essential in some
cases?
Also: what's the difference between a Mock object and a MagicMock object? How do
they connect?
**The second thing is WHAT/WHERE to mock.**
Should I only mock, for example, the database query results? I mean: does the
mocked data should always be a remote one?
**tl;dr:** Please explain the mocking concept in Python to someone who never
used it in any language.
Answer: A mock object is meant to quickly and easily represent some complex object
without having to manually go through and set up stubs for that object during
a test. It's simply a useful utility for making writing tests a bit easier.
As to what/where to mock, anything that is external to the module-under-test
should be mocked. All you want to test is the current module code, not the
code for some module that is being called by the module-under-test.
A quick example is some code that uses the simplejson module.
import simplejson
def my_fn(args):
return simplejson.dumps({'args': args})
All you want to test is that the function my_fn makes the correct call to
simplejson.dumps(), so you mock simplejson. You don't really care if the
object passed to simplejson is converted to json correctly as testing that is
in the scope of the simplejson module (which has it's own set of tests that
you can run if you're so inclined).
import working_code
import mock
@mock.patch('working_code.simplejson')
def test_my_fn(mock_simplejson):
working_code.my_fn('test-args')
mock_simplejson.dumps.assert_called_with({'args': 'test-args'})
Note that mock.patch is simply a nice way to inject and remove mocks for a
particular test. After test_my_fn is run, working_code.simplejson returned to
the state before the function was called. If that's confusing, you can think
of the test example as:
import working_code
import mock
def test_my_fn():
mock_simplejson = mock.Mock()
working_code.simplejson = mock_simplejson
working_code.my_fn('test-args')
mock_simplejson.dumps.assert_called_with({'args': 'test-args'})
|
Python get generated url to string
Question: (Title might change not too sure what to call it)
So I'm trying to open a URL that directs to a random page (This URL:
[http://anidb.net/perl-
bin/animedb.pl?show=anime&do.random=1](http://anidb.net/perl-
bin/animedb.pl?show=anime&do.random=1)) and I want to return where that URL
goes
randomURL = urllib.urlopen("http://anidb.net/perl-bin/animedb.pl?show=anime&do.random=1")
print(randomURL)
That's what I (stupidly) thought would work. I imported urllib
Answer: In Python 3 and later, `urllib.urlopen` was replaced by `urllib.request.urlopen`.
Change the request line to this:
urllib.request.urlopen('http://anidb.net/perl-bin/animedb.pl?show=anime&do.random=1')
For more, you can see the
[docs](http://docs.python.org/3.0/library/urllib.request.html)
But if you want to have the url, which is a bit more difficult, you can take a
look at
[`urllib.request.HTTPRedirectHandler`](http://docs.python.org/3.3/library/urllib.request.html?highlight=httpredirecthandler#urllib.request.HTTPRedirectHandler)
|
Pls help: using Python mechanize, but don’t know the form name on webpage
Question: (I am a newbie in programming)
I am trying to write some Python code to log in to a forum; this is the webpage:
[https://www.artofproblemsolving.com/Forum/ucp.php?mode=login&redirect=/index.php](https://www.artofproblemsolving.com/Forum/ucp.php?mode=login&redirect=/index.php).
And unfortunately I don't have much knowledge of web page source code. My main
question is: what does the form name in the webpage source code look like?
For the code below (row 4), I need the form name of the webpage. However, what
I tried below is not working.
import mechanize
b = mechanize.Browser()
r = b.open("https://www.artofproblemsolving.com/Forum/ucp.php?mode=login&redirect=/index.php")
b.select_form(name="login")
b.form["login"] = "MYNAME"
b.form["password"] = "MYPASSWORD"
b.submit()
could you please help me? many thanks.
Answer: 1. check how the html page is structured, focusing on the form tag
 2. install FireBug on the FireFox browser, or use the equivalent built-in dev tools if you are on Chrome; then with FireBug open go to the 'net' tab and check what calls were made to the server when you submitted the form
 3. install scrapy and go through its tutorial <http://doc.scrapy.org/en/latest/intro/tutorial.html>; when you feel comfortable enough, check how to use scrapy's FormRequest <http://doc.scrapy.org/en/latest/topics/request-response.html#formrequest-objects>
Enjoy!
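If you'd rather find the form names programmatically, here is a small stdlib-only sketch that pulls every `<form>`'s name out of the HTML (shown on a made-up snippet, since the real page's markup may differ):

```python
from html.parser import HTMLParser

class FormNameFinder(HTMLParser):
    """Collects the name attribute of every <form> tag encountered."""
    def __init__(self):
        super().__init__()
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.names.append(dict(attrs).get("name"))

# Made-up snippet -- fetch the real page's HTML and feed that instead.
html = '<form name="login" method="post"><input name="username"></form>'
finder = FormNameFinder()
finder.feed(html)
print(finder.names)  # these are the names select_form() accepts
```

If a form turns out to have no name at all, mechanize can also select it by position with `b.select_form(nr=0)`.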
|
mod wsgi using apache and python
Question: I have used Python's built-in wsgiref to create a web server that can be
called locally. Now I just found out I need to change it so it runs through the
Apache server via mod_wsgi. I'm hoping to do this without rewriting my whole script.
import cgi
from wsgiref.simple_server import make_server
class FileUploadApp(object):
firstcult = ""
def __init__(self, root):
self.root = root
def __call__(self, environ, start_response):
if environ['REQUEST_METHOD'] == 'POST':
post = cgi.FieldStorage(
fp=environ['wsgi.input'],
environ=environ,
keep_blank_values=True
)
body = u"""
<html><body>
<head><title>title</title></head>
<h3>text</h3>
<form enctype="multipart/form-data" action="http://localhost:8088" method="post">
</body></html>
"""
return self.__bodyreturn(environ, start_response,body)
def __bodyreturn(self, environ, start_response,body):
start_response(
'200 OK',
[
('Content-type', 'text/html; charset=utf8'),
('Content-Length', str(len(body))),
]
)
return [body.encode('utf8')]
def main():
PORT = 8080
print "port:", PORT
ROOT = "/home/user/"
httpd = make_server('', PORT, FileUploadApp(ROOT))
print "Serving HTTP on port %s..."%(PORT)
httpd.serve_forever() # Respond to requests until process is killed
if __name__ == "__main__":
main()
I am hoping to find a way to make it possible to avoid making the server and
making it possible to run multiple instances of my script.
Answer: The documentation at:
* <http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines>
explains what mod_wsgi is expecting to be given.
If you also read:
* <http://blog.dscpl.com.au/2011/01/implementing-wsgi-application-objects.html>
you will learn about the various ways that WSGI application entry points can
be constructed.
From that you should identify that FileUploadApp fits one of the described
ways of defining a WSGI application and thus you only need satisfy the
requirement that mod_wsgi has of the WSGI application object being accessible
as 'application'.
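Concretely, for mod_wsgi you drop `make_server()`/`serve_forever()` entirely and expose the instance under the module-level name `application`. A trimmed sketch (the path and markup are placeholders from your script):

```python
# wsgi.py -- the file your Apache WSGIScriptAlias directive points at.
# mod_wsgi imports this module and looks for the name "application".

class FileUploadApp(object):
    def __init__(self, root):
        self.root = root

    def __call__(self, environ, start_response):
        body = u"<html><body><h3>text</h3></body></html>".encode("utf8")
        start_response("200 OK", [
            ("Content-Type", "text/html; charset=utf8"),
            ("Content-Length", str(len(body))),
        ])
        return [body]

# No make_server()/serve_forever(): Apache owns the process and calls this
# object for every request. Running multiple instances is then a matter of
# the WSGIDaemonProcess settings in the Apache config, not of your script.
application = FileUploadApp("/home/user/")
```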
|
How to check in Python whether a private key is associated with a certificate
Question: I know that the modulus of the certificate and of the private key must be
the same if they are related. But how can this be checked using Python? I am
looking for a solution using the OpenSSL library, but I found none. Please
tell me how to determine, in Python, whether a certificate and a private key
are associated. The private key is unencrypted and in PEM format; the
certificate is in PEM format as well. Preferably using standard libraries,
and without calling OpenSSL through subprocess.
Thanks.
Answer: There is a Python interface to the OpenSSL library: pyOpenSSL, currently at
version 0.13.1.
EDIT : The answer to the question...
**Verify that a private key matches a certificate with PyOpenSSL** :
import OpenSSL.crypto
from Crypto.Util import asn1
c=OpenSSL.crypto
# The certificate - an X509 object
cert=...
# The private key - a PKey object
priv=...
pub=cert.get_pubkey()
# Only works for RSA (I think)
if pub.type()!=c.TYPE_RSA or priv.type()!=c.TYPE_RSA:
raise Exception('Can only handle RSA keys')
# This seems to work with public as well
pub_asn1=c.dump_privatekey(c.FILETYPE_ASN1, pub)
priv_asn1=c.dump_privatekey(c.FILETYPE_ASN1, priv)
# Decode DER
pub_der=asn1.DerSequence()
pub_der.decode(pub_asn1)
priv_der=asn1.DerSequence()
priv_der.decode(priv_asn1)
# Get the modulus
pub_modulus=pub_der[1]
priv_modulus=priv_der[1]
if pub_modulus==priv_modulus:
print('Match')
else:
print('Oops')
_(Source :<http://www.v13.gr/blog/?p=325>)_
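If you can use the third-party `cryptography` package (which modern pyOpenSSL is built on), the comparison needs no manual ASN.1 decoding. A sketch for unencrypted RSA keys in PEM, assuming a reasonably recent release of the package and that `key_pem`/`cert_pem` hold the raw PEM bytes:

```python
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.x509 import load_pem_x509_certificate

def key_matches_cert(key_pem, cert_pem):
    key = load_pem_private_key(key_pem, password=None)
    cert = load_pem_x509_certificate(cert_pem)
    # An RSA key and certificate belong together when the public moduli match.
    return (key.public_key().public_numbers().n ==
            cert.public_key().public_numbers().n)
```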
|
Explanation of program's behavior (Python)
Question:
import threading
shared_balance = 0
class Deposit(threading.Thread):
def run(self):
for i in xrange(1000000):
global shared_balance
balance = shared_balance
balance += 100
shared_balance = balance
class Withdraw(threading.Thread):
def run(self):
for i in xrange(1000000):
global shared_balance
balance = shared_balance
balance -= 100
shared_balance = balance
thread1 = Deposit()
thread2 = Withdraw()
thread1.start()
thread2.start()
thread1.join()
thread2.join()
print shared_balance
Every time I run this program it outputs a random number. If it deposits 100 a
million times and withdraws 100 a million times, then why isn't the output 0?
Answer: You need to use
[threading.Lock](http://docs.python.org/2/library/threading.html#lock-objects)
to access you variables safely :
from threading import Thread, Lock
shared_balance = 0
class Deposit(Thread):
def __init__(self, lock):
super(Deposit, self).__init__()
self.lock = lock
def run(self):
global shared_balance
for i in xrange(1000000):
with self.lock:
shared_balance += 100
class Withdraw(Thread):
def __init__(self, lock):
super(Withdraw, self).__init__()
self.lock = lock
def run(self):
global shared_balance
for i in xrange(1000000):
with self.lock:
shared_balance -= 100
shared_lock = Lock()
thread1 = Deposit(shared_lock)
thread2 = Withdraw(shared_lock)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
print shared_balance
Output :
>>> 0
Also, take a look at the bytecode generated for :
a = 0
def f():
global a
a += 10
Bytecode of "a += 10" :
6 LOAD_GLOBAL 0 (a) # Load global "a" UNSAFE ZONE
9 LOAD_CONST 2 (10) # Load value "10" UNSAFE ZONE
12 INPLACE_ADD # Perform "+=" UNSAFE ZONE
13 STORE_GLOBAL 0 (a) # Store global "a"
16 LOAD_CONST 0 (None) # Load "None"
19 RETURN_VALUE # Return "None"
In Python, the execution of a single bytecode cannot be preempted, which makes
bytecode convenient to reason about for threading. But in this case, it takes
4 bytecode executions to perform the '+=' operation. That means any bytecode of
any other thread is susceptible to being executed between these bytecodes. This
is what makes it unsafe and the reason why you should use locks.
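You can reproduce a listing like this yourself with the standard `dis` module (exact opcode names vary between Python versions; for instance `INPLACE_ADD` was folded into `BINARY_OP` in newer releases):

```python
import dis

a = 0
def f():
    global a
    a += 10

dis.dis(f)  # shows the separate load / add / store steps
```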
|
Django, ImportError: cannot import name Celery, possible circular import?
Question: I went through this example here:
<http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html>
All my tasks are in files called tasks.py.
After updating celery and adding the file from the example django is throwing
the following error, no matter what I try:
ImportError: cannot import name Celery
Is the problem possibly caused by the following?
app.autodiscover_tasks(settings.INSTALLED_APPS, related_name='tasks')
Because it goes through all tasks.py files which all have the following
import.
from cloud.celery import app
**cloud/celery.py** :
from __future__ import absolute_import
import os, sys
from celery import Celery
from celery.schedules import crontab
from django.conf import settings
BROKER_URL = 'redis://:PASSWORD@localhost'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cloud.settings')
app = Celery('cloud', broker=BROKER_URL)
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(settings.INSTALLED_APPS, related_name='tasks')
if "test" in sys.argv:
app.conf.update(
CELERY_ALWAYS_EAGER=True,
)
print >> sys.stderr, 'CELERY_ALWAYS_EAGER = True'
CELERYBEAT_SCHEDULE = {
'test_rabbit_running': {
"task": "retail.tasks.test_rabbit_running",
"schedule": 3600, #every hour
},
[..]
app.conf.update(
CELERYBEAT_SCHEDULE=CELERYBEAT_SCHEDULE
)
**retail/tasks.py** :
from cloud.celery import app
import logging
from celery.utils.log import get_task_logger
logger = get_task_logger('tasks')
logger.setLevel(logging.DEBUG)
@app.task
def test_rabbit_running():
import datetime
utcnow = datetime.datetime.now()
logger.info('CELERY RUNNING')
The error happens when I try to access a URL that is not valid, like /foobar.
**Here is the full traceback** :
Traceback (most recent call last):
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 126, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 255, in __call__
response = self.get_response(request)
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 178, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 220, in handle_uncaught_exception
if resolver.urlconf_module is None:
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 342, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/opt/src/slicephone/cloud/cloud/urls.py", line 52, in <module>
urlpatterns += patterns('', url(r'^search/', include('search.urls')))
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 25, in include
urlconf_module = import_module(urlconf_module)
File "/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/opt/src/slicephone/cloud/search/urls.py", line 5, in <module>
from handlers import SearchHandler
File "/opt/src/slicephone/cloud/search/handlers.py", line 15, in <module>
from places import handlers as placeshandler
File "/opt/src/slicephone/cloud/places/handlers.py", line 23, in <module>
import api as placesapi
File "/opt/src/slicephone/cloud/places/api.py", line 9, in <module>
from djapi import *
File "/opt/src/slicephone/cloud/places/djapi.py", line 26, in <module>
from tasks import add_single_place, add_multiple_places
File "/opt/src/slicephone/cloud/places/tasks.py", line 2, in <module>
from cloud.celery import app
File "/opt/src/slicephone/cloud/cloud/celery.py", line 4, in <module>
from celery import Celery
File "/opt/src/slicephone/cloud/cloud/celery.py", line 4, in <module>
from celery import Celery
ImportError: cannot import name Celery
Answer: Adding the following lines to cloud/celery.py:
import celery
print celery.__file__
gave me the file itself and not the celery module from the library. After
renaming celery.py to celeryapp.py and adjusting the imports all errors were
gone.
Note:
That leads to a change in starting the worker:
celery worker --app=cloud.celeryapp:app
For those running celery==3.1.2 and getting this error:
TypeError: unpack_from() argument 1 must be string or read-only buffer, not memoryview
Apply the patch mentioned here: <https://github.com/celery/celery/issues/1637>
|
Trying to understand a solution to project Euler # 3
Question: > The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime
> factor of the number 600851475143 ? @ <http://projecteuler.net/problem=3>
I have a deal going with myself that if I can't solve a project Euler problem
I will understand the best solution I can find. I did write an algorithm which
worked for smaller numbers but was too inefficient to work for bigger ones. So
I googled [Zach Denton's](http://zacharydenton.com/project-euler-solutions/3/)
answer and started studying it.
Here is his code:
#!/usr/bin/env python
import math
def factorize(n):
res = []
# iterate over all even numbers first.
while n % 2 == 0:
res.append(2)
n //= 2
# try odd numbers up to sqrt(n)
limit = math.sqrt(n+1)
i = 3
while i <= limit:
if n % i == 0:
res.append(i)
n //= i
limit = math.sqrt(n+i)
else:
i += 2
if n != 1:
res.append(n)
return res
print max(factorize(600851475143))
Here are the bits I can't figure out for myself:
1. In the second while loop, why does he use a `sqrt(n + 1)` instead of just `sqrt(n)`?
2. Why wouldn't you also use `sqrt(n + 1)` when iterating over the even numbers in the first while loop?
3. How does the algorithm manage to find only prime factors? In the algorithm I first wrote I had a separate test for checking whether a factor was prime, but he doesn't bother.
Answer: 1. I suspect the `+1` has to do with the imprecision of `float` (I am not sure whether it's actually required, or is simply a defensive move on the author's part).
2. The first `while` loop factors all twos out of `n`. I don't see how `sqrt(n + 1)` would fit in there.
3. If you work from small factor to large factors, you automatically eliminate all composite candidates. Think about it: once you've factored out `5`, you've automatically factored out `10`, `15`, `20` etc. No need to check whether they're prime or not: by that point `n` will not be divisible by them.
I suspect that checking for primality is what's killing your original
algorithm's performance.
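As an aside, point 1 can be sidestepped entirely by comparing `i * i <= n` in integer arithmetic instead of taking a float square root. A trimmed sketch of the same algorithm:

```python
def factorize(n):
    res = []
    while n % 2 == 0:          # strip factors of 2 first
        res.append(2)
        n //= 2
    i = 3
    while i * i <= n:          # integer comparison, no sqrt imprecision
        if n % i == 0:
            res.append(i)      # i must be prime: smaller factors are gone
            n //= i
        else:
            i += 2
    if n != 1:
        res.append(n)          # whatever remains is itself prime
    return res

print(max(factorize(600851475143)))  # 6857
```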
|
Python: Iterate through folders, then subfolders and print filenames with path to text file
Question: I am trying to use python to create the files needed to run some other
software in batch.
for part of this i need to produce a text file that loads the needed data
files into the software.
My problem is that the files i need to enter into this text file are stored in
a set of structured folders.
I need to loop over a set of folders (up to 20), which each could contain up
to 3 more folders which contain the files i need. The bottom level of the
folders contain a set of files needed for each run of the software. The text
file should have the path+name of these files printed line by line, add an
instruction line and then move to the next set of files from a folder and so
on until all of sub level folders have been checked.
i am fairly new with python and can't really find anything that quite does
what i need.
Any help is appreciated
Answer: Use os.walk(). The following will output a list of all files within the
subdirectories of "dir". The results can be manipulated to suit your needs:
import os
def list_files(dir):
    r = []
    for root, dirs, files in os.walk(dir):
        for name in files:
            r.append(os.path.join(root, name))
    return r
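Building on `os.walk()`, here is a hedged sketch of the batch file the question describes: each folder's file paths are written line by line, followed by an instruction line (the literal "RUN" is a placeholder for whatever your software expects):

```python
import os

def write_batch_file(top, out_path, instruction="RUN"):
    # Walk every folder under `top`; after each folder that contains
    # files, write its file paths and then one instruction line.
    with open(out_path, "w") as out:
        for root, dirs, files in os.walk(top):
            if not files:
                continue
            for name in sorted(files):          # deterministic order
                out.write(os.path.join(root, name) + "\n")
            out.write(instruction + "\n")
```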
|
Can oauth.get_current_user() be used with OAuth2?
Question: I'm having a hard time finding a definitive answer about the use of OAuth2
within _my_ GAE app. First, this is **not** an endpoints app, just a plain old
python app.
I can get the `oauth.get_current_user()` method to return the authenticated
user when expected if using the OAuth endpoints within my app
(`appid.appspot.com/_ah/OAuth*`), but this is using OAuth1, which is
deprecated -- Google's dev docs make that very clear.
So I tried using Google's OAuth2 endpoints to auth my app and I've gotten the
access token, but the `oauth.get_current_user()` call within my GAE app always
throws an exception (invalid OAuth sig) and never presents the User object
when expected. I've tried authorizing my app with various scopes
(`https://www.googleapis.com/auth/userinfo.email` &
`https://www.googleapis.com/auth/appengine.admin`), but it doesn't matter as
when I sign the request with the OAuth2 token, my GAE app never accepts the
request as valid and `oauth.get_current_user()` always throws an exception.
So my question is, should I be able to use the `oauth.get_current_user()` call
from within my GAE app when signing requests with an OAuth2 token? If so,
which scope(s) must I authorize for access to the GAE app?
Answer: tl;dr;
try this inside appengine code:
from google.appengine.api import oauth
oauth.get_current_user(SCOPE)
* * *
I've been down the same path for the past week, wandering among vague google
documents.
My final understanding is that AppEngine never officially made it to the
OAuth2 land. You see these 'OAuth1 being deprecated' messages all over google
API documents, but it's actually quiet in appengine documents. It talks about
OAuth, but does not talk about which version.
This is the landscape of what I think the current status is (as of
2013-12-07):
* [[1] Authorizing into appengine with OAuth](https://developers.google.com/appengine/docs/python/oauth/): the `*.appspot.com/_ah/` approach. Doesn't say which version. Likely 1.0 only.
* [[2] Google API Authorization](https://developers.google.com/api-client-library/python/guide/aaa_oauth): all the OAuth2 fuss, but it's about requesting other Google APIs, not much about appengine.
* [[3] Google accounts authentication with OAuth 2.0](https://developers.google.com/accounts/docs/OAuth2Login): logging in with general google account. Unfortunately appengine is not included in the scope.
There is another document that talks about [OAuth 2.0 on
appengine](https://developers.google.com/api-client-
library/python/guide/google_app_engine), but it's about calling Google APIs
from appengine server, not logging into it.
I tried accessing appengine server with the OAuth2 approach in
[[3](https://developers.google.com/accounts/docs/OAuth2Login)], but
`oauth.get_current_user()` method raised an exception. Also tried various
scopes, hoping one would fit for appengine, only to fail.
However,
What I found out [from another SO
answer](http://stackoverflow.com/questions/7810607/google-app-engine-
oauth2-provider/10855271#10855271), was an **undocumented** use of the method:
oauth.get_current_user('https://www.googleapis.com/auth/userinfo.email')
passing the scope as an argument. And this worked, provided the consumer had
passed the access token with the scope.
And it turned out it was in the [appengine
code](https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/oauth/oauth_api.py?spec=svn404&r=400#85)
after all. It just wasn't
[documented](https://developers.google.com/appengine/docs/python/oauth/functions#get_current_user).
Improvements or corrections to any misunderstandings are welcome.
|
shuffle a python array WITH replacement
Question: What is the easiest way to shuffle a python array or list WITH replacement??
I know about `random.shuffle()` but it does the reshuffling WITHOUT
replacement.
Answer: You are looking for
[`random.choice()`](http://docs.python.org/2/library/random.html#random.choice)
calls in a list comprehension:
[random.choice(lst) for _ in range(len(lst))]
This produces a list of the same length as the input list, but the values can
repeat.
Demo:
>>> import random
>>> lst = [1,2,4,5,3]
>>> [random.choice(lst) for _ in range(len(lst))]
[3, 5, 1, 4, 1]
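If you are on Python 3.6 or newer, `random.choices` does the with-replacement sampling in a single call:

```python
import random

lst = [1, 2, 4, 5, 3]
sample = random.choices(lst, k=len(lst))  # k draws WITH replacement
print(sample)
```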
|
Loop URL to scrape using beautiful soup python
Question: I am using the following code to scrape the website. The following which I
tried works fine for a page in the website. Now I want to scrape several such
pages for which I am looping the URL as shown below.
from bs4 import BeautifulSoup
import urllib2
import csv
import re
number = 2500
for i in xrange(2500,7000):
page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
soup = BeautifulSoup(page.read())
for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
print re.sub(r'\s+',' ',','.join(eachuniversity.findAll(text=True)).encode('utf-8'))
print '\n'
number = number + 1
The following is the normal code without loop
from bs4 import BeautifulSoup
import urllib2
import csv
import re
page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id=4591")
soup = BeautifulSoup(page.read())
for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
print re.sub(r'\s+',' ',''.join(eachuniversity.findAll(text=True)).encode('utf-8'))
I am looping the `id` value in the URL from 2500 to 7000. But there are many
`id`'s for which there is no value. So there are no such pages. How do I skip
those pages and scrape data only when there exists data for given `id`.
Answer: you can either try catch the result ([Why is it "Easier to ask forgiveness
than permission" in python, but not in
Java?](http://stackoverflow.com/questions/6092992/why-is-it-easier-to-ask-
forgiveness-than-permission-in-python-but-not-in-java)):
for i in xrange(2500,7000):
try:
page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
except:
continue
else:
soup = BeautifulSoup(page.read())
for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
print re.sub(r'\s+',' ',','.join(eachuniversity.findAll(text=True)).encode('utf-8'))
print '\n'
number = number + 1
or use a (great) lib such as [requests](http://www.python-
requests.org/en/latest/) and check before scrapping
import requests
for i in xrange(2500,7000):
page = requests.get("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
if not page.ok:
continue
soup = BeautifulSoup(page.text)
for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
print re.sub(r'\s+',' ',','.join(eachuniversity.findAll(text=True)).encode('utf-8'))
print '\n'
number = number + 1
basically there's no way for you to know if the page with that id exists
before calling the url.
|
why does the timeit module take a long time when the code itself produces output quickly in python?
Question: here is the code to calculate fibonacci
import timeit
counter=0
def fibhelper(n):
global counter
counter+=1
if n==0 :
return 0
elif n==1:
return 1
else:
return fibhelper(n-1)+fibhelper(n-2)
print fibhelper(20)
print "Total function calls-- ",counter
t1=timeit.Timer('fibhelper(20)',"from __main__ import fibhelper")
y=t1.timeit()
print "normal method in secs: ",y
output is:
6765
Total function calls-- 21891
which comes out immediately, but it is still calculating `y`. Why is this?
When the `function` is evaluated quickly, why does `timeit` of that `function`
take so much longer?
Answer: The default parameters of `timeit` include: `number=1000000`.
Quoting the documentation of `timeit`:
> ... run its `timeit()` method with `number` executions.
Therefore, it is expected to take 1000000 times longer.
|
Looping through a list of tuples to make a POST request with each tuple
Question: I have a list of tuples here:
import datetime
import requests
from operator import itemgetter
original = [(datetime.datetime(2013, 11, 12, 19, 24, 50), u'78:E4:00:0C:50:DF', u' 8', u'Hon Hai Precision In', u''), (datetime.datetime(2013, 11, 12, 19, 24, 50), u'78:E4:00:0C:50:DF', u' 8', u'Hon Hai Precision In', u''), (datetime.datetime(2013, 11, 12, 19, 24, 48), u'9C:2A:70:69:81:42', u' 5', u'Hon Hai Precision In 12:', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'00:1E:4C:03:C0:66', u' 9', u'Hon Hai Precision In', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'20:C9:D0:C6:8F:15', u' 8', u'Apple', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:5D:43:90:C8:0B', u' 11', u'Intel Orate', u' MADEGOODS'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:96:7B:C1:76:90', u' 15', u'Apple', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:96:7B:C1:76:90', u' 15', u'Apple', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'04:F7:E4:A0:E1:F8', u' 32', u'Apple', u''), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'04:F7:E4:A0:E1:F8', u' 32', u'Apple', u'')]
data = [x[:-2] for x in original]
newData = sorted(data, key=itemgetter(0))
print newData
[(datetime.datetime(2013, 11, 12, 19, 24, 47), u'00:1E:4C:03:C0:66', u' 9'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'20:C9:D0:C6:8F:15', u' 8'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:5D:43:90:C8:0B', u' 11'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:96:7B:C1:76:90', u' 15'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'68:96:7B:C1:76:90', u' 15'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'04:F7:E4:A0:E1:F8', u' 32'), (datetime.datetime(2013, 11, 12, 19, 24, 47), u'04:F7:E4:A0:E1:F8', u' 32'), (datetime.datetime(2013, 11, 12, 19, 24, 48), u'9C:2A:70:69:81:42', u' 5'), (datetime.datetime(2013, 11, 12, 19, 24, 50), u'78:E4:00:0C:50:DF', u' 8'), (datetime.datetime(2013, 11, 12, 19, 24, 50), u'78:E4:00:0C:50:DF', u' 8')]
The first element in each tuple is a date/time, the second is a MAC address
and the third is a RSSI strength.
I am looking for the best way to send each tuple in a POST request to
Google's Measurement Protocol, like so:
requests.post("http://www.google-analytics.com/collect",
data="v=1&tid=UA-22560594-2&cid="varMACADDRESS"&t=event&ec="varDATETIME"&ea="varRSSI")
The "varXXXXXX"s represent the elements of the tuples.
This is what I think should be the solution, but I can't think of how to
assign the elements of each tuple to the %s's:
for tuples [:10] in newData:
requests.post("http://www.google-analytics.com/collect",
data="v=1&tid=UA-22560594-2&cid="%s"&t=event&ec="%s"&ea="%s")
What would be the most efficient and pythonic way to do this?
Answer: Just take advantage of the fact that you can specify a `dict` to the `data`
kwarg and requests will [handle the form-encoding for you](http://www.python-
requests.org/en/latest/user/quickstart/#more-complicated-post-requests).
for date,mac,rssi in some_collection_of_tuples:
payload = {'t':'event','v':'1','ec':date,'cid':mac,...} #etc
requests.post("http://www.google-analytics.com/collect", data=payload)
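A fuller sketch of that loop. The field mapping (`cid` = MAC, `ec` = datetime, `ea` = RSSI) and the `tid` value are taken from the question's URL; the actual `requests.post` call is left commented out so the snippet runs without network access:

```python
import datetime

def build_payload(row):
    date, mac, rssi = row
    return {
        'v': '1',
        'tid': 'UA-22560594-2',
        't': 'event',
        'cid': mac,
        'ec': str(date),
        'ea': rssi.strip(),   # drop the padding around the RSSI value
    }

row = (datetime.datetime(2013, 11, 12, 19, 24, 47), u'00:1E:4C:03:C0:66', u' 9')
payload = build_payload(row)
# requests.post("http://www.google-analytics.com/collect", data=payload)
print(payload['ea'])
```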
|
How to change runs of values in a column with Perl, Python or bash
Question: I want to iterate through lots of tab-delimited files and change a single
column in the following way:
ORIGINAL
Col1 Col2 ....
afawer 1
asdgf 1
aser 1
qwerq 10
a3awer 10
1sdgf 11
a55er 11
2wu9 12
asxwer 12
a2dgf 13
a1er 13
qperq 13
...
DESIRED REPLACEMENT
Col1 Col2 ....
afawer 1
asdgf 1
aser 1
qwerq 2
a3awer 2
1sdgf 3
a55er 3
2wu9 4
asxwer 4
a2dgf 5
a1er 5
qperq 5
...
Note that the run lengths in Col2 will vary substantially for each file, so
the lengths cannot be hard-coded. Essentially, the pattern I want to replace
is this:
> aaabbbbbccccdddd
where a, b, c and d could be any integers. The desired replacement is:
> 1112222233334444
and so on (i.e. a natural ordering of integers).
I'd like to do this with Python or Perl, or just using bash commands like
`sed` if possible.
Answer: With Python, use
[`itertools.groupby()`](http://docs.python.org/2/library/itertools.html#itertools.groupby)
to group rows on the second column, and a counter provided by
[`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate):
import csv
from itertools import groupby
from operator import itemgetter
with open(inputfile, 'rb') as ifh, open(outputfile, 'wb') as ofh:
reader = csv.reader(ifh, delimiter='\t')
writer = csv.writer(ofh, delimiter='\t')
writer.writerow(next(reader)) # copy across header
for counter, (key, group) in enumerate(groupby(reader, itemgetter(1)), 1):
for row in group:
row[1] = counter
writer.writerow(row)
This writes a new CSV file with the same data, except the second column is
replaced by a counter (starting at 1) that increments every time the original
value in column 2 changes.
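The core of the transformation is easy to check in isolation. This helper (a sketch separate from the CSV handling) renumbers the runs exactly as in the DESIRED REPLACEMENT table:

```python
from itertools import groupby

def renumber(values):
    """Replace each run of equal values with 1, 2, 3, ... in order."""
    out = []
    for counter, (_, group) in enumerate(groupby(values), 1):
        out.extend(counter for _ in group)
    return out

print(renumber([1, 1, 1, 10, 10, 11, 11, 12, 12, 13, 13, 13]))
# -> [1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
```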
|
Skip an iteration while looping through a list - Python
Question: Is there a way to skip the first iteration in this for-loop, so that I can put
a for-loop inside a for-loop in order to compare the first element in the list
with the rest of them.
from collections import Counter
vowelCounter = Counter()
vowelList = {'a','e','i','o','u'}
userString = input("Enter a string ")
displayed = False
for letter in userString:
letter = letter.lower()
if letter in vowelList:
vowelCounter[letter] +=1
for vowelCount1 in vowelCounter.items():
char, count = vowelCount1
for vowelCount2 in vowelCounter.items(STARTING AT 2)
char2, count2 = vowelCount2
if count > count2 : CONDITION
How would the syntax go for this? I only need to do a 5-deep for-loop, so the
next would start at 3, then 4, then 5, then the correct print
statement depending on the condition. Thanks
Answer: You could do:
for vowelCount2 in vowelCounter.items()[1:]:
This will give you all the elements of `vowelCounter.items()` except the first
one.
The `[1:]` means you're slicing the list and it means: start at index `1`
instead of at index `0`. As such you're excluding the first element from the
list.
If you want the starting index to depend on the previous loop you can do:

    for i, vowelCount1 in enumerate(vowelCounter.items()):
        # ...
        for vowelCount2 in vowelCounter.items()[i + 1:]:
            # ...

This means you're specifying `i + 1` as the starting index, so each inner loop
begins just after `vowelCount1`. The function `enumerate(mylist)` gives you an
index and an element of the list each time as you're iterating over `mylist`.
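One caveat: on Python 3, `dict.items()` returns a view that cannot be sliced, so wrap it in `list()` first. A runnable sketch (the vowel counts are made up for illustration):

```python
from collections import Counter

vowel_counts = Counter({'a': 2, 'e': 2, 'i': 3, 'o': 4, 'u': 1})
items = list(vowel_counts.items())       # list() makes the view sliceable
pairs = []
for i, (char1, count1) in enumerate(items):
    for char2, count2 in items[i + 1:]:  # inner loop starts just after items[i]
        pairs.append((char1, char2))
print(len(pairs))  # 5 letters -> 10 unordered pairs
```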
|
Python using import in a function, import content of variable and not its name?
Question: I have a function that checks if a module is installed and, if not, installs
it. I am passing the extension name through a function. However, how can I stop it
from attempting to import the variable name, and make it use the variable's contents instead?
Example:
def importExtension(extension):
try:
import extension
except:
Do stuff
importExtension("blah")
Answer: Use
[importlib](http://docs.python.org/2/library/importlib.html#importlib.import_module)
([backport](https://pypi.python.org/pypi/importlib/)).
import importlib
def importExtension(extension):
try:
            importlib.import_module(extension)
        except ImportError:
Do stuff
importExtension("blah")
* * *
Also, to quote the docs about `__import__(..)`:
> This is an advanced function that is not needed in everyday Python
> programming, unlike importlib.import_module().
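A runnable sketch of the pattern. `import_extension` is a hypothetical name, and the install-then-retry logic from the question would go where the `None` is returned:

```python
import importlib

def import_extension(name):
    """Return the imported module, or None when it isn't available."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None   # in the full version: install the package, then retry

math_mod = import_extension("math")
missing = import_extension("definitely_not_a_real_module")
print(missing is None)
```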
|
Python threading and interpreter shutdown - is this fixable, or is it Python Issue #14623?
Question: I have a Python script that uploads files to a cloud account. It was working for
a while, but out of nowhere I started getting the '`Exception in thread
Thread-1 (most likely raised during interpreter shutdown)`' error. After
researching I found this Python issue, <http://bugs.python.org/issue14623>,
which states the issue will not get fixed.
which states the issue will not get fixed.
However, I'm not exactly sure this would apply to me and I am hoping someone
could point out a fix. I would like to stay with python's threading and try to
avoid using multiprocessing since this is I/O bound. This is the stripped-down
version (which also has this issue), but in the full version upload.py has
a list I'd like to share, so I want it to run in the same memory.
It always breaks only after it completes and all the files are uploaded. I
tried removing 't.daemon = True' and it will just hang (instead of breaking) at
that same point (after all the files are uploaded). I also tried removing
`q.join()` along with 't.daemon = True' and it will just hang after
completion. Without the t.daemon = True and q.join(), I think it is blocking
at `item = q.get()` when it comes to the end of the script execution (just a
guess).
main:
import logging
import os
import sys
import json
from os.path import expanduser
from Queue import Queue
from threading import Thread
from auth import Authenticate
from getinfo import get_containers, get_files, get_link
from upload import upload_file
from container_util import create_containers
from filter import MyFilter
home = expanduser("~") + '/'
directory = home + "krunchuploader_logs"
if not os.path.exists(directory):
os.makedirs(directory)
debug = directory + "/krunchuploader__debug_" + str(os.getpid())
error = directory + "/krunchuploader__error_" + str(os.getpid())
info = directory + "/krunchuploader__info_" + str(os.getpid())
os.open(debug, os.O_CREAT | os.O_EXCL)
os.open(error, os.O_CREAT | os.O_EXCL)
os.open(info, os.O_CREAT | os.O_EXCL)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s - %(levelname)s - %(message)s',
filename=debug,
filemode='w')
logger = logging.getLogger("krunch")
fh_error = logging.FileHandler(error)
fh_error.setLevel(logging.ERROR)
fh_error.setFormatter(formatter)
fh_error.addFilter(MyFilter(logging.ERROR))
fh_info = logging.FileHandler(info)
fh_info.setLevel(logging.INFO)
fh_info.setFormatter(formatter)
fh_info.addFilter(MyFilter(logging.INFO))
std_out_error = logging.StreamHandler()
std_out_error.setLevel(logging.ERROR)
std_out_info = logging.StreamHandler()
std_out_info.setLevel(logging.INFO)
logger.addHandler(fh_error)
logger.addHandler(fh_info)
logger.addHandler(std_out_error)
logger.addHandler(std_out_info)
def main():
sys.stdout.write("\x1b[2J\x1b[H")
print title
authenticate = Authenticate()
cloud_url = get_link(authenticate.jsonresp)
#per 1 million files the list will take
#approx 300MB of memory.
file_container_list, file_list = get_files(authenticate, cloud_url)
cloud_container_list = get_containers(authenticate, cloud_url)
create_containers(cloud_container_list,
file_container_list, authenticate, cloud_url)
return file_list
def do_the_uploads(file_list):
def worker():
while True:
item = q.get()
upload_file(item)
q.task_done()
q = Queue()
for i in range(5):
t = Thread(target=worker)
t.daemon = True
t.start()
for item in file_list:
q.put(item)
q.join()
if __name__ == '__main__':
file_list = main()
value = raw_input("\nProceed to upload files? Enter [Y/y] for yes: ").upper()
if value == "Y":
do_the_uploads(file_list)
upload.py:
def upload_file(file_obj):
absolute_path_filename, filename, dir_name, token, url = file_obj
url = url + dir_name + '/' + filename
header_collection = {
"X-Auth-Token": token}
print "Uploading " + absolute_path_filename
with open(absolute_path_filename) as f:
r = requests.put(url, data=f, headers=header_collection)
print "done"
Error output:
Fetching Cloud Container List... Got it!
All containers exist, none need to be added
Proceed to upload files? Enter [Y/y] for yes: y
Uploading /home/one/huh/one/green
Uploading /home/one/huh/one/red
Uploading /home/one/huh/two/white
Uploading /home/one/huh/one/blue
Uploading /home/one/huh/two/yellow
done
Uploading /home/one/huh/two/purple
done
done
done
done
done
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 808, in __bootstrap_inner
File "/usr/lib64/python2.7/threading.py", line 761, in run
File "krunchuploader.py", line 97, in worker
File "/usr/lib64/python2.7/Queue.py", line 168, in get
File "/usr/lib64/python2.7/threading.py", line 332, in wait
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
UPDATE: I placed a time.sleep(2) at the end of the script, which seems to have
fixed the issue. I guess the sleep allows the daemons to finish before the
script reaches the end of its life and closes? I would have thought the main process
would have to wait for the daemons to finish.
Answer: You can use a "poison pill" to kill the workers gracefully. After putting all
the work in the queue, add a special object, one per worker, that workers
recognize and quit. You can make the threads non-daemonic so Python will wait
for them to finish before shutting down the process.
A concise way to make `worker` recognize `poison` and quit is to use the two-
argument form of the
[`iter()`](http://docs.python.org/2/library/functions.html#iter) builtin in a
`for` loop:
def do_the_uploads(file_list):
def worker():
for item in iter(q.get, poison):
upload_file(item)
poison = object()
num_workers = 5
q = Queue()
for i in range(num_workers):
t = Thread(target=worker)
t.start()
for item in file_list:
q.put(item)
for i in range(num_workers):
q.put(poison)
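A self-contained version of the pattern (Python 3 module names; on Python 2 the import is `from Queue import Queue`), runnable without the upload code:

```python
from queue import Queue
from threading import Thread

def run_jobs(items, handle, num_workers=5):
    """Minimal self-contained sketch of the poison-pill pattern."""
    poison = object()          # unique sentinel no real work item equals
    q = Queue()

    def worker():
        for item in iter(q.get, poison):   # loop until the pill arrives
            handle(item)

    threads = [Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for item in items:
        q.put(item)
    for _ in threads:
        q.put(poison)          # one pill per worker
    for t in threads:
        t.join()               # non-daemon threads finish before exit

results = []
run_jobs(range(10), results.append, num_workers=3)
print(sorted(results))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```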
|
Python: how to replace values in masked array according to a condition
Question: I have an array. I want to replace the values `> 5` with `1`, and the values
`<= 5` with `0`. I also must to take into account the invalid values (`999`).
1) My array:
>>> import numpy
>>> a = numpy.array([ [[2, 5, 999],[0, 12, 1]], [[999, 8, 7],[7, 11, 6]] ])
>>> a
array([[[ 2, 5, 999],
[ 0, 12, 1]],
[[999, 8, 7],
[ 7, 11, 6]]])
2) I mask the invalid values:
>>> mask_a = (a==999)
>>> a_masked = numpy.ma.masked_array(a, mask = mask_a)
>>> print a_masked
[[[2 5 --]
[0 12 1]]
[[-- 8 7]
[7 11 6]]]
3) I replace the values <= 5 with zeros:
>>> a_masked[a_masked<=5]=0
>>> print a_masked
[[[0 0 --]
[0 12 0]]
[[-- 8 7]
[7 11 6]]]
4) I want to replace now the values > 5 with ones:
>>> a_masked[a_masked>5]=1
>>> print a_masked
[[[0 0 1]
[0 1 0]]
[[1 1 1]
[1 1 1]]]
Why doesn't it take into account the values=999 which were already masked???
I want to get the following result:
[[[0 0 --]
[0 1 0]]
[[-- 1 1]
[1 1 1]]]
Answer: How about simply:
>>> a[a != 999] = (a[a != 999] > 5)
>>> a
array([[[ 0, 0, 999],
[ 0, 1, 0]],
[[999, 1, 1],
[ 1, 1, 1]]])
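If you want the mask preserved (so the 999 cells stay `--`), one approach is to compare on the masked array itself: comparisons propagate the mask. The likely reason step 4 overwrote the masked cells is that the boolean index was effectively built from the underlying data, where 999 > 5 is true. A sketch:

```python
import numpy as np

a = np.array([[[2, 5, 999], [0, 12, 1]],
              [[999, 8, 7], [7, 11, 6]]])
am = np.ma.masked_equal(a, 999)   # mask the invalid values
result = (am > 5).astype(int)     # the comparison carries the mask along
print(result)
```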
|
Need help on Socket Programming in Python (beginner)
Question: I am learning socket programming and python. I need to create a client that
sends a command to a server (list or get ). The server then validates the
command. My client program can display "list" or "get" , but it doesn't show
the error message when I enter other things.
Also, it only works one time; when I enter a different command after receiving
a reply from the server, it gives me the following error:
    Traceback (most recent call last):
      File "fclient.py", line 49, in <module>
        client_socket.send(command)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 170, in _dummy
        raise error(EBADF, 'Bad file descriptor')
I'm totally lost. What is the best way to get a command line input in the
client program and send it to the server and ask the server to validate the
command line parameter? Can someone take a look and point me to the right
direction? Your help is greatly appreciated.
Client.py
import socket #for sockets
import sys #for exit
command = ' '
socksize = 1024
#return a socket descriptor which can be used in other socket related functions
#properties: address family: AF_INET (IP v4)
#properties: type: SOCK_STREAM (connection oriented TCP protocol)
try:
#create an AF_INET, STREAM socket (TCP)
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except socket.error, msg: #error handling
print 'Failed to create socket. Error code: ' + str(msg[0]) + ', Error message: ' + msg[1]
sys.exit();
print 'Socket Created'
#Get the IP address of the remote host/url
#connect to IP on a certain 'port' using the connect
#host = 'flip3.engr.oregonstate.edu'
#port = 30021
#host = 'www.google.com'
#port = 80
host = '' #symbolic name meaning the local host
port = 8888 #arbitrary non-privileged port
try:
remote_ip = socket.gethostbyname(host)
except socket.gaierror:
#could not resolve
print 'Hostname could not be resolved. Existing'
sys.exit()
print 'IP address of ' + host + ' is ' + remote_ip
#Connect to remote server
client_socket.connect((remote_ip, port))
print 'Socket Connected to ' + host + ' on ip ' + remote_ip
#Send some data to remote server
while True:
print 'Enter a command: list or get <filename>'
command = raw_input()
if command.strip() == 'quit':
break
client_socket.send(command)
data = client_socket.recv(socksize)
print data
        #Close the socket
        client_socket.close()
Server.py
import socket
import sys
from thread import *
#HOST = 'flip3.engr.oregonstate.edu' #symbolic name meaning all available interfaces
#PORT = 30021
HOST = ''
PORT = 8888
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
try:
server_socket.bind((HOST, PORT)) #bind to a address(and port)
except socket.error, msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
print 'Socket bind complete'
#put the socket in listening mode
server_socket.listen(10) #maximum 10 connections
print 'TCP Server Waiting for client on port 30021'
#wait to accept a connection - blocking call
client, addr = server_socket.accept()
#display client information
print 'Connected with ' + addr[0] + ':' + str(addr[1])
#keep talking with the client
while 1:
#Receiving from client
data = client.recv(1024)
        if (data == 'list' or 'get'):
reply = 'receive: ' + data
client.send(reply)
break;
else:
reply = 'wrong command'
client.send(reply)
client.close()
Answer: The first problem is that you close your client socket _inside_ your loop,
closing it after receiving the reply to the first command. Move closing the
socket out of the loop instead and introduce an exit condition to exit the
loop:
#Connect to remote server
client_socket.connect((remote_ip, port))
print 'Socket Connected to ' + host + ' on ip ' + remote_ip
#Send some data to remote server
while True:
print 'Enter a command: list or get <filename>'
command = raw_input()
if command.strip() == 'quit':
break
client_socket.send(command)
data = client_socket.recv(socksize)
print data
# Close the socket
client_socket.close()
You're doing something similar on the server side: You try to reopen the
listening socket every iteration. Move this part out of the loop as well:
#wait to accept a connection - blocking call
client, addr = server_socket.accept()
#display client information
print 'Connected with ' + addr[0] + ':' + str(addr[1])
The reason your command parsing doesn't work is because of this statement:
if (data == 'list' or 'get'):
What you meant to write here is
if (data == 'list' or data == 'get'):
The first expression will evaluate like this:
* Is `data == 'list'`?
* If yes, that sub-expression evaluates to `True` and is therefore what `a or b` will return.
* If not, the second operand to `or` is chosen, which is the string `'get'`.
* The result of that `or` expression will now be implicitely cast to boolean by the `if` statement:
* Case 1: `True` is already `True`
* Case 2: A non-empty string cast to boolean evaluates to `True` as well
So your `if` statement will always evaluate to `True`, that's why your command
parsing didn't work.
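The evaluation above is easy to verify interactively:

```python
data = "bogus"
# `or` returns its second operand ('get', a truthy non-empty string)
# whenever the comparison on the left is False:
print(data == 'list' or 'get')    # -> get
# A membership test behaves as intended:
print(data in ('list', 'get'))    # -> False
print('list' in ('list', 'get'))  # -> True
```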
To write
if (data == 'list' or data == 'get'):
in a nicer form, I would suggest using

    if (data.strip() in ('list', 'get')):
Lastly, you should wrap your socket code in a `try..finally` to make sure the
sockets are always closed even if exceptions like a `KeyboardInterrupt`
happen:
try:
#keep talking with the client
while 1:
#Receiving from client
data = client.recv(1024)
            if (data.strip() in ('list', 'get')):
reply = 'receive: ' + data
client.send(reply)
else:
reply = 'wrong command'
client.send(reply)
except KeyboardInterrupt:
print "Exiting gracefully."
finally:
server_socket.close()
(The [`finally`](http://docs.python.org/2/tutorial/errors.html#defining-clean-
up-actions) clause gets executed under all circumstances, whether handled or
unhandled exceptions happened or not.)
|
if-else statement and code exit
Question: Basically I'm quite new to Python, so I decided to make a simple calculator. I
have done the coding for the calculator and it all works, so I am happy.
However, I would like an if-else statement to check whether the user would like to
continue with another calculation. Below are the top and
bottom parts of my code; I would like to know how to get it so that after the
'else' part of the code, it just runs the rest of the code.
import os
done = ("")
if done == True:
os._exit(0)
else:
print ("---CALCULATOR---")
...
done = str(input("Would you like to do another calculation? (Y/N) "))
if done == "N" or "n":
done = True
if done == "Y" or "y":
done = False
Any help would be appreciated.
Answer: You'll want something like this...
import os
done = False
while not done:
print ("---CALCULATOR---")
...
        answer = str(input("Would you like to do another calculation? (Y/N) "))
        # `done == "N" or "n"` would always be truthy ("n" is a non-empty
        # string), so compare explicitly instead:
        done = answer.strip().upper() == "N"
|
How can I deal a hand in Python

Question: I'm trying to deal a hand in Python with random.shuffle and it keeps giving me
an error. Can someone help me figure out what's wrong with it?
# [import statements]
import q1fun
# [constants]
# [rest of program code]
number = input("howmany cards to you want dealt?")
a = q1fun.deal(number)
print (a)
# [import statements]
import random
# [constants]
def deal(x):
y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
a = random.shuffle(y(x))
return(a)
howmany cards to you want dealt?5

    Traceback (most recent call last):
      File "C:\Users\Eddie\workspace\cp104\durb8250_a16\src\q1.py", line 18, in <module>
        a = q1fun.deal(number)
      File "C:\Users\Eddie\workspace\cp104\durb8250_a16\src\q1fun.py", line 29, in deal
        a = random.shuffle(y(x))
    TypeError: 'list' object is not callable
Answer: `random.shuffle(y)` shuffles the list `y` inplace and returns `None`. So
def deal(n):
"Return a hand of n cards"
y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12, 13]
random.shuffle(y)
return y[:n]
might be closer to what you want.
Or omit `random.shuffle(y)` and just use `random.sample`:
return random.sample(y, n)
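Putting it together as a runnable sketch: `random.sample` draws without replacement, and on Python 3 you would also need `int(input(...))`, since `input` returns a string there.

```python
import random

def deal(n):
    """Return a hand of n cards drawn from a 52-card deck of ranks 1-13."""
    deck = list(range(1, 14)) * 4     # four suits of ranks 1..13
    return random.sample(deck, n)     # shuffle-and-take in one step

hand = deal(5)
print(hand)
```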
|
Square root inside a square root python
Question: I was wondering how to write a code showing the square root inside a square
root. This is what I have so far:
number=float(input("Please enter a number: "))
square = 2*number**(1/2)**(1/3)
print(square)
But that's not right as I'm getting a different number from the calculator.
Answer: Import `math` and use `math.sqrt(math.sqrt(number))`
import math
number=float(input("Please enter a number: "))
square = math.sqrt(math.sqrt(number))
print(square)
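Since `sqrt(sqrt(x))` is the fourth root of x, exponentiation gives the same result without `math`. (The original expression also goes wrong partly because `1/2` is integer division, equal to 0, on Python 2.)

```python
import math

number = 16.0
print(math.sqrt(math.sqrt(number)))   # -> 2.0
print(number ** 0.25)                 # fourth root, same value
```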
|
Python using a different pip version than the one in PYTHONPATH
Question: I installed Python 2.7 on my shared host (it already had Python 2.6, but they
didn't want to upgrade it or install any packages) and pip. I configured
PYTHONPATH and PATH in .bashrc. I don't have root access to this machine.

When I check sys.path with my Python installation, it does not reference
this shared location anywhere.
I checked commands:
which python
which pip
output:
> /home/mgx/python27/bin/pip
and both point to my installation, but using
pip --version
output:
> pip 1.1 from /usr/local/lib/python2.6/dist-packages/pip-1.1-py2.6.egg
> (python 2.6)
I can see that it is using the version from /usr/, not mine. How can I force it to use
my pip version? When I install with my pip version by addressing it
directly, everything works, but the short pip command uses the wrong one. Also
strange is that the 'which' command shows the good one...

Edit: output of cat $(which pip) and outputs of previous commands
#!/home/mgx/python27/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.4.1','console_scripts','pip'
__requires__ = 'pip==1.4.1'
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('pip==1.4.1', 'console_scripts', 'pip')()
)
Answer: I think you may change your `PATH` variable so that your
`/home/mgx/python27/bin` is searched first. Add the following line to your
`.bashrc` and source it afterward.
PATH=/home/mgx/python27/bin:$PATH
Then
source .bashrc
Or you could just alias pip in your `.bashrc`
alias pip='/home/mgx/python27/bin/pip'
I think this would fix it.
|
Prepend sys.path from shell?
Question: If you want to use a given `python` binary, you can prepend to `PATH`.

If you want to use a given `libpython`, you can prepend to `LD_LIBRARY_PATH`.

Now suppose you want to use a given package directory. I tried `PYTHONPATH` \--
but it doesn't work:
$ python -c 'import sys; print sys.path[:2]'
['', '/home/boris/.local/lib/python2.7/site-packages']
$ PYTHONPATH="/home/boris/test/lib/python2.7/site-packages" python -c 'import sys; print sys.path[:2]'
['', '/home/boris/.local/lib/python2.7/site-packages']
In order to do that you'd have to either do it in the script:
import sys
sys.path.insert(0, "/home/boris/test/lib/python2.7/site-packages")
or make the special `usercustomize.py` in default `site-packages` with the
same contents.
Is there a way to modify `sys.path` from the shell?
The behavior I'm looking for is:
$ python -c 'import sys; print sys.path[:2]'
['', '/home/boris/.local/lib/python2.7/site-packages']
$ your-shell-command
$ python -c 'import sys; print sys.path[:2]'
['/home/boris/test/lib/python2.7/site-packages', '']
**Edit** :
I can see no harm if the dir will be second:
['', '/home/boris/test/lib/python2.7/site-packages']
but
~: PYTHONPATH="/home/farhad" python -c 'import sys; print sys.path[:2]'
['', '/home/boris/.local/lib/python2.7/site-packages']
doesn't work for me. Probably because I have `usercustomize.py` (see I also
want `~/.local` libs before system-wide ones).
Answer: For me it works:
$PYTHONPATH="/home/farhad" python -c 'import sys; print sys.path[:2]'
['', '/home/farhad']
$
|
Storing value from a parsed ping
Question: I'm working on some code that performs a ping operation from python and
extracts only the latency by using awk. This is currently what I have:
from os import system
l = system("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'")
print l
The `system()` call works fine, but I get an output in terminal rather than
the value storing into l. Basically, an example output I'd get from this
particular block of code would be
90.3
0
Why does this happen, and how would I go about actually storing that value
into l? This is part of a larger thing I'm working on, so preferably I'd like
to keep it in native python.
Answer: Use
[`subprocess.check_output`](https://docs.python.org/2/library/subprocess.html#subprocess.check_output)
if you want to store the output in a variable:
from subprocess import check_output
l = check_output("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'", shell=True)
print l
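The stray `0` in the question's output is the exit status that `system()` returns. If you'd rather keep everything in native Python, as the question prefers, you can drop awk entirely and parse ping's output yourself; the sample line below assumes the common Linux ping output format:

```python
import subprocess

def parse_latency(ping_output):
    """Pull the `time=NN.N` value (in ms) out of ping's output, or None."""
    for token in ping_output.split():
        if token.startswith("time="):
            return float(token[len("time="):])
    return None

def ping_latency(host):
    """Ping once and return the latency in ms (None if ping fails)."""
    try:
        out = subprocess.check_output(["ping", "-c", "1", host])
    except (subprocess.CalledProcessError, OSError):
        return None
    return parse_latency(out.decode())

sample = "64 bytes from 93.184.216.34: icmp_seq=1 ttl=56 time=90.3 ms"
print(parse_latency(sample))  # -> 90.3
```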
Related: [Extra zero after executing a python
script](http://stackoverflow.com/questions/19914292/extra-zero-after-
executing-a-python-script)
|
Exception handling in Tkinter
Question: So, I wrote a small Tkinter program in Python. The program executes fine, but
it's easy for an exception to occur if a non-digit character is entered
into a field.

So I tried to remedy it, but my remedy fails. Here is the issue:
try:
self.laborOne = float(self.trimmerOne_Entry.get()) * 8
self.laborTwo = float(self.trimmerTwo_Entry.get()) * 8
self.laborThree = float(self.operator_Entry.get()) * 8
self.addedThem = self.laborOne + self.laborTwo + self.laborThree
self.laborTotal.set(str(self.addedThem))
self.perUnitLabor = self.addedThem / 125
self.laborUnit.set(str(self.perUnitLabor))
except ValueError:
tkinter.messagebox.showinfo('Error:', 'One or more of your values was not numeric. Please fix them.')
self.performIt()
self.performIt()
At first I tried just showing the messagebox in the error handling, but that
closes the program when you click OK. So I tried recursion, calling the
function from itself. When this happens, the dialog box just stays there.
Because self.performIt doesn't need an arg passed in, I passed (self) into it
just to try it. This allows me to fix my values in the boxes, which is what I
am looking for, but causes a different exception.

Anyway, how can I handle the ValueError exception without the program
terminating, so that a user can enter corrected data?
Complete code
import tkinter
import tkinter.messagebox
class MyGui:
def __init__(self):
#create the main window widget
self.main_window = tkinter.Tk()
#create 6 frames:
#one for each trimmers/operators pay,
#one for buttons
#one for outputs
self.trimmerOne = tkinter.Frame(self.main_window)
self.trimmerTwo = tkinter.Frame(self.main_window)
self.operator = tkinter.Frame(self.main_window)
self.rotaryLabor = tkinter.Frame(self.main_window)
self.rotaryLaborUnit = tkinter.Frame(self.main_window)
self.buttonFrame = tkinter.Frame(self.main_window)
#create and pack widgets for Trimmer 1
self.trimmerOne_Label = tkinter.Label(self.trimmerOne, text='Enter the payrate for trimmer 1: ')
self.trimmerOne_Entry = tkinter.Entry(self.trimmerOne, width=10)
self.trimmerOne_Label.pack(side='left')
self.trimmerOne_Entry.pack(side='left')
#create and pack widgets for Trimmer 2
self.trimmerTwo_Label = tkinter.Label(self.trimmerTwo, text='Enter the payrate for trimmer 2: ')
self.trimmerTwo_Entry = tkinter.Entry(self.trimmerTwo, width=10)
self.trimmerTwo_Label.pack(side='left')
self.trimmerTwo_Entry.pack(side='left')
#create and pack widgets for Operator
self.operator_Label = tkinter.Label(self.operator, text='Enter the payrate for operator: ')
self.operator_Entry = tkinter.Entry(self.operator, width=10)
self.operator_Label.pack(side='left')
self.operator_Entry.pack(side='left')
#create and pack widgets for rotaryLabor
self.rotaryLabor_Label = tkinter.Label(self.rotaryLabor, text="This is what it cost's in trimmer labor: ")
self.laborTotal = tkinter.StringVar() #to update with laborTotal_Label
self.laborTotal_Label = tkinter.Label(self.rotaryLabor, textvariable=self.laborTotal)
self.rotaryLabor_Label.pack(side='left')
self.laborTotal_Label.pack(side='left')
#create and pack widgets for labor Unit
self.rotaryLaborUnit_Label = tkinter.Label(self.rotaryLaborUnit, text="This is the cost per part in trim labor: ")
self.laborUnit = tkinter.StringVar() #to update with laborTotal_Label
self.laborUnit_Label = tkinter.Label(self.rotaryLaborUnit, textvariable=self.laborUnit)
self.rotaryLaborUnit_Label.pack(side='left')
self.laborUnit_Label.pack(side='left')
#create and pack the button widgets
self.calcButton = tkinter.Button(self.buttonFrame, text = "Calculate", command=self.performIt)
self.saveButton = tkinter.Button(self.buttonFrame, text = "Save", command=self.saveIt)
self.quitButton = tkinter.Button(self.buttonFrame, text = "Quit", command=self.main_window.destroy)
self.calcButton.pack(side="left")
self.saveButton.pack(side="left")
self.quitButton.pack(side="left")
#pack the frames
self.trimmerOne.pack()
self.trimmerTwo.pack()
self.operator.pack()
self.rotaryLabor.pack()
self.rotaryLaborUnit.pack()
self.buttonFrame.pack()
tkinter.mainloop()
#define the function that will do the work:
def performIt(self):
try:
self.laborOne = float(self.trimmerOne_Entry.get()) * 8
self.laborTwo = float(self.trimmerTwo_Entry.get()) * 8
self.laborThree = float(self.operator_Entry.get()) * 8
self.addedThem = self.laborOne + self.laborTwo + self.laborThree
self.laborTotal.set(str(self.addedThem))
self.perUnitLabor = self.addedThem / 125
self.laborUnit.set(str(self.perUnitLabor))
except ValueError:
tkinter.messagebox.showinfo('Error:', 'One or more of your values was not numeric. Please fix them.')
self.performIt()
self.performIt()
def saveIt(self):
self.laborOne = float(self.trimmerOne_Entry.get()) * 8
self.laborTwo = float(self.trimmerTwo_Entry.get()) * 8
self.laborThree = float(self.operator_Entry.get()) * 8
self.addedThem = self.laborOne + self.laborTwo + self.laborThree
self.laborTotal.set(str(self.addedThem))
self.perUnitLabor = self.addedThem / 125
self.laborUnit.set(str(self.perUnitLabor))
file = open("log.txt", 'w')
file.write("Trimmer One gets paid: " + str(self.laborOne))
file.write("\n___________________________________________\n")
file.write("Trimmer Two gets paid: " + str(self.laborTwo))
file.write("\n___________________________________________\n")
file.write("Operator gets paid: " + str(self.laborThree))
file.write("\n___________________________________________\n")
file.write("The sum of thier daily labor is: " + str(self.addedThem))
file.write("\n___________________________________________\n")
file.write("If production is reached, the labor cost is" + str(self.laborOne) + "per unit")
file.write("\n___________________________________________\n")
file.close()
testRun = MyGui()
Answer: That's not how you catch errors. Do it like this:
except ValueError:
tkinter.messagebox.showinfo('Error:', 'One or more of your values was not numeric. Please fix them.')
You don't need to call the function again.
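Outside of the GUI, the pattern the answer describes (catch the bad input, report it, and simply return so the user can correct the entry, rather than calling the function again) can be sketched with a plain helper; `parse_hours` is a hypothetical name:

```python
def parse_hours(text):
    """Return the daily labor cost for one entry, or None if it is not numeric."""
    try:
        return float(text) * 8
    except ValueError:
        # In the GUI this is where the messagebox goes; just return
        # instead of recursively calling the function again.
        return None
```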
|
How do I use Twisted (or Autobahn) to connect to a socket.io server?
Question: I am trying to figure out a way to connect to a socket.io (node.js) server
with a Python Twisted client. The server is a chat server which I didn't
write, so I have no control over it. I tried a few things, mainly TCP
connections, but I figured that I'll need to use the Websockets interface to
communicate successfully.
Just to test out, I used the code from socket.io tutorial,
<http://socket.io/#how-to-use> for the server.
var app = require('http').createServer(handler)
, io = require('socket.io').listen(app)
, fs = require('fs')
app.listen(8080);
function handler (req, res) {
fs.readFile(__dirname + '/index.html',
function (err, data) {
if (err) {
res.writeHead(500);
return res.end('Error loading index.html');
}
res.writeHead(200);
res.end(data);
});
}
io.sockets.on('connection', function (socket) {
socket.emit('news', { hello: 'world' });
socket.on('my other event', function (data) {
console.log(data);
});
});
For the client, I used the example code from this tutorial
<http://autobahn.ws/python/tutorials/echo/>: (I know the callbacks don't
match, but I just want to see if it will connect first, which it doesn't).
from twisted.internet import reactor
from autobahn.websocket import WebSocketClientFactory, \
WebSocketClientProtocol, \
connectWS
class EchoClientProtocol(WebSocketClientProtocol):
def sendHello(self):
self.sendMessage("Hello, world!")
def onOpen(self):
self.sendHello()
def onMessage(self, msg, binary):
print "Got echo: " + msg
reactor.callLater(1, self.sendHello)
if __name__ == '__main__':
factory = WebSocketClientFactory("ws://localhost:8080", debug = False)
factory.protocol = EchoClientProtocol
connectWS(factory)
reactor.run()
This is just to see if it will connect. The problem is, the socket.io server
says: `destroying non-socket.io upgrade`, so I'm guessing the client isn't
sending a proper UPGRADE header, but I'm not sure.
Am I missing something, or are Websocket implementations different across
libraries, and that I'll need to do some digging in order for them to
communicate? I had a feeling it was supposed to be quite easy. My question is,
what do I change on the client so it will connect (complete handshake
successfully and start accepting/sending frames)?
Finally, I would like to use Twisted, but I'm open to other suggestions. I
understand the most straightforward would be making a socket.io client, but I
only know Python.
EDIT:
After turning on logging, it shows this:
2013-11-14 22:11:29-0800 [-] Starting factory <autobahn.websocket.WebSocketClientFactory instance at 0xb6812080>
2013-11-14 22:11:30-0800 [Uninitialized]
[('debug', True, 'WebSocketClientFactory'),
('debugCodePaths', False, 'WebSocketClientFactory'),
('logOctets', True, 'WebSocketClientFactory'),
('logFrames', True, 'WebSocketClientFactory'),
('trackTimings', False, 'WebSocketClientFactory'),
('allowHixie76', False, 'WebSocketClientFactory'),
('utf8validateIncoming', True, 'WebSocketClientFactory'),
('applyMask', True, 'WebSocketClientFactory'),
('maxFramePayloadSize', 0, 'WebSocketClientFactory'),
('maxMessagePayloadSize', 0, 'WebSocketClientFactory'),
('autoFragmentSize', 0, 'WebSocketClientFactory'),
('failByDrop', True, 'WebSocketClientFactory'),
('echoCloseCodeReason', False, 'WebSocketClientFactory'),
('openHandshakeTimeout', 5, 'WebSocketClientFactory'),
('closeHandshakeTimeout', 1, 'WebSocketClientFactory'),
('tcpNoDelay', True, 'WebSocketClientFactory'),
('version', 18, 'WebSocketClientFactory'),
('acceptMaskedServerFrames', False, 'WebSocketClientFactory'),
('maskClientFrames', True, 'WebSocketClientFactory'),
('serverConnectionDropTimeout', 1, 'WebSocketClientFactory'),
('perMessageCompressionOffers', [], 'WebSocketClientFactory'),
('perMessageCompressionAccept',
<function <lambda> at 0x177ba30>,
'WebSocketClientFactory')]
2013-11-14 22:11:30-0800 [Uninitialized] connection to 127.0.0.1:8080 established
2013-11-14 22:11:30-0800 [Uninitialized] GET / HTTP/1.1
User-Agent: AutobahnPython/0.6.4
Host: localhost:8080
Upgrade: WebSocket
Connection: Upgrade
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TOy2OL5T6VwzaiX93cesPw==
Sec-WebSocket-Version: 13
2013-11-14 22:11:30-0800 [Uninitialized] TX Octets to 127.0.0.1:8080 : sync = False, octets = 474554202f20485454502f312e310d0a557365722d4167656e743a204175746f6261686e5079
74686f6e2f302e362e340d0a486f73743a206c6f63616c686f73743a383038300d0a557067726164653a20576562536f636b65740d0a436f6e6e656374696f6e3a20557067726164650d0a507261676d613a206e6f
2d63616368650d0a43616368652d436f6e74726f6c3a206e6f2d63616368650d0a5365632d576562536f636b65742d4b65793a20544f79324f4c35543656777a616958393363657350773d3d0d0a5365632d576562
536f636b65742d56657273696f6e3a2031330d0a0d0a
2013-11-14 22:11:30-0800 [EchoClientProtocol,client] connection to 127.0.0.1:8080 lost
2013-11-14 22:11:30-0800 [EchoClientProtocol,client] Stopping factory <autobahn.websocket.WebSocketClientFactory instance at 0xb6812080>
I take this as socket.io not wanting to let non-socket.io connections connect,
which is kind of odd. If anyone knows a workaround or any ideas please share
them.
Answer: Websocket is just one protocol used by socket.io. As per socket.io
specifications <https://github.com/LearnBoost/socket.io-spec>, I need to make
a POST request to the server, which will return a session ID. Then, I can use
that to build a URL and make a WebSocket connection to the server with
Autobahn.
Do a POST to:
`'http://localhost:8080/socket.io/1/'` The response body will include a unique
session ID.
`url = 'ws://localhost:8080/socket.io/1/websocket/' + sid`
Use above to connect to the server with Autobahn.
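As a sketch, the handshake response body has the form `sid:heartbeat:timeout:transports`, so extracting the session ID and building the WebSocket URL looks like the following (the session id here is made up for illustration, and `localhost:8080` matches the server above):

```python
# Sample body as returned by POST http://localhost:8080/socket.io/1/
# (hypothetical session id)
handshake = 'abc123:60:60:websocket,xhr-polling'

sid = handshake.split(':')[0]
url = 'ws://localhost:8080/socket.io/1/websocket/' + sid
```

That `url` is then what you hand to `WebSocketClientFactory` instead of the bare `ws://localhost:8080`.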
|
Python multiprocessing map_async ready() and _number_left not working as expected
Question: This is a little complicated, and I am sure this is a newbie error, but I
cannot for the life of me figure out where to even look.
Trying to do a multiprocessing map_async to process a large number of files.
Essentially, the code gets a list of files, looks in each file for a match on
a MAC address and writes out a line to another file if it matches.
nodb is my library....I have not included everything here yet (it's kind of
convoluted). I am hoping that someone can point me where to even look for
debugging this.
Here is the problem: the code works perfectly on anything under 60,000 files.
However, when I point it at a directory with 595,200 files, the little "while
true" loop that checks to see if it is done (using _number_left) stops
working....processing appears to continue but the _number_left does not
decrease and the ready() function returns TRUE...which it isn't.
And it stops after processing 62111 or 62112 files every single time I run it.
I added the little "dump" function thinking my queue was filling up.
Not sure what else to tell you...am I missing something? (probably) Please let
me know what more I can tell you to figure this out. I really have no idea
what is relevant....
Code is:
import nodb_v09d as nodb
import netaddr
import sys
import collections
from multiprocessing import Pool, Queue
import itertools
import time
# Handle CLI arguments
#
nmultip = 1
args = sys.argv[1:]
nmultip = nodb.parseArgs(args)
# this function just gets the file list in the directory
todoPif = nodb.getFileList('/data/david/data/2012/05')
filterfields = { 10:set([int(netaddr.EUI('00-00-0a-0e-c9-be')),\
int(netaddr.EUI('00:15:ce:de:78:f3')),\
int(netaddr.EUI('3c-75-4a-ea-15-01')),\
int(netaddr.EUI('00-24-d1-1e-e9-be'))])
}
ff=collections.OrderedDict(sorted(filterfields.items()))
resultfields = [28,29,30,33]
rf=resultfields.sort
cutoff = 40
todocnt = len(todoPif)
outQ = Queue()
hdrQ = Queue()
# output file
gpif = '/media/sf_samplePifs/test2.gpif'
append = 0
if __name__ == '__main__':
# this is a little trick I picked up off the internet for passing multiple queues. works ok
todopool = Pool(None,nodb.poolQueueInit,[outQ,hdrQ])
# itertools used to create an arg that contains a constant (ff) for all calls)
r = todopool.map_async(nodb.deRefCall,itertools.izip(todoPif, itertools.repeat(ff)),1)
while (True):
nodb.logging.info('number left: ' + str(r._number_left) + '\nready? ' + str(r.ready()))
nodb.logging.info('queue size: ' + str(outQ.qsize()))
if (r._number_left == 0): break
if (outQ.qsize() >= cutoff):
nodb.dumpQueueToGpif(gpif, hdrQ, outQ, append, cutoff)
if (append == 0):
append = 1
sys.stderr.write('\rPIF Files DONE: ' + str(todocnt-r._number_left) + '/' + str(todocnt))
print '\n'
time.sleep(0.2)
r.wait()
sys.stderr.write('\rPIF Files DONE: ' + str(todocnt) + '/' + str(todocnt) + '\n')
# dump remainder to file
nodb.dumpQueueToGpif(gpif, hdrQ, outQ, append,outQ.qsize())
**MAJOR ADDITION:**
At the request of another user, I simplified the code. No queues, no external
private libraries, etc:
import sys
import os
import time
from multiprocessing import Pool
def doPifFile(pifFile):
#readPif = call(['ls','-l',' > /tmp/out'])
cmd = 'ipdr_dump ' + pifFile + ' | grep "," | wc -l > /tmp/dump'
readPif = os.system(cmd)
return readPif
def getFileList(directory):
flist = list()
for root, dirs, files in os.walk(directory):
for piffile in files:
if piffile.endswith('.pif'):
flist.append(os.path.abspath(os.path.join(root,piffile)))
return flist
todoPif = getFileList('/data/david/data/2012/05')
todocnt = len(todoPif)
print '# of files to process: ' + str(todocnt)
if __name__ == '__main__':
todopool = Pool()
r = todopool.map_async(doPifFile,todoPif,1)
while (True):
print 'number left: ' + str(r._number_left) + '\nready? ' + str(r.ready())
#if (r.ready()): break
if (r._number_left == 0): break
sys.stderr.write('\rPIF Files DONE: ' + str(todocnt-r._number_left) + '/' + str(todocnt))
print '\n'
time.sleep(0.2)
sys.stderr.write('\rPIF Files DONE: ' + str(todocnt) + '/' + str(todocnt) + '\n')
When I ran it, I got something VERY interesting that did NOT show up on the
runs with the more complex code but that happened at exactly the same spot,
although it claims it happened in my dump program:
number left: 533100
ready? False
PIF Files DONE: 62100/595200
number left: 533090
ready? False
PIF Files DONE: 62110/595200
*** glibc detected *** ipdr_dump: corrupted double-linked list: 0x0000000001a58370 ***
======= Backtrace: =========
/lib64/libc.so.6[0x36f5c76126]
/lib64/libc.so.6[0x36f5c78eb4]
/lib64/libc.so.6(fclose+0x14d)[0x36f5c6678d]
/lib64/libz.so.1[0x36f6803021]
ipdr_dump[0x405c0b]
ipdr_dump[0x40546e]
ipdr_dump[0x401c2a]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x36f5c1ecdd]
ipdr_dump[0x4016b9]
======= Memory map: ========
00400000-0040e000 r-xp 00000000 08:02 2364135 /home/david/ipdr_dump
0060d000-0060e000 rw-p 0000d000 08:02 2364135 /home/david/ipdr_dump
01a54000-01a75000 rw-p 00000000 00:00 0 [heap]
32e0600000-32e0604000 r-xp 00000000 08:02 3932181 /lib64/libuuid.so.1.3.0
32e0604000-32e0803000 ---p 00004000 08:02 3932181 /lib64/libuuid.so.1.3.0
32e0803000-32e0804000 rw-p 00003000 08:02 3932181 /lib64/libuuid.so.1.3.0
36f5800000-36f5820000 r-xp 00000000 08:02 3932309 /lib64/ld-2.12.so
36f5a1f000-36f5a20000 r--p 0001f000 08:02 3932309 /lib64/ld-2.12.so
36f5a20000-36f5a21000 rw-p 00020000 08:02 3932309 /lib64/ld-2.12.so
36f5a21000-36f5a22000 rw-p 00000000 00:00 0
36f5c00000-36f5d8a000 r-xp 00000000 08:02 3932315 /lib64/libc-2.12.so
36f5d8a000-36f5f89000 ---p 0018a000 08:02 3932315 /lib64/libc-2.12.so
36f5f89000-36f5f8d000 r--p 00189000 08:02 3932315 /lib64/libc-2.12.so
36f5f8d000-36f5f8e000 rw-p 0018d000 08:02 3932315 /lib64/libc-2.12.so
36f5f8e000-36f5f93000 rw-p 00000000 00:00 0
36f6000000-36f6002000 r-xp 00000000 08:02 3932566 /lib64/libdl-2.12.so
36f6002000-36f6202000 ---p 00002000 08:02 3932566 /lib64/libdl-2.12.so
36f6202000-36f6203000 r--p 00002000 08:02 3932566 /lib64/libdl-2.12.so
36f6203000-36f6204000 rw-p 00003000 08:02 3932566 /lib64/libdl-2.12.so
36f6400000-36f6417000 r-xp 00000000 08:02 3932564 /lib64/libpthread-2.12.so
36f6417000-36f6617000 ---p 00017000 08:02 3932564 /lib64/libpthread-2.12.so
36f6617000-36f6618000 r--p 00017000 08:02 3932564 /lib64/libpthread-2.12.so
36f6618000-36f6619000 rw-p 00018000 08:02 3932564 /lib64/libpthread-2.12.so
36f6619000-36f661d000 rw-p 00000000 00:00 0
36f6800000-36f6815000 r-xp 00000000 08:02 3932563 /lib64/libz.so.1.2.3
36f6815000-36f6a14000 ---p 00015000 08:02 3932563 /lib64/libz.so.1.2.3
36f6a14000-36f6a15000 r--p 00014000 08:02 3932563 /lib64/libz.so.1.2.3
36f6a15000-36f6a16000 rw-p 00015000 08:02 3932563 /lib64/libz.so.1.2.3
36f6c00000-36f6c83000 r-xp 00000000 08:02 3932493 /lib64/libm-2.12.so
36f6c83000-36f6e82000 ---p 00083000 08:02 3932493 /lib64/libm-2.12.so
36f6e82000-36f6e83000 r--p 00082000 08:02 3932493 /lib64/libm-2.12.so
36f6e83000-36f6e84000 rw-p 00083000 08:02 3932493 /lib64/libm-2.12.so
36f7000000-36f7007000 r-xp 00000000 08:02 3935994 /lib64/librt-2.12.so
36f7007000-36f7206000 ---p 00007000 08:02 3935994 /lib64/librt-2.12.so
36f7206000-36f7207000 r--p 00006000 08:02 3935994 /lib64/librt-2.12.so
36f7207000-36f7208000 rw-p 00007000 08:02 3935994 /lib64/librt-2.12.so
36f7800000-36f781d000 r-xp 00000000 08:02 3932588 /lib64/libselinux.so.1
36f781d000-36f7a1c000 ---p 0001d000 08:02 3932588 /lib64/libselinux.so.1
36f7a1c000-36f7a1d000 r--p 0001c000 08:02 3932588 /lib64/libselinux.so.1
36f7a1d000-36f7a1e000 rw-p 0001d000 08:02 3932588 /lib64/libselinux.so.1
36f7a1e000-36f7a1f000 rw-p 00000000 00:00 0
36f7c00000-36f7c16000 r-xp 00000000 08:02 3932572 /lib64/libresolv-2.12.so
36f7c16000-36f7e16000 ---p 00016000 08:02 3932572 /lib64/libresolv-2.12.so
36f7e16000-36f7e17000 r--p 00016000 08:02 3932572 /lib64/libresolv-2.12.so
36f7e17000-36f7e18000 rw-p 00017000 08:02 3932572 /lib64/libresolv-2.12.so
36f7e18000-36f7e1a000 rw-p 00000000 00:00 0
36f8000000-36f800e000 r-xp 00000000 08:02 3935998 /lib64/liblber-2.4.so.2.5.6
36f800e000-36f820d000 ---p 0000e000 08:02 3935998 /lib64/liblber-2.4.so.2.5.6
36f820d000-36f820e000 r--p 0000d000 08:02 3935998 /lib64/liblber-2.4.so.2.5.6
36f820e000-36f820f000 rw-p 0000e000 08:02 3935998 /lib64/liblber-2.4.so.2.5.6
36f8800000-36f8849000 r-xp 00000000 08:02 3932243 /lib64/libldap-2.4.so.2.5.6
36f8849000-36f8a49000 ---p 00049000 08:02 3932243 /lib64/libldap-2.4.so.2.5.6
36f8a49000-36f8a4b000 r--p 00049000 08:02 3932243 /lib64/libldap-2.4.so.2.5.6
36f8a4b000-36f8a4d000 rw-p 0004b000 08:02 3932243 /lib64/libldap-2.4.so.2.5.6
36f8c00000-36f8c16000 r-xp 00000000 08:02 3936000 /lib64/libgcc_s-4.4.7-20120601.so.1
36f8c16000-36f8e15000 ---p 00016000 08:02 3936000 /lib64/libgcc_s-4.4.7-20120601.so.1
36f8e15000-36f8e16000 rw-p 00015000 08:02 3936000 /lib64/libgcc_s-4.4.7-20120601.so.1
36f9400000-36f9535000 r-xp 00000000 08:02 4206136 /usr/lib64/libnss3.so
36f9535000-36f9734000 ---p 00135000 08:02 4206136 /usr/lib64/libnss3.so
36f9734000-36f9739000 r--p 00134000 08:02 4206136 /usr/lib64/libnss3.so
36f9739000-36f973b000 rw-p 00139000 08:02 4206136 /usr/lib64/libnss3.so
36f973b000-36f973d000 rw-p 00000000 00:00 0
36f9800000-36f9825000 r-xp 00000000 08:02 4206135 /usr/lib64/libnssutil3.so
36f9825000-36f9a24000 ---p 00025000 08:02 4206135 /usr/lib64/libnssutil3.so
36f9a24000-36f9a2a000 r--p 00024000 08:02 4206135 /usr/lib64/libnssutil3.sonumber left: 533078
ready? False
PIF Files DONE: 62122/595200
number left: 533068
ready? False
PIF Files DONE: 62132/595200
number left: 533056
ready? False
PIF Files DONE: 62144/595200
What is odd is that it continued on, whereas the previous runs caused a
failure of the _number_left and misfire on "ready()" (although the processes
still ran in the background).
I have run the dump program manually on the 16 processor box I have, and they
run in parallel fine, never seen that glibc error before. I have to assume
that it is associated with the python setup....I just don't know where.
This may be too complex for a forum diagnosis. Any further ideas on where I
might look or how I might be able to seem what happened are welcome.
One more tidbit... I printed out pool._success. It changes to False at the magic
moment when _number_left stops moving.
INFO number left: 533167
ready? False
successful? True
INFO queue size: 424
PIF Files DONE: 62033/595200
INFO number left: 533117
ready? False
successful? True
INFO queue size: 424
PIF Files DONE: 62083/595200
INFO number left: 533087
ready? True
successful? False
INFO queue size: 424
PIF Files DONE: 62113/595200
INFO number left: 533087
ready? True
successful? False
INFO queue size: 424
PIF Files DONE: 62113/595200
Answer: You could try sys.stdout.flush().
|
how to install python3.3 completely and remove python2.7 on Ubuntu12.04?
Question: I have installed python3.3 with commands:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python3.3
It works, but Python 2.7, installed in Ubuntu by default, still exists. When I
type "python", it shows me the interactive shell of Python 2.7. I can use
Python 3.3 by typing "python3.3", but I can't import some libraries, such as
gtk and Qt, although they work in 2.7.
Now I want to remove Python 2.7. It shows me that this will free 247 MB, which
is beyond my expectation. If I do it, will any important libraries be removed
together with it?
How can I use the Qt library with Python 3.3 instead of 2.7?
Thank you for answering!
Answer: Completely removing Python 2.7 is not the best option, as it is the default
Python version for Ubuntu and you might end up breaking some Python-dependent
utilities and programs.
Consider using virtual environment manager for different libraries management.
I use [pyenv](https://github.com/yyuu/pyenv) for python versions management
and [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/)
for python packages management.
For example running
export PATH=/path/to/python3/bin:$PATH
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv test --no-site-packages
will make a clean virtual environment for you, where you can install any
needed packages, such as `Qt`.
|
Python: " AttributeError: Element instance has no attribute 'firstchild' "
Question: I've looked everywhere, but can't seem to find anything that answers my
problem. I'm fairly new to Python, so maybe I am not understanding something
correctly. The error I keep getting is "AttributeError: Element instance has
no attribute 'firstchild'"
# Imports
import urllib2
import re
from xml.dom import minidom
def main():
pass
if __name__ == '__main__':
main()
# Get RSS feed source
briefingRSS = minidom.parse(urllib2.urlopen('http://rss.briefing.com/Investor/RSS/UpgradesDowngrades.xml'))
# Find each Upgrade and Downgrade listed in XML file
channel = briefingRSS.getElementsByTagName("channel")[0]
items = channel.getElementsByTagName("item")
# Get info from each item
for item in items:
getTicker = item.getElementsByTagName("title")[0].firstchild.data
ticker = str(getTicker[1].split("<")[0])
print ticker
Edit: Alright, thank you for pointing out the C in firstChild. But it turns out
the program is spitting out one letter per line. I'm trying to capture a
ticker, which can be up to 5 characters long at times. How do I get it to give
me a full ticker?
Here is a snippet from the current XML for an item:
<image>
<url>http://rss.briefing.com/favicon.ico</url>
<title>Briefing.com - Upgrades Downgrades Calendar</title>
<link>
http://www.briefing.com/Investor/Public/Calendars/UpgradesDowngrades.htm
</link>
</image>
Answer: The `firstChild` property needs a capital letter 'C' in the middle.
The documentation isn't very clear, because it is written in terms of the DOM
standard and how to map the standard to Python, so it can help just to open up
the `minidom.py` source and see the methods and properties it defines and
uses.
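A self-contained sketch (using a made-up XML snippet in place of the live feed) showing `firstChild` with the capital C:

```python
from xml.dom import minidom

# Hypothetical XML standing in for one RSS <item>
doc = minidom.parseString('<item><title>AAPL upgraded</title></item>')
title = doc.getElementsByTagName('title')[0].firstChild.data
# title holds the whole text node, not a single character
```

Regarding the edit: `firstChild.data` already returns the full string. It is the `getTicker[1]` indexing in the question's code that picks out a single character; operate on the whole string (or a slice of it) to get the full ticker.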
|
renaming a list of pdf files with for loop
Question: I am trying to rename a list of PDF files by extracting the name from each file
using pyPdf. I tried to use a for loop to rename the files, but I always get an
error with code 32 saying that the file is being used by another process. I am
using Python 2.7. Here's my code:
import os, glob
from pyPdf import PdfFileWriter, PdfFileReader
# this function extracts the name of the file
def getName(filepath):
output = PdfFileWriter()
input = PdfFileReader(file(filepath, "rb"))
output.addPage(input.getPage(0))
outputStream = file(filepath + '.txt', 'w')
output.write(outputStream)
outputStream.close()
outText = open(filepath + '.txt', 'rb')
textString = outText.read()
outText.close()
nameStart = textString.find('default">')
nameEnd = textString.find('_SATB', nameStart)
nameEnd2 = textString.find('</rdf:li>', nameStart)
if nameStart:
testName = textString[nameStart+9:nameEnd]
if len(testName) <= 100:
name = testName + '.pdf'
else:
name = textString[nameStart+9:nameEnd2] + '.pdf'
return name
pdfFiles = glob.glob('*.pdf')
m = len(pdfFiles)
for each in pdfFiles:
newName = getName(each)
os.rename(each, newName)
Answer: You're not closing the input stream (the file) used by the pdf reader. Thus,
when you try to rename the file, it's still open.
So, instead of this:
input = PdfFileReader(file(filepath, "rb"))
Try this:
inputStream = file(filepath, "rb")
input = PdfFileReader(inputStream)
(... when done with this file...)
inputStream.close()
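Independent of pyPdf, this pattern is easiest to get right with a `with` block, which guarantees the handle is closed before the rename even if an exception occurs. A minimal sketch with a throwaway file standing in for one of the PDFs:

```python
import os
import tempfile

# Throwaway file standing in for one of the PDFs
directory = tempfile.mkdtemp()
path = os.path.join(directory, 'report.pdf')
with open(path, 'wb') as f:
    f.write(b'%PDF-1.4')
# The handle is closed here, so the rename cannot hit error 32
new_path = os.path.join(directory, 'renamed.pdf')
os.rename(path, new_path)
```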
|
Scrape data and write in single line for every iteration BS4 python
Question: I have the following code. It scrapes all the data, but my concern is to
write the data on a single line for each iteration.
Here is my code
import bs4 as bs
import urllib2
import re
page = urllib2.urlopen("http://www.codissia.com/member/members-directory/?mode=paging&Keyword=&Type=&pg=1")
content = page.read()
soup = bs.BeautifulSoup(content)
eachbox = soup.find_all('div', {'class':re.compile(r'members_box[12]')})
for eachuniversity in eachbox:
data = [re.sub('\s+', '', text).strip().encode('utf8') for text in eachuniversity.find_all(text=True) if text.strip()]
print(','.join(data))
**UPDATE**
I want the output to be like this (on a single line) for each iteration
Name:,Mr.Srinivasan.N,Designation:,Proprietor,CODISSIA - Designation:,(Past President, CODISSIA),Name of the Industry:,Arian Soap Manufacturing Co,Specification:,LIFE,Date of Admission:,19.12.1969, "Parijaat" 26/1Shanker Mutt Road, Basavana Gudi,Phone:,2313861
But I am getting as follows
Name:,Mr.Srinivasan.N,Designation:,Proprietor,CODISSIA - Designation:,(Past President, CODISSIA),Name of the Industry:,Arian Soap Manufacturing Co,Specification:,LIFE,Date of Admission:,19.12.1969
"Parijaat" 26/1Shanker Mutt Road, Basavana Gudi,Phone:,2313861
Answer: `eachbox` is either class `members_box1` or `members_box2`, so iterating over
`eachbox` will print every box contents on a separate line, when really you
want both on one line. One way to get around this would be like this:
box1s = soup.find_all('div', class_='members_box1')
box2s = soup.find_all('div', class_='members_box2')
for box1, box2 in zip(box1s, box2s):
data = [re.sub('\s+', '', text).strip().encode('utf8') for text in box1.find_all(text=True) + box2.find_all(text=True) if text.strip()]
print(','.join(data))
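The pairing is plain `zip`: it walks both result lists in lockstep so each `members_box1`/`members_box2` pair lands on one output line. A toy sketch with strings standing in for the extracted fields:

```python
# Made-up field strings standing in for the scraped box contents
box1s = ['Name:,Mr.A,Designation:,Proprietor', 'Name:,Mr.B,Designation:,Partner']
box2s = ['Phone:,1111111', 'Phone:,2222222']

# One combined line per member
lines = [','.join([b1, b2]) for b1, b2 in zip(box1s, box2s)]
```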
|
How to define a decimal class holding 1000 digits in python?
Question: I need a class holding 1000 decimal digits to calculate something like the
number pi in a series. Computation time is not important. How can I define `__add__` and
similar functions to do this? For example, I need a value that can hold this
number:
:))
This number using `decimal.Decimal` shows like this:
from decimal import Decimal as dc
>>> x=dc(3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113)
>>> x
Decimal('3.141592653589793115997963468544185161590576171875')
But I need a new class holding all the digits, in which I can use addition,
division, and similar operations, like 2+1. The pi number is just an example;
I don't actually need to calculate pi, I want to calculate extra-large decimal
numbers!
Answer: You have to set a context with 1000 decimal digits:
context = decimal.Context(prec=1000)
decimal.setcontext(context)
From now on computations will use 1000 digits precision.
Example:
>>> decimal.setcontext(decimal.Context(prec=1000))
>>> pi = decimal.Decimal('3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113')
>>> pi
Decimal('3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113')
>>> pi + 2
Decimal('5.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113')
Note that:
* You have to use strings to initialize the `Decimal` because if you use a `float` the interpreter will have to truncate it first. (also I believe only the most recent versions of `decimal` accept a `float` argument. In older versions you had to use `Decimal.from_float` instead).
* The decimal digits are preserved during calculations.
* * *
You can also use the context locally via the `localcontext` contextmanager:
context = decimal.Context(prec=1000)
with decimal.localcontext(context):
# here decimal uses 1000 digits for computations
pass
# here the default context is restored.
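For instance (a small sketch with 50 digits to keep the output short), a value computed inside the block keeps its precision after the default context is restored:

```python
import decimal

with decimal.localcontext(decimal.Context(prec=50)):
    x = decimal.Decimal(1) / decimal.Decimal(7)

# x carries 50 significant digits even outside the block
```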
|
Clean Python multiprocess termination dependant on an exit flag
Question: I am attempting to create a program using multiple processes and I would like
to cleanly terminate all the spawned processes if errors occur. below I've
wrote out some pseudo type code for what I think I need to do but I don't know
what the best way is to communicate to all the processes that an error has
occured and they should terminate.
I think I should be using classes for this sort of thing but I'm quite new to
Python so I'm just trying to get my head around the basics first.
#imports
exitFlag = True
# Function for threads to process
def url_thread_worker( ):
# while exitFlag:
try:
# do something
except:
# we've ran into a problem, we need to kill all the spawned processes and cleanly exit the program
exitFlag = False
def processStarter( ):
process_1 = multiprocessing.Process( name="Process-1", target=url_thread_worker, args=( ) )
process_2 = multiprocessing.Process( name="Process-2", target=url_thread_worker, args=( ) )
process_1.start()
process_2.start()
if __name__ == '__main__':
processStarter( )
Thanks in advance
Answer: Here's my suggestion:
import multiprocessing
import threading
import time
def good_worker():
print "[GoodWorker] Starting"
time.sleep(4)
print "[GoodWorker] all good"
def bad_worker():
print "[BadWorker] Starting"
time.sleep(2)
raise Exception("ups!")
class MyProcManager(object):
def __init__(self):
self.procs = []
self.errors_flag = False
self._threads = []
self._lock = threading.Lock()
def terminate_all(self):
with self._lock:
for p in self.procs:
if p.is_alive():
print "Terminating %s" % p
p.terminate()
def launch_proc(self, func, args=(), kwargs= {}):
t = threading.Thread(target=self._proc_thread_runner,
args=(func, args, kwargs))
self._threads.append(t)
t.start()
def _proc_thread_runner(self, func, args, kwargs):
p = multiprocessing.Process(target=func, args=args, kwargs=kwargs)
self.procs.append(p)
p.start()
while p.exitcode is None:
p.join()
if p.exitcode > 0:
self.errors_flag = True
self.terminate_all()
def wait(self):
for t in self._threads:
t.join()
if __name__ == '__main__':
proc_manager = MyProcManager()
proc_manager.launch_proc(good_worker)
proc_manager.launch_proc(good_worker)
proc_manager.launch_proc(bad_worker)
proc_manager.wait()
if proc_manager.errors_flag:
print "Errors flag is set: some process crashed"
else:
print "Everything closed cleanly"
You need a wrapper thread for each process run that waits for its end. When a
process ends, check its exitcode: a value greater than 0 means it raised an
unhandled exception. In that case call terminate_all() to close all remaining active
processes. The wrapper threads will also finish, as each depends on its
process run.
Also, in your code you're completely free to call proc_manager.terminate_all()
whenever you want. You can be checking for some flags in a different thread or
something like that..
Hope it's good for your case.
PS: btw.. in your original code you used something like a global exit_flag:
you can never have a "global" exit_flag in multiprocessing because it simply
isn't global, as you are using separate processes with separate memory
spaces. That only works in threaded environments where state can be shared. If
you need it in multiprocessing then you must have explicit communication
between processes ([Pipe and Queue accomplish
that](http://docs.python.org/2/library/multiprocessing.html#pipes-and-queues))
or something like [shared memory
objects](http://docs.python.org/2/library/multiprocessing.html#shared-ctypes-
objects)
|
Struggling with 'synchronisation' between Session's in SQLAlchemy
Question: I've created a `delete_entity` function which deletes entities, and I have a
function which tests it.
#__init__.py
engine = create_engine('sqlite:///:memory:')
Session = scoped_session(sessionmaker(engine))
# entity.py
def delete_entity(id, commit=False):
""" Delete entity and return the number of rows affected. """
rows_affected = Session.query(Entity).filter(Entity.id == id).delete()
if commit:
Session.commit()
# Marker @A
return rows_affected
# test_entity.py
def test_delete_entity(Session):
# ... Here I add 2 Entity objects. Database now contains 2 rows.
assert delete_entity(1) == 1 # Not committed, row stays in database.
assert delete_entity(1, commit=True) # Row should be deleted
# marker @B
assert len(Session.query(Entity).all()) == 1
This test passes when I run the `test_delete_entity()` **alone**. But when I
run this test together with other tests this test fails. It fails on `assert
len(Session.query(Entity).all()) == 1`. The query finds 2 rows, so it looks
like the row hasn't been deleted. But, when I use the Python debugger
(`pytest.set_trace()`) on @A and query for all Entity objects in the database
I find 1 row. So the delete query was successful and one row has been deleted.
But when I query for all Entity rows on @B I get 2 rows.
How can I 'synchronize' both Sessions, so my test will pass?
Answer: I just tried but couldn't reproduce your issue. Here's my full script with the
results:
import sqlalchemy as sa
from sqlalchemy import create_engine, Column, Integer, Unicode
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite://', echo=True)
Base = declarative_base(bind=engine)
Session = scoped_session(sessionmaker(bind=engine))
class Entity(Base):
__tablename__ = 'test_table'
id = Column(Integer(), primary_key=True)
name = Column(Unicode(200))
Base.metadata.create_all()
e1 = Entity()
e1.name = 'first row'
e2 = Entity()
e2.name = 'second row'
Session.add_all([e1, e2])
Session.commit()
print len(Session.query(Entity).all())
# CORRECTLY prints 2
rows_affected = Session.query(Entity).filter(Entity.id == 1).delete()
print rows_affected
# CORRECTLY prints 1
Session.commit()
print len(Session.query(Entity).all())
# CORRECTLY prints 1
Since your script is not runnable, we can't find your issue.
Please provide a runnable script that shows the entire issue, like I did. The
script must do all imports, insert the data, delete, and query in the end.
Otherwise there's no way I can reproduce it here in my environment.
|
get numpy array from pygame
Question: I want to access my Webcam via python. Unfortunately openCV is not working
because of the webcam. Pygame.camera works like a charm with this code:
from pygame import camera,display
camera.init()
webcam = camera.Camera(camera.list_cameras()[0])
webcam.start()
img = webcam.get_image()
screen = display.set_mode((img.get_width(), img.get_height()))
display.set_caption("cam")
while True:
screen.blit(img, (0,0))
display.flip()
img = webcam.get_image()
My question is now, how can I get a numpy array from the webcam?
Answer: `get_image` returns a
[`Surface`](http://www.pygame.org/docs/ref/surface.html). According to
<http://www.pygame.org/docs/ref/surfarray.html>, you can use
`pygame.surfarray.array2d` (or one of the other functions in the `surfarray`
module) to convert the Surface to a numpy array. E.g.
img = webcam.get_image()
data = pygame.surfarray.array2d(img)
|
F2PY - Access module parameter from subroutine
Question: I cannot get f2py to reference a parameter from a module in a separate
subroutine, where it is used to define an input array dimension. I.e. the
parameter is defined in a module:
! File: testmod.f90
MODULE testmod
INTEGER, PARAMETER :: dimsize = 20
END MODULE testmod
and the parameter dimsize needs to be referenced in a subroutine (NOT
contained in the module) in another file, which will be the entry point for my
python module:
! File testsub.f90
SUBROUTINE testsub(arg)
USE testmod
REAL, INTENT(IN) :: arg(dimsize)
END SUBROUTINE testsub
I compile like this:
f2py -m testmod -h testmod.pyf testsub.f90
pgf90 -g -Mbounds -Mchkptr -c -fPIC testmod.f90 -o testmod.o
pgf90 -g -Mbounds -Mchkptr -c -fPIC testsub.f90 -o testsub.o
f2py -c testmod.pyf testmod.o testsub.o
but get this error:
testmodmodule.c: In function 'f2py_rout_testmod_testsub':
testmodmodule.c:180: error: 'dimsize' undeclared (first use in this function)
I have tried modifying testsub.f90 to include the following directive, as
suggested in other posts:
SUBROUTINE testsub(arg)
USE testmod
!f2py integer, parameter :: dimsize
REAL, INTENT(IN) :: arg(dimsize)
END SUBROUTINE testsub
but to no avail. I need to keep the subroutine separate from the module.
How can I get f2py to correctly resolve the variable `dimsize`?
TIA
Answer: Although I've not tested it, I _think_ you nearly have it with your original
code. We do something similar for some of our code, but with gfortran.
You shouldn't need to `f2py` the testmod.f90 file. You should just compile it
to an object file just like you would if this were normal Fortran:
pgf90 -g -Mbounds -Mchkptr -c -fPIC testmod.f90 -o testmod.o
Then you should be able to compile your testsub.f90 into a python-usable
module with:
f2py --fcompiler=pgf90 --f90flags="-g -Mbounds -Mchkptr" -c testsub.f90 -m testsub testmod.o
That should build a testsub.so, or equivalent, letting you `import testsub`
and then `testsub.testsub(my_arg)` in python.
|
how do you get the current local directory in python
Question: I actually think I know the answer to this, and it is:
current_working_directory = os.getcwd().split("/")
local_working_directory = current_working_directory[len(current_working_directory)-1]
this works for me. none of the other posts I've checked out (ex:Find current
directory and file's directory) seem to explain how to get the local
directory, as opposed to the whole directory path. so posting this as an
already answered question. perhaps the question should be: how do I post the
answer to a question I've already answered, in order to help others out? and
hey, perhaps there's a better answer :-)
cheers,
-mike :-)
Answer: I would use `basename`
import os
path = os.getcwd()
print(os.path.basename(path))
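One caveat worth adding (a hedged aside, not from the original answer): `os.path.basename` returns an empty string when the path ends with a slash. `os.getcwd()` never has a trailing slash, but for arbitrary paths `os.path.normpath` is a safe guard:

```python
import os

path = "/home/user/project"   # hypothetical path, for illustration only
print(os.path.basename(path))                          # 'project'
print(os.path.basename(path + "/"))                    # '' -- trailing slash
print(os.path.basename(os.path.normpath(path + "/")))  # 'project' again
```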
|
Python ImageDraw - how to draw a thick line (width or thickness more than 1 px)
Question: Simple problem: Using the ImageDraw module in Python, draw a line between (x1,
y1) and (x2, y2) with a thickness or width larger than 1 pixel.
Answer: Quote from actual script, showing only the part actually involved in drawing
the thick line:
from PIL import Image, ImageDraw
import math
# set up an image and a draw object so the snippet is runnable
img = Image.new("RGB", (300, 300), "white")
draw = ImageDraw.Draw(img)
x1 = 100
y1 = 100
x2 = 200
y2 = 175
# thickness of line
thick = 4
# compute angle; atan2 avoids integer division and handles x2 == x1
a = math.atan2(y2 - y1, x2 - x1)
sin = math.sin(a)
cos = math.cos(a)
xdelta = sin * thick / 2.0
ydelta = cos * thick / 2.0
# four corners of the rotated rectangle that forms the thick line
xx1 = x1 - xdelta
yy1 = y1 + ydelta
xx2 = x1 + xdelta
yy2 = y1 - ydelta
xx3 = x2 + xdelta
yy3 = y2 - ydelta
xx4 = x2 - xdelta
yy4 = y2 + ydelta
draw.polygon((xx1, yy1, xx2, yy2, xx3, yy3, xx4, yy4), fill="black")
Here's a result of this technique. The segments composing the dial are each
drawn using the "thick line" technique.

**EDIT:** This is the discussion that initiated my search for a "thick line"
function in Python (also contains the full script I wrote):
<http://gimpforums.com/thread-how-to-draw-this-geometric-pattern-
programmatically>
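For completeness: newer Pillow releases accept a `width` argument on `ImageDraw.line`, which makes the polygon construction unnecessary there (a minimal sketch, assuming a Pillow version that supports `width`):

```python
from PIL import Image, ImageDraw

img = Image.new("RGB", (300, 300), "white")
draw = ImageDraw.Draw(img)
# draw the same segment as above, 4 pixels thick, in a single call
draw.line((100, 100, 200, 175), fill="black", width=4)
```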
|
Strange behavior of Python 'is' operator if combined with 'in'
Question: How does Python interpret "'a' in 'abc' is True"? I was trying to evaluate the
following two expressions:
>>> 'a' in 'abc' is True
False
>>> ('a' in 'abc') is True
True
(I know the "is" keyword shouldn't generally be used to compare to `True`;
this is just an example.)
Answer: Interesting question!
Here's the bytecode from `'a' in 'abc' is True`:
>>> import dis
>>> dis.disassemble((lambda: 'a' in 'abc' is True).func_code)
2 0 LOAD_CONST 1 ('a') # stack: 'a'
3 LOAD_CONST 2 ('abc') # stack: 'a' 'abc'
6 DUP_TOP # stack: 'a' 'abc' 'abc'
7 ROT_THREE # stack: 'abc' 'a' 'abc'
8 COMPARE_OP 6 (in) # stack: 'abc' True
11 JUMP_IF_FALSE_OR_POP 21 # stack: 'abc'
14 LOAD_GLOBAL 0 (True) # stack: 'abc' True
17 COMPARE_OP 8 (is) # stack: False
20 RETURN_VALUE
>> 21 ROT_TWO
22 POP_TOP
23 RETURN_VALUE
And compare with that from `('a' in 'abc') is True`:
>>> import dis
>>> dis.disassemble((lambda: ('a' in 'abc') is True).func_code)
1 0 LOAD_CONST 1 ('a') # stack: 'a'
3 LOAD_CONST 2 ('abc') # stack: 'a' 'abc'
6 COMPARE_OP 6 (in) # stack: True
9 LOAD_GLOBAL 0 (True)
12 COMPARE_OP 8 (is)
15 RETURN_VALUE
So it seems like the expression `'a' in 'abc' is True` evaluates as roughly:
>>> 'a' in 'abc' and 'abc' is True
It seems like this is a result of operator chaining:
<http://stackoverflow.com/a/19751586/71522> — the same magic which makes `1 <
5 < 10` work properly.
Very interesting!
(Note: this was done with CPython 2.7.2)
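The chaining interpretation can be verified directly (quick sketch; on Python 3.8+ the `is` against a literal additionally emits a SyntaxWarning):

```python
# parsed as a chained comparison: ('a' in 'abc') and ('abc' is True)
chained = 'a' in 'abc' is True
explicit = ('a' in 'abc') and ('abc' is True)
print(chained, explicit)   # False False
print(1 < 5 < 10)          # True -- the same chaining mechanism
```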
|
Why does python regex seem to fail to match beyond 112 bytes?
Question: I have a file, **what.dmp** , which is 116 bytes long. And my python code
looks like this:
import binascii
import re
import sys
print(sys.version)
needle = re.compile(b".{112}")
with open("what.dmp", "rb") as haystack:
chunk = haystack.read()
print("Read {0} bytes.".format(len(chunk)))
matches = needle.search(chunk)
if matches:
print(matches.start())
print(binascii.hexlify(matches.group(0)))
else:
print("No matches found.")
Running this code is fine:
C:\test>C:\Python33\python.exe test.py
3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:06:53) [MSC v.1600 64 bit (AMD64)]
Read 116 bytes.
0
b'0101060001010600087e88758f4e8e75534589751df7897583548775e4bcf001e6d0f001cae3f001ccf7f0010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000090d91300000000002c003100eb6fb024'
However, change the regex from 112 to 113:
needle = re.compile(b".{113}")
And no match is found:
C:\test>C:\Python33\python.exe test.py
3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:06:53) [MSC v.1600 64 bit (AMD64)]
Read 116 bytes.
No matches found.
So the question is: why does the regex not match the 113th character. I
haven't posted what.dmp because surely the contents are irrelevant?!
Many thanks!
Answer: There is a very good chance that byte 113 is equivalent to `\n` (10 in
decimal, 0a in hex). Try adding the re.DOTALL flag to your regex.
However as noted in comments, you probably don't need regular expressions for
this.
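A quick way to confirm the diagnosis without the original what.dmp (a sketch with a synthetic 116-byte buffer):

```python
import re

chunk = b"A" * 112 + b"\n" + b"BBB"            # 116 bytes, byte 113 is \n
print(re.search(b".{113}", chunk))              # None: '.' stops at newlines
print(re.search(b".{113}", chunk, re.DOTALL))   # matches: '.' now covers \n
```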
|
Pygame - Keep Sprites from overlapping
Question: I'm making a space invaders type game using python and pygame. I currently
have where I can move my user ship back and forth along the bottom with my
arrow keys, and my enemies spawn and move back and forth across the top. My
problem is with the enemies. I have it set so if they hit 100 pixels from the
edge of the game screen, they start going back the other direction. This
results in an overlap while the ones that have hit the limit are already
heading back and the ones that haven't hit the limit are still going the
original direction. I want it so that when the enemies on the edge hit the
limit, all the enemies immediately go back in the other direction, just like
in space invaders. My code is below. I have KeepFormation under enemies to try
to do what I wanted, but it isn't working. Thank you!
from pygame import *
size_x = 900
size_y = 650
class Object:
def disp(self, screen):
screen.blit(self.sprite, self.rect)
class User_Ship(Object):
def __init__(self):
self.sprite = image.load("mouse.bmp")
self.rect = self.sprite.get_rect()
self.rect.centerx = size_x/2
self.rect.centery = size_y - 40
self.count = 0
self.move_x = 0
self.move_y = 0
def checkwith(self, otherrect):
if self.rect.colliderect(otherrect):
exit()
def cycle(self):
self.rect.centerx += self.move_x
if self.rect.centerx < 100:
self.rect.centerx = 100
if self.rect.centerx > size_x - 100:
self.rect.centerx = size_x - 100
self.rect.centery += self.move_y
if self.rect.centery < 0:
self.rect.centery = 800
def right(self):
self.move_x += 10
def left(self):
self.move_x -= 10
def stop_x(self):
self.move_x = 0
def stop_y(self):
self.move_y = 0
def shoot(self):
if keys[pygame.K_SPACE]:
self.Bullet.x = self.rect.centerx
self.Bullet.y = self.rect.centery
self.Bullet.speed = 10
class Enemys(Object):
def __init__(self):
self.sprite = image.load("ball.bmp")
self.rect = self.sprite.get_rect()
self.rect.centerx = size_x/3
self.rect.centery = 200
self.mval = -2
def KeepFormation(self, otherrect):
if self.rect.colliderect(otherrect):
self.rect.centerx += 5
def cycle(self):
self.rect.centerx += self.mval
if self.rect.centerx < 100:
self.mval = 2
elif self.rect.centerx > (size_x - 100):
self.mval = -2
class Bullet(Object):
def __init__(self):
self.sprite = image.load("missile.png")
self.rect = self.sprite.get_rect()
self.rect.centerx = -100
self.rect.centery = 100
self.speed = 0
EnemyList = []
for i in range(14):
EnemyList.append(Enemys())
EnemyList[i].rect.centerx = (i) * 50
# EnemyList[i].count = i * 20
init()
screen = display.set_mode((size_x, size_y))
m = User_Ship()
en = Enemys()
b = Bullet()
clock = time.Clock()
while True:
for e in event.get():
if e.type == QUIT:
quit()
if e.type == KEYDOWN:
if e.key == K_RIGHT:
m.right()
elif e.key == K_LEFT:
m.left()
if e.type == KEYUP:
if e.key == K_RIGHT or e.key == K_LEFT:
m.stop_x()
m.cycle()
screen.fill((255,255,255))
for enemy in EnemyList:
enemy.cycle()
enemy.disp(screen)
enemy.KeepFormation(enemy.rect)
m.disp(screen)
b.disp(screen)
display.flip()
clock.tick(60)
Answer: Add a flag in
def cycle(self):
self.rect.centerx += self.mval
if self.rect.centerx < 100:
self.mval = 2
return True # Direction change!
elif self.rect.centerx > (size_x - 100):
self.mval = -2
return True
return False # no change
But then you need to make this change happen only once per frame. Thus...
done_once = False
for enemy in EnemyList:
    change = enemy.cycle()   # every enemy must still move each frame
    if change and not done_once:
        # call the new enemy function that changes every enemy's direction
        done_once = True
    enemy.disp(screen)
    # (the old enemy.KeepFormation(enemy.rect) call collided each rect with
    # itself, which is always True, so it has been dropped)
Process this flag such that everyone should change direction now. You may need
another function & direction_tracking variable in your `Enemy` class. Good
luck & good job!
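The move-then-flip logic can also be sketched without pygame (`Enemy` here is a hypothetical stand-in for the sprite class): step every enemy first, record whether any of them hit a limit, and only then reverse the whole row.

```python
class Enemy(object):
    def __init__(self, x):
        self.x = x
        self.dx = -2                 # start moving left, as in the question

    def step(self, left=100, right=800):
        self.x += self.dx
        return self.x < left or self.x > right   # True -> hit a limit

    def reverse(self):
        self.dx = -self.dx

enemies = [Enemy(100 + 50 * i) for i in range(5)]
for _ in range(10):                  # ten simulated frames
    # list comprehension, not a bare generator: any() over a generator would
    # short-circuit and leave some enemies unmoved this frame
    hit = any([e.step() for e in enemies])
    if hit:
        for e in enemies:            # flip the whole formation at once
            e.reverse()
```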
|
Python Regex match multiline Java annotation
Question: I am trying to take advantage of JAXB code generation from an XML
Schema to use in an Android project through the SimpleXML library, which uses
another type of annotation than JAXB (I do not want to include a 9MB lib to
support JAXB in my Android project). See question [previously
asked](http://stackoverflow.com/a/6233830/186880 "JAXB to Simple XML")
Basically, I am writing a small Python script to perform the required changes
on each Java file generated through the xjc tool, and so far it is working for
import deletion/modification, simple line annotation, and also the annotation
for which a List @XMLElement needs to be converted to an @ElementList one.
The only issue I am facing right now is for removing annotations on several
lines, such as @XMLSeeAlso or @XMLType like the following
@XmlType(name = "AnimatedPictureType", propOrder = {
"resources",
"animation",
"caption"
})
or
@XmlSeeAlso({
BackgroundRGBColorType.class,
ForegroundRGBColorType.class
})
I tried different strategies using either Multiline, DotAll, or both, but
without any success. I am new to "advanced" regex usage as well as Python, so I
am probably missing something silly.
For my simple XSD processing, that is the only step I cannot get working to
achieve a fully automated script using xjc and then automatically converting
JAXB annotations into Simple XML ones.
Thank you in advance for your help.
Answer: `@Xml.*\}\)` with DOTALL enabled should, as far as I know, match any
annotation starting with @Xml and ending with "})", even when it spans multiple
lines. Prefer the non-greedy form `@Xml.*?\}\)`, though: a greedy `.*` under
DOTALL will run all the way to the _last_ "})" in the file.
For a good view of what your regex actually matches you could always test your
regular expressions at websites like <https://pythex.org/>
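A hedged sketch of the full removal in Python (the Java snippet is inlined to keep it self-contained; note the non-greedy `.*?`, since a greedy `.*` under DOTALL would run to the last `})` in the file):

```python
import re

java_src = '''@XmlType(name = "AnimatedPictureType", propOrder = {
    "resources",
    "animation",
    "caption"
})
public class AnimatedPictureType {
}
'''

annotation = re.compile(r"@Xml\w*\(.*?\}\)\s*", re.DOTALL)
cleaned = annotation.sub("", java_src)
print(cleaned)   # the multi-line annotation is gone, the class remains
```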
|
Is it possible that processes created by subprocess module in Python still use up memory (are zombies), although subprocess.Popen.wait() was called?
Question: According to the answers to another question in stackoverflow ([how to kill
(or avoid) zombie processes with subprocess
module](http://stackoverflow.com/questions/2760652/how-to-kill-or-avoid-
zombie-processes-with-subprocess-module)) one can avoid zombie processes by
using the command `subprocess.Popen.wait()`.
However, when I run the following function `perform_sth` inside my script a
several thousand times, the memory usage of each individual process tends to
increase:
For example, the first process only needs 7 MB, but process no. 1000 already
uses 500 MB, until in the end more than 8 GB are used and I have to kill the
whole Python script. Each process should always use more or less the same
amount of memory.
Probably I have a flaw in my function and need to additionally kill the
processes?
My code is:
def perform_sth(arg1, arg2):
import subprocess
sth_cline = ["sth", "-asequence=%s"%arg1, "-bsequence=%s"%arg2]
process = subprocess.Popen(
sth_cline,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE
)
process.wait()
return
Answer: Do not use `stdout=PIPE` if you don't read from the pipe.
Your child process is _not a zombie_ (a zombie is a dead process; it needs only
a tiny amount of memory, to store its exit status in the process table). Your
child process is _alive_ (that is why it is capable of consuming gigabytes of
memory).
The OS pipe buffer is probably full and the child process is blocked while
trying to write to the pipe. Your parent should drain the buffer by reading
from the pipe to allow the child to continue but the parent waits for
`process.wait()` to return forever (a deadlock).
If you don't need the output, use `stdout=subprocess.DEVNULL` instead. Or see
[How to hide output of subprocess in Python
2.7](http://stackoverflow.com/q/11269575/4279)
#!/usr/bin/env python
from subprocess import check_call, DEVNULL, STDOUT
check_call(["sth", "arg 1", "arg2"], stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
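If the output is actually needed, `Popen.communicate()` is the safe pattern: it drains both pipes to EOF and only then reaps the child, so neither side can block (a sketch, assuming a POSIX `echo` on PATH):

```python
from subprocess import Popen, PIPE

proc = Popen(["echo", "hello"], stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()   # reads stdout/stderr fully, then waits
print(proc.returncode, out)
```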
|
Use Redefined command from PExpect
Question: How can I redefine a command in a bash script so that a python script called
from the bash script will execute the redefined version when called via
pexpect?
_**test.sh_**
#!/bin/bash
function scp(){
echo "Skip SCP"
}
export -f scp
python test.py
_**test.py_**
import pexpect
scp = pexpect.spawn("scp")
scp.expect([pexpect.EOF,pexpect.TIMEOUT],timeout=1500)
print scp.before
In this example I expect (and want) to see is:
> Skip SCP
but what I actually see is:
usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
[-l limit] [-o ssh_option] [-P port] [-S program]
[[user@]host1:]file1 ... [[user@]host2:]file2
I can change anything about the shell script, but the python comes from a
third party and is copied into many different projects, so changing it would
be impractical.
Answer: Most likely `spawn` executes its commands directly (via execv or
something) or it uses a specific shell like /bin/sh. If you want it to use a
customized environment, you'll have to specify that yourself. I don't know
pexpect, but something like
something like
spawn bash
expect your_prompt
send "function scp { echo 'skip scp'; }"
expect your_prompt
send scp
expect "skip scp"
Additionally, bash functions are not exported to child processes unless you
`export -f scp`
* * *
Since you can't touch the pexpect part, the only thing you can change is
`scp`. You will have to provide a program named "scp" that occurs earlier in
the path than the regular scp
#!/bin/sh
PATH=/my/special/path:$PATH
cat > /my/special/path/scp <<END
#!/bin/sh
echo "no scp here!"
END
chmod 755 /my/special/path/scp
python test.py
|
rpy2 module is invisible from python 2.7 on Mac 10.6.8
Question: I am a bit of a noob on Mac and my python installation is refusing to
acknowledge the existence of the rpy2 module on my mac. It looks like it only
sees it as a Python 2.6 module. How do I make it visible in 2.7 ? Do I need to
downgrade my python ? If so, how ? On the RPy2 web page
(<http://rpy.sourceforge.net/rpy2_download.html>) Python 2.6 is recommended.
Thanks!
mayumi@MAYUMI-iMac~:/ python --version
Python 2.7.6
mayumi@MAYUMI-iMac~:/ pip install rpy2
Requirement already satisfied (use --upgrade to upgrade): rpy2 in /Library/Python/2.6/site-packages/rpy2-2.3.8-py2.6-macosx-10.6-universal.egg
Cleaning up...
mayumi@MAYUMI-iMac~:/ python
Python 2.7.6 (v2.7.6:3a1db0d2747e, Nov 10 2013, 00:42:54)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import rpy2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rpy2
>>>
Answer: I also found it difficult to successfully install `rpy2` on OS X
machines. Sometimes it works, sometimes it doesn't, which is very annoying. I
eventually settled on the `Anaconda` `Python` distribution from
<https://store.continuum.io/cshop/anaconda/> to save all the trouble.
Installing `rpy2` never fails since the switch.
The default installation of `Anaconda` does not include `rpy2`, so you want
to run the installation command from the `Anaconda` folder, `bin` subfolder:
conda install rpy2
Depending on the version, you may get a bunch of warnings. Just ignore them.
Then `rpy2` just works! Of course, only under the `Anaconda python`, not the
other `python` version you may have installed on your machine.

You can run a few test to make sure `rpy2` works, following this example:
<http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/3607712653c66d63e0d7f13f073bde8c0f209ba8/docs/examples/notebooks/rmagic_extension.ipynb>
`bash` commands, run in the folder `/Users/YOUR_USER_NAME/anaconda/bin/`:
user-Mac-Pro:bin user$ conda install rpy2
and it says:
Conda package not available for rpy2, attempting to install via pip
Downloading/unpacking rpy2
Downloading rpy2-2.3.8.tar.gz (185kB): 185kB downloaded
Running setup.py egg_info for package rpy2
If you don't have `R` installed it will complain with a few warnings and fetch
`R` for you. Then there may be some other deprecation warnings depending on
what you have installed.
(I am not associated with Continuum in any way)
|
Python sum of the array elements
Question: I have file with following lines:
date:ip num#1 num#2
2013.09:142.134.35.17 10 12
2013.09:142.134.35.17 4 4
2013.09:63.151.172.31 52 13
2013.09:63.151.172.31 10 10
2013.09:63.151.172.31 16 32
2013.10:62.151.172.31 16 32
How do I sum up the last two columns for rows with the same date:ip key to get
the following result?
2013.09:142.134.35.17 14 16
2013.09:63.151.172.31 78 55
2013.10:62.151.172.31 16 32
Answer: Try this:
from collections import Counter
with open('full_megalog.txt') as f:
    next(f)  # skip the "date:ip num#1 num#2" header line
    data = [d.split() for d in f if d.strip()]
sum1, sum2 = Counter(), Counter()
for d in data:
    sum1[d[0]] += int(d[1])
    sum2[d[0]] += int(d[2])
for date_ip in sum1.keys():
    print date_ip, sum1[date_ip], sum2[date_ip]
|
Custom constants in python humanname not working
Question: I'm following the instructions [here on using the python-
nameparser](https://code.google.com/p/python-nameparser/).
My problem is that for human names containing "assistant professor," I'm
getting 'assistant' as the title and 'professor' assigned as the first name.
>>> o.full_name = 'Assistant Professor Darwin Mittenchops'
>>> o
<HumanName : [
Title: 'Assistant'
First: 'Professor'
Middle: 'Darwin'
Last: 'Mittenchops'
Suffix: ''
]>
Instead of
>>> o
<HumanName : [
Title: 'Assistant Professor'
First: 'Darwin'
Middle: ''
Last: 'Mittenchops'
Suffix: ''
]>
Their example for adding custom constants to address this is:
>>> from nameparser import HumanName
>>> from nameparser.constants import PREFIXES
>>>
>>> prefixes_c = PREFIXES | set(['te'])
>>> hn = HumanName(prefixes_c=prefixes_c)
>>> hn.full_name = "Te Awanui-a-Rangi Black"
>>> hn
<HumanName : [
Title: ''
First: 'Te Awanui-a-Rangi'
Middle: ''
Last: 'Black'
Suffix: ''
]>
So, the following should allow me to make "assistant professor" a title:
>>> from nameparser import HumanName
>>> from nameparser.constants import TITLES
>>> titles_c = TITLES | set(["assistant professor"])
>>> hn = HumanName(titles_c=titles_c)
>>> hn.full_name = 'Assistant Professor Darwin Mittenchops'
>>> hn
<HumanName : [
Title: 'Assistant'
First: 'Professor'
Middle: 'Darwin'
Last: 'Mittenchops'
Suffix: ''
]>
No dice.
>>> "assistant professor" in titles_c
True
So, I know it's there. Just doesn't seem to work.
Answer: Ah, I read too quickly. The code _can_ handle combined words, but we need to
add the words individually to the `titles_c` set, not as a unit. Example:
>>> from nameparser import HumanName
>>> from nameparser.constants import TITLES
>>> titles_c = TITLES | set("assistant professor".split())
>>> hn = HumanName(titles_c=titles_c)
>>> hn.full_name = "Assistant Professor Darwin Mittenchops"
>>> hn
<HumanName : [
Title: 'Assistant Professor'
First: 'Darwin'
Middle: ''
Last: 'Mittenchops'
Suffix: ''
]>
|
Locating specific <p> tag after <h1> tag in Python Html Parser
Question: I'm attempting to parse through a series of webpages and grab just 3
paragraphs after the header occurs on each of these pages. They all have the
same format (I think). I'm using urllib2 and Beautiful Soup, but I'm not quite
sure how to jump to the header and then grab the few `<p>` tags that follow
it. I know the first split("h1") is not correct, but it's my only decent
attempt so far. Here's my code:
from bs4 import BeautifulSoup
import urllib2
from HTMLParser import HTMLParser
BANNED = ["/events/new"]
def main():
soup = BeautifulSoup(urllib2.urlopen('http://b-line.binghamton.edu').read())
for link in soup.find_all('a'):
link = link.get('href')
if link != None and link not in BANNED and "/events/" in link:
print()
print(link)
eventPage = "http://b-line.binghamton.edu" + link
bLineSubPage = urllib2.urlopen(eventPage)
bLineSubPageStr = bLineSubPage.read()
headAccum = 0
for data in bLineSubPageStr.split("<h1>"):
if(headAccum < 1):
accum = 0
for subData in data.split("<p>"):
if(accum < 5):
try:
print(BeautifulSoup(subData).get_text())
except Exception as e:
print(e)
accum+=1
print()
headAccum += 1
bLineSubPage.close()
print()
main()
Answer:
>>> page_txt = urllib2.urlopen("http://b-line.binghamton.edu/events/9305").read()
>>> soup = bs4.BeautifulSoup(page_txt.split("<h1>", 1)[-1])
>>> print soup.find_all("p")[:3]
is that what you want?
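An alternative that avoids string splitting entirely is to let BeautifulSoup locate the `<h1>` and walk forward from it with `find_all_next` (a sketch with inline HTML so it is self-contained):

```python
import bs4

html = """<html><body>
<p>intro text before the header</p>
<h1>Event title</h1>
<p>first</p><p>second</p><p>third</p><p>fourth</p>
</body></html>"""

soup = bs4.BeautifulSoup(html, "html.parser")
h1 = soup.find("h1")
paragraphs = [p.get_text() for p in h1.find_all_next("p")[:3]]
print(paragraphs)   # only the three <p> tags after the <h1>
```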
|
Python global keyword behavior
Question: I am trying to use a global variable across modules by importing the variable
and modifying it locally within a function. The code and the output is below.
The last output is not what I expected, I thought it should be 15 since it has
been already modified in global scope by func3. Can anybody please explain why
the last output is still 10.
Thank you!
test2.py
myGlobal = 5
def func3():
global myGlobal
myGlobal = 15
print "from 3: ", myGlobal
test1.py
from test2 import myGlobal, func3
def func1():
global myGlobal
myGlobal = 10
def func2():
print "from 2: ", myGlobal
print "init: ", myGlobal
func1()
func2()
func3()
func2()
The outputs:
init: 5
from 2: 10
from 3: 15
from 2: 10
Answer: As stated in the comments, `global` in Python means module level.
So doing:
a = 1
Has the exact same effect on `a` as:
def f():
global a
a = 1
f()
And in both cases the variable is not shared across modules.
If you want to share a variable across modules, check
[this](http://stackoverflow.com/questions/142545/python-how-to-make-a-cross-
module-variable).
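The underlying point: `from test2 import myGlobal` copies the current binding into the importing module, so later rebinding of the module global does not affect that copy, while attribute access through the module always sees the fresh value. A self-contained sketch (the module is built in-memory as a stand-in for test2.py):

```python
import sys
import types

# build an in-memory stand-in for test2.py
test2 = types.ModuleType("test2")
exec("""
myGlobal = 5

def func3():
    global myGlobal
    myGlobal = 15
""", test2.__dict__)
sys.modules["test2"] = test2

from test2 import myGlobal, func3

func3()
print(myGlobal)         # 5  -- the imported name kept the old binding
print(test2.myGlobal)   # 15 -- attribute access sees the rebound global
```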
|
How to plot an ogive?
Question: I'm wondering if there exists a way to plot a histogram and an ogive using
matplotlib in Python.
I have the following for plotting a histogram
a = np.array(values)
plt.hist(a, 32, normed=0, facecolor='blue', alpha = 0.25)
plt.show()
But I don't know if matplotlib has got a good way to plot an ogive.
Here's what I'm doing:
a = np.array(values)
bins = np.arange(int(min), int(max) + 2)
histogram = np.histogram(a, bins = bins, normed = True)
v = []
s = 0.0
for e in histogram[0]:
s = s + e
v.append(s)
v[0] = histogram[0][0]
plt.plot(v)
plt.show()
Answer: By `ogive` do you just mean a cumulative histogram? If so, just pass
`cumulative=True` to `plt.hist`.
For example:
import matplotlib.pyplot as plt
import numpy as np
data = np.random.normal(0, 1, 1000)
fig, (ax1, ax2) = plt.subplots(nrows=2)
ax1.hist(data)
ax2.hist(data, cumulative=True)
plt.show()

If you want it to be drawn as a line, just use `numpy.histogram` directly
(that's what `plt.hist` is using). Alternately, you can use the values that
`plt.hist` returns. `counts` and `bins` are what `np.histogram` would return;
`plt.hist` just returns the plotted patches as well.
For example:
import matplotlib.pyplot as plt
import numpy as np
data = np.random.normal(0, 1, 1000)
fig, ax = plt.subplots()
counts, bins, patches = plt.hist(data)
bin_centers = np.mean(zip(bins[:-1], bins[1:]), axis=1)
ax.plot(bin_centers, counts.cumsum(), 'ro-')
plt.show()

|
Wrapping Pyro4 name server
Question: I made a class to allow me to start and stop a pyro name server from a script
(i.e. not having to start multiple programs such as in the
[tutorial](http://pythonhosted.org/Pyro4/tutorials.html)). The class is as
follow:
class NameServer(Pyro4.threadutil.Thread):
def __init__(self, host, isDeamon, port=0, enableBroadcast=True,
bchost=None, bcport=None, unixsocket=None, nathost=None, natport=None):
super(NameServer,self).__init__()
self.setDaemon(isDeamon)
self.host=host
self.started=Pyro4.threadutil.Event()
self.unixsocket = unixsocket
self.port = port
self.enableBroadcast = enableBroadcast
self.bchost = bchost
self.bcport = bcport
self.nathost = nathost
self.natport = natport
#This code is taken from Pyro4.naming.startNSloop
self.ns_daemon = Pyro4.naming.NameServerDaemon(self.host, self.port, self.unixsocket,
nathost=self.nathost, natport=self.natport)
self.uri = self.ns_daemon.uriFor(self.ns_daemon.nameserver)
internalUri = self.ns_daemon.uriFor(self.ns_daemon.nameserver, nat=False)
self.bcserver=None
self.ns = self.ns_daemon.nameserver
if self.unixsocket:
hostip = "Unix domain socket"
else:
hostip = self.ns_daemon.sock.getsockname()[0]
if hostip.startswith("127."):
enableBroadcast=False
if enableBroadcast:
# Make sure to pass the internal uri to the broadcast responder.
# It is almost always useless to let it return the external uri,
# because external systems won't be able to talk to this thing anyway.
bcserver=Pyro4.naming.BroadcastServer(internalUri, self.bchost, self.bcport)
bcserver.runInThread()
def run(self):
try:
self.ns_daemon.requestLoop()
finally:
self.ns_daemon.close()
if self.bcserver is not None:
self.bcserver.close()
def startNS(self):
self.start()
def stopNS(self):
self.ns_daemon.shutdown()
if self.bcserver is not None:
self.bcserver.shutdown()
Now, if I run the following script
import socket
import Pyro4
from threading import Thread
import time
from multiprocessing import Process
import sys
from datetime import datetime
HMAC_KEY = "1234567890"
Pyro4.config.HMAC_KEY = HMAC_KEY
sys.excepthook = Pyro4.util.excepthook
[... definition of class NameServer given previously ...]
class Dummy:
x = {}
def getX(self):
return self.x
class Worker(Process):
def run(self):
Pyro4.config.HMAC_KEY = HMAC_KEY
sys.excepthook = Pyro4.util.excepthook
for i in range(10):
a = datetime.now()
with Pyro4.Proxy("PYRONAME:dummy") as obj:
obj.getX()
print i, (datetime.now() - a).total_seconds()
def main():
nameserver = NameServer(socket.gethostbyname(socket.gethostname()), False)
nameserver.startNS()
daemon=Pyro4.Daemon(socket.gethostname(), port=7676) # make a Pyro daemon
obj = Dummy()
uri=daemon.register(obj) # register the greeting object as a Pyro object
nameserver.ns.register("dummy", uri) # register the object with a name in the name server
thread = Thread(target = daemon.requestLoop)
thread.setDaemon(1)
thread.start()
time.sleep(1)
worker = Worker()
worker.start()
if __name__ == "__main__":
main()
I get the following output:
0 1.078
1 1.05
2 1.013
3 1.037
4 1.013
5 1.087
6 1.063
7 1.1
8 1.063
9 1.05
However, if I run this code as two different programs without using my
NameServer class, I dont get these delays. For example, runing the first
script:
import Pyro4
import sys
HMAC_KEY = "1234567890"
Pyro4.config.HMAC_KEY = HMAC_KEY
sys.excepthook = Pyro4.util.excepthook
class Dummy:
x = {}
def getX(self):
return self.x
def main():
obj = Dummy()
Pyro4.Daemon.serveSimple({obj: "dummy"}, ns = False)
if __name__ == "__main__":
main()
and the second script
import Pyro4
from multiprocessing import Process
import sys
from datetime import datetime
HMAC_KEY = "1234567890"
Pyro4.config.HMAC_KEY = HMAC_KEY
sys.excepthook = Pyro4.util.excepthook
class Worker(Process):
def run(self):
Pyro4.config.HMAC_KEY = HMAC_KEY
sys.excepthook = Pyro4.util.excepthook
for i in range(10):
a = datetime.now()
with Pyro4.Proxy("[the URI given by Pyro when running script 1]") as obj:
obj.getX()
print i, (datetime.now() - a).total_seconds()
def main():
worker = Worker()
worker.start()
if __name__ == "__main__":
main()
I get the following results
0 0.053
1 0.049
2 0.051
3 0.05
4 0.013
5 0.049
6 0.051
7 0.05
8 0.013
9 0.049
... what can be wrong with the first approach? I don't understand why I get
delays of 1 second at each Pyro call. Profiling it tells me that it is the
socket method connect that takes 1 second...
Answer: I do not know what exactly is wrong, but you can try my new
[PyroMP](https://pypi.python.org/pypi/PyroMP "PyroMP") package, which has a
wrapper for the Pyro4 NameServer as well as an easy way to create processes.
Your example will look like:
from threading import Thread
import time
from datetime import datetime
import Pyro4
import PyroMP
import PyroMP.log_server as log
class Dummy(object):
def __init__(self):
self.x = {}
def getX(self):
return self.x
class Worker(PyroMP.Service):
def run(self):
logger = self.get_logger()
for i in range(10):
a = datetime.now()
with Pyro4.Proxy("PYRONAME:dummy") as obj:
obj.getX()
logger.info("{}: {}".format(i, (datetime.now() - a).total_seconds()))
def main():
with PyroMP.NameServer(), log.LogServer():
log.set_loglevel("INFO")
daemon = Pyro4.Daemon()# make a Pyro daemon
obj = Dummy()
uri = daemon.register(obj) # register the greeting object as a Pyro object
ns = PyroMP.NameServer.locate()
ns.register("dummy", uri) # register the object with a name in the name server
thread = Thread(target=daemon.requestLoop)
thread.setDaemon(1)
thread.start()
time.sleep(1)
with Worker() as worker:
worker.run()
if __name__ == "__main__":
main()
The output is:
2014-02-19 18:56:32,877 - LogServer.WORKER - INFO - 0: 0.0
2014-02-19 18:56:32,892 - LogServer.WORKER - INFO - 1: 0.016
2014-02-19 18:56:32,892 - LogServer.WORKER - INFO - 2: 0.0
2014-02-19 18:56:32,894 - LogServer.WORKER - INFO - 3: 0.001
2014-02-19 18:56:32,940 - LogServer.WORKER - INFO - 4: 0.031
2014-02-19 18:56:32,956 - LogServer.WORKER - INFO - 5: 0.015
2014-02-19 18:56:32,956 - LogServer.WORKER - INFO - 6: 0.0
|
python import within import
Question: i am trying to import a module within a module and then access the lower level
module from the top, however it is not available. is this normal behaviour?
# caller.py
import first
print second.some_var
# first.py
import second
# second.py
some_var = 1
running `caller.py` gives error
NameError: name 'second' is not defined
do i have to `import second` within `caller.py`? this seems counter-intuitive
to me.
Answer: You can use
import first
print first.second.some_var
Having `second` appear in the namespace automatically just by importing
`first` would lead to lots of conflicts
This would also work
from first import second
print second.some_var
The use of wildcard
from first import *
is discouraged because if someone adds extra attributes/functions to `first`
they may overwrite attributes you are using locally if they happen to choose
the same name
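For a self-contained demonstration of this behaviour, the three files from the question can be recreated in a temporary directory (the paths are created on the fly, so nothing here refers to real project files):

```python
import os
import sys
import tempfile

# Recreate the question's layout on disk, then show that importing
# `first` exposes `second` only as an attribute of `first`, not as a
# bare name in the caller.
d = tempfile.mkdtemp()
with open(os.path.join(d, "second.py"), "w") as f:
    f.write("some_var = 1\n")
with open(os.path.join(d, "first.py"), "w") as f:
    f.write("import second\n")
sys.path.insert(0, d)

import first
print(first.second.some_var)   # 1
print("second" in globals())   # False -- no bare name was created
```

Note that `second` does end up in `sys.modules` after the import; it just isn't bound as a name in the caller's namespace.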
|
Conversion to datetime object leads to crash of numpy functions
Question: My python script produces a strange error and I have no clue why. Maybe
somebody else has an idea. I'll try to provide an example everybody else should be
able to run.
import datetime
import numpy as np
date = np.array([20120203123054, 20120204123054]) #date format: YYYYMMDDhhmmss
longitude = np.array([52., 53.])
latitude = np.array([-22.0, -23.0])
# Loop to convert date into datetime object
date_new = []
for j in range(len(date)):
date_string = str(date[j])
dt=datetime.datetime.strptime(date_string[:],'%Y%m%d%H%M%S')
date_new.append(dt)
data = np.array([date, longitude, latitude])
data_new = np.array([date_new, longitude, latitude])
#function to calculate distance between two locations (fixed location:
#longitude=50.,latitude=-20.)
def calculate_distance(longi, lati):
distance = []
latitude = ((50. + lati)/2)* 0.01745
dx = 111.3 * np.cos(latitude) * (50. - longi)
dy = 111.3 * (-20. - lati)
d = np.sqrt(dx * dx + dy * dy)
distance.append(d)
return distance
#call function
calculate_distance(data[1], data[2]) # Script works!
calculate_distance(data_new[1], data_new[2]) # Script doesn't work!
# see Traceback below
Why does it crash?
Traceback(most recent call last):
File "data_analysis.py" line 85,
dx = 111.3 * cos(latitude) * (lon_station - longitude)
AttributeError: cos
Answer: `numpy` arrays don't play so well with heterogeneous types. When you made
`data_new`, you were storing different kinds of Python objects in it, and so
that forced the `dtype` to be `object` to be broad enough to handle
everything.
>>> data
array([[ 2.01202031e+13, 2.01202041e+13],
[ 5.20000000e+01, 5.30000000e+01],
[ -2.20000000e+01, -2.30000000e+01]])
>>> data.dtype
dtype('float64')
>>> data_new
array([[datetime.datetime(2012, 2, 3, 12, 30, 54),
datetime.datetime(2012, 2, 4, 12, 30, 54)],
[52.0, 53.0],
[-22.0, -23.0]], dtype=object)
>>> data_new.dtype
dtype('O')
Since taking the cosine of an arbitrary object doesn't make sense, this isn't
implemented:
>>> np.cos(data[1])
array([-0.16299078, -0.91828279])
>>> np.cos(data_new[1])
Traceback (most recent call last):
File "<ipython-input-87-00101e4be00a>", line 1, in <module>
np.cos(data_new[1])
AttributeError: cos
You can return to `float` type (unfortunately it doesn't look like you can use
a `view` with object arrays):
>>> np.cos(data_new[1].astype(float))
array([-0.16299078, -0.91828279])
FWIW I prefer [`pandas`](http://pandas.pydata.org) DataFrames when working
with this kind of mixed-type data, but people's preferences vary. I don't know
enough about numpy structured arrays to know if you can mix object dtypes and
more primitive dtypes, although I didn't think so.
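For what it's worth, a numpy structured array does let you mix an object column with primitive float columns, and the float fields stay directly usable by ufuncs. A minimal sketch:

```python
import datetime
import numpy as np

# A structured dtype can mix an object field with float fields; the
# float fields keep dtype float64, so ufuncs like np.cos work on them.
dt = np.dtype([('date', 'O'), ('lon', 'f8'), ('lat', 'f8')])
data = np.array([
    (datetime.datetime(2012, 2, 3, 12, 30, 54), 52.0, -22.0),
    (datetime.datetime(2012, 2, 4, 12, 30, 54), 53.0, -23.0),
], dtype=dt)

print(data['lon'].dtype)    # float64
print(np.cos(data['lon']))  # works, unlike on an all-object array
```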
|
Python 2.7 How to add images on the canvas with Tkinter
Question: I need to add an image to the canvas. I have tried countless amounts of
things, and finally decided to make a question on here.
This is what I have imported
from Tkinter import *
import tkFont
from PIL import ImageTk, Image
And this is the line of code I'm trying to add to import an image from the
same folder the main file is in.
c.create_image(100,100, anchor=N, image = ghost.jpg)
I've also tried putting `""`s around `'ghost.jpg'` and it says _the Image does
not exist_ then. Without the quotes it says _"global name 'ghost' does not
exist."_
Can anyone help?
Answer: [Canvas.create_image](http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.create_image-
method)'s image argument
> should be a [PhotoImage](http://effbot.org/tkinterbook/photoimage.htm) or
> [BitmapImage](http://effbot.org/tkinterbook/bitmapimage.htm), or a
> compatible object (such as the [PIL's
> PhotoImage](http://effbot.org/imagingbook/imagetk.htm)). The application
> must keep a reference to the image object.
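A minimal sketch of the usual pattern, assuming a GIF file (plain `PhotoImage` reads GIF/PGM/PPM natively; a JPEG needs PIL's `ImageTk.PhotoImage` instead). The filename `ghost.gif` is hypothetical:

```python
try:
    import Tkinter as tk   # Python 2
except ImportError:
    import tkinter as tk   # Python 3

def show_ghost(path="ghost.gif"):
    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=300)
    canvas.pack()
    img = tk.PhotoImage(file=path)
    canvas.create_image(100, 100, anchor=tk.N, image=img)
    # Keep a reference on a long-lived object, or the image is
    # garbage-collected and the canvas shows nothing.
    canvas.image = img
    root.mainloop()
```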
|
Deck card class in python
Question: I am working on creating a class for the first time, and I think I have
done everything to get it to run, but I still get a bunch of issues, such as
`'list' object has no attribute 'shffule'`. The problem here is that it will not
shuffle the cards, and it will not tell the remaining cards. Can anyone tell
me what I am doing wrong? Thanks in advance
import random
class card_deck:
def __init__(self, rank, suite, card):
self.rank= rank
self.suite = suite
def ranks(self):
return self.rank
def suites(self):
return self.suite
def cards(self,card):
suit_name= ['The suit of Spades','The suit of Hearts', 'The suit of Diamonds','Clubs']
rank_name=['Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King']
def value(self):
if self.rank == 'Ace':
return 1
elif self.rank == 'Jack':
return 11
elif self.rank == 'Queen':
return 12
elif self.rank == 'King':
return 13
def shffule(self):
random.shuffle(self.cards)
def remove(self,card):
self.cards.remove(card)
def cardremaining(self):
self.suite-self.rank
def main():
try:
deck=[]
for i in ['Spades','Hearts', ' Diamonds','Clubs']:
for c in ['Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King']:
deck.append((c, i))
deck.shffule
hand = []
user =eval(input('Enter a number of cards: 1-7 '))
print()
while user <1 or user >7:
print ("Only a number between 1-7:")
return main()
for i in range(user):
hand.append(deck[i])
print (hand)
except ValueError:
print("Only numbers")
main()
Answer: Apart from your code containing many small errors, I will try to answer your
main problems.
If you are going to use the `shffule` _[sic]_ method of the `card_deck` class, then
you first need to create an instance of that class (whereas you tried to call
that method on a list). Like this:
deck = card_deck(some_rank,some_suit,some_card)
deck.shffule() #Now you can call the method
Now, since you made it a class instance, you cannot get items from it like
`hand.append(deck[i])`
Unless you defined the method `__getitem__` in your class definition, like
this:
#this will be in your class definition
def __getitem__(self,i):
return self.card_list[i] #Of course you have to define a list of cards in your class too.
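Putting both points together, a minimal sketch of a deck class that can be shuffled and indexed (the name `CardDeck` is just a stand-in):

```python
import random

class CardDeck(object):
    def __init__(self):
        suits = ['Spades', 'Hearts', 'Diamonds', 'Clubs']
        ranks = ['Ace', '2', '3', '4', '5', '6', '7',
                 '8', '9', '10', 'Jack', 'Queen', 'King']
        # The deck's state is a plain list of (rank, suit) tuples.
        self.card_list = [(r, s) for s in suits for r in ranks]

    def shuffle(self):
        random.shuffle(self.card_list)

    def __getitem__(self, i):
        return self.card_list[i]

deck = CardDeck()
deck.shuffle()
hand = [deck[i] for i in range(5)]
print(hand)
```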
In my opinion, you should spend a little more time trying to understand how a
class is defined, how methods work, and how to access the members of a class.
After that you will be doing much better here
|
Computing Pointwise Mutual Information of a text document using python
Question: My goal is to compute the PMI of the text below: `a= 'When the defendant and
his lawyer walked into the court, some of the victim supporters turned their
backs on him`'
formula: PMI-IR (w1, w2) = log2 p(w1&w2)/p(w1)*p(w2); p=probability, w=word
My attempt:
>>> from nltk import bigrams
>>> import collections
>>> a1=a.split()
>>> a2=collections.Counter(a1)
>>> a3=collections.Counter(bigrams(a1))
>>> a4=sum([a2[x]for x in a2])
>>> a5=sum([a3[x]for x in a3])
>>> a6={x:float(a2[x])/a4 for x in a2} # word probabilities(w1 and w2)
>>> a7={x:float(a3[x])/a5 for x in a3} # joint probabilities (w1&w2)
>>> for x in a6:
k={x:round(log(a7[b]/(a6[x] * a6[y]),2),4) for b in a7 for y in a6 if x and y in b}
u.append(k)
>>> u
[{'and': 4.3959}, {'on': 4.3959}, {'his': 4.3959}, {'When': 4.3959}.....}]
The result I got doesn't seem right, due to the following: (1) I wanted one large
dictionary and got many little ones for each item. (2) The probabilities may
not have been fitted into the equation correctly, as this is my first attempt
at this problem.
Any suggestion? Thanks.
Answer: I am not an NLP expert, but your equation looks fine. The implementation has a
subtle bug. Consider the below precedence deep dive:
"""Precendence deep dive"""
'hi' and True #returns true regardless of the contents of the string
'hi' and False #returns false
b = ('hi','bob')
'hi' and 'bob' in b #returns true BUT not because 'hi' is in b!!!
'hia' and 'bob' in b #returns true as the precedence is 'hia' and ('bob' in b)
result2 = 'bob' in b
'hia' and result2 #returns true and shows the precedence more clearly
'hi' and 'boba' in b #returns false
#each string needs to check in b
'hi' in b and 'bob' in b #return true!!
'hia' in b and 'bob' in b #return false!!
'hi' in b and 'boba' in b #return false!! - same as before but now each string is checked separately
Notice the difference between u and v below: u is computed with the wrong
precedence and v with the right precedence
from nltk import bigrams
import collections
import math
a= """When the defendant and his lawyer walked into the court, some of the victim supporters turned their backs on him. if we have more data then it will be more interesting because we have more chance to repeat bigrams. After some of the victim supporters turned their backs then a subset of the victim supporters turned around and left the court."""
a1=a.split()
a2=collections.Counter(a1)
a3=collections.Counter(bigrams(a1))
a4=sum([a2[x]for x in a2])
a5=sum([a3[x]for x in a3])
a6={x:float(a2[x])/a4 for x in a2} # word probabilities(w1 and w2)
a7={x:float(a3[x])/a5 for x in a3} # joint probabilities (w1&w2)
u = {}
v = {}
for x in a6:
k={x:round(math.log((a7[b]/(a6[x] * a6[y])),2),4) for b in a7 for y in a6 if x and y in b}
u[x] = k[x]
k={x:round(math.log((a7[b]/(a6[x] * a6[y])),2),4) for b in a7 for y in a6 if x in b and y in b}
v[x] = k[x]
u['the']
v['the']
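Another way to sidestep the precedence pitfall entirely is to loop over the bigram counts once and build a single PMI dictionary, as the question wanted. A sketch of the same formula:

```python
import math
from collections import Counter

def pmi_scores(text):
    """Return one dict mapping each bigram (w1, w2) to PMI(w1, w2)."""
    words = text.split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    n_uni = float(sum(unigrams.values()))
    n_bi = float(sum(bigrams.values()))
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_joint = count / n_bi          # p(w1 & w2)
        p1 = unigrams[w1] / n_uni       # p(w1)
        p2 = unigrams[w2] / n_uni       # p(w2)
        scores[(w1, w2)] = math.log(p_joint / (p1 * p2), 2)
    return scores

scores = pmi_scores("the cat sat on the mat")
print(scores[('the', 'cat')])
```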
|
Super object has no attribute 'append'
Question: I am working on creating a class for the first time, and I am facing
difficulties here and there, first read my code, and I will post the error
after it
import random
class card_deck:
suites= ["Clubs", "Diamonds", "Hearts", "Spades"]
ranks= ["Ace", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "Jack", "Queen", "King"]
def __init__(self, rank, suite, card):
self.rank= rank
self.suite = suite
self.card = card
def card_list(self):
suites= ["Clubs", "Diamonds", "Hearts", "Spades"]
ranks= ["Ace", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "Jack", "Queen", "King"]
def ranks(self):
return self.rank
def suite(self):
return self.suite
def card(self,card):
return self.card
def __str__(self):
return (Card.ranks[self.rank],
Card.suits[self.suit])
def value(self):
if self.rank == 'Ace':
return 1
elif self.rank == 'Jack':
return 11
elif self.rank == 'Queen':
return 12
elif self.rank == 'King':
return 13
def shffule(self):
random.shuffle(self.card)
def remove(self,card):
self.card.remove(card)
def __getitem__(self,i):
return self.card_list()
def append(self,value):
super(card_deck,self).append(value)
return self
def cardremaining(self):
self.suite-self.rank
def main():
try:
rank = []
suite = []
card = []
deck = card_deck(rank,suite,card)
deck.shffule()
#drup=[]
for i in ['Spades','Hearts', ' Diamonds','Clubs']:
for c in ['Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King']:
deck.append([c, i])
hand = []
user =eval(input('Enter a number of cards: 1-7 '))
print()
while user <1 or user >7:
print ("Only a number between 1-7:")
return main()
for i in range(user):
hand.append(deck[i])
print(hand)
except ValueError:
print("Only numbers")
main()
Here is what I get when I run main()
Traceback (most recent call last):
File "<pyshell#64>", line 1, in <module>
main() File "/Users/user/Desktop/deck_class.py", line 66, in main
deck.append([c, i])
File "/Users/user/Desktop/deck_class.py", line 44, in append
super(card_deck,self).append(value)
AttributeError: 'super' object has no attribute 'append'
so even if I try to remove super and just write `self.append(value)` I get
another error in which Python keeps printing
File "/Users/user/Desktop/deck_class.py", line 44, in append
card_deck,self.append(value)
File "/Users/user/Desktop/deck_class.py", line 44, in append
I did research before posting the question and tried fixing it myself, but
it just feels too complicated for me, and I am hoping you guys can help! So
what am I doing wrong?
Thanks
Answer: I'm getting the impression that you're trying to make a `card_deck` object
that is _pretending_ to be a list of some sort. I also feel that you're trying
to make your `card_deck` object act as two separate things: a deck of cards,
and a single card.
Taking that in mind, it would be far simpler to take a step back, split up
your code into two separate classes, and do something like the below. I left
comments in the code to explain my thought process:
import random
class Card(object):
'''Remember, a 'card' is completely different from a deck. You can have a
card that is not contained in a deck, and a deck is simply another object
that contains one or more cards, with a few convenience methods attached.'''
def __init__(self, rank, suite):
self.rank = rank
self.suite = suite
def __repr__(self):
'''The difference between '__repr__' and '__str__' is not particularly
important right now. You can google the difference yourself.'''
return "Card({0}, {1})".format(self.rank, self.suite)
def __str__(self):
return "{0} of {1}".format(self.rank, self.suite)
def value(self):
ranks = ["Ace", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "Jack", "Queen", "King"]
# This is something called a 'dictionary comprehension'. It lets you
# map the rank of the card to its corresponding value.
#
# 'enumerate' is a built-in. Try doing `enumerate(["a", "b", "c", "d"])`
# in the shell, and see what happens.
values = {rank: i + 1 for (i, rank) in enumerate(ranks)}
return values[self.rank]
class Deck(object):
'''Now, we have the deck.'''
def __init__(self):
self.suites = ["Clubs", "Diamonds", "Hearts", "Spades"]
self.ranks = ["Ace", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "Jack", "Queen", "King"]
# Here, I've chosen to create a full deck when instantiating the object.
# You may choose to modify your code to pass in the cards you want instead.
#
# Notice how we're keeping track of all of our cards inside of a list.
# This way, we're free to write whatever methods we want, while still
# internally representing our deck of cards in the cleanest manner possible.
self.cards = []
for suite in self.suites:
for rank in self.ranks:
self.cards.append(Card(rank, suite))  # note: Card takes (rank, suite)
def shuffle(self):
random.shuffle(self.cards)
def remove(self, card):
# idk if this will actually work -- you should test it.
self.cards.remove(card)
def append(self, card):
'''In this method, we're taking a card, and adding it to our deck.
We've written the entire thing ourselves -- no need to call to super
(which doesn't work, in any case)'''
self.cards.append(card)
def get_top_card(self):
'''This is a common operation when dealing with decks -- why not add it?'''
return self.cards.pop()
def __repr__(self):
return "[" + ", ".join(repr(card) for card in self.cards) + "]"
def __str__(self):
return '\n'.join(str(card) for card in self.cards)
def main():
deck = Deck()
deck.shuffle()
hand = []
while True:
user = int(input('Enter a number of cards: 1-7 '))
print()
if not 1 <= user <= 7:
print ("Only a number between 1-7:")
else:
break
for i in range(user):
hand.append(deck.get_top_card())
print(hand)
if __name__ == '__main__':
main()
Now, you may be wondering what `super(card_deck,self).append(value)` was
actually doing in your original example. Calling `super(card_deck, self)` will
return the **parent class** of the `card_deck` class -- in other words, the
class `card_deck` inherits from.
In this case, your class isn't inheriting anything (technically, it inherits
the built-in "object" class, but every class inherits from `object`, so that's
moot).
Then, when you call `append`, you're trying to call the `append` method that
exists inside the parent class of `card_deck`. However, no such method exists,
so your code throws an exception.
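To see when such a super call would succeed, here is a hypothetical variant in which the class really does inherit from list, so the parent actually provides an append method:

```python
class CardDeck(list):
    def append(self, value):
        # Now super() resolves to list, which *does* define append.
        super(CardDeck, self).append(value)
        return self

deck = CardDeck()
deck.append(('Ace', 'Spades'))
print(deck)  # [('Ace', 'Spades')]
```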
In your case, since you're just beginning to use classes, I would strongly
recommend that you ignore inheritance and the 'super' builtin function for
now. It's overkill, and will only confuse you when you're trying to get a grip
on object-oriented programming. Focus instead on writing good objects that
provide good methods that manipulate variables you define yourself.
|
got errors when import mod_python
Question: After finishing installing mod_python, I got a 500 Internal Server Error. I looked
up the log, and it says: python_handler: Can't get/create interpreter.
Then I opened a python terminal to test whether I could import mod_python, and got
the following errors:
>>> import mod_python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/mod_python/__init__.py", line 25, in <module>
import version
File "/usr/local/lib/python2.7/dist-packages/mod_python/version.py", line 3
version = "fatal: Not a git repository (or any of the parent directories): .git
^
SyntaxError: EOL while scanning string literal
I installed mod_python with the option --with-python=/usr/bin/python, which is
version 2.7.3.
Any ideas why this happens? Thanks ahead!
EDIT: I tried to reinstall mod_python with python2.6 and found I had missed a
SyntaxError printed during installation.
SyntaxError: ('EOL while scanning string literal', ('/usr/local/lib/python2.6/site-packages/mod_python/version.py', 3, 79, 'version = "fatal: Not a git repository (or any of the parent directories): .git\n'))
This error did appear during the installation.
Answer: I had the same problem using mod_python-3.5.0. The problem seems to be in the
dist/version.sh file which runs git, but since the distribution doesn't have
the .git repository structure, the error is generated.
I fixed version.sh by just replacing the offending line with an empty
definition:
#GIT=`git describe --always`
GIT=
|
How can I terminate
Question: I am learning socket programming and python. For a school assignment, I
created a server that runs, but I don't know how to terminate it. The
instruction says that my server runs repeatedly until terminated by a
supervisor (don't leave open sockets). Can someone give me an example or point
me to the right direction? Thanks so much!
Here is a portion of my code:
import socket
import sys
import os
def main():
#HOST = ''
#PORT = 8888
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
try:
server_socket.bind((HOST, PORT)) #bind to a address(and port)
except socket.error, msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
print 'Socket bind complete'
#put the socket in listening mode
server_socket.listen(10) #maximum 10 connections
print 'TCP Server Waiting for client on port 30021'
#wait to accept a connection - blocking call
client, addr = server_socket.accept()
#display client information
print 'Connected with ' + addr[0] + ':' + str(addr[1])
try:
#keep talking with the client
while 1:
#Receiving from client
data = client.recv(1024)
if not data:
break
#DO SOMETHING HERE
except KeyboardInterrupt:
print "Exiting gracefully."
finally:
server_socket.close()
if __name__ == "__main__":
main()
Answer: If you're running it interactively (that is, you started it with e.g. `python
myprogram.py` or `./myprogram.py` and you have a console where you can see its
output), you should be able to send it an interrupt by pressing `CTRL`+`C`.
You should then see the “exiting gracefully” message and it should terminate.
If you're running it some other way, how to terminate it depends on what
platform you're using. If you're on Windows, you should be able to find the
`python.exe` or `pythonw.exe` process and press `End Process` in Task Manager.
If you're on a POSIX system, you might be able to find the process with `ps`
and end it with `kill -INT id`, where `id` is the process ID you obtained from
`ps`.
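On a POSIX system you can convince yourself that `kill -INT` takes the same code path as `CTRL`+`C`: sending SIGINT to your own process raises KeyboardInterrupt, so the server's cleanup block runs (this sketch assumes Linux/macOS; os.kill behaves differently on Windows):

```python
import os
import signal

# `kill -INT <pid>` delivers SIGINT, which Python turns into a
# KeyboardInterrupt -- the same exception CTRL+C raises -- so the
# server's `except KeyboardInterrupt` block still executes.
try:
    os.kill(os.getpid(), signal.SIGINT)
except KeyboardInterrupt:
    print("Exiting gracefully.")
```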
|
Using Hadoop to run a jar file - Python
Question: I have an existing Python program that has a sequence of operations that goes
something like this:
1. Connect to MySQL DB and retrieve files into local FS.
2. Run a program X that operates on these files. Something like: `java -jar X.jar <folder_name>` This will open every file in the folder and perform some operation on them and writes out an equal number of transformed files into another folder.
3. Then, run a program Y that operates on _these_ files as: `java -jar Y.jar <folder_name>` This creates multiple files of one line each which are then merged into a single file using a merge function.
4. This merged file is then the input for some further operations and analyses that is not really important for this question.
I'd like to make use of Hadoop to speed up operation Y, as it takes very
long to complete if there are: a) a large number of files, or b) large input files
to be operated upon.
What I'd like to know is if it is a good idea to go with Hadoop in the first
place to do something of this nature and if threads would make more sense in
this case. Bear in mind that X and Y are things that **cannot** be replaced or
changed in any way.
I came up with this idea:
1. After step 2 above, within a mapper, copy the files into the HDFS and back out again, run the jar file, at which point the results are written back into the HDFS. I copy the results back out to the File System and send it for further processing.
I would like to know if this makes sense at all and especially given that the
mapper expects a (key,value) pair, would I even have a k-v pair in this
scenario?
I know this sounds like a project and that's because it is, but I'm not
looking for code, just some guidance about whether or not this would even work
and if it did, what is the right way of going about doing this if my proposed
solution is not accurate (enough).
Thank you!
Answer: You absolutely can use the hadoop mapreduce framework to complete your work, but
the answer to whether it's a good idea could be "it depends". It depends on the
number and sizes of the files you want to process.
Keep in mind that hdfs is not very good at dealing with small files; it could be
a disaster for the namenode if you have a large number (say 10 million) of
small files (size less than 1k bytes). On the other hand, if the sizes are
very large but only a few files need to be processed, it is not ideal to just
wrap step #2 directly in a mapper, because the job won't be spread widely and
evenly (in this situation I guess the key-value pair can only be "file no. - file
content" or "file name - file content", given you mentioned X can't be changed in
any way. Actually, "line no. - line" would be more suitable)
BTW, there are 2 ways to utilize the hadoop mapreduce framework. One way is to
write the mapper/reducer in Java and compile them into a jar, then run the mapreduce
job with `hadoop jar your_job.jar`. The other way is
[streaming](http://hadoop.apache.org/docs/r0.19.1/cn/streaming.html), which lets you
write the mapper/reducer in Python.
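As a sketch of the streaming route, a mapper is just a script that reads lines from stdin and writes tab-separated key/value pairs to stdout. The key choice below (first token of each line) is purely illustrative, and the stdin/stdout plumbing Hadoop would provide is simulated with StringIO so the sketch is self-contained:

```python
import io

def mapper(stream, out):
    # Emit "first_token<TAB>1" for every non-empty input line; in a real
    # streaming job, `stream` is sys.stdin and `out` is sys.stdout.
    for line in stream:
        line = line.strip()
        if line:
            out.write(u"%s\t%s\n" % (line.split()[0], 1))

# Simulate the plumbing Hadoop streaming would provide:
src = io.StringIO(u"hello world\nfoo bar\n")
dst = io.StringIO()
mapper(src, dst)
print(dst.getvalue())
```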
|
psutil module not fully working on debian 7
Question: I'm trying to get the amount of free memory in the system using Python, basically the same info that I can get from `cat /proc/meminfo | grep MemFree`.
>>> import psutil
>>> psutil.NUM_CPUS # this works fine
2
>>> psutil.virtual_memory() # this fails
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'virtual_memory'
i'm using python 2.7.3
**update**
>>> psutil.__version__
'0.5.0'
Answer:
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> psutil.NUM_CPUS # this works fine
4
>>> psutil.virtual_memory() # this fails
vmem(total=4042084352L, available=1697619968L, percent=58.0, used=3149373440L, free=892710912L, active=2016649216, inactive=835248128, buffers=55672832L, cached=749236224)
>>> quit()
~$ cat /proc/meminfo | grep MemFree
MemFree: 876836 kB
~$ python -c "print 892710912/1024"
871788
~$ python -c "import psutil;print psutil.__version__"
1.1.3
Possibly you need to run:
sudo pip install psutil --upgrade
Note that you will never get exactly the same answers as you are running
python in one case and not in the other.
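If upgrading isn't an option, the same number can also be read without psutil by parsing /proc/meminfo directly (Linux only; the helper name below is made up):

```python
def mem_free_kb(path="/proc/meminfo"):
    # Parse the MemFree line directly; the second field is the value in kB.
    with open(path) as f:
        for line in f:
            if line.startswith("MemFree:"):
                return int(line.split()[1])
    raise RuntimeError("MemFree not found in %s" % path)

print(mem_free_kb())  # Linux only
```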
|
IRC Client in Python; not a IRC Bot
Question: I've searched extensively on this and only came up with bot-specific clients.
I know they're basically the same, but for what I want to do I just can't
figure it out.
I am trying to program a python client that is pure and simple on its face.
For example when run from the command line it will immediately connect
invisibly and then print "say: " after it has established the connection.
(username, server, etc. are already set) It will wait until the user has
pressed -return/enter- and then it will send the message and then retrieve the
messages that it has been logging to an array and print the last 8 lines/array
values.
So the user will only see the messages on the channel, and be able to send
messages. My issue is figuring out how to strip everything but the messages.
I'm so sorry if I seem like I'm wanting someone to do this for me. I've looked
and looked but I can't figure it out. Do I have to use a IRC module? I'm
pretty sure that I'll end up using regex somehow, but I'm stumped. Thanks guys
for reading my silly question!
import sys
import socket
import string
HOST="irc.freenode.net"
PORT=6667
NICK="MauBot"
IDENT="maubot"
REALNAME="MauritsBot"
readbuffer=""
s=socket.socket( )
s.connect((HOST, PORT))
s.send("NICK %s\r\n" % NICK)
s.send("USER %s %s bla :%s\r\n" % (IDENT, HOST, REALNAME))
while 1:
readbuffer=readbuffer+s.recv(1024)
temp=string.split(readbuffer, "\n")
readbuffer=temp.pop( )
for line in temp:
line=string.rstrip(line)
line=string.split(line)
if(line[0]=="PING"):
s.send("PONG %s\r\n" % line[1])
Answer: You can connect to an IRC server using `Telnet`.
There are libraries in Python to accomplish this.
See these links for more information:
> > [Python Telnet
> connection](http://stackoverflow.com/questions/4528831/python-telnet-
> connection)
>
> > [Using Telnet in Python](http://www.pythonforbeginners.com/code-snippets-
> source-code/python-using-telnet)
>
> > [Python Doco: 20.14. telnetlib — Telnet
> client](http://docs.python.org/2/library/telnetlib.html)
Once you have connected to the IRC server, you can:
1. Output whatever commands you need to in the telnet session (such as login credentials).
2. Join a channel.
3. Write out the lines that you need to.
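Once connected, the "strip everything but the messages" part comes down to matching PRIVMSG lines, which arrive from the server in a fixed shape. A sketch (the nick and channel are made up):

```python
import re

# A raw channel message from the server looks like:
#   :nick!user@host PRIVMSG #channel :the actual message
PRIVMSG = re.compile(r"^:(?P<nick>[^!]+)!\S+ PRIVMSG (?P<chan>\S+) :(?P<msg>.*)")

line = ":alice!alice@example.com PRIVMSG #python :hello there"
m = PRIVMSG.match(line)
if m:
    print("<%s> %s" % (m.group("nick"), m.group("msg")))  # <alice> hello there
```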
|
python pandas text block to data frame mixed types
Question: I am a python and pandas newbie. I have a text block that has data arranged in
columns. The data in the first six columns are integers and the rest are
floating point. I tried to create two DataFrames that I could then
concatenate:
sect1 = DataFrame(dtype=int)
sect2 = DataFrame(dtype=float)
i = 0
# The first 26 lines are header text
for line in txt[26:]:
colmns = line.split()
sect1[i] = colmns[:6] # Columns with integers
sect2[i] = colmns[6:] # Columns with floating point
i += 1
This causes an AssertionError: Length of values does not match length of index
Here are two lines of data
2013 11 15 0000 56611 0 1.36e+01 3.52e-01 7.89e-02 4.33e-02 3.42e-02 1.76e-02 2.89e+04 5.72e+02 -1.00e+05
2013 11 15 0005 56611 300 1.08e+01 5.50e-01 2.35e-01 4.27e-02 3.35e-02 1.70e-02 3.00e+04 5.50e+02 -1.00e+05
Thanks in advance for the help.
Answer: You can use Pandas [csv parser](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.io.parsers.read_csv.html#pandas-io-parsers-read-
csv) along with
[StringIO](http://docs.python.org/2/library/stringio.html#StringIO.StringIO).
[An example in pandas documentation.](http://pandas.pydata.org/pandas-
docs/stable/io.html)
For your sample, that will be:
>>> import pandas as pd
>>> from StringIO import StringIO
>>> data = """2013 11 15 0000 56611 0 1.36e+01 3.52e-01 7.89e-02 4.33e-02 3.42e-02 1.76e-02 2.89e+04 5.72e+02 -1.00e+05
... 2013 11 15 0005 56611 300 1.08e+01 5.50e-01 2.35e-01 4.27e-02 3.35e-02 1.70e-02 3.00e+04 5.50e+02 -1.00e+05"""
Load data
>>> df = pd.read_csv(StringIO(data), sep=r'\s+', header=None)
Convert first three rows to datetime (optional)
>>> df[0] = df.iloc[:,:3].apply(lambda x:'{}.{}.{}'.format(*x), axis=1).apply(pd.to_datetime)
>>> del df[1]
>>> del df[2]
>>> df
0 3 4 5 6 7 8 9 10 \
0 2013-11-15 00:00:00 0 56611 0 13.6 0.352 0.0789 0.0433 0.0342
1 2013-11-15 00:00:00 5 56611 300 10.8 0.550 0.2350 0.0427 0.0335
11 12 13 14
0 0.0176 28900 572 -100000
1 0.0170 30000 550 -100000
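If the HHMM column (column 3 in this sample) should survive the conversion as well, the four fields can be concatenated and parsed with an explicit format string. A sketch against the same sample data:

```python
import io
import pandas as pd

data = u"""2013 11 15 0000 56611 0 1.36e+01
2013 11 15 0005 56611 300 1.08e+01"""
df = pd.read_csv(io.StringIO(data), sep=r"\s+", header=None)

# Combine year/month/day/HHMM into one timestamp, keeping the minutes
# that a date-only conversion would discard.
stamp = (df[0].astype(str)
         + df[1].astype(str).str.zfill(2)
         + df[2].astype(str).str.zfill(2)
         + df[3].astype(str).str.zfill(4))
df[0] = pd.to_datetime(stamp, format="%Y%m%d%H%M")
print(df[0].iloc[1])  # 2013-11-15 00:05:00
```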
|
Save data to VTK using Python and tvtk with more than one vector field
Question: I'm trying to save three sets of vector quantities corresponding to the same
structured grid (velocity, turbulence intensity and standard deviation of
velocity fluctuations). Ideally, I'd like them to be a part of the same vtk
file but so far I have only been able to get one of them into the file like
so:
sg = tvtk.StructuredGrid(dimensions=x.shape, points=pts)
sg.point_data.vectors = U
sg.point_data.vectors.name = 'U'
write_data(sg, 'vtktestWake.vtk')
I've spent the past few hours searching for an example of how to add more than one
vector or scalar field but failed, and so thought I'd ask here. Any guidance
will be most appreciated.
Thanks,
Artur
Answer: After some digging around I found the following solution based on
[this](http://docs.enthought.com/mayavi/mayavi/data.html#imagedata) and
[this](http://docs.enthought.com/mayavi/mayavi/auto/example_atomic_orbital.html#example-
atomic-orbital) example. You have to add the additional data field using the
`add_array` method; see:
from tvtk.api import tvtk, write_data
import numpy as np
data = np.random.random((3,3,3))
data2 = np.random.random((3,3,3))
i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))
i.point_data.scalars = data.ravel()
i.point_data.scalars.name = 'scalars'
i.dimensions = data.shape
# add second point data field
i.point_data.add_array(data2.ravel())
i.point_data.get_array(1).name = 'field2'
i.point_data.update()
write_data(i, 'vtktest.vtk')
|
Making pig embedded with python script and pig cassandra integration to work with oozie
Question: I am new to oozie and I have few problems. 1) I am trying to embed a pig
action in oozie which has a python script import. I've placed the jython.jar
file in the lib path and have an import in the pig script which will take the
python UDFs. I don't seems to get this working. The .py file is not getting
picked up. How to go about this? 2) I have a pig cassandra integration where
in I use the cql to get the data from cassandra using pig and do some basic
transformation. In the CLI I am able to get this working, but on the oozie
front I am not. I can't seem to find the solution (configuration and otherwise) to
do this in oozie. Can anyone please help me with this? Thanks in advance.
Answer: This is solved. Solutions:
1) Put the Python file in the Oozie workflow path and then reference it from
there.
2) Add the Cassandra jar files to the lib folder in Oozie's HDFS path.
|
Python3 write string as binary
Question: For a Python 3 programming assignment I have to work with Huffman coding. It's
simple enough to generate the correct codes which result in a long string of
0's and 1's.
Now my problem is actually writing this string as binary and not as text.
I attempted to do this:
result = "01010101 ... " #really long string of 0's and 1's
filewrt = open(output_file, "wb") #appending b to w should write as binary, should it not?
filewrt.write(result)
filewrt.close()
however I'm still getting a large text file of 0 and 1 characters. How do I fix
this?
EDIT: It seems as if I just simply don't understand how to represent an
arbitrary bit in Python 3.
Based on [this SO
question](http://stackoverflow.com/questions/7213996/convert-binary-sting-
representation-of-a-byte-to-actual-binary-value-in-python) I devised this ugly
monstrosity:
for char in result:
filewrt.write( bytes(int(char, 2)) )
Instead of getting anywhere close to working, it outputted a zeroed file that
was twice as large as my input file. Can someone please explain to me how to
represent bits arbitrarily? And in the context of creating a Huffman tree,
how do I go about concatenating or joining bits based on their leaf locations
if I should not use a string to do so.
Answer:
def intToTextBytes(n, stLen=0):
bs = b''
while n>0:
bs = bytes([n & 0xff]) + bs
n >>= 8
return bs.rjust(stLen, b'\x00')
num = 0b01010101111111111111110000000000000011111111111111
bs = intToTextBytes(num)
print(bs)
open(output_file, "wb").write(bs)
EDIT: A more complicated, but faster (about 3 times) way:
from math import log, ceil
intToTextBytes = lambda n, stLen=0: bytes([
(n >> (i<<3)) & 0xff for i in range(int(ceil(log(n, 256)))-1, -1, -1)
]).rjust(stLen, b'\x00')
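A more direct route for the question's '0'/'1' string is to pack eight bits at a time; this is a hedged sketch (the function name and the right-padding choice are illustrative assumptions, and a decoder must account for the pad bits or rely on the Huffman codes being unambiguous):

```python
def bits_to_bytes(bits):
    """Pack a string of '0'/'1' characters into real bytes.

    The string is right-padded with '0' to a multiple of 8 bits, so a
    decoder must know the true bit length (or the codes must make the
    padding unambiguous).
    """
    padded = bits + '0' * (-len(bits) % 8)
    return bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

packed = bits_to_bytes("0101010111111111")
# open(output_file, "wb").write(packed) would then write true binary data
```

Writing `packed` with a `"wb"` file handle produces a file whose size is one eighth of the character-per-bit version.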
|
How to convert a string with multiple dictionaries, so json.load can parse it?
Question: How can I write a function in Python that will take a string with
multiple dictionaries, one per line, and convert it so that json.loads can
parse the entire string in a single execution?
For example, if the input is (one dictionary per line):
Input = """{"a":[1,2,3], "b":[4,5]}
{"z":[-1,-2], "x":-3}"""
This will not parse with json.loads(Input). I need to write a function to
modify it so that it does parse properly. I am thinking that if the function
could change it to something like this, json will be able to parse it, but I
am not sure how to implement this:
Input2 = """{ "Dict1" : {"a":[1,2,3], "b":[4,5]},
"Dict2" : {"z":[-1,-2], "x":-3} }"""
Answer:
    >>> import json
    >>>
    >>> dict_str = """{"a":[1,2,3], "b":[4,5]}
    ... {"z":[-1,-2], "x":-3}"""
    >>>
    >>> # strip the whitespace away while making a list from the lines in dict_str
    >>> dict_list = [d.strip() for d in dict_str.splitlines()]
    >>>
    >>> dict_list
    ['{"a":[1,2,3], "b":[4,5]}', '{"z":[-1,-2], "x":-3}']
    >>>
    >>> j = [json.loads(i) for i in dict_list]
    >>> j
    [{u'a': [1, 2, 3], u'b': [4, 5]}, {u'x': -3, u'z': [-1, -2]}]
Not in function form like you requested, but the code would be almost the
same. Also, this produces the dicts in a list.
Adding the following might be of use to you:
    >>> d = {'Dict' + str(i + 1): v for i, v in enumerate(j)}
    >>> d
    {'Dict1': {u'a': [1, 2, 3], u'b': [4, 5]}, 'Dict2': {u'x': -3, u'z': [-1, -2]}}
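Wrapped up as the function the question asked for, a sketch might look like this (the `parse_dict_lines` name and the `DictN` key scheme are illustrative choices):

```python
import json

def parse_dict_lines(text):
    """Parse a string with one JSON object per line into a single dict
    keyed 'Dict1', 'Dict2', ... in line order."""
    objs = [json.loads(line) for line in text.splitlines() if line.strip()]
    return {'Dict{0}'.format(i + 1): obj for i, obj in enumerate(objs)}

Input = """{"a":[1,2,3], "b":[4,5]}
{"z":[-1,-2], "x":-3}"""
parsed = parse_dict_lines(Input)
```

`parsed['Dict1']` then holds the first line's object and `parsed['Dict2']` the second.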
|
Python: return list of sequentially occurring common items from lists and also a list of uncommon ones
Question:
lists = [[a,b,c,d],[a,b,c,d,e],[a,b,c,x],[a,b,c,d,e,f]....lots]
common_items = [a,b,c]
uncommon_items = [[d], [d,e], [x], [d,e,f]]
common_elements(lists[0],lists[1])
def common_elements(list1, list2):
return [element for element in list1 if element in list2]
A lot of the answers on SO are only able to do this with two lists at a time. I
need one that can handle any number of lists and returns two lists.
Note: The order of the lists is important, which rules out sets alone. Note: The
common items must be common to every list, not based on just the first list in lists.
Answer: The easiest way is to use sets, but you will lose ordering.
lists = [['a','b','c','d'],
['a','b','c','d','e'],
['a','b','c','x'],
['a','b','c','d','e','f']]
sets = map(set, lists)
common = set.intersection(*sets)
uncommon = [s-common for s in sets]
print common # set(['a', 'c', 'b'])
print uncommon # [set(['d']), set(['e', 'd']), set(['x']), set(['e', 'd', 'f'])]
Sets are the best way to represent common elements. You can maintain the order
of uncommon elements by using a different lists comprehension.
uncommon = [[x for x in l if x not in common] for l in lists]
print uncommon # [['d'], ['d', 'e'], ['x'], ['d', 'e', 'f']]
Assuming the elements of `common` appear in the same order in all lists, you
can then convert the common set to a list.
common = [x for x in lists[0] if x in common]
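The pieces above can be combined into one helper that returns both results at once; the function name `split_common` is an illustrative choice:

```python
def split_common(lists):
    """Return (common, uncommon): common keeps the order of the first
    list, and uncommon mirrors the input lists minus the common items."""
    common_set = set(lists[0]).intersection(*lists[1:])
    common = [x for x in lists[0] if x in common_set]
    uncommon = [[x for x in lst if x not in common_set] for lst in lists]
    return common, uncommon

lists = [['a', 'b', 'c', 'd'],
         ['a', 'b', 'c', 'd', 'e'],
         ['a', 'b', 'c', 'x'],
         ['a', 'b', 'c', 'd', 'e', 'f']]
common, uncommon = split_common(lists)
```

This assumes, as in the answer, that the common items appear in the same order in every list.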
|
How to loop using .split() function on a text file python
Question: I have an HTML file with different team names written throughout the
file. I just want to grab the team names. The team names always occur after
certain text and end before certain text, so I've used the split function to
find the team name. I'm a beginner, and I'm sure I'm making this harder than
it is. `data` is the file contents:
teams = data.split('team-away">')[1].split("</sp")[0]
for team in teams:
print team
This returns each individual character of the first team that it finds (so,
for example, if teams = "San Francisco 49ers", it prints "S", then "a", etc.)
instead of what I need it to do: print "San Francisco 49ers", then on the next
line the next team, "Carolina Panthers", etc.
Thank you!
Answer: "I'm a beginner, and I'm sure I'm making this harder than it is."
Well, kind of.
import re
    teams = re.findall('team-away">(.*?)</sp', data)
(with credit to Kurtis, for a simpler regular expression than I originally
had)
Though an actual [HTML
parser](http://docs.python.org/2/library/htmlparser.html) would be best
practice.
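For reference, a sketch of the parser-based route using the standard library (Python 3's `html.parser`; on Python 2 the module is `HTMLParser`). The `span` tag and exact `class` attribute here are assumptions inferred from the question's `team-away">` snippet:

```python
from html.parser import HTMLParser  # 'HTMLParser' module on Python 2

class TeamParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_team = False
        self.teams = []

    def handle_starttag(self, tag, attrs):
        # assumes markup like <span class="team-away">Name</span>
        if tag == 'span' and ('class', 'team-away') in attrs:
            self.in_team = True

    def handle_endtag(self, tag):
        if tag == 'span':
            self.in_team = False

    def handle_data(self, data):
        if self.in_team:
            self.teams.append(data)

parser = TeamParser()
parser.feed('<span class="team-away">San Francisco 49ers</span>'
            '<span class="team-away">Carolina Panthers</span>')
for team in parser.teams:
    print(team)
```

Unlike the regex, this keeps working if attribute order or whitespace in the markup changes.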
|
PySide. Extracting DOM HTML. AccessNetworkmanager
Question: I need to extract all calendar data from a page like
"<http://www.dukascopy.com/swiss/english/marketwatch/calendars/eccalendar/>".
First, I need to extract all the HTML with the inner DOM. I am using Eclipse and
Python 3.3 on Windows 7. I searched answers here and coded something based on
them. It looks like:
from PySide import QtGui, QtDeclarative
from PySide.QtGui import QApplication, QDesktopServices, QImage, QPainter
from PySide.QtCore import QByteArray, QUrl, QTimer, QEventLoop, QIODevice, QObject
from PySide.QtWebKit import QWebFrame, QWebView, QWebPage, QWebSettings
from PySide.QtNetwork import QNetworkAccessManager, QNetworkProxy, QNetworkRequest, QNetworkReply, QNetworkDiskCache
#!/usr/bin/env python
"""
app = QApplication(sys.argv)
web = QWebView()
web.load(QUrl("http://www.dukascopy.com/swiss/english/marketwatch/calendars/eccalendar/"))
web.show()
sys.exit(app.exec_())
"""
app = QApplication(sys.argv)
w = QWebView()
request = QNetworkRequest(QUrl("http://www.dukascopy.com/swiss/english/marketwatch/calendars/eccalendar/"))
reply = w.page().networkAccessManager().get(request)
print(reply)
byte_array = reply.readAll()
plist = reply.rawHeaderList()
print(plist)
print(byte_array)
When loading the page into QWebView() it works fine (the commented code), but I
couldn't find how to extract all the HTML from QWebView(). So I tried via a
"request" (the uncommented code), and nothing prints.
Answer: Try with signals:
    def print_content():
        print(web.page().mainFrame().toHtml())  # or toPlainText()
        # or
        # print(web.page().currentFrame().toHtml())  # or toPlainText()
and
web.page().mainFrame().loadFinished.connect(print_content)
# or web.loadFinished.connect(print_content)
web.load(QUrl("http://www.dukascopy.com/swiss/english/marketwatch/calendars/eccalendar/"))
web.show()
`print_content` should be called when the `loadFinished` signal arrives
|
error: unpack requires a string argument of length 8
Question: I was running my script and I stumbled upon on this error
WARNING *** file size (24627) not 512 + multiple of sector size (512)
WARNING *** OLE2 inconsistency: SSCS size is 0 but SSAT size is non-zero
Traceback (most recent call last):
File "C:\Email Attachments\whatever.py", line 20, in <module>
main()
File "C:\Email Attachments\whatever.py", line 17, in main
csv_from_excel()
File "C:\Email Attachments\whatever.py", line 7, in csv_from_excel
sh = wb.sheet_by_name('B2B_REP_YLD_100_D_SQ.rpt')
File "C:\Python27\lib\site-packages\xlrd\book.py", line 442, in sheet_by_name
return self.sheet_by_index(sheetx)
File "C:\Python27\lib\site-packages\xlrd\book.py", line 432, in sheet_by_index
return self._sheet_list[sheetx] or self.get_sheet(sheetx)
File "C:\Python27\lib\site-packages\xlrd\book.py", line 696, in get_sheet
sh.read(self)
File "C:\Python27\lib\site-packages\xlrd\sheet.py", line 1055, in read
dim_tuple = local_unpack('<ixxH', data[4:12])
error: unpack requires a string argument of length 8
I was trying to process this excel file.
<https://drive.google.com/file/d/0B12NevhOGQGRMkRVdExuYjFveDQ/edit?usp=sharing>
One solution that I found is that I have to open manually the spreadsheet,
save it, then close it before I run my script of converting .xls to .csv. I
find this solution a bit cumbersome and clunky.
This kind of spreadsheet is saved daily in my drive via an Outlook Macro.
Unprocessed data is increasing that's why I turned into scripting to ease the
job.
Answer: Who made the Outlook macro that's dumping this file? `xlrd` uses byte level
unpacking to read in the Excel file, and is failing to read a field in this
excel file. There are ways to follow where its failing, but none to
automatically recover from this type of error.
The erroneous data seems to be at `data[4:12]` of a specific frame (we'll see
later), which should be a bytestring that's parsed as such:
1. one integer (i)
2. 2 pad bytes (xx)
3. unsigned short 2 byte integer (H).
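The 8-byte requirement can be illustrated with the `struct` module on its own; this is a standalone sketch of the `'<ixxH'` format, not xlrd's actual code:

```python
import struct

# '<ixxH': little-endian; 4-byte signed int, two pad bytes, 2-byte unsigned short
assert struct.calcsize('<ixxH') == 8

packed = struct.pack('<ixxH', 1024, 16)              # exactly 8 bytes
assert struct.unpack('<ixxH', packed) == (1024, 16)

# handing it fewer than 8 bytes reproduces the error in the traceback
try:
    struct.unpack('<ixxH', packed[:5])
except struct.error as exc:
    print(exc)  # message wording varies between Python versions
```

So xlrd got a `data[4:12]` slice shorter than 8 bytes, meaning the record itself is truncated.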
You can set xlrd to `DEBUG` mode, which will show you which bytes it's parsing,
and exactly where in the file there is an error:
import xlrd
xlrd.DEBUG = 2
workbook = xlrd.open_workbook(u'/home/sparker/Downloads/20131117_040934_B2B_REP_YLD_100_D_LT.xls')
Here's the results, slightly trimmed down for the sake of SO:
parse_globals: record code is 0x0293
parse_globals: record code is 0x0293
parse_globals: record code is 0x0085
CODEPAGE: codepage 1200 -> encoding 'utf_16_le'
BOUNDSHEET: bv=80 data '\xfd\x04\x00\x00\x00\x00\x18\x00B2B_REP_YLD_100_D_SQ.rpt'
BOUNDSHEET: inx=0 vis=0 sheet_name=u'B2B_REP_YLD_100_D_SQ.rpt' abs_posn=1277 sheet_type=0x00
parse_globals: record code is 0x000a
GET_SHEETS: [u'B2B_REP_YLD_100_D_SQ.rpt'] [1277]
GET_SHEETS: sheetno = 0 [u'B2B_REP_YLD_100_D_SQ.rpt'] [1277]
reqd: 0x0010
getbof(): data='\x00\x06\x10\x00\xbb\r\xcc\x07\x00\x00\x00\x00\x06\x00\x00\x00'
getbof(): op=0x0809 version2=0x0600 streamtype=0x0010
getbof(): BOF found at offset 1277; savpos=1277
BOF: op=0x0809 vers=0x0600 stream=0x0010 buildid=3515 buildyr=1996 -> BIFF80
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.macosx-10.4-x86_64/egg/xlrd/__init__.py", line 457, in open_workbook
File "build/bdist.macosx-10.4-x86_64/egg/xlrd/__init__.py", line 1007, in get_sheets
File "build/bdist.macosx-10.4-x86_64/egg/xlrd/__init__.py", line 998, in get_sheet
File "build/bdist.macosx-10.4-x86_64/egg/xlrd/sheet.py", line 864, in read
struct.error: unpack requires a string argument of length 8
Specifically, you can see that it parses the sheet name,
`u'B2B_REP_YLD_100_D_SQ.rpt'`.
Let's check the source code. The traceback throws an error
[here](https://github.com/python-excel/xlrd/blob/master/xlrd/sheet.py#L1058)
where we can see from [the parent loop](https://github.com/python-
excel/xlrd/blob/master/xlrd/sheet.py#L1048) that we're trying to parse the
`XL_DIMENSION` and `XL_DIMENSION2` values. These directly correspond to the
shape of the Excel Sheet.
And that's where there's a problem in your workbook. It's not being made
correctly. So, back to my original question, who made the excel macro? It
needs to be fixed. But that's for another SO question, some other time.
|
Keep a Frame in another window's Frame
Question: My program creates a Frame with three panels in a horizontal box
sizer, and a menu with a "new window" item that creates a second Frame. I give
the second panel as the parent of the second window. I want the second Frame to
stay within the second panel's area of my first frame: if the user moves either
of the two windows, the second one should stay within the panel's screen area.
Do you know a way to do that? I tried a little something, but the result is not
very aesthetic.
Here is my code:
import wx
class MainWindow(wx.Frame):
def __init__(self,parent,id):
wx.Frame.__init__(self,parent,id,'Python Test App',size=(600,400))
#Widgets
panel_gch = wx.Panel(self,-1,size = (150,-1))
panel_gch.SetBackgroundColour('white')
self.panel=wx.Panel(self,-1,size=(300,400))
self.panel.SetBackgroundColour((200,230,200))
panel_drt = wx.Panel(self,-1,size = (150,-1))
panel_drt.SetBackgroundColour('white')
box = wx.BoxSizer(wx.HORIZONTAL)
self.SetSizer(box)
#Add
box.Add(panel_gch,0,wx.EXPAND)
box.Add(self.panel,1,wx.EXPAND)
box.Add(panel_drt,0,wx.EXPAND)
#Menu
status=self.CreateStatusBar()
menubar=wx.MenuBar()
file_menu=wx.Menu()
ID_FILE_NEW = 1
file_menu.Append(ID_FILE_NEW,"New Window","This is a new window")
menubar.Append(file_menu,"File")
self.SetMenuBar(menubar)
#bind and layout
self.Bind(wx.EVT_MENU, self.get_new_window)
panel_gch.Layout()
self.panel.Layout()
panel_drt.Layout()
self.Layout()
def get_new_window(self,event): # create new window
self.new = NewWindow(self.panel,-1)
self.new.Show(True)
self.new.Bind(wx.EVT_MOVE,self.window2_on_move)
def window2_on_move(self,event): # Window2 must stay in
x, y = event.GetPosition()
v,w =self.panel.GetScreenPosition()
s,t = self.panel.GetClientSizeTuple()
if x < v:
self.new.Move((v,-1))
if y < w:
self.new.Move((-1,w))
if x+200 > v+s:
self.new.Move((v+s-200,-1))
if y+200 > w+t:
self.new.Move((-1,w+t-200))
class NewWindow(wx.MiniFrame):
def __init__(self,MainWindow,id):
wx.MiniFrame.__init__(self, MainWindow, id, 'New Window', size=(200,200),\
            style = wx.MINIMIZE | wx.CAPTION | wx.CLOSE_BOX)
self.CenterOnParent()
if __name__=='__main__':
app=wx.PySimpleApp()
frame=MainWindow(parent=None,id=-1)
frame.Show()
app.MainLoop()
Answer: What you probably want is AUI. I personally recommend the wx.lib.agw.aui set
rather than wx.aui as the former is pure Python and has had a LOT more recent
work done on it. There are multiple examples in the wxPython demo package. You
can also read about it here:
* <http://wxpython.org/Phoenix/docs/html/lib.agw.aui.framemanager.AuiManager.html>
|
Class scopes in Python Bottle routes
Question: Inside a bottle route I am instantiating a class.
Feasibly, this page may get called simultaneously and need to create
simultaneous instances of this class named "newuser" in the function.
I wanted to make sure there won't be conflicts since all instances are
assigned the name "newuser" by the function.
I think this is fine since the instance is created within the function call and
its scope should only be local to the function?
from bottle import route, run
class user:
def __init__(self,id, name):
self.id = id
self.name = name
#Do some stuff that takes a while.
@route('/user/<id>/<name>', method = 'POST')
    def test(id, name):
        newuser = user(id, name)
run(host='localhost', port=8080, debug=True)
Answer: This is indeed fine; the `newuser` name is entirely local to the `test()`
function scope. The instances will not be shared between calls to that route.
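To convince yourself of this outside of bottle, here is a toy sketch (the class and function names are illustrative, not bottle API): each call binds its own local instance, so concurrent requests never clash over the name.

```python
class User(object):
    def __init__(self, id, name):
        self.id = id
        self.name = name

def handler(id, name):
    newuser = User(id, name)   # bound in this call's own stack frame
    return newuser

a = handler(1, 'alice')
b = handler(2, 'bob')
assert a is not b                           # two calls, two distinct instances
assert (a.name, b.name) == ('alice', 'bob')
```

Each invocation gets a fresh frame, so `newuser` in one call is invisible to every other call.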
|
Python - Remove whitespaces and punctuation without functions
Question: First of all, sorry for my bad English. I'm a beginner programmer and
I have some problems with my Python program. I have to make a program that
normalizes the whitespace and the punctuation, for example:
If I put in a string like
" hello how, are u? "
The new string has to be...
"Hello how are u"
But in my code, the result appears like this and I don't know why:
"helloo how,, aree u??"
**Note:** I can't use any kind of function like split(), strip(), etc...
Here is my code:
from string import punctuation
print("Introduce your string: ")
string = input() + " "
word = ""
new_word = ""
final_string = ""
#This is the main code for the program
for i in range(0, len(string)):
if (string[i] != " " and (string[i+1] != " " or string[i+1] != punctuation)):
word += string[i]
if (string[i] != " " and (string[i+1] == " " or string[i+1] == punctuation)):
word += string[i] + " "
new_word += word
word = ""
#This destroys the last whitespace
for j in range(0,len(new_word)-1):
final_string += new_word[j]
print(final_string)
Thank you all.
**EDIT:**
Now i have this code:
letter = False
for element in my_string:
if (element != " " and element != punctuation):
letter= True
word += element
print(word)
But now, the problem is that my program doesn't recognize the punctuation, so
if I put:
"Hello ... how are u?"
It has to be like `"Hellohowareu"`
But it is like:
    "Hello...howareu?"
Answer: I'm not going to write the code for you since this is obviously homework, but
I will give you some hints.
I think your approach of checking the next character is a bit error-prone.
Rather, I would have a flag that you set when you see a space or punctuation.
The next time through the loop, check if the flag is set: if it is, and you
still see a space, then ignore it, otherwise, reset the flag to false.
|
pip-3.3 install MySQL-python
Question: I am getting an error
pip version
pip-3.3 -V pip 1.4.1 from /usr/local/lib/python3.3/site-
packages/pip-1.4.1-py3.3.egg (python 3.3)
How do I install MySQLdb in Python 3.3? Help..
root@thinkpad:~# pip-3.3 install MySQL-python
Downloading/unpacking MySQL-python
Downloading MySQL-python-1.2.4.zip (113kB): 113kB downloaded
Running setup.py egg_info for package MySQL-python
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip_build_root/MySQL-python/setup.py", line 14, in <module>
from setup_posix import get_config
File "./setup_posix.py", line 2, in <module>
from ConfigParser import SafeConfigParser
ImportError: No module named 'ConfigParser'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip_build_root/MySQL-python/setup.py", line 14, in <module>
from setup_posix import get_config
File "./setup_posix.py", line 2, in <module>
from ConfigParser import SafeConfigParser
ImportError: No module named 'ConfigParser'
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/MySQL-python
Storing complete log in /root/.pip/pip.log
Answer: In python3 `ConfigParser` was renamed to `configparser`. It seems `MySQL-
python` does not support python3. Try:
$ pip install PyMySQL
[PyMySQL](https://github.com/PyMySQL/PyMySQL) is a different module, but it
supports python3.
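The rename the traceback trips over is easy to see with a guarded import, which is also how libraries typically stay compatible with both versions (a general sketch, not a fix for MySQL-python itself):

```python
try:
    from configparser import ConfigParser   # Python 3: lowercase module name
except ImportError:
    from ConfigParser import ConfigParser   # Python 2: CamelCase module name

cp = ConfigParser()
print(cp.sections())  # a fresh parser has no sections yet
```

MySQL-python's `setup_posix.py` does the Python 2 import unconditionally, which is why `pip-3.3` fails at egg_info time.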
|
"No module named time"
Question: I compiled Python from source using:
wget http://python.org/ftp/python/2.6.6/Python-2.6.6.tar.bz2
tar jxvf Python-2.6.6.tar.bz2
cd Python-2.6.6
./configure
make
make install
Version of Python:
as3:~# python -V
Python 2.6.6
I also installed the pip installer, but when I use `pip install xxx`, I always
get the following error:
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.6/site-packages/distribute-0.6.49-py2.6.egg/pkg_resources.py", line 16, in <module>
import sys, os, time, re, imp, types, zipfile, zipimport
ImportError: No module named time
How do I fix this?
Answer: You need to save all the output generated by `configure` in a file and check
whether it tried to build the `time` module and if not, then why not.
Usually, this doesn't happen because of missing header files. Fix these
problems and build Python again.
If you have a package manager, then you should really consider installing
Python from there: It will then come with all the dependencies and all
available modules should just work.
Lastly, make sure you execute the correct executable. To check this, run
Python with an absolute path. To execute it in the current folder, use
`$PWD/python`.
|
Renderer problems using Matplotlib from within a script
Question: I've narrowed it down to this call:
fig.canvas.tostring_argb() #fig=matplotlib.pyplot.figure()
this function raises an `AttributeError` when I run the code as a python
script. `AttributeError: 'FigureCanvasGTKAgg' object has no attribute
'renderer'`
However, this code works properly if run in the `ipython --pylab` command
line.
As far as I can tell from the documentation, the Agg renderer should work OK.
The context is that I'm trying to make a movie from figures, without saving
the frames to disk; as per [this
question](http://stackoverflow.com/questions/4092927/generating-movie-from-
python-without-saving-individual-frames-to-files). I'm using the approach that
streams the pixel arrays to `ffmpeg` (running as a separate process) to do
this, I need the `argb` array of values from the frame.
Is there some configuration setting I can make to get matplotlib to work
correctly from within a script?
_Edit_ Tried `use('Agg')` as per a comment; still fails; this is a minimal
working example.
[dave@dave tools]$ python -c "import matplotlib; matplotlib.use('Agg'); import matplotlib.pyplot; fig=matplotlib.pyplot.figure(); fig.canvas.tostring_argb()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 416, in tostring_argb
return self.renderer.tostring_argb()
AttributeError: FigureCanvasAgg instance has no attribute 'renderer'
Answer: I suspect that you have missed out the call to:
fig.canvas.draw()
before
fig.canvas.tostring_argb()
as
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot
fig=matplotlib.pyplot.figure()
fig.canvas.tostring_argb()
fails for me, but
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot
fig=matplotlib.pyplot.figure()
fig.canvas.draw()
fig.canvas.tostring_argb()
works.
|
python : print output of each thread to separate file, no processes
Question: I have several threads and each thread writes output to stdout.
However, I want to redirect the output of each thread to a separate file,
independently of the others, then merge them to keep the flow of each thread
together.
What I mean is the following:
Thread1 writes every print, every exception and every other output into
file1.log. Thread2 writes every print, every exception and every other output
into file2.log, and so on. So what I'm looking for is to set the stdout for
each thread exclusively. However, setting stdout only works globally, meaning
that Thread1 and Thread2 will always write to the same defined stdout. I have
not found out yet how to do this. I can't use processes, though, because of
another issue.
How can I do that?
Answer: The Python [**`logging`**](http://docs.python.org/2/library/logging.html)
module is [thread-safe](http://docs.python.org/2/library/logging.html#thread-
safety).
Use it to create an individual logger for each thread, and register a
[`FileHandler`](http://docs.python.org/2/library/logging.handlers.html#logging.FileHandler)
to (also) log to a file:
logger = logging.getLogger('thread-1')
file_handler = logging.FileHandler('thread-1.log')
logger.addHandler(file_handler)
Here's a more complete example:
import logging
import random
import threading
import time
NUM_THREADS = 5
def worker(delay, logger):
"""A toy worker function, taking the logger as an argument.
"""
logger.info("Starting work...")
for i in range(3):
logger.info('Sleeping %0.02f', delay)
time.sleep(delay)
logger.info('Done.')
for n in range(1, NUM_THREADS + 1):
# create the thread's logger
logger = logging.getLogger('thread-%s' % n)
logger.setLevel(logging.DEBUG)
# create a file handler writing to a file named after the thread
file_handler = logging.FileHandler('thread-%s.log' % n)
# create a custom formatter and register it for the file handler
formatter = logging.Formatter('(%(threadName)-10s) %(message)s')
file_handler.setFormatter(formatter)
# register the file handler for the thread-specific logger
logger.addHandler(file_handler)
delay = random.random()
t = threading.Thread(target=worker, args=(delay, logger))
t.start()
main_thread = threading.currentThread()
for t in threading.enumerate():
if t is not main_thread:
t.join()
This will give you five logfiles, `thread-1.log` through `thread-5.log`,
containing only the output of the respective thread:
**`thread-1.log`**
(Thread-1 ) Starting work...
(Thread-1 ) Sleeping 0.53
(Thread-1 ) Sleeping 0.53
(Thread-1 ) Sleeping 0.53
(Thread-1 ) Done.
* * *
If you still want to log to the console, simply create a
[`StreamHandler`](http://docs.python.org/2/library/logging.handlers.html#streamhandler)
and attach it to your logger:
stream_handler = logging.StreamHandler()
logger.addHandler(stream_handler)
This will log to `STDERR` by default. If you want `STDOUT`, use
logging.StreamHandler(sys.stdout)
For more information on using the Python `logging` module, see the [Advanced
Logging Tutorial](http://docs.python.org/2/howto/logging.html#logging-
advanced-tutorial).
|
How to use Python kazoo library?
Question: I am planning to use the Python kazoo library for ZooKeeper. This is
really a Python question, not a ZooKeeper one: how do I use kazoo properly?
I am totally new to Python, so I have no idea how to get going and how to use
kazoo to connect to ZooKeeper.
This is the document I was reading to start using kazoo for Zookeeper.
<http://kazoo.readthedocs.org/en/latest/install.html>
In that documentation they ask you to install kazoo, using a pip command for
that.
What does pip do here? I am currently using Windows, so I have Cygwin installed
and Python installed as well. I am using Python 2.7.3.
host@D-SJC-00542612 ~
$ python
Python 2.7.3 (default, Dec 18 2012, 13:50:09)
[GCC 4.5.3] on cygwin
Now what I did is - I copied this command exactly as it is from the above
website - `pip install kazoo` and ran it on my cygwin command prompt.
host@D-SJC-00542612 ~
$ pip install kazoo
Downloading/unpacking kazoo
Running setup.py egg_info for package kazoo
warning: no previously-included files found matching '.gitignore'
warning: no previously-included files found matching '.travis.yml'
warning: no previously-included files found matching 'Makefile'
warning: no previously-included files found matching 'run_failure.py'
warning: no previously-included files matching '*' found under directory 'sw'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*pyo' found anywhere in distribution
Downloading/unpacking zope.interface>=3.8.0 (from kazoo)
Running setup.py egg_info for package zope.interface
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Requirement already satisfied (use --upgrade to upgrade): distribute in c:\python27\lib\site-packages (from zope.interface>=3.8.0->kazoo)
Installing collected packages: kazoo, zope.interface
Running setup.py install for kazoo
warning: no previously-included files found matching '.gitignore'
warning: no previously-included files found matching '.travis.yml'
warning: no previously-included files found matching 'Makefile'
warning: no previously-included files found matching 'run_failure.py'
warning: no previously-included files matching '*' found under directory 'sw'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*pyo' found anywhere in distribution
Running setup.py install for zope.interface
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
building 'zope.interface._zope_interface_coptimizations' extension
********************************************************************************
WARNING:
An optional code optimization (C extension) could not be compiled.
Optimizations for this package will not be available!
()
Unable to find vcvarsall.bat
********************************************************************************
Skipping installation of C:\Python27\Lib\site-packages\zope\__init__.py (namespace package)
Installing C:\Python27\Lib\site-packages\zope.interface-4.0.5-py2.7-nspkg.pth
Successfully installed kazoo zope.interface
Cleaning up...
Did it get installed properly? Can I now start writing Python code to connect
to ZooKeeper?
Sorry for asking all these dumb questions; I don't have any background in
Python, so I'm learning a little bit here.
Answer: `pip` is the common way to install packages. It queries and downloads
packages from [pypi](https://pypi.python.org/pypi). Kazoo has been installed,
as per the log statements. Give it a try.
You should be able to find the package at `where python is installed\lib\site-packages\kazoo`.
You should try to load (import) the package without errors:
from kazoo.client import KazooClient
After you have started ZooKeeper, your ZooKeeper configuration will have the
client port details.
tickTime=2000
dataDir=...../zookeeperdata/cluster/server1/data
clientPort=2181
initLimit=5
Use that to connect to zookeeper.
# Create a client and start it
zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()
# Now you can do the regular zookepper API calls
# Ensure some paths are created required by your application
zk.ensure_path("/app/someservice")
# In the end, stop it
zk.stop()
|
Using Matplotlib imshow to show GIF image
Question: I need to show a background to a matplotlib plot using `ax.imshow()`. The
background images that I will be using are GIF-images. Despite having PIL
installed, the following code results in an error complaining that the Python
Imaging Library (PIL) is not installed (which it is):
from pylab import imread
im_file = open("test.gif")
im_obj = imread(im_file)
Reading the image using PIL directly works better:
import Image
import numpy
img = Image.open("test.gif")
img_arr = asarray(img.getdata(), dtype=numpy.uint8)
However, when reshaping the array, the following code does not work:
img_arr = img_arr.reshape(img.size[0], img.size[1], 3) #Note the number 3
The reason is that the actual color information is contained in a color table
accessed through `img.getcolors()` or `img.getpalette()`.
Converting all the images to PNG or another suitable format that results in
RGB images when opening them with `imread()` or `Image.open()` is not an
option. I could convert the images when needed using PIL but I consider that
solution ugly. So the question is as follows: Is there a simple and fast (the
images are 5000 x 5000 pixels) way to convert the GIF images to RGB (in RAM)
so that I can display them using `imshow()`?
Answer: You need to convert the GIF to RGB first:
img = Image.open("test.gif").convert('RGB')
See this question: [Get pixel's RGB using
PIL](http://stackoverflow.com/questions/11064786/get-pixels-rgb-using-pil)
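Here is a sketch of the full round trip, assuming Pillow/PIL and numpy are available; a tiny in-memory 'P' (palette) image stands in for the 5000 x 5000 GIF:

```python
import numpy
from PIL import Image

# a small palette-mode image as a stand-in for Image.open("test.gif")
pal_img = Image.new('P', (4, 4), color=0)
pal_img.putpalette([i // 3 for i in range(768)])  # a simple grayscale palette

rgb_img = pal_img.convert('RGB')   # expand the palette lookups to RGB triples
arr = numpy.asarray(rgb_img)       # shape (height, width, 3), no manual reshape
assert arr.shape == (4, 4, 3)
# ax.imshow(arr) would now display it directly
```

`convert('RGB')` does the palette lookup in C, so even for 5000 x 5000 images it is far faster than reshaping the raw index data by hand.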
|
Python unique values in a list
Question: I am new to Python and I am finding set() to be a bit confusing. Can
someone offer some help with finding and creating a new list of unique numbers
(in other words, eliminating duplicates)?
import string
import re

def go():
    import re
    file = open("C:/Cryptography/Pollard/Pollard/newfile.txt", "w")
    filename = "C:/Cryptography/Pollard/Pollard/primeFactors.txt"
    with open(filename, 'r') as f:
        lines = f.read()
    found = re.findall(r'[\d]+[^\d.\d+()+\s]+[^\s]+[\d+\w+\d]+[\d+\^+\d]+[\d+\w+\d]+', lines)
    a = found
    for i in range(5):
        a[i] = str(found[i])
        print(a[i].split('x'))
Now
print(a[i].split('x'))
....gives the following output
['2', '3', '1451', '40591', '258983', '11409589', '8337580729',
'1932261797039146667']
['2897', '514081', '585530047', '108785617538783538760452408483163']
['2', '3', '5', '19', '28087', '4947999059',
'2182718359336613102811898933144207']
['3', '5', '53', '293', '31159', '201911', '7511070764480753',
'22798192180727861167']
['2', '164493637239099960712719840940483950285726027116731']
How do I output a list of only non repeating numbers? I read on the forums
that "set()" can do this, but I have tried this with no avail. Any help is
much appreciated!
Answer: A `set` is a collection (like a `list` or `tuple`), but it does not allow
duplicates and has very fast membership testing. You can use a list
comprehension to filter out values in one list that have appeared in a
previous list:
data = [['2', '3', '1451', '40591', '258983', '11409589', '8337580729', '1932261797039146667'],
['2897', '514081', '585530047', '108785617538783538760452408483163'],
['2', '3', '5', '19', '28087', '4947999059', '2182718359336613102811898933144207'],
['3', '5', '53', '293', '31159', '201911', '7511070764480753', '22798192180727861167'],
['2', '164493637239099960712719840940483950285726027116731']]
seen = set()  # set of seen values, which starts out empty
for lst in data:
    deduped = [x for x in lst if x not in seen]  # filter out previously seen values
    seen.update(deduped)  # add the new values to the set
    print(deduped)  # do whatever with deduped list
Output:
['2', '3', '1451', '40591', '258983', '11409589', '8337580729', '1932261797039146667']
['2897', '514081', '585530047', '108785617538783538760452408483163']
['5', '19', '28087', '4947999059', '2182718359336613102811898933144207']
['53', '293', '31159', '201911', '7511070764480753', '22798192180727861167']
['164493637239099960712719840940483950285726027116731']
Note that this version does not filter out values that are duplicated within a
single list (unless they're already duplicates of a value in a previous list).
You could work around that by replacing the list comprehension with an
explicit loop that checks each individual value against the `seen` set (and
`add`s it if it's new) before appending to a list for output. Or if the order
of the items in your sub-lists is not important, you could turn them into sets
of their own:
seen = set()
for lst in data:
    lst_as_set = set(lst)  # this step eliminates internal duplicates
    deduped_set = lst_as_set - seen  # set subtraction!
    seen.update(deduped_set)
    # now do stuff with deduped_set, which is iterable, but in an arbitrary order
Finally, if the internal sub-lists are a red herring entirely and you want to
simply filter a flattened list to get only unique values, that sounds like a
job for the `unique_everseen` recipe from the [`itertools`
documentation](http://docs.python.org/3/library/itertools.html#itertools-
recipes):
from itertools import filterfalse  # ifilterfalse on Python 2

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
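Putting it together: a self-contained sketch that flattens the sub-lists with `itertools.chain` and keeps only first occurrences (a compact, no-`key` version of the recipe; the sample data is abbreviated from the question):

```python
from itertools import chain

def unique_everseen(iterable):
    """Yield elements in order, skipping any seen before."""
    seen = set()
    for element in iterable:
        if element not in seen:
            seen.add(element)
            yield element

data = [['2', '3', '1451'], ['2897', '514081'], ['2', '3', '5', '19']]
flat_unique = list(unique_everseen(chain.from_iterable(data)))
print(flat_unique)  # ['2', '3', '1451', '2897', '514081', '5', '19']
```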
|
Using Python instead of XML for loading resources in C++?
Question: I am building a simple 2D game (for learning purposes) in c++ and am currently
parsing XML files using TinyXML to load my textures and other resources.
Recently, however, I have been intrigued by python and wish to use python
instead of XML for various reasons(once again, for learning purposes).
I was wondering if I could translate my objects in XML into a large tuple in
python, and then by using an embedded python interpreter parse the elements of
the tuple and extract the data into my C++ game.
Mount and Blade Warband (A game that first introduced me to Python modules)
seems to do it this way, and has sparked my interest.
Here is an example of the first two elements in a large tuple for Mount and
Blade....
sounds = [
("click", sf_2d|sf_vol_3,["drum_3.ogg"]),
("tutorial_1", sf_2d|sf_vol_7,["tutorial_1.ogg"]),
However, Mount and Blade requires you to run an executable over these python
scripts, which translates them into large .txt files...
drum_3.ogg 769
tutorial_1.ogg 1793
Which leads me to believe that the game is actually parsing these text files.
Is what I am attempting to do still possible?
I have searched around for some APIs and have found a few good ones,
predominately Python/C or Boost.Python and was hoping someone may be able to
give me some direction.
Thank you very much and any input is greatly appreciated!
Answer: Resource/settings files written in python are suitable if your engine (the
game or some special resource processor) is written in python.
This is because you can just `import <resource-file>` from your Python
module, instead of parsing XML/text/another format.
If you have no python involved, I can't see any reason to write resources in
python.
Regarding your example about Mount and Blade - exactly as I've said, when the
game is not written in Python, you should use some resource pre-processor.
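To make that concrete, here's a sketch of what such a resource module might look like on the Python side (the flag constants and their values are made up for illustration, not taken from the actual game):

```python
# resources.py -- resources are ordinary Python data, so any Python tool
# can just `import resources` instead of parsing a text/XML file.
sf_2d, sf_vol_3, sf_vol_7 = 0x1, 0x2, 0x4  # hypothetical flag bits

sounds = [
    ("click",      sf_2d | sf_vol_3, ["drum_3.ogg"]),
    ("tutorial_1", sf_2d | sf_vol_7, ["tutorial_1.ogg"]),
]

# A pre-processor (like Mount & Blade's) would walk this list and emit the
# flat .txt lines the C++ engine actually parses:
for name, flags, files in sounds:
    print(files[0], flags)
```

The pre-processor step is what keeps the C++ side simple: the engine only ever sees trivially parseable text, while the authoring format stays expressive Python.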
|
Convert Python Code to Use Buttons instead
Question: I have tried, but I am not sure how to make this code work using buttons
instead of the canvas. It's for a calculator using tkinter. I need to make this
work using Buttons, but everything I have tried has failed. If someone could
tell me how to do it, or even do the whole thing, that would be much
appreciated. I am new to this language and it confuses me. Thanks
from Tkinter import *

def quit():
    window.destroy()

def buttonclick(event):
    global calcvalue
    global savedvalue
    global operator
    pressed = ""
    if event.x > 10 and event.x < 70 and event.y > 50 and event.y < 110: pressed = 7
    if event.x > 10 and event.x < 70 and event.y > 120 and event.y < 180: pressed = 4
    if event.x > 10 and event.x < 70 and event.y > 190 and event.y < 250: pressed = 1
    if event.x > 10 and event.x < 70 and event.y > 260 and event.y < 320: pressed = 0
    if event.x > 80 and event.x < 140 and event.y > 50 and event.y < 110: pressed = 8
    if event.x > 80 and event.x < 140 and event.y > 120 and event.y < 180: pressed = 5
    if event.x > 80 and event.x < 140 and event.y > 190 and event.y < 250: pressed = 2
    if event.x > 150 and event.x < 210 and event.y > 50 and event.y < 110: pressed = 9
    if event.x > 150 and event.x < 210 and event.y > 120 and event.y < 180: pressed = 6
    if event.x > 150 and event.x < 210 and event.y > 190 and event.y < 250: pressed = 3
    if event.x > 80 and event.x < 140 and event.y > 260 and event.y < 320: pressed = "equals"
    if event.x > 150 and event.x < 210 and event.y > 260 and event.y < 320: pressed = "clear"
    if event.x > 220 and event.x < 280 and event.y > 50 and event.y < 110: pressed = "divide"
    if event.x > 220 and event.x < 280 and event.y > 120 and event.y < 180: pressed = "times"
    if event.x > 220 and event.x < 280 and event.y > 190 and event.y < 250: pressed = "minus"
    if event.x > 220 and event.x < 280 and event.y > 260 and event.y < 320: pressed = "plus"
    if pressed == 0 or pressed == 1 or pressed == 2 or pressed == 3 or pressed == 4 or pressed == 5 or pressed == 6 or pressed == 7 or pressed == 8 or pressed == 9:
        calcvalue = calcvalue * 10 + pressed
    if pressed == "divide" or pressed == "times" or pressed == "minus" or pressed == "plus":
        operator = pressed
        savedvalue = calcvalue
        calcvalue = 0
    if pressed == "equals":
        if operator == "divide": calcvalue = savedvalue / calcvalue
        if operator == "times": calcvalue = savedvalue * calcvalue
        if operator == "minus": calcvalue = savedvalue - calcvalue
        if operator == "plus": calcvalue = savedvalue + calcvalue
    if pressed == "clear":
        calcvalue = 0
    displayupdate()
    canvas.update()

def displayupdate():
    canvas.create_rectangle(10, 10, 280, 40, fill="white", outline="black")
    canvas.create_text(260, 25, text=calcvalue, font="Times 20 bold", anchor=E)

def main():
    global window
    global canvas
    window = Tk()
    window.title("Simple Calculator")
    Button(window, text="Quit", width=5, command=quit).pack()
    canvas = Canvas(window, width=290, height=330, bg='beige')
    canvas.bind("<Button-1>", buttonclick)
    # Add the numbers
    canvas.create_rectangle(10, 50, 70, 110, fill="yellow", outline="black")
    canvas.create_text(40, 80, text="7", font="Times 30 bold")
    canvas.create_rectangle(10, 120, 70, 180, fill="yellow", outline="black")
    canvas.create_text(40, 150, text="4", font="Times 30 bold")
    canvas.create_rectangle(10, 190, 70, 250, fill="yellow", outline="black")
    canvas.create_text(40, 220, text="1", font="Times 30 bold")
    canvas.create_rectangle(10, 260, 70, 320, fill="yellow", outline="black")
    canvas.create_text(40, 290, text="0", font="Times 30 bold")
    canvas.create_rectangle(80, 50, 140, 110, fill="yellow", outline="black")
    canvas.create_text(110, 80, text="8", font="Times 30 bold")
    canvas.create_rectangle(80, 120, 140, 180, fill="yellow", outline="black")
    canvas.create_text(110, 150, text="5", font="Times 30 bold")
    canvas.create_rectangle(80, 190, 140, 250, fill="yellow", outline="black")
    canvas.create_text(110, 220, text="2", font="Times 30 bold")
    canvas.create_rectangle(150, 50, 210, 110, fill="yellow", outline="black")
    canvas.create_text(180, 80, text="9", font="Times 30 bold")
    canvas.create_rectangle(150, 120, 210, 180, fill="yellow", outline="black")
    canvas.create_text(180, 150, text="6", font="Times 30 bold")
    canvas.create_rectangle(150, 190, 210, 250, fill="yellow", outline="black")
    canvas.create_text(180, 220, text="3", font="Times 30 bold")
    # Add the operators
    canvas.create_rectangle(80, 260, 140, 320, fill="green", outline="black")
    canvas.create_text(110, 290, text="=", font="Times 30 bold")
    canvas.create_rectangle(150, 260, 210, 320, fill="green", outline="black")
    canvas.create_text(180, 290, text="C", font="Times 30 bold")
    canvas.create_rectangle(220, 50, 280, 110, fill="pink", outline="black")
    canvas.create_text(250, 80, text="/", font="Times 30 bold")
    canvas.create_rectangle(220, 120, 280, 180, fill="pink", outline="black")
    canvas.create_text(250, 150, text="*", font="Times 30 bold")
    canvas.create_rectangle(220, 190, 280, 250, fill="pink", outline="black")
    canvas.create_text(250, 220, text="-", font="Times 30 bold")
    canvas.create_rectangle(220, 260, 280, 320, fill="pink", outline="black")
    canvas.create_text(250, 290, text="+", font="Times 30 bold")
    # Setup the display
    canvas.create_rectangle(10, 10, 280, 40, fill="white", outline="black")
    global calcvalue
    calcvalue = 0
    displayupdate()
    canvas.pack()
    window.mainloop()

main()
Answer: You don't say why you're having a problem using buttons, so I don't know what
problem you're trying to solve. Certainly it's possible to create a grid of
buttons, and certainly it's possible for buttons to call functions when
clicked.
To me, the real challenge here is to write code that is compact,
understandable, and easy to maintain. You want it to be easy to add new
functions and/or new operators without having to rework the whole GUI.
Personally I would use an object-oriented approach, creating custom objects
that represent numerals, functions and operators. Since you're using a non-OO
approach, I recommend creating some helper functions to abstract out some of
the details.
You have three types of buttons: numbers, operators ("+", "-", etc) and
functions ("C", "="). I would create a pair of functions for each type: one
function to create the button, and one function to respond to the button.
Doing this avoids having a single monolithic function to handle all button
presses.
I would also add a helper function to lay out the buttons, just to make it
easier to visualize the final product in the code.
Let's start with the number buttons. We know that all the number button has to
do is insert that number into the calculator. So, let's first write a function
to do that. In this case I'll assume you have an entry widget to hold the
value, since working with canvas text objects is cumbersome:
def do_number(n):
    global entry  # the entry where we should insert the number
    entry.insert("end", n)
If we call this function with `do_number("3")`, it will insert "3" into the
calculator. We can use this same function for all of the buttons, all we have
to do is pass in what number to insert. You can create similar functions named
`do_operator` and `do_function`, which take the label of the button and do the
appropriate thing. You could just as well have a unique function for each
button if you want.
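One possible shape for `do_operator` and `do_function`, sketched here against a plain string instead of the Entry widget so the arithmetic is easy to follow (the state layout and names are illustrative, not from the question):

```python
# Illustrative sketch: calculator state as module-level variables.
display = ""   # what the Entry widget would show
saved = 0.0    # left-hand operand
op = None      # pending operator symbol

def do_number(n):
    global display
    display += n  # with an Entry this would be entry.insert("end", n)

def do_operator(symbol):
    global display, saved, op
    saved = float(display or "0")
    op = symbol
    display = ""

def do_function(label):
    global display, saved, op
    if label == "C":  # clear
        display = ""
        op = None
    elif label == "=" and op is not None:
        value = float(display or "0")
        if op == "+":
            result = saved + value
        elif op == "-":
            result = saved - value
        elif op == "*":
            result = saved * value
        else:  # "/"
            result = saved / value
        display = str(result)
        op = None

# Pressing 1, 2, +, 3, = leaves "15.0" in the display
for key in "12":
    do_number(key)
do_operator("+")
do_number("3")
do_function("=")
print(display)  # 15.0
```

Keeping the arithmetic out of the GUI callbacks like this also makes the calculator logic testable without a window.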
To create the button, we want a little helper function that can tie a button
to that function. This example uses `lambda`, but you can use
`functools.partial`, or a function factory. `lambda` requires the fewest extra
steps, so we'll go with that.
def number(number):
    global frame
    b = Button(frame, text=number, command=lambda: do_number(number))
    return b
When we call this function like `number("4")`, it creates a button with "4" as
the label. When that button is clicked, it calls the function `do_number` with
the string "4" as an argument, so the proper digit is inserted. You can then
create similar factory functions named "operator" and "function" to create
buttons that act as operators and those that act as functions.
> Note 1: I'm generally against using global variables in this manner.
> Generally speaking it better to pass in the containing frame rather than
> rely on a global. In this specific case the use of the global makes the code
> a bit more compact, as you'll see in another paragraph or two. If we were
> building this calculator as class, we could use an instance variable instead
> of a global.
>
> Note 2: declaring the variable as global here has no real effect. A
> `global` statement is only required when you want to rebind the variable.
> However, I put the global statement in to serve as a declaration that I
> intend for the variable named `frame` to be global. This is purely a matter
> of personal preference.
So, now we can create buttons of each time, and have them call a function with
a unique parameter. Now all that is left is to use `grid` to organize the
buttons. Another helper function will make this easier:
def make_row(row, *buttons):
    for column, button in enumerate(buttons):
        button.grid(row=row, column=column, sticky="nsew")
This helper function lets us pass in a list of buttons, and it will lay them
out in a row. Now, combining this with our helper functions to create the
buttons, we can create the GUI with something like this:
frame = Frame(window, bg="beige")
entry = Entry(frame, borderwidth=1, relief="solid")
entry.grid(row=0, column=0, columnspan=4, sticky="nsew")
make_row(1, number("7"), number("8"), number("9"), operator("/"))
make_row(2, number("4"), number("5"), number("6"), operator("*"))
make_row(3, number("1"), number("2"), number("3"), operator("-"))
make_row(4, number("0"), function("="), function("C"), operator("+"))
Notice how it's now possible to see the layout of the buttons in the code. If
you want to add another column, add another row, or rearrange the existing
buttons, it should be completely self-evident how to do that.
Now, of course, this isn't the only way to accomplish this. You can do all of
this without helper functions, and just create a few hundred lines of code. By
taking a little extra time to break the code down into logical chunks you can
create code that is considerably easier to read and maintain.
|
Python: sorting a list by key returns error: 'string must be integers'
Question: I've been searching to no avail and am hoping someone can point me in the
right direction. I'm trying to:
* call a url which holds a json-formatted file
* convert the resulting dict to a list (I don't think I need the keys that get inserted)
* order the items in that list by a key ('loved_count')
My code is:
import json
import urllib
from operator import itemgetter

url = "http://hypem.com/playlist/tags/dance/json/1/data.js"
output = json.load(urllib.urlopen(url))
output = output.values()  # convert dict to list
output = output.sort(key=itemgetter('loved_count'))  # sort list by loved_count
Which gives me the following error:
output = output.sort(key=itemgetter('loved_count')) #sort list by loved_count
TypeError: string indices must be integers
Any thoughts on where I'm messing this up? Thanks in advance!
Answer: An item in the list is not a dictionary:
>>> import urllib
>>> import json
>>> url = "http://hypem.com/playlist/tags/dance/json/1/data.js"
>>> output = json.load(urllib.urlopen(url))
>>> for x in output.values():
... print(type(x))
...
<type 'dict'>
<type 'dict'>
<type 'dict'>
<type 'dict'>
<type 'dict'>
<type 'unicode'>
<type 'dict'>
....
>>> u'1.1'['loved_count']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: string indices must be integers
* * *
You can workaround by filtering out non-dictionary item(s):
>>> from operator import itemgetter
>>> items = [x for x in output.values() if isinstance(x, dict)]
>>> items.sort(key=itemgetter('loved_count'))
# No error.
But I'd rather ask the data provider what's wrong with the data: while a JSON
array/object may legally contain heterogeneous values, a lone string among
dictionaries usually signals a problem upstream.
* * *
BTW, the code assigns the return value of `sort`. `sort` returns `None`, so
you lose the list. Remove the assignment and just call `sort`.
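To sidestep the in-place/None pitfall entirely, `sorted()` returns a new list instead of mutating its input (the sample records below are hypothetical stand-ins for the JSON items):

```python
from operator import itemgetter

# Hypothetical stand-ins for the JSON records
items = [{"loved_count": 30}, {"loved_count": 10}, {"loved_count": 20}]

# sorted() returns a new sorted list; list.sort() sorts in place and returns None
result = sorted(items, key=itemgetter("loved_count"))
print([x["loved_count"] for x in result])  # [10, 20, 30]
```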
|