Sending POST request to AJAX with Python3.4 and urllib
Question: I am trying to scrape line movements from: <http://scores.covers.com/football-
scores-matchups.aspx>
I need to iterate through each Season and Week using the Calendar provided:
When I inspect the network to see what is getting sent, I see two POST
requests when I change the Season:
(I've combined the above 3 images into 1 since I don't have enough rep to post
more than 1 image)
<http://s29.postimg.org/773muclra/covers_scrape4.jpg>
I have searched and read and hacked away at it all day and have made no
progress. I can recreate the error I'm getting by using Firebug to edit and
resend the post request if I edit the payload to be something unexpected. So I
have a feeling the problem is in the way I'm encoding and sending the data.
I've tried json.dump, utf-8, Content-Type application/x-www..., in every
combination I can think of.
My current code is as follows:
import urllib.request
import json
import urllib.parse
class Scraper():
    def __init__(self):
        pass

    def scrape(self):
        url = 'http://scores.covers.com/ajax/SportsDirect.Controls.LiveScoresControls.ScoresCalendar,SportsDirect.Controls.LiveScoresControls.ashx?_method=changeDay&_session=no'
        data = {
            'league': '1',
            'SeasonString': '2012-2013',
            'Year': '1',
            'Month': '1',
            'Day': '1'
        }
        # data = urllib.parse.urlencode(data).encode('utf-8')
        data = json.dumps(data).encode('utf-8')
        headers = {
            'Host': 'scores.covers.com',
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate',
            'Referer': 'http://scores.covers.com/football-scores-matchups.aspx',
            'Content-Type': 'application/json',
            'Connection': 'keep-alive',
            'Pragma': 'no-cache',
            'Cache-Control': 'no-cache'
        }
        req = urllib.request.Request(url)
        req.data = data
        req.headers = headers
        f = urllib.request.urlopen(req)
        print(f.read())
        return f
Which gives:
In [1]: import scraper
In [2]: s = scraper.Scraper()
In [3]: s.scrape()
b"new Object();r.error = new ajax_error('System.ArgumentException','Object of type \\'System.DBNull\\' cannot be convert
ed to type \\'System.String\\'.',0)"
Out[3]: <http.client.HTTPResponse at 0x4d6b650>
Thanks in advance.
Answer: Use `data="league=1\nSeasonString=2014-2015\nYear=1\nMonth=1\nDay=1"`.
The payload is not JSON; the endpoint expects newline-separated `key=value` pairs.
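A sketch of that fix in the question's Python 3 setup (the helper name is mine, and the actual request is left commented out since it needs the live site):

```python
import urllib.request

def build_payload(fields):
    """Join ordered (key, value) pairs the way this ASP.NET AJAX endpoint
    expects: newline-separated key=value lines, not JSON."""
    return "\n".join("%s=%s" % (k, v) for k, v in fields).encode("ascii")

payload = build_payload([
    ("league", "1"),
    ("SeasonString", "2014-2015"),
    ("Year", "1"),
    ("Month", "1"),
    ("Day", "1"),
])
# req = urllib.request.Request(url, data=payload)   # url as in the question
# print(urllib.request.urlopen(req).read())
```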
|
Is it possible to get jedi autocomplete for a C++ library bound to Python?
Question: I am using vim with jedi-vim to edit some python code. However, some libraries
we use are C++ shared libraries for which we generated python bindings using
pybindgen. When using jedi-vim, I do not get signature for any of the classes
and methods, just a listing of them.
For example, in this library, <https://github.com/jorisv/SpaceVecAlg> if I
install the library and import it:
import spacevecalg as sva
Then, `sva.` will correctly show all first-order functions and classes.
However, if I select the first one, `sva.ABInertia(`, jedi will not suggest
any of the class constructor signatures.
I suppose I have to somehow export the class definitions to a kind of python
documentation, and I figured I could use the doxygen annotations for that, but
I have no idea how to feed that extra documentation to jedi (or any other
completion engine, such as the one built in IPython).
Thanks a lot !
Answer: You cannot feed extra documentation to Jedi. However, you can set the
`__doc__` attribute in a way that Jedi understands it. If you define call
signatures the same way as the standard library, I guess it should work.
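For instance, a hypothetical sketch of that convention (the class name comes from the question; the signature line is invented): completion tools that read `__doc__` conventionally look for a CPython-builtin-style signature on the first line of the docstring.

```python
class ABInertia(object):
    """ABInertia(mass, momentum, inertia) -> ABInertia

    Spatial inertia; the first docstring line carries the call signature
    in the same style the standard library's C builtins use.
    """
    def __init__(self, *args):
        pass

# The signature is recoverable from the first docstring line:
print(ABInertia.__doc__.splitlines()[0])  # ABInertia(mass, momentum, inertia) -> ABInertia
```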
As a side note, I have to mention that in Python 3.4+ there's an even better
way of defining call signatures; IMHO it's the proper way. I'm not sure
exactly how to produce it from bindings, but there are ways to use it:
>>> inspect.signature(exit)
<inspect.Signature object at 0x7f2b5a05aa58>
>>> str(inspect.signature(exit))
'(code=None)'
Jedi doesn't understand it yet, but it definitely will in the future.
|
how do i export PDF file attachments via python
Question: How do I extract a PDF file attachment via Python (a file attached
to the PDF)? I can't seem to find anything about this topic.
Answer: This is not a native Python solution, but try using
[pdfdetach](http://www.dsm.fordham.edu/cgi-bin/man-
cgi.pl?topic=pdfdetach&sect=1)(1) with
[subprocess](https://docs.python.org/2/library/subprocess.html):
from subprocess import call
call(["pdfdetach", "-saveall", "file.pdf"])
(1) There is also a Windows port available through Cygwin.
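A slightly more defensive sketch of the same idea (the wrapper name is mine; `pdfdetach` itself comes from Poppler/Xpdf and must be on the PATH):

```python
import shutil
import subprocess

def extract_attachments(pdf_path):
    """Save all attachments of pdf_path into the current directory.

    Returns True on success, False if pdfdetach is unavailable or fails.
    """
    if shutil.which("pdfdetach") is None:  # external tool not installed
        return False
    result = subprocess.run(["pdfdetach", "-saveall", pdf_path])
    return result.returncode == 0
```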
|
PYXB - Generation of namespace groups requires generate-to-files
Question: PYXB - When generating class definitions at runtime, I am facing
the following exception.
import pyxb.binding.generate
path = "E:/schema/schema.xsd"
code = pyxb.binding.generate.GeneratePython(schema_location=path)
rv = compile(code, 'test', 'exec')
xsd = eval(rv)
The above code gives this error:
raise pyxb.BindingGenerationError('Generation of namespace groups requires generate-to-files')
pyxb.exceptions_.BindingGenerationError: Generation of namespace groups requires generate-to-file
Answer: See [the PyXB help
forum](https://sourceforge.net/p/pyxb/discussion/956708/thread/d1b6f062/)
where somebody coincidentally asked exactly this same question at the same time.
Odd.
|
pip install dependency links
Question: I am using `python version 2.7` and `pip version 1.5.6`.
I want to install extra libraries from a url, such as a git repo, while
`setup.py` is being installed.
I was putting the extras in the `install_requires` parameter in `setup.py`.
That is, my library requires these extra libraries and they must also be installed.
...
install_requires=[
"Django",
....
],
...
But urls like git repos are not valid strings in `install_requires` in
`setup.py`. Assume that I want to install a library from GitHub. I have
searched about that issue and found that I can put such libraries in
`dependency_links` in `setup.py`. But that still doesn't work. Here is
my dependency links definition;
dependency_links=[
"https://github.com/.../tarball/master/#egg=1.0.0",
"https://github.com/.../tarball/master#egg=0.9.3",
],
The links are valid; I can download them from an internet browser with these
urls. But these extra libraries are still not installed with my setup. I also
tried the `--process-dependency-links` parameter to force pip, but the result
is the same, and I get no error when running pip.
After installation, I see none of the libraries from `dependency_links` in
the `pip freeze` result.
How can I make them to be downloaded with my `setup.py` installation?
# Edited:
Here is my complete `setup.py`
from setuptools import setup
try:
long_description = open('README.md').read()
except IOError:
long_description = ''
setup(
name='esef-sso',
version='1.0.0.0',
description='',
url='https://github.com/egemsoft/esef-sso.git',
keywords=["django", "egemsoft", "sso", "esefsso"],
install_requires=[
"Django",
"webservices",
"requests",
"esef-auth==1.0.0.0",
"django-simple-sso==0.9.3"
],
dependency_links=[
"https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
"https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
],
packages=[
'esef_sso_client',
'esef_sso_client.models',
'esef_sso_server',
'esef_sso_server.models',
],
include_package_data=True,
zip_safe=False,
platforms=['any'],
)
# Edited 2:
Here is pip log;
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
It seems, it does not use the sources in `dependency_links`.
Answer: Pip removed support for dependency_links a while back. The [latest
version of pip that supports dependency_links is
1.3.1](https://pip.pypa.io/en/latest/news.html); to install it:
pip install pip==1.3.1
Your dependency links should work at that point. Please note that
dependency_links were always a last resort for pip, i.e. if a package with
the same name exists on PyPI, it will be chosen over yours.
Note that <https://github.com/pypa/pip/pull/1955> started allowing
dependency_links again; pip kept them, but you might need some command-line
switches to use them with a newer version of pip.
**EDIT**: As of pip 7 ... they rethought dep links and have enabled them.
Even though they haven't removed the deprecation notice, from the discussions
they seem to be here to stay. With pip>=7, here is how you can install things:
pip install -e . --process-dependency-links --allow-all-external
Or add the following to a pip.conf, e.g. `/etc/pip.conf`
[install]
process-dependency-links = yes
allow-all-external = yes
trusted-host =
bitbucket.org
github.com
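One more detail worth checking (my own observation, not part of the original answer): pip matches `dependency_links` entries by an `#egg=<name>-<version>` fragment, and the fragments in the question carry only a bare version, which gives pip no package name to match. A corrected sketch of the list:

```python
# Hypothetical correction: the "#egg=" fragment should be "<name>-<version>",
# not a bare version string such as "#egg=1.0.0.0".
dependency_links = [
    "https://github.com/egemsoft/esef-auth/tarball/master#egg=esef-auth-1.0.0.0",
    "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=django-simple-sso-0.9.3",
]
```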
|
Using list comprehension to search a 2d array (python)
Question: I am trying to search a 2D array of characters and return the array indices,
(x_T,y_T), of all of the letter T's in the array. I figure this could easily
done with two stacked for loops but I was curious to know if it could be done
my efficiently using list comprehension. I have tried things like:
TPos = [[x, y] for x in range(0, len(array[x])) for y in range(0, len(array[x][y])) if array[x][y] == 'T']
But I am getting errors to do with array bounds. Could someone point me in the
right direction. Cheers, Jack
# Edit
Now trying to use 'ndenumerate' like so:
TPos = [pos for pos, x in numpy.ndenumerate(array) if x == "T"]
Answer: This ended up being a beast to one-line; for readability it may be
easier to break it up.
l = [['a','b','T'],
     ['T','T','g'],
     ['c','T','T']]
reduce(lambda x,y: x+y, [[(x,y) for y in range(len(l[x])) if l[x][y] == 'T'] for x in range(len(l))])
Output
[(0, 2), (1, 0), (1, 1), (2, 1), (2, 2)]
_Edit_ With `numpy` this is much simpler.
import numpy as np
[pos for pos, x in np.ndenumerate(np.array(l)) if x == 'T']
Output
[(0, 2), (1, 0), (1, 1), (2, 1), (2, 2)]
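For reference (not part of the original answer), the same result falls out of a single comprehension with two stacked `for` clauses, no `reduce` needed:

```python
l = [['a', 'b', 'T'],
     ['T', 'T', 'g'],
     ['c', 'T', 'T']]

# Row index first, then the column index bounded by that row's own length
# (which also handles ragged rows correctly).
t_pos = [(x, y) for x in range(len(l)) for y in range(len(l[x])) if l[x][y] == 'T']
print(t_pos)  # [(0, 2), (1, 0), (1, 1), (2, 1), (2, 2)]
```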
|
Py2exe - module not found
Question: I'm trying to make an exe file from my (2) py files. In one file
bs4 is imported (`import bs4`). When I try to execute this script:
from distutils.core import setup
import py2exe

setup(
    console = ['gui.py'],
    options = {
        'py2exe': {
            'packages': ["bs4"]
        }
    }
)
the console returns:
C:\Users\uživatel\PycharmProjects\mail_checker>setup.py py2exe
running py2exe
*** searching for required modules ***
Traceback (most recent call last):
  File "C:\Users\uživatel\PycharmProjects\mail_checker\setup.py", line 12, in <module>
    'packages': ["bs4"]
  File "C:\Python27\lib\distutils\core.py", line 151, in setup
    dist.run_commands()
  File "C:\Python27\lib\distutils\dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "C:\Python27\lib\distutils\dist.py", line 972, in run_command
    cmd_obj.run()
  File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 243, in run
    self._run()
  File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 296, in _run
    self.find_needed_modules(mf, required_files, required_modules)
  File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 1306, in find_needed_modules
    mf.import_hook(f)
  File "C:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
    return Base.import_hook(self,name,caller,fromlist,level)
  File "C:\Python27\lib\site-packages\py2exe\mf.py", line 136, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "C:\Python27\lib\site-packages\py2exe\mf.py", line 204, in find_head_package
    raise ImportError, "No module named " + qname
ImportError: No module named bs4
So I suppose my setup.py isn't written correctly. Could you give me
some advice? Thanks
Answer: **SOLUTION:**
The problem was probably that I had installed bs4 (and xlsxwriter) with
easy_install, which creates *.egg files in the `site-packages` folder. Py2exe
couldn't find bs4 in `site-packages` for some reason. So I opened the
BeautifulSoup egg file and copied the bs4 folder into `site-packages`; I did
the same with xlsxwriter.
_It helped. The program now works properly._
|
Python - Files won't download
Question: Here's my code; all the urls are in a Config Parser format file.
When the button is pressed, files will not download. What went wrong? I used
urllib; should I have used urllib2? Some of the functions may be there but
not used, just ignore that.
import wx
import ConfigParser
import urllib
def Download(url):
    response = urllib.urlopen(url).read()
    doned = wx.MessageDialog("Download Done")
    doned.ShowModal()
    doned.Destroy()
#First thing i gona do is Parse 98 box data
BoxParser = ConfigParser.RawConfigParser() #Set Raw
BoxParser.read("98.box") #Mount into 98.box
#Get Vars, and print them
WxTitle = BoxParser.get("meta_data","Window_title") #get the window title
Application = BoxParser.get("meta_data","Application") #Get app name
Desc = BoxParser.get("meta_data","Description") #Get the description of the app
Author = BoxParser.get("meta_data","Author") #Of course! I dont wanna be plagurized
Contact = BoxParser.get("meta_data","Contact_Info") #My Contact Info
Date = BoxParser.get("meta_data","Date") #Date when the current update was made
#UpdateUrl = BoxParser.get("meta_data","Update_url") #Url to update
#BoxUp = BoxParser.get("meta_data","Update_box") #Url to update 98.box
# Meta Data loaded
#time to load the firmwares
e660 = BoxParser.get("Firmware_links","660") #6.60
e6602 = False
e660g = BoxParser.get("Firmware_links","660go") #6.60 Go Eboot
e6602g = False
e639 = BoxParser.get("Firmware_links","639") #6.39
e6392 = False
e639g = BoxParser.get("Firmware_links","639go") #6.39 Go Eboot
e6392g = False
e635 = BoxParser.get("Firmware_links","635") #6.35
e6352 = False
e635g = BoxParser.get("Firmware_links","635go") #6.35 Go Eboot
e6352g = False
e620 = BoxParser.get("Firmware_links","620") #6.20
e550 = BoxParser.get("Firmware_links","550") #5.50
e5502 = False
e500 = BoxParser.get("Firmware_links","500") #5.00
e5002 = False
e401 = BoxParser.get("Firmware_links","401") #4.01
e4012 = False
#Firmwares Loaded
def BoxUpdate():
    Download(Update_box)
    #Check if DD equ true so we can post the MSG
    if downloaddone == True:
        Done2 = wx.MessageDialog(self,"Download Done, 98.Box Updated!")
        Done2.ShowModal()
        Done.Destroy()
#Time to get out Gui
class FrameClass(wx.Frame): #Finally making the gui!
    def __init__(self,parent,title): #making init!
        app = wx.Frame
        app.__init__(self,parent,title=WxTitle,size = (340,280)) #set window size
        Menu = wx.Menu() #Lets make a menu!
        panel = wx.Panel(self) #set the panel var
        contact = Menu.Append(wx.ID_NONE,"&Contact Info") #add update thing
        self.Bind(wx.EVT_MENU,self.contact1,contact) #Add event for Update
        fwMsg = wx.StaticText(panel, label='Firmware', pos=(59,25))
        fwlist = wx.ComboBox(panel,pos=(118,22), choices=["6.60","6.60 Go/N1000","6.39","6.39 Go/N1000","6.35 Go/N1000","5.50","5.00","4.01"])
        self.Bind(wx.EVT_COMBOBOX, self.getsel, fwlist)
        downloadbutton = wx.Button(panel, label="Download FW", pos=(100,76))
        self.Bind(wx.EVT_BUTTON, self.DLB, downloadbutton)
        #now for the member!
        TopM = wx.MenuBar()
        TopM.Append(Menu, "Tool Opt")
        self.SetMenuBar(TopM)
        self.Show(True)
    def DLUpdate(self,e):
        #Check if DD equ true so we can post the MSG
        Download(Update_url)
        print "downloading"
        Done = wx.MessageDialog(self,"Download Done, download stored in \"DLBOXV$.zip\" file")
        Done.ShowModal()
        Done.Destroy()
    def contact1(self,e):
        con = wx.MessageDialog(self,Contact)
        con.ShowModal()
        con.Destroy()
    def getsel(self,e):
        i = e.GetString()
        if i == "6.60":
            e6602 = True
            print e6602,"660"
        else:
            e6602 = False
            print e6602,"660"
        if i == "6.60 Go/N1000":
            e6602g = True
            print e6602g,"660 go"
        else:
            e6602g = False
            print e6602g,"660 go"
        if i == "6.39":
            e6392 = True
            print e6392,"639"
        else:
            e6392 = False
            print e6392,"639"
        if i == "6.39 Go/N1000":
            e6392g = True
            print e6392g,"639 go"
        else:
            e6392g = False
            print e6392g,"639 go"
        if i == "6.35 Go/N1000":
            e6352g = False
            print e6352g,"635 go"
        else:
            e6352g = False
            print e6352g,"635 go"
        if i == "5.50":
            e5502 = True
            print e5502,"550"
        else:
            e5502 = False
            print e5502,"550"
        if i == "500":
            e5002 = True
            print e5002,"500"
        else:
            e5002 = False
            print e5002,"500"
        if i == "401":
            e4012 = True
            print e4012,"401"
        else:
            e4012 = False
            print e4012,"401"
    def DLB(self,e):
        if e6602 == True:
            Download(e660)
        elif e6602g == True:
            Download(e660g)
        elif e6392 == True:
            Download(e639)
        elif e639g == True:
            Download(e639g)
        elif e6352g == True:
            Download(e635g)
        elif e5502 == True:
            Download(e550)
        elif e5002 == True:
            Download(e500)
        elif e4012 == True:
            Download(e401)

G = wx.App(False)
Win = FrameClass(None,WxTitle)
G.MainLoop()
But the function `Download(url)` will not work; it does not download:
def Download(url):
    response = urllib.urlopen(url).read()
    doned = wx.MessageDialog("Download Done")
    doned.ShowModal()
    doned.Destroy()
What triggers `Download(url)` is a few if and elif statements:
def getsel(self,e):
    i = e.GetString()
    if i == "6.60":
        e6602 = True
        print e6602,"660"
    else:
        e6602 = False
        print e6602,"660"
    if i == "6.60 Go/N1000":
        e6602g = True
        print e6602g,"660 go"
    else:
        e6602g = False
        print e6602g,"660 go"
    if i == "6.39":
        e6392 = True
        print e6392,"639"
    else:
        e6392 = False
        print e6392,"639"
    if i == "6.39 Go/N1000":
        e6392g = True
        print e6392g,"639 go"
    else:
        e6392g = False
        print e6392g,"639 go"
    if i == "6.35 Go/N1000":
        e6352g = False
        print e6352g,"635 go"
    else:
        e6352g = False
        print e6352g,"635 go"
    if i == "5.50":
        e5502 = True
        print e5502,"550"
    else:
        e5502 = False
        print e5502,"550"
    if i == "500":
        e5002 = True
        print e5002,"500"
    else:
        e5002 = False
        print e5002,"500"
    if i == "401":
        e4012 = True
        print e4012,"401"
    else:
        e4012 = False
        print e4012,"401"

def DLB(self,e):
    if e6602 == True:
        Download(e660)
    elif e6602g == True:
        Download(e660g)
    elif e6392 == True:
        Download(e639)
    elif e639g == True:
        Download(e639g)
    elif e6352g == True:
        Download(e635g)
    elif e5502 == True:
        Download(e550)
    elif e5002 == True:
        Download(e500)
    elif e4012 == True:
        Download(e401)
Answer: > What did go wrong? I used urllib should I have used urllib2?
Yes. The fact that just adding the `2` to your code fixes the problem is
obviously proof of that, but it doesn't explain much.
As the docs for
[`urllib.urlopen`](https://docs.python.org/2/library/urllib.html#urllib.urlopen)
say:
> Deprecated since version 2.6: The `urlopen()` function has been removed in
> Python 3 in favor of `urllib2.urlopen()`.
This doesn't mean that they stopped fixing bugs in `urllib` as of 2.6 (there
was a bug fix as late as 2.7.9), but it does mean that missing functionality
will never be added. That includes, among other things, some kinds of HTTP
authentication, redirections, HTTPS with non-standard CAs, proxy settings, and
proxy authentication.
Without knowing anything at all about the problem besides "the download
doesn't happen", or what URL you're trying to download (you seem to be passing
in a variable named `Update_box` that isn't assigned anywhere), or the setup
you're running on, it's impossible to know exactly which one of these problems
(or any others) is the key.
|
Python - Should I alias imports with underscores?
Question: This is a conceptual question rather than an actual problem, I wanted to ask
the great big Internet crowd for feedback.
We all know imported modules end up in the namespace of that module:
# Module a:
import b
__all__ = ['f']
f = lambda: None
That allows you to do this:
import a
a.b # <- Valid attribute
Sometimes that's great, but most imports are side effects of the feature your
module provides. In the example above I don't mean to expose `b` as a valid
interface for callers of `a`.
To counteract that we **could** do:
import b as _b
This marks the import as private. But I can't find that practice described
anywhere nor does PEP8 talk about using aliasing to mark imports as private.
So I take it it's not common practice. But from a certain angle I'd say it's
definitely semantically clearer, because it cleans up the exposed bits of your
module leaving only the relevant interfaces you actually mean to expose.
Working with an IDE with autocomplete it makes the suggested list much
slimmer.
My question boils down to if you've seen that pattern in use? Does it have a
name? What arguments would go against using it?
I have not had success using the `__all__` functionality to hide the `b`
import. I'm using PyCharm and do not see the autocomplete list change.
E.g. from some module I can do:
import a
And the autocomplete box show both `b` and `f`.
Answer: No one uses that pattern, and it is not named.
That's because the _proper_ method to use is to explicitly mark your exported
names with the `__all__` variable. IDEs will honour this variable, as do tools
like `help()`.
Quoting the [`import` statement
documentation](https://docs.python.org/2/reference/simple_stmts.html#the-
import-statement):
> The _public names_ defined by a module are determined by checking the
> module’s namespace for a variable named `__all__`; if defined, it must be a
> sequence of strings which are names defined or imported by that module. The
> names given in `__all__` are all considered public and are required to
> exist. If `__all__` is not defined, the set of public names includes all
> names found in the module’s namespace which do not begin with an underscore
> character (`'_'`). `__all__` should contain the entire public API. It is
> intended to avoid accidentally exporting items that are not part of the API
> (**such as library modules which were imported and used within the
> module**).
(Emphasis mine).
Also see [Can someone explain __all__ in
Python?](http://stackoverflow.com/questions/44834/can-someone-explain-all-in-
python)
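A minimal, self-contained demonstration of that behaviour (the module name and contents are invented for the demo):

```python
import pathlib
import sys
import tempfile
import textwrap

# Throwaway module showing that __all__ (not underscore aliases) is what
# controls the names exported by "from mod import *".
source = textwrap.dedent("""
    import math as _math   # underscore alias: private by convention anyway
    __all__ = ['area']     # the explicit public API

    def area(r):
        return _math.pi * r * r

    def _helper():
        pass
""")

tmpdir = pathlib.Path(tempfile.mkdtemp())
(tmpdir / "demo_mod.py").write_text(source)
sys.path.insert(0, str(tmpdir))

ns = {}
exec("from demo_mod import *", ns)
print(sorted(name for name in ns if not name.startswith("__")))  # ['area']
```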
|
Python: searching csv and return entire row
Question: I couldn't find a better place to ask my question. I am learning
Python and trying to create a script as follows.
1) Should be able to search csv file.
2) Return entire row if match is found.
My csv:
Product,Scan,Width,Height,Capacity
LR,2999,76,100,17.5
RT,2938,37,87,13.4
If I search for 2938 for an example, entire row is returned as follows:
Product: RT
Scan: 2938
Width: 37
Height: 87
Capacity: 13,4
So far I have:
csvFile = getComponent().filePath
pos = csvFile.rfind('Desktop\\')
csvFile = csvFile[:pos] + 'programm\\products.csv'
myfile = open(csvFile)
myfile.seek(0)
for line in myfile.split('\n'):
    data = line.split(',')
    print data
    if data[2] == agv_O_Cal.value and data[3] == agv_O_Mod.value:
        print 'found: value = %s' %agv_O_Cal.value, agv_O_Mod.value
        Product = data[5]
        Scan = data[6]
        Width = data[7]
        Height = data[9]
        Capacity = data[10]
        print , Product, Scan, Width, Height, Capacity
The solution doesn't work.
Answer: you can use the `csv` module like this:
import csv
reader = csv.reader(open(csvFile, 'r'))
for data in reader:
    # list indices start from 0, thus 2938 is in data[1]
    if data[1] == agv_O_Cal.value and data[3] == agv_O_Mod.value:
        # do something with the matching row
        pass
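Building on that, `csv.DictReader` returns the entire row keyed by the header names, which matches the output format the question asks for. A self-contained sketch (Python 3 syntax; the sample data is inlined, and `find_row` is an invented helper — in the real script you would open the file instead):

```python
import csv
import io

# Sample data mirroring the question's CSV.
csv_text = """Product,Scan,Width,Height,Capacity
LR,2999,76,100,17.5
RT,2938,37,87,13.4
"""

def find_row(fileobj, scan_value):
    """Return the first row whose Scan column matches, as a dict, else None."""
    for row in csv.DictReader(fileobj):
        if row["Scan"] == scan_value:
            return row
    return None

row = find_row(io.StringIO(csv_text), "2938")
print(row["Product"], row["Capacity"])  # RT 13.4
```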
|
Fit a line to a matrix in python
Question: I have a matrix of shape 256x256 to which I'm trying to find a line of best
fit. This is an image by the way, so these are simply intensity values. Let's
assume I want to find the line of best fit through all the intensities, how
would I go about doing that?
[This](http://stackoverflow.com/questions/2298390/fitting-a-line-in-3d) link
describes how to do so on a 3d dataset, using svd. However I'm a bit confused
as to how that would be applied to my numpy array?
EDIT: Here's an example with random values of doubles that I've profiled with
%timeit:
ran = [25,50,75,100,125]
for i in ran:
    J = np.random.random((i,i))
    y,x = np.indices(J.shape)
    x = x.ravel()
    y = y.ravel()
    J = J.ravel()
    data = np.concatenate((x[:, np.newaxis],
                           y[:, np.newaxis],
                           J[:, np.newaxis]), axis=1)
    datamean = data.mean(axis=0)
    print "Doing %d now" %i
    %timeit U, S, V = np.linalg.svd(data - datamean)
I get the following output:
Doing 25 now
100 loops, best of 3: 10.4 ms per loop
Doing 50 now
1 loops, best of 3: 285 ms per loop
Doing 75 now
1 loops, best of 3: 3 s per loop
Doing 100 now
1 loops, best of 3: 5.83 s per loop
Doing 125 now
1 loops, best of 3: 15.1 s per loop
EDIT2: [Here's my actual
array](https://www.dropbox.com/s/8a8zj686j4k0dfn/arr.npy?dl=0). I just saved
it in numpy's npy format
Answer: [The answer you pointed out](http://stackoverflow.com/a/2333251/832621) is
directly applicable to your problem by doing:
import numpy as np
z = your_matrix_256_x_256
y, x = np.indices(z.shape)
x = x.ravel()
y = y.ravel()
z = z.ravel()
Note that the intervals for `x` and `y` can be readjusted by multiplying
these arrays by proper scalars.
* * *
## EDIT:
Having a look at your data, it seems that **your problem is actually a 2-D
curve fit**, which can be done with `np.polyfit()`, as shown in the example below.
import numpy as np
import matplotlib.pyplot as plt

z = np.load('arr.npy').astype(np.int32)
y, x = np.indices(z.shape)
valid_z = (y.ravel()>0) & (z.ravel()>0)
x_valid = x.ravel()[valid_z]
y_valid = y.ravel()[valid_z]
z_valid = z.ravel()[valid_z]
# fitting best curve
fig = plt.figure(figsize=(5,3))
ax = fig.add_subplot(111)
ax.scatter(x_valid, y_valid, c=z_valid, alpha=0.2, s=20, edgecolor='none',
           cmap=plt.cm.jet)
# finding best-fit curve
z = np.polyfit(x_valid, y_valid, w=z_valid**0.5, deg=1)
p = np.poly1d(z)
# plotting
x_plot = np.linspace(x_valid.min(), x_valid.max(), 100)
y_plot = p(x_plot)
ax.plot(x_plot, y_plot, '-r', lw=2)
ax.set_xlim(0, x.shape[1])
ax.set_ylim(0, y.shape[0])
ax.legend(loc='lower left', frameon=False, fontsize=8)
fig.savefig('test.png', bbox_inches='tight')
which gives:

|
tkinter option menu - update options on fly
Question: I'm creating a GUI using Tkinter with Python 2.7.6.
I have a drop down menu, created and initially disabled with the following
code:
self.dropdown = Tkinter.OptionMenu(self, self.dropdownVar, "Select SED...")
self.dropdown.grid(column=0,row=1)
self.dropdown.configure(state="disabled")
After a user selects a directory, I call a function `onEnterDir()` which then
gets a list of files in that directory. So, I have a list of files in a
variable called `dirFiles`.
What I want is to then update the options in the dropdown menu with the items
in this `dirFiles` list. How would I do this?
My question is different to others on here because I just want to update the
list of items `self.dropdown` displays. It's not dependent on any other
widget. I have a python list of things I want to put in. How do I do this?
Answer: You can use the same technique in [the
answer](http://stackoverflow.com/questions/17252096/python-tkinter-dynamic-
optionmenu/17252390#17252390) you mentioned in question:
For example:
from Tkinter import *
from tkFileDialog import askdirectory
import os
def onEnterDir(dropdown, var):
    path = askdirectory()
    if not path:
        return
    filenames = os.listdir(path)
    dropdown.configure(state='normal') # Enable drop down
    menu = dropdown['menu']
    # Clear the menu.
    menu.delete(0, 'end')
    for name in filenames:
        # Add menu items, binding name as a default argument so every
        # entry keeps its own value rather than the loop's last one.
        menu.add_command(label=name, command=lambda name=name: var.set(name))
root = Tk()
dropdownVar = StringVar()
dropdown = OptionMenu(root, dropdownVar, "Select SED...")
dropdown.grid(column=0, row=1)
dropdown.configure(state="disabled")
b = Button(root, text='Change directory',
command=lambda: onEnterDir(dropdown, dropdownVar))
b.grid(column=1, row=1)
root.mainloop()
|
How to convert IETF BCP 47 language identifier to ISO-639-2?
Question: I am writing a server API for an iOS application. As a part of the
initialization process, the app should send the phone interface language to
server via an API call.
The problem is that Apple uses something called [IETF BCP 47 language
identifier](http://tools.ietf.org/html/bcp47) in its [`NSLocale
preferredLanguages`
function](https://developer.apple.com/library/mac/documentation/cocoa/reference/foundation/classes/NSLocale_Class/Reference/Reference.html#jumpTo_18).
The returned values have different lengths (e.g. `[aa, ab, ace, ach, ada, ady,
ae, af, afa, afh, agq, ...]`), and I found very few parsers that can convert
these codes to a proper language identifier.
I would like to use the more common [ISO-639-2 three-letters language
identifier](http://en.wikipedia.org/wiki/List_of_ISO_639-2_codes), which is
ubiquitous, has many parsers in many languages, and has a standard, 3-letter
representation of languages.
**How can I convert a IETF BCP 47 language identifier to ISO-639-2 three-
letters language identifier, preferably in Python?**
Answer: BCP 47 identifiers start with a 2 letter ISO 639-1 _or_ 3 letter 639-2, 639-3
or 639-5 language code; see the [RFC 5646 Syntax
section](http://tools.ietf.org/html/rfc5646#section-2.1):
>
> Language-Tag = langtag ; normal language tags
> / privateuse ; private use tag
> / grandfathered ; grandfathered tags
>
> langtag = language
> ["-" script]
> ["-" region]
> *("-" variant)
> *("-" extension)
> ["-" privateuse]
>
> language = 2*3ALPHA ; shortest ISO 639 code
> ["-" extlang] ; sometimes followed by
> ; extended language subtags
> / 4ALPHA ; or reserved for future use
> / 5*8ALPHA ; or registered language subtag
>
I don't expect Apple to use the `privateuse` or `grandfathered` forms, so you
can assume that you are looking at ISO 639-1, ISO 639-2, ISO 639-3 or ISO
639-5 language codes here. Simply map the 2-letter ISO-639-1 codes to 3-letter
ISO 639-* codes.
You can use the [`pycountry` package](https://pypi.python.org/pypi/pycountry)
for this:
import pycountry
lang = pycountry.languages.get(alpha2=two_letter_code)
three_letter_code = lang.terminology
Demo:
>>> import pycountry
>>> lang = pycountry.languages.get(alpha2='aa')
>>> lang.terminology
u'aar'
where the _terminology_ form is the preferred 3-letter code; there is also a
_bibliography_ form which differs only for 22 entries. See [ISO 639-2 _B and T
codes_](http://en.wikipedia.org/wiki/ISO_639-2#B_and_T_codes). The package
doesn't include entries from ISO 639-5 however; that list overlaps and
conflicts with 639-2 in places and I don't think Apple uses such codes at all.
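Extracting the part to feed into pycountry is then a one-line split (a sketch; the function name is mine):

```python
def primary_language_subtag(tag):
    """Return the primary language subtag of a BCP 47 tag, lowercased.

    Per RFC 5646, subtags are hyphen-separated and the ISO 639 language
    subtag always comes first, so a plain split suffices for the
    non-grandfathered, non-private-use tags Apple returns.
    """
    return tag.split("-")[0].lower()

print(primary_language_subtag("en-US"))       # en
print(primary_language_subtag("zh-Hant-TW"))  # zh
print(primary_language_subtag("agq"))         # agq
```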
|
Python unittest: to mock.patch() or just replace method with Mock?
Question: When mocking classes or methods while writing unit tests in Python, why do I
need to use the
[@patch](http://www.voidspace.org.uk/python/mock/patch.html#mock.patch)
decorator? I could just replace the method with a Mock object without any patch
annotation.
Examples:
class TestFoobar(unittest.TestCase):
def setUp(self):
self.foobar = FooBar()
# 1) With patch decorator:
@patch.object(FooBar, "_get_bar")
@patch.object(FooBar, "_get_foo")
def test_get_foobar_with_patch(self, mock_get_foo, mock_get_bar):
mock_get_bar.return_value = "bar1"
mock_get_foo.return_value = "foo1"
actual = self.foobar.get_foobar()
self.assertEqual("foo1bar1", actual)
# 2) Just replacing the real methods with Mock with proper return_value:
def test_get_foobar_with_replacement(self):
self.foobar._get_foo = Mock(return_value="foo2")
self.foobar._get_bar = Mock(return_value="bar2")
actual = self.foobar.get_foobar()
self.assertEqual("foo2bar2", actual)
**Could someone produce an example, where patch decorator is good and
replacing is bad?**
We have always used patch decorator with our team, but after reading [this
comment](http://mauveweb.co.uk/posts/2014/09/every-mock-patch-is-a-little-
smell.html#comment-1573595883) for a post, I got the idea that maybe we could
write nicer-looking code without the need of patch decorators.
I understand that patching is temporary, so maybe with some cases, it is
dangerous to not use patch decorator and replace methods with mock instead?
Could it be that replacing objects in one test method can affect the result of
the next test method?
I tried to prove this, but came up empty: both tests pass in the next code:
def test_get_foobar_with_replacement(self):
self.foobar._get_foo = Mock(return_value="foo2")
self.foobar._get_bar = Mock(return_value="bar2")
actual = self.foobar.get_foobar()
self.assertIsInstance(self.foobar._get_bar, Mock)
self.assertIsInstance(self.foobar._get_foo, Mock)
self.assertEqual("foo2bar2", actual)
def test_get_foobar_with_real_methods(self):
actual = self.foobar.get_foobar()
self.assertNotIsInstance(self.foobar._get_bar, Mock)
self.assertNotIsInstance(self.foobar._get_foo, Mock)
self.assertIsInstance(self.foobar._get_bar, types.MethodType)
self.assertIsInstance(self.foobar._get_foo, types.MethodType)
self.assertEqual("foobar", actual)
Full source code (Python 3.3):
dropbox.com/s/t8bewsdaalzrgke/test_foobar.py?dl=0
Answer: `patch.object` will restore the item you patched to its original state after
the test method returns. If you monkey-patch the object yourself, you need to
restore the original value if that object will be used in another test.
In your two examples, you are actually patching two different things. Your
call to `patch.object` patches the _class_ `FooBar`, while your monkey patch
patches a specific _instance_ of `FooBar`.
Restoring the original object isn't important if the object will be created
from scratch each time. (You don't show it, but I assume `self.foobar` is
being created in a `setUp` method, so that even though you replace its
`_get_foo` method, you aren't reusing that _specific_ object in multiple
tests.)
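A small sketch of the difference, assuming Python 3's `unittest.mock` (the standalone `mock` package behaves the same): patching the class without `patch` leaks into everything that runs afterwards, while `patch.object` cleans up on exit.

```python
from unittest import mock

class FooBar(object):
    def _get_foo(self):
        return "foo"

original = FooBar._get_foo

# patch.object restores the class attribute when the block exits
with mock.patch.object(FooBar, "_get_foo", return_value="mocked"):
    assert FooBar()._get_foo() == "mocked"
assert FooBar._get_foo is original

# a bare class-level replacement persists: every FooBar created later sees it
FooBar._get_foo = mock.Mock(return_value="leaked")
assert FooBar()._get_foo() == "leaked"
```

Patching the instance, as in the question's second test, avoids the leak only because `setUp` builds a fresh `FooBar` for each test.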
|
Structuring Plain Text to JSON
Question: I am attempting to take a collection of strings, tokenize the strings into
individual characters, and restructure them into JSON for the purpose of
building a cluster dendrogram visualization (sort of like [this word
tree](http://bl.ocks.org/emeeks/4733217), except for strings instead of
sentences). As such, there are times when the sequence of characters is shared
(or reshared) across the data.
So, for example, lets say I have a text file that looks like:
xin_qn2
x_qing4n3
x_qing4nian_
This is all I anticipate for my input; there's no CSV headings or anything
associated with the data. The JSON object, then would look something like:
{
"name": "x",
"children": [
{
"name": i,
},
{
"name": _,
"children": [
{
"name": "q"
}
]
}
]
}
And so on. I've been trying to structure the data ahead of time, before
sending it to D3.js, using Ruby to split the lines into individual characters,
but I'm stuck trying to figure out how to structure this in hierarchical JSON.
file_contents = File.open("single.txt", "r")
file_contents.readlines.each do |line|
parse = line.scan(/[A-Za-z][^A-Za-z]*/)
puts parse
end
I could do this in the browser with d3.js instead, I just haven't tried that
yet.
Just wondering if there are any suggestions, pointers, or existing
tools/scripts that might help me out. Thanks!
**Update 2014-10-02**
So I've spent a little time trying this in Python, but I keep getting stuck.
Nor am I handling "children" elements correctly, I now see. Any suggestions?
**Attempt One**
#!/usr/bin/python
from collections import defaultdict
import json
def tree():
return defaultdict(tree)
file_out = open('out.txt', 'wb')
nested = defaultdict(tree)
with open("single.txt") as f:
for line in f:
o = list(line)
char_lst = []
for chars in o:
d = {}
d['name']=chars
char_lst.append(d)
for word in d:
node = nested
for char in word:
node = node[char.lower()]
print node
print(json.dumps(nested))
**Attempt Two**
#!/usr/bin/python
from collections import defaultdict
import json
def tree():
return defaultdict(tree)
nested = defaultdict(tree)
words = list(open("single.txt"))
words_output = open("out.json", "wb")
for word in words:
node = nested
for char in word:
node = node[char.lower()]
def print_nested(d, indent=0):
for k, v in d.iteritems():
print '{}{!r}:'.format(indent * ' ', k)
print_nested(v, indent + 1)
print_nested(nested)
Answer: You're almost there with attempt #2. Adding `json.dumps(nested)` to the end
would print the following JSON:
{
"x":{
"i":{
"n":{
"_":{
"q":{
"n":{
"2":{
}
}
}
}
}
},
"_":{
"q":{
"i":{
"n":{
"g":{
"4":{
"n":{
"i":{
"a":{
"n":{
"_":{
}
}
}
},
"3":{
}
}
}
}
}
}
}
}
}
}
Close, but not quite what you want. By the way, you can also convert the
nested defaultdict into a regular dict using the following function:
def convert(d):
return dict((key, convert(value)) for (key,value) in d.iteritems()) if isinstance(d, defaultdict) else d
But we still only have a dict of dicts (of dicts...). Using recursion, we can
convert it to your required format as follows:
def format(d):
children = []
for (key, value) in d.iteritems():
children += [{"name":key, "children":format(value)}]
return children
Finally, let's print out the json:
print json.dumps(format(convert(nested)))
This prints the following JSON (formatted for clarity):
[
{
"name":"x",
"children":[
{
"name":"i",
"children":[
{
"name":"n",
"children":[
{
"name":"_",
"children":[
{
"name":"q",
"children":[
{
"name":"n",
"children":[
{
"name":"2",
"children":[
]
}
]
}
]
}
]
}
]
}
]
},
{
"name":"_",
"children":[
{
"name":"q",
"children":[
{
"name":"i",
"children":[
{
"name":"n",
"children":[
{
"name":"g",
"children":[
{
"name":"4",
"children":[
{
"name":"n",
"children":[
{
"name":"i",
"children":[
{
"name":"a",
"children":[
{
"name":"n",
"children":[
{
"name":"_",
"children":[
]
}
]
}
]
}
]
},
{
"name":"3",
"children":[
]
}
]
}
]
}
]
}
]
}
]
}
]
}
]
}
]
}
]
Here's the complete code:
#!/usr/bin/python
from collections import defaultdict
import json
def tree():
return defaultdict(tree)
nested = defaultdict(tree)
words = open("single.txt").read().splitlines()
words_output = open("out.json", "wb")
for word in words:
node = nested
for char in word:
node = node[char.lower()]
def convert(d):
return dict((key, convert(value)) for (key,value) in d.iteritems()) if isinstance(d, defaultdict) else d
def format(d):
children = []
for (key, value) in d.iteritems():
children += [{"name":key, "children":format(value)}]
return children
print json.dumps(format(convert(nested)))
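The code above targets Python 2 (`iteritems` and print statements). A minimal Python 3 port of the same idea, shown here with inline sample words instead of the file, can skip the `convert` step and walk the defaultdict directly:

```python
import json
from collections import defaultdict

def tree():
    return defaultdict(tree)

nested = defaultdict(tree)
for word in ["xin_qn2", "x_qing4n3", "x_qing4nian_"]:
    node = nested
    for char in word:
        node = node[char.lower()]

def to_children(d):
    # Python 3: items() replaces iteritems(); recursion bottoms out on {}
    return [{"name": k, "children": to_children(v)} for k, v in d.items()]

print(json.dumps(to_children(nested)))
```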
|
PermissionError: [WinError 5] Access is denied python using moviepy to write gif
Question: I'm using windows 8.1 64 bit
my code
import pdb
from moviepy.editor import *
clip = VideoFileClip(".\\a.mp4")
clip.write_gif('.\\aasda.gif')
the exception is at write_gif method
Traceback (most recent call last):
File "C:\abi\youtubetogif_project\test.py", line 5, in <module>
clip.write_gif('G:\\abi\\aasda.gif')
File "<string>", line 2, in write_gif
File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\decorators.py", line 49, in requires_duration
return f(clip, *a, **k)
File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\video\VideoClip.py", line 435, in write_gif
dispose= dispose, colors=colors)
File "<string>", line 2, in write_gif
File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\decorators.py", line 49, in requires_duration
return f(clip, *a, **k)
File "C:\Python34\lib\site-packages\moviepy-0.2.1.8.12-py3.4.egg\moviepy\video\io\gif_writers.py", line 186, in write_gif
stdout=sp.PIPE)
File "C:\Python34\lib\subprocess.py", line 848, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1104, in _execute_child
startupinfo)
PermissionError: [WinError 5] Access is denied
I moved the script to another folder and partition, running moviepy
dependancies and python as admin, turning off UAC still gives me error
Answer: I've run into this as well, solution is usually to be sure to run the program
as an administrator (right click, run as administrator.)
|
Jasper Report Module on OpenERP 7
Question: I was trying to install Jasper Report module for OpenERP 7
I got them Syleam mdule from here <https://github.com/syleam/openerp-
jasperserver>
and download OpenERP 7 from here <http://nightly.openerp.com/7.0/nightly/src/>
I already install httplib2, pyPdf and python-dime that was required for this
module.But when i try to install the module i got this error
> OpenERP Server Error
>
> Client Traceback (most recent call last): File
> "/opt/openerp-7/openerp/addons/web/http.py", line 204, in dispatch
> response["result"] = method(self, **self.params) File
> "/opt/openerp-7/openerp/addons/web/controllers/main.py", line 1132, in
> call_button action = self._call_kw(req, model, method, args, {}) File
> "/opt/openerp-7/openerp/addons/web/controllers/main.py", line 1120, in
> _call_kw return getattr(req.session.model(model), method)(*args, **kwargs)
> File "/opt/openerp-7/openerp/addons/web/session.py", line 42, in proxy
> result = self.proxy.execute_kw(self.session._db, self.session._uid,
> self.session._password, self.model, method, args, kw) File
> "/opt/openerp-7/openerp/addons/web/session.py", line 30, in proxy_method
> result = self.session.send(self.service_name, method, *args) File
> "/opt/openerp-7/openerp/addons/web/session.py", line 103, in send raise
> xmlrpclib.Fault(openerp.tools.ustr(e), formatted_info)
>
> Server Traceback (most recent call last): File
> "/opt/openerp-7/openerp/addons/web/session.py", line 89, in send return
> openerp.netsvc.dispatch_rpc(service_name, method, args) File
> "/opt/openerp-7/openerp/netsvc.py", line 296, in dispatch_rpc result =
> ExportService.getService(service_name).dispatch(method, params) File
> "/opt/openerp-7/openerp/service/web_services.py", line 626, in dispatch res
> = fn(db, uid, *params) File "/opt/openerp-7/openerp/osv/osv.py", line 190,
> in execute_kw return self.execute(db, uid, obj, method, *args, **kw or {})
> File "/opt/openerp-7/openerp/osv/osv.py", line 132, in wrapper return
> f(self, dbname, *args, **kwargs) File "/opt/openerp-7/openerp/osv/osv.py",
> line 199, in execute res = self.execute_cr(cr, uid, obj, method, *args,
> **kw) File "/opt/openerp-7/openerp/osv/osv.py", line 187, in execute_cr
> return getattr(object, method)(cr, uid, *args, **kw) File
> "/opt/openerp-7/openerp/addons/base/module/module.py", line 426, in
> button_immediate_install return self._button_immediate_function(cr, uid,
> ids, self.button_install, context=context) File
> "/opt/openerp-7/openerp/addons/base/module/module.py", line 477, in
> _button_immediate_function _, pool = pooler.restart_pool(cr.dbname,
> update_module=True) File "/opt/openerp-7/openerp/pooler.py", line 39, in
> restart_pool registry = RegistryManager.new(db_name, force_demo, status,
> update_module) File "/opt/openerp-7/openerp/modules/registry.py", line 233,
> in new openerp.modules.load_modules(registry.db, force_demo, status,
> update_module) File "/opt/openerp-7/openerp/modules/loading.py", line 354,
> in load_modules loaded_modules, update_module) File
> "/opt/openerp-7/openerp/modules/loading.py", line 256, in
> load_marked_modules loaded, processed = load_module_graph(cr, graph,
> progressdict, report=report, skip_modules=loaded_modules,
> perform_checks=perform_checks) File
> "/opt/openerp-7/openerp/modules/loading.py", line 188, in load_module_graph
> load_data(module_name, idref, mode) File
> "/opt/openerp-7/openerp/modules/loading.py", line 76, in load_data = lambda
> *args: _load_data(cr, *args, kind='data') File
> "/opt/openerp-7/openerp/modules/loading.py", line 124, in _load_data
> tools.convert_xml_import(cr, module_name, fp, idref, mode, noupdate, report)
> File "/opt/openerp-7/openerp/tools/convert.py", line 959, in
> convert_xml_import obj.parse(doc.getroot()) File
> "/opt/openerp-7/openerp/tools/convert.py", line 852, in parse
> self._tags[rec.tag](self.cr, rec, n) File
> "/opt/openerp-7/openerp/tools/convert.py", line 812, in _tag_record f_val =
> _eval_xml(self,field, self.pool, cr, self.uid, self.idref) File
> "/opt/openerp-7/openerp/tools/convert.py", line 154, in _eval_xml for n in
> node]), idref) File "/opt/openerp-7/openerp/tools/convert.py", line 148, in
> _process idref[id]=self.id_get(cr, id) File
> "/opt/openerp-7/openerp/tools/convert.py", line 829, in id_get res =
> self.model_id_get(cr, id_str) File
> "/opt/openerp-7/openerp/tools/convert.py", line 838, in model_id_get return
> model_data_obj.get_object_reference(cr, self.uid, mod, id_str) File
> "/opt/openerp-7/openerp/tools/cache.py", line 18, in lookup r =
> self.lookup(self2, cr, *args) File "/opt/openerp-7/openerp/tools/cache.py",
> line 46, in lookup value = d[key] = self.method(self2, cr, *args) File
> "/opt/openerp-7/openerp/addons/base/ir/ir_model.py", line 876, in
> get_object_reference data_id = self._get_id(cr, uid, module, xml_id) File
> "/opt/openerp-7/openerp/tools/cache.py", line 18, in lookup r =
> self.lookup(self2, cr, *args) File "/opt/openerp-7/openerp/tools/cache.py",
> line 46, in lookup value = d[key] = self.method(self2, cr, *args) File
> "/opt/openerp-7/openerp/addons/base/ir/ir_model.py", line 869, in _get_id
> raise ValueError('No such external ID currently defined in the system:
> %s.%s' % (module, xml_id)) ValueError: No such external ID currently defined
> in the system: jasper_server.load_jrxml_file_wizard_action
Anyone can help me what happen and how to solve that ?
oh and 1 more when i try to open module jasper_server_wizard_sample i got an
error too (open not install)
Answer: There is currently an open [Pull Request](https://github.com/syleam/openerp-
jasperserver/pull/9) to " install module without error about missing
reference".
Maybe it's a bug and that PR fixes it.
|
Python class inheriting multiprocessing: mocking one of the class objects
Question: I have written a class that inherits the _multiprocessing.Process()_ class. In
the initialization I set some parameters, one of them is another class that
writes to some file on my hard drive. For the purpose of unit testing I would
like to mock this class instance in order to avoid actually writing to some
file. Here is some minimal example:
import mock
import time
import multiprocessing as mp
import numpy as np
class MyProcess(mp.Process):
def __init__(self):
super(MyProcess, self).__init__()
# the original code would create some instance of a file manipulation
# class here:
self._some_class = np.zeros(100)
def run(self):
# the following line would actually write to some file in the original
# code:
self._some_class.sum()
for ii in range(10):
print(str(ii))
time.sleep(.01)
if __name__ == '__main__':
proc = MyProcess()
# proc._some_class = mock.Mock()
proc.start()
proc.join()
The code above should run as is. However, if I try to mock the class
__some_class_ in the class _MyProcess_ (= uncommenting the line in the main
function) I get errors. Interestingly, I get the exact same errors if I try to
initialize _self._some_class_ with a function (e.g. replace line 13 in the
code above with *self._some_class = lambda x: x/2 *). So my guess is that
there is some problem with copying the objects in _MyProcess_ when a new
process is spawned. That raises two questions:
* Could somebody shed some light why it is not possible to initialize a class object with a function?
* How could I mock one of the class objects of _MyProcess_?
I would really appreciate any help...
EDIT 1 (more information on the error messages):
If I uncomment the line in the main function I get a bunch of errors, where I
think the following should be the relevant one:
    pickle.PicklingError: Can't pickle <class 'mock.Mock'>: it's not the same object as mock.Mock
EDIT 2 (found some relevant information):
I found the following
[issue](https://code.google.com/p/mock/issues/detail?id=139) on google code,
which seems to be related to my problem. Indeed, changing the mock in the main
function to the following makes the code executable:
proc._some_class = mock.MagicMock()
proc._some_class.__class__ = mock.MagicMock
However, what I would be interested in for testing is the following call:
_proc._some_class.some_method.called_ , which is always _False_ despite the
fact that the method has obviously been called. I guess that has something to
do with the workaround that I mentioned above.
EDIT 3 (workaround based on the suggestion of jb.):
One can work around the issue by calling the _run_ method directly. The
following code contains the main function and shows how to test the function
using mocks:
if __name__ == '__main__':
proc = MyProcess()
proc._some_class = mock.MagicMock()
proc.run()
print(proc._some_class.sum.called)
Answer: While this does not address your question directly, please consider the
following approach. Inheriting from the Process object might make the
implementation easier, but (as you noted) can make unit testing very hard.
It would be much easier if you passed the `run` function as a parameter to the
`Process` instance; that way you can test the `run` function separately from the
multiprocessing environment. If you need to test `run` behaviour in another
process, just create a callable object and mock the proper things inside it.
Something along these lines:
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
If you want to stick to your design, you shouldn't use MagicMock: the Python
multiprocessing interface relies on
[pickling](https://docs.python.org/2.7/library/pickle.html), and the mock library
and pickling don't go [well
together](https://code.google.com/p/mock/issues/detail?id=139). Just write
your own mocking class.
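To make that concrete, here is a hedged sketch (all names are illustrative) of a callable target whose file-writing dependency is injected, so a test can swap in a hand-rolled, picklable fake and run the body in-process:

```python
class Worker(object):
    """Callable usable as a Process target; the writer is injected."""
    def __init__(self, writer):
        self.writer = writer

    def __call__(self):
        self.writer.write("result")

class FakeWriter(object):
    """Hand-rolled fake: picklable, unlike a Mock, so it also
    survives being sent to a real multiprocessing.Process."""
    def __init__(self):
        self.calls = []

    def write(self, data):
        self.calls.append(data)

# In a unit test, skip the subprocess entirely and call the worker directly:
fake = FakeWriter()
Worker(fake)()
assert fake.calls == ["result"]

# In production it would run as:
#   multiprocessing.Process(target=Worker(real_writer)).start()
```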
|
APLpy attribute error : 'module' object has no attribute 'FITSfigure'
Question: I have installed APLpy (version 0.9.12) & I have python 2.7.
I have a FITS image called "test.fits".
I gave following commands:
import aplpy
fig = aplpy.FITSfigure("test.fits")
Then I got this message:
AttributeError: 'module' object has no attribute 'FITSfigure'
I got same message when I tried following:
fig = aplpy.aplpy.FITSfigure("test.fits")
I am new to python & APLpy.
Answer: Your line:
fig = aplpy.FITSfigure("test.fits")
is spelled wrong; it has to be:
fig = aplpy.FITSFigure("test.fits")
|
Testing Tornado app for 4xx status code
Question: Consider the following Tornado (v 4.0.2) application, which is a little bit
modified version of official [hello
world](http://www.tornadoweb.org/en/latest/#hello-world) example:
import tornado.ioloop
import tornado.web
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.set_status(400)
self.write("Hello, world")
application = tornado.web.Application([
(r"/", MainHandler),
])
if __name__ == "__main__":
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
As you can see, the only difference here is `set_status` call in
`MainHandler`. Now, I save this code into `app.py`, then I open `tests.py` and
I put there this simple unit test:
import tornado.ioloop
from tornado.httpclient import HTTPRequest
from tornado.testing import AsyncHTTPTestCase, gen_test
from app import application
class SimpleTest(AsyncHTTPTestCase):
def get_app(self):
return application
def get_new_ioloop(self):
return tornado.ioloop.IOLoop.instance()
@gen_test
def test_bad_request(self):
request = HTTPRequest(url=self.get_url('/'))
response = yield self.http_client.fetch(request)
self.assertEqual(response.code, 400)
When I run this test with `python -m tornado.test.runtests tests` I get the
following result:
E
======================================================================
ERROR: test_bad_request (tests.SimpleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/testing.py", line 118, in __call__
result = self.orig_method(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/testing.py", line 494, in post_coroutine
timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 418, in run_sync
return future_cell[0].result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 109, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 631, in run
yielded = self.gen.throw(*sys.exc_info())
File "tests.py", line 18, in test_bad_request
response = yield self.http_client.fetch(request)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 628, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 111, in result
raise self._exception
HTTPError: HTTP 400: Bad Request
----------------------------------------------------------------------
Ran 1 test in 0.022s
FAILED (errors=1)
[E 140929 12:55:59 testing:687] FAIL
Obviously this is correct, because the handler sets 400 status code. **But how
can I test my application for such case?** I think 4xx codes are useful, so I
don't want to give them up. However I'm new to Tornado and I wasn't able to
find a way to test them. Is there any?
Answer: Try this:
@gen_test
def test_bad_request(self):
request = HTTPRequest(url=self.get_url('/'))
with self.assertRaises(tornado.httpclient.HTTPError) as context:
yield self.http_client.fetch(request)
self.assertEqual(context.exception.code, 400)
See the documentation for
[assertRaises](https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertRaises).
|
Trouble with scraping text from site using lxml / xpath()
Question: quick one. I'm new to using lxml and have spent quite a while trying to scrape
text data from a particular site. The element structure is as shown below:
<http://tinypic.com/r/2iw7zaa/8>
What i want to do is extract the 100,100 that is shown within the highlighted
area. The statements i've tried include (I saved the source of the site into a
text file to test, test.txt - tried also with html extension):
from lxml import html
    tree = html.parse("test.txt")
#value = tree.xpath('//*[@id="content"]/table[4]/tbody/tr[1]/td[2]')
#value = tree.xpath('//*[@id="content"]/table[4]/tbody/tr[1]/td[2]/text()')
All i seem to get as a result is an empty list [] ,any help would be greatly
appreciated.
ps i commented out the two value statements as I'm showing what i tried. I
tried a bunch of other xpath statements similiar to the ones above but they
were lost as the python shell crashed on me.
pps. apologies for the link to the pic - due to rep I can't post the pic
directly.
Answer: Try removing '/tbody' from the xpath.
The browser might be adding the `/tbody' tag whereas it might not appear in
the raw HTML.
Read more [here](http://stackoverflow.com/questions/18241029/why-does-my-
xpath-query-scraping-html-tables-only-work-in-firebug-but-not-the) and
[here](http://stackoverflow.com/questions/1678494/why-does-firebug-add-tbody-
to-table/1681427#1681427).
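You can check this from Python itself: the raw HTML a server sends typically has no `<tbody>`, even though the browser inspector shows one. A stdlib-only demonstration (the markup below is a stand-in for the real page):

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every start tag seen in the raw markup."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

# What the server actually sends: note there is no <tbody> anywhere
raw = '<table id="content"><tr><td>100,100</td></tr></table>'
parser = TagCollector()
parser.feed(raw)
assert "tbody" not in parser.tags  # so the XPath must not include /tbody
```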
|
Avoid calling an object after it is dead
Question: I have a `threading.Thread` subclass in python and its run method is as
follows:
def run(self):
while self.caller.isAlive():
details = self.somefile.read()
if self.status() and details:
self.handler(details)
time.sleep(self.interval)
The arguments passed to this thread's `__init__` are as follows:
* `caller` is the thread object of the caller
* `status` is the callable that returns a boolean indicating the status
* `handler` is called to handle the details (some data)
The while loop checks if the caller thread is alive. If yes, it enters the
loop and checks if it's time for a status update and handles the data. It
sleeps and continues the loop. Note that in this case the `caller` is main
thread and `status` is provided by the main thread (a function of some sort).
**The issue is** in between `while self.caller.isAlive():` and `if
self.status() and details:`. If the caller thread gets terminated in that tiny
fraction of a second, its `status` function would still get called and it will
raise an error. Are there any ways to prevent this?
**Edit:** The main thread invokes this thread. The main thread has the status
function inside a class. Since it is an instance method, the instance parameter
`self` must be passed while calling it. In my case, it is like:
def OnStatus(self, *args, **kwargs):
...
When the main thread is exited (normally) the class object and it's instances
are destroyed, but the thread that was started still persists and might call:
# self.status is self.OnStatus() defined before,
# it is passed as the status callable to init
self.status() # Things might go wrong here (at the wrong time)
**Edit** Tried this with a wxpython application and there was an exception
raised!
'The object that you are referring to is:'
<bound method TextCtrl.IsShown of wxPython wrapper for DELETED TextCtrl object! (The C++ object no longer exists.)>
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/home/user/Desktop/test.py", line 37, in run
if self.status() and details:
File "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 9179, in IsShown
return _core_.Window_IsShown(*args, **kwargs)
TypeError: in method 'Window_IsShown', expected argument 1 of type 'wxWindow const *'
Answer: Nothing is stopping you from calling `status()` on the caller after the main
thread has ended. The `caller` object will still be there. It stays alive
because your background thread still holds a reference to it, which means its
reference count is still non-zero. Consider this example:
import threading
import time
class Ok(threading.Thread):
def __init__(self, obj):
super(Ok, self).__init__()
self.obj = obj
def run(self):
while True:
print self.obj.ok
time.sleep(2)
class Obj(object):
def __init__(self):
self.ok = "hey"
o = Obj()
t = Ok(o)
t.start()
del o
We create an instance of `Obj` in the main thread, and explicitly delete the
reference to it from the main thread just before it ends. However, our output
looks like this:
hey
hey
hey
hey
hey
hey
<forever>
Because the background thread has a reference to the instance of `Obj`, it
stays alive even though the main thread is finished.
Additionally, I would recommend using an `Event` to signal that the `caller`
thread is shutting down, so that the background `Thread` will be woken up from
its sleep as soon as that happens:
caller_dead = threading.Event()
def run(self):
        while not caller_dead.is_set():
details = self.somefile.read()
if self.status() and details:
self.handler(details)
caller_dead.wait(self.interval)
....
# This is the end of your main thread
caller_dead.set()
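A runnable sketch of why the `Event` is worth it: `wait()` blocks for at most the interval, but returns immediately once `set()` is called, so the background thread never oversleeps a shutdown:

```python
import threading
import time

stop = threading.Event()

def worker():
    while not stop.is_set():
        # blocks up to 5 s, but returns as soon as stop.set() runs
        stop.wait(5)

t = threading.Thread(target=worker)
start = time.time()
t.start()
stop.set()      # simulate the main thread finishing
t.join()
elapsed = time.time() - start
assert elapsed < 5  # woke up right away instead of sleeping out the interval
```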
|
Python - Extracting excel docs from file, need help reading data
Question: So I've been working on a project extracting .xlsx docs from a file in attempt
to compile the data into one worksheet.
So for I've managed a loop to pull the documents but now I'm stuck trying to
read the documents.
Python 2.7
As follows, my script and response in the shell
#-------------- loop that pulls in files from folder--------------
import os
#create directory from which to pull the files
rootdir = 'C:\Users\username\Desktop\Mults'
for subdir, dir, files in os.walk(rootdir):
for file in files:
print os.path.join(subdir,file)
#----------------------merge work books-----------------------
import xlrd
import xlsxwriter
wb = xlsxwriter.workbook('merged.xls')
ws = workbook.add_worksheet()
for file in filelist:
r = xlrd.open_workbook(file)
head, tail = os.path.split(file)
count = 0
for sheet in r:
if sheet.number_of_rows()>0:
count += 1
for sheet in r:
if sheet.number_of_rosw()>0:
if count == 1:
sheet_name = tail
else:
sheet_name = "%s_%s" (tail, sheet.name)
new_sheet = wb.create_sheet(sheet_name)
new_sheet.write_reader(sheet)
new_sheet.close()
wb.close()
The error I'm receiving when I run the program
C:\Users\username\Desktop\Mults\doc1.xlsx
C:\Users\username\Desktop\Mults\doc2.xlsx
C:\Users\username\Desktop\Mults\doc3.xlsx
C:\Users\username\Desktop\Mults\doc4.xlsx
Traceback (most recent call last):
File "C:\Users\username\Desktop\Work\Python\excel practice\xlsx - loops files
- 09204.py", line 20, in <module>
wb = xlsxwriter.workbook('merged.xls')
TypeError: 'module' object is not callable
I know I'm missing a step somewhere to connect the data.
I've practiced with xlsxwriter in other scripts and the module worked fine.
For some reason won't recognize it here.
Also, as suggested I've tried xlwt, but experienced trouble importing the
module into my shell even though it is installed accordingly.
Any tips will be helpful!
Thanks!
Answer: It is a capital `W`: the class is `Workbook`
wb = xlsxwriter.Workbook('merged.xls')
You should also use `/` slashes or `r` raw string in paths in windows:
r'C:\Users\username\Desktop\Mults'
'C:/Users/username/Desktop/Mults'
`ws = workbook.add_worksheet()` will also cause an error as `workbook` is not
defined anywhere.
I think you mean `wb.add_worksheet()`
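The raw-string point deserves a quick demonstration, because `\n` and `\t` inside an unescaped Windows path silently become control characters rather than raising an error:

```python
# Backslash escapes silently corrupt Windows paths in plain strings
plain = 'C:\new\test'    # \n -> newline, \t -> tab
raw = r'C:\new\test'     # raw string keeps the backslashes literal

assert '\n' in plain and '\t' in plain
assert '\n' not in raw and len(raw) == 11
```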
|
create database by load a csv files using the header as columnnames (and add a column that has the filename as a name)
Question: I have CSV files that I want to make database tables from in mysql. I've
searched all over and can't find anything on how to use the header as the
column names for the table. I suppose this must be possible. In other words,
when creating a new table in MySQL do you really have to define all the
columns, their names, their types etc in advance. It would be great if MySQL
could do something like Office Access where it converts to the corresponding
type depending on how the value looks.
I know this is maybe a too broadly defined question, but any pointers in this
matter would be helpful. I am learning Python too, so if it can be done
through a python script that would be great too.
Thank you very much.
Answer: Using Python, you could use the [csv
DictReader](https://docs.python.org/2/library/csv.html#csv.DictReader) module
to makes it pretty easy to use the headers from the csv files as labels for
the input data. It basically reads all lines in as a dictionary object with
the keys as the headers, so you can use the keys as the source for your column
names when accessing mySQL.
A quick example that reads a csv into a list of dictionaries:
example.csv:
name,address,city,state,phone
jack,111 washington st, somewhere, NE, 888-867-5309
jill,112 washington st, somewhere else, NE, 888-867-5310
john,113 washington st, another place, NE, 888-867-5311
example.py:
import csv
data = []
with open("example.csv") as csvfile:
reader = csv.DictReader(csvfile)
for line in reader:
data.append(line)
print(data[0].keys())
print(data[0]['address'])
print(data[1]['name'])
print(data[2]['phone'])
output:
$:python example.py
dict_keys(['name', 'address', 'city', 'state', 'phone'])
111 washington st
jill
888-867-5311
More in-depth examples at: <http://java.dzone.com/articles/python-101-reading-
and-writing>
Some info on connection to MySQL in Python: [How do I connect to a MySQL
Database in Python?](http://stackoverflow.com/questions/372885/how-do-i-
connect-to-a-mysql-database-in-python)
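Putting the two together, the header row can also drive the table definition itself. A minimal sketch (not from the answer above; every column is declared TEXT, and the table name is whatever you supply):

```python
import csv

def create_table_sql(table_name, headers):
    """Render a CREATE TABLE statement from a list of header names.
    Every column is declared TEXT; refine the types later if needed."""
    columns = ", ".join("`%s` TEXT" % h.strip() for h in headers)
    return "CREATE TABLE `%s` (%s)" % (table_name, columns)

def headers_from_csv(csv_path):
    """Read just the header row of a CSV file."""
    with open(csv_path) as f:
        return next(csv.reader(f))
```

The returned string can then be passed to `cursor.execute()` before loading the rows.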
|
Python replace/delete special characters
Question:
character = (%.,-();'0123456789-—:`’)
character.replace(" ")
character.delete()
I want to delete or replace all the special characters and numbers from my
program, I know it can be done in the one string just not sure how to space
all the special characters with quotes or anything. Somehow I'm supposed to
separate all the special character in the parenthesis just not sure how to
break up and keep all the characters stored in the variable.
Answer: The `translate` method is my preferred way of doing this. Create a mapping
between the chars you want mapped and then apply that table to your input
string.
from string import maketrans  # Python 2; Python 3 has str.maketrans built in

special = r"%.,-();'0123456789-—:`’"
blanks = " " * len(special)
table = maketrans(special, blanks)
cleaned = input_string.translate(table)  # translate returns a new string
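On Python 3 (or for `unicode` strings on Python 2, where the multi-byte `—` and `’` make the byte-oriented `string.maketrans` unreliable), the table is built differently: passing a dict to `str.maketrans` maps each character to `None`, which deletes it. A sketch with a made-up input string:

```python
# Python 3: str.maketrans with a {char: None} dict deletes those characters
special = "%.,-();'0123456789-—:`’"
table = str.maketrans(dict.fromkeys(special))

cleaned = "it's 2014, (really)!".translate(table)
```

Here the apostrophe, digits, comma and parentheses are removed; letters, spaces and the `!` pass through untouched.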
|
Nested Loop lines in a file n times (Example 3 below): Python
Question: I have a file with lines which need to be repeated as many times as the
decimal/hex value mentioned in the first line.
Input Example:
Loop 3 {Line1} ##the line is within curly braces so I used regex but not printing it out right.
Label: Blank {Line2}
Jump Label {Line3}
Output Example:
Line1
Line2
Line3
Line2
Line3
Line2
Line3
My code so far:
line_1_ = re.compile(r'Loop\s*([0-9]+|0x[0-9a-fA-F]+)\s*\{(\w+)\}', re.DOTALL)
for match in line_1_.finditer(inputfileContents):
looptimes = int(match.group(1))
line_1 = match.group(2)
jump = re.search(r'Jump\s*(\w+)\s*\{(.*?)\}', inputfileContents, re.DOTALL)
label = re.search(r'(\w+):\s*(\w+)\s*\{(\w+)\}', inputfileContents, re.DOTALL)
for i in range(looptimes):
if jump.group(1) == label.group(1):
print '%s\n%s' % (label.group(3), jump.group(2))
Error: I cannot increment a line with line++. Which I understand as it is a
string but not sure how to increment it otherwise.
Effectively it is just repeating Line2 and Line3 three times. But if there are
multiple lines between the Loop and the Jump, it has to print all the lines
from the Label statement to the Jump statement, including the Jump
statement.
This doesn't work if there are multiple lines between the jump and label
statements...
Example 2:
Blank {Line0}
Loop 3 {Line1}
Label: Blank {Line2}
Blank {Line3}
Blank {Line4}
Jump Label {Line5}
Blank {Line6}
Expected output in the above case:
Line0
Line1
Line2
Line3
Line4
Line5
Line2
Line3
Line4
Line5
Line2
Line3
Line4
Line5
Line6
Example 3:
Blank {Line0}
Loop 3 {Line1}
Label2 {Line2}
Blank {Line3}
Loop 2 {Line4}
Label1:{Line5}
Blank {Line6}
Jump Label1 {Line7}
Blank {Line8}
Jump Label2 {Line9}
Blank {Line10}
Output I need:
Line0
Line1
Line2
Line3
Line4
Line5
Line6
Line7
Line5
Line6
Line7
Line8
Line9
Line2
Line3
Line4
Line5
Line6
Line7
Line5
Line6
Line7
Line8
Line9
Line2
Line3
Line4
Line5
Line6
Line7
Line5
Line6
Line7
Line8
Line9
Line10
I tried to implement @Samy Arous method from here: [Using Python to Parse a
File for Nested Loops](http://stackoverflow.com/questions/22327002/using-
python-to-parse-a-file-for-nested-loops) but couldn't implement it so far.
Answer: I found it easier without regex:
import sys
infile = sys.stdin
def extract(line):
    # payload between the curly braces
    return line[line.find('{') + 1: line.rfind('}')]
for first in infile:
    if first.startswith('Loop '):
        break
    print extract(first)
count = int(first.split()[1], 0)  # int(x, 0) accepts decimal and 0x hex
print extract(first)  # the Loop line itself is printed only once
body = [extract(line) for line in infile]  # everything after the Loop line
while count > 0:
    print '\n'.join(body)
    count -= 1
Note this only handles a single, non-nested loop whose body runs to the end of
the input, as in the first example.
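The nested Example 3 needs more than that: something has to remember where each label is and how many repetitions its loop still owes. A small interpreter can do this. This is a sketch under two assumptions drawn from the examples: each `Jump X` pairs with the nearest preceding `Loop N`, and any line whose first word is neither `Blank`, `Loop` nor `Jump` is a label (the question mixes `Label2 {…}` and `Label1:{…}` spellings):

```python
import re

def run(lines):
    """Emit the {payload} of each line, repeating Label..Jump bodies
    as dictated by the matching Loop count (decimal or 0x hex)."""
    payload = re.compile(r'\{(.*?)\}')
    out = []
    labels = {}      # label name -> index of its line
    counts = {}      # label name -> repetitions still owed
    loop_count = 1   # count from the most recently seen Loop line
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        m = payload.search(line)
        if m:
            out.append(m.group(1))
        if line.startswith('Loop'):
            loop_count = int(line.split()[1], 0)   # '3' or '0x3'
        first = re.match(r'(\w+)', line)
        if first and first.group(1) not in ('Blank', 'Loop', 'Jump'):
            name = first.group(1)
            if name not in counts:        # first pass over this label
                labels[name] = i
                counts[name] = loop_count
        if line.startswith('Jump'):
            name = line.split()[1]
            counts[name] -= 1
            if counts[name] > 0:
                i = labels[name]          # jump back to the label line
                continue
            del counts[name]              # done; an outer loop may re-enter it
        i += 1
    return out
```

Deleting a finished label from `counts` is what makes the nesting work: when the outer loop jumps back, the inner label is re-registered with a fresh count.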
|
How to start redis server and config nginx to run mediacrush script on CentOS?
Question: I found a MediaCrush open source from here
> <https://github.com/MediaCrush/MediaCrush>
But I am stuck on the last steps. I started the Redis server and used the command
> $redis-cli
which received the "PONG" response.
Then used the command
> $celery worker -A mediacrush -Q celery,priority
and after
> python app.py
But it seems that nothing works. I just installed nginx and ran it on my IP OK,
but after editing the nginx.conf like the MediaCrush script and accessing my IP,
nothing happens. So what am I missing here? And how do I configure the nginx server and
start the Redis server to run this script on CentOS (I can change it to Arch like
them if required)?
Thanks!
Answer: I just wanted to run it for fun.. so this may be wrong but what I did after
running the celery daemon was edit the app.py script and manually set the
host, port, and set debug to false. Then I just executed it like any other
python script.
**EDIT** : This may work
PORT=8000 gunicorn -w 4 app:app
it switches your port to 8000 and runs the gunicorn daemon with 4 workers.
Both approaches worked for me.
**Edit File:** `./app.py`
from mediacrush.app import app
from mediacrush.config import _cfg, _cfgi
import os
app.static_folder = os.path.join(os.getcwd(), "static")
if __name__ == '__main__':
# CHANGE THIS LINE TO REFLECT YOUR DATA
app.run(host=_cfg("debug-host"), port=_cfgi('debug-port'), debug=True)
# FOR EXAMPLE I CHANGED IT TO THIS
# app.run(host="92.222.25.245", port=8000, debug=0)
Also, to start Redis I believe you should run `redis-server &`; I use the CLI to
manually tinker with it.
Btw I did this on linux mint / ubuntu 14.04 / debian
|
Opening tabs using Webbrowser module in Python
Question: I'm writing a Python Script using webbrowser module to automatically open the
desired webpages.
The issue I'm facing is that I'm only able to open the webpages on different
Browser windows and not on the same Browser window on different tabs.
Below is the code that I'm using.
#! /usr/bin/python -tt
import webbrowser
def main():
webbrowser.open('url1')
webbrowser.open('url2')
webbrowser.open('url3')
if __name__ == '__main__':
main()
I want to open all these links on the same web browser window on separate tabs
and not on different browser windows. Thanks :)
Answer: Simply call `webbrowser.open_new_tab('url')` for each URL. Note that whether it really opens as a tab in the same window is ultimately up to the browser; the module can only request it.
|
Socket server not responding when in thread
Question: I'm trying to set up a simple socket server in a qgis plugin. The ultimate
goal is to communicate between qgis and matlab.
I found a clear example for something comparable from here:
<http://www.blog.pythonlibrary.org/2013/06/27/wxpython-how-to-communicate-
with-your-gui-via-sockets/>
When implemented it seems to hang at self.socket.accept() even when a client
connects with it. The connection is established (tried with matlab and python)
but I don't receive anything. Netstat shows that there is a server listening
at the correct port.
Here's the relevant code:
class SocketServerListen(threading.Thread):
def __init__(self):
host = "127.0.0.1"
port = 22001
QgsMessageLog.logMessage("Initializing server")
threading.Thread.__init__(self)
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.bind((host, port))
self.socket.listen(5)
self.setDaemon(True)
self.start()
def run(self):
while True:
try:
QgsMessageLog.logMessage("Waiting for connection")
conn, address = self.socket.accept()
ready = select.select([conn,],[], [],2)
if ready[0]:
conn.sendall("hello")
QgsMessageLog.logMessage("Connected to client")
time.sleep(0.5)
#receive header and message
message = conn.recv(512)
#disconnect connection
QgsMessageLog.logMessage("Message:" + message)
self.socket.shutdown(socket.SHUT_RDWR)
conn.close()
self.socket.close()
break
except socket.error, msg:
print "Socket error! %s" % msg
break
In the log I can see the "Waiting for connection" message but it never passes
"Connected to client". I tried implementing it without threading (removing the
threading, setDaemon and start()->run() parts) and it does actually work then.
However the GUI obviously freezes then, so that's not an option.
Edit:
I removed the select, which was indeed stalling the process, and only close
the client socket. Here's the standalone code for the server:
import threading
import socket
import time
import select
class SocketServerListen(threading.Thread):#threading.Thread
def __init__(self):
host = "127.0.0.1"
port = 22001
print("Initializing server")
threading.Thread.__init__(self)
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.bind((host, port))
self.socket.listen(5)
self.setDaemon(True)
self.start()
def run(self):
while True:
try:
print("Waiting for connection")
conn, address = self.socket.accept()
conn.sendall("hello")
print("Connected to client")
time.sleep(0.5)
#receive message
message = conn.recv(512)
print(message)
#disconnect connection
conn.close()
#self.socket.close()
#break
except socket.error, msg:
print("Socket error! %s"),msg
break
listenIncoming = SocketServerListen()
for i in range(0,10):
time.sleep(5)
print("doing other stuff")
print("end")
And client:
import socket
host = "127.0.0.1"
port = 22001
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host,port))
message = client.recv(512)
print("Message:" + message)
client.sendall("hiya")
client.shutdown(socket.SHUT_RDWR)
client.close()
Standalone this now does what I wanted it to do :). Unfortunately after
implementing the changes in the qgis plugin the same problem still occurs. I
suppose the problem lies with qgis then and how it handles separate threads.
Answer: I'm only guessing here, but my guess is that the client doesn't send anything
until your server sends something first. If that's the case then the `select`
call will timeout, and you go back to call `accept` _again_ which will block
forever (or until you connect with another client).
My advice is that you should skip the `select` call.
* * *
Also, after you close the connection to the client you also close the _listening_
socket, so no more clients can connect.
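Outside of QGIS, the fixed pattern (no `select`, close only the client connection, keep the listening socket open) can be exercised end to end in one script. A minimal Python 3 sketch; binding to port 0 lets the OS pick a free port so nothing collides:

```python
import socket
import threading

def serve_one(listener):
    """Accept one client, greet it, then echo back what it sends."""
    conn, _ = listener.accept()
    conn.sendall(b"hello")
    msg = conn.recv(512)
    conn.sendall(b"got:" + msg)
    conn.close()                      # close only the client connection

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
listener.listen(5)
port = listener.getsockname()[1]

server = threading.Thread(target=serve_one, args=(listener,))
server.daemon = True
server.start()

# The "client" side, as in the question's MATLAB/Python counterpart
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
greeting = client.recv(512)           # server speaks first, so this is b"hello"
client.sendall(b"hiya")
reply = client.recv(512)
client.close()
server.join()
listener.close()
```

Because the server sends first and then blocks in `recv`, the client's two `recv` calls cleanly receive the greeting and the echo in order.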
|
Use variables python script to open file
Question: I'm not too familiar with python, can anyone tell me how I can open files
using variables in python? I want to automate this in a script to run the same
task across multiple directories
Here machine and inputfile are variables.. I tried the following code but keep
getting errors
file = open(machine + '/' + inputfile)
printing works fine.. i.e. the variables were populated correctly:
print 'Input file is "', inputfile -> abc
print 'Machine "', machine -> xyz
Hence file location should be ./xyz/abc
Error: The error I get is file ./machine/inputfile does not exist, i.e.
instead of taking the value of the variable machine and inputfile it is taking
them as is.
Apologies if this is too trivial a question
Answer: In a general sense, there is nothing wrong with your code. There is probably
something wrong with your path and/or file name.
This is how I would do it (on windows)
import os
directory = 'C:/Users/xxx' # You can use forward slashes on Windows
filename = 'somefile.txt' # avoid shadowing the built-ins `dir` and `file`
full_path = os.path.join(directory, filename) # OS independent way of building paths
with open(full_path, 'r') as f: # 'with' will automatically close the file for you
    for line in f: # Do something with the file
        print line
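Since the stated goal is running the same task across multiple directories, the join can sit inside a loop. A sketch (the machine directory names are placeholders, and the "task" here is just reading the first line of each file):

```python
import os

def first_lines(machines, inputfile):
    """Return {machine: first line of machine/inputfile} for every
    machine directory that actually contains the file."""
    results = {}
    for machine in machines:
        path = os.path.join(machine, inputfile)   # builds e.g. './xyz/abc'
        if os.path.isfile(path):
            with open(path) as f:
                results[machine] = f.readline().rstrip('\n')
    return results
```

Directories missing the file are simply skipped, which keeps a batch run from crashing halfway through.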
|
Fast 1D convolution with finite filter and sum of dirac deltas in python
Question: I need to compute the following convolution:

And K is a very simple filter, that is simply a rectangular box with finite
(!) size. My data is a list of the times t_i of the Dirac deltas.
The straightforward solution would be to bin the data and use one of numpy or
scipys convolution functions. Yet, is there a quicker way? Can I avoid the
binning of the data and take advantage of the fact that a) my filter is finite
in size (just a box) and b) I have a list of time points. Thus, I just have to
check whether my time points are currently part of the sliding box, or not.
So, I am looking for a solution that has complexity O(d*n) with d the size of
the resolution of the convolution. Thus, I want to be much faster than O(b**2)
with b the number of bins. Moreover, since n << b, it still holds that O(d*n)
is much less than O(b * log b) for fft based convolution. Thanks!
Answer: Convolutions with large box filters can be sped up using a [cumulative sum of
the signal](http://en.wikipedia.org/wiki/Summed_area_table):
**Example signal:**
import numpy as np
a = np.random.rand(10)
print a
Output:
[ 0.22501645 0.46078123 0.6788864 0.88293584 0.10379173 0.50422604
0.4670648 0.22018486 0.96649785 0.44451671]
**Convolution using the[default convolution
function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html):**
print np.convolve(a, np.ones(3) / 3, mode='valid')
Output:
[ 0.45489469 0.67420116 0.55520466 0.49698454 0.35836086 0.39715857
0.55124917 0.54373314]
**Convolution using the cumulative sum:**
s = np.cumsum(np.concatenate(([0], a)))
print (s[3:] - s[:-3]) / 3
Output:
[ 0.45489469 0.67420116 0.55520466 0.49698454 0.35836086 0.39715857
0.55124917 0.54373314]
* * *
Both the `cumsum` computation and the array subtraction are O(n), with n being
the number of list elements, so the overall computing time is O(n) and -
interestingly - independent of the filter size.
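For the question's unbinned variant: if the event times are kept as a sorted array, the box convolution at each evaluation point is just a count of the times falling inside the sliding window, which `np.searchsorted` delivers without any binning (two binary searches per point, so O((n + d) log n) overall). A sketch assuming a centered box of unit area:

```python
import numpy as np

def box_convolve(times, t_eval, width):
    """Convolve a sum of Dirac deltas at `times` with a centered box of
    the given `width` and unit area, evaluated at the points `t_eval`.
    No binning: two sorted searches per evaluation point."""
    times = np.sort(np.asarray(times, dtype=float))
    lo = np.searchsorted(times, np.asarray(t_eval) - width / 2.0)
    hi = np.searchsorted(times, np.asarray(t_eval) + width / 2.0)
    return (hi - lo) / width   # events in window, scaled to unit box area
```

Events landing exactly on a window edge follow `searchsorted`'s left-insertion convention, which is usually acceptable for irrational evaluation grids.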
|
Python: LOAD DATA INFILE mySQL
Question: I am trying to import data into a MySQL database using Python, but I can't get it
to work. I don't get any errors, and it looks as if everything is working OK, but the
text file never gets imported.
I can import the text file just fine if I do it manually via the mysql command
line in Terminal. What am I doing wrong?
import mysql.connector
cnx = mysql.connector.connect(user ='user1', password ='12345', host ='127.0.0.1', database ='stockStatus')
cursor = cnx.cursor()
cursor.execute('use stockStatus')
cursor.execute('truncate table products')
cursor.execute("LOAD DATA INFILE '/Path/products.txt' INTO TABLE products IGNORE 1 LINES")
cnx.close()
Answer: Try using double quotes around the input file name:
cursor.execute('LOAD DATA INFILE "/Path/products.txt" INTO TABLE products IGNORE 1 LINES')
Also call `cnx.commit()` before `cnx.close()`: `mysql.connector` turns autocommit off by
default, so without a commit the loaded rows are rolled back when the connection closes.
|
Send commands to a socket with Python
Question: I am trying to connect to a socket file and send it some commands :
#!/usr/bin/env python
import socket
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/var/run/socket")
s.send('a command here')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
Did I miss something ?
I am totally new with Python.
Thanks in advance for any help !
Answer: You don't say what the actual problem is, or show any errors, but here are
some things to check.
1. Firstly, does the socket actually exist?
2. Does your client python process have permission to access the socket file?
3. Is there a server process running that is listening for incoming connections on that socket?
Note that you need a client _and_ a server for communication to take place.
The server must already be running before the client can connect. The server
creates the socket which must not already exist.
server.py
import os
import socket
address = '/tmp/socket'
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(address) # this creates the socket file
s.listen(1)
r, a = s.accept()
r.send('Hello\n')
msg = r.recv(1024)
print msg
r.close()
s.close()
os.unlink(address) # remove the socket file so that it can be recreated on next run
Run this server, then (in another terminal) run your client code:
client.py
import socket
server_addr = '/tmp/socket'
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(server_addr)
s.send('a command here')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
|
Python Pandas DataFrame how to Pivot
Question: Dear amazing hackers of the world,
I'm a newbie, and can't figure out which python/pandas function can achieve
the "transformation" I want. Showing you what I have ("original") and what
kind of result I want ("desired") is better than a lengthy description (I
think and hope).
import pandas as pd
# original DataFrame input
df_orig = pd.DataFrame()
df_orig["Treatment"] = ["C", "C", "D", "D", "C", "C", "D", "D"]
df_orig["TimePoint"] = [24, 48, 24, 48, 24, 48, 24, 48]
df_orig["AN"] = ["ALF234","ALF234","ALF234","ALF234","XYK987","XYK987","XYK987","XYK987"]
df_orig["Bincode"] = [33,33,33,33,44,44,44,44]
df_orig["BC_all"] = ["33.7","33.7","33.7","33.7","44.9","44.9","44.9","44.9"]
df_orig["RIA_avg"] = [0.202562419159333,0.281521224788666, 0.182828319454333,0.294909088002333,
0.105941322218833,0.247949961707,0.1267545610749,0.159711714967666]
df_orig["sum14N_avg"] = [4120031.79121666,3742633.37033333,4659315.47073666,4345668.76408666,
26307312.1188333,24089229.9177999,35367286.7322666,34093045.3129]
# show original DataFrame

# desired DataFrame input,
df_wanted = pd.DataFrame()
df_wanted["AN"] = ["ALF234","XYK987"]
df_wanted["Bincode"] = [33,44]
df_wanted["BC_all"] = ["33.7","44.9"]
df_wanted["C_24_RIA_avg"] = [0.202562419159333, 0.105941322218833]
df_wanted["C_48_RIA_avg"] = [0.281521224788666,0.247949961707]
df_wanted["D_24_RIA_avg"] = [0.182828319454333,0.1267545610749]
df_wanted["D_48_RIA_avg"] = [0.294909088002333, 0.159711714967666]
df_wanted["C_24_sum14N_avg"] = [4120031.791, 26307312.12]
df_wanted["C_48_sum14N_avg"] = [3742633.37, 24089229.92]
df_wanted["D_24_sum14N_avg"] = [4659315.471, 35367286.73]
df_wanted["D_48_sum14N_avg"] = [4345668.764, 34093045.31]
# show desired DataFrame

Thank you very much for your support!!
Answer: I believe you want to pivot this using `pd.pivot_table`. See [the examples on
pivot tables](http://pandas.pydata.org/pandas-
docs/stable/reshaping.html#pivot-tables-and-cross-tabulations) to understand
better how this works.
The following should give you what you want.
df_wanted = pd.pivot_table(
df_orig,
index=['AN', 'Bincode', 'BC_all'],
columns=['Treatment', 'TimePoint'],
values=['RIA_avg', 'sum14N_avg']
)
Note that the column names will not be transformed exactly as you stated in
your output, but rather there will be a hierarchical index on both the columns
and rows, which should be more convenient to work with.
Getting rows/columns/values out from this format is possible by using `.loc`:
df_wanted.loc['XYK987', :]
df_wanted.loc[:, ('sum14N_avg')]
df_wanted.loc['ALF234', ('RIA_avg', 'C', 24)]
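If the flat `C_24_RIA_avg`-style names from the desired output are needed after all (say, for export), the hierarchical columns can be joined back into strings. A sketch on a shortened copy of the question's data; note the pivot's column tuples come out as (value, Treatment, TimePoint), so they are reordered to put the value name last:

```python
import pandas as pd

df_orig = pd.DataFrame({
    "Treatment": ["C", "C", "D", "D"],
    "TimePoint": [24, 48, 24, 48],
    "AN": ["ALF234"] * 4,
    "Bincode": [33] * 4,
    "BC_all": ["33.7"] * 4,
    "RIA_avg": [0.2025, 0.2815, 0.1828, 0.2949],
    "sum14N_avg": [4120031.8, 3742633.4, 4659315.5, 4345668.8],
})

pivoted = pd.pivot_table(
    df_orig,
    index=["AN", "Bincode", "BC_all"],
    columns=["Treatment", "TimePoint"],
    values=["RIA_avg", "sum14N_avg"],
)
# (value, Treatment, TimePoint) tuples -> "Treatment_TimePoint_value" strings
pivoted.columns = ["%s_%s_%s" % (t, tp, v) for v, t, tp in pivoted.columns]
flat = pivoted.reset_index()
```

After `reset_index`, `flat` is an ordinary frame with one row per AN and the wanted flat column names.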
|
removing \xa0, \n, \t from python string
Question: I have a list item, that I've converted into a string:
[u'\n Door:\xa0Novum \t ']
I need to remove everything so that I'm left with
I have tried various methods:
string = string.replace("\xa0", "")
string.rstrip('\n')
string.translate(string.maketrans("\n\t", ""))
I am obviously doing something wrong, but can't figure out what
Answer: You need to _store the return value_ ; strings are immutable so methods return
a new string with the change applied.
You can translate for all those characters, but use the `unicode` form of the
method:
toremove = dict.fromkeys((ord(c) for c in u'\xa0\n\t '))
outputstring = inputstring.translate(toremove)
I'm assuming you wanted to get rid of _spaces_ as well.
Demo:
>>> inputstring = u'\n Door:\xa0Novum \t '
>>> toremove = dict.fromkeys((ord(c) for c in u'\xa0\n\t '))
>>> outputstring = inputstring.translate(toremove)
>>> outputstring
u'Door:Novum'
A better method still would be to use `str.split()`, then join again:
outputstring = u''.join(inputstring.split())
`\xa0`, spaces, tabs and newlines are all included in what `str.split()` will
split on, as well as carriage returns.
Demo:
>>> u''.join(inputstring.split())
u'Door:Novum'
This is better because it is a _lot_ faster for this job than using
`str.translate()`!
>>> import timeit
>>> timeit.timeit('inputstring.translate(toremove)', 'from __main__ import inputstring, toremove')
3.4527599811553955
>>> timeit.timeit('u"".join(inputstring.split())', 'from __main__ import inputstring')
0.5409181118011475
|
Django Sites Framework: Initial Data Migration Location
Question: Before Django 1.7, when using the [Django Sites
Framework](https://docs.djangoproject.com/en/dev/ref/contrib/sites/#module-
django.contrib.sites) one could/should define the initial data using [Initial
Fixtures](https://docs.djangoproject.com/en/1.7/howto/initial-data/#providing-
initial-data-with-fixtures).
> ### myproject/fixtures/initial_data.json
>
>
> [
> {
> "pk": 1,
> "model": "sites.site",
> "fields": {
> "domain": "domain1",
> "name": "name1"
> }
> },
> {
> "pk": 2,
> "model": "sites.site",
> "fields": {
> "domain": "domain2",
> "name": "name2"
> }
> },
> {
> "pk": 3,
> "model": "sites.site",
> "fields": {
> "domain": "domain3",
> "name": "name3"
> }
> }
> ]
>
Since it is a global project setting, I added a "fixtures" folder to the
project root, and added it to FIXTURE_DIRS.
# Used to search fixture files directories.
# Fixture files are files that provide initial data to be
# inserted in the database. (>python manage.py loaddata)
FIXTURE_DIRS = [
os.path.join(PROJECT_ROOT, "fixtures"),
]
Now, I'm using Django 1.7, and it is recommended to use
[migrations](https://docs.djangoproject.com/en/1.7/ref/contrib/sites/#enabling-
the-sites-framework). Quoting Django documentation:
> To set the correct name and domain for your project, you can use a data
> migration.
The problem is **Migrations are app-specific** :
> python manage.py makemigrations --empty yourappname
So, what is the recommended approach to add the Site information to my
project, using a data migration? Where should this migration live?
Running `python manage.py makemigrations --empty sites` creates the migration
in the third party app folder, so we don't want that.
Shouldn't be possible to define a MIGRATION_DIRS as FIXTURE_DIRS existed for
the initial_data?
I found
[MIGRATION_MODULES](https://docs.djangoproject.com/en/1.7/ref/settings/#migration-
modules) in settings documentation, but the problem still remains, it is app-
specific.
Answer: First, configure `MIGRATION_MODULES` in your django settings:
MIGRATION_MODULES = {
'sites': 'myproject.fixtures.sites_migrations',
}
Then, run `./manage.py makemigrations sites` to have Django create the
directory and create `0001_initial.py` in the
`myproject.fixtures.sites_migrations` package.
Then, do `./manage.py makemigrations --empty sites`. The migration file should
be created in the specified package.
My file `0002_initialize_sites.py` looks like this:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
def insert_sites(apps, schema_editor):
"""Populate the sites model"""
Site = apps.get_model('sites', 'Site')
Site.objects.all().delete()
# Register SITE_ID = 1
Site.objects.create(domain='create.tourtodo.com', name='create')
# Register SITE_ID = 2
Site.objects.create(domain='www.tourtodo.com', name='www')
class Migration(migrations.Migration):
dependencies = [
('sites', '0001_initial'),
]
operations = [
migrations.RunPython(insert_sites)
]
|
Python: installer with py2exe and project with OpenOPC module
Question: I searched for how to produce an installer for my Python project and found a
good option: the py2exe module, which is used from a setup.py.
But my project uses a COM server via the win32com module inside the OpenOPC module.
For this reason, after producing a standalone directory with the exe file, the
executable does not work, returning this exception:
IOError: [Errno 2] No such file or directory:
'C:\\Users\\(project directory)\\dist\\lib\\shared.zip\\win32com\\gen_py\\__init__.py'
I searched more about this and found this page:
<http://www.py2exe.org/index.cgi/Py2exeAndWin32com>
This page teaches a 'model' setup.py that includes a COM server as a module.
But I did not understand this 'model': it is generic for all COM servers and
does not indicate where I should include the OpenOPC module. I have tried to
use this model in some ways, like:
from distutils.core import setup
import py2exe
import sys
class Target:
def __init__(self):
self.version = version
self.company_name = author
self.copyright = license_
self.name = name
self.description = description
self.modules = ['C:\\OpenOPC\\src\\OpenOPC.py']
self.create_exe = True
self.create_dll = False
sys.argv.append('py2exe')
setup(name=name,
version=version,
author=author,
author_email=author_email,
maintainer=maintainer,
maintainer_email=maintainer_email,
url=url,
license=license_,
description=description,
long_description=long_description,
keywords=keywords,
platforms=platforms,
console=console, zipfile=zipfile,
com_server=[Target()])
Unfortunately this did not work. I tried putting some other files or
directories in `modules` in the Target class constructor. It seems that I have
to put the OpenOPC module here if not somewhere else.
Answer: I did:
sys.argv.append('py2exe')
setup(name=name,
version=version,
author=author,
author_email=author_email,
maintainer=maintainer,
maintainer_email=maintainer_email,
url=url,
license=license_,
description=description,
long_description=long_description,
keywords=keywords,
platforms=platforms,
console=console, zipfile=zipfile,
options=options)
where
options = {'py2exe': {'packages': ['win32com']}}
That is it. It worked. Note that my application has an OPC client, not an OPC
server. If it had an OPC server (COM server), it might have been even more
difficult.
|
Dictionary value initialization on lookup
Question: Consider the following:
mylist = ['a','b','c','d']
mydict = {}
for val in mylist:
if val not in mydict:
mydict[val] = []
mydict[val].append(1)
Is there any way to avoid the double lookup ("`val in mydict`" and
"`mydict[val]`")?
Note - I've tried using a default return value (i.e. `mydict.get(val,
[]).append(1)`), but the new list isn't actually registered in the dictionary
as the value for the key. For example:
mydict = {}
mydict.get('not_in_dict','default')
mydict
returns `{}`. In my case I want something that would return `{'not_in_dict' :
'default'}`.
Or is the correct answer here that I should't worry about double lookups?
(e.g. "chill, dude, don't optimize if you don't have to" or "python is
awesome, it takes care of this already").
I'm using python 3.4.
Answer: You can use
[dict.setdefault](http://www.tutorialspoint.com/python/dictionary_setdefault.htm),
which returns the value for the key (inserting the default first if the key is
missing), so only one lookup is needed:
mylist = ['a','b','c','d']
mydict = {}
for val in mylist:
    mydict.setdefault(val, []).append(1)
Or use
[collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict)
which is more efficient
from collections import defaultdict
mylist = ['a','b','c','d']
mydict = defaultdict(list)
for val in mylist:
mydict[val].append(1)
In [14]: mylist = ['a','b','c','d']
In [15]: mydict = {}
In [16]: %%timeit
....: for val in mylist:
....: mydict.setdefault(val,[])
....: mydict[val].append(1)
....:
1000000 loops, best of 3: 1.51 µs per loop
In [18]: mydict = defaultdict(list)
In [19]: %%timeit
....: for val in mylist:
....: mydict[val].append(1)
....:
1000000 loops, best of 3: 603 ns per loop
|
Accessing Python Dictionary Elements
Question: I want to save raw input elements in a dictionary to a variable. Here is a
sample of what I am doing:
accounts = {}
def accountcreater():
accountname = raw_input("Account Name: ")
accountpassword = raw_input("Account Password: ")
accountUUID = 1
accountUUID += 1
accounts[accountname] = {"password":accountpassword,"UUID":accountUUID}
def login():
loginusername = raw_input("Account Name: ")
loginpassword = raw_input("Account Password: ")
for usernames in account:
if usernames == loginusername:
accountpassword = accounts[usernames][???]
accountpassword = accounts[usernames][???]
else:
pass
That is a very simple example of what the code is like. Now the part where the
"[???]" is I have no idea what to put. I tried putting this code:
accountpassword = accounts[usernames][password]
accountpassword = accounts[usernames][UUID]
But that does not seem to work because it says `password` and `UUID` are not
defined. Yet I seem to be able to just input `[usernames]` and it will work
just fine. Any ideas?
## EDIT
For the follow code:
accountpassword = accounts[usernames]['password']
accountpassword = accounts[usernames]['UUID']
I have also tried putting them in strings, and it raises this error: `String
indices must be integers, not str`.
## EDIT 2
This is my code in its entirety, please be warned it is very long and
extensive. The only part that you need will be at the top under the functions
startup, login, and account.
import datetime
import time
#import pickle
filesfile = "filesfiles" #File's Pickle File
accountfile = "accountsfiles" #Account's Pickle File
accounts = {} #Where accounts are put
files = {} #Where files are put
currentaccount = None #The current account the user is on
#accountsaver = open(accountfile,'r') #Restores all current accounts
#accounts = pickle.load(accountsaver)
#accountsaver.close()
#filesaver = open(filesfile,'r') #Restores all current files
#files = pickle.load(filesaver)
#filesaver.close()
def startup():
for accountname in accounts:
# accountpassword = accounts[accountname]['password']
print type(accountname)
#accountsaver = open(accountfile,'wb') #Adds a new account if there is one
#pickle.dump(accounts, accountsaver)
#accountsaver.close()
print "\n ------------------- "
print " FILE SYSTEM MANAGER "
print " ------------------- "
print "\n To login type in: LOGIN"
print " To create a new account type in: ACCOUNT"
loginornew = raw_input("\n Please enter LOGIN or ACCOUNT: ") #Input to see where you want to go
if loginornew.lower() == "login":
login()
elif loginornew.lower() == "account":
newaccount()
else:
startup()
def newaccount():
newusername = ""
newpassword = ""
newpasswordagain = ""
UUID = 0 #UUID variable
print "\n--------------------------------------------"
print "\n Would you like to create a new account?"
yesorno = raw_input("\n Please enter YES or NO: ") #Checks to see if user wants to create a new account
if yesorno.lower() == "no":
print "\n--------------------------------------------"
startup()
elif yesorno.lower() == "yes":
while len(newusername) < 8: #Checks to see if username is atleast 8 characters
newusername = raw_input("\n Username must be atleast 8 characters\n Please enter a username for your account: ")
while len(newpassword) < 5: #Checks to see if password is atleast 5 characters
newpassword = raw_input("\n Password must be atleast 5 characters\n Please enter a password for your account: ")
while newpasswordagain == "": #Makes sure there is a input
newpasswordagain = raw_input(" Please confirm the password for your account: ")
if newpassword == newpasswordagain: #Checks to make sure the password is correct
for username in accounts: #Loops through all usernames in acccounts
if username.lower() == newusername.lower(): #Checks to see if the username already exists
print "\n Username already exists"
print " Please try again"
newaccount()
else: #If the username is not taken and the password is correct it creates the accounts
pass
UUID += 1 #Makes the current UUID number bigger by one
accounts[newusername] = {"password":newpassword,"UUID":UUID} #Creates a new account
print "\n Account Created"
print "\n--------------------------------------------"
startup() #Takes you back to startup menu
else: #If the passwords do not match each other
print "\n Passwords do not match"
print " Please try again"
newaccount()
else:
newaccount()
def login():
username = ""
password = ""
print "\n--------------------------------------------"
print "\n Would you like to login?"
yesorno = raw_input("\n Please enter YES or NO: ") #Checks to see if user wants to login
if yesorno.lower() == "no":
print "\n--------------------------------------------"
startup()
elif yesorno.lower() == "yes":
for usernames2 in accounts: #Testing Purposes
print usernames2, #Testing Purposes
print "" #Testing Purposes
for usernames23 in accounts: #Testing Purposes
for usernames3 in str(accounts[usernames23]): #Testing Purposes
print usernames3, #Testing Purposes
while username == "": #Makes sure there is a input
username = raw_input("\n Please enter your username: ")
while password == "": #Makes sure there is a input
password = raw_input("\n Please enter your password: ")
for usernames in accounts: #Loops through all usernames in accounts
if username.lower() == usernames.lower(): #Checks to see if the username input equalls a username in the accounts dictionary
accountpassword = accounts['username'][password]
accountUUID = 0
if password == accountpassword:
for accountname in accounts:
accountpassword = accounts[accountname]
print "\n Access Granted"
print "\n--------------------------------------------"
menu()
else:
pass
print "\n Access Denied"
print "\n Please try again"
login()
else:
pass
print "\n Access Denied"
print "\n Please try again"
login()
else:
login()
def menu():
#filesaver = open(filesfile,'wb') #Adds a new file if there is one
#pickle.dump(files, filesaver)
#filesaver.close()
print "\n ------------------- "
print " FILE SYSTEM MANAGER "
print " ------------------- "
print "\n What would you like to do with your files?"
print " To make a new file type in: NEW"
print " To edit a current file type in: EDIT"
print " To delete a current file type in: DELETE"
print " To view all current files type in: ALL"
chooser = raw_input("\n Please enter NEW, EDIT, DELETE, or ALL: ") #Input to see where you want to go
if chooser.lower() == "new":
newfile()
elif chooser.lower() == "edit":
editfiles()
elif chooser.lower() == "delete":
deletefiles()
elif chooser.lower() == "all":
allfiles()
else:
menu()
def newfile():
filename = ""
filetext = ""
while filename == "": #Makes sure there is a input
print "--------------------------------------------"
filename = raw_input("\n Please input your new files name: ")
while filetext == "":
filetext = raw_input("\n Please input the text for your new file: ")
filedate = datetime.date.today() #Grabs the current date
files[filename] = {userUUID:{filedate:filetext}} #Creates a new file
print "\n File Added"
print "\n--------------------------------------------"
menu()
def editfiles():
print "--------------------------------------------"
print " To edit a file type in: EDIT"
print " To view all current files type in: ALLFILES"
print " To cancel type in: CANCEL"
wheretogo = raw_input("\n Please enter EDIT, ALLFILES, or CANCEL: ")
if wheretogo.lower() == "edit":
print "\n To edit a file type in its name"
print " To cancel type in: CANCEL"
print "\n **Please Note** Editing a file changes its date!"
editname = raw_input("\n Please type in the file's name or CANCEL: ")
if editname.lower() == "cancel":
menu()
else:
newcontents = ""
for filename in files: #Loops through all file names in files
if filename.lower() == editname.lower():
print "\n What would you like this file to say?"
while newcontents == "":
newcontents = raw_input("\n Please input files new contents: ")
filetext = newcontents
filedate = datetime.date.today()
files[filename] = {filedate:filetext}
print "\n File Changed"
print "--------------------------------------------"
menu()
else:
pass
print "\n File not found!"
editfiles()
elif wheretogo.lower() == "allfiles":
print "\n--------------------------------------------"
for filename in files:
print "File Name: " + str(filename)
print "--------------------------------------------"
print "\n To edit a file type in: EDIT"
print " To cancel type in: CANCEL"
print "\n **Please Note** Editing a file changes its date!"
wheretogo = raw_input("\n Please enter EDIT or CANCEL: ")
if wheretogo.lower() == "edit":
editname = raw_input("\n Please type in the file's name to edit it: ")
newcontents = ""
for filename in files:
if filename.lower() == editname.lower():
print "\n What would you like this file to say?"
while newcontents == "":
newcontents = raw_input("\n Please input files new contents: ")
filetext = newcontents
filedate = datetime.date.today()
files[filename] = {filedate:filetext}
print "\n File Changed"
print "--------------------------------------------"
menu()
else:
pass
print "\nFile not found!"
editfiles()
elif wheretogo.lower() == "cancel":
menu()
else:
menu()
elif wheretogo.lower() == "cancel":
menu()
else:
menu()
def deletefiles():
print "--------------------------------------------"
print " To delete a file type in: DELETE"
print " To view all current files type in: ALLFILES"
print " To cancel type in: CANCEL"
wheretogo = raw_input("\n Please enter DELETE, ALLFILES, or CANCEL: ")
if wheretogo.lower() == "delete":
print "\n To delete a file type in its name"
print " To cancel type in: CANCEL"
deletename = raw_input("\n Please type in the file's name or CANCEL: ")
if deletename.lower() == "cancel":
menu()
else:
for filename in files:
if filename.lower() == deletename.lower():
del files[filename]
print "\n File Removed"
print "--------------------------------------------"
menu()
else:
pass
print "\n File not found!"
deletefiles()
elif wheretogo.lower() == "allfiles":
print "\n--------------------------------------------"
for filename in files:
print "File Name: " + str(filename)
print "--------------------------------------------"
print "\n To delete a file type in: DELETE"
print " To cancel type in: CANCEL"
wheretogo = raw_input("\n Please enter DELETE or CANCEL: ")
if wheretogo.lower() == "delete":
deletename = raw_input("\n Please type in the file's name to delete it: ")
for filename in files:
if filename.lower() == deletename.lower():
del files[filename]
print "\n File Removed"
print "--------------------------------------------"
menu()
else:
pass
print "\nFile not found!"
deletefiles()
elif wheretogo.lower() == "cancel":
menu()
else:
menu()
elif wheretogo.lower() == "cancel":
menu()
else:
menu()
def allfiles():
filetexttotal = ""
for filename in files:
print "\n--------------------------------------------"
print "\nFile Name: " + str(filename)
for filedate in files[filename]:
print "File Date: " + str(filedate)
for filetext in files[filename][filedate]:
filetexttotal = filetexttotal + str(filetext)
print "File Contents: " + str(filetexttotal)
filetexttotal = ""
print "\n--------------------------------------------"
menu()
startup()
Please also note that this code may have some errors; it is a work in progress.
And yes, if you are wondering, this is a filing system! :)
Answer: You need them as strings:
accountpassword = accounts[usernames]['password']
accountUUID = accounts[usernames]['UUID']
You defined your nested dictionary with stringed keys:
accounts[newusername] = {"password":newpassword,"UUID":UUID}
Therefore, lookup requires the string name. Did that fix your problem?
## In regard to your recent edit:
It sounds like your `accounts` variable is not a dictionary, or at least not a
nested one: a string is being returned and you are trying to index it, rather
than a dictionary.
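A minimal illustration of the difference, with made-up account data:

```python
# Same shape as the asker's data, reduced to one account
accounts = {'alice': {'password': 'hunter2', 'UUID': 1}}
username = 'alice'

# Wrong: accounts['username'] looks up the literal string 'username'
assert 'username' not in accounts

# Right: the variable supplies the outer key, the quoted string the inner key
assert accounts[username]['password'] == 'hunter2'
assert accounts[username]['UUID'] == 1
```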
|
Vincent visualizations are not displaying from the command line
Question: I'm new to Python visualizations, and have been trying out [vincent's Quick
Start examples](https://github.com/wrobstory/vincent) in iPython Notebook.
I pasted the following code in iPython Notebook and the visualization
displayed. Then I pasted the same code into (1) the shell and then (2) in a
.py file that I ran from the command line, and both times no visualization
showed up. What am I doing wrong?
import pandas as pd
import random
import vincent
#Iterable
list_data = [10, 20, 30, 20, 15, 30, 45]
vincent.core.initialize_notebook()
#Dicts of iterables
cat_1 = ['y1', 'y2', 'y3', 'y4']
index_1 = range(0, 21, 1)
multi_iter1 = {'index': index_1}
for cat in cat_1:
multi_iter1[cat] = [random.randint(10, 100) for x in index_1]
cat_2 = ['y' + str(x) for x in range(0, 10, 1)]
index_2 = range(1, 21, 1)
multi_iter2 = {'index': index_2}
for cat in cat_2:
multi_iter2[cat] = [random.randint(10, 100) for x in index_2]
line = vincent.Line(multi_iter1, iter_idx='index')
line.axis_titles(x='Index', y='Value')
line.legend(title='Categories')
Answer: You need to tell Vincent to output HTML and then use a browser to
display the results. In a standalone script the
`vincent.core.initialize_notebook()` line doesn't make sense, so you should remove it.
In your case just add the following line at the end of your script:
line.to_json('line.json', html_out=True, html_path='line.html')
Afterwards you can just double click the generated `line.html` file and it
will open in your browser. Take a look
[here](https://vincent.readthedocs.org/en/latest/quickstart.html#output) for
more details.
|
get the tip revision information from mercurial API
Question: How can I get the tip revision information of a remote mercurial
repository from a python script?
I want something like `hg tip`. AFAIK hg commands need a local repository.
I found another approach with mercurial API : [List remote branches in
Mercurial](http://stackoverflow.com/questions/4296636/list-remote-branches-in-
mercurial). But I can't find a documentation on mercurial API to go further
this way.
Any help would be very much appreciated.
Answer: It works similar to the second answer in your link ([List remote branches in
Mercurial](http://stackoverflow.com/questions/4296636/list-remote-branches-in-
mercurial)):
from mercurial import ui, hg, node
peer = hg.peer(ui.ui(), {}, 'http://hg.python.org/cpython')
print node.short(peer.lookup("tip"))
I've tested this with mercurial 2.3.2, for more information you might want to
take a look at wireproto.py (class wirepeer).
|
What is the meaning of % in this python expression
Question: Can someone explain what this regular expression means? I am looking at
someone else's python code, and I just find myself curious as to what the
expression is doing. I am also not certain what the 2nd % sign means.
regexStr = '(%s)' % '|'.join(['.*'.join(str(i) for i in p) for p in itertools.permutations(charList)])
Answer: So it does this:
import itertools
charList = [1, 2, 3]
'(%s)' % '|'.join(['.*'.join(str(i) for i in p) for p in itertools.permutations(charList)])
#>>> '(1.*2.*3|1.*3.*2|2.*1.*3|2.*3.*1|3.*1.*2|3.*2.*1)'
First it generates all of the permutations of the input (unique orders):
for permutation in itertools.permutations(charList):
print(permutation)
#>>> (1, 2, 3)
#>>> (1, 3, 2)
#>>> (2, 1, 3)
#>>> (2, 3, 1)
#>>> (3, 1, 2)
#>>> (3, 2, 1)
For each of these, it converts each item to a string and joins them with `.*`
'.*'.join(str(i) for i in (1, 2, 3))
#>>> '1.*2.*3'
Then it joins all of those with `|`
'|'.join(['.*'.join(str(i) for i in p) for p in itertools.permutations(charList)])
#>>> '1.*2.*3|1.*3.*2|2.*1.*3|2.*3.*1|3.*1.*2|3.*2.*1'
and finally uses `'(%s)' % result` to [wrap the result in
brackets](https://docs.python.org/3.4/tutorial/inputoutput.html#old-string-
formatting):
'(%s)' % '|'.join(['.*'.join(str(i) for i in p) for p in itertools.permutations(charList)])
#>>> '(1.*2.*3|1.*3.*2|2.*1.*3|2.*3.*1|3.*1.*2|3.*2.*1)'
The pattern `'1.*2.*3'` matches all sequences like `111111222333333`.
The pattern `A|B|C|D` matches _one_ of `A`, `B`, `C` or `D`.
So the resulting regex matches any permutation, with each item repeated any
number of times (including 0).
The outer brackets make this a capturing group.
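Putting it together, the generated pattern can be tested directly:

```python
import itertools
import re

char_list = [1, 2, 3]
pattern = '(%s)' % '|'.join(
    '.*'.join(str(i) for i in p) for p in itertools.permutations(char_list)
)

print(pattern)
# (1.*2.*3|1.*3.*2|2.*1.*3|2.*3.*1|3.*1.*2|3.*2.*1)

# Each branch is one ordering, so any permutation-like string matches:
print(bool(re.fullmatch(pattern, '311322')))  # → True  (order 3, 1, 2 with repeats)
print(bool(re.fullmatch(pattern, '12')))      # → False (no 3 anywhere)
```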
|
Request signature does not match signature provided for Amazon AWS using Python
Question: So I'm attempting to collect reviews from Amazon using their API.
Unfortunately it seems that I might be doing something wrong at some point in
my program. It's sending back an error response that many others have
apparently gotten. Believe me, I've gone through and looked at everyone
else's questions and nothing has worked. Please help me.
Here's the code:
__author__ = 'dperkins'
import requests
import amazonproduct
import time
import datetime
import hmac
import hashlib
import base64
import urllib
import ssl
from bs4 import BeautifulSoup
# Configuration for the AWS credentials
config = {
'access_key': 'XXXXXXXXXXXXXXXXXXXX',
'secret_key': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
'associate_tag': 'dperkgithu-20',
'locale': 'us'
}
api = amazonproduct.API(cfg=config)
productASIN = ''
productTitle = ''
# Product look up for the official iPhone 5s White 16gb Unlocked
for product in api.item_search('Electronics', Keywords='iPhone'):
if product .ASIN == 'B00F3J4E5U':
productTitle = product.ItemAttributes.Title
productASIN = product.ASIN
# Product Title with ASIN and a formatted underline
print productTitle + ': ' + productASIN
underline = ''
for int in range(len(productTitle + ': ' + productASIN)):
underline += '-'
print underline
# URL First portion of the request for reviews
signatureTime = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
signatureTime = urllib.quote_plus(signatureTime) # Must url encode the timestamp
url = "http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemLookup&ResponseGroup=Reviews&IdType=ASIN&ItemId=%s&AssociateTag=%s&AWSAccessKeyId=%s&Timestamp=%s" % (productASIN, api.associate_tag, api.access_key, signatureTime)
# # HMAC with SHA256 hash algorithm
# dig = hmac.new(api.secret_key, msg=url, digestmod=hashlib.sha256).digest()
# signature = base64.b64encode(dig).decode() # py3k-mode
#url = 'http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemLookup&ResponseGroup=Reviews&IdType=ASIN&ItemId=%s&AssociateTag=%s&AWSAccessKeyId=%s&Timestamp=%s&Signature=%s' % (productASIN, api.associate_tag, api.access_key, signatureTime, signature)
#Split and byte order the request (poorly but should always be the same)
parameters = [1, 2, 3, 4, 5, 6, 7]
for line in url.split('&'):
if (line.startswith('AssociateTag')):
parameters[0] = line
elif (line.startswith('AWSAccessKeyId')):
parameters[1] = line
elif (line.startswith('IdType')):
parameters[2] = line
elif (line.startswith('ItemId')):
parameters[3] = line
elif (line.startswith('Operation')):
parameters[4] = line
elif (line.startswith('ResponseGroup')):
parameters[5] = line
elif (line.startswith('Timestamp')):
parameters[6] = line
rejoined = ''
i = 1
for line in parameters:
if i < len(parameters):
rejoined += line + '&'
else:
rejoined += line
i += 1
print 'Rejoined: ' + rejoined
# Prepend the request beginning
prepend = 'GET\nwebservices.amazon.com\n/onca/xml\n' + rejoined
print 'Prepend: ' + prepend
# HMAC with SHA256 hash algorithm
dig = hmac.new(api.access_key, msg=prepend, digestmod=hashlib.sha256).digest()
signature = base64.b64encode(dig).decode() # py3k-mode
print 'Signature: ' + signature
encodedSignature = urllib.quote_plus(signature) # encode the signature
print 'Encoded Signature: ' + encodedSignature
finalRequest = 'http://webservices.amazon.com/onca/xml?' + rejoined + '&Signature=' + encodedSignature
# Final request to send
print 'URL: ' + finalRequest
# Use BeautifulSoup to create the html
r = requests.get(finalRequest)
soup = BeautifulSoup(r.content)
print soup.prettify()
Here's the response:
Apple iPhone 5s, Gold 16GB (Unlocked): B00F3J4E5U
-------------------------------------------------
Rejoined: AssociateTag=dperkgithu-20&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&IdType=ASIN&ItemId=B00F3J4E5U&Operation=ItemLookup&ResponseGroup=Reviews&Timestamp=2014-10-01T19%3A36%3A41Z
Prepend: GET
webservices.amazon.com
/onca/xml
AssociateTag=dperkgithu-20&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&IdType=ASIN&ItemId=B00F3J4E5U&Operation=ItemLookup&ResponseGroup=Reviews&Timestamp=2014-10-01T19%3A36%3A41Z
Signature: YAeIaDuigxbTX7AoZzRreZzn//RbIucCiwsG9VqMayQ=
Encoded Signature: YAeIaDuigxbTX7AoZzRreZzn%2F%2FRbIucCiwsG9VqMayQ%3D
URL: http://webservices.amazon.com/onca/xml?AssociateTag=dperkgithu-20&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&IdType=ASIN&ItemId=B00F3J4E5U&Operation=ItemLookup&ResponseGroup=Reviews&Timestamp=2014-10-01T19%3A36%3A41Z&Signature=YAeIaDuigxbTX7AoZzRreZzn%2F%2FRbIucCiwsG9VqMayQ%3D
<html>
<body>
<itemlookuperrorresponse xmlns="http://ecs.amazonaws.com/doc/2005-10-05/">
<error>
<code>
SignatureDoesNotMatch
</code>
<message>
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
</message>
</error>
<requestid>
c159b688-9b08-4cc9-94fe-35245aa69cc9
</requestid>
</itemlookuperrorresponse>
</body>
</html>
Answer: You already have a successful request to the Amazon Product Advertising API.
If you want to have a peek at the returned XML, use the product object.
As for your reviews: they are no longer available as plain text via the API.
You would need to scrape HTML from Amazon.com directly.
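Separately from the review-availability point: the question's manual signing
uses `api.access_key` as the HMAC key, but this API's request signing
(Signature Version 2, which the commented-out code in the question follows)
requires the *secret* key. A minimal Python 3 sketch of the signing steps
(function and parameter names here are illustrative, not part of any library):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(secret_key, host, path, params):
    # 1. Sort parameters by name and percent-encode values (RFC 3986)
    canonical = '&'.join(
        '%s=%s' % (k, urllib.parse.quote(str(v), safe='-_.~'))
        for k, v in sorted(params.items())
    )
    # 2. Build the string-to-sign, exactly as in the question's "prepend"
    to_sign = 'GET\n%s\n%s\n%s' % (host, path, canonical)
    # 3. HMAC-SHA256 with the SECRET key, then base64 and percent-encode
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe='')
    return canonical + '&Signature=' + signature
```

The resulting string is appended after `?` in the request URL, as the
question's code already does.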
|
How to guarantee tcp data sent using python asyncio?
Question: I have a client that connects to a server and sends all the messages
in a list; after each message is sent, it is deleted from the list.
But when I force-close the server, the client continues sending and deleting
messages from the list. I would like the client to stop sending messages when
the connection is down, or at least not delete a message from the list without
a guarantee that the server has received it.
I noticed that when I send more than four messages after the connection is
down, the error "socket.send() raised exception" shows up. But I don't know
how to catch that error; I think it is asynchronous. In any case, the error
does not appear if the list has fewer than five messages left to send after
the connection goes down.
PS: I wrote the server just for my tests; in practice I will have no access to
the server, so the client needs to do everything to guarantee the data were
sent.
Thank you so much.
# client.py
import asyncio
class Client(asyncio.Protocol):
TIMEOUT = 1.0
event_list = []
for i in range(10):
event_list.append('msg' + str(i))
def __init__(self):
self.client_tcp_timeout = None
print(self.event_list)
def connection_made(self, transport):
print('Connected to Server.')
self.transport = transport
self.client_tcp_timeout = loop.call_later(self.TIMEOUT, self.send_from_call_later)
def data_received(self, data):
self.data = format(data.decode())
print('data received: {}'.format(data.decode()))
def send_from_call_later(self):
self.msg = self.event_list[0].encode()
self.transport.write(self.msg)
print('data sent: {}'.format(self.msg))
print('Removing data: {}'.format(self.event_list[0]))
del self.event_list[0]
print(self.event_list)
print('-----------------------------------------')
if len(self.event_list) > 0:
self.client_tcp_timeout = loop.call_later(self.TIMEOUT, self.send_from_call_later)
else:
print('All list was sent to the server.')
def connection_lost(self, exc):
print('Connection lost!!!!!!.')
loop = asyncio.get_event_loop()
coro = loop.create_connection(Client, 'localhost', 8000)
client = loop.run_until_complete(coro)
loop.run_forever()
# server.py
import asyncio
class Server(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
print('connection from {}'.format(peername))
self.transport = transport
def data_received(self, data):
print('data received: {}'.format(data.decode()))
#self.transport.write(data)
loop = asyncio.get_event_loop()
coro = loop.create_server(Server, 'localhost', 8000)
server = loop.run_until_complete(coro)
print('serving on {}'.format(server.sockets[0].getsockname()))
try:
loop.run_forever()
except KeyboardInterrupt:
print("exit")
finally:
server.close()
loop.close()
# server output (force (CTRL+C) to close server after received msg4):
$ python3 server.py
serving on ('127.0.0.1', 8000)
connection from ('127.0.0.1', 56119)
data received: msg0
data received: msg1
data received: msg2
data received: msg3
data received: msg4
^Cexit
# client output
$ python3 client.py
['msg0', 'msg1', 'msg2', 'msg3', 'msg4', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
Connected to Server.
data sent: b'msg0'
Removing data: msg0
['msg1', 'msg2', 'msg3', 'msg4', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg1'
Removing data: msg1
['msg2', 'msg3', 'msg4', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg2'
Removing data: msg2
['msg3', 'msg4', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg3'
Removing data: msg3
['msg4', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg4'
Removing data: msg4
['msg5', 'msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
Connection lost!!!!!!.
data sent: b'msg5'
Removing data: msg5
['msg6', 'msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg6'
Removing data: msg6
['msg7', 'msg8', 'msg9']
-----------------------------------------
data sent: b'msg7'
Removing data: msg7
['msg8', 'msg9']
-----------------------------------------
data sent: b'msg8'
Removing data: msg8
['msg9']
-----------------------------------------
socket.send() raised exception.
data sent: b'msg9'
Removing data: msg9
[]
-----------------------------------------
All list was sent to the server.
Answer: A write to a TCP socket does not guarantee that the data get received.
It only hands the data to the OS kernel, which will then try as hard as
possible to deliver them to the other side. The write call returns success as
soon as the kernel has accepted the data. If the data then reach the peer's OS
kernel, it acknowledges them at the TCP level. But that only means the data
were received by the kernel, not that they were processed by the application.
If you want guaranteed message delivery and want to handle the peer possibly
closing the connection, you have to implement some kind of acknowledgement
inside your protocol, and only remove a message once you get an explicit
acknowledgement from the peer application after it has successfully processed
the data.
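A sketch of what such an application-level acknowledgement could look like
(this line-delimited ACK protocol is invented for the demo, not part of the
question's server; Python 3.7+ for `asyncio.run`):

```python
import asyncio

async def ack_server(reader, writer):
    # Acknowledge each line-delimited message after "processing" it
    while True:
        data = await reader.readline()
        if not data:
            break
        writer.write(b'ACK\n')
        await writer.drain()
    writer.close()

async def send_with_acks(host, port, messages):
    reader, writer = await asyncio.open_connection(host, port)
    pending = list(messages)
    while pending:
        writer.write(pending[0].encode() + b'\n')
        await writer.drain()
        ack = await reader.readline()
        if ack.strip() != b'ACK':
            break        # connection lost or bad reply: keep the message
        del pending[0]   # drop it only after the server confirmed receipt
    writer.close()
    return pending       # anything left here was never acknowledged

async def main():
    server = await asyncio.start_server(ack_server, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    left = await send_with_acks('127.0.0.1', port, ['msg0', 'msg1', 'msg2'])
    server.close()
    await server.wait_closed()
    return left

remaining = asyncio.run(main())
print(remaining)  # → [] when every message was acknowledged
```

If the server dies mid-transfer, `readline` returns `b''`, the loop breaks,
and the unacknowledged messages stay in `pending` for a later retry.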
|
Accessing global variables from inside a module
Question: I wrote some python code to control a number of USB (electrical relays and
temperature sensors) and RS232 (vacuum gauges) devices. From within this main
script (e.g., `myscript.py`), I would like to import a module (e.g.,
`exp_protocols.py`) where I define different experimental protocols, i.e. a
series of instructions to open or close relays, read temperature and pressure
values, with some simple flow control thrown in (e.g. "wait until temperature
exceeds 200 degrees C").
My initial attempt looked like this:
switch_A = Relay('A')
switch_B = Relay('B')
gauge_1 = Gauge('1')
global switch_A
global switch_B
global gauge_1
from exp_protocols import my_protocol
my_protocol()
with `exp_protocols.py` looking like this:
def my_protocol():
print 'Pressure is %.3f mbar.' % gauge_1.value
switch_A.close()
switch_B.open()
This outputs a global variable error, because `exp_protocols.my_protocol`
cannot access the objects defined in `myscript.py`.
It seems, from reading the answers to earlier questions here, that I could
(should?) create all my `Relay` and `Gauge` variables in another module, e.g.,
`myconfig.py`, and then `import myconfig` both in `myscript.py` and
`exp_protocols`? But if I do that, won't my `Relay` and `Gauge` objects be
created twice (thus trying to open serial ports already active, etc.)?
What would be the best (most Pythonic) way to achieve this kind of inter-
module communication?
Thanks in advance.
Answer: No matter how many times you `import myconfig`, Python only imports the
module once. After the first import, future import statements just grab another
reference to the already-loaded module.
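A quick way to see this caching in action (the `myconfig` module here is
created on the fly purely for the demo):

```python
import os
import sys
import tempfile

# Write a throwaway myconfig.py into a temp directory
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'myconfig.py'), 'w') as f:
    f.write("init_count = 1  # top-level code: runs only on the first import\n")
sys.path.insert(0, tmpdir)

import myconfig            # executes the module body once
import myconfig as again   # served from sys.modules: body does NOT run again

print(myconfig is again)   # → True: both names refer to one module object
```

So importing `myconfig` from both `myscript.py` and `exp_protocols.py` would
not open your serial ports twice.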
Globals should only be used if these are static bits of data. Your function
would be more generic if it took the variables as parameters:
def my_protocol(switch_A, switch_B, gauge_1):
print 'Pressure is %.3f mbar.' % gauge_1.value
switch_A.close()
switch_B.open()
Callers could then use it with many combinations of data. Suppose you have blocks
of switches in a list (and I'm just making this up because I have no idea how
you configure your data...), you could process them all with the same
function:
import exp_protocols
switch_blocks = [
[Relay('1-A'), Relay('1-B'), Gauge('1-1')],
[Relay('2-A'), Relay('2-B'), Gauge('2-1')],
]
for switch1, switch2, gauge in switch_blocks:
exp_protocols.my_protocol(switch1, switch2, gauge)
|
Find, split and concatenate
Question: I have to find a certain supplier according to the number in the
second line (position 17).
For example, I have to find, split and concatenate this type of text. (The
specifier for the find, split and concatenate is the NUMBER in the second
line, which consists of 6 digits, so I have to find this number and
concatenate according to it.)
Do I have to use some kind of regular expression, or only find, split and
concatenate? (according to the numbers 45107, 57107)
Output is also here:
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Output (after find , split and concatenate):
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Answer: You seem to want the output sorted by the number, so you can use
`re.split` and sort with a `lambda` key if all your input is as posted:
s = """
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
"""
import re
srt = sorted(re.split("(?<=which CPython is used.)\n",s),key=lambda x: re.findall("\d{5}",x))
for line in srt:
print line
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
47107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
Add and Remove Platforms.
57107 Specify which Python runtime, CPython or Jython, to use as a
Choose which CPython is used.
|
Python/Excel - Slice extracted excel data - exclude rows maintain structure
Question: So I'm attempting to exclude the top three rows during a data extraction.
for col_num in xrange(sheet.ncols):
col = sheet.col_values(col_num, start_rowx=3, end_rowx=None)
writer.writerow(col) #this syntax also may be skewing my results as well
This for loop eliminates the top 3 rows but then writes each column out as a
row. Any advice on how to maintain the data structure while still eliminating
those rows?
Full script below:
import glob
import os
import xlrd
import csv
ROOTDIR = r'C:\Users\username\Desktop\Mults'
wb_pattern = os.path.join(ROOTDIR, '*.xlsx')
workbooks = glob.glob(wb_pattern)
with open('merged.csv', 'wb') as outcsv:
writer = csv.writer(outcsv)
for wb in workbooks:
book_path = os.path.join(ROOTDIR, wb)
book = xlrd.open_workbook(book_path)
sheet = book.sheet_by_index(0)
for colx in xrange(sheet.ncols):
col = sheet.col_values(colx, start_rowx=2, end_rowx=None)
writer.writerow(col) #this syntax also may be skewing my results
Thank you!
Any help is much appreciated!
Answer: If you want row values, why are you pulling the columns to write as rows? Pull
the row values and write those:
import glob
import os
import xlrd
import csv
ROOTDIR = r'C:\Users\username\Desktop\Mults'
wb_pattern = os.path.join(ROOTDIR, '*.xlsx')
workbooks = glob.glob(wb_pattern)
start_rownum = 3 # or wherever you want to start copying
with open('merged.csv', 'wb') as outcsv:
writer = csv.writer(outcsv)
for wb in workbooks:
book_path = os.path.join(ROOTDIR, wb)
book = xlrd.open_workbook(book_path)
sheet = book.sheet_by_index(0)
for rownum in xrange(start_rownum, sheet.nrows):
row = sheet.row_values(rownum)
writer.writerow(row)
|
Is there any way to run three python files sequentially as the output of each file depends on the other
Question: I have three python files `file1.py`, `file2.py`, `file3.py`. Each file will
generate a `.csv` file and will give it to the other file sequentially. To
elaborate `file1.py` will generate `file1.csv` and this `.csv` file will be
the input of `file2.py` and so on.
import file1
import file2
import file3
file1
file2
file3
IOError: File file2.csv does not exist
The problem is that when I import `file2.py`, there is no `file1.csv` yet, as
`file1.py` has yet to be executed. Please let me know how to call each one
sequentially without the next python file running as soon as it is imported.
Answer: Each of your scripts should be set up as a function or class that generates
the file. The general structure could be something like this:
# file1.py
def generate_csv(filename):
# put your file generation code here
# you could easily use Python's csv module, for example
return filename  # path of the CSV this step produced
Then in your main script you can call each of your scripts, like this:
# main.py
import file1, file2, file3
def main(filename):
fname_one = file1.generate_csv(filename)
fname_two = file2.generate_csv(fname_one)
fname_three = file3.generate_csv(fname_two)
This keeps your original scripts from being run when imported. Your main
script controls the order of execution and can do whatever needs to be done to
the 3rd file name that is returned.
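As a self-contained illustration of the same pattern (Python 3; the step
functions and file names are invented for the demo), each stage reads the
previous stage's CSV and writes a new one, while `main` controls the order:

```python
import csv
import os
import tempfile

def step1(out_path):
    # First stage: produce the initial CSV
    with open(out_path, 'w', newline='') as f:
        csv.writer(f).writerows([['a', 1], ['b', 2]])
    return out_path

def step2(in_path, out_path):
    # Second stage: read the previous CSV, transform, write a new one
    with open(in_path, newline='') as f:
        rows = [[name, int(n) * 10] for name, n in csv.reader(f)]
    with open(out_path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
    return out_path

def main(workdir):
    # The main script alone decides the execution order
    first = step1(os.path.join(workdir, 'file1.csv'))
    return step2(first, os.path.join(workdir, 'file2.csv'))

path = main(tempfile.mkdtemp())
with open(path, newline='') as f:
    print(list(csv.reader(f)))  # → [['a', '10'], ['b', '20']]
```

Because nothing runs at import time, `step2` can never be reached before
`step1` has produced its file.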
|
Using ctypes to grab a pointer from a nullary function (segfault) x64
Question: I've reduced my problem to the following toy file and command:
// a.c --> a.out, compiled with `gcc -fPIC -shared a.c`
void* r2() {
return NULL; // <-- could be anything
}
`python -i -c "from ctypes import *; clib =
cdll.LoadLibrary('/home/soltanmm/tmp/a.out');
CFUNCTYPE(c_void_p).in_dll(clib,'r2')()"`
^ results in a segfault in a call directly within `ffi_call_unix64`.
I'm on an AMD64 Linux machine running Python 2.7. What am I doing wrong?
## EDIT
To show that the pointer return type is not the issue, here is a second
example that also segfaults:
// a.c --> a.out
int r1() {
return 1;
}
`python -i -c "from ctypes import *; clib =
cdll.LoadLibrary('/home/soltanmm/tmp/a.out');
CFUNCTYPE(c_int).in_dll(clib,'r1')()"`
Answer: CFUNCTYPE is used for callbacks (or pointers to functions defined as a
variable in the shared object). After you do `cdll.LoadLibrary` you should
simply be able to call `C` functions on the returned library object directly.
So something like this should work:
from ctypes import *;
clib = cdll.LoadLibrary('/home/soltanmm/tmp/a.out');
print(clib.r2())
The `in_dll` method is generally used to access **variables** that are exported
from shared objects, not functions themselves. (That is why your call
segfaults: `in_dll` treats the bytes at the symbol `r2` — the function's
machine code — as if they were a stored function pointer, and calling the
result jumps to a garbage address.) An example of using `in_dll`
would be something like this:
File **a.c** :
#include <stdlib.h>
int r2() {
return 101;
}
int (*f)(void) = r2;
char *p = "Hello World";
char *gethw() {
return p;
}
Python script:
from ctypes import *;
clib = cdll.LoadLibrary('/home/soltanmm/tmp/a.out');
# print should call r2() since f is a variable initialized to
# point to function r2 that returns an int. Should
# print 101
print (CFUNCTYPE(c_int).in_dll(clib,'f')())
# or call r2 directly
print(clib.r2())
# prints out the character (char *) string variable `p'
# should result in 'Hello World' being printed.
print((c_char_p).in_dll(clib,'p').value)
# call the gethw() function that returns a point to a char *
# This too should print 'Hello World'
# we must set the restype c_char_p explicitly since the default
# is to assume functions return `int`
gethw = clib.gethw
gethw.restype = c_char_p
print(gethw())
More on the usage of ctypes can found in the [Python
Documentation](https://docs.python.org/3/library/ctypes.html)
|
How to open a video file in python 2.7?
Question: I am new to python, and I am trying to open a video file "This is the
file.mp4" and then read the bytes from that file. I know I should be using
open(filename, "rb"), however I am not clear about the following things:
1. In what directory is Python looking for my file when I use `open()`? (My file is located in the Downloads folder; should I move it? Where?)
2. Is using `"rb"` the correct way to read the bytes from a file?
So far I tried to open the file and I get this error:
IOError: [Errno 2] No such file or directory: 'This is the file.mp4'
I know it is probably an obvious thing to do, however I have looked all over
the internet and I still haven't found an answer.
Thank you in advance!
Answer: By default, Python resolves the filename against the current working directory,
which is the directory you launched the script from (often, but not necessarily, the
folder where the `.py` file is located).
If you move the video file into the same directory as the script, it should
work.
You can also view the current working directory like this:
import os
print os.getcwd()
Also, instead of moving the file, you can just change `"This is the file.mp4"`
to `"C:/Users/<username>/Downloads/This is the file.mp4"` if you are using
Windows 7 and maybe 8. You will have to change the `<username>` to your
computer username.
Note that `open()` will not expand `~` (or any wildcard) for you; a path like
`"~/Downloads/This is the file.mp4"` only works after running it through
`os.path.expanduser()`.
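For example, a small sketch building that path without hardcoding the username (the filename is the one from the question):

```python
import os.path

# open() does not expand "~" by itself; expanduser() substitutes the real home directory
path = os.path.expanduser(os.path.join("~", "Downloads", "This is the file.mp4"))
print(path)
# then read the bytes with: open(path, "rb")
```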
Finally, what are you planning to do with the video file bytes? If you want to
copy the file to somewhere else, there are modules to do that.
`"rb"` is a correct way to read bytes of a file.
|
What's the difference between stdin and sys.argv in python?
Question: I was docked points in a coding challenge that specified that I needed to read
from STDIN. This was my input method:
def __init__(self, input):
self._dictionary = {}
with open(input, 'r') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
if self._dictionary.__contains__(row[0]):
self._dictionary[row[0]].append(row[1])
else:
self._dictionary.update({row[0]: row[1].split()})
and at the end of the script
if __name__ == "__main__":
script = Script(sys.argv[1])
for line in script.output_method():
print line
Was I wrong to use sys.argv in a challenge that asked to read from stdin?
What's the difference? What should I have done to satisfy the requirements?
Answer: They are not the same:
>>> import sys
>>> sys.argv == sys.stdin
False
_sys.argv_
* The list of command line arguments passed to a Python script.
_sys.stdin_, _sys.stdout_, _sys.stderr_
* File objects corresponding to the interpreter’s standard input, output and error streams. stdin is used for all interpreter input except for scripts but including calls to input() and raw_input().
As @Vivek Rai mentioned in the comments, you can use `sys.stdin.readlines()`
to read from standard in. Also,
[`fileinput`](https://docs.python.org/2/library/fileinput.html), is available
to you, which seems to do exactly what you want.
import fileinput
for line in fileinput.input():
process(line)
> This iterates over the lines of all files listed in sys.argv[1:], defaulting
> to sys.stdin if the list is empty. If a filename is '-', it is also replaced
> by sys.stdin. To specify an alternative list of filenames, pass it as the
> first argument to input(). A single file name is also allowed.
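To satisfy the "read from STDIN" requirement with minimal change, the `__init__` logic can be rewritten to accept any file-like object, so the same code handles both an open file and `sys.stdin`. A sketch (the function name is my own, and `setdefault()` replaces the `__contains__`/`update` branching):

```python
import csv
import sys

def build_dictionary(stream):
    """Same logic as the original __init__, over any file-like object."""
    dictionary = {}
    for row in csv.reader(stream, delimiter='\t'):
        # append to an existing list, or start a new one for an unseen key
        dictionary.setdefault(row[0], []).append(row[1])
    return dictionary

# for the challenge, read from standard input instead of sys.argv[1]:
# script_dict = build_dictionary(sys.stdin)
```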
|
Pulling ephem.next_rising(sun) for various lat/long locations around the world
Question: I'd like to set up a Python program to be able to pull sunrise/sunset from
various locations to trigger lights in the local location to symbolize the
remote sunrise as it would be -- if you were actually there. What I mean by
this, is if you live in Berlin, and YOUR sunrise/sunset in the middle of
December is 7:45am/4:15pm, you could have a little LED that would light when
some tropical sunrise is happening (say Hawaii). But this is happening in
reaction to your own local time.
So, using Python's ephem, and pytz, and localtime, pull the information for
sunrise/sunset for various locations, and trigger events based on each
location.
I've set up a test program using Vancouver, BC and Georgetown, French Guyana
as a test case, and it mostly works -- but the sunrise/sunset for Georgetown
is completely wrong.
You can cut and paste this entire thing into a Python window to test, and
please forgive the extraneous time calls, but I find it interesting to see
what each of the time calls pull.
Nonetheless, what you'll see is that the Guyana.date is absolutely correct,
but the sunrise/sunset is something like 1:53 AM / 13:57 PM, which is
completely whacked. Any ideas on how this could have gone so horribly,
horribly wrong?
**EDITED TO REMOVE UNNECESSARY CODE**
import ephem
from datetime import datetime, timedelta
from pytz import timezone
import pytz
import time
Guyana = ephem.Observer()
Guyana.lat = '5'
Guyana.lon = '58.3'
Guyana.horizon = 0
Guyana.elevation = 80
Guyana.date = datetime.utcnow()
sun = ephem.Sun()
print("Guyana.date is ",Guyana.date)
print("Guyana sunrise is at",Guyana.next_rising(sun))
print("Guyana sunset is going to be at ",Guyana.next_setting(sun))
The results of this are as follows:
Guyana.date is 2014/10/4 16:47:36
Guyana sunrise is at 2014/10/5 01:53:26
Guyana sunset is going to be at 2014/10/5 13:57:05
What is so wrong about this is that the actual sunrise in Guyana today is
5:40am, so 1:53:26 is not just hours off; it is wrong altogether.
Answer: To answer your updated version: positive longitudes refer to East but Guyana
(America) is to the west from Greenwich therefore you should use minus:
`Guyana.lon = '-58.3'` then the time of the sunrise becomes:
Guyana sunrise is at 2014/10/5 09:39:47
The time is in UTC, you could convert it to the local (Guyana) time:
>>> utc_dt = Guyana.next_rising(sun).datetime().replace(tzinfo=pytz.utc)
>>> print(utc_dt.astimezone(pytz.timezone('America/Guyana')))
2014-10-05 05:39:46.673263-04:00
5:40am local time seems reasonable for sunrise.
* * *
From [`ephem` documentation](http://rhodesmill.org/pyephem/quick.html#dates):
> Dates **always** use Universal Time, **never** your local time zone.
As I said [in my answer to your previous
question](http://stackoverflow.com/a/25936856/4279):
> You should pass datetime.utcnow() to the observer instead of your local
> time.
i.e., `Vancouver.date = now` is wrong because you use `datetime.now()`, which
returns a naive local time (pass `datetime.utcnow()` (or `ephem.now()`)
instead), and `Guyana.date = utc_dt.astimezone(FrenchGuyanaTZ)` is wrong because
the `FrenchGuyanaTZ` timezone probably has a non-zero UTC offset (pass just
`utc_dt` instead).
Unrelated: a correct way to find the timestamp for the current time is
`time.time()` i.e., `gmNow` should be equal to `timetimeNow` (always). As [I
said](http://stackoverflow.com/questions/25935077/localtime-not-actually-
giving-localtime#comment40604605_25935077):
> you should use `time.time()` instead of `time.mktime(time.localtime())` the
> later might return wrong result during DST transitions.
The correct way to find the current time in UTC is:
utc_dt_naive = datetime.utcnow()
Or if you need an aware datetime object:
utc_dt = datetime.now(pytz.utc)
|
Python boto ec2 - How do I wait till an image is created or failed
Question: I am writing a code to iterate through all available instances and create an
AMI for them as below:
for reservation in reservations:
......
ami_id = ec2_conn.create_image(instance.id, ami_name, description=ami_desc, no_reboot=True)
But how do I wait till an image is created before proceeding with the creation
of next image? Because I need to track the status of each ami created.
I know that I can retrieve the state using:
image_status = get_image(ami_id).state
So, do I iterate through the list of ami_ids created and then fetch the state
for each of them? If so, then what if the image is still pending when I read
the state of the image? How will I find out if the image creation has failed
eventually?
Thanks.
Answer: If I understand correctly, you want to initiate the `create_image` call and
then wait until the server-side operation completes before moving on. To do
this, you have to poll the EC2 service periodically until the state of the
image is either `available` (meaning it succeeded) or `failed` (meaning it
failed). The code would look something like this:
import time
...
image_id = ec2_conn.create_image(instance.id, ...)
image = ec2_conn.get_all_images(image_ids=[image_id])[0]
while image.state == 'pending':
time.sleep(5)
image.update()
if image.state == 'available':
# success, do something here
else:
# handle failure here
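One caveat with the loop above: if the image gets stuck in `pending`, it spins forever. A bounded variant (the helper name and the 600-second default are my own choices, not boto's):

```python
import time

def wait_for_image(conn, image_id, timeout=600, poll=5):
    """Poll until the AMI leaves 'pending', or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    image = conn.get_all_images(image_ids=[image_id])[0]
    while image.state == 'pending':
        if time.time() > deadline:
            raise RuntimeError('timed out waiting for image %s' % image_id)
        time.sleep(poll)
        image.update()
    return image.state  # 'available' on success, 'failed' otherwise
```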
|
Convert structured array with various numeric data types to regular array
Question: Suppose I have a NumPy structured array with various numeric datatypes. As a
basic example,
my_data = np.array( [(17, 182.1), (19, 175.6)], dtype='i2,f4')
How can I cast this into a regular NumPy array of floats?
From [this answer](http://stackoverflow.com/a/5957455/2223706), I know I could
use
np.array(my_data.tolist())
but apparently it is slow since you "convert an efficiently packed NumPy array
to a regular Python list".
Answer: You can do it easily with Pandas:
>>> import pandas as pd
>>> pd.DataFrame(my_data).values
array([[ 17. , 182.1000061],
[ 19. , 175.6000061]], dtype=float32)
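If you'd rather stay inside NumPy, one approach is to cast every field to a common dtype first and then reinterpret the homogeneous buffer as a plain 2-D array. A sketch that should work when all fields are castable to `float32`:

```python
import numpy as np

my_data = np.array([(17, 182.1), (19, 175.6)], dtype='i2,f4')

# cast both fields to float32, then view the uniform buffer as a flat float array
regular = my_data.astype('f4,f4').view('f4').reshape(len(my_data), -1)
print(regular)
```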
|
how to execute scrapy shell "URL" with notebook
Question: I am trying out Scrapy,
and there is the `scrapy shell "URL"` command, which starts an IPython console with a
response object for the URL.
But I want to do that from a notebook.
Is there any way to run the scrapy shell in a notebook,
or how can I get the same response object for a URL in IPython manually?
Answer: Stumbled across this unanswered question looking for the same answer. The
closest thing I can come up with right now is this:
import requests
from scrapy.http import TextResponse
r = requests.get('http://stackoverflow.com/')
response = TextResponse(r.url, body=r.text, encoding='utf-8')
Then you can work with `response` using the usual methods like `xpath()`.
|
KeyError: 'filename' while linking python and html using karrigell
Question: I'm trying to read an uploaded file using python 2.7 and Karrigell. But it is
showing me:
Traceback (most recent call last):
File "C:\Karrigell-3.1.1\karrigell\core\HTTP.py", line 333, in process_request
File "C:\Karrigell-3.1.1\karrigell\core\k_target.py", line 399, in run
File "", line 7, in
File "cgi.pyc", line 541, in __getitem__
KeyError: 'filename'
Here is my web_access.py:
import cgi, os
form = cgi.FieldStorage()
fileitem=form['filename']
if fileitem.file:
print fileitem.file.read()
else:
print "Error"
My Html page:
<section>
Enter the text:
<form action="web_access.py" method="post" enctype="multipart/form-data">
<input type="file" name="filename" id="file" value="" size="50"/>
<br />
<input type="submit" value="Analyze"/>
</form>
</section>
Help me out please!
Answer: With Karrigell you don't need to use the CGI module; you get a reference to
form fields as keys of the built-in object `REQUEST`.
web_access.py could be something like
print REQUEST['filename'].file.read()
This is documented
[here](http://karrigell.sourceforge.net/en/fileuploads.html)
|
Python - Race Results - Issues sorting dictionary entries and printing out data in a particular form
Question: FYI I am new to python and there could be a more efficient way to produce the
desired results. Please feel free to suggest an alternative method.
Problem 1 - I cannot figure out a way to add "1st Place:, 2nd Place:, etc." to
output 1
Problem 2 - I cannot figure out why, in output 2, I cannot get the time to
print.
import os
os.system('cls')
#**********************************************
# The goal of this script is to collect both a racers name and finish time. When the last
# racers data is entered the script will order the names of racers (i.e. First, Second,
# Third) based on time (less time is better.) and print the ordered results to the screen.
# The race times will be entered as integers.
#**********************************************
# Sample input:
# What is the first racers name: Larry
# What is the first racers time: 12
# What is the second racers name: Moe
# What is the second racers time: 9
# What is the third racers name: Curly
# What is the third racers time: 20
# Sample output:
# 1st Place: Moe
# 2nd Place: Larry
# 3rd Place: Curly
#**********************************************
print ('\n')
print ('\n')
# Enter the first racers name
racer_name_1 = input("Enter racer number one's name: ")
# Enter the first racers time
racer_time_1 = int(input("Enter racer number one's time: "))
# Enter the Second racers name
racer_name_2 = input("Enter racer number two's name: ")
# Enter the Second racers time
racer_time_2 = int(input("Enter racer number two's time: "))
# Enter the Third racers name
racer_name_3 = input("Enter racer number three's name: ")
# Enter the Third racers time
racer_time_3 = int(input("Enter racer number three's time: "))
# Create the race results dictionary
raceList = {racer_name_1:"racer_time_1", racer_name_2:"racer_time_2",
racer_name_3:"racer_time_3"}
print ('\n')
# This is output 1
for value in sorted(raceList, reverse=True):
print (value)
print ('\n')
# This is output 2
print (raceList)
print ('\n')
print ('\n')
Answer: **Question 2**
You don't want quotes here; with quotes, the values are the literal strings
`"racer_time_1"` etc., not your variables:
raceList = {racer_name_1:"racer_time_1",
racer_name_2:"racer_time_2",
racer_name_3:"racer_time_3"}
Just use your variables as the values
raceList = {racer_name_1: racer_time_1,
racer_name_2: racer_time_2,
racer_name_3: racer_time_3}
**Question 1**
Make a list of tuples like (name, time)
racers = [(i, raceList[i]) for i in raceList]
Sort that list by their times
places = sorted(racers, key = lambda i: int(i[1]))
Then print out their names based on the position in the sorted list.
print('1st Place: {}'.format(places[0][0]))
print('2nd Place: {}'.format(places[1][0]))
print('3rd Place: {}'.format(places[2][0]))
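The three steps can also be collapsed into a single `sorted()` call over the dictionary's items (the sample data below is the one from the question's comments):

```python
raceList = {'Larry': 12, 'Moe': 9, 'Curly': 20}

# items() yields (name, time) pairs; sort once by the time value
places = sorted(raceList.items(), key=lambda pair: pair[1])
for label, (name, time) in zip(['1st', '2nd', '3rd'], places):
    print('{} Place: {}'.format(label, name))
```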
|
How to use beautifulsoup when HTML element doesn't have a class name?
Question: I am using the following code (slightly modified from Nathan Yau's "Visualize
This" early example) to scrape weather data from WUnderGround's site. As you
can see, python is grabbing the numeric data from the element with class name
"wx-data".
However, I'd also like to grab the average humidity from the
DailyHistory.htmml. **The problem is that not all of the 'span' elements have
a class name, which is the case for the average humidity cell.** How can I
select this particular cell using BeautifulSoup and the code below?
(Here is an example of the page being scraped - hit your dev mode and search
for 'wx-data' to see the 'span' element being referenced:
<http://www.wunderground.com/history/airport/LAX/2002/1/1/DailyHistory.html>)
import urllib2
from BeautifulSoup import BeautifulSoup
year = 2004
#create comma-delim file
f = open(str(year) + '_LAXwunder_data.txt','w')
#iterate through month and day
for m in range(1,13):
for d in range (1,32):
#Chk if already gone through month
if (m == 2 and d > 28):
break
elif (m in [4,6,9,11]) and d > 30:
break
# open wug url
timestamp = str(year)+'0'+str(m)+'0'+str(d)
print 'Getting data for ' + timestamp
url = 'http://www.wunderground.com/history/airport/LAX/'+str(year) + '/' + str(m) + '/' + str(d) + '/DailyHistory.html'
page = urllib2.urlopen(url)
#Get temp from page
soup = BeautifulSoup(page)
#dayTemp = soup.body.wx-data.b.string
dayTemp = soup.findAll(attrs = {'class':'wx-data'})[5].span.string
#Format month for timestamp
if len(str(m)) < 2:
mStamp = '0' + str(m)
else:
mStamp = str(m)
#Format day for timestamp
if len(str(d)) < 2:
dStamp = '0' + str(d)
else:
dStamp = str(d)
#Build timestamp
timestamp = str(year)+ mStamp + dStamp
#Wrtie timestamp and temp to file
f.write(timestamp + ',' + dayTemp +'\n')
#done - close
f.close()
Answer: You can search for the cell containing the text, then move _up_ and over to
the next cell:
humidity = soup.find(text='Average Humidity')
next_cell = humidity.find_parent('td').find_next_sibling('td')
humidity_value = next_cell.string
I'm using BeautifulSoup version 4 here, not 3; you really want to upgrade, as
version 3 was mothballed two years ago.
BeautifulSoup 3 can do this specific trick too; use `findParent()` and
`findNextSibling()` instead there though.
Demo:
>>> import requests
>>> from bs4 import BeautifulSoup
>>> response = requests.get('http://www.wunderground.com/history/airport/LAX/2002/1/1/DailyHistory.html')
>>> soup = BeautifulSoup(response.content)
>>> humidity = soup.find(text='Average Humidity')
>>> next_cell = humidity.find_parent('td').find_next_sibling('td')
>>> next_cell.string
u'88'
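The same parent/sibling hop can also be exercised offline on a minimal stand-in for the page's table (the HTML below is illustrative, not the real markup):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><td><span>Average Humidity</span></td><td><span>88</span></td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# find the label text, climb to its cell, then step to the neighbouring cell
cell = soup.find(text="Average Humidity").find_parent("td").find_next_sibling("td")
print(cell.get_text(strip=True))
```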
|
python detect if any element in a dictionary changes
Question: **Rather than saving a duplicate** of the dictionary and comparing the old
with the new, like this:
dict = { "apple":10, "pear":20 }
if ( dict_old != dict ):
do something
dict_old = dict
**How is it possible to detect WHEN any element of a dictionary changes?**
Answer: You could subclass `dict` and include some custom `__setitem__` behavior:
class MyDict(dict):
def __setitem__(self, item, value):
print "You are changing the value of %s to %s!!"%(item, value)
super(MyDict, self).__setitem__(item, value)
Example usage:
In [58]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:class MyDict(dict):
: def __setitem__(self, item, value):
: print "You are changing the value of %s to %s!!"%(item, value)
: super(MyDict, self).__setitem__(item, value)
:--
In [59]: d = MyDict({"apple":10, "pear":20})
In [60]: d
Out[60]: {'apple': 10, 'pear': 20}
In [61]: d["pear"] = 15
You are changing the value of pear to 15!!
In [62]: d
Out[62]: {'apple': 10, 'pear': 15}
You would just change the `print` statement to involve whatever checking you
need to perform when modifying.
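One caveat: `__setitem__` alone does not see every mutation. `dict.update()`, `setdefault()`, and `del d[key]` all bypass it, so a fuller watcher overrides those too (the class name here is my own; `setdefault()` would need the same treatment):

```python
class WatchedDict(dict):
    """A dict that reports changes; plain __setitem__ misses update() and del."""

    def __setitem__(self, key, value):
        print("setting %s -> %s" % (key, value))
        super(WatchedDict, self).__setitem__(key, value)

    def __delitem__(self, key):
        print("deleting %s" % key)
        super(WatchedDict, self).__delitem__(key)

    def update(self, *args, **kwargs):
        print("update with %r, %r" % (args, kwargs))
        super(WatchedDict, self).update(*args, **kwargs)
```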
If you are instead asking about how to check whether a particular variable
name is modified, it's a much trickier problem, especially if the modification
doesn't happen within the context of an object or a context manager that can
specifically monitor it.
In that case, you _could_ try to modify the `dict` that `globals` or `locals`
points to (depending on the scope you want this to happen within) and switch
it out for, e.g. an instance of something like `MyDict` above, except the
`__setitem__` you custom create could just check if the item that is being
updated matches the variable name you want to check for. Then it would be like
you have a background "watcher" that is keeping an eye out for changes to that
variable name.
This is a very bad thing to do, though. For one, it would involve some severe
mangling of `locals` and `globals`, which is not usually safe. But
perhaps more importantly, this is much easier to achieve by creating a
container class and putting the custom update/detection code there.
|
Django, apache and mod_wsgi
Question: I am trying to deploy an Apache web server with a Django installation.
I have installed Apache 2.2.25 (it is working) and mod_wsgi 3.5.
In my error log I get
[Sun Oct 05 10:09:10 2014] [notice] Apache/2.2.25 (Win32) mod_wsgi/3.5 Python/3.4.1 configured -- resuming normal operations
So I think something might be working.
The problem arises when I go to `http://localhost/`. I get a `500 Internal
Server Error`.
The error log says:
[Sun Oct 05 10:09:10 2014] [notice] Apache/2.2.25 (Win32) mod_wsgi/3.5 Python/3.4.1 configured -- resuming normal operations
[Sun Oct 05 10:09:10 2014] [notice] Server built: Jul 10 2013 01:52:12
[Sun Oct 05 10:09:10 2014] [notice] Parent: Created child process 5832
httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.0.19 for ServerName
httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.0.19 for ServerName
[Sun Oct 05 10:09:10 2014] [notice] Child 5832: Child process is running
[Sun Oct 05 10:09:10 2014] [notice] Child 5832: Acquired the start mutex.
[Sun Oct 05 10:09:10 2014] [notice] Child 5832: Starting 64 worker threads.
[Sun Oct 05 10:09:10 2014] [notice] Child 5832: Starting thread to listen on port 80.
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] mod_wsgi (pid=5832): Exception occurred processing WSGI script 'C:/Users/Username/Dropbox/myproject/myproject/wsgi.py'.
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] Traceback (most recent call last):\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 359, in urlconf_module\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] return self._urlconf_module\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module'\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] \r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] During handling of the above exception, another exception occurred:\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] \r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] Traceback (most recent call last):\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\wsgi.py", line 168, in __call__\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] self.load_middleware()\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\base.py", line 46, in load_middleware\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] mw_instance = mw_class()\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\middleware\\locale.py", line 23, in __init__\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] for url_pattern in get_resolver(None).url_patterns:\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 367, in url_patterns\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 361, in urlconf_module\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] self._urlconf_module = import_module(self.urlconf_name)\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\importlib\\__init__.py", line 109, in import_module\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] return _bootstrap._gcd_import(name[level:], package, level)\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2254, in _gcd_import\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2237, in _find_and_load\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1129, in _exec\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1471, in exec_module\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Users\\Username\\Dropbox\\myproject\\myproject\\urls.py", line 8, in <module>\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] admin.autodiscover()\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\__init__.py", line 23, in autodiscover\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] autodiscover_modules('admin', register_to=site)\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\utils\\module_loading.py", line 67, in autodiscover_modules\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] for app_config in apps.get_app_configs():\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 137, in get_app_configs\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] self.check_apps_ready()\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 124, in check_apps_ready\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] raise AppRegistryNotReady("Apps aren't loaded yet.")\r
[Sun Oct 05 10:09:14 2014] [error] [client 127.0.0.1] django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.\r
So the problem seems to be `AttributeError: 'RegexURLResolver' object has no
attribute '_urlconf_module'`. First I thought it must be something in my
project, so I tried not loading any apps, but even a clean Django
project causes the problem (I have checked that it works with the normal
Django development server).
Can anyone see what I am doing wrong?
# Edit (error log when omitting `admin.autodiscover()` from `urls.py`):
[Sun Oct 05 10:31:51 2014] [notice] Apache/2.2.25 (Win32) mod_wsgi/3.5 Python/3.4.1 configured -- resuming normal operations
[Sun Oct 05 10:31:51 2014] [notice] Server built: Jul 10 2013 01:52:12
[Sun Oct 05 10:31:51 2014] [notice] Parent: Created child process 5492
httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.0.19 for ServerName
httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.0.19 for ServerName
[Sun Oct 05 10:31:51 2014] [notice] Child 5492: Child process is running
[Sun Oct 05 10:31:52 2014] [notice] Child 5492: Acquired the start mutex.
[Sun Oct 05 10:31:52 2014] [notice] Child 5492: Starting 64 worker threads.
[Sun Oct 05 10:31:52 2014] [notice] Child 5492: Starting thread to listen on port 80.
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] mod_wsgi (pid=5492): Exception occurred processing WSGI script 'C:/Users/Username/Dropbox/myproject/myproject/wsgi.py'.
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] Traceback (most recent call last):\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 359, in urlconf_module\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] return self._urlconf_module\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module'\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] \r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] During handling of the above exception, another exception occurred:\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] \r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] Traceback (most recent call last):\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\wsgi.py", line 168, in __call__\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] self.load_middleware()\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\base.py", line 46, in load_middleware\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] mw_instance = mw_class()\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\middleware\\locale.py", line 23, in __init__\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] for url_pattern in get_resolver(None).url_patterns:\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 367, in url_patterns\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 361, in urlconf_module\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] self._urlconf_module = import_module(self.urlconf_name)\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\importlib\\__init__.py", line 109, in import_module\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] return _bootstrap._gcd_import(name[level:], package, level)\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2254, in _gcd_import\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2237, in _find_and_load\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1129, in _exec\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 1471, in exec_module\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Users\\Username\\Dropbox\\myproject\\myproject\\urls.py", line 10, in <module>\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] url(r'^admin/', include(admin.site.urls)),\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 260, in urls\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] return self.get_urls(), self.app_name, self.name\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 221, in get_urls\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] self.check_dependencies()\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 159, in check_dependencies\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] if not apps.is_installed('django.contrib.admin'):\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 223, in is_installed\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] self.check_apps_ready()\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] File "C:\\Python34\\lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 124, in check_apps_ready\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] raise AppRegistryNotReady("Apps aren't loaded yet.")\r
[Sun Oct 05 10:32:00 2014] [error] [client 127.0.0.1] django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.\r
# Edit 2:
WSGIScriptAlias / C:/Users/Username/Dropbox/myproject/myproject/wsgi.py
WSGIPythonPath C:/Users/Username/Dropbox/myproject
<Directory C:/Users/Username/Dropbox/myproject/myproject>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
My `wsgi.py`:
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
# Edit 3 (10-07-2014):
With Python 2.7 I get the error:
[Tue Oct 07 11:55:19 2014] [warn] mod_wsgi: Compiled for Python/2.7.6.
[Tue Oct 07 11:55:19 2014] [warn] mod_wsgi: Runtime using Python/2.7.8.
[Tue Oct 07 11:55:19 2014] [notice] Apache/2.2.25 (Win32) mod_wsgi/3.5 Python/2.7.8 configured -- resuming normal operations
[Tue Oct 07 11:55:19 2014] [notice] Server built: Jul 10 2013 01:52:12
[Tue Oct 07 11:55:19 2014] [notice] Parent: Created child process 5824
[Tue Oct 07 11:55:19 2014] [warn] mod_wsgi: Compiled for Python/2.7.6.
[Tue Oct 07 11:55:19 2014] [warn] mod_wsgi: Runtime using Python/2.7.8.
[Tue Oct 07 11:55:19 2014] [notice] Child 5824: Child process is running
[Tue Oct 07 11:55:20 2014] [notice] Child 5824: Acquired the start mutex.
[Tue Oct 07 11:55:20 2014] [notice] Child 5824: Starting 64 worker threads.
[Tue Oct 07 11:55:20 2014] [notice] Child 5824: Starting thread to listen on port 80.
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] mod_wsgi (pid=5824): Exception occurred processing WSGI script 'C:/Users/sgan/Dropbox/myproject/myproject/wsgi.py'.
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] Traceback (most recent call last):
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\wsgi.py", line 168, in __call__
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] self.load_middleware()
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\handlers\\base.py", line 46, in load_middleware
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] mw_instance = mw_class()
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\middleware\\locale.py", line 23, in __init__
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] for url_pattern in get_resolver(None).url_patterns:
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 367, in url_patterns
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\core\\urlresolvers.py", line 361, in urlconf_module
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] self._urlconf_module = import_module(self.urlconf_name)
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\importlib\\__init__.py", line 37, in import_module
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] __import__(name)
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Users\\sgan\\Dropbox\\myproject\\myproject\\urls.py", line 9, in <module>
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] url(r'^admin/', include(admin.site.urls)),
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 260, in urls
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] return self.get_urls(), self.app_name, self.name
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 221, in get_urls
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] self.check_dependencies()
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\contrib\\admin\\sites.py", line 159, in check_dependencies
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] if not apps.is_installed('django.contrib.admin'):
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 223, in is_installed
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] self.check_apps_ready()
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] File "C:\\Python34\\Lib\\site-packages\\django-1.7-py3.4.egg\\django\\apps\\registry.py", line 124, in check_apps_ready
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] raise AppRegistryNotReady("Apps aren't loaded yet.")
[Tue Oct 07 11:55:25 2014] [error] [client 127.0.0.1] AppRegistryNotReady: Apps aren't loaded yet.
The Apache configuration is
WSGIScriptAlias / C:/Users/Username/Dropbox/myproject/myproject/wsgi.py
WSGIPythonPath C:/Users/Username/Dropbox/myproject;C:/Python27;C:/Python27/Lib
<Directory C:/Users/Username/Dropbox/myproject/myproject>
<Files wsgi.py>
Order deny,allow
Allow from all
#Require all granted
</Files>
</Directory>
Answer: The problem was that the recommended way to create the WSGI application
changed between Django 1.6 and 1.7 to
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Now it works in both Python 2.7 and Python 3.4.
|
python fabric perform actions from prompt on all servers
Question: I have several servers in my fabfile, and I want to have a function that
will prompt me for a command and then perform it on all servers.
def simple():
actions = prompt('Type the actions: ')
run(actions)
now when I do that I get:
[web1] Type the actions:
bla bla bla
[web2] Type the actions:
bla bla bla
[web3] Type the actions:
bla bla bla
I want to type the "bla bla bla" only once and it will be performed on all
servers, is that possible?
Answer: Well, if `simple()` here is a Fabric task, then this behaviour is expected, as
you will have the `prompt()` call for every host on which the task is
executed. I suppose, you use the `fab` command-line tool, and in the simplest
case:
# fabfile.py
from fabric.api import run, env
env.hosts = ['host1', 'host2']
def simple(command=''):
run(command)
And launch it as follows:
$ fab simple:'ls -la'
|
Python: Import all variables
Question: So I have two python files. Let's call them module.py and main.py. module.py
looks like this:
name = "bob"
age = 20
def changename():
name = "tim"
and main.py looks like this:
import module
print(module.name)
module.changename()
print(module.name)
When I run main.py, I get this output, as expected:
>"bob"
>"tim"
However, always having to write module.name will get very time consuming, so I
can do from module import *, which imports all of the variables and functions
into main.py. However, after changing the code in main.py to this:
from module import *
print(name)
changename()
print(name)
I get this output:
>"bob"
>"bob"
I assume that this is because python imports the values of the variables at
the beginning, and then doesn't update them when they get changed by functions
within module.py.
My question is, is there a way to nicely import all the functions and
variables from a module, but still allow the module to update its variables?
Thanks in advance!
Answer: I think the best way is to write getter and setter functions in module.py for
each global variable you want to share. Like this:
name = "bob"
age = 20
def changename():
global name # This tells that name is not local variable.
name = "tim"
def getname():
return name
Then, you can use these getters in main.py like this:
import module
print(module.getname())
module.changename()
print(module.getname())
But I recommend to import functions one by one to prevent long names. Like
this:
from module import changename
from module import getname
print(getname())
changename()
print(getname())
I tested this code with Python 2.7. Declare a variable as global before
assigning a value to it, because assigning to a variable inside a function
otherwise makes it local.
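The binding behaviour behind the question can be demonstrated in a single runnable sketch; `types.ModuleType` stands in for a real `module.py` here (an assumption made only so the snippet is self-contained — with a real module file the behaviour is identical):

```python
# Sketch of why "from module import name" copies the binding while
# "module.name" always sees the current value.
import types

module = types.ModuleType('module')
exec(
    'name = "bob"\n'
    'def changename():\n'
    '    global name\n'
    '    name = "tim"\n',
    module.__dict__,
)

name = module.name   # what "from module import name" does: copy the binding
module.changename()

print(name)          # the copied binding never sees the update
print(module.name)   # attribute access sees the new value
```

This is why the getter functions in the answer work: each call looks the name up in the module's namespace at call time instead of relying on a stale copied binding.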
|
Printing in the same line in python
Question: I am quite new to Python and I need your help.
I have a file like this:
>chr14_Gap_2
ACCGCGATGAAAGAGTCGGTGGTGGGCTCGTTCCGACGCGCATCCCCTGGAAGTCCTGCTCAATCAGGTGCCGGATGAAGGTGGT
GCTCCTCCAGGGGGCAGCAGCTTCTGCGCGTACAGCTGCCACAGCCCCTAGGACACCGTCTGGAAGAGCTCCGGCTCCTTCTTG
acacccaggactgatctcctttaggatggactggctggatcttcttgcagtccaaggggctctcaagagt
………..
>chr14_Gap_3
ACCGCGATGAAAGAGTCGGTGGTGGGCTCGTTCCGACGCGCATCCCCTGGAAGTCCTGCTCAATCAGGTGCCGGATGAAGGTGGT
GCTCCTCCAGGGGGCAGCAGCTTCTGCGCGTACAGCTGCCACAGCCCCTAGGACACCGTCTGGAAGAGCTCCGGCTCCTTCTTG
acacccaggactgatctcctttaggatggactggctggatcttcttgcagtccaaggggctctcaagagt
………..
One line is a tag and the next is the DNA sequence. I want to count the
N letters and the lowercase letters and compute their percentage. I wrote
the following script, which works, but I have a problem with the printing.
#!/usr/bin/python
import sys
if len (sys.argv) != 2 :
print "Usage: If you want to run this python script you have to put the fasta file that includes the desert area's sequences as arument"
sys.exit (1)
fasta_file = sys.argv[1]
#This script reads the sequences of the desert areas (fasta files) and calculates the persentage of the Ns and the repeats.
fasta_file = sys.argv[1]
f = open(fasta_file, 'r')
content = f.readlines()
x = len(content)
#print x
for i in range(0,len(content)):
if (i%2 == 0):
content[i].strip()
name = content[i].split(">")[1]
print name, #the "," makes the print command to avoid to print a new line
else:
content[i].strip()
numberOfN = content[i].count('N')
#print numberOfN
allChar = len(content[i])
lowerChars = sum(1 for c in content[i] if c.islower())
Ns_persentage = 100 * (numberOfN/float(allChar))
lower_persentage = 100 * (lowerChars/float(allChar))
waste = Ns_persentage + lower_persentage
print ("The waste persentage is: %s" % (round(waste)))
#print ("The persentage of Ns is: %s and the persentage of repeats is: %s" % (Ns_persentage,lower_persentage))
#print (name + waste)
The thing is that it can print the tag in the first line and the waste
variable in the second one like this:
chr10_Gap_18759
The waste persentage is: 52.0
How can I print it in the same line, tab separated?
eg
chr10_Gap_18759 52.0
chr10_Gap_19000 78.0
…….
Thank you very much.
Answer: You can print it with:
print name, "\t", round(waste)
If you are using Python 2.x, I would make some modifications to your code.
There is the [argparse](https://docs.python.org/2.7/library/argparse.html)
module to manage the arguments from the command line. I would do something
like this:
#!/usr/bin/python
import argparse
# To use the arguments
parser = argparse.ArgumentParser()
parser.add_argument("fasta_file", help = "The fasta file to be processed ", type=str)
args = parser.parse_args()
f= open(args.fasta_file, "r")
content = f.readlines()
f.close()
x = len(content)
for i in range(x):
line = content[i].strip()
if (i%2 == 0):
#The first time it will fail, for the next occasions it will be printed as you wish
try:
print name, "\t", round(waste)
except:
pass
name = line.split(">")[1]
else:
numberOfN = line.count('N')
allChar = len(line)
lowerChars = sum(1 for c in line if c.islower())
Ns_persentage = 100 * (numberOfN/float(allChar))
lower_persentage = 100 * (lowerChars/float(allChar))
waste = Ns_persentage + lower_persentage
# To print the last case you need to do it outside the loop
print name, "\t", round(waste)
You can also print it like the other answer with `print("{}\t{}".format(name,
round(waste)))`
I am not sure about the use of `i%2`. Note that if a sequence spans an odd
number of lines, you will not get the name of the next sequence until the same
event occurs again. I would instead check whether the line begins with ">": if
so, store the name; otherwise, add up the characters of the following lines.
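That startswith(">") approach can be sketched as follows; the function names and the tiny sample data are illustrative, not from the original script:

```python
# Group FASTA records by their ">" header instead of assuming strictly
# alternating header/sequence lines; sequences may span several lines.
def waste(seq):
    n_count = seq.count('N')
    lower_count = sum(1 for c in seq if c.islower())
    return round(100.0 * (n_count + lower_count) / len(seq))

def waste_per_record(lines):
    results = []
    name, chunks = None, []
    for line in lines:
        line = line.strip()
        if line.startswith('>'):
            if name is not None:
                results.append((name, waste(''.join(chunks))))
            name, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if name is not None:                      # flush the last record
        results.append((name, waste(''.join(chunks))))
    return results

fasta = ['>chr14_Gap_2', 'NNNNacgt', '>chr14_Gap_3', 'ACGTacgt']
for name, w in waste_per_record(fasta):
    print("%s\t%s" % (name, w))
```

This also removes the need for the try/except around the first print, since each record is only emitted once its sequence is complete.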
|
Calculate most common string in a wxListBox
Question: I have a wxListBox that is filled with strings (Customer Names) that the user
inputs. I have to calculate the most occurring name and least occurring name
in the list. I have to use a loop.
Below is actual code mixed with pseudo code, but I am having trouble with the
logic:
cust_name = ""
for names in range(self.txtListBox.GetCount()):
for compareName in counterList:
if names == compareName:
    count += 1
else:
    add names to counterList
    set count to 1
What is the best way to do this with a loop in Python?
Answer: Use
[collections.Counter](https://docs.python.org/2/library/collections.html#collections.Counter)
to count the names
from collections import Counter
names = ["foo","foo","foobar","foo","foobar","bar"]
c = Counter(names).most_common() # returns a list of tuples from most common to least
most_com, least_com = c[0][0],c[-1][0] # get first which is most common and last which is least
print most_com,least_com
foo bar
Using a loop, update the counts one name at a time. Note that `c.update(name)`
with a bare string would count its *characters*, so index the counter directly:
c = Counter()
for name in names:
    c[name] += 1
c = c.most_common()
most_com, least_com = c[0][0],c[-1][0]
If you cannot import a module use a normal dict:
d = {}
for name in names:
d.setdefault(name,0)
d[name] += 1
print d
{'foobar': 2, 'foo': 3, 'bar': 1}
srt = sorted(d.items(),key=lambda x:x[1],reverse=True)
most_com,least_com = srt[0][0],srt[-1][0]
|
Embedding Python in C++. Passing vector of strings receving a list of lists
Question: I have a Windows application that is written in C++. I have a vector of
strings that I need to pass to a python Script for processing. I know how to
embed python in C++ with simple type but I am looking at how to create a
python object equivalent to a vector of string. I am looking at something
like:
for (size_t i = 0; i < myvector.size(); ++i)
{
PyUnicode_FromString.append(myvector[i]);
}
or maybe
for (size_t i = 0; i < myvector.size(); ++i)
{
pValue = PyUnicode_FromString(myvector[i]);
pyTuple_SetItem(myvector.size(), i, pValue);
}
The vector will hardly ever get very large (100 items max I would say). At the
moment I am just saving a text file and I have the python script opening it
and processing it which is obviously not very good but it tells me that
everything else works as planned. The processing from the Python script
produces 4 values per item (3 strings and 1 integer (long)). I also need to
return these to the main C++ program and I have no idea how to go about that.
(EDIT) After reviewing the possible options, I am thinking of a list of lists
(since dictionaries are not ordered and require more parsing operations)
should do the trick but I don't know how I would go about decoding this back
inside the C++ program (again for now this is done with file writing/reading
so I know it works). If anyone has done this, can you please provide small
code snippets so I can adapt it to my needs. I should also mention that I
cannot use the boost library(and preferably no SWIG either) - so, basically
done in Python C-API. All examples I have seen talks about subprocesses or
NumPy which I do not believe (maybe incorrectly) apply to my case.
Answer: Alright, I managed to figure out what I wanted. Below is a fully working
example of what I was trying to do. The answer is, however, not complete, as
there are quite a few error checks missing and, more importantly, a few
Py_DECREFs missing. I will try to catch them and update as I figure it out,
but if someone fluent in that sort of thing can help, that would be much
appreciated. Here goes:
Python script (testingoutput.py): this script receives a list of strings. For
each string, it returns 3 random strings (from a provided list) and one random
integer. The format is: [[str, str, str, int], ...]
import random
import string
def get_list_send_list(*input_list):
outer_list = []
for key in input_list:
inner_list = []
# from here, your program should produce the data you want to retrieve and insert it in a list
# for this example, a list comprised of 3 random strings and 1 random number is returned
for i in range (0, 3):
some_words = random.choice(["red", "orange", "yellow", "green", "blue", "purple", "white", "black"])
inner_list.append(some_words)
inner_list.append(random.randint(1,10))
outer_list.append(inner_list)
return outer_list
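Before wiring up the C++ side, the [[str, str, str, int], ...] contract can be sanity-checked from Python alone. This inlines a copy of the function so the snippet runs on its own; the names and ranges simply mirror the script:

```python
# Standalone sanity check of the [[str, str, str, int], ...] contract.
import random

def get_list_send_list(*input_list):
    outer_list = []
    for _ in input_list:
        inner_list = [random.choice(["red", "orange", "yellow", "green",
                                     "blue", "purple", "white", "black"])
                      for _ in range(3)]
        inner_list.append(random.randint(1, 10))
        outer_list.append(inner_list)
    return outer_list

result = get_list_send_list("apple", "banana", "orange", "pear")
assert len(result) == 4                 # one inner list per argument
for inner in result:
    assert len(inner) == 4              # 3 strings + 1 int
    assert all(isinstance(s, str) for s in inner[:3])
    assert isinstance(inner[3], int) and 1 <= inner[3] <= 10
print("contract OK")
```

If these assertions hold, the C++ loop below only ever has to distinguish the string case (PyUnicode_Check) from the integer case (PyLong_Check).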
And this is the cpp file. It is largely the same as the Python C API example,
but slightly modified to accommodate lists. I did not need it myself, but I
put a few type checks here and there for the benefit of anyone who might need
that sort of thing.
#include <Python.h>
#include <iostream>
#include <vector>
int main(int argc, char *argv[])
{
PyObject *pName, *pModule, *pFunc;
PyObject *pArgs, *pValue;
PyObject *pList, *pListItem, *pyString;
char* strarray[] = {"apple", "banana", "orange", "pear"};
std::vector<std::string> strvector(strarray, strarray + 4);
std::string pyFile = "testingoutput";
std::string pyFunc = "get_list_send_list";
Py_Initialize();
pName = PyUnicode_FromString(pyFile.c_str());
/* Error checking of pName left out */
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule != NULL)
{
pFunc = PyObject_GetAttrString(pModule, pyFunc.c_str());
/* pFunc is a new reference */
if (pFunc && PyCallable_Check(pFunc)) {
pArgs = PyTuple_New(strvector.size());
for (size_t i = 0; i < strvector.size(); ++i)
{
pValue = PyUnicode_FromString(strvector[i].c_str());
if (!pValue)
{
Py_DECREF(pArgs);
Py_DECREF(pModule);
fprintf(stderr, "Cannot convert argument\n");
return 1;
}
/* pValue reference stolen here: */
PyTuple_SetItem(pArgs, i, pValue);
}
pValue = PyObject_CallObject(pFunc, pArgs);
Py_DECREF(pArgs);
if (pValue != NULL)
{
int py_list_size = PyList_Size(pValue);
int sub_list_size = 0;
std::cout << "Retrieving content..."<< "\n";
for(Py_ssize_t i = 0; i < py_list_size; ++i)
{
pList = PyList_GetItem(pValue, i);
sub_list_size = PyList_Size(pList);
// verify if the subitem is also a list - if yes process it
if(PyList_Check(pList))
{
std::cout << "********** " << i << " **********\n";
for(Py_ssize_t j = 0; j < sub_list_size; ++j)
{
pListItem = PyList_GetItem(pList, j);
// verify if the item is a string or a number
if(PyUnicode_Check(pListItem))
{
// "Error ~" does nothing here but it should be defined to catch errors
pyString = PyUnicode_AsEncodedString(pListItem, "utf-8", "Error ~");
const char *tmpCstChar = PyBytes_AS_STRING(pyString);
std::cout << "Item " << j << ": " << tmpCstChar << "\n";
}
else if(PyLong_Check(pListItem))
{
int pyNumber = PyLong_AsLong(pListItem);
std::cout << "Item " << j << ": " << pyNumber << "\n";
}
}
}
else
{
std::cout << "This item is not a list\n";
}
}
Py_DECREF(pValue);
}
else
{
Py_DECREF(pFunc);
Py_DECREF(pModule);
PyErr_Print();
fprintf(stderr,"Call failed\n");
return 1;
}
}
else
{
if (PyErr_Occurred())
PyErr_Print();
fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]);
}
Py_XDECREF(pFunc);
Py_DECREF(pModule);
}
else
{
PyErr_Print();
fprintf(stderr, "Failed to load \"%s\"\n", argv[1]);
return 1;
}
Py_Finalize();
return 0;
}
Again, there are a few error checks missing and, more importantly, a few
Py_DECREFs missing, i.e. THIS PROGRAM IS LEAKING MEMORY. If you know how
to fix this, your help would be appreciated.
|
Remove character from file in-place with native Windows tools
Question: I'd like to remove the last character from a large file. The restrictions are
that:
* the file has to be modified in-situ, without using the disk space required for a similar second file
* it's a windows machine
* I cannot copy any compiled code onto the machine so I cannot create a small c# program of c++ program to manipulate the file
* this also means any non-native scripting is not available, like python.
As far as I know, this limits me to batch, VBScript, or JScript, but it does
not look like there is a viable option in these. VBScript requires a
TextStream to be created from a FileSystemObject, but this stream, I believe,
must then be saved elsewhere, using disk space.
Is there a way to do this simply?
Answer: Following the idea from Noodles (of course you need to have some .net
framework version installed), you can try this
(save as `trim.cmd` and call as `trim.cmd "fileToTrim.dat"`)
@if (@this==@isBatch) @then
@echo off
setlocal enableextensions disabledelayedexpansion
rem check arguments
set "fileToTrim=%~1"
if not exist "%fileToTrim%" goto :eof
rem search for a valid framework version
set "frameworks=%SystemRoot%\Microsoft.NET\Framework"
set "jsc="
for /f "tokens=* delims=" %%a in (
'dir /b /a:d /o:-n "%frameworks%\v*"'
) do if not defined jsc if exist "%frameworks%\%%a\jsc.exe" set "jsc=%frameworks%\%%a\jsc.exe"
if not defined jsc goto :eof
set "executable=%~dpn0.%random%.exe"
%jsc% /nologo /out:"%executable%" "%~f0"
if exist "%executable%" (
"%executable%" "%fileToTrim%"
del "%executable%" >nul 2>nul
)
endlocal
exit /b 0
@end
import System;
import System.IO;
var arguments:String[] = Environment.GetCommandLineArgs();
if (arguments.length > 1) {
var fi:FileInfo = new FileInfo(arguments[1]);
var fs:FileStream = fi.Open(FileMode.Open);
fs.SetLength (
Math.max(0, fi.Length - 1)
);
fs.Close();
};
This is far from efficient; the JScript code is compiled each time. It would
be better to write the program once, compile it, and reuse it. But just as an
example ...
|
How to install PyMongo
Question: I am currently trying to install MongoDB driver for Python on my Mac OS X
(mavericks).
But when I run
[ Dsl ~/Documents/python ] sudo easy_install pymongo
I get the following output
Searching for pymongo
Best match: pymongo 2.7
Processing pymongo-2.7-py2.7-macosx-10.9-intel.egg
Removing pymongo 2.7rc1 from easy-install.pth file
Adding pymongo 2.7 to easy-install.pth file
Using /Library/Python/2.7/site-packages/pymongo-2.7-py2.7-macosx-10.9-intel.egg
Processing dependencies for pymongo
Finished processing dependencies for pymongo
I have tried a lot of different commands, but nothing works. How do I install PyMongo?
Thanks for your help
Edit: When I try to use it in a python script
#!/usr/bin/env python3
import pymongo
client = MongoClient()
I have this error
Traceback (most recent call last):
File "./mongo.py", line 2, in <module>
import pymongo
ImportError: No module named 'pymongo'
Answer: Try these commands.
[Source](http://api.mongodb.org/python/2.0.1/installation.html)
$ easy_install -U setuptools
$ python -m easy_install pymongo
|
When to maintain reference to key vs. reference to actual entity object after put operation.
Question: When working with datastore entities in App Engine, people have noticed odd
behavior after a put operation is performed on an entity if you choose to hold
on to a reference of that entity.
For example, see [this issue](https://code.google.com/p/appengine-ndb-
experiment/issues/detail?id=208) where repeated String properties mutated to
_BaseValue after a put was performed.
In the ensuing discussion, in reference to a repeated String property, Guido
van Rossum writes:
> "I see. We should probably document that you're not supposed to hang on to
> the list of items for too long; there are various forms of undefined
> behavior around that."
I get the sense from this thread that it's not a good idea to maintain a
reference to an entity for too long after a put, as unexpected behavior might
arise.
However, when I look at the [GAE source
code](https://code.google.com/p/googleappengine/source/browse/branches/1.2.1/python/google/appengine/ext/db/__init__.py)
for the `Model.get_or_insert()` method, I see the following code (docstring
removed):
@classmethod
def get_or_insert(cls, key_name, **kwds):
def txn():
entity = cls.get_by_key_name(key_name, parent=kwds.get('parent'))
if entity is None:
entity = cls(key_name=key_name, **kwds)
entity.put()
return entity
return run_in_transaction(txn)
Notice how a put operation is performed, and the entity is returned post-put.
Just above, we saw an example of when this is not recommended.
Can anyone clarify when it is and is not OK to maintain a reference to an
entity post-put? Guido seemed to hint that there are various scenarios when
this could be a bad idea. Just curious if anyone has seen documentation on
this (I cannot find any).
Thanks!
Answer: The problem described in the issue is not about entities, but rather about
lists obtained from their properties. You can hold a reference to an entity as
long as you like. It's just an object.
The issue is caused by some "magical" functionality provided by ndb. Let's
take a look at the model definition
from google.appengine.ext.ndb import model
class MyModel(model.Model):
items = model.StringProperty(repeated=True)
What can we say about the items property?
It looks like a class attribute, but the metaclass logic of model.Model
transforms it into an instance attribute.
What type are these instance attributes?
They can be accessed like a list of strings, but they are more complex objects
carrying the logic required for storing and retrieving the data from the
datastore, validation, etc.
This "magic" works well in most cases, but sometimes it doesn't. One of the
problematic cases is when you get the reference to items from the instance and
try to use it after put was called. Another case, mentioned by Guido, is
passing an external list to initialize the items property and then trying to
modify the property by manipulating the external list.
**The thing to remember**: Model properties in ndb try to behave like their
basic types, but they are more complex objects. You can read more about their
internals in [Writing property
subclasses](https://cloud.google.com/appengine/docs/python/ndb/subclassprop)
|
Pillow OSError when loading into tkinter using Python 3.4
Question: I am loading an image from a server, and I keep getting this error when I use
[Base64](http://en.wikipedia.org/wiki/Base64) on the data.
Here's my code:
import tkinter as tk
from PIL import ImageTk
root = tk.Tk()
import urllib.request
URL = "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSm7eLMSp4NbjGwkjU3rPokWaQI6224lQCR1qIIGIfldm4M0TgY0JKTGQLQ"
u = urllib.request.urlopen(URL)
raw_data = u.read()
u.close()
import base64
b64_data = base64.encodestring(raw_data)
image = ImageTk.PhotoImage(data=b64_data)
label = tk.Label(image=image)
label.pack()
I get this error:
Traceback (most recent call last):
File "C:/testt.py", line 11, in <module>
image = ImageTk.PhotoImage(data=b64_data)
File "C:\Python34\lib\site-packages\PIL\ImageTk.py", line 88, in __init__
image = Image.open(BytesIO(kw["data"]))
File "C:\Python34\lib\site-packages\PIL\Image.py", line 2287, in open
% (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x0000000003152048>
Answer: Your code works, if you remove some of it. The base64 encoding is not needed:
import tkinter as tk
from PIL import ImageTk
import urllib.request
root = tk.Tk()
URL = "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSm7eLMSp4NbjGwkjU3rPokWaQI6224lQCR1qIIGIfldm4M0TgY0JKTGQLQ"
u = urllib.request.urlopen(URL)
raw_data = u.read()
u.close()
image = ImageTk.PhotoImage(data=raw_data)
label = tk.Label(image=image)
label.pack()
tk.mainloop()
Result:

|
Sending byte strings to serial device
Question: I'm using Python3 running on a Raspberry. I have a serial device
(max232/PiC16F84) connected to the Raspberry via an USB to Serial adapter. I
try to send two bytes to the device (e.g 0000 0011) which then will be
interpreted as a command by the PIC. The USB - serial adapter is configured
correctly and the parameter such as bauderate should be ok. I guess that my
code doesn't send the correct bytes to the serial port.
import serial
ser = serial.Serial(
port='/dev/ttyUSB0',
baudrate=1200,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS,
xonxoff=serial.XOFF,
rtscts=False,
dsrdtr=False
)
ser.open()
ser.isOpen()
print("Initializing the device ..")
ser.write(bytes(0x00))
print("Write command")
ser.write (bytes(0x04))
print('Done')
Answer: You are using the `bytes` constructor incorrectly. When you call it with an
`int` as argument, you get:
> bytes(int) -> bytes object of size given by the parameter initialized with
> null bytes
So `bytes(0x00)` (which is just `bytes(0)`) is the empty string, and
`bytes(0x04)` is four zero bytes:
>>> bytes(0x00)
b''
>>> bytes(0x04)
b'\x00\x00\x00\x00'
What you want is `bytes([ 0x00 ])` etc., or simply an array with all your byte
values:
>>> bytes([0, 4])
b'\x00\x04'
If the string is short, you could simply write it as a constant:
`b'\x00\x04'`, for example. See the documentation of `bytes()` for more
options.
|
python - how register user with xmpp
Question: I'm trying to register a new user, but it does not work. I get the following error:
AttributeError: Client instance has no attribute 'SendAndWaitForResponse'
this is my code:
import xmpp, sys
usuario = 'test1@localhost'
password = 'mypassword'
jid = xmpp.JID(usuario)
cli = xmpp.Client(jid.getDomain(), debug=[])
cli.connect()
if xmpp.features.register(cli,jid.getDomain(),{'username':jid.getNode(),'password':password}):
sys.stderr.write('Success\n')
sys.exit(0)
else:
sys.stderr.write('Error\n')
sys.exit(1)
I use Linux.
Answer: I also met this issue. When you remove "debug=[]" from the parameter list,
you can get the debug output. In my case, the issue was that the server could
not be reached.
|
Error in downloading pdb from protein data bank using biopython
Question: Some PDB files cannot be downloaded from the PDB using Biopython, even
though they exist in the PDB; an error is generated. The following code is
used to download the PDB entry 2j8e. It fails, although it works for other
PDB ids.
Python 2.7.4 (default, May 14 2013, 09:41:12)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from Bio.PDB import PDBList
>>> pdbdownload = PDBList()
>>> pdbid = '2j8e'
>>> downloaded_path = pdbdownload.retrieve_pdb_file(pdbid)
Downloading PDB structure '2j8e'...
Traceback (most recent call last):
IOError: [Errno ftp error] [Errno ftp error] 550 Can't change directory to pdb2j8e.ent.gz: No such file or directory
Has anyone faced this problem?
Answer: I'm sorry, but that file doesn't exist! See for yourself:
<ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/j8/>
There are no pdb2j8e.ent.gz in that dir. If you try to download 2j8f or any
other id, you get it without problem.
That file got obsolete, and can be retrieved from
<ftp://ftp.wwpdb.org/pub/pdb/data/structures/obsolete/pdb/j8/>. With Biopython
you can do:
downloaded_path = pdbdownload.retrieve_pdb_file(pdbid, obsolete=True)
|
How to append one csv file to another with python
Question: I have two .csv files that I need to either join into a new file or append one
to the other:
filea:
jan,feb,mar
80,50,52
74,73,56
fileb:
apr,may,jun
64,75,64
75,63,63
What I need is:
jan,feb,mar,apr,may,jun
80,50,52,64,75,64
74,73,56,75,63,63
What I'm getting:
jan,feb,mar
80,50,52
74,73,56
apr,may,jun
64,75,64
75,63,63
I'm using the simplest code I can find. A bit too simple I guess:
sourceFile = open('fileb.csv', 'r')
data = sourceFile.read()
with open('filea.csv', 'a') as destFile:
destFile.write(data)
I'd be very grateful if anyone could tell me what I'm doing wrong and how to
get them to append 'horizontally' instead of 'vertically'.
Answer:
from itertools import izip_longest
with open("filea.csv") as source1,open("fileb.csv")as source2,open("filec.csv","a") as dest2:
zipped = izip_longest(source1,source2) # use izip_longest which will add None as a fillvalue where we have uneven length files
for line in zipped:
if line[1]: # if we have two lines to join
dest2.write("{},{}\n".format(line[0][:-1],line[1][:-1]))
else: # else we are into the longest file, just treat line as a single item tuple
dest2.write("{}".format(line[0]))
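On Python 3 the same idea works with `zip_longest` (the name `izip_longest` is Python 2 only). A self-contained sketch using in-memory lines instead of open files, and handling the padding from either file:

```python
# Join two lists of CSV lines side by side; whichever file runs out
# first is simply omitted from the remaining rows.
from itertools import zip_longest

def merge_csv_columns(lines_a, lines_b):
    merged = []
    for a, b in zip_longest(lines_a, lines_b):
        # keep only the sides that still have a row; strip their newlines
        row = [part.strip() for part in (a, b) if part is not None]
        merged.append(','.join(row))
    return merged

filea = ['jan,feb,mar\n', '80,50,52\n', '74,73,56\n']
fileb = ['apr,may,jun\n', '64,75,64\n', '75,63,63\n']
print('\n'.join(merge_csv_columns(filea, fileb)))
```

Because `zip_longest` pads the shorter input with `None`, the `is not None` filter works no matter which of the two files is longer.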
|
From date-time to usable value Python
Question: I need to make a histogram of events over a period of time. My dataset gives
me the time of each event in the format e.g. 2013-09-03 17:34:04; how do I
convert this into something I'm able to plot in a histogram in Python? I know
how to do it the other way around with the datetime and time modules.
By the way, my dataset contains over 1,500,000 data points, so please only
solutions that can be automated by loops or something like that ;)
Answer: Use `time.strptime()` to convert the time string to a `time.struct_time`,
and then `time.mktime()`, which converts the `time.struct_time` (interpreted
as local time) to the number of seconds since 1970-01-01 00:00:00 UTC.
#! /usr/bin/env python
import time
def timestr_to_secs(timestr):
fmt = '%Y-%m-%d %H:%M:%S'
time_struct = time.strptime(timestr, fmt)
secs = time.mktime(time_struct)
return int(secs)
timestrs = [
'2013-09-03 17:34:04',
'2013-09-03 17:34:05',
'2013-09-03 17:35:04',
'1970-01-01 00:00:00'
]
for ts in timestrs:
print ts,timestr_to_secs(ts)
I'm in timezone +10, and the output the above code gives me is:
2013-09-03 17:34:04 1378193644
2013-09-03 17:34:05 1378193645
2013-09-03 17:35:04 1378193704
1970-01-01 00:00:00 -36000
Of course, for histogram-making purposes you may wish to subtract a convenient
base time from these numbers.
* * *
Here's a better version, inspired by a comment by J. F. Sebastian.
#! /usr/bin/env python
import time
import calendar
def timestr_to_secs(timestr):
fmt = '%Y-%m-%d %H:%M:%S'
time_struct = time.strptime(timestr, fmt)
secs = calendar.timegm(time_struct)
return secs
timestrs = [
'2013-09-03 17:34:04',
'2013-09-03 17:34:05',
'2013-09-03 17:35:04',
'1970-01-01 00:00:00'
]
for ts in timestrs:
print ts,timestr_to_secs(ts)
**output**
2013-09-03 17:34:04 1378229644
2013-09-03 17:34:05 1378229645
2013-09-03 17:35:04 1378229704
1970-01-01 00:00:00 0
* * *
Whenever I think about the problems that can arise from using localtime() I'm
reminded of this classic example that happened to a friend of mine many years
ago.
A programmer who was a regular contributor to the FidoNet C_ECHO had written
process control code for a brewery. Unfortunately, his code used localtime()
instead of gmtime(), which had unintended consequences when the brewery
computer automatically adjusted its clock at the end of daylight saving. On
that morning, localtime 2:00 AM happened twice. So his program repeated the
process that it had already performed the first time 2:00 AM rolled around,
which was to initiate the filling of a rather large vat with beer ingredients.
As you can imagine, the brewery floor was a mess. :)
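Once the timestamps are plain seconds, binning a million and a half of them is cheap; a sketch with `numpy.histogram` (matplotlib's `hist` does the same binning), assuming hour-wide bins and numpy being available:

```python
import calendar
import time

import numpy as np

def timestr_to_secs(timestr):
    # Same conversion as in the answer above.
    return calendar.timegm(time.strptime(timestr, '%Y-%m-%d %H:%M:%S'))

timestrs = ['2013-09-03 17:34:04', '2013-09-03 17:34:05',
            '2013-09-03 18:35:04', '2013-09-03 19:01:00']
secs = np.array([timestr_to_secs(ts) for ts in timestrs])

# Hour-wide bins spanning the data; `counts` is what a histogram would plot.
base = secs.min() - secs.min() % 3600        # round down to a whole hour
bins = np.arange(base, secs.max() + 3600, 3600)
counts, edges = np.histogram(secs, bins=bins)
```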
|
skimage slic: getting neighbouring segments
Question: There is a nice implementation of superpixel segmentation (SLIC)
in the skimage.segmentation module of the Python scikit-image package.
The slic() method returns the integer sets of labels. My question is how can I
get the segments that are spatial neighbors of each other? What I would like
to do is build a graph using these segments and the edges would connect the
immediate neighbors. However, I cannot figure out how to get the immediate
neighbors of a segment.
The python code to perform the SLIC is as follows:
from skimage import io, img_as_float
from skimage.segmentation import slic
from skimage.segmentation import find_boundaries
# An image of dimensions 300, 300
image = img_as_float(io.imread("image.png"))
# call slic. This returns an numpy array which assigns to every
# pixel in the image an integer label
# So segments is a numpy array of shape (300, 300)
segments = slic(image, 100, sigma = 5)
# Now I want to know the neighbourhood segment for each super-pixel
# There is a method called find_boundaries which returns a boolean
# for every pixel to show if it is a boundary pixel or not.
b = find_boundaries(segments)
Here, I am stuck. I would like to know how to parse this boundary indices and
find out for a given label index (say 0), which label indexes share a boundary
with label of index 0. Is there a way to do this efficiently without looping
through the boundary array for every label index?
Answer: The way I do it is to build a graph containing an edge from each pixel to its
left and bottom pixel (so a 4 neighborhood), label them with their superpixel
number and remove duplicates. You can find code and details in [my blog
post](http://peekaboo-vision.blogspot.de/2011/08/region-connectivity-graphs-
in-python.html).
You can find some related functions
[here](https://github.com/amueller/segmentation/blob/master/utils.py#L140),
though they are not very well documented (yet).
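The idea in the post can be sketched with plain numpy: pair each pixel's label with its right and bottom neighbour's label, drop same-segment pairs, and keep the unique ones (the function name and the toy label array are mine):

```python
import numpy as np

def segment_adjacency(segments):
    """Return the set of (label_a, label_b) pairs of touching segments
    (4-neighbourhood), with label_a < label_b."""
    # Labels of horizontally and vertically adjacent pixel pairs.
    right = np.stack([segments[:, :-1].ravel(), segments[:, 1:].ravel()], axis=1)
    down = np.stack([segments[:-1, :].ravel(), segments[1:, :].ravel()], axis=1)
    pairs = np.concatenate([right, down])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]   # drop same-segment pairs
    pairs = np.sort(pairs, axis=1)              # make pairs orderless
    return set(map(tuple, np.unique(pairs, axis=0)))

# Toy label image: segment 0 touches 1 and 2, and 1 touches 2.
toy = np.array([[0, 0, 1],
                [2, 2, 1]])
```

The same function applied to the `segments` array returned by `slic()` gives the edge list for the graph.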
|
Python does not create log file
Question: I am trying to implement some logging for recording messages. I am getting
some weird behavior so I tried to find a minimal example, which I found
[here](https://docs.python.org/2/howto/logging.html#logging-to-a-file). When I
just copy the easy example described there into my interpreter the file is not
created as you can see here:
In [1]: import logging
...: logging.basicConfig(filename='example.log',level=logging.DEBUG)
...: logging.debug('This message should go to the log file')
...: logging.info('So should this')
...: logging.warning('And this, too')
WARNING:root:And this, too
In [2]: ls example.log
File not found
Can anybody help me to understand what I am doing wrong? Thanks...
EDIT: changed the output after the second input to English and removed the
unnecessary parts. The only important thing is that Python does not create the
file `example.log`.
Answer: The reason for your unexpected result is that you are using something on top
of Python (looks like IPython) which configures the root logger itself. As per
[the documentation for
basicConfig()](https://docs.python.org/2/library/logging.html#logging.basicConfig),
> This function does nothing if the root logger already has handlers
> configured for it.
What you get with just Python is something like this:
C:\temp>python
ActivePython 2.6.1.1 (ActiveState Software Inc.) based on
Python 2.6.1 (r261:67515, Dec 5 2008, 13:58:38) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import logging
>>> logging.basicConfig(filename='example.log', level=logging.DEBUG)
>>> logging.debug('This message should go to the log file')
>>> logging.info('And so should this')
>>> logging.warning('And this, too')
>>> ^Z
C:\temp>type example.log
DEBUG:root:This message should go to the log file
INFO:root:And so should this
WARNING:root:And this, too
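If you are stuck inside IPython (or any environment that has already configured the root logger), one sketch is to clear the existing handlers first so `basicConfig()` takes effect again:

```python
import logging

root = logging.getLogger()
for handler in list(root.handlers):   # copy the list: we mutate it while iterating
    root.removeHandler(handler)

# Now the root logger has no handlers, so basicConfig() is not a no-op.
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
```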
|
How to create a function (Iteration/Recursion) to run over a dictionary of tuples in Python?
Question: I have a Python dictionary of lists like this one:
d = {'A': [(4, 4, 3), [1, 2, 3, 4, 5]],
'B': [(2, 1, 2), [5, 4, 3, 2, 1]],
'C': [(4, 1, 1), [2, 4, 1, 2, 4]]}
I need to create a formula that accesses the elements of the dictionary and,
for every value `[t, l]`:
* Calculates the mean of `t` (let's call this `m`);
* Takes a random sample `s`, with replacement and of length `len(t)`, from `l`;
* Compares `m` with the mean of `s` \- `True` if `m` is greater than the mean of `s`, `False` otherwise;
* Repeats this process 10,000 times
* Returns the percentage of times `m` is greater than the mean of `s`.
The output should look like:
In [16]: test(d)
Out[16]: {'A': 0.5, 'B': 0.9, 'C': 0.4}
I think I'm not that far from an answer, this is what I have tried:
def test(dict):
def mean_diff(dict):
for k, (v0, v1) in dict.iteritems():
m = np.mean(v0) > (np.mean(npr.choice(v1, size=(1, len(v0)), replace=True)))
return ({k: m})
for k, (v0, v1) in dict.iteritems():
bootstrap = np.array([means_diff(dict) for _ in range(10000)])
rank = float(np.sum(bootstrap))/10000
return ({k: rank})
However, I got:
RuntimeError: maximum recursion depth exceeded while calling a Python object
Answer: I'd use a list comprehension that essentially selects a random value and
compares it to the mean. This will produce a list of True/False. If you take
the mean of that, it will be averaging a list of 1's and 0's, so it will give
you the aggregate probability.
import numpy as np
d = {'A': [(4, 4, 3), [1, 2, 3, 4, 5]],
'B': [(2, 1, 2), [5, 4, 3, 2, 1]],
'C': [(4, 1, 1), [2, 4, 1, 2, 4]]}
def makeRanks(d):
rankDict = {}
for key in d:
tup = d[key][0]
mean = np.mean(tup)
l = d[key][1]
rank = np.mean([mean > np.mean(np.random.choice(l,len(tup))) for _ in range(10000)])
rankDict[key] = rank
return rankDict
Testing
>>> makeRanks(d)
{'C': 0.15529999999999999, 'A': 0.72130000000000005, 'B': 0.031899999999999998}
|
Python contour plotting wrong values with plot_surface
Question: I want to plot a surface in Matplotlib consisting of zeros everywhere, except
for a rectangular region centered in (0, 0), with sides (Dx, Dy), consisting
of ones - kind of like a table, if you will; I can do that using the
`plot_surface` command, no worries there. I also want to plot its projections
in the "x" and "y" directions (as in [this
demo](http://matplotlib.org/examples/mplot3d/contour3d_demo3.html)) and that's
when the results become weird: Python seems to be interpolating my amplitude
values (which, again, should be either zero or one) for the contour plots and
showing some lines with values that do not correspond to my data points.
This is what I'm doing:
import numpy
from matplotlib import pylab
from mpl_toolkits.mplot3d import axes3d
Dx = 1. # Define the sides of the rectangle
Dy = 2.
x_2D = numpy.linspace(-Dx, Dx, 100) # Create the mesh points
y_2D = numpy.linspace(-Dy, Dy, 100)
x_mesh, y_mesh = numpy.meshgrid(x_2D, y_2D)
rect_2D = numpy.zeros(x_mesh.shape) # All values of "rect_2D" are zero...
for i in range(x_2D.size):
for j in range(y_2D.size):
if numpy.abs(x_mesh[i, j]) <= Dx/2 and numpy.abs(y_mesh[i, j]) <= Dy/2:
rect_2D[i, j] = 1. # ... except these ones
fig = pylab.figure(figsize=(9, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x_mesh, y_mesh, rect_2D, alpha=0.3)
ax.contour(x_mesh, y_mesh, rect_2D, zdir='x', offset=-1.5, cmap=pylab.cm.brg)
ax.contour(x_mesh, y_mesh, rect_2D, zdir='y', offset=3, cmap=pylab.cm.brg)
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-3, 3)
ax.set_zlim(0., 1.5)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
The resulting figure has a dark green line with amplitude a bit below 0.8 on
both the "x" and "y" projections, which does not exist in my "rect_2D"
variable. Does anyone knows if this is a bug or if there is a mistake in my
code? Any suggestions on how to get rid of it? Thanks in advance!
Answer: Add a `levels=[0]` keyword argument to your `ax.contour` calls. This specifies
where along the `zdir` axis your contours are computed. See the
`mpl.axes.Axes.contour` docstring for more info.
The problem is that without specifying `levels`, contour automatically
computes the locations to plot contours, and one of these contours is selected
just past the edge of your 'table' but before your z data reaches 0. At these
points contour interpolates between 0 and 1.
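A minimal 2-D illustration of the effect (the same `levels` keyword applies to the 3-D `contour` calls above; here a level of 0.5 marks the 0/1 boundary of the "table"):

```python
import matplotlib
matplotlib.use("Agg")          # no display needed
import matplotlib.pyplot as plt
import numpy as np

z = np.zeros((50, 50))
z[20:30, 20:30] = 1.0          # a small "table" of ones

fig, ax = plt.subplots()
auto = ax.contour(z)                    # levels picked automatically
chosen = ax.contour(z, levels=[0.5])    # only the boundary of the table
```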
|
Examples of Google Cloud Storage Used With Google App Engine (Python)
Question: Anybody have any code examples for using google cloud storage with google app
engine(python)?
The best I've seen so far is from an answer I received from a prior question I
posted: <https://code.google.com/p/appengine-gcs-
client/source/browse/trunk/python/demo/main.py>
However, the code seems to run successfully regardless of the connection and I
can't seem to find the files in my GCS bucket
Answer: The `main.py` script below uploads directly to GCS; if you remove the
`gs_bucket_name` argument from the `create_upload_url()` call it will use the
Blobstore instead.
import os
import urllib
import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
class MainHandler(webapp2.RequestHandler):
def get(self):
upload_url = blobstore.create_upload_url('/upload', gs_bucket_name='bucket_name')
self.response.out.write('<html><body>')
self.response.out.write('<form action="%s" method="POST" enctype="multipart/form-data">' % upload_url)
self.response.out.write("""Upload File: <input type="file" name="file"><br> <input type="submit"
name="submit" value="Submit"> </form></body></html>""")
class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
blob_info = upload_files[0]
self.redirect('/serve/%s' % blob_info.key())
class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, resource):
resource = str(urllib.unquote(resource))
blob_info = blobstore.BlobInfo.get(resource)
self.send_blob(blob_info)
app = webapp2.WSGIApplication([('/', MainHandler),
('/upload', UploadHandler),
('/serve/([^/]+)?', ServeHandler)],
debug=True)
|
Python sorting a text file?
Question: So I know how to import a text file and sort numbers such as:
1
6
4
6
9
3
5
But I don't know how to sort a data that looks like:
Merchant_9976 20122
Merchant_9977 91840
Merchant_9978 92739
Merchant_9979 97252
Merchant_9980 76885
Merchant_9981 67835
Merchant_9982 42201
Merchant_9983 47463
Here's my code so far
def sort_slow(seq):
for i in range(1, len(seq)):
j = i
while j > 0 and seq[j - 1] > seq[j]:
seq[j - 1], seq[j] = seq[j], seq[j - 1]
j -= 1
return seq
def main():
fileName = str(input('Please enter a filename: '))
file = open(fileName)
sort1 = []
for lines in file:
sort1.append(int(lines.strip()))
sort_slow(sort1)
print(sort1)
main()
I'm relatively new to coding (about 1 month).
Answer: The following will sort by the integer in the right column
with open('file.txt', 'r') as f:
data = f.readlines()
sorted_data = sorted(data, key=lambda i: int(i.split()[1]))
print(sorted_data)
Or if you simply want them sorted by the merchant number
with open('file.txt', 'r') as f:
data = f.readlines()
sorted_data = sorted(data)
print(sorted_data)
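If you also want the result written back to disk rather than printed, a sketch (the file names are placeholders, and the sample data is recreated so the snippet runs on its own):

```python
with open('merchants.txt', 'w') as f:          # sample rows from the question
    f.write('Merchant_9977 91840\n'
            'Merchant_9976 20122\n'
            'Merchant_9978 92739\n')

with open('merchants.txt') as f:
    data = f.readlines()

# Sort by the numeric value in the right-hand column.
sorted_data = sorted(data, key=lambda line: int(line.split()[1]))

with open('merchants_sorted.txt', 'w') as f:
    f.writelines(sorted_data)
```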
|
Random selection with criteria in python
Question: I want to (pseudo)randomly select an object from a list that fits a criterion.
I have a function that does this for one criterion I need:
from random import randint
def choose(array):
return array[randint(0,len(array)-1)]
def choose_c(array):
result = 'r'
while result[-1] == 'r':
result = choose(array)
return result
But each time I want a new criterion I need to write a new function. Is there
a simpler way?
Answer: One approach is to make your `choice_c` function accept a callable, which will
be evaluated on the randomly sampled result until it evaluates to `False`, at
which point the random sample will be returned.
def choice_c(array, criterion=None):
result = choose(array)
if criterion:
while criterion(result):
result = choose(array)
return result
def r_criterion(some_sample):
return some_sample == "r"
def gt5_criterion(some_sample):
return some_sample > 5
choice_c(array, r_criterion)
choice_c(array, gt5_criterion)
This can also be accomplished using `dropwhile` and/or `takewhile` from the
`itertools` module, and if this will be used heavily it might be worth making
the `choose` function behave as a generator to use this implementation.
from itertools import dropwhile
def choose(array):
while True:
yield array[randint(0,len(array)-1)]
def choice_c(array, criterion=None):
gen = choose(array)
return next(dropwhile(criterion, gen)) if criterion else next(gen)
Of course, in both cases, this places the burden on you to write good unit
tests, or otherwise ensure that the criterion functions make sense on the
array contents, and that any errors are handled correctly, and that you don't
loop over an infinite generator in the `while` section ...
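If the infinite-loop risk worries you (a criterion that rejects every element spins forever), an alternative sketch filters first and samples once; the function name is mine:

```python
import random

def choose_excluding(array, criterion=None):
    """Uniformly choose an element for which criterion(x) is falsy."""
    if criterion is None:
        return random.choice(array)
    candidates = [x for x in array if not criterion(x)]
    if not candidates:
        # Fail loudly instead of looping forever.
        raise ValueError("no element satisfies the criterion")
    return random.choice(candidates)
```

This trades the rejection loop for one O(n) pass, which is usually fine unless the array is huge and the criterion rarely matches.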
|
django writing my first custom template tag and filter
Question: I am attempting to write a simple django [Custom template tags and
filters](https://docs.djangoproject.com/en/1.4/howto/custom-template-tags/) to
replace a html break (< b r / >) with a line space on a template.
I have followed the django docs, but I am getting the following error, which I
don't know how to solve.
The error I have is:
x_templates_extra' is not a valid tag library: Template library x_templates_extra not found, tried django.templatetags.x_templates_extra,django.contrib.staticfiles.templatetags.x_templates_extra,django.contrib.admin.templatetags.x_templates_extra,django.contrib.flatpages.templatetags.x_templates_extra,rosetta.templatetags.x_templates_extra,templatetag_handlebars.templatetags.x_templates_extra,globalx.common.templatetags.x_templates_extra,rollyourown.seo.templatetags.x_templates_extra,debug_toolbar.templatetags.x_templates_extra
Here is what I have done:
1. created a templatetags directory, at the same level as models.py, views.py, etc
2. created a __init__.py file inside the templatetags directory to ensure the directory is treated as a Python package
3. created a file inside the templatetags directory called x_templates_extra.py
4. added the load tag to the template: {% load x_templates_extra %}
5. added the tag to the template: {{ x_detail.x_details_institution_name|safe|replace:"<br />"|striptags|truncatechars:popover_string_length_20 }}
6. Added the following code to the file x_templates_extra.py:
from django import template
register = template.Library()
def replace(value, arg):
"""Replaces all values of arg from the given string with a space."""
return value.replace(arg, ' ')
**Question:**
The docs state that: So, near the top of your module, put the following:
from django import template
register = template.Library()
I have put the import & register lines in my file called: called
x_templates_extra.py. Is that correct?
Also I am not sure what to put in the INSTALLED_APPS to make the load work.
Tried many things, but I am now going in circles, so I do need help.
Thanks.
Answer: Finally got it to work.
Adding the correct app name to `INSTALLED_APPS` was the issue. Note also that
the filter function must be registered with the library, e.g. by decorating it
with `@register.filter`, or the template engine will not find it.
|
Python - Selecting All Row Values That Meet A particular Criteria Once
Question: I have a form set up with the following fields: Date Time, ID, and Address.
This form auto assigns each entry a unique id string (U_ID) and then this data
is later output to a csv with headers and rows something like this:
Date Time ID U_ID Address
9/12/13 12:07 13 adfasd 1345 Wilson Way
9/12/13 13:45 8 jklj 1456 Jackson Hollow
9/13/13 14:55 13 klidh 1345 Wilson Way
9/13/13 15:00 8 ikodl 1456 Jackson Hollow
I am looking for a way to remove duplicate submissions by those with the same
ID via Python Script, while maintaining the rest of the data in the row.
Ideally I want to keep only the first entry associated with the ID in the csv
file.
The output should look something like this:
Date Time ID U_ID Address
9/12/13 12:07 13 adfasd 1345 Wilson Way
9/12/13 13:45 8 jklj 1456 Jackson Hollow
So far I'm stuck at:
import csv
with open('/Users/user/Desktop/test.csv', 'rb') as f:
r = csv.reader(f)
headers = r.next()
rows = [(Date Time, ID, U_ID, Address) for Date Time, ID, U_ID, Address in r]
clean = [row for row in rows if row[1] != '#N/A']
clean2 = list(set(row[1]))
This gives me a list with only the unique values for ID, but I'm not sure how
to recover all of the other data associated with the rows for those values.
As stated if I can get the earliest submission as well, that would be
wonderful, but honestly any unique submission by ID should do.
Thanks for reading!
Answer: Take a look at [pandas](http://pandas.pydata.org/pandas-
docs/stable/10min.html), this is how you would do it:
import pandas as pd
pd.read_table('test.csv')\
.drop_duplicates(subset=['ID'])\
.to_csv('output.csv', index=None, sep='\t')
**output.csv** :
Date Time ID U_ID Address
9/12/13 12:07 13 adfasd 1345 Wilson Way
9/12/13 13:45 8 jklj 1456 Jackson Hollow
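Without pandas, the same "keep the first row per ID" logic can be sketched with the stdlib `csv` module and a set of seen IDs (tab-delimited here, matching the answer's output; the sample file is recreated inline):

```python
import csv

with open('test_in.csv', 'w') as f:            # sample data from the question
    f.write('Date Time\tID\tU_ID\tAddress\n'
            '9/12/13 12:07\t13\tadfasd\t1345 Wilson Way\n'
            '9/12/13 13:45\t8\tjklj\t1456 Jackson Hollow\n'
            '9/13/13 14:55\t13\tklidh\t1345 Wilson Way\n')

seen = set()
with open('test_in.csv') as src, open('test_out.csv', 'w', newline='') as dst:
    reader = csv.reader(src, delimiter='\t')
    writer = csv.writer(dst, delimiter='\t')
    writer.writerow(next(reader))              # copy the header through
    for row in reader:
        if row[1] not in seen:                 # row[1] is the ID column
            seen.add(row[1])
            writer.writerow(row)               # first occurrence wins
```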
|
python using search engine to find text in text file
Question: I have lots of text files in a directory. I then ask the user for a keyword;
say the user enters 'hello'.
The program has to search every text file in the directory and return the line
(and file name) that best matches the keyword - the one with the highest
priority for the word hello.
Eg:
input: helloworld
output:
filename: abcd.txt
line : this world is a good world saying hello
Give me some ideas on how to deal with such problems!
Answer: Using `glob` as an alternative, you can filter for a specific file name,
extension, or all files in a directory
>>> from glob import glob
>>> key = 'hello'
>>> for file in glob("e:\data\*.txt"):
with open(file,'r') as f:
line_no = 0
for lines in f:
line_no+=1
if key.lower() in lines.lower():
print "Found in " + file + "(" + str(line_no) + "): " + lines.rstrip()
Found in e:\data\data1.txt(1): Hello how are you
Found in e:\data\data2.txt(4): Searching for hello
Found in e:\data\data2.txt(6): 3 hello
>>>
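To return the single best line per the question's "highest priority" idea, a sketch that ranks lines by how often the keyword occurs (a simple stand-in for a real relevance score; the directory name, function name, and sample files are mine):

```python
import glob
import os

def best_match(directory, key):
    """Return (filename, line) for the line containing `key` most often."""
    key = key.lower()
    best = (0, None, None)                     # (count, filename, line)
    for path in glob.glob(os.path.join(directory, '*.txt')):
        with open(path) as f:
            for line in f:
                count = line.lower().count(key)
                if count > best[0]:
                    best = (count, path, line.rstrip('\n'))
    return best[1], best[2]

os.makedirs('data_dir', exist_ok=True)         # sample files for illustration
with open(os.path.join('data_dir', 'abcd.txt'), 'w') as f:
    f.write('nothing here\nthis world says hello hello\n')
with open(os.path.join('data_dir', 'efgh.txt'), 'w') as f:
    f.write('hello once\n')
```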
|
Detecting colored objects and focused on the camera with OpenCv
Question: I need some help with a project. It is about detecting red objects in a
determined green area. I have to dodge objects and reach the goal (in this
case a blue area), and also go back to collect the objects with a servomotor
and a clamp, all using a camera with OpenCV and the iRobot.
The code I have so far can recognize the red objects, move towards them and
stop when they are close. It can also rotate left and right; I did it this
way to try to center the object in the camera, dividing the screen into 3
parts, then I pick it up with the servo. But I'm running out of ideas: when I
run the Python code, the process is very slow, and the detection of the left
and right sides is very sensitive, so I can't center the object. Also I have
no idea how to get the distance between the object and the iRobot; in theory
I can dodge objects, but I don't know how to get to the goal, or how to get
back to the objects.
This is the python code:
import cv2
import numpy as np
import serial
from time import sleep
def serialCom(int):
ser = serial.Serial(port = "/dev/ttyACM0", baudrate=9600)
x = ser.readline()
ser.write(int)
x = ser.readline()
print 'Sending: ', x
# Recognizes how much red is in a given area by the parameters
def areaRed(img, xI, xF, yI, yF):
# counter red pixels
c = 0
red = 255
for i in range(xI, xF):
for j in range(yI, yF):
if img[i][j] == red:
c += 1
return c
def reconoce(lower_re, upper_re, lower_gree, upper_gree, lower_ble, upper_ble):
cap = cv2.VideoCapture(1)
while(1):
_, frame = cap.read()
# Converting BGR to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hsv2 = cv2.cvtColor(frame , cv2.COLOR_BGR2HSV)
hsv3 = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Threshold the HSV image to get only red,blue and green colors
mask = cv2.inRange(hsv, lower_re, upper_re)
maskBlue = cv2.inRange(hsv2, lower_ble, upper_ble)
maskGreen = cv2.inRange(hsv3, lower_gree, upper_gree)
moments = cv2.moments(mask)
area = moments['m00']
blueMoment = cv2.moments(maskBlue)
blueArea = blueMoment['m00']
greenMoment = cv2.moments(maskGreen)
greenArea = greenMoment['m00']
# Determine the limits of the mask
x = mask.shape[0] - 1
y = mask.shape[1] - 1
# Determine the center point of the image
xC = x / 2
yC = y / 2
x3 = (x/3)/2
y3 = y/2
# Define the variables for the center values of the camera
xI, xF, yI, yF = xC - (xC / 4), xC + (xC / 4), yC - (yC / 4), yC + (yC / 4)
# define the ranges of the mask for the left and right side
right = areaRed(mask, xI + (x/4), xF + (x/4), yI + (x/4), yF + (x/4))
left = areaRed(mask, xI - (x/4), xF - (x/4), yI - (x/4), yF - (x/4))
centro = areaRed(mask, xI, xF, yI, yF)
if right > 700:
serialCom ("4")
print "turning right"
return mask
if left > 700:
serialCom("5")
print "turning left"
return mask
#if there is a red objet
if centro > 5000:
print 'i detect red'
#and in on the a green area
if greenArea > 10000000:
#we stop the irbot
serialCom("1")
print 'I found red and is in the area'
cv2.destroyAllWindows()
return mask
else:
#then we continue moving
serialCom("3")
print ''
else:
print "there is not red objet:v"
cv2.imshow('frame',frame)
cv2.imshow('red',mask)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
# The range of colors are defined for the HSV
lower_red = np.array([170,150,50])
upper_red = np.array([179,255,255])
lower_green = np.array([85,80,150])
upper_green = np.array([95,255,255])
lower_blue = np.array([100,100,100])
upper_blue = np.array([120,255,255])
while True:
img = reconoce(lower_red, upper_red, lower_green, upper_green, lower_blue, upper_blue)
cv2.imshow('Freeze image', img)
cv2.waitKey(10000)
cv2.destroyAllWindows()
This is the arduino code conecting the irobot with the opencv
#include <Roomba.h>
Roomba roomba(&Serial1);
int p=0;
void init()
{
roomba.driveDirect(-100,100);
roomba.driveDirect(-100,100);
delay(100);
roomba.driveDirect(100,-100);
delay(100);
}
void move()
{
roomba.driveDirect(50,50);
roomba.driveDirect(50,50);
}
void reverse()
{
roomba.driveDirect(-50,-50);
}
void left_rotation()
{
roomba.driveDirect(-30,30);
delay(1000);
}
void right_rotation()
{
roomba.driveDirect(30,-30);
delay(1000);
}
void stop()
{
roomba.driveDirect(0,0);
delay(500);
Serial.println("9");
}
void setup()
{
Serial.begin(9600);
roomba.start();
roomba.safeMode();
}
void loop()
{
if(Serial.available()>0)
{
p=Serial.read();
if(p == 51)
{
Serial.println("1");
move();
}
if(p==50)
{
Serial.println("2");
reverse();
delay(1000);
}
if(p==52)
{
Serial.println("4");
left_rotation();
delay(1000);
stop();
}
if(p==53)
{
Serial.println("5");
right_rotation();
delay(1000);
stop();
}
if(p==49)
{
Serial.println("3");
stop();
delay(1000);
}
}
delay(100);
}
Answer: Yes, your code is pretty slow because you are iterating over the image matrix
manually with your own naive source code. I would recommend reading a little
bit more about object tracking - the theoretical problems first, and then the
existing practical solutions. That might be better than trying to program
it with OpenCV on the low level. For your specific problem I suggest checking
cvBlob
<http://code.google.com/p/cvblob/>
which is also about tracking of red objects (even with a cool demo video).
If the objects are not just a plain colour but textured, I suggest this part
of the openCV doc
<http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html>
which explains tracking by Feature Extraction, Decriptor Generation and FLANN
matching.
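As a concrete example of dropping the per-pixel Python loop, the question's `areaRed` can be a single vectorized numpy expression with the same semantics (same half-open ranges as `range(xI, xF)`); the toy mask below is mine:

```python
import numpy as np

def area_red(mask, x_i, x_f, y_i, y_f):
    """Count pixels equal to 255 in mask[x_i:x_f, y_i:y_f] without a loop."""
    return int(np.count_nonzero(mask[x_i:x_f, y_i:y_f] == 255))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 2:5] = 255                        # a 3x3 "red" blob
```

On a real camera frame this runs orders of magnitude faster than the nested `for` loops, which is most of the slowness the answer mentions.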
|
concatenate 2 readed files python
Question: I was trying to generate all the hex numbers from `0000000000` to `FFFFFFFFFF`
(every combination of a 10-character hex string), but the resulting file was
very large, so I thought to divide it into two lists from `00000` to `fffff`
and then join the lists through `stdout` and pipe the result to aircrack
The code:
#!/usr/bin/python
# -*- encode: utf-8
import sys
def main():
fd = open("hexdic.txt", "r")
ffdd = open("dichex.txt","r")
for line in fd.readlines():
for otra in ffdd.readlines():
print line.replace("\n","") + otra.replace("\n","")
fd.close()
if __name__ == '__main__':
main()
The issue is that after the program takes the first line of the first file
("00000") and runs through all the lines of the second file from "00000" to
"FFFFF", it finishes instead of continuing with "00001" from the first list
and running through the second list again.
Why?
Well, I was thinking of using a regex to generate random combinations.
Can you help me with my code by showing me what I am doing wrong?
Answer: Do you have to read the hex values from files? How about
    for i in xrange(0x10000000000):  # 16**10 values: 0000000000 .. FFFFFFFFFF
        print '%010x' % i
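As for why the original loop stops: the inner `readlines()` exhausts `dichex.txt` during the first pass of the outer loop, so on every later outer iteration there is nothing left to read. Reading the second file into a list once (or using `itertools.product`) fixes it; a sketch with tiny stand-in inputs:

```python
from itertools import product

with open('hexdic.txt', 'w') as f:             # tiny stand-in inputs
    f.write('00\n01\n')
with open('dichex.txt', 'w') as f:
    f.write('aa\nbb\n')

with open('hexdic.txt') as fd, open('dichex.txt') as ffdd:
    prefixes = [line.strip() for line in fd]
    suffixes = [line.strip() for line in ffdd]  # read once, reuse many times

combos = [a + b for a, b in product(prefixes, suffixes)]
```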
|
How to use wxPython for Python 3?
Question: I installed `wxPython 3.0.1.1`, but I'm unable to `import wx` using `Python
3.4.1`. I am getting the following error:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import wx
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'wx'
Nevertheless, I can `import wx` if I use `Python 2.7` (the default
installation in my `OS X 10.9`):
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import wx
>>>
How can I use wxPython for Python 3, and specifically for Python 3.4.1?
Answer: You have two different pythons installed on your machine (3.4.1 and 2.7.5). Do
not expect to be able to use one package installed in one python (wxPython
3.0.1.1 at python 2.7.5) automatically to be available in another python.
Additionally wxPython (classic) does not work for Python 3. You need
[_wxPython Phoenix_](http://wxpython.org/Phoenix/docs/html/index.html) to be
able to do that.
You can try to install one of [the wxPython Phoenix
snapshots](http://wxpython.org/Phoenix/snapshot-builds/) in your Python 3.4.1.
However, because Phoenix is not equally mature as classic, do not expect
everything to work out of the box.
You can find a complete explanation/description in the following wxPython wiki
at the following link:
> [Installing wxPython-Phoenix using
> pip](http://wiki.wxpython.org/How%20to%20install%20wxPython#Installing_wxPython-
> Phoenix_using_pip)
There are several important points:
* that `pip`/`setuptool` is new enough (> 6.x.x/> 12.x.x)
* that the builds are "unofficial", and thus `pip` refuses to install them by default: basically you need to add `--pre` when installing with pip.
* that you mark the host as trusted with `--trusted-host wxpython.org`
Full command for Windows machines:
C:\python27\scripts\pip.exe install --upgrade --trusted-host wxpython.org --pre -f http://wxpython.org/Phoenix/snapshot-builds/ wxPython_Phoenix
Note that the path above points at Python 2.7; to install Phoenix for Python 3, run the same command with the pip of your Python 3 installation.
|
Python - parse a line of text
Question: I have the following input from a text file:
Title Value Position Perturbation 1.5 0.6 8.5 9.8 0 8.5 9.6 0.5 0.6 (...)
Title Value Position Perturbation 3 1.5 6 0 0.8 9.7 5.3 9.9 0.7 0.9 (...)
I want to remove the first 4 columns, and for the columns with numbers I want
to take every group of 4 values, swap the positions of the 2nd and 3rd values,
and remove the fourth one. So the output should look like:
1.5 8.5 0.6 0 9.6 8.5 0.6 (...)
3 6 1.5 0.8 5.3 9.7 0.7 (...)
For this purpose I wrote the following Python code:
import sys
input_file= open (sys.argv[1],'r')
output_file= open (sys.argv[2], 'w')
with open(sys.argv[1]) as input_file:
for i, line in enumerate(input_file):
output_file.write ('\n')
marker_info= line.split()
#snp= marker_info[0]
end= len(marker_info)
x=4
y=8
# while y<=len(marker_info):
while x<=end:
intensities= marker_info[x:y]
AA= intensities[0]
BB= intensities[1]
AB= intensities[2]
NN= intensities[3]
output_file.write ('%s' '\t' '%s' '\t' '%s' '\t' % (AA, AB, BB))
x= y
y= x + 4
input_file.close()
output_file.close()
The code seems to work fine, but the problem is that for each line the last
four values are missing. So I guess the problem is in the `while`
statement... but I have no clue how to solve it (I know it seems a simple
problem).
Thanks in advance for any suggestions.
Answer: Try this:
1. Open the file with csv and strip the labels
2. Generate a sublist of the desired size
3. Do the swap and remove the trailing element
4. Save the output (I've done it with a list, but you can do it with an output
file)
>>> import csv
>>> output = []
>>> with open('sample.csv') as input:
... reader = csv.reader(input, delimiter=' ')
... for line in reader:
... line = line[4:] #strip labels
... slice_size = 4
... for slice_idx in range(0,len(line),slice_size):
... sublist = line[slice_idx : slice_idx+slice_size]
... if len(sublist) == slice_size:
... swap = sublist[2]
... sublist[2] = sublist[1]
... sublist[1] = swap
... output.append(sublist[:slice_size-1])
...
>>>
>>> output
[['1.5', '8.5', '0.6'], ['0', '9.6', '8.5'], ['3', '6', '1.5'], ['0.8', '5.3', '9.7']]
|
Python pptx unexpected keyword argument 'standalone'
Question: I am trying to run an example from python-pptx version 0.5.1 on Python 2.6.8.
The code is simple:
from pptx import Presentation
prs = Presentation()
prs.save('test.pptx')
But I get the error "got an unexpected keyword argument 'standalone' ":
> Traceback (most recent call last):
>   File "prez.py", line 4, in <module>
>     prs.save('test.pptx')
>   File "/usr/local/lib64/python2.6/site-packages/python_pptx-0.5.1-py2.6.egg/pptx/api.py", line 132, in save
>     return self._package.save(file)
>   File "/usr/local/lib64/python2.6/site-packages/python_pptx-0.5.1-py2.6.egg/pptx/opc/package.py", line 144, in save
>     PackageWriter.write(pkg_file, self.rels, self.parts)
>   File "/usr/local/lib64/python2.6/site-packages/python_pptx-0.5.1-py2.6.egg/pptx/opc/pkgwriter.py", line 33, in write
>     PackageWriter._write_content_types_stream(phys_writer, parts)
>   File "/usr/local/lib64/python2.6/site-packages/python_pptx-0.5.1-py2.6.egg/pptx/opc/pkgwriter.py", line 45, in _write_content_types_stream
>     _ContentTypesItem.xml_for(parts)
>   File "/usr/local/lib64/python2.6/site-packages/python_pptx-0.5.1-py2.6.egg/pptx/opc/oxml.py", line 39, in serialize_part_xml
>     xml = etree.tostring(part_elm, encoding='UTF-8', standalone=True)
>   File "lxml.etree.pyx", line 2471, in lxml.etree.tostring (src/lxml/lxml.etree.c:24624)
> TypeError: tostring() got an unexpected keyword argument 'standalone'
I have no idea what is incorrect. All examples from the documentation report
the same error when I try to save the presentation.
Answer: The problem was that some libraries were out of date: upgrading lxml resolved
it, since older lxml releases' `etree.tostring()` does not accept the
`standalone` keyword that python-pptx passes.
|
Django 1.7 makemigrations - ValueError: Cannot serialize function: lambda
Question: I switched to Django 1.7. When I try makemigrations for my application, it
crashes. The crash report is:
Migrations for 'roadmaps':
0001_initial.py:
- Create model DataQualityIssue
- Create model MonthlyChange
- Create model Product
- Create model ProductGroup
- Create model RecomendedStack
- Create model RecomendedStackMembership
- Create model RoadmapMarket
- Create model RoadmapUser
- Create model RoadmapVendor
- Create model SpecialEvent
- Create model TimelineEvent
- Create model UserStack
- Create model UserStackMembership
- Add field products to userstack
- Add field viewers to userstack
- Add field products to recomendedstack
- Add field product_group to product
- Add field vendor to product
- Add field product to dataqualityissue
Traceback (most recent call last):
File "manage.py", line 29, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/makemigrations.py", line 124, in handle
self.write_migration_files(changes)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/makemigrations.py", line 152, in write_migration_files
migration_string = writer.as_string()
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 129, in as_string
operation_string, operation_imports = OperationWriter(operation).serialize()
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 80, in serialize
arg_string, arg_imports = MigrationWriter.serialize(item)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 245, in serialize
item_string, item_imports = cls.serialize(item)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 310, in serialize
return cls.serialize_deconstructed(path, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 221, in serialize_deconstructed
arg_string, arg_imports = cls.serialize(arg)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/writer.py", line 323, in serialize
raise ValueError("Cannot serialize function: lambda")
ValueError: Cannot serialize function: lambda
I found a note about that here <https://code.djangoproject.com/ticket/22892>
There is also link to documentation
<https://docs.djangoproject.com/en/dev/topics/migrations/#serializing-values>
But it does not make it clearer for me. The error message has not given me a
clue where to look for the problem.
Is there a way to detect exactly which line causes the problem?
Any hints?
Answer: We had this issue when using a lambda in a custom field definition.
It is then hard to spot, because the lambda is not listed in the traceback and
the error is not raised on the particular model which uses the custom field.
Our way to fix it:
* check all your custom fields (even in 3rd-party libraries)
* change the lambda to a callable defined at module level (i.e. not inside the custom field class)
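To see why a lambda trips the writer: a migration file must be able to import any callable it references, and a lambda has no importable name. A Django-free illustration (the field usage in the comments is a made-up sketch, not from the question):

```python
import datetime

# The problematic pattern (sketched; field names are hypothetical):
#     created = models.DateTimeField(default=lambda: timezone.now())
# and the fix:
#     created = models.DateTimeField(default=timezone.now)

bad_default = lambda: datetime.datetime.utcnow()

def good_default():
    # A module-level function has an importable path (e.g. myapp.models.good_default),
    # so the migration writer can serialize a reference to it.
    return datetime.datetime.utcnow()

# The migration writer rejects functions named '<lambda>' because there is
# no way to import them from the generated migration module.
print(bad_default.__name__)   # <lambda>
print(good_default.__name__)  # good_default
```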
|
Why does selenium wait for a long time before executing this code?
Question: I'm trying to do infinite scrolling on this page and here is my code:
from selenium import webdriver
import time
profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override","Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0")
driver = webdriver.Firefox(profile)
driver.get("http://www.quora.com/Programming-Languages/followers")
for n in range(0,5): # For testing I have capped this at 5, will handle this properly once things start to work.
driver.execute_script("window.scrollTo(0,1000000);")
time.sleep(2)
So when I run this, it waits for a lot of seconds (more than 1 min sometimes)
before doing any scrolling and then waits again for the same amount of time
before the next scrolling. The code seems to work fine on other pages. Any
ideas on how to fix this?
When I try to use Chrome instead of Firefox (with `driver =
webdriver.Chrome('/home/asdf/apps/chromedrive/chromedriver')` added to the .py
file), I get these errors:
Traceback (most recent call last):
File "ok.py", line 8, in <module>
driver = webdriver.Chrome('/home/asdf/apps/chromedrive/chromedriver')
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 65, in __init__
keep_alive=True)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 73, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 121, in start_session
'desiredCapabilities': desired_capabilities,
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 171, in execute
response = self.command_executor.execute(driver_command, params)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/remote_connection.py", line 349, in execute
return self._request(command_info[0], url, body=data)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/remote_connection.py", line 379, in _request
self._conn.request(method, parsed_url.path, body, headers)
File "/usr/lib/python2.7/httplib.py", line 973, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1007, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 969, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 829, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 791, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 772, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 553, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
Answer: Switching to
[`Chrome()`](https://code.google.com/p/selenium/wiki/ChromeDriver) helped me
to solve the problem:
import time
from selenium import webdriver
followers_per_page = 18
driver = webdriver.Chrome()
driver.get("http://www.quora.com/Programming-Languages/followers")
# get the followers count
element = driver.find_element_by_class_name('count')
followers_count = int(element.text.replace('k', '000').replace('.', ''))
print followers_count
# scroll down the page iteratively with a delay
for _ in xrange(0, followers_count/followers_per_page + 1):
    driver.execute_script("window.scrollTo(0, 1000000);")
time.sleep(2)
FYI, I'm using a slightly different approach: parsing the follower count and
calculating the number of scrolls needed, taking into account the fact that
the page loads 18 followers at a time.
I've actually worked on a similar quora question before, see:
* [Scrolling web page using selenium python webdriver](http://stackoverflow.com/questions/25870906/scrolling-web-page-using-selenium-python-webdriver)
* * *
Well, this was not the first thing coming into my mind. Here's the story.
The problem is that there are pending requests to the
<http://tch840195.tch.quora.com/up/chan5-8886/updates> URL that are taking
minutes to complete. This is what makes selenium think the page is not
completely loaded. And things get worse: this is a periodic request that is
re-issued every X seconds. Think of it as long polling.
I've tried multiple things to overcome the problem using `Firefox` webdriver:
* set [`webdriver.load.strategy`](https://code.google.com/p/selenium/wiki/FirefoxDriver#-Beta-_load_fast_preference) preference to `unstable`
* set `network.http.response.timeout`, `network.http.connection-timeout` and `network.http.keep-alive.timeout` and [`network.http.request.max-start-delay`](http://kb.mozillazine.org/Network.http.request.max-start-delay) preferences
* set page load timeout:
driver.set_page_load_timeout(3)
* set script timeout:
driver.set_script_timeout(3)
* call `window.stop();` hoping it would stop active requests:
driver.execute_script('window.stop();')
* updated to most recent Firefox and selenium package versions
One other option that might work is to somehow block the request to that
["slow url"](http://tch840195.tch.quora.com/up/chan5-8886/updates), either by
using a proxy server and pointing Firefox to it or, if possible, by telling
Firefox to blacklist the URL (probably through an extension).
Also see the relevant issue with multiple workarounds inside:
* [Support a timeout argument on page load operations](https://code.google.com/p/selenium/issues/detail?id=687)
Also see:
* [How to Stop the page loading in firefox programaticaly?](http://stackoverflow.com/questions/5453423/how-to-stop-the-page-loading-in-firefox-programaticaly)
* [Firefox DelayedCommand waits for pending XMLHttpRequests when it doesn't need to.](https://code.google.com/p/selenium/issues/detail?id=5448)
* [FirefoxDriver webdriver.load.strategy unstable findelements getting elements from the wrong page](http://stackoverflow.com/questions/20954605/firefoxdriver-webdriver-load-strategy-unstable-findelements-getting-elements-fro)
* [Set up a real timeout for loading page in Selenium WebDriver?](http://stackoverflow.com/questions/10750198/set-up-a-real-timeout-for-loading-page-in-selenium-webdriver)
* [Selenium Firefox Open timeout](http://stackoverflow.com/questions/3134474/selenium-firefox-open-timeout)
|
Using C/C++ DLL with Python/Pyserial to communicate with Opticon barcode reader
Question: I have an Opticon OPN-2001 barcode scanner that I'm trying to communicate with.
It officially supports C/C++ and .NET, but I wanted to use it with Python if
possible.
I have opened a serial connection to the device (or at least to the port?),
but when I use functions from the DLL it gives me the communications error
(-1) when I am expecting an OK (0). I've never used DLLs or serial
communication, so bear that in mind.
What I'm wondering is whether I've made some obvious mistake in calling the
DLL function or in using pyserial. I'm also very interested in anyone else
having a look at their SDK. It seems to be expecting a 4-byte LONG as comPort
below. I thought this would work, so I'm a bit stuck. I realize there is only
so much you can help without the actual hardware. Thank you for any help!
Here is the code I have so far:
from ctypes import *
from serial import *
opticonLib = WinDLL('Csp2.dll')
opticonLib.csp2SetDebugMode(1) #logs to textfile if using debug version of .dll
comPort = 3
opticonSerial = Serial(
port=comPort - 1,
baudrate=9600,
bytesize=EIGHTBITS,
parity=PARITY_ODD,
stopbits=STOPBITS_ONE,
timeout=5
)
if opticonSerial.isOpen():
print ('Port is open. Using ' + opticonSerial.name + '.')
print (opticonLib.csp2InitEx(comPort)) #Gives -1 instead of 0
opticonSerial.close()
[SDK for scanner if you want to dig
deeper](http://old.opticon.com/uploads/Software/SDK_EGFS012x.zip)
Answer: Windows keeps an exclusive lock on serial ports. Without looking at the SDK,
I'm going to guess that csp2InitEx tries to open the serial port itself; since
pyserial already holds the port, it gets an error from Windows and fails.
Try not opening the serial port yourself and let the DLL manage the port.
|
independent prototyping with java
Question: I am new to Java (well, I have played with it a few times), and I am wondering:
=> How to do _fast_ independent prototypes? Something like one-file projects.
The last few years, I worked with python. Each time I had to develop some new
functionality or algorithm, I would make a simple python module (i.e. file)
just for it. I could then integrate all or part of it into my projects. So,
how should I translate such a "modular development" workflow into a Java
context?
Now I am working on a relatively complex Java DB+web project using Spring,
Maven and IntelliJ, and I can't see how to easily develop and run independent
code in this context.
**Edit:**
I think my question is unclear because I confused two things:
1. fast development and testing of code snippets
2. incremental development
In my experience (with python, no web), I could pass from the first to the
second seemlessly.
For the sake of consistency with the title, **the 1st has priority**. However
it is good only for exploration purpose. In practice, the 2nd is more
important.
Answer: Definitely take a look at [Spring Boot](http://docs.spring.io/spring-
boot/docs/1.2.2.RELEASE/reference/htmlsingle/#getting-started-introducing-
spring-boot). Relatively new project. Its aim is to remove initial
configuration phase and spin up Spring apps quickly. You can think about it as
convention over configuration wrapper on top of Spring Framework.
It's also considered as good fit for micro-services architecture.
It has embedded servlet container (Jetty/Tomcat), so you don't need to
configure it. It also offers various starter dependency bundles for different
technology combinations/stacks, so you can pick a good fit for your needs.
|
How do I execute a def in Python from input
Question: For example, I have:
def function():
return
and I want to execute it via:
d = input('Enter the def name: ')
by entering the def name('function' in this case).
How would I do this?
===================EDIT==========================
Found the way!
def func():
print('IT WORKS!')
r = raw_input('Function: ')
result = eval(r+'()')
Answer: Following on @jonrsharpe's comment, you can get to a function via the module's
namespace. If you are in the module with the function already, you can
f_name = raw_input('Enter the def name: ')
globals()[f_name]()
In this case, KeyError is raised if the name is not correct.
If the function is in a different module, you can
import functions
f_name = raw_input('Enter the def name: ')
getattr(functions, f_name)()
In this case, AttributeError is raised if the name is not correct.
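If the set of permissible functions is known up front, a dispatch dict is a safer variant than `globals()` or `eval`, since user input can only reach whitelisted names (a sketch; the function names are made up):

```python
def greet():
    return 'hello'

def farewell():
    return 'bye'

# Explicit whitelist: user input can only invoke these two callables.
COMMANDS = {'greet': greet, 'farewell': farewell}

name = 'greet'  # in the real program: raw_input('Enter the def name: ')
try:
    result = COMMANDS[name]()
except KeyError:
    result = 'unknown command'

print(result)  # hello
```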
|
Addition going wrong in Python
Question: I am a beginner at python and am writing a basic calculator
while True:
print("PyCalc")
print()
print()
init=input("Press 1 for Basic arithmetic")
if init=="1":
input1=input("Basic Arithmetic...Only +,-,*,/ accepted...")
input2=re.findall(r'\d+|\+|\-|\*|\/',input1 )
ans =basiccalc(input2)
print(ans )
The method basiccalc:
def basiccalc(givenlist):
ans=int(givenlist[0])
for pt in givenlist:
if str(pt).isdigit():
continue
elif pt=='+':
pos=givenlist.index(pt)
ans=ans+int(givenlist[pos+1])
return ans
when I run the program...adding 2 numbers works correctly.
PyCalc
Press 1 for Basic arithmetic1
Basic Arithmetic...Only +,-,*,/ accepted...2+3
5
but when I enter more than 2 numbers...it gives me a wrong answer
PyCalc
Press 1 for Basic arithmetic1
Basic Arithmetic...Only +,-,*,/ accepted...2+4+5+6
14
why do I get such an answer?
Answer: First problem: use the raw_input function instead of input, because in
Python 2 input() evaluates the expression itself.
The second problem is in your basiccalc function: givenlist.index(pt) always
returns the index of the first '+', so for more than two numbers the same
operand gets re-added (2+4+4+4 = 14 in your example). Try this:
import re

def add(a, b):
    return a + b

def basiccalc(givenlist):
    ans = 0
    op = add
    for pt in givenlist:
        if str(pt).isdigit():
            ans = op(ans, int(pt))
        elif pt == '+':
            op = add
    return ans
input1=raw_input("Basic Arithmetic...Only +,-,*,/ accepted...")
input2 = re.findall( r'\d+|\+|\-|\*|\/', input1 )
ans = basiccalc(input2)
print(ans)
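The prompt advertises +, -, *, and /; the same left-to-right scheme handles all four with the `operator` module. This is a sketch with no operator precedence, so 10-3*2 evaluates as (10-3)*2:

```python
import operator
import re

# Map each operator symbol to its function; truediv makes 7/2 give 3.5.
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def calc_left_to_right(tokens):
    # Evaluate strictly left to right: ['2','+','4','+','5','+','6'] -> 17.
    ans = int(tokens[0])
    op = None
    for tok in tokens[1:]:
        if tok in OPS:
            op = OPS[tok]
        else:
            ans = op(ans, int(tok))
    return ans

tokens = re.findall(r'\d+|[+\-*/]', '2+4+5+6')
print(calc_left_to_right(tokens))  # 17
```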
For more information about parsing look at Dragon Book code example :
<https://github.com/lu1s/dragon-book-source-
code/blob/master/parser/Parser.java>
|
Django: Naive datetime while time zone support is active (sqlite)
Question: I'm going around in circles on this one and need some help. I continue to get
a `naive timezone` warning. Not sure what I am doing wrong! Arg.
Here is the warning:
/django/db/models/fields/__init__.py:1222: RuntimeWarning: DateTimeField Video.modified received a naive datetime (2014-10-07 00:00:00) while time zone support is active.
RuntimeWarning)
Here is the model code (redacted somewhat):
from django.db import models
from django.utils import timezone
class ItemBase(models.Model):
created = models.DateTimeField(editable=False)
modified = models.DateTimeField(editable=False)
class Meta:
abstract = True
def save(self, *args, **kwargs):
"""Updates timestamps on save"""
if not self.id:
self.created = timezone.now()
self.modified = timezone.now()
return super(ItemBase, self).save(*args, **kwargs)
class Video(ItemBase):
pass
And the relevant (I think) part of my settings file:
TIME_ZONE = 'UTC'
USE_TZ = True
Is this a sqlite issue (I am still testing things)? Or am I missing something
fundamental here? I've read up on it
[here](http://stackoverflow.com/questions/11708821/django-ipython-sqlite-
complains-about-naive-datetime) and
[here](http://stackoverflow.com/questions/18622007/runtimewarning-
datetimefield-received-a-naive-datetime) and, of course, at the docs
[here](https://docs.djangoproject.com/en/dev/topics/i18n/timezones/#code). But
I am stumped. Thanks.
## edit: added test that throws the error
I am getting the error when I run my tests. I left the redacted stuff in
there, but you should get the idea:
from django.test import TestCase
from django.contrib.auth import get_user_model
from video.models import Video, VideoAccount
class VideoTestCase(TestCase):
def setUp(self):
user = get_user_model().objects.create_user(
username='jacob', email='[email protected]', password='top_secret')
self.video_account = VideoAccount.objects.create(
account_type=1, account_id=12345, display_name="Test Account" )
self.pk1 = Video.objects.create(video_type=1, video_id="Q7X3fyId2U0",
video_account=self.video_account, owner=user)
def test_video_creation(self):
"""Creates a video object"""
self.assertEqual(self.pk1.video_id, "Q7X3fyId2U0")
self.assertEqual(self.pk1.video_link, "https://www.youtube.com/watch?v=Q7X3fyId2U0")
Answer: So I finally figured it out, and I appreciate everyone's input, which got me
thinking in the right way:
One of my past migrations had `datetime.date.today()` as the default value
(which is a hint the migrations give). I didn't think about it because at the
time I didn't even have any data in the model. Even though that migration had
since been superseded further down the road, it appears the test system runs
every migration each time it starts, so I kept getting that warning.
Update: this should be [fixed in
1.7.1](https://docs.djangoproject.com/en/dev/releases/1.7.1/).
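For anyone hitting the same warning: the distinction Django complains about is simply whether the datetime carries a `tzinfo`. A minimal Django-free illustration (`datetime.timezone` is Python 3; in Django code you would use `django.utils.timezone.now()` or `make_aware()` instead):

```python
import datetime

naive = datetime.datetime(2014, 10, 7)  # tzinfo is None -> "naive", triggers the warning
aware = datetime.datetime(2014, 10, 7, tzinfo=datetime.timezone.utc)

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```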
|
How to encode Chinese character as 'gbk' in json, to format a url request parameter String?
Question: I want to dump a dict that contains some Chinese characters as a JSON string,
and format a URL request parameter with it.
Here is my Python code:
import httplib
import simplejson as json
import urllib
d={
"key":"上海",
"num":1
}
jsonStr = json.dumps(d,encoding='gbk')
url_encode=urllib.quote_plus(jsonStr)
conn = httplib.HTTPConnection("localhost",port=8885)
conn.request("GET","/?json="+url_encode)
res = conn.getresponse()
what I expected of the request string is this:
GET /?json=%7B%22num%22%3A+1%2C+%22key%22%3A+%22%C9%CF%BA%A3%22%7D
------------
|
V
"%C9%CF%BA%A3" represent "上海" in format of 'gbk' in url.
but what I got is this:
GET /?json=%7B%22num%22%3A+1%2C+%22key%22%3A+%22%5Cu6d93%5Cu5a43%5Cu6363%22%7D
------------------------
|
v
%5Cu6d93%5Cu5a43%5Cu6363 is the percent-encoded form of the `\uXXXX` escapes that json.dumps produced for "上海" (mojibake, because its UTF-8 bytes were decoded as GBK)
I also tried to dump json with `ensure_ascii=False` option:
jsonStr = json.dumps(d,ensure_ascii=False,encoding='gbk')
but got no luck.
So, how can I make this work? Thanks.
Answer: You almost got it with `ensure_ascii=False`. This works:
jsonStr = json.dumps(d, encoding='gbk', ensure_ascii=False).encode('gbk')
You need to tell `json.dumps()` that the strings it will read are GBK, and
that it should not try to ASCII-fy them. Then you must re-specify the output
encoding, because `json.dumps()` has no separate option for that.
This solution is similar to another answer here:
<http://stackoverflow.com/a/18337754/4323>
So this does what you want, though I should note that the standard for URIs
seems to say that they should be in UTF-8 whenever possible. For more on this,
see here: <http://stackoverflow.com/a/14001296/4323>
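For completeness, the same fix on Python 3 (where `urllib.parse.quote_plus` accepts bytes directly, while the question's Python 2 code uses `urllib.quote_plus`) produces exactly the `%C9%CF%BA%A3` escape the question asked for:

```python
import json
from urllib.parse import quote_plus

d = {"key": "\u4e0a\u6d77", "num": 1}  # "上海"

json_str = json.dumps(d, ensure_ascii=False)  # keep the real characters
gbk_bytes = json_str.encode('gbk')            # re-encode the whole payload as GBK

url_encoded = quote_plus(gbk_bytes)
print(url_encoded)  # contains %C9%CF%BA%A3 for "上海"
```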
|
Buildbot - Traceback while polling for changes issue
Question: I'm running on Windows 7 x64. I followed the install documentation on Buildbot
and did some research on the issue I'm having and haven't found a solution
yet. When I do a force build, everything works fine. I'm using GitPoller. When
it tries to poll for changes, an exception is thrown; why? Let me know if I
can supply any more information. Here's what I'm getting on the master's
twistd.log every 5 minutes:
2014-10-09 00:19:53-0700 [-] while polling for changes
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\buildbot-0.8.9-py2.7.egg\buildbot\util\misc.py", line 54, in start
d = self.method()
File "C:\Python27\lib\site-packages\buildbot-0.8.9-py2.7.egg\buildbot\changes\base.py", line 70, in doPoll
d = defer.maybeDeferred(self.poll)
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 139, in maybeDeferred
result = f(*args, **kw)
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 1237, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 1099, in _inlineCallbacks
result = g.send(result)
File "C:\Python27\lib\site-packages\buildbot-0.8.9-py2.7.egg\buildbot\changes\gitpoller.py", line 147, in poll
yield self._dovccmd('init', ['--bare', self.workdir])
File "C:\Python27\lib\site-packages\buildbot-0.8.9-py2.7.egg\buildbot\changes\gitpoller.py", line 292, in _dovccmd
[command] + args, path=path, env=os.environ)
File "C:\Python27\lib\site-packages\twisted\internet\utils.py", line 176, in getProcessOutputAndValue
reactor)
File "C:\Python27\lib\site-packages\twisted\internet\utils.py", line 30, in _callProtocolWithDeferred
reactor.spawnProcess(p, executable, (executable,)+tuple(args), env, path)
File "C:\Python27\lib\site-packages\twisted\internet\posixbase.py", line 358, in spawnProcess
return Process(self, processProtocol, executable, args, env, path)
File "C:\Python27\lib\site-packages\twisted\internet\_dumbwin32proc.py", line 195, in __init__
raise OSError(pwte)
exceptions.OSError: (2, 'CreateProcess', 'The system cannot find the file specified.')
Also, here's the relevant portion of my config file:
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = []
c['change_source'].append(GitPoller(
repourl='https://github.com/solstice333/BuildbotTest.git',
branch='master',
pollinterval=300))
Any ideas?
Answer: I had a similar issue with HgPoller. Try specifying the full path to git:
gitbin='full/path/to/git.exe',
repourl='https://github.com/solstice333/BuildbotTest.git',
branch='master',
pollinterval=300))
I think something is wrong with twisted: even the snippet below fails with the
same error.
PS: Twisted uses win32process.CreateProcess, and MSDN says about its first
argument: "The string can specify the full path and file name of the module to
execute or it can specify a partial name. In the case of a partial name, the
function uses the current drive and current directory to complete the
specification. The function will not use the search path."
from twisted.internet import utils
utils.getProcessOutputAndValue("hg.exe", ['init', "test_dir"])
|
Installing the same python environment on another machine
Question: I'm developing a python project on my desktop (OS X) and it's running well.
Now I need to run it on a computing cluster that is running Linux and I have
no root access. On my Home in the computing cluster I installed Anaconda
locally. Then when I run my project there I got so many strange bugs and
errors. The code runs fine on my desktop, so I suspect that Anaconda has newer
modules than the ones I have and that this is causing the errors (backward
compatibility issues).
Is there any way to use the same python modules versions I have in my desktop
on the computing cluster?
I wonder if there is a command that produces all the packages and modules I
have on my desktop with their versions and then import this file into some
tool in that linux machine and then I get a cloned version of my setup. Is
there something like that?
Answer: If you have pip installed you could try this [pip
freeze](http://pip.readthedocs.org/en/latest/reference/pip_freeze.html).
If you want to get a list of modules from the Python shell:
help('modules')
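A typical round trip with pip looks like this (a sketch; the `--user` flag avoids needing root on the cluster):

```shell
# On the desktop (OS X): record the exact version of every installed package
pip freeze > requirements.txt
head -n 5 requirements.txt

# On the cluster (no root needed with --user, or inside a virtualenv):
# pip install --user -r requirements.txt
```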
|
Create executable of a python application
Question: I want to create an executable of a python application which can work on
ubuntu machine.
Python setuptools has options for windows(bdist_wininst) and rmp(bdist_rpm),
but i couldn't found any option in python setuptools for ubuntu or debian.
There is one more option "bdist_dumb" but i don't its use.
Questions:
1. How to create ubuntu executable using python setuptools ?
2. How pyinstaller is different from setuptools and which should i use ?
Answer: [stdeb](https://pypi.python.org/pypi/stdeb) might be what you're looking for;
it also works with distutils, which is similar to setuptools. Once stdeb is
installed, running `python setup.py --command-packages=stdeb.command bdist_deb`
produces a Debian package. Here's a simple example taken from [their
repo](https://github.com/astraw/stdeb/blob/master/test_data/simple_pkg/setup.py):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from distutils.core import setup
import sys
setup(name='simple_pkg',
packages=['simple_pkg'],
version='0.1',
author='Mister Unicodé',
author_email='[email protected]',
description='Python package with Unicodé fields',
long_description='This is a Python package with Unicodé data.',
)
|