How to build PyQt5 on Ubuntu
Question: I'm trying to move my work from `PySide` to `PyQt5`.
My project requires `Python3.4.1`, but Ubuntu's default python3 is
`Python3.4.0`, so I have to compile `PyQt5` myself.
`Python3.4.1` is installed at `/opt/python3.4.1/bin/python3.4` and works well.
My system is Ubuntu 14.04.
* * *
First, I downloaded the sources from the official site, `PyQt-gpl-5.3.1.tar.gz` and
`sip-4.16.2.tar.gz`. Sip installed successfully, but an error occurs while
making `pyqt`.
My commands are:
$ /opt/python3.4.1/bin/python3.4 configure.py
$ make
and the error is
> g++ -m64 -Wl,-O1 -shared -o libpyqt5qmlplugin.so pluginloader.o
> moc_pluginloader.o -L/usr/X11R6/lib64 -L/opt/python3.4.1/lib -lpython3.4m
> -lQt5Qml -L/usr/lib/x86_64-linux-gnu -lQt5Network -lQt5Gui -lQt5Core -lGL
> -lpthread
>
> /usr/bin/ld: /opt/python3.4.1/lib/libpython3.4m.a(abstract.o): relocation
> R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making
> a shared object; recompile with -fPIC
>
> /opt/python3.4.1/lib/libpython3.4m.a: error adding symbols: Bad value
>
> collect2: error: ld returned 1 exit status
>
> make[1]: *** [libpyqt5qmlplugin.so] Error 1
The error does not occur when I use the default `$python3 configure.py`, and it
looks quite similar to an error I got when building PySide (which was fixed by
[this](http://pyside.readthedocs.org/en/latest/building/linux.html)). But I am
weak at compiling, so I can't fix PyQt the way they do with PySide.
I also tried `pip install PyQt5` in my `virtualenv`, but unfortunately got the
same failure as [this](http://stackoverflow.com/questions/21577723/can-not-
install-pyqt5-from-the-python-packaging-index).
Even with the seemingly successful build against the default python, two errors
show up when running.
1. The default install path is "site-packages" instead of the "dist-packages" that Ubuntu uses, so I have to adjust my PYTHONPATH myself.
2. The most annoying problem: when I import it:
> from PyQt5 import QtCore, QtGui
an error occurs:
> Traceback (most recent call last): File "", line 1, in from PyQt5 import
> QtCore, QtGui RuntimeError: the sip module implements API v11.0 but the
> PyQt5.QtCore module requires API v11.1
All the sources are the latest releases from the official site! Is that a joke?
* * *
Finally, I used `apt-get install python3-pyqt5` to install PyQt for my default
python3.4.0. It works well now, but I still can't find a way to build PyQt
for python3.4.1.
Answer: It really was hard work!
> /usr/bin/ld: /opt/python3.4.1/lib/libpython3.4m.a(abstract.o): relocation
> R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making
> a shared object; recompile with -fPIC
This error is raised because the linker could not find `libpythonX.X.so`. When I
built my `Python3.4.1`, I used only one option, `--prefix`, so there was no
shared library in my python's lib directory.
I then rebuilt my python with `--enable-shared`, installed `PyQt5`, and could
successfully import PyQt.
However, I then realized my python version was 3.4.0! The python executable had
been linked against the system python's shared library...
This article helps me a lot: <http://koansys.com/tech/building-python-with-
enable-shared-in-non-standard-location>
Finally, I added `LDFLAGS="-Wl,-rpath /opt/python3.4.1/lib"` to the configure options:
> ./configure --prefix=/opt/python3.4.1 --enable-shared LDFLAGS="-Wl,-rpath
> /opt/python3.4.1/lib"
and then installed PyQt5.
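For reference, the whole sequence looked roughly like this (a sketch based on the paths and versions above; the directory names are the usual ones from the source tarballs, and the sip/PyQt configure steps may need extra options on other setups):
$ cd Python-3.4.1
$ ./configure --prefix=/opt/python3.4.1 --enable-shared LDFLAGS="-Wl,-rpath /opt/python3.4.1/lib"
$ make && sudo make install
$ cd ../sip-4.16.2
$ /opt/python3.4.1/bin/python3.4 configure.py && make && sudo make install
$ cd ../PyQt-gpl-5.3.1
$ /opt/python3.4.1/bin/python3.4 configure.py && make && sudo make install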
Now, I can enjoy it~(≧▽≦)/~
* * *
Python: Simple TypeError
Question: This is a module containing two class:
import datetime

# Store the next available id for all new notes
last_id = 0

class Note:
    '''Represent a note in the notebook. Match against a string
    in searches and store tags for each note.'''

    def __init__(self, memo, tags=''):
        '''initialize a note with memo and optional
        space-sparated tags. Automatically set the note's
        creation date and a unique id.'''
        self.memo = memo
        self.tags = tags
        self.creation_date = datetime.date.today()
        global last_id
        last_id += 1
        self.id = last_id

    def match(self, filter):
        '''Determine if this note matches the filter
        text. Return True if it matches, False otherwise.
        Search is case sensitive and matches both text and tags.'''
        return filter in self.memo or filter in self.tags

class Notebook:
    '''Represent a collection of notes that can be tagged,
    modified, and searched.'''

    def __init__(self):
        '''Initialize a notebook with an empty list.'''
        self.notes = []

    def new_note(self, memo, tags=''):
        '''Create a new note and add it to the list.'''
        self.notes.append(Note(memo, tags))

    def modify_memo(self, note_id, memo):
        '''Find the note with the given id and change its
        memo to the given value.'''
        for note in self.notes:
            if note.id == note_id:
                note.memo = memo
                break

    def modify_tags(self, note_id, tags):
        '''Find the note with the given id and change its
        tags to the given value.'''
        for note in self.notes:
            if note.id == note_id:
                note.tags = tags
                break

    def search(self, filer):
        '''Find all notes that match the given filter
        string.'''
        return [note for note in self.notes if note.match(filter)]
When I create an instance of `Notebook()` and perform a `search("hello")`, a
`TypeError` is raised.
>>> from notebook import Note, Notebook
>>> n = Notebook()
>>> n.new_note("hello world")
>>> n.new_note("hello again")
>>> n.notes
[<notebook.Note object at 0x02A2F5F0>, <notebook.Note object at 0x02A51D90>]
>>> n.search("hello")
Traceback (most recent call last):
File "<pyshell#30>", line 1, in <module>
n.search("hello")
File "C:/Users/James/Desktop/Python3OOP/Cahpter2Notebook\notebook.py", line 60, in search
return [note for note in self.notes if note.match(filter)]
File "C:/Users/James/Desktop/Python3OOP/Cahpter2Notebook\notebook.py", line 60, in <listcomp>
return [note for note in self.notes if note.match(filter)]
File "C:/Users/James/Desktop/Python3OOP/Cahpter2Notebook\notebook.py", line 27, in match
return filter in self.memo or filter in self.tags
TypeError: 'in <string>' requires string as left operand, not type
The error tells me the `filter` in `Note.match` must be a `string`, but when I
perform `n.search("hello")`, `"hello"` is already a string.
Can someone tell me what's really going wrong? Thanks!!!
Answer: You're using `filter` in the method body, but the parameter is declared in the
function's signature as `filer`:
def search(self, filer):
so `filter` instead picks up the built-in `filter` function from the outer scope
(thanks to @grc - filter is a builtin function), which is not a string.
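A minimal fix is simply to rename the parameter so the name used in the body is the one that is bound:
def search(self, filter):
    '''Find all notes that match the given filter
    string.'''
    return [note for note in self.notes if note.match(filter)]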
* * *
Multi-threaded Z3?
Question: I'm working on a Python project, where I'm currently trying to speed things up
in some horrible ways: I set up my Z3 solvers, then I fork the process, and
have Z3 perform the solve in the child process and pass a pickle-able
representation of the model back to the parent.
This works great, and represents the first stage of what I'm trying to do: the
parent process is now no longer CPU-bound. The next step is to multi-thread
the parent, so that we can solve multiple Z3 solvers in parallel.
I'm pretty sure I've mutexed away any concurrent accesses of Z3 in the setup
phase, and only one thread should be touching Z3 at any one time. However,
despite this, I'm getting random segfaults in libz3.so. It's important to
note, at this point, that it's not always the _same_ thread that touches Z3 --
the same object (not the solvers themselves, but the expressions) might be
handled by different threads at different times.
My question is, is it possible to multi-thread Z3? There is a brief note here
(<http://research.microsoft.com/en-us/um/redmond/projects/z3/z3.html>) saying
"It is not safe to access Z3 objects from multiple threads.", which I guess
would answer my question, but I'm holding out hope that it _means_ to say that
one shouldn't access Z3 from multiple threads _simultaneously_. Another
resource ([Again: Installing Z3 + Python on
Windows](http://stackoverflow.com/questions/14854956/again-
installing-z3-python-on-windows)) states, from Leonardo himself, that "Z3 uses
thread local storage", which, I guess, would sink this whole undertaking, but
a) that answer is from 2012, so maybe things have changed, and b) maybe it
uses thread-local storage for some unrelated stuff?
Anyways, is multi-threading Z3 possible (from Python)? I'd hate to have to
push the setup phase into the child processes...
Answer: Z3 does indeed use thread local storage, but as far as I can see, there is
only one point left in the code where it does so (to track how much memory
each thread is using; in memory_manager.cpp), but that should not be
responsible for the symptoms you see.
Z3 should behave nicely in a multi-threaded setting if every thread strictly
uses only its own context object (Z3_context, or in Python the class Context).
This means that any object created through one Context cannot in any
way interact with any of the other Contexts; if that is required, all objects
have to be translated from one Context to another first, e.g. in Python via
functions like translate(...) in class ASTRef.
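As an illustration, here is a minimal sketch of the one-context-per-thread pattern with the Python bindings (the formula is just a placeholder; translate(...) is only needed if an expression really has to cross contexts):
import threading
import z3

def worker():
    ctx = z3.Context()            # every thread gets its own context
    x = z3.Int('x', ctx=ctx)      # all objects are created within that context
    s = z3.Solver(ctx=ctx)
    s.add(x > 5, x < 10)
    print(s.check())              # an expression could be moved elsewhere with x.translate(other_ctx)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()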
That said, there surely are some bugs left to fix. My first target when seeing
random segfaults would be the garbage collector, because it might not interact
nicely with Z3's reference counting (which is the case in other APIs). There
is also a known bug that's triggered when many Context objects are created at
the same time (on my todo list though...)
* * *
Passing arguments in class in Python
Question: In the given code, lists x and y are randomly assigned numbers between 0-N
with some probability 0.5. I am randomly choosing an agent and removing it
using func1. I am adding one agent using func2. Now I have defined two class
objects x1 and x2. The input argument 'state1' of x2 is obtained from x1, and
similarly 'state2' of x1 is obtained from x2.
I am not able to find a proper way to pass the arguments 'state2' and 'state1'
to x1 and x2. As you can see from the last four lines of my code, state1 and
state2 should be defined before x1 and x2, but I can't do that because the class
objects x1 and x2 have to be defined first, as I did.
What I am trying to achieve is the following: I have two populations N1 and
N2. I randomly choose one agent (say state1) from N1 and add
it (preserving its state = state1) to N2. At the same time, I remove one
agent from N2 (say state2) and add it (preserving its state = state2) to the N1
population. This process is repeated over time at a fixed time interval.
Can somebody tell me a proper way to do this and make the code run?
import random

class func():
    def __init__(self, N, state):
        self.N = N
        self.x = []
        self.y = []
        agents = range(self.N)
        for i in range(self.N):
            if random.random() < 0.5:
                self.x.append(i)
            else:
                self.y.append(i)
        agent = random.choice(agents)

    def func1(self, agent):
        if self.agent in self.x:
            self.x.remove(agent)
            return 1
        elif self.agent in self.y:
            self.y.remove(agent)
            return 2

    def func2(self, state):
        if state == 1:
            self.x.append(N)
            return self.x
        elif state == 2:
            self.y.append(N)
            return self.y

if __name__=='__main__':
    N1 = 100
    N2 = 100
    x1 = state(N1, state2)
    x2 = state(N2, state1)
    state1 = func.func1()
    state2 = func.func1()
Answer: You can remove the second argument from the state constructor and pass it
later, in a separate method like `set_state(state)`.
E.g.:
x1 = state(N1)
x2 = state(N2)
state1 = func.func1()
state2 = func.func2()
setState(x1, state2)
setState(x2, state1)
In the state constructor you have to strip out the state argument and move the
related code into the new setState function. I do not see that code, so I cannot
show it here.
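A sketch of what that could look like, with illustrative names (Population, remove_random_agent and set_state are not the original code, and the agent bookkeeping is simplified):
import random

class Population(object):
    def __init__(self, N):
        # Same idea as func.__init__: split N agents into x and y with p = 0.5.
        self.x, self.y = [], []
        for i in range(N):
            (self.x if random.random() < 0.5 else self.y).append(i)

    def remove_random_agent(self):
        # Like func1: remove one random agent and report which list it came from.
        if self.x and (not self.y or random.random() < 0.5):
            self.x.pop(random.randrange(len(self.x)))
            return 1
        self.y.pop(random.randrange(len(self.y)))
        return 2

    def set_state(self, state):
        # Like func2: add one agent, preserving the state it arrived with.
        (self.x if state == 1 else self.y).append(len(self.x) + len(self.y))

pop1, pop2 = Population(100), Population(100)
state1 = pop1.remove_random_agent()   # agent leaving population 1
state2 = pop2.remove_random_agent()   # agent leaving population 2
pop1.set_state(state2)                # each arrival keeps its original state
pop2.set_state(state1)
This avoids the chicken-and-egg problem entirely: the constructors no longer need a state that does not exist yet.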
* * *
Regridding regular netcdf data
Question: I have a netcdf file containing global sea-surface temperatures. Using
matplotlib and Basemap, I've managed to make a map of this data, with the
following code:
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
filename = '/Users/Nick/Desktop/SST/SST.nc'
fh = Dataset(filename, mode='r')
lons = fh.variables['LON'][:]
lats = fh.variables['LAT'][:]
sst = fh.variables['SST'][:].squeeze()
fig = plt.figure()
m = Basemap(projection='merc', llcrnrlon=80.,llcrnrlat=-25.,urcrnrlon=150.,urcrnrlat=25.,lon_0=115., lat_0=0., resolution='l')
lon, lat = np.meshgrid(lons, lats)
xi, yi = m(lon, lat)
cs = m.pcolormesh(xi,yi,sst, vmin=18, vmax=32)
m.drawmapboundary(fill_color='0.3')
m.fillcontinents(color='0.3', lake_color='0.3')
cbar = m.colorbar(cs, location='bottom', pad="10%", ticks=[18., 20., 22., 24., 26., 28., 30., 32.])
cbar.set_label('January SST (' + u'\u00b0' + 'C)')
plt.savefig('SST.png', dpi=300)
The problem is that the data is very high resolution (9km grid) which makes
the resulting image quite noisy. I would like to put the data onto a lower
resolution grid (e.g. 1 degree), but I'm struggling to work out how this could
be done. I followed a worked solution to try and use the matplotlib griddata
function by inserting the code below into my above example, but it resulted in
'ValueError: condition must be a 1-d array'.
xi, yi = np.meshgrid(lons, lats)
X = np.arange(min(x), max(x), 1)
Y = np.arange(min(y), max(y), 1)
Xi, Yi = np.meshgrid(X, Y)
Z = griddata(xi, yi, z, Xi, Yi)
I'm a relative beginner to Python and matplotlib, so I'm not sure what I'm
doing wrong (or what a better approach might be). Any advice appreciated!
Answer: If you **regrid** your data to a coarser lat/lon grid using e.g. bilinear
interpolation, this will result in a **smoother** field.
The NCAR ClimateData guide has a nice [introduction to
regridding](https://climatedataguide.ucar.edu/climate-data-tools-and-
analysis/regridding-overview) (general, not Python-specific).
The most powerful implementation of regridding routines available for Python
is, to my knowledge, the [Earth System Modeling Framework (ESMF) Python
interface (ESMPy)](https://www.earthsystemcog.org/projects/esmp/). If this is
a bit too involved for your application, you should look into
1. [EarthPy](http://earthpy.org/) tutorials on regridding (e.g. using [Pyresample](http://earthpy.org/interpolation_between_grids_with_pyresample.html), [cKDTree](http://earthpy.org/interpolation_between_grids_with_ckdtree.html), or [Basemap](http://earthpy.org/interpolation_between_grids_with_basemap.html)).
2. Turning your data into an [Iris](http://scitools.org.uk/iris/) cube and using [Iris' regridding functions](http://scitools.org.uk/iris/docs/latest/userguide/interpolation_and_regridding.html#regridding).
Perhaps start by looking at the [EarthPy regridding tutorial using
Basemap](http://earthpy.org/interpolation_between_grids_with_basemap.html),
since you are using it already.
The way to do this in your example would be
from mpl_toolkits import basemap
from netCDF4 import Dataset
import numpy as np

filename = '/Users/Nick/Desktop/SST/SST.nc'
with Dataset(filename, mode='r') as fh:
    lons = fh.variables['LON'][:]
    lats = fh.variables['LAT'][:]
    sst = fh.variables['SST'][:].squeeze()

lons_sub, lats_sub = np.meshgrid(lons[::4], lats[::4])
sst_coarse = basemap.interp(sst, lons, lats, lons_sub, lats_sub, order=1)
This performs bilinear interpolation (`order=1`) on your SST data onto a sub-
sampled grid (every fourth point). Your plot will look more coarse-grained
afterwards. If you do not like that, interpolate back onto the original grid
with e.g.
sst_smooth = basemap.interp(sst_coarse, lons_sub[0,:], lats_sub[:,0], *np.meshgrid(lons, lats), order=1)
* * *
How does local rebinding of global names in Python make code faster/optimized?
Question: I was reading about [**Default Parameter Values in
Python**](http://effbot.org/zone/default-values.htm) on Effbot.
There is a section later in the article where the author talks about [**Valid
uses for mutable defaults**](http://effbot.org/zone/default-values.htm#valid-
uses-for-mutable-defaults) and cites the following example:
and, for highly optimized code, local rebinding of global names:
import math
def this_one_must_be_fast(x, sin=math.sin, cos=math.cos):
...
I haven't been able to locate how this causes fast/highly optimised execution
of code. Can somebody enlighten on this with a well informed (and preferably
with citations) answer?
Answer: CPython access to local variables is _index_ -based (involving the
[`LOAD_FAST`](https://docs.python.org/2/library/dis.html#opcode-LOAD_FAST)
opcode).
Globals, on the other hand, are accessed through _name lookup_ in a dictionary
(using the opcode
[`LOAD_GLOBAL`](https://docs.python.org/2/library/dis.html#opcode-
LOAD_GLOBAL)). For members of a module (like `math.sin`), it's a two-step
process: a first look-up (`LOAD_GLOBAL`) pushes the module object, and a second
look-up (`LOAD_ATTR`) locates the appropriate member.
Even though dictionary lookup is highly optimized, it can't beat index-based access.
import math
def f():
    math.sin(1)

  4           0 LOAD_GLOBAL              0 (math)   ***
              3 LOAD_ATTR                1 (sin)    ***
              6 LOAD_CONST               1 (1)
              9 CALL_FUNCTION            1
             12 POP_TOP
             13 LOAD_CONST               0 (None)
             16 RETURN_VALUE

from math import sin
def f():
    sin(1)

  4           0 LOAD_GLOBAL              0 (sin)    ***
              3 LOAD_CONST               1 (1)
              6 CALL_FUNCTION            1
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE

def f(sin=math.sin):
    sin(1)

  7           0 LOAD_FAST                0 (sin)    ***
              3 LOAD_CONST               1 (1)
              6 CALL_FUNCTION            1
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
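The effect is easy to measure with `timeit` (a rough sketch; absolute numbers depend on the machine and interpreter, but the locally bound version is consistently faster):
import math, timeit

def use_global():
    for _ in range(100000):
        math.sin(1)            # LOAD_GLOBAL math, then LOAD_ATTR sin

def use_local(sin=math.sin):
    for _ in range(100000):
        sin(1)                 # LOAD_FAST sin

print(timeit.timeit(use_global, number=100))
print(timeit.timeit(use_local, number=100))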
* * *
Biopython parsing a GBK file without genome sequence
Question: I wrote a script that uses a GenBank file and Biopython to fetch the sequences
of given genes from the sequence part of the GBK file, which my colleagues use
for their work.
We had some problems now with a new data set, and it turned out that the GBK
file that was downloaded did not contain a sequence (which can easily happen
when you download from the GenBank website at NCBI). Instead of throwing an
error, Biopython returns a long sequence of Ns when using
`record.seq[start:end]`. What is the easiest way to catch that problem right
from the start to stop the script with an error message?
Answer: Right, I found a way. If I count the Ns in the sequence and check if there are
as many as the sequence is long, I know that the sequence is missing:
import sys
from Bio import SeqIO

for seq_record in SeqIO.parse("sequence.gb", "genbank"):
    sequence = seq_record.seq
    if len(sequence) == sequence.count("N"):
        sys.exit("There seems to be no sequence in your GenBank file!")
I would have preferred a solution that checks the sequence type instead, since
the empty sequence is `Bio.Seq.UnknownSeq`, instead of `Bio.Seq.Seq` for a
real sequence, and would be thankful if anyone can suggest something in that
direction.
**Update**
@xbello made me try again to check the sequence type, now this also works:
import sys, Bio
from Bio import SeqIO

for seq_record in SeqIO.parse("sequence.gb", "genbank"):
    sequence = seq_record.seq
    if isinstance(sequence, Bio.Seq.UnknownSeq):
        sys.exit("There seems to be no sequence in your GenBank file!")
* * *
Tweepy Update with media error
Question: I want to post an image to Twitter every hour out of a folder.
import os, tweepy, time, sys

path="C:\Users\Kenny\Desktop\dunny"
files=os.listdir(path)

CONSUMER_KEY = 'hide'
CONSUMER_SECRET = 'hide'
ACCESS_KEY = 'hide'
ACCESS_SECRET = 'hide'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

for i in path:
    api.update_with_media(files)
    time.sleep(3600)
This is the error msg which I get if I try to run the code.
C:\Users\Kenny\Desktop>python htmlparse.py
Traceback (most recent call last):
File "htmlparse.py", line 14, in <module>
api.update_with_media(files)
File "C:\Python27\lib\site-packages\tweepy\api.py", line 98, in update_with_me
dia
headers, post_data = API._pack_image(filename, 3072, form_field='media[]', f
=f)
File "C:\Python27\lib\site-packages\tweepy\api.py", line 713, in _pack_image
if os.path.getsize(filename) > (max_size * 1024):
File "C:\Python27\lib\genericpath.py", line 49, in getsize
return os.stat(filename).st_size
TypeError: coercing to Unicode: need string or buffer, list found
Answer: You need to make your `path` string a raw string literal:
path = r"C:\Users\Kenny\Desktop\dunny"
or use double backward slashes:
path = "C:\\Users\\Kenny\\Desktop\\dunny"
or use forward slashes:
path = "C:/Users/Kenny/Desktop/dunny"
`\U` (from `"C:\Users..."`) is an [escape
sequence](https://docs.python.org/2/reference/lexical_analysis.html#string-
literals) used to introduce a 32-bit hex character value. This is why you're
getting the Unicode error.
* * *
The other issue is with your `for` loop at the bottom. Try this instead
(`os` is already imported at the top):
for i in files:
    filename = os.path.join(path, i)
    api.update_with_media(filename)
    time.sleep(3600)
Previously, when you were using `for i in path:`, you were iterating over each
character in the string `path`. Then, in the body of the loop,
`api.update_with_media(files)` was trying to send the entire list of file
names, when the function only accepts one.
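Putting both fixes together, a sketch of the corrected script (the keys are kept as the 'hide' placeholders from the question):
import os, tweepy, time

path = r"C:\Users\Kenny\Desktop\dunny"
files = os.listdir(path)

auth = tweepy.OAuthHandler('hide', 'hide')   # CONSUMER_KEY, CONSUMER_SECRET
auth.set_access_token('hide', 'hide')        # ACCESS_KEY, ACCESS_SECRET
api = tweepy.API(auth)

for i in files:
    api.update_with_media(os.path.join(path, i))   # one file per tweet
    time.sleep(3600)                               # wait an hour between posts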
* * *
Showing element in html with flask and BeautifulSoup 3 and Python 2.7.8
Question: I have my @app.route and the corresponding def(), which builds a
list of URLs taken from <http://annotaria.web.cs.unibo.it/documents/>. How
can I show these URLs as HTML? When I visit
`http://localhost:5000/articoli` I'd like to see a list of my URLs.
Thank you very much.
@app.route('/articoli', methods=['GET'])
def lista_articoli():
    lista = []
    import urllib2
    from bs4 import BeautifulSoup
    url = urllib2.urlopen("http://annotaria.web.cs.unibo.it/documents/").read()
    soup = BeautifulSoup(url)
    for row in soup.findAll('a'):
        if row.parent.name == 'td':
            if row["href"]:
                myArticle = row["href"]
                if '.html' in myArticle:
                    print myArticle
                    lista.append({'url': myArticle})
Answer: To do this, have your function return the result of `render_template`, to which
you'll pass your URL list.
At the end of your function, try:
return render_template("articoli.html", lista=lista)
You'll need to have a corresponding `articoli.html` template saved in a
_templates_ folder, which in turn should be in the same location as your
Python script. Within the HTML of the template, you'll need to designate where
Flask/Jinja puts the URL list you've provided, which in this case would be
`{{ lista }}`.
This is outlined in the [Flask
documentation](http://flask.pocoo.org/docs/0.10/quickstart/).
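Putting the pieces together, a sketch of the complete route (same scraping logic as the question, plus the return):
from flask import Flask, render_template
import urllib2
from bs4 import BeautifulSoup

app = Flask(__name__)

@app.route('/articoli', methods=['GET'])
def lista_articoli():
    lista = []
    html = urllib2.urlopen("http://annotaria.web.cs.unibo.it/documents/").read()
    soup = BeautifulSoup(html)
    for row in soup.findAll('a'):
        if row.parent.name == 'td' and row.get('href') and '.html' in row['href']:
            lista.append({'url': row['href']})
    return render_template('articoli.html', lista=lista)
The `templates/articoli.html` file then only needs a loop such as `{% for articolo in lista %} ... {{ articolo['url'] }} ... {% endfor %}` to print each URL (or wrap it in an `<a>` tag).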
* * *
ImportError: No shared library could be loaded, make sure that librtmp is installed
Question: I am using Windows 8, and trying to work with python-librtmp. I have followed
the steps to install librtmp from here: `http://pythonhosted.org/python-
librtmp/`. For me, the two pip install lines worked successfully when run in
Windows Powershell. After installation, it says the libraries are in
`c:\python27\lib\site-packages`.
Now, I have opened a Python IDE (IDLE), and typed in `import librtmp`. This is
giving me the following error:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
import librtmp
File "C:\Python27\lib\site-packages\librtmp\__init__.py", line 14, in <module>
from librtmp_ffi.binding import librtmp
File "C:\Python27\lib\site-packages\librtmp_ffi\binding.py", line 13, in <module>
raise ImportError("No shared library could be loaded, "
ImportError: No shared library could be loaded, make sure that librtmp is installed.
The binding.py file:
import librtmp_config

from .ffi import ffi
from .verifier import verifier

for path in librtmp_config.library_paths:
    try:
        librtmp = ffi.dlopen(path)
        break
    except OSError:
        pass
else:
    raise ImportError("No shared library could be loaded, "
                      "make sure that librtmp is installed.")

librtmp = verifier.load_library()
The `__init__.py` file in the librtmp_config folder:
"""Runtime configuration of python-librtmp.
This module provides access to variables used by this library
and makes it possible to customize some behaviour before :mod:`librtmp`
is imported.
"""
__all__ = ["library_paths"]
#: This is a list of filenames that python-librtmp
#: will attempt to dynamically load `librtmp` from.
library_paths = ["librtmp.so", "librtmp.so.0", "librtmp.dll", "librtmp.so.1", "librtmp.dylib"]
I am pretty new to Python, and this is the first time I am using Python on
Windows. When I installed librtmp, it said the installation was successful. I
followed the steps in the above link exactly. I cannot understand, then, why it
is saying `make sure that librtmp is installed`.
Is it a path issue or an installation issue? I searched for a solution online,
but nothing helped.
Do I need to install librtmp separately? After some reading I found that librtmp
is part of rtmpdump. I have downloaded the rtmpdump zip file for Windows, but
I don't know how to install it. The README says to run "make SYS=mingw", but the
zip folder has no makefile!
There is, however, one subfolder in the rtmpdump folder that contains
librtmp.dll. As shown above, the `__init__.py` lists librtmp.dll in its
library paths. Does this mean I have to refer to this .dll in the `__init__.py`?
I don't know how to do that.
Can you please help?
Answer: Resolved!!! I copied the `librtmp.dll` file from the rtmpdump package into
`C:\Python27\DLLs`. From the content of `binding.py` and the `__init__.py` file I
figured that python was unable to locate the dll file. But I still don't know why
it could locate the file in the DLLs folder. I just tried it randomly, and it
worked!
If anyone can explain the logic, it would be great!
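As for the logic: `binding.py` simply tries `ffi.dlopen()` on every name in `librtmp_config.library_paths`, and a bare `"librtmp.dll"` can only be resolved if it sits in a directory the Windows DLL loader searches (which, judging by your result, includes `C:\Python27\DLLs` on your setup). An alternative, based on the `librtmp_config` module shown above (the rtmpdump path here is hypothetical), is to hand it the full path before importing:
import librtmp_config
librtmp_config.library_paths.insert(0, r"C:\path\to\rtmpdump\librtmp\librtmp.dll")
import librtmp   # ffi.dlopen() now receives the full path to the DLL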
* * *
Can't work out why I'm getting NameError: name 'thread' is not defined
Question: I've downloaded this .py file and I'm trying to get it to run. However,
every time I do I get the following traceback and I'm at a loss to work
out what is causing it. I'm running Python 3.4.1 if that's any help, but as far
as I can see it should all work. The error I get is:
C:\Users\******\Documents\****\>wkreator.py -d .\PsycOWPA -o .\PsycOWPA.txt
Traceback (most recent call last):
  File "C:\Users\******\Documents\****\>", line 273, in <module>
    main = WordlistKreator()
  File "C:\Users\******\Documents\****\>", line 21, in __init__
    self.lock = thread.allocate_lock()
NameError: name 'thread' is not defined
As far as I can see though, this error shouldn't be happening. I am new to
Python, so forgive me if the answer is something stupid. Thank you!
checkinterval = 1000 ### CHANGE THIS IF YOU WANT MORE PRECISION INSTEAD OF SPEED!!! ###
import fnmatch
import sys
import time
import os
from threading import Thread, Lock
import math
class WordlistKreator(object):
"""
This is a little module that can merge or split wordlists. You can import it and set the
runningvar, and call run. To launch it from a shell, instantiate the class, call convert
to setup the runningvars dict with the cmdline args, then run. You need to import os,
sys, thread and time to use it.
"""
def __init__(self):
self.RunningVars = {'Mode':'merge', 'Dir':'', 'InWordlists':[], 'OutputWordlist':'', 'Suffix':0,
'WPAMode':0, 'Size':0}
self.Done = 0
self.lock = thread.allocate_lock()
self.OnWin = 0
def convert(self):
if fnmatch.fnmatch(sys.platform, '*win*'):
self.OnWin = 1
self.stampcomm('Processing cmdline arguments...')
actual = 0
for args in sys.argv:
actual = actual+1
if args == '-m':
self.RunningVars['Mode'] = sys.argv[actual]
elif args == '-d':
self.RunningVars['Dir'] = sys.argv[actual]
elif args == '-i':
if self.RunningVars['Mode'] == 'merge':
for wordlist in sys.argv[actual].split(':'):
self.RunningVars['InWordlists'].append(wordlist)
if self.RunningVars['Mode'] == 'split':
self.RunningVars['InWordlists'].append(sys.argv[actual])
elif args == '-o':
self.RunningVars['OutputWordlist'] = sys.argv[actual]
elif args == '-s':
self.RunningVars['Suffix'] = int(sys.argv[actual])
elif args == '-z':
self.RunningVars['Size'] = (int(sys.argv[actual])*1024)*1024
elif args == '-w':
self.RunningVars['WPAMode'] = 1
if self.RunningVars['InWordlists'] == [] and self.RunningVars['Mode'] == 'merge':
for wordlist in os.listdir(self.RunningVars['Dir']):
self.RunningVars['InWordlists'].append(os.path.split(wordlist)[1])
def run(self):
self.stampcomm('Starting the %s operations...'% self.RunningVars['Mode'])
if self.RunningVars['Mode'] == 'merge':
self.outlist = open(self.RunningVars['OutputWordlist'], 'a+')
thread.start_new(self.merge, ())
self.mergestats()
self.outlist.close()
self.stampcomm('Job completed!!!')
exit(0)
elif self.RunningVars['Mode'] == 'split':
self.mainlist = open(self.RunningVars['InWordlists'][0], 'r')
thread.start_new(self.split, ())
self.splitstats()
self.mainlist.close()
self.stampcomm('Job completed!!!')
exit(0)
else:
self.stampcomm('An error have occured, check your arguments and restart!!!')
exit(0)
def merge(self):
while True:
try:
self.lock.acquire(1)
self.actuallist = self.RunningVars['InWordlists'].pop()
if self.OnWin == 1:
tomerge = open(self.RunningVars['Dir'] + '\\' + self.actuallist, 'r')
else:
tomerge = open(self.RunningVars['Dir'] + '/' + self.actuallist, 'r')
self.lock.release()
while True:
try:
if self.RunningVars['WPAMode'] == 1:
word = tomerge.next()
if self.OnWin == 1:
if len(word) >= 10 and len(word) <= 65: # Add \r\n to the chars count;
self.outlist.write(word)
else:
if len(word) >= 9 and len(word) <= 64: # Add \n to the chars count;
self.outlist.write(word)
else:
self.outlist.write(tomerge.next())
except StopIteration:
break
tomerge.close()
except IndexError:
break
self.Done = 1
def split(self):
outpath, outname = os.path.split(self.RunningVars['OutputWordlist'])
extention = outname[-4:]
outname = outname[:-4]
if self.OnWin == 1:
outpath = outpath + '\\'
else:
outpath = outpath + '/'
requiredlist = int(math.ceil(float(os.path.getsize(self.RunningVars['InWordlists'][0])) / \
float(self.RunningVars['Size'])))
self.requiredliststat = requiredlist
list2work = []
if self.RunningVars['Suffix'] == 0:
try:
for listnum in range(requiredlist):
self.listnumstat = listnum
actuallistname = outpath + outname + str(listnum) + extention
self.actuallistnamestat = os.path.split(actuallistname)[1]
actualout = open(actuallistname, 'w')
loopcount = 0
while True:
if loopcount == checkinterval:
if os.path.getsize(actuallistname) >= self.RunningVars['Size']:
break
loopcount = 0
actualout.write(self.mainlist.next())
loopcount = loopcount + 1
except StopIteration:
actualout.close()
self.Done = 1
else:
try:
for listnum in range(requiredlist):
self.listnumstat = listnum
actuallistname = outpath + outname + str(listnum).zfill(self.RunningVars['Suffix']) + extention
self.actuallistnamestat = os.path.split(actuallistname)[1]
actualout = open(actuallistname, 'w')
loopcount = 0
while True:
if loopcount == 10000:
if os.path.getsize(actuallistname) >= self.RunningVars['Size']:
break
loopcount = 0
actualout.write(self.mainlist.next())
loopcount = loopcount + 1
except StopIteration:
actualout.close()
self.Done = 1
def stampcomm(self, message):
if self.OnWin == 1:
print('-=[' + time.asctime()[4:-8] + ']=-' + message)
else:
print('╟─' + time.asctime()[4:-8] + '─╫─' + message)
def mergestats(self):
Counter = 0
while self.Done == 0:
if Counter == 300:
self.lock.acquire(1)
self.stampcomm('Only %d more wordlist(s) to process... Actually working on %s' \
% (len(self.RunningVars['InWordlists']), self.actuallist))
self.lock.release()
Counter = 0
else:
time.sleep(1)
Counter = Counter + 1
def splitstats(self):
Counter = 0
while self.Done == 0:
if Counter == 300:
self.lock.acquire(1)
self.stampcomm('Currently %d list done out of %d... Actually working on %s' \
% (self.listnumstat, self.requiredliststat, self.actuallistnamestat))
self.lock.release()
Counter = 0
else:
time.sleep(1)
Counter = Counter + 1
if __name__ == '__main__':
if fnmatch.fnmatch(sys.platform, '*win*'):
usage = r"""
--== wkreator ==--
Wordlist Kreator(wkreator) Copyright (C) 2011 Mikael Lavoie
This program comes with ABSOLUTELY NO WARRANTY; This is free
software, and you are welcome to redistribute it under certain
conditions; Read GNU_GPL-3.0.pdf in the program directory for
more informations.
This program take an input dir, or multiple file seperated by :
and make one big file of them. It can also be used to split one
big wordlist into smaller chunks to use them one by one, during
a period of time, instead on crunching it one shot.
Usage: wkreator -m The mode of operation, that can be <merge>
or <split>.
-d The input directory. If used alone, all
.txt file in that directory will be used as
input files. Else you must provide all
wordlist name seperated by <:> using the -i
switch. To split use only -i.
-i The input wordlist(s) separated by : if
more than one. Ex: word1.txt:word2.txt:...
To split, enter full path to main list.
-o The output path and file name. If you enter
a path to an existing file, the inputs
wordlists will be appended to it.
-s The desired suffix number lenght, if you
desire zero padded numbers as suffix for
splitted wordlists.
-z The size in MB of the output wordlists in
split mode.
-w This toggle the WPA mode on; All < 8 and
> 63 chars words will be discarded.
--== By Mikael Lavoie in 2011 ==--
"""
else:
usage = r"""
╔════════════╗
┌─────────────────────────╢ wkreator ╟───────────────────────────┐
│ ╚════════════╝ │
│ Wordlist Kreator(wkreator) Copyright (C) 2011 Mikael Lavoie │
│ │
│ This program comes with ABSOLUTELY NO WARRANTY; This is free │
│ software, and you are welcome to redistribute it under certain │
│ conditions; Read GNU_GPL-3.0.pdf in the program directory for │
│ more informations. │
│ │
│ This program take an input dir, or multiple file seperated by : │
│ and make one big file of them. It can also be used to split one │
│ big wordlist into smaller chunks to use them one by one, during │
│ a period of time, instead on crunching it one shot. │
│ │
│ Usage: wkreator -m The mode of operation, that can be <merge> │
│ or <split>. │
│ -d The input directory. If used alone, all │
│ .txt file in that directory will be used as │
│ input files. Else you must provide all │
│ wordlist name seperated by <:> using the -i │
│ switch. To split use only -i. │
│ -i The input wordlist(s) separated by : if │
│ more than one. Ex: word1.txt:word2.txt:... │
│ To split, enter full path to main list. │
│ -o The output path and file name. If you enter │
│ a path to an existing file, the inputs │
│ wordlists will be appended to it. │
│ -s The desired suffix number lenght, if you │
│ desire zero padded numbers as suffix for │
│ splitted wordlists. │
│ -z The size in MB of the output wordlists in │
│ split mode. │
│ -w This toggle the WPA mode on; All < 8 and │
│ > 63 chars words will be discarded. │
│ ╔══════════════════════════╗ │
└───────────────────╢ By Mikael Lavoie in 2011 ╟───────────────────┘
╚══════════════════════════╝
"""
###### The Shell Args Interpreter ######
if len(sys.argv) > 1 and sys.argv[1] == '--help' or len(sys.argv) == 1 or sys.argv[1] == '-h':
print(usage)
exit(0)
main = WordlistKreator()
main.convert()
main.run()
Answer: Try adding `import _thread` as well. Currently you're importing a few classes
from the [`threading`
module](https://docs.python.org/3/library/threading.html), which is different
from the [`thread`
module](https://docs.python.org/3/library/_thread.html#module-_thread). You'll
also want to change the call to:
self.lock = _thread.allocate_lock()
Here's [an example in the Python
docs](https://docs.python.org/3/library/_thread.html#_thread.lock.locked).
As the Python docs recommend, it's a good idea to prefer the `threading`
module as it's higher level and won't break if you try to run the code in
Python 2. I would recommend looking into the [`Lock`
class](https://docs.python.org/3/library/threading.html#lock-objects).
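A minimal sketch of that approach (the script already imports `Thread` and `Lock` from `threading` at the top):
from threading import Thread, Lock

lock = Lock()                 # replaces thread.allocate_lock()

def work():
    with lock:
        print('working')

t = Thread(target=work)       # replaces thread.start_new(work, ())
t.start()
t.join()
In the script that would mean `self.lock = Lock()` in `__init__` and `Thread(target=self.merge).start()` / `Thread(target=self.split).start()` in `run()`. Note also that the script calls `.next()` on file objects, which only exists in Python 2; under Python 3.4 those calls would need to become `next(tomerge)` / `next(self.mainlist)`.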
* * *
How to Crawl Multiple Websites to find common Words (BeautifulSoup,Requests,Python3)
Question: I'm wondering how to crawl multiple different websites using beautiful
soup/requests without having to repeat my code over and over.
Here is my code right now:
import requests
from bs4 import BeautifulSoup
from collections import Counter
import pandas as pd
Website1 = requests.get("http://www.nerdwallet.com/the-best-credit-cards")
soup = BeautifulSoup(Website1.content)
texts = soup.findAll(text=True)
a = Counter([x.lower() for y in texts for x in y.split()])
b = (a.most_common())
makeaframe = pd.DataFrame(b)
makeaframe.columns = ['Words', 'Frequency']
print(makeaframe)
**What I am trying to do** is ideally crawl 5 different websites, find all of
the individual words on these websites, find the frequency of each word on
each website, ADD all the frequencies together for each particular word, then
combine all of this data into one dataframe that can be exported using Pandas.
Hopefully the output would look like this
Word Frequency
the 200
man 300
is 400
tired 300
My code can only do this for ONE website at a time right now and I'm trying to
avoid repeating my code.
Now, I can do this manually by repeating my code over and over and crawling
each individual website and then concatenating my results for each of these
dataframes together but that seems very unpythonic. I was wondering if anyone
had a faster way or any advice? Thank you!
Answer: Make a function:
import requests
from bs4 import BeautifulSoup
from collections import Counter
import pandas as pd

cnt = Counter()

def GetData(url):
    Website1 = requests.get(url)
    soup = BeautifulSoup(Website1.content)
    texts = soup.findAll(text=True)
    a = Counter([x.lower() for y in texts for x in y.split()])
    cnt.update(a)   # add this page's counts to the running total

websites = ['http://www.nerdwallet.com/the-best-credit-cards', 'http://www.other.com']
for url in websites:
    GetData(url)

makeaframe = pd.DataFrame(cnt.most_common())
makeaframe.columns = ['Words', 'Frequency']
print makeaframe
* * *
Uploading file with AngularJS fails
Question: Below are the snippets of my code regarding file upload.
Here is my HTML code where I will choose and upload the file:
<form ng-click="addImportFile()" enctype="multipart/form-data">
<label for="importfile">Import Time Events File:</label><br><br>
<label for="select_import_file">SELECT FILE:</label><br>
<input id="import_file" type="file" class="file btn btn-default" ng-disabled="CutOffListTemp.id== Null" data-show-preview="false">
<input class="btn btn-primary" type="submit" name="submit" value="Upload" ng-disabled="CutOffListTemp.id== Null"/><br/><br/>
</form>
This is my controller, which links the HTML and my Python file:
angular.module('hrisWebappApp').controller('ImportPayrollCtrl', function ($scope, $state, $stateParams, $http, ngTableParams, $modal, $filter) {
    $scope.addImportFile = function() {
        $http.post('http://127.0.0.1:5000/api/v1.0/upload_file/' + $scope.CutOffListTemp.id, {})
            .success(function(data, status, headers, config) {
                console.log(data);
                if (data.success) {
                    console.log('import success!');
                } else {
                    console.log('importing of file failed');
                }
            })
            .error(function(data, status, headers, config) {});
    };
This is my python file:
@api.route('/upload_file/<int:id>', methods=['GET','POST'])
@cross_origin(headers=['Content-Type'])
def upload_file(id):
    print "hello"
    try:
        os.stat('UPLOAD_FOLDER')
    except:
        os.mkdir('UPLOAD_FOLDER')
    file = request.files['file']
    print 'filename: ' + file.filename
    if file and allowed_file(file.filename):
        print 'allowing file'
        filename = secure_filename(file.filename)
        path = (os.path.join(current_app.config['UPLOAD_FOLDER'], filename))
        file.save(path)  # The end of the line which saves the file you uploaded.
        return redirect(url_for('uploaded_file',
                                filename=filename))
    return '''
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <p>opsss it seems you uploaded an invalid filename please use .csv only</p>
    <form action="" method=post enctype=multipart/form-data>
      <p><input type=file name=file>
         <input type=submit value=Upload>
    </form>
    '''
And the console gives me this result even when I select a file in the correct
format:
<!doctype html>
<title>Upload new File</title>
<h1>Upload new File</h1>
<p>opsss it seems you uploaded an invalid filename please use .csv only</p>
<form action="" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form>
This is not returning to my HTML and I cannot upload the file.
Answer: The first thing is about the post request. Without ng-click="addImportFile()",
the browser will usually take care of serializing form data and sending it to
the server. So if you try:
<form method="put" enctype="multipart/form-data" action="http://127.0.0.1:5000/api/v1.0/upload_file">
<label for="importfile">Import Time Events File:</label><br><br>
<label for="select_import_file">SELECT FILE:</label><br>
<input id="import_file" type="file" name="file" class="file btn btn-default" ng-disabled="CutOffListTemp.id== Null" data-show-preview="false">
<input class="btn btn-primary" type="submit" name="submit" value="Upload" ng-disabled="CutOffListTemp.id== Null"/><br/><br/>
</form>
and then, in Python, make your request URL independent of
`$scope.CutOffListTemp.id`:
@api.route('/upload_file', methods=['GET','POST'])
it will probably work.
Alternatively, if you want to use your custom function to send the POST request,
the browser will no longer take care of the serialization for you; you will
need to do it yourself.
In Angular, the API is: $http.post('/someUrl', data).success(successCallback);
If you pass "{}" for the data parameter, which means empty, the server will not
find the data named "file" (file = request.files['file']), and you will see a
Bad Request.
To fix it, we need to use FormData to build the file upload, which requires a
browser that supports HTML5:
$scope.addImportFile = function() {
    var f = document.getElementById('import_file').files[0];
    var fd = new FormData();
    fd.append("file", f);
    $http.post('http://127.0.0.1:5000/api/v1.0/upload_file/' + $scope.CutOffListTemp.id,
               fd,
               {headers: {'Content-Type': undefined}})
        .success......
Other than using the native JavaScript code above, there are plenty of great
Angular file-upload libraries that can make file uploads much easier in
Angular; you may want to have a look at one of them (reference: [File
Upload using angularjs](http://stackoverflow.com/questions/18571001/file-
upload-using-angularjs)):
* <https://github.com/nervgh/angular-file-upload>
* <https://github.com/leon/angular-upload>
* ......
* * *
Nested Dictionary Python with fantasy football data
Question: I'm trying to run my own statistics model to try and get an edge in my fantasy
football league, and I'm having troubles creating my data structure of
dictionaries. My data looks like this.
Player Team G Att Yard TD Target Rec Yard TD FFPts FFpts/G
Adrian Peterson MIN 1 18 93 2 4 4 18 1 29.1 29.1
Reggie Bush DET 1 21 90 0 8 4 101 1 25.1 25.1
LeSean McCoy PHI 1 31 184 1 1 1 5 0 24.9 24.9
Joique Bell DET 1 6 25 2 6 5 67 0 21.2 21.2
Jamaal Charles KC 1 16 77 1 6 3 23 0 16.0 16.0
Shane Vereen NE 1 14 101 0 10 7 58 0 15.9 15.9
and I have files for each week of the season.
{'Adrian Peterson':
    {'week1.txt':
        {'Team': 'MIN', 'G': '1', 'Att': '18', ...},
     'week2.txt':
        {'Team': 'MIN', 'G': '1', 'Att': '20', ...}
    }
}
However, when I run my code it overwrites each week, and I can't figure out
why.
Code:
import os

def makeDict():
    directory = os.listdir('.')
    print directory
    playerDict = {}
    for item in directory:
        if item != 'bayes.py' and item != '.DS_Store':
            f = open(item)
            headers = f.readline().split('\t')
            for line in f:
                stats = line.replace('\n', '').split('\t')
                statDict = {}
                weekDict = {}
                for i, stat in enumerate(stats):
                    if i == 0:
                        player = stat
                    else:
                        statDict[headers[i]] = stat
                    if i == len(stats)-1:
                        weekDict[item] = statDict
                        playerDict[player] = weekDict
    return playerDict

print makeDict()
It ends up that the dictionary only holds the information from the last file.
What am I doing wrong?
Answer: The reason is because the way you have structured your output dictionary is
wrong. If `Adrian Peterson` 's stats are going to be in several files, and you
wish to aggregate them under one key (i.e., his name) then
`playerDict[player_name]` should be a list.
So change your code to do the following:
1. `from collections import defaultdict`
2. `playerDict = defaultdict(list)`
3. `playerDict[player].append(weekDict)` instead of `playerDict[player] = weekDict`
I duplicated `Adrian Peterson`'s data across different files, made the above
changes to your code and this is what I see:
{
    'AdrianPeterson': [
        {
            'data': {
                'FFPts': '29.1',
                'Yard': '18',
                'Target': '4',
                'G': '1',
                'Att': '188',
                'Team': 'MIN',
                'Rec': '4',
                'TD': '1',
                'FFpts/G\n': '29.1'
            }
        },
        {
            'data2': {
                'FFPts': '29.1',
                'Yard': '18',
                'Target': '4',
                'G': '1',
                'Att': '188',
                'Team': 'MIN',
                'Rec': '4',
                'TD': '1',
                'FFpts/G\n': '29.1'
            }
        }
    ]
}
Here `data` and `data2` were the name of my files. In your case, it would be
`week1.txt` and `week2.txt`
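For reference, a sketch of `makeDict` with those three changes applied (same parsing logic as the question, just using a `defaultdict` and `append`):
import os
from collections import defaultdict

def makeDict():
    playerDict = defaultdict(list)
    for item in os.listdir('.'):
        if item in ('bayes.py', '.DS_Store'):
            continue
        with open(item) as f:
            headers = f.readline().split('\t')
            for line in f:
                stats = line.replace('\n', '').split('\t')
                player = stats[0]
                statDict = {headers[i]: stat for i, stat in enumerate(stats) if i > 0}
                playerDict[player].append({item: statDict})
    return playerDict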
* * *
UnicodeDecodeError with word stemming in Python
Question: I'm so stumped.
I have a list of a couple of thousand words
x = ['company', 'arriving', 'wednesday', 'and', 'then', 'beach', 'how', 'are', 'you', 'any', 'warmer', 'there', 'enjoy', 'your', 'day', 'follow', 'back', 'please', 'everyone', 'go', 'watch', 's', 'new', 'video', 'you', 'know', 'the', 'deal', 'make', 'sure', 'to', 'subscribe', 'and', 'like', '<http>', 'you', 'said', 'next', 'week', 'you', 'will', 'be', 'the', 'one', 'picking', 'me', 'up', 'lol', 'hindi', 'na', 'tl', 'huehue', 'that', 'works', 'you', 'said', 'everyone', 'of', 'us', 'my', 'little', 'cousin', 'keeps', 'asking', 'if', 'i', 'wanna', 'play', 'and', "i'm", 'like', 'yes', 'but', 'with', 'my', 'pals', 'not', 'you', "you're", 'welcome', 'pas', 'quand', 'tu', 'es', 'vers', '<num>', 'i', 'never', 'get', 'good', 'mornng', 'texts', 'sad', 'sad', 'moment', 'i', 'think', 'ima', 'go', 'get', 'a', 'glass', 'of', 'milk', 'ahah', 'for', 'the', 'first', 'time', 'i', 'actually', 'know', 'what', 'their', 'doing', 'd', 'thank', 'you', 'happy', 'birthday', 'hope', "you're"...........]
Now, I've confirmed the type of each element in this list to be a string:
types = []
for word in x:
    types.append(type(word))
print set(types)
>>>set([<type 'str'>])
Now, I attempt to stem each word using NLTK's porter stemmer
import nltk
porter = nltk.PorterStemmer()
stemmed_x = [porter.stem(word) for word in x]
And I get this error that is clearly related to the stemming package and
unicode somehow:
File "/Library/Python/2.7/site-packages/nltk-3.0.0b2-py2.7.egg/nltk/stem/porter.py", line 633, in stem
stem = self.stem_word(word.lower(), 0, len(word) - 1)
File "/Library/Python/2.7/site-packages/nltk-3.0.0b2-py2.7.egg/nltk/stem/porter.py", line 591, in stem_word
word = self._step1ab(word)
File "/Library/Python/2.7/site-packages/nltk-3.0.0b2-py2.7.egg/nltk/stem/porter.py", line 289, in _step1ab
if word.endswith("ied"):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 12: ordinal not in range(128)
I have tried everything: using `codecs.open`, trying to explicitly encode each
word as `utf8` - it still produces the same error.
Please advise.
EDIT:
I should mention that this code worked perfect on my PC running Ubuntu. I
recently got a macbook pro and I'm getting this error. I've checked the
terminal settings on my mac and it is set to utf8 encoding.
EDIT 2:
Interesting, with this piece of code, I have isolated the problem words:
for w in x:
    try:
        porter.stem(w)
    except UnicodeDecodeError:
        print w
#sagittarius”
#instadane…
#bleedblue”
#pr챕cieux
#على_شرفة_الماضي
#exploringsf…
#fishing…
#sindhubestfriend…
#الإستعداد_لإنهيار_ال_سعود
#jaredpreslar…
#femalepains”
#gobillings”
#juicing…
#instamood…
It seems that what they all have in common is extra punctuation at the end of
the word, except for the word #pr챕cieux.
Answer: You probably have a multi-byte UTF-8 character lurking around, as `0xe2` is a
possible first byte of a three-byte sequence (a [16-bit codepoint encoded as
UTF-8](http://en.wikipedia.org/wiki/UTF-8#Description)). Since your program assumes
ASCII chars, with valid encoded values from `0x00` to `0x7F`, this value is
rejected.
You might be able to identify the "bad" values with a simple comprehension, then
fix them by hand (as I assume from your data that you only want to deal with ASCII
chars):
print [value for value in x if '\xe2' in value]
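Once the offending byte strings are identified, one way around the error (a sketch; it assumes the words are UTF-8-encoded Python 2 byte strings, which the `0xe2` byte suggests) is to decode them to unicode before handing them to the stemmer:
# -*- coding: utf-8 -*-
import nltk

porter = nltk.PorterStemmer()
words = ['company', 'arriving', '#sagittarius\xe2\x80\x9d']   # last one ends in a UTF-8 curly quote

def stem_word(w):
    if isinstance(w, str):                  # Python 2 byte string
        w = w.decode('utf-8', 'ignore')     # or 'replace', or strip the punctuation instead
    return porter.stem(w)

print([stem_word(w) for w in words])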
* * *
Passing a polymorphic object from C++ to a python function
Question: I have a C++ library with 2 classes defined in it: **t_foo_base** and
**t_foo**. t_foo is derived from t_foo_base. Both of them implement a virtual
function **text**. This function returns a string with the name of the current
class. I use boost.python to generate a wrapper for this library and for this
classes.
I import this library in a python script. In this script I implemented a
function. This function takes one parameter and call the function "text()" on
it.
Now i import this python script in a C++ application. I use boost.python
again. I get the function "test_function" from the python script and call it
this way:
t_foo_base foo_base;
test_function( foo_base );
t_foo foo;
test_function( foo );
t_foo_base* foo_base_tpr = new t_foo_base;
test_function( *foo_base_tpr );
t_foo_base* foo_ptr = new t_foo;
test_function( *foo_ptr );
The output is:
> t_foo_base
>
> t_foo
>
> t_foo_base
>
> t_foo_base
**I would expect the 4th line of the output to be "t_foo", not "t_foo_base".**
It seems that passing a derived object via its base class pointer "cuts away"
all the features of the derived object. Is there a way to fix this problem?
I am using Visual Studio 2013, Python 3.4 and Boost 1.56.0.
This is code from the C++ library:
-- file t_foo.h --

class __declspec( dllexport ) t_foo_base
{
public:
    t_foo_base(){};
    virtual ~t_foo_base(){}
    virtual std::string text( void ) const { return( "t_foo_base" ); };
};

class __declspec( dllexport ) t_foo: public t_foo_base
{
public:
    t_foo(){};
    virtual std::string text( void ) const override { return( "t_foo" ); };
};

-- file t_foo.cpp --

#include "t_foo.h"
#include <boost/python.hpp>

using namespace boost::python;

BOOST_PYTHON_MODULE( ex_three_lib )
{
    class_< t_foo_base >( "t_foo_base" )
        .def( "text", &t_foo_base::text );
    class_< t_foo, bases< t_foo_base > >( "t_simple_callback" )
        .def( "text", &t_foo::text );
}
This is the python script:
import ex_three_lib

def test_function( item ):
    print( item.text() )

print( "no function call here" )
This is the c++ application:
void
init_module_ex_three_lib
(
void
);
boost::python::object
import_python_object( const std::string& p_name, const std::string& p_path, boost::python::api::object& p_global )
{
using namespace boost::python;
try
{
dict locals;
locals[ "modulename" ] = p_name;
locals[ "path" ] = p_path;
exec
(
"import imp\n"
"newmodule = imp.load_module( modulename, open( path ), path,( 'py', 'U', imp.PY_SOURCE ) )\n",
p_global,
locals
);
return locals[ "newmodule" ];
}
catch( boost::python::error_already_set const & )
{
if( PyErr_ExceptionMatches( PyExc_ZeroDivisionError ) )
{
assert( false );
}
else
{
PyErr_Print();
}
}
}
int
main
(
void
)
{
Py_Initialize();
init_module_ex_three_lib();
try
{
boost::python::object main_module = boost::python::import( "__main__" );
boost::python::object main_namespace = main_module.attr( "__dict__" );
{
boost::python::object script = import_python_object
(
"ex_one_script",
"ex_three_script.py",
main_namespace
);
boost::python::object test_function = script.attr( "test_function" );
t_foo_base foo_base;
test_function( foo_base );
t_foo foo;
test_function( foo );
t_foo_base* foo_base_tpr = new t_foo_base;
test_function( *foo_base_tpr );
t_foo_base* foo_ptr = new t_foo;
test_function( *foo_ptr );
}
}
catch( boost::python::error_already_set const & )
{
if( PyErr_ExceptionMatches( PyExc_ZeroDivisionError ) )
{
assert( false );
}
else
{
PyErr_Print();
}
}
}
Answer: You should use `boost::ref(...)` here. This will basically convert any
value/pointer to its reference. Also see [Calling Python Functions and
Methods](http://www.boost.org/doc/libs/1_56_0/libs/python/doc/v2/callbacks.html)
t_foo_base foo_base;
test_function( boost::ref(foo_base) );
t_foo foo;
test_function( boost::ref(foo) );
t_foo_base* foo_base_tpr = new t_foo_base;
test_function( boost::ref(*foo_base_tpr) );
t_foo_base* foo_ptr = new t_foo;
test_function( boost::ref(*foo_ptr) );
->
t_foo_base
t_foo
t_foo_base
t_foo
* * *
Python groupby behaves strangely
Question:
from itertools import groupby

source = [ [1,2], [1,3], [2, 1] ]
gby = groupby(source, lambda x: x[0])

print 'as list'
for key, vals in list(gby):
    print 'key {}'.format(key)
    for val in vals:
        print ' val {}'.format(val)
print

print 'as iter'
gby = groupby(source, lambda x: x[0])
for key, vals in gby:
    print 'key {}'.format(key)
    for val in vals:
        print ' val {}'.format(val)
Results:
as list
key 1
key 2
val [2, 1]
as iter
key 1
val [1, 2]
val [1, 3]
key 2
val [2, 1]
What is wrong with `list(gby)`? I would expect `list` to be a pure function; how
does it manage to corrupt the internal state?
Answer: The
[documentation](https://docs.python.org/2/library/itertools.html#itertools.groupby)
makes a note about this:
> The returned group is itself an iterator that shares the underlying iterable
> with groupby(). Because the source is shared, when the groupby() object is
> advanced, the previous group is no longer visible. So, if that data is
> needed later, it should be stored as a list:
>
>
>     groups = []
>     uniquekeys = []
>     data = sorted(data, key=keyfunc)
>     for k, g in groupby(data, keyfunc):
>         groups.append(list(g))    # Store group iterator as a list
>         uniquekeys.append(k)
>
You're exhausting the `groupby` object (by turning it into a list) prior to
trying to iterate over the returned group iterators, so all the groups other
than the last group are lost.
The reason for this is easier to figure out by looking at the Python
implementation of the function:
class groupby(object):
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = object()

    def __iter__(self):
        return self

    def next(self):
        while self.currkey == self.tgtkey:
            self.currvalue = next(self.it)
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey))

    def _grouper(self, tgtkey):  # This is the "group" iterator
        while self.currkey == tgtkey:  # self.currkey != tgtkey if you advance groupby and then try to use this object.
            yield self.currvalue
            self.currvalue = next(self.it)
            self.currkey = self.keyfunc(self.currvalue)
Calling `next(groupby)` advances the internal pointer to the underlying
iterable (`self.currvalue`) to the next key, then returns the current key
(`self.currkey`) and the `_grouper` iterator. `_grouper` takes the current key
as an argument (called `tgtkey`), and will yield values (and recalculate
`self.currkey`) until `self.currkey` differs from `tgtkey`, meaning it has
returned all the values corresponding to the current key. So, if you
advance `groupby` prior to using a `_grouper` object, `self.currkey` will
_never_ be equal to `tgtkey`, so the `_grouper` iterator will return nothing.
If for some reason you _do_ need to store the `groupby` results in a list, you
have to do it like this:
gby_list = []
for key, vals in gby:
    gby_list.append((key, list(vals)))
Or:
gby_list = [(key, list(vals)) for key, vals in gby]
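With the `source` list from the question, either form yields (assuming `gby` is a freshly created `groupby` object that has not been advanced yet):
[(1, [[1, 2], [1, 3]]), (2, [[2, 1]])]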
* * *
error creating virtualenv in existing folder
Question:
$ virtualenv virtenv
Overwriting virtenv/lib/python2.7/site.py with new content
New python executable in virtenv/bin/python
Overwriting virtenv/lib/python2.7/distutils/__init__.py with new content
Installing setuptools, pip...
Complete output from command /virtenv/bin/python -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 245, in _get_queued_page
page = self._get_page(location, req)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 335, in _get_page
return HTMLPage.get_page(link, req, cache=self.cache)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 452, in get_page
resp = urlopen(url)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/download.py", line 85, in __call__
response = urllib2.urlopen(self.get_request(url))
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 396, in open
protocol = req.get_type()
File "/usr/lib/python2.7/urllib2.py", line 258, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: /usr/lib/python2.7/dist-packages
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 245, in _get_queued_page
page = self._get_page(location, req)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 335, in _get_page
return HTMLPage.get_page(link, req, cache=self.cache)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 452, in get_page
resp = urlopen(url)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/download.py", line 85, in __call__
response = urllib2.urlopen(self.get_request(url))
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 396, in open
protocol = req.get_type()
File "/usr/lib/python2.7/urllib2.py", line 258, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: /usr/share/python-virtualenv/
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 245, in _get_queued_page
page = self._get_page(location, req)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 335, in _get_page
return HTMLPage.get_page(link, req, cache=self.cache)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/index.py", line 452, in get_page
resp = urlopen(url)
File "/virtenv/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/download.py", line 85, in __call__
response = urllib2.urlopen(self.get_request(url))
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 396, in open
protocol = req.get_type()
File "/usr/lib/python2.7/urllib2.py", line 258, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: .
Ignoring indexes: http://pypi.python.org/simple/
Downloading/unpacking distribute
Could not find any downloads that satisfy the requirement distribute
No distributions at all found for distribute
Storing complete log in /home/collin/.pip/pip.log
----------------------------------------
...Installing setuptools, pip...done.
Traceback (most recent call last):
File "/usr/bin/virtualenv", line 3, in <module>
virtualenv.main()
File "/usr/lib/python2.7/dist-packages/virtualenv.py", line 825, in main
symlink=options.symlink)
File "/usr/lib/python2.7/dist-packages/virtualenv.py", line 993, in create_environment
install_wheel(to_install, py_executable, search_dirs)
File "/usr/lib/python2.7/dist-packages/virtualenv.py", line 961, in install_wheel
'PIP_NO_INDEX': '1'
File "/usr/lib/python2.7/dist-packages/virtualenv.py", line 903, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /virtenv/bin/python -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip failed with error code 1
Answer: The version of pip in the existing virtualenv is outdated (version 1.1).
Upgrade it first:
./virtenv/bin/pip install --upgrade pip
or just delete it:
rm ./virtenv/bin/pip*
rm -r ./virtenv/lib/python*/site-packages/pip*
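After that, re-running virtualenv against the same folder should let it finish
installing setuptools and pip:
    virtualenv virtenv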
|
What's the best way to have QTableView and Database in sync after inserting the record?
Question: Let's suppose I have a QTableView with QSqlTableModel/Database. I don't want
to let user edit the cells in QTableView. There are CRUD buttons that open new
dialog forms and the user is supposed to enter data. After the user clicks
dialog's OK button, what is the best way to insert that new record to database
and view (to have them in sync), because database can be unavailable at the
time (for example, inserting to remote database while having internet
connection problems)?
My primary concern is I don't want to show phantom records in view and I want
the user to be aware the record is not entered in the database.
I put some python code (but for Qt my problem is the same) to illustrate this,
and have some other questions in comments:
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtSql import *
class Window(QWidget):
def __init__(self, parent=None):
QWidget.__init__(self, parent)
self.model = QSqlTableModel(self)
self.model.setTable("names")
self.model.setHeaderData(0, Qt.Horizontal, "Id")
self.model.setHeaderData(1, Qt.Horizontal, "Name")
self.model.setEditStrategy(QSqlTableModel.OnManualSubmit)
self.model.select()
self.view = QTableView()
self.view.setModel(self.model)
self.view.setSelectionMode(QAbstractItemView.SingleSelection)
self.view.setSelectionBehavior(QAbstractItemView.SelectRows)
#self.view.setColumnHidden(0, True)
self.view.resizeColumnsToContents()
self.view.setEditTriggers(QAbstractItemView.NoEditTriggers)
self.view.horizontalHeader().setStretchLastSection(True)
addButton = QPushButton("Add")
editButton = QPushButton("Edit")
deleteButton = QPushButton("Delete")
exitButton = QPushButton("Exit")
hbox = QHBoxLayout()
hbox.addWidget(addButton)
hbox.addWidget(editButton)
hbox.addWidget(deleteButton)
hbox.addStretch()
hbox.addWidget(exitButton)
vbox = QVBoxLayout()
vbox.addWidget(self.view)
vbox.addLayout(hbox)
self.setLayout(vbox)
addButton.clicked.connect(self.addRecord)
#editButton.clicked.connect(self.editRecord) # omitted for simplicity
#deleteButton.clicked.connect(self.deleteRecord) # omitted for simplicity
exitButton.clicked.connect(self.close)
def addRecord(self):
# just QInputDialog for simplicity
value, ok = QInputDialog.getText(self, 'Input Dialog', 'Enter the name:')
if not ok:
return
# Now, what is the best way to insert the record?
# 1st approach, first in database, then model.select()
# it seems like the most natural way to me
query = QSqlQuery()
query.prepare("INSERT INTO names (name) VALUES(:name)")
query.bindValue( ":name", value )
if query.exec_():
self.model.select() # now we know the record is inserted to db
# the problem with this approach is that select() can be slow
# somehow position the view to newly added record?!
else:
pass
# message to user
# if the record can't be inserted to database,
# there's no way I will show that record in view
# 2nd approach, first in view (model cache), then in database
# actually, I don't know how to do this
# can somebody instruct me?
# maybe:
# record = ...
# self.model.insertRecord(-1, record) #
# submitAll()
# what if database is unavailable?
# what if submitAll() fails?
# in that case, how to have view and model in sync?
# is this the right approach?
# 3. is there some other approach?
app = QApplication(sys.argv)
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName(":memory:")
db.open()
query = QSqlQuery()
query.exec_("DROP TABLE names")
query.exec_("CREATE TABLE names(id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL, name TEXT)")
query.exec_("INSERT INTO names VALUES(1, 'George')")
query.exec_("INSERT INTO names VALUES(2, 'Rita')")
query.exec_("INSERT INTO names VALUES(3, 'Jane')")
query.exec_("INSERT INTO names VALUES(4, 'Steve')")
query.exec_("INSERT INTO names VALUES(5, 'Maria')")
query.exec_("INSERT INTO names VALUES(6, 'Bill')")
window = Window()
window.resize(600, 400)
window.show()
app.exec_()
Answer: You can still use `QSqlTableModel`. You can turn off all edit triggers in your
table view and then pass the model to your data capturing forms and let the
widgets bind to the model using `QDataWidgetMapper`, making sure the submit
mode is set to manual so you can validate fields first.
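A minimal sketch of what that could look like for an edit-style dialog (the class
and widget names here are just illustrative; for an "add" dialog you would first
`insertRow()` a blank row into the model and map to that):
    from PyQt4.QtGui import (QDialog, QLineEdit, QPushButton,
                             QVBoxLayout, QDataWidgetMapper)

    class EditDialog(QDialog):
        def __init__(self, model, row, parent=None):
            QDialog.__init__(self, parent)
            self.nameEdit = QLineEdit()
            okButton = QPushButton("OK")

            # Bind the line edit to column 1 ("Name") of the shared model,
            # but only push values back when we explicitly submit.
            self.mapper = QDataWidgetMapper(self)
            self.mapper.setModel(model)
            self.mapper.setSubmitPolicy(QDataWidgetMapper.ManualSubmit)
            self.mapper.addMapping(self.nameEdit, 1)
            self.mapper.setCurrentIndex(row)

            okButton.clicked.connect(self.save)

            layout = QVBoxLayout(self)
            layout.addWidget(self.nameEdit)
            layout.addWidget(okButton)

        def save(self):
            # Validate self.nameEdit.text() here first, then push to the
            # model cache and finally to the database. submitAll() tells
            # you whether the database actually accepted the change, so
            # the view never shows a phantom record.
            if self.mapper.submit() and self.mapper.model().submitAll():
                self.accept()
            else:
                self.mapper.model().revertAll()
Because the model's edit strategy is `OnManualSubmit`, a failed `submitAll()`
followed by `revertAll()` discards the cached row, keeping the view and the
database in sync.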
|
external functions in Python 3.4
Question: I have been working on an elements dictionary and I have encountered a problem
with running external functions through inputs. I need to take an input and
use it to call a function with several variables. here is my code:
import atoms
import time
print("Hello, and welcome to the element dictionary. This app takes an element symbol")
print(" and outputs a small amount of data about the element.")
element=input("please input an elements symbol :")
(element)
print('catagory: ',cat )
print(' atomic number: ',atomn)
print(' atomic weight: ',atomw)
print('colour: ',colour )
print(' phase: ',phase )
print(' melting point: ',meltpoint)
print('boiling point: ',boilpoint)
print('crystal structure: ',cstruc)
time.sleep(100)
`(element)` is where an external function is needed, and 'atoms' is where the
functions are stored.
Answer: > I need the function to be inputted by the user
I don't really know if this what you are looking for. But you have to know
that Python has the so called [First-class
function](http://en.wikipedia.org/wiki/First-class_function). That is, you may
store a function in a variable like any other value. In a varaible or in a
dictionary.
To familiarize yourself with that idea, take some time experimenting with this
example:
def f():
print("This is f")
def g():
print("This is g")
def other():
print("Other choice")
actions = {
"f": f,
"F": f,
"g": g,
"G": g
}
    your_choice=input("Choose f or g: ")
your_fct = actions.get(your_choice, other)
# ^^^^^
# default value
your_fct()
Of course, you can pass arguments while calling `your_fct` just like any other
function.
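Applied to your case — assuming, purely hypothetically, that `atoms` defines one
function per element symbol — the lookup by name could also be done with
`getattr`, which is a sketch rather than a drop-in fix:
    import atoms

    element = input("please input an elements symbol :")
    func = getattr(atoms, element, None)   # None if atoms has no such function
    if func is not None:
        func()
    else:
        print("Unknown element symbol")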
|
the purpose of interpreter interactive mode keep file opening
Question: If the code is run as script:
$ cat open_sleep.py
import time
open("/tmp/test")
time.sleep(1000)
$ python open_sleep.py
OR I do this without interactive mode:
$ python -c 'import time;open("/tmp/test");time.sleep(1000)'
There is no file keep opening:
$ ls -la /proc/`pgrep python`/fd
total 0
dr-x------. 2 ack0hole ack0hole 0 Aug 30 14:19 .
dr-xr-xr-x. 8 ack0hole ack0hole 0 Aug 30 14:19 ..
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:19 0 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:19 1 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:19 2 -> /dev/pts/2
$
Unless I assign a variable return by `open()`:
$ cat open_sleep.py
import time
o = open("/tmp/test")
time.sleep(1000)
$ python open_sleep.py
OR
$ python -c 'import time;o=open("/tmp/test");time.sleep(1000)'
Then the file will keep opening::
$ ls -la /proc/`pgrep python`/fd
total 0
dr-x------. 2 ack0hole ack0hole 0 Aug 30 14:21 .
dr-xr-xr-x. 8 ack0hole ack0hole 0 Aug 30 14:21 ..
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:21 0 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:21 1 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:21 2 -> /dev/pts/2
lr-x------. 1 ack0hole ack0hole 64 Aug 30 14:21 3 -> /tmp/test
$
But the interactive mode is not the case, even i'm not assign variable to
open():
>>> import time;open("/tmp/test");time.sleep(1000)
<open file '/tmp/test', mode 'r' at 0xb7400128>
I still can see the file keep opening:
$ ls -la /proc/`pgrep python`/fd
total 0
dr-x------. 2 ack0hole ack0hole 0 Aug 30 14:16 .
dr-xr-xr-x. 8 ack0hole ack0hole 0 Aug 30 14:16 ..
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:16 0 -> /dev/pts/4
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:16 1 -> /dev/pts/4
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:16 2 -> /dev/pts/4
lr-x------. 1 ack0hole ack0hole 64 Aug 30 14:17 3 -> /tmp/test
$
If the indentation fail:
>>> import time;open("/tmp/test");time.sleep(1000)
File "<stdin>", line 1
import time;open("/tmp/test");time.sleep(1000)
^
IndentationError: unexpected indent
>>>
the socket is keep opening without filename:
$ ls -la /proc/`pgrep python`/fd
total 0
dr-x------. 2 ack0hole ack0hole 0 Aug 30 14:38 .
dr-xr-xr-x. 8 ack0hole ack0hole 0 Aug 30 14:38 ..
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:38 0 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:38 1 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:38 2 -> /dev/pts/2
lrwx------. 1 ack0hole ack0hole 64 Aug 30 14:38 3 -> socket:[411151]
I have two questions:
1. What is the purpose of interpreter interactive mode keep file opening, even though open(file) did not assign return value? If the purpose is debugging purpose, any example of this debugging?
2. Why interpreter interactive mode open file in the first place even there are indentation error exist?
Based on the comment, I want to said that I always try with new interactive
mode session, and i even try different terminal(e.g xterm), but it really
raise IndentationError.

Answer: The reason you _don't_ see open files in the examples where you don't see
them, is that right after the file is opened, the reference-count of the
`file` object drops to 0 because the result is not assigned to a variable, so
the file is closed immediately.
The reason this does not happen in interactive mode, is that a reference to
the file object is kept in the `_` variable while the second `sleep` function
runs, thus the file remains open.
See [here](https://stackoverflow.com/questions/5893163/underscore-in-python)
for a discussion about the `_` special variable.
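A quick illustration in a fresh interactive session (the object address shown is
of course arbitrary):
    >>> open("/tmp/test")
    <open file '/tmp/test', mode 'r' at 0xb7400128>
    >>> _
    <open file '/tmp/test', mode 'r' at 0xb7400128>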
As to question 2, it is not supposed to happen. You must have made a mistake in
your check. Nothing runs if your code raises an IndentationError.
|
Python: Setting tuples from csv file
Question: So essentially what I have is a csv file which is loaded in via some function,
lets call it
get_csv
So when I have this data, I want to create a new function to format the data
sent from the server into tuples. Which I will call
csv_format
So assuming the csv comes with 9 columns, how would I set it up so the first
column is an int, the next two are floats and the last ones are strings? I
know this sounds difficult but I hope you can help me out here.
def csv_format(data):
...
....
return get_csv(data)
So essentially I just need to format the tuples so that it outputs like this:
(first, second, third, (fourth, fifth, sixth, seventh, eighth, ninth))
Thank you in advance
Answer: You should do something like this:
import csv
def get_csv(filename):
with open(filename) as f:
return list(csv.reader(f))
def csv_format(data):
return [(row[0], row[1], row[2], tuple(row[3:])) for row in data]
## Example
_Input_
first,second,third,fourth,fifth,sixth,seventh,eighth,ninth
_Output_
[('first', 'second', 'third', ('fourth', 'fifth', 'sixth', 'seventh', 'eighth', 'ninth'))]
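If you also want the type conversion you describe (first column an int, the next
two floats), a variant might look like this — assuming every row really does
contain numeric values in those positions:
    def csv_format(data):
        return [(int(row[0]), float(row[1]), float(row[2]), tuple(row[3:]))
                for row in data]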
|
When/how to change python sys.prefix from /usr when site-packages is in /usr/local?
Question: If I'm following <https://docs.python.org/2/library/site.html> correctly, I
need to either move the site-packages directory to /usr/lib/python2.7 or
change sys.prefix to /usr/local.
The former seems wrong. For the latter the options I can find are to edit
site.py directly or to re-install python. Is editing site.py considered too
hacky, or is it a standard-ish thing to do? (ETA: I would think it's a
standard thing to do, as that's what it's for. Guess I'm really asking if
that's the best choice in this instance.)
Or am I overlooking another option?
/usr/lib vs /usr/local/lib:
auto@virgo:/etc/apache2$ ls -ld /usr/lib/python2.7/site-packages
ls: cannot access /usr/lib/python2.7/site-packages: No such file or directory
auto@virgo:/etc/apache2$ ls -ld /usr/local/lib/python2.7/site-packages
drwxrwsr-x 2 root staff 4096 Aug 29 2013 /usr/local/lib/python2.7/site-packages
python sys.prefix:
auto@virgo:~$ python
Python 2.7.3 (default, Apr 10 2013, 05:46:21)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print sys.prefix
/usr
Thanks!
Answer: create either `sitecustomize.py` or `usercustomize.py` and append to
`site.PREFIXES`
    import site

    # point this at your actual site-packages directory
    SITEPKGS = "/usr/local/lib/python2.7/site-packages"
    site.addsitedir(SITEPKGS)
    site.PREFIXES += ['/usr/local']
from [site](https://docs.python.org/2/library/site.html) docs:
> After these path manipulations, an attempt is made to import a module named
> `sitecustomize`, which can perform arbitrary site-specific customizations.
> It is typically created by a system administrator in the site-packages
> directory. If this import fails with an `ImportError` exception, it is
> silently ignored. If Python is started without output streams available, as
> with `pythonw.exe` on Windows (which is used by default to start IDLE),
> attempted output from `sitecustomize` is ignored. Any exception other than
> `ImportError` causes a silent and perhaps mysterious failure of the process.
>
> After this, an attempt is made to import a module named `usercustomize`,
> which can perform arbitrary user-specific customizations, if
> `ENABLE_USER_SITE` is true. This file is intended to be created in the user
> site-packages directory (see below), which is part of `sys.path` unless
> disabled by `-s`. An `ImportError` will be silently ignored.
|
Why am I getting these errors and how to fix them?
Question: When I run this script I get a ton of errors.
    import urllib, urllib2
proxy = urllib2.ProxyHandler({
'http': '127.0.0.1',
'https': '127.0.0.1'
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
# this way both http and https requests go through the proxy
urllib2.urlopen('http://www.google.com')
urllib2.urlopen('https://www.google.com')
I don't really understand what these errors are, hence why I am asking. Here
they are:
Traceback (most recent call last):
File "C:\Python27\Craig.py", line 10, in <module>
urllib2.urlopen('http://www.google.com')
File "C:\Python27\lib\urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 501: Not Implemented
Update: After I added the ports I got these errors:
    Traceback (most recent call last):
      File "C:\Python27\Craig.py", line 10, in <module>
        urllib2.urlopen('http://www.google.com')
      File "C:\Python27\lib\urllib2.py", line 127, in urlopen
        return _opener.open(url, data, timeout)
      File "C:\Python27\lib\urllib2.py", line 404, in open
        response = self._open(req, data)
      File "C:\Python27\lib\urllib2.py", line 422, in _open
        '_open', req)
      File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
        result = func(*args)
      File "C:\Python27\lib\urllib2.py", line 1214, in http_open
        return self.do_open(httplib.HTTPConnection, req)
      File "C:\Python27\lib\urllib2.py", line 1184, in do_open
        raise URLError(err)
    urllib2.URLError:
Answer: Are you running a local proxy?
If you are, it's probably not running on port 80. The default port is 3128.
proxy = urllib2.ProxyHandler({
'http': 'http://127.0.0.1:3128/',
'https': 'http://127.0.0.1:3128/'
})
|
Unable to repeat code according to user input in python
Question: I started programming a week ago and I decided I should write a program that
randomly assigns the numbers for the european lottery. I have a working script
that gives the 5 numbers and 2 stars that make a bet. However, I would like
the program to print as many bets as the user would like without having to run
the program multiple times. I tried to create a class that would be
instantiated as many times as the ones the user would require, but it didn't
work. This is the script I have as of now:
import random
print "This program doesn't increase the chances of winning the lottery!"
    numbers=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50]
stars=[1,2,3,4,5,6,7,8,9,10,11]
a=random.sample(numbers, 5)
a.sort()
b=random.sample(stars, 2)
b.sort()
print "The numbers are", a
print "The stars are", b
I understand that to get input from the user I need to use raw_input("Enter
number of bets you want here") but then I have no idea how to use that input
to repeat the code. Also, if somebody knows how to make the lists shorter,
like instead of having to write every number from 1 to 50 just write like 1-50
I would be grateful to know how. Sorry for bad english and thanks in advance.
Answer: Put your code in a loop.
import random
numbers = range(1,51) # [1,2,3,... 49,50]
stars = range(1,12) # [1,2,3,... 10,11]
def makeNumbers(plays):
        for i in range(plays): # makes numbers and stars `plays` times
a = sorted(random.sample(numbers, 5))
b = sorted(random.sample(stars,2))
print 'numbers: ', a
print 'stars: ', b
Testing the function
>>> makeNumbers(3)
numbers: [3, 25, 40, 41, 47]
stars: [8, 11]
numbers: [22, 25, 42, 47, 50]
stars: [5, 9]
numbers: [6, 23, 34, 40, 44]
stars: [5, 7]
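To drive it from user input as you described, a call like this would do (a
sketch, kept in Python 2 to match the rest of the code):
    bets = int(raw_input("Enter number of bets you want here: "))
    makeNumbers(bets)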
|
How to get log likelihood for exponential and gamma distributions
Question: I have some data and I can fit a gamma distribution using for example this
code taken from [Fitting a gamma distribution with (python)
Scipy](http://stackoverflow.com/questions/2896179/fitting-a-gamma-
distribution-with-python-scipy) .
import scipy.stats as ss
import scipy as sp
Generate some gamma data:
alpha=5
loc=100.5
beta=22
data=ss.gamma.rvs(alpha,loc=loc,scale=beta,size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
Here we fit the data to the gamma distribution:
fit_alpha,fit_loc,fit_beta=ss.gamma.fit(data)
print(fit_alpha,fit_loc,fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha,loc,beta)
# (5, 100.5, 22)
I can also fit an exponential distribution to the same data. I would however
like to do a [likelihood ratio test](http://en.wikipedia.org/wiki/Likelihood-
ratio_test). To do this I don't just need to fit the distributions but I also
need to return the likelihood. How can you do that in python?
Answer: You can compute the log-likelihood of `data` by calling the `logpdf` method of
`stats.gamma` and then summing the array.
The first bit of code is from your example:
In [63]: import scipy.stats as ss
In [64]: np.random.seed(123)
In [65]: alpha = 5
In [66]: loc = 100.5
In [67]: beta = 22
In [68]: data = ss.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
In [70]: data
Out[70]:
array([ 159.73200869, 258.23458137, 178.0504184 , ..., 281.91672824,
164.77152977, 145.83445141])
In [71]: fit_alpha, fit_loc, fit_beta = ss.gamma.fit(data)
In [72]: fit_alpha, fit_loc, fit_beta
Out[72]: (4.9953385276512883, 101.24295938462399, 21.992307537192605)
Here's how to compute the log-likelihood:
In [73]: loglh = ss.gamma.logpdf(data, fit_alpha, fit_loc, fit_beta).sum()
In [74]: loglh
Out[74]: -52437.410641032831
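For the likelihood-ratio test you mention, the exponential side works the same
way. A sketch, treating the exponential as the restricted model (it is a gamma
with the shape fixed at 1, so there is one parameter of difference):
    exp_loc, exp_scale = ss.expon.fit(data)
    loglh_expon = ss.expon.logpdf(data, exp_loc, exp_scale).sum()

    D = 2 * (loglh - loglh_expon)   # likelihood-ratio statistic
    p_value = ss.chi2.sf(D, 1)      # chi-squared survival function, 1 degree of freedom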
|
Extend the functionality of the print statement/function
Question: I want to print things to a certain specification, such as printing containers
(`[]`, `{}`, `()`) using their `str()` method instead of `repr()`. It's
trivial to write a function that will do that
def str_print_list(alist):
print "["+", ".join(map(str, alist))+"]"
but it'd be awfully nice if I could just extend or decorate either the
[`print`
statement](https://docs.python.org/2/reference/simple_stmts.html#print) or the
[`print()` function](https://docs.python.org/2/library/functions.html#print)
(in Python 2.7)
I could do something sort of like this, which works in python 3 (tested in
Python 3.2.3 at <http://www.compileonline.com/execute_python3_online.php>)
class foo():
def __str__(self):
return "String"
def __repr__(self):
return "Repr"
print([foo()])
def my_decorator(func):
def inner(alist):
if isinstance(alist, list):
return func("["+", ".join(map(str, alist))+"]")
else:
return func(alist)
return inner
print = my_decorator(print)
print([foo()])
Which gives me an output of
[Repr]
[String]
but gives me a syntax error in python 2.7.6 at
print = my_decorator(print)
until I import
from __future__ import print_function
which makes sense, because statements can't (as far as I know) be decorated or
reassigned.
So my question is
1. Is it possible to decorate the print statement so I don't have to import the function?
2. Is this even a good idea, or should I just be more explicit and use my `str_print_list()` function? In this case I know _for a fact_ that this would be for my personal use and would be used primarily for debugging
Answer: 1. No, statements can't be changed.
2. No. The way to handle this is to make your own `list` and pass that around -- this is exactly what inheritance is made for.
Like this:
class MyPrintableList(list):
def __repr__(self):
return "[{}]".format(",".join(self))
print MyPrintableList([foo()])
|
Weka python wrapper Loader
Question: I've installed the weka python wrapper on Mac OS X 10.9.4. And I tried running the
sample code:
import weka.core.jvm as jvm
jvm.start()
from weka.core.converters import Loader
l = Loader("weka.core.converters.ArffLoader")
d = l.load_file("X.arff")
d.set_class_index(d.num_attributes() - 1)
print(d)
and I receive the following error:
Failed to get class weka/core/converters/ArffLoader
Exception in thread "Thread-2" java.lang.NoClassDefFoundError: weka/core/converters/ArffLoader
Failed to instantiate weka.core.converters.ArffLoader/weka/core/converters/ArffLoader: weka/core/converters/ArffLoader
Caused by: java.lang.ClassNotFoundException: weka.core.converters.ArffLoader
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Traceback (most recent call last):
File "/Users/hani/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-d835d9d560d2>", line 2, in <module>
l = Loader("weka.core.converters.ArffLoader")
File "/Users/hani/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/python_weka_wrapper-0.1.10-py2.7.egg/weka/core/converters.py", line 40, in __init__
self.enforce_type(jobject, "weka.core.converters.Loader")
File "/Users/hani/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/python_weka_wrapper-0.1.10-py2.7.egg/weka/core/classes.py", line 113, in enforce_type
raise TypeError("Object does not implement or subclass " + intf_or_class + "!")
TypeError: Object does not implement or subclass weka.core.converters.Loader!
After some tinkering I realized that setting the working directory to the
python-weka-wrapper folder using os.chdir remedies the problem in some
environments but not all of them (e.g. pycharm is not fixed using this trick).
Any ideas how I can fix this issue?
Answer: Apparently, calling jvm with the paths to weka jar files would solve the
problem:
jvm.start(class_path=['/some/where/python-weka-wrapper.jar',
'/some/where/weka.jar'])
Thanks to Peter Reutemann for this reply.
|
python managing redundant data by incrementing no. in a text file
Question: I am new to Python. I have a text file; I need to avoid the redundancy not by
deleting, but by incrementing the number in the text file if the lines are found
to be the same.
Please help! Answers will be appreciated! E.g. of a random text file:
hello ram1
hello ram1
hello gate1
hello gate1
Expected output:
hello ram1
hello ram2
hello gate1
hello gate2
Answer: Using regular expression and
[`collections.defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict):
from collections import defaultdict
import re
numbers = defaultdict(int)
with open('/path/to/textfile.txt') as f:
for line in f:
line = re.sub(r'\d+', '', line.rstrip()) # Remove numbers.
numbers[line] += 1 # Increment number for the same line
print('{}{}'.format(line, numbers[line]))
* * *
**UPDATE** using slice notation, dictionary.
import re
numbers = {}
with open('1.txt') as f:
for line in f:
row = re.split(r'(\d+)', line.strip())
words = tuple(row[::2]) # Extract non-number parts to use it as key
if words not in numbers:
numbers[words] = [int(n) for n in row[1::2]] # extract number parts.
numbers[words] = [n+1 for n in numbers[words]] # Increase numbers.
row[1::2] = map(str, numbers[words]) # Assign back numbers
print(''.join(row))
|
www-data user unable to import installed python modules
Question: I am trying to create a web server which ultimately calls a python script
("MyScript.py") from PHP code with the following command
echo exec("MyScript.py ....some arguments")
MyScript.py actually contains a code which uses RDkit and other modules like
numpy.
When I run this MyScript.py from user (MKT)....it works like a charm.
But the problem arises when it get executed from PHP script from server. The
error message in /var/www/log/apache2/error.log is as follows:
ImportError: No module named rdkit
Traceback (most recent call last):
File "./preditar/preditar.py", line 7, in <module>
from rdkit import Chem
My rdkit is installed in /opt/RDKit_2013_06_1/rdkit/ and the following is the
result if I run it from the MKT user
MKT@mypc$ python -v
and then in python environment:
import rdkit
Here is the output:
import rdkit # precompiled from /opt/RDKit_2013_06_1/rdkit/\__init__.pyc
I think the user www-data which executes this does not have access to this
module... so how do I import it in MyScript.py?
Note: When I modify MyScript.py to a simple script which just creates a file
and writes data to it, it works properly without any issue, which means I don't
have any permission-related problem.
Answer: Looks like a problem with the Python import search path. Try changing the import
like this:
import sys
sys.path.insert(0, "/opt/RDKit_2013_06_1")
import rdkit
|
bge python if statements getLinearVelocity
Question: Im new to scripting and programming in general, but I have a decent handle on
the concepts, however, I need a little help and I am not sure I'm even
searching and googling the right terms for what I want to do. I need help with
what I would imagine is a simple if statement in the Blender Game engine 2.71.
what I need to do is to get an objects linear velocity in global terms and
then check it to see if it is equal to 0 on all vectors, then I wish to add
some linear velocity in xyz.
this is what I have but of course it doesn't work: (please don't hate, I'm a
hella-n00b)
import bge
from bge import logic
scene = logic.getCurrentScene()
cube = logic.getCurrentController().owner
speed = cube.getLinearVelocity()
if all(speed == 0):
cube.setLinearVelocity([0, 0, 10], False)
Answer: Since you're only checking for the case when the velocity is zero, you could
use:
    if not any(speed):
        cube.setLinearVelocity([0, 0, 10], False)
where `any()` is the built-in python function that returns `True` as soon as one
value in the sequence is non-zero, so `not any(speed)` is only `True` when every
component of the velocity is exactly 0 (summing the components instead could let
positive and negative values cancel out).
|
Raspberry Pi Camera auto capture python script
Question: I have been trying to set up my raspberry pi to auto take pictures every 5
seconds and save the file as image1,image2 etc. This is the Python code I have
been trying:
import time
import picamera
counter = 0
with picamera.PiCamera() as camera:
for each in range(5):
counter = counter + 1
camera.start_preview()
time.sleep(5)
camera.capture("/home/pi/python/Pictures/image",counter,".jpg")
camera.stop_preview()
But every time I run this I get this Error:
Traceback (most recent call last):
File "/home/pi/python/camera_repeated.py", line 9, in <module>
camera.capture("/home/pi/python/Pictures/image",counter,".jpg")
File "/usr/lib/python3/dist-packages/picamera/camera.py", line 1303, in capture
format = self._get_image_format(output, format)
File "/usr/lib/python3/dist-packages/picamera/camera.py", line 684, in _get_image_format
format[6:] if format.startswith('image/') else
AttributeError: 'int' object has no attribute 'startswith'
Answer: You're doing the string concatenation wrong. Strings in python should be
concatenated with `+` between them. You're using the `,` sign, that normally
separates arguments in function calls.
The signature for
[`PiCamera.capture`](http://picamera.readthedocs.org/en/release-1.7/api.html?highlight=capture#picamera.PiCamera.capture)
is defined as
capture(output, format=None, use_video_port=False, resize=None, splitter_port=0, **options)
So you will at first specify the output file as a string and then (optional)
the format. If not specified a format, the format will be derived from the
file extension of the given output file, so you can leave it empty here.
So the right call in line 9 should be:
camera.capture("/home/pi/python/Pictures/image" + str(counter) + ".jpg")
Now you are passing in a single string. Before, you passed in 3 params, with the
format being the value of your `counter` variable and the third
(`use_video_port`) being `.jpg`. Internally, the library tests the given format
against common known mime-types (`startswith('image/')`), but as you passed in
an integer, there is no such method `startswith` on it. That resulted in the
error.
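An equivalent way to build the same path, if you prefer `str.format` over `+`
concatenation:
    camera.capture("/home/pi/python/Pictures/image{0}.jpg".format(counter))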
|
TypeError: unsupported operand type(s) for -: 'str' and 'datetime.datetime'
Question: I am working on my python script to get the list of date format from the
sqlite database.
I'm creating the variable `startDelta` to get the list of rows from the
variable called `program_startDate`.
When I use this statement to execute the code, I get a problem with my code:
#get the programs list
cur.execute('SELECT channel, title, start_date, stop_date FROM programs
WHERE channel=?', [channel])
programList = list()
programs = cur
for row in programs:
program = row[1].encode('ascii'), str(row[2]), str(row[3])
program_startDate = str(row[2])
program_endDate = str(row[3])
programList.append(program)
cur.close()
idx = str(programList)
# find nearest half hour
viewStartDate = datetime.datetime.now()
viewStartDate -= datetime.timedelta(minutes = viewStartDate.minute % 30,
seconds = viewStartDate.second)
startDelta = program_startDate - viewStartDate
stopDelta = program_endDate - viewStartDate
I will get an error: TypeError: unsupported operand type(s) for -: 'str' and
'datetime.datetime'
The errors are jumping on this line:
startDelta = program_startDate - viewStartDate
stopDelta = program_endDate - viewStartDate
How to fix this?
Answer: The problem is exactly as the error states. `program_startDate` and
`program_endDate` are strings, and you're trying to subtract a datetime from
them, something python is unable to do.
Are these values stored as datetimes in your database?
If so, use:
program_startDate = row[2]
program_endDate = row[3]
Instead of converting them to strings. The db engine will get them as datetime
for you and the error will be solved.
If they're stored as strings in your db, [use
datetime.strptime](https://docs.python.org/2/library/datetime.html#strftime-
strptime-behavior) to convert them to datetimes according to the format
they're stored in.
Edit:
The OP is storing dates as long in the database, such as 20140831170500. To
solve the problem, use:
from datetime import datetime
...
...
for row in programs:
program = row[1].encode('ascii'), str(row[2]), str(row[3])
program_startDate = datetime.strptime(str(row[2]), '%Y%m%d%H%M%S')
program_endDate = datetime.strptime(str(row[3]), '%Y%m%d%H%M%S')
programList.append(program)
|
ipython notebook doesn't work on ipython 2.2.0
Question: I tried to run IPython notebook. I entered the following in the command line:
    ipython notebook
I get this error (stack trace):
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/__init__.py", line 120, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 563, in launch_instance
app.initialize(argv)
File "<string>", line 2, in initialize
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 321, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/application.py", line 381, in initialize
self.parse_command_line(argv)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 316, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 475, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 406, in initialize_subcommand
subapp = import_item(subapp)
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/utils/importstring.py", line 42, in import_item
module = __import__(package, fromlist=[obj])
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/html/notebookapp.py", line 40, in <module>
check_for_zmq('2.1.11', 'IPython.html')
File "/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/utils/zmqrelated.py", line 37, in check_for_zmq
raise ImportError("%s requires pyzmq >= %s"%(required_by, minimum_version))
ImportError: IPython.html requires pyzmq >= 2.1.11
I'm using python 2.7.8 and iphython 2.2.0
Answer: Maybe what you need is to install the newest pyzmq version with pip:
pip install --upgrade pyzmq
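You can then check which version ended up installed with:
    python -c "import zmq; print(zmq.__version__)"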
|
Sorting a Dictionary by Value then Alphabetically by Key Python3
Question: Let's say I have a dictionary like:
h=[('a',5), ('f', 3), ('b',3), ('c',3), ('d',1), ('e',4) ]
I want it sorted like:
[('a',5), ('e',4), ('b',3), ('c',3), ('f',3), ('d',1)]
I can solve this with Python 2 with something like this:
sortedList= sorted(h.iteritems(),key=lambda(k,v):(-v,k))
I can get really close in Python 3 with something like this:
import operator
sortedList =sorted(h.items(), key=operator.itemgetter(1,0) , reverse=True)
but it comes out like this
[('a',5), ('e',4), ('f',3), ('c',3), ('b',3), ('d',1)]
How can I reverse the tiebreaker operation?
Answer: You can use this call to the sorted function in python 3:
sortedList = sorted(h, key=lambda k: (-k[1], k[0]))
This will give the same result as the python 2 sorting:
[('a',5), ('e',4), ('b',3), ('c',3), ('f',3), ('d',1)]
|
Using Python to read newline characters correctly in Linux
Question: In Windows, if I'm using Python's "for line in file" syntax to read the
contents of a normal text file, the final line will not end with a newline
character, but all the lines before it will. In Linux, this behavior seems to
be different -- when I read a file line-by-line and print its contents, I
always find that the final line ends with a newline character, even if one
didn't exist in the real file! I've also tried using the file.readline()
function, with identical results.
(I'm assuming that the text file does actually not end in a newline. If it
does, Linux prints two newlines from the end of the file, instead of just the
one that really exists.)
My code is very simple. The problem is that it produces different results in
Linux than in Windows.
with open(path, 'r') as file:
for line in file:
print('>', line, '<', sep='')
'''
# Alternate method:
while True:
line = file.readline()
if not line:
break
print('>', line, '<', sep='')
'''
On Linux (Ubuntu 13.10 to be exact), the final line always ends with a phantom
newline. I've tried in both Python 2 and 3. Any idea what's going on here?
Answer: What about removing the line break? Is this what you want?
with open('/Users/sebastian/Desktop/Untitled.txt', 'r') as file:
for line in file:
line = line.strip()
if line:
print('>', line, '<', sep='')
prints
>This is the first line<
>this is the second line<
>this is the third line<
PS: You will have problems with your alternate method if there is an empty
line somewhere in the middle of your text file.
### EDIT
Have a look at [PEP278](http://legacy.python.org/dev/peps/pep-0278/). The 'U'
mode might be helpful in your case
with open('/Users/sebastian/Desktop/Untitled.txt', 'rU') as file:
for line in file:
line = line.strip()
print('>', line, '<', sep='')
prints
>This is the first line<
>this is the second line<
><
>this is the fourth line (blank line above)<
><
And alternatively, you can add an extra line that is platform-specific, e.g.,
via
import platform
if platform.system()=='Windows':
# do sth
else:
# do sth
|
Can't get json through ajax and flask
Question: I am trying to make a simple web application for showing logs. On the web side:
Python 3.4 and Flask, and on the client side a simple web form with Ajax.
Flask:
import json
from flask import Flask, jsonify, render_template, request
app = Flask(__name__)
@app.route('/json_test', methods=['GET'])
def json_test():
return open('log.json').read()
@app.route('/')
def index():
return render_template('layout.html')
if __name__ == '__main__':
app.run(debug=True)
My HTML form
<!DOCTYPE html>
<script type=text/javascript src="{{
url_for('static', filename='jquery.js') }}"></script>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="{{
url_for('static', filename='jquery.js') }}">\x3C/script>')</script>
<script type=text/javascript>
$LOG = {{ request.script_root|tojson|safe }};
</script>
<script type=text/javascript>
$(function() {
$('a#log').bind('click', function() {
$.getJSON($LOG + '/json_test',
function(data){
$("#logs").text(data.result);
});
return false;
});
});
</script>
<p>
<span id=logs>Logs should be here</span>
<a href=# id=log>take log</a>
</p>
</html>
And my JSON example:
{
"data":
{
"misc":
[
{
"name" : "JSON 1",
"type" : "1"
},
{
"name" : "JSON 2",
"type" : "2"
}
]
}
}
I want to send a part of the logs every 5 seconds. For this, I plan to use the
js function setInterval. Am I right?
Answer: Your JSON data has no `result` key. You have a `data` key instead.
Placing the `data.data` result in the `<span>` will only insert the text
`[object Object]` however; you need to insert actual text. You could turn the
object back to JSON text perhaps, using `JSON.stringify()`:
$("#logs").text(JSON.stringify(data.data));
or you could send something more meaningful instead.
|
Passing directory to python script as command line argument
Question: I am trying to pass a particular directory to a python script and later use
that directory in the script. The directory can be located anywhere. For
example, the script should run on the command line as
script.py directory_name
So far I have looked around but found nothing as such.
Answer: You can do it as:
directory_name=sys.argv[n]
It is always good to catch the error in case the directory name is not provided
by the user.
    import sys
    ...
    ...
    try:
        directory_name = sys.argv[1]
        print(directory_name)
    except IndexError:
        print('Please pass directory_name')
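Used like this (the path is just an example):
    $ python script.py /path/to/some/directory
    /path/to/some/directory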
|
(unicode error) 'unicodeescape' codec can't decode bytes in position 16-17: truncated \uXXXX escape
Question: I want to import one module of pyusb library that is in the
**d:\pyusb-1.0.0a2\usb**. So first of all I must add its path to `sys.path`.
But I receive the below error.
Note : I successfully can import `d:\pyusb-1.0.0a2` !!!
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import sys
>>> sys.path.append('d:\pyusb-1.0.0a2\usb')
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 16-17: truncated \uXXXX escape
Answer: You need to use a raw string
>>> sys.path.append(r'd:\pyusb-1.0.0a2\usb')
or escape the backslashes
>>> sys.path.append('d:\\pyusb-1.0.0a2\\usb')
or use forward slashes
>>> sys.path.append('d:/pyusb-1.0.0a2/usb')
Otherwise, Python will try to interpret `\usb` as a Unicode escape sequence
(like `\uBEEF`) which fails for obvious reasons.
|
Syntax error installing gunicorn
Question: I am following this Heroku tutorial:
<https://devcenter.heroku.com/articles/getting-started-with-python-o> and when
I am trying to install gunicorn in a virtualenv I am getting this error:
(venv)jabuntu14@ubuntu:~/Desktop/helloflask$ pip install gunicorn
Downloading/unpacking gunicorn
Downloading gunicorn-19.1.1-py2.py3-none-any.whl (104kB): 104kB downloaded
Installing collected packages: gunicorn
Compiling /home/jabuntu14/Desktop/helloflask/venv/build/gunicorn/gunicorn/workers /_gaiohttp.py ...
File "/home/jabuntu14/Desktop/helloflask/venv/build/gunicorn/gunicorn/workers /_gaiohttp.py", line 64
yield from self.wsgi.close()
^
SyntaxError: invalid syntax
Successfully installed gunicorn
Cleaning up...
However, when I run $foreman start it appears to work properly.
How important is this error? Any idea how to solve it?
Answer: The error can be ignored, your `gunicorn` package installed successfully.
The error is thrown by a bit of code that'd only work on Python 3.3 or newer,
but isn't used by older Python versions that Gunicorn supports.
See <https://github.com/benoitc/gunicorn/issues/788>:
> The error is a syntax error happening during install. It is harmless.
During installation the `setup.py` script tries to collect all files to be
installed, and compiles them to `.pyc` bytecache files. One file that is used
only on Python 3.3 or up is included in this and the compilation for that one
file fails.
The file in question adds support for the [aiohttp http client/server
package](https://pypi.python.org/pypi/aiohttp), which only works on Python 3.3
and up anyway. As such you can ignore this error entirely.
|
Differences between BaseHttpServer and wsgiref.simple_server
Question: I'm looking for a module that provides me basic http server capabilities for
local access. It seems like Python has two methods to implement simple http
servers in the standard library:
[wsgiref.simple_server](https://docs.python.org/2/library/wsgiref.html#module-
wsgiref.simple_server) and
[BaseHttpServer](https://docs.python.org/2/library/basehttpserver.html).
What are the differences? Is there any strong reason to prefer one over the
other?
Answer: **Short answer:** `wsgiref.simple_server` is a WSGI adapter over
`BaseHTTPServer`.
**Longer answer:**
`BaseHTTPServer` is the module that actually implements the HTTP server part.
It can accept requests and return responses, but it has to know how to handle
those requests. When you are using pure `BaseHTTPServer`, you provide the
handlers by subclassing
[`BaseHTTPRequestHandler`](https://docs.python.org/2/library/basehttpserver.html#BaseHTTPServer.BaseHTTPRequestHandler),
for example:
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
class MyHandler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-Type', 'text/plain')
self.end_headers()
self.wfile.write('Hello world!\n')
HTTPServer(('', 8000), MyHandler).serve_forever()
`wsgiref.simple_server` adapts this `BaseHTTPServer` interface to the
[WSGI](http://wsgi.readthedocs.org/en/latest/) specification, which is the
standard for server-independent Python web applications. In WSGI, you provide
the handler in the form of a function, for example:
from wsgiref.simple_server import make_server
def my_app(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
yield 'Hello world!\n'
make_server('', 8000, my_app).serve_forever()
The `make_server` function returns an instance of `BaseHTTPServer.HTTPServer`.
This means that the underlying HTTP logic is the same as in `BaseHTTPServer`.
What differs is the way you integrate your application code with that logic.
It really depends on the problem you’re trying to solve, but coding against
`wsgiref` is probably a better idea because it will make it easy for you to
move to a different, production-grade HTTP server, such as
[uWSGI](https://uwsgi-docs.readthedocs.org/en/latest/) or
[Gunicorn](http://gunicorn.org/), in the future.
|
How to check if a file exists and if so rename it in python
Question: I am looking for a more pythonic way to do what my code does currently. I'm
sure there is a better way to do this. I would like to search up until
filename-10, and if that exists create a file called filename-11.
If you can help that would be great.
EDIT: 9/1/14 9:46 PM
import re
import os
f=open('/Users/jakerandall/Desktop/Data Collection Python/temp.cnc', 'r')
text = re.search(r"(?<!\d)\d{4,5}(?!\d)", f.read())
JobNumber = text.string[text.start():text.end()]
if os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-10.cnc" % JobNumber):
f=open("/Users/jakerandall/Desktop/Data Collection Python/%s-11.cnc" % JobNumber, 'w+b')
f.close()
print '1'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-9.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-10.cnc' % JobNumber, 'w+b')
f.close()
print '2'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-8.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-9.cnc' % JobNumber, 'w+b')
f.close()
print '3'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-7.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-8.cnc' % JobNumber, 'w+b')
f.close()
print '4'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-6.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-7.cnc' % JobNumber, 'w+b')
f.close()
print '5'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-5.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-6.cnc' % JobNumber, 'w+b')
f.close()
print '6'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-4.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-5.cnc' % JobNumber, 'w+b')
f.close()
print '7'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-3.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-4.cnc' % JobNumber, 'w+b')
f.close()
print '8'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-2.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-3.cnc' % JobNumber, 'w+b')
f.close()
print '9'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s-1.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-2.cnc' % JobNumber, 'w+b')
f.close()
print '10'
elif os.path.isfile("/Users/jakerandall/Desktop/Data Collection Python/%s.cnc" % JobNumber):
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s-1.cnc' % JobNumber, 'w+b')
f.close()
print '11'
else:
f=open('/Users/jakerandall/Desktop/Data Collection Python/%s.cnc' % JobNumber, 'w+b')
f.close()
print '12'
f.close()
Answer: How about something simpler:
import glob
file_directory = '/Users/jakerandall/Desktop/Data Collection Python/'
files = glob.glob('{}{}*.cnc'.format(file_directory, JobNumber))
Now `files` will be a list of file names that actually exist in the directory
and match your pattern.
You can check the length of this list, and then:
1. If it's empty, create the first file, which is just `'{}.cnc'.format(JobNumber)`.
2. If the length of the list is equal to 11, you need to create file number 11 (because the pattern will match the first file, the one without any `-`, so a length of 11 means the last file is `-10.cnc`).
3. Otherwise, the file you want to overwrite is the length of the list minus 1. So if the list has 5 items, it means the last file is `-4.cnc` (because the pattern will also match the very first file).
You'll still need to see if you can open them, because the user running the
Python script may not have sufficient permissions.
Here is an example putting all that together:
    import glob

    file_directory = '/Users/jakerandall/Desktop/Data Collection Python/'
    files = glob.glob('{}{}*.cnc'.format(file_directory, JobNumber))

    # Start by assuming there are no files:
    filename = '{}{}.cnc'.format(file_directory, JobNumber)

    if 0 < len(files) < 11:
        # If there are between 1 and 10 matching files, we need
        # to use the last existing file, and overwrite it.
        # If there are 4 numbered files in the directory, our
        # list will have a length of 5:
        # The original file, and then four files from -1, to -4
        # In this case, we want to use file 4, which is 1 less than
        # the length of the list:
        filename = '{}{}-{}.cnc'.format(file_directory, JobNumber, len(files) - 1)
    elif len(files) >= 11:
        # If we reach this point, it means
        # there were at least 11 files that match the
        # pattern. We want to use the next file,
        # which is the next number higher, which is also the length
        # of the list, since it will include the first file.
        # So if the last file is -20, the list will have 20 files (from -1, to -20)
        # plus the original file, which has no - in the filename, giving
        # a length of 21, which also happens to be the number of the file
        # we want to create :)
        filename = '{}{}-{}.cnc'.format(file_directory, JobNumber, len(files))

    # Now, try to create the file
    try:
        f = open(filename, 'w+b')
    except IOError:
        print('Cannot create {}, check permissions?'.format(filename))
|
Can someone explain this Python code for me?
Question: This code creates a pty (pseudo-terminal) in Python. I have commented the parts
that I do not understand
import os,select
pid, master_fd =os.forkpty() #I guess this function return the next available pid and fd
args=['/bin/bash']
if pid == 0:#I have no I idea what this if statement does, however I have noticed that it get executed twice
os.execlp('/bin/bash',*args)
while 1:
r,w,e=select.select([master_fd,0], [], [])
for i in r:
if i==master_fd:
data=os.read(master_fd, 1024)
"""Why I cannot do something like
f=open('/dev/pts/'+master_fd,'r')
data=f.read()"""
os.write(1, data) # What does 1 mean???
elif i==0:
data = os.read(0, 1024)
while data!='':
n = os.write(master_fd, data)
data = data[n:]
Answer: In Unix-like operating systems, the way to start a new process is a _fork_.
That is accomplished with `fork()` or its several cousins. What this does is
it duplicates the calling process, in effect having two exactly the same
programs.
The only difference is the return value from `fork()`. The parent process gets
the PID of the child, and the child gets `0`. What usually happens is that you
have an if statement like the one that you're asking about.
If the returned PID is `0` then you're "in the child". In this case the child
is supposed to be a shell, so `bash` is executed.
Else, you're "in the parent". In this case the parent makes sure that the
child's open file descriptors (`stdin`, `stdout`, `stderr` and any open files)
do what they're supposed to.
If you ever take an OS class or just try to write your own shell you'll be
following this pattern a lot.
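A stripped-down sketch of the same fork/exec pattern, without the pty (Unix
only; the program being exec'd is just an example):
    import os

    pid = os.fork()              # duplicate the current process
    if pid == 0:
        # child: replace this process image with another program
        os.execlp('/bin/echo', 'echo', 'hello from the child')
    else:
        # parent: got the child's PID back, wait for it to finish
        os.waitpid(pid, 0)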
* * *
As for your other question, what does the `1` mean in `os.write(1, data)`?
The file descriptors are integer offsets into an array inside the kernel:
* 0 is `stdin`
* 1 is `stdout`
* 2 is `stderr`
i.e. that line just writes to `stdout`.
When you want to set up pipes or redirections then you just change the meaning
of those three file descriptors (look up `dup2()`).
|
Changed proxy setting not visile on GUI
Question:
import _winreg as registry
key=registry.OpenKey(registry.HKEY_CURRENT_USER,"Software\Microsoft\Windows\CurrentVersion\Internet Settings",0,registry.KEY_ALL_ACCESS)
registry.SetValue(key, 'MigrateProxy', registry.REG_SZ, 'dword:00000001')
print registry.QueryValue(key, 'MigrateProxy')
registry.SetValue(key, 'ProxyEnable', registry.REG_SZ, 'dword:00000000')
print registry.QueryValue(key, 'ProxyEnable')
registry.SetValue(key, 'ProxyHttp1.1', registry.REG_SZ, 'dword:00000000')
print registry.QueryValue(key, 'ProxyHttp1.1')
registry.SetValue(key, 'ProxyServer', registry.REG_SZ, '192.168.50.224:808')
print registry.QueryValue(key, 'ProxyServer')
registry.SetValue(key, 'ProxyOverride', registry.REG_SZ, '<local>')
print registry.QueryValue(key, 'ProxyOverride')
Hi, I am applying a proxy through Python by modifying registry contents. But the
change is not visible in the GUI, nor does it show on the command line through
    netsh winhttp show proxy
Is there a way to check if it has been applied? Or is there a better way to
achieve the same thing?
Answer: I faced the same problem and found that the key/value pairs were created at
a different location; I don't yet know the reason why. Select Computer in
Regedit, press "Ctrl + F", and you will find your entries.
|
Comparison of numpy module array with list
Question: I am trying to compare a numpy array with a list. Because I am new to
Python, I don't know much about numpy arrays and would like to understand how
they behave in comparisons. Please help me understand the following.
>>> from numpy import *
>>> res1 = []
>>> res2 = array([])
>>> if res1 == res2:
... print 'hi'
... else:
... print 'bye'
...
bye
>>> res1 = [1]
>>> res2 = array([1])
>>> if res1 == res2:
... print 'hi'
... else:
... print 'bye'
...
hi
>>> res1 = [1,2]
>>> res2 = array([1, 2])
>>> if res1 == res2:
... print 'hi'
... else:
... print 'bye'
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Can anyone please explain why I am getting the ValueError?
Answer: The comparison `res1 == res2` creates a new array of booleans, which represent
an element-wise comparison of the list and array contents:
>>> res1 == res2
array([ True, True], dtype=bool)
As the error message tells you, an array with more than one element has no
single truth value, so you need to test whether `all` items are `True`:
>>> np.all(res1 == res2)
True
This only happens because your list and array are the same shape:
>>> a = np.array([1, 2, 3])
>>> b = [3, 2]
>>> a == b
False
Note that I have used `import numpy as np` rather than `from numpy import *`
\- this means that I don't override e.g. the built-in `all` with `numpy`'s
version.
|
Why is there too much on my sys.path?
Question: I am trying to uninstall some package I'm developing (residing in ...plugin-
iqf\test\performance\src), but from within Python shell, I'm still able to
import it.
My `PYTHONPATH` is empty, yet ...
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> for p in sys.path: print p
...
c:\Python27\lib\site-packages\setuptools-3.3-py2.7.egg
c:\Python27\lib\site-packages\catchy-0.1.3-py2.7.egg
c:\Python27\lib\site-packages\argparse-1.2.1-py2.7.egg
c:\workspaces\remoteadmin-protocol\implementations\python\src
c:\workspaces\reslasher\src
c:\Python27\lib\site-packages\pygtk-2.24.0-py2.7-win32.egg
c:\workspaces\blizzard\closed-source\plugin-titech\plugin-iqf\test\performance\src
C:\Windows\system32\python27.zip
c:\Python27\DLLs
c:\Python27\lib
c:\Python27\lib\plat-win
c:\Python27\lib\lib-tk
c:\Python27
c:\Python27\lib\site-packages
c:\Python27\lib\site-packages\wx-2.8-msw-unicode
Where do things like
c:\workspaces\remoteadmin-protocol\implementations\python\src
c:\workspaces\reslasher\src
come from?
Answer: Python looks for `.pth` files in up to four specific directories, formed by
combining `sys.prefix` and `sys.exec_prefix` with the empty string and (on
Windows) `lib/site-packages`. Those `.pth` files can list additional paths to
add to `sys.path`, one per line. See the [`site` module
documentation](https://docs.python.org/2/library/site.html#module-site).
You probably have one or more of those; look for such files in
`c:\Python27\lib\site-packages`. Any whose names match eggs you did not
uninstall, or whose lines start with `import`, should be left alone; those are
generally hooks that set up namespaced packages.
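A quick way to see which `.pth` files exist and what they would add (a small diagnostic sketch; adjust the site-packages path to your install):
    import glob
    for pth in glob.glob(r'c:\Python27\lib\site-packages\*.pth'):
        print(pth)
        print(open(pth).read())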
You could also have a `sitecustomize` module that manipulates `sys.path`. You
can see if you have one by trying to import it:
import sitecustomize
sitecustomize
will either fail with an `ImportError` or echo the path.
|
Installing matplotlib via pip on Ubuntu 12.04
Question: I'm trying to use matplotlib on Ubuntu 12.04. So I built a wheel with pip:
**python .local/bin/pip wheel --wheel-dir=wheel/ --build=build/ matplotlib**
Then successfully installed it:
**python .local/bin/pip install --user --no-index --find-links=wheel/
--build=build/ matplotlib**
But when I try to import it in IPython, an ImportError occurs:
> In [1]: import matplotlib
>
> In [2]: matplotlib.get_backend()
> Out[2]: u'agg'
>
> In [3]: import matplotlib.pyplot
>
> ImportError
> Traceback (most recent call last) /place/home/yefremat/ in () \----> 1
> import matplotlib.pyplot
>
> /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/pyplot.py in ()
> 32 from matplotlib import docstring 33 from matplotlib.backend_bases import
> FigureCanvasBase \---> 34 from matplotlib.figure import Figure, figaspect 35
> from matplotlib.gridspec import GridSpec 36 from matplotlib.image import
> imread as _imread
>
> /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/figure.py in ()
> 38 import matplotlib.colorbar as cbar 39 \---> 40 from matplotlib.axes
> import Axes, SubplotBase, subplot_class_factory 41 from
> matplotlib.blocking_input import BlockingMouseInput, BlockingKeyMouseInput
> 42 from matplotlib.legend import Legend
>
> /home/yefremat/.local/lib/python2.7/site-
> packages/matplotlib/axes/**init**.py in () 2 unicode_literals) 3 \----> 4
> from ._subplots import * 5 from ._axes import *
>
> /home/yefremat/.local/lib/python2.7/site-
> packages/matplotlib/axes/_subplots.py in () 8 from matplotlib import
> docstring 9 import matplotlib.artist as martist \---> 10 from
> matplotlib.axes._axes import Axes 11 12 import warnings
>
> /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/_axes.py
> in () 36 import matplotlib.ticker as mticker 37 import matplotlib.transforms
> as mtransforms \---> 38 import matplotlib.tri as mtri 39 import
> matplotlib.transforms as mtrans 40 from matplotlib.container import
> BarContainer, ErrorbarContainer, StemContainer
>
> /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/tri/**init**.py
> in () 7 import six 8 \----> 9 from .triangulation import * 10 from
> .tricontour import * 11 from .tritools import *
>
> /home/yefremat/.local/lib/python2.7/site-
> packages/matplotlib/tri/triangulation.py in () 4 import six 5 \----> 6
> import matplotlib._tri as _tri 7 import matplotlib._qhull as _qhull 8 import
> numpy as np
>
> ImportError: /home/yefremat/.local/lib/python2.7/site-
> packages/matplotlib/_tri.so: undefined symbol:
> _ZNSt8__detail15_List_node_base9_M_unhookEv
Maybe I'm doing something wrong? Or maybe there is a way to turn off the GUI
support of matplotlib?
Thanks in advance.
Answer: Okay, the problem was the gcc version. While building and creating the
wheel, pip uses the system gcc (version 4.7.2 here). I'm using Python from a
virtualenv that was built with gcc 4.4.3, so the libstdc++ version seen by
IPython differs from the one pip linked against.
As always there are two solutions (or even more): pass the LD_PRELOAD
environment variable with the correct libstdc++ before starting IPython, or use
the same gcc version for creating the wheel as was used to build the
virtualenv. I preferred the latter.
Thank you all.
|
the code says it is an error type but
Question: I've got this code.
#!/usr/bin/python
from optparse import OptionParser #import the OptionParser object from this module
parser = OptionParser()
parser.add_option("-f", "--first", dest="meal", help="prix repas", type="float")
parser.add_option("-s", "--second", dest="tip", help="le tip", type="float")
parser.add_option("-t", "--third", dest="tax", help="tax", type="float")
(options, args) = parser.parse_args()
tax_value = options.meal * options.tax
meal_with_tax = tax_value + options.meal
tip_value = meal_with_tax * tip
if not (options.meal and options.tip):
parser.error("You need to supply an argument for -s")
print "le prix du repas est '{}'.".format(options.meal)
print "Le tip est de '{}'.".format(options.tip)
print "Le tip est de '{}'.".format(options.tip)
Each time I'm running it with the following command line
`./tip_re1_arg.py -s 5 2 3`
I've got this error `tax_value = options.meal * options.tax TypeError:
unsupported operand type(s) for *: 'NoneType' and 'NoneType'`
Why? It seems that I've done everything right in terms of types. Or did I?
Sorry, I'm very much of a beginner in Python.
Answer: You have explicitly configured the parser to expect the parameters as options:
./tip_re1_arg.py -f 5 -s 2 -t 3
./tip_re1_arg.py --first 5 --second 2 --third 3
Currently your positional arguments end up in `args`, so `options.meal` and `options.tax` keep their default value of `None`, which is exactly why the multiplication raises the `TypeError`.
|
Using an arbitrary maximum in range()
Question: Is there a way in Python to iterate over every integer until something
happens? Right now I tend to do one of the following:
for i in range(999999999):
...
if something:
break
or
i = 0
status = True
while status:
...
if something:
status = False
i += 1
Both of these methods work for what I'm doing, but I'm sure there's a better
way to do it. Please point me in the right direction.
Answer: Try `itertools.count`.
>>> import itertools
>>> for x in itertools.count():
... print x
... if x > 10: break
...
0
1
2
3
4
5
6
7
8
9
10
11
|
Using MPI4PY in FedoraScientific
Question: Recently, I downloaded and installed Fedora Scientific 20 as I was impressed
with the list of included software. My interest in the software is due to the
inclusion of the MPI framework. I was able to compile and execute a simple C
program using mpicc and mpiexec. However, I need some help using MPI4PY to
call OpenMPI using Python code.
At the terminal prompt, if I try:
> $ /lib64/openmpi/bin/mpiexec -n 2 python3 helloworld.py
The Traceback reports that an
> ImportError: No module named 'mpi4py'
has been raised. The helloworld.py program was an example found online with
line 6 being `from mpi4py import MPI`.
Since Apper indicates that mpi4py has been installed for both Python2 and
Python3 for OpenMPI as part of the installation of Fedora Scientific, I'm not
sure what might be wrong. Could somebody please advise as to how to use this
package?
Answer: It sounds like there is something wrong with your environment. Perhaps
mpi4py, although you have confirmed it is installed, is installed in a strange
place. Would setting PYTHONPATH help?
<https://docs.python.org/2/using/cmdline.html#environment-variables>
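One quick diagnostic (a small sketch, nothing Fedora-specific) is to have the mpiexec-launched interpreter print its executable and search path, then check whether the mpi4py install directory appears there:
    import sys
    print(sys.executable)
    for p in sys.path:
        print(p)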
|
How to use Python Unittest TearDownClass with TestResult.wasSuccessful()
Question: I wanted to call `setUpClass` and `tearDownClass` so that `setup` and
`teardown` would be performed only once for the whole test class. However, it keeps
failing for me when I call `tearDownClass`. I only want to record 1 test
result, either PASS if both tests passed or FAIL if both tests failed. If I
call only `setup` and `tearDown` then all works fine:
Calling `setUpClass` and `tearDownClass`:
#!/usr/bin/python
import datetime
import itertools
import logging
import os
import sys
import time
import unittest
LOGFILE = 'logfile.txt'
class MyTest(unittest.TestCase):
global testResult
testResult = None
@classmethod
def setUpClass(self):
## test result for DB Entry:
self.dbresult_dict = {
'SCRIPT' : 'MyTest.py',
'RESULT' : testResult,
}
def test1(self):
expected_number = 10
actual_number = 10
self.assertEqual(expected_number, actual_number)
def test2(self):
expected = True
actual = True
self.assertEqual(expected, actual)
def run(self, result=None):
self.testResult = result
unittest.TestCase.run(self, result)
@classmethod
def tearDownClass(self):
ok = self.testResult.wasSuccessful()
errors = self.testResult.errors
failures = self.testResult.failures
if ok:
self.dbresult_dict['RESULT'] = 'Pass'
else:
logging.info(' %d errors and %d failures',
len(errors), len(failures))
self.dbresult_dict['RESULT'] = 'Fail'
if __name__ == '__main__':
logger = logging.getLogger()
logger.addHandler(logging.FileHandler(LOGFILE, mode='a'))
stderr_file = open(LOGFILE, 'a')
runner = unittest.TextTestRunner(verbosity=2, stream=stderr_file, descriptions=True)
itersuite = unittest.TestLoader().loadTestsFromTestCase(MyTest)
runner.run(itersuite)
sys.exit()
unittest.main(module=itersuite, exit=True)
stderr_file.close()
Error:
test1 (__main__.MyTest) ... ok
test2 (__main__.MyTest) ... ok
ERROR
===================================================================
ERROR: tearDownClass (__main__.MyTest)
-------------------------------------------------------------------
Traceback (most recent call last):
File "testTearDownClass.py", line 47, in tearDownClass
ok = self.testResult.wasSuccessful()
AttributeError: type object 'MyTest' has no attribute 'testResult'
----------------------------------------------------------------------
Ran 2 tests in 0.006s
FAILED (errors=1)
Answer: Change `tearDownClass(self)` to `tearDownClass(cls)` and `setUpClass(self)` to
`setUpClass(cls)`.
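If the attribute error persists after the rename, note that the `run()` override stores the result on each instance, which the classmethod cannot see. A minimal sketch (keeping the question's `run()` override but stashing the shared result on the class; `classResult` is just an illustrative name):
    class MyTest(unittest.TestCase):
        # ... setUpClass, test1 and test2 as in the question ...
        def run(self, result=None):
            # keep the shared TestResult on the class, not the instance
            type(self).classResult = result
            unittest.TestCase.run(self, result)
        @classmethod
        def tearDownClass(cls):
            ok = cls.classResult.wasSuccessful()
            cls.dbresult_dict['RESULT'] = 'Pass' if ok else 'Fail'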
|
Calling a thread within a module in python
Question: I have a log in screen which calls a module from a file which is
'HomescreenscrollCopy31' . This works perfectly fine.
The only issue is that there is a thread within the module which does not seem
to operate.
Here is the code in the login screen:
def login():
content = nameentry.get()
content1 = IDentry.get()
if content == "1" and content1 == "1":
root.destroy()
from HomescreenscrollCopy31 import FullScreenApp
The HomescreenscrollCopy31.py file looks like this,
class FullScreenApp(object):
def __init__(self, master, **kwargs):
self.master=master
pad=3
self._geom='200x200+0+0'
master.geometry("{0}x{1}+0+0".format(
master.winfo_screenwidth()-pad, master.winfo_screenheight()-pad))
master.bind('<Escape>',self.toggle_geom)
def toggle_geom(self,event):
geom=self.master.winfo_geometry()
print(geom,self._geom)
self.master.geometry(self._geom)
self._geom=geom
def trick(threadName, sleepTime):
while 1 < 2:
.....
try:
t = threading.Thread(target=trick, args=("Trick running", 5))
t.daemon = True
t.start()
except: Exception,
print "start up"
It seems that it runs through HomescreenscrollCopy31.py and politely
prints the "start up" only to tease me, as it never actually runs the thread
that should print the "Trick running" output.
Why is this?
Answer: I think it's better to write it this way.
def login():
content = nameentry.get()
content1 = IDentry.get()
if content == "1" and content1 == "1":
root.destroy()
from HomescreenscrollCopy31 import FullScreenApp
myFullScreenApp = FullScreenApp(...) # you need to give it some input, like master, to initialize an instance for FullScreenApp class
class FullScreenApp(object):
def __init__(self, master, **kwargs):
self.master=master
pad=3
self._geom='200x200+0+0'
master.geometry("{0}x{1}+0+0".format(
master.winfo_screenwidth()-pad, master.winfo_screenheight()-pad))
master.bind('<Escape>',self.toggle_geom)
# these code will be executed when instance initializing
t = threading.Thread(target=self.trick, args=["Trick running", 5])
t.daemon = True
t.start()
def toggle_geom(self,event):
geom=self.master.winfo_geometry()
print(geom,self._geom)
self.master.geometry(self._geom)
self._geom=geom
def trick(self, threadName, sleepTime):
while 1 < 2:
.....
1. You just need to initialize an instance of the class to get the thread running. Doing it through a staticmethod would make the code more complicated, which I don't recommend.
2. When invoking a new thread, pass the arguments through `args`; a list (`[]`) or a tuple (`()`) both work.
3. You can use `while 1` instead of `while 1 < 2`. Anyway, this is not the major issue. :)
Hope it helps.
|
tkinter variable for drop down selection empty
Question: I tried to program an app in tkinter that would load random lines from a file
you select from a pull down menu and display the selected line in a text
window.
It seems like the variable "var" in `insert_text` does not return the selected
"option" but rather an "empty" string resulting in a the following error:
> "File not found error" (FileNotFoundError: [Errno2] No such file or
> directory: '').
Please help!
#!/usr/bin/env python
# Python 3
import tkinter
from tkinter import ttk
import random
class Application:
def __init__(self, root):
self.root = root
self.root.title('Random Stuff')
ttk.Frame(self.root, width=450, height=185).pack()
self.init_widgets()
var = tkinter.StringVar(root)
script = var.get()
choices = ['option1', 'option2', 'option3']
option = tkinter.OptionMenu(root, var, *choices)
option.pack(side='right', padx=10, pady=10)
def init_widgets(self):
ttk.Button(self.root, command=self.insert_txt, text='Button', width='10').place(x=10, y=10)
self.txt = tkinter.Text(self.root, width='45', height='5')
self.txt.place(x=10, y=50)
def insert_txt(self):
var = tkinter.StringVar(root)
name = var.get()
line = random.choice(open(str(name)).readlines())
self.txt.insert(tkinter.INSERT, line)
if __name__ == '__main__':
root = tkinter.Tk()
Application(root)
root.mainloop()
Answer: That's because `insert_txt` creates a brand-new, empty `StringVar` instead
of reading the one attached to the OptionMenu, so `get()` returns an empty string.
The OptionMenu takes a `command` parameter that calls the specified method
every time another option is selected. Now, you can call a method like this,
replacing your `insert_txt`:
def __init__(self, root):
# ...
self.var = tkinter.StringVar()
self.options = tkinter.OptionMenu(root, self.var, *choices, command=self.option_selected)
# ...
def option_selected(self, event):
name = self.var.get()
# The stuff you already had
Additionally, you have to empty the `Text` widget, otherwise the previous text
would stay. I think the `Entry` widget is better for that, too.
|
Select specific columns in a text file by their column names and extract their contents
Question: I am a beginner in Python and I am finding it very difficult to come up with
the correct solution for this problem. I glanced through all the similar posts
in stackoverflow and couldn't find the solution.
I have a ".ext" file. I need to skip first two lines. The third line has the
column names for the table.
I need to search for the columns omega(n,n) and Sigma(n,n) column names where
n can be any number (Eg:sigma(1,1), omega(2,2)). Analyse the columns with
column names "sigma(n,n)" and "omega(n,n)" and check the values of these
columns for the row starting with '-1000000000'.If the value is <0.001, output
"true".
my code is:
import numpy as np
array=[]
array1=[]
b = np.genfromtxt(r'C:/nm73/proj/one.ext', delimiter=' ', names=True,dtype=None)[3:,:]
for n in range(len(b)-1):
array=b['Sigma(n,n)']
array1=b['omega(n,n)']
I don't know how to check the elements.
The one.ext file is as shown below. I apologize if the file is not in the correct
format; I am new to Stack Overflow. Any help is highly appreciated.
TABLE NO. 1: First Order Conditional Estimation with Interaction: Goal Function=MINIMUM VALUE OF OBJECTIVE FUNCTION: Problem=1 Subproblem=0 Superproblem1=0 Iteration1=0 Superproblem2=0 Iteration2=0
ITERATION THETA1 THETA2 SIGMA(1,1) SIGMA(2,1) SIGMA(2,2) OMEGA(1,1) OMEGA(2,1) OMEGA(2,2) OBJ
0 2.50000E-01 1.00000E+01 1.00000E-01 0.00000E+00 1.00000E-01 1.00000E-01 0.00000E+00 1.00000E-01 9436.65314342255
5 2.34948E-01 3.67675E+00 9.04159E-02 0.00000E+00 2.74933E+00 1.98686E-01 0.00000E+00 1.75724E-01 8745.97204613658
10 2.11090E-01 4.30565E+00 1.34312E-01 0.00000E+00 1.12619E+00 1.32484E-01 0.00000E+00 1.36824E-02 8595.43106384756
15 2.10696E-01 4.35495E+00 1.23897E-01 0.00000E+00 1.29124E+00 1.28600E-01 0.00000E+00 1.24441E-02 8591.51400321872
20 2.11129E-01 4.36325E+00 1.24283E-01 0.00000E+00 1.28733E+00 1.28815E-01 0.00000E+00 1.24211E-02 8591.50022332770
-1000000000 2.11129E-01 4.36325E+00 1.24283E-01 0.00000E+00 1.28733E+00 1.28815E-01 0.00000E+00 1.24211E-02 8591.50022332770
-1000000001 8.07565E-03 6.97861E-02 5.28558E-03 1.00000E+10 4.20370E-01 1.78706E-02 1.00000E+10 3.15324E-03 0.000000000000000E+000
-1000000004 0.00000E+00 0.00000E+00 3.52538E-01 0.00000E+00 1.13460E+00 3.58908E-01 0.00000E+00 1.11450E-01 0.000000000000000E+000
-1000000005 0.00000E+00 0.00000E+00 7.49648E-03 1.00000E+10 1.85250E-01 2.48957E-02 1.00000E+10 1.41465E-02 0.000000000000000E+000
Answer: If you don't specify `delimiter`, then all _consecutive_ whitespace will be
understood to act as one delimiter. If you specify `delimiter=' '` then
literally _each_ space will act as a delimiter. That leads to a ValueError,
since `genfromtxt` will expect the wrong number of columns.
So if instead you use:
In [396]: b = np.genfromtxt(filename, names=True, dtype=None, skip_header=1)
Then you'll end up with a structured array like this:
In [397]: b
Out[397]:
array([(0, 0.25, 10.0, 0.1, 0.0, 0.1, 0.1, 0.0, 0.1, 9436.65314342255),
(5, 0.234948, 3.67675, 0.0904159, 0.0, 2.74933, 0.198686, 0.0, 0.175724, 8745.97204613658),
(10, 0.21109, 4.30565, 0.134312, 0.0, 1.12619, 0.132484, 0.0, 0.0136824, 8595.43106384756),
(15, 0.210696, 4.35495, 0.123897, 0.0, 1.29124, 0.1286, 0.0, 0.0124441, 8591.51400321872),
(20, 0.211129, 4.36325, 0.124283, 0.0, 1.28733, 0.128815, 0.0, 0.0124211, 8591.5002233277),
(-1000000000, 0.211129, 4.36325, 0.124283, 0.0, 1.28733, 0.128815, 0.0, 0.0124211, 8591.5002233277),
(-1000000001, 0.00807565, 0.0697861, 0.00528558, 10000000000.0, 0.42037, 0.0178706, 10000000000.0, 0.00315324, 0.0),
(-1000000004, 0.0, 0.0, 0.352538, 0.0, 1.1346, 0.358908, 0.0, 0.11145, 0.0),
(-1000000005, 0.0, 0.0, 0.00749648, 10000000000.0, 0.18525, 0.0248957, 10000000000.0, 0.0141465, 0.0)],
dtype=[('ITERATION', '<i4'), ('THETA1', '<f8'), ('THETA2', '<f8'), ('SIGMA11', '<f8'), ('SIGMA21', '<f8'), ('SIGMA22', '<f8'), ('OMEGA11', '<f8'), ('OMEGA21', '<f8'), ('OMEGA22', '<f8'), ('OBJ', '<f8')])
Notice the `dtype` at the end. The column names do not contain parentheses or
commas, so instead of `SIGMA(1,1)` you have `SIGMA11`. You can access this
column like this:
In [398]: b['SIGMA11']
Out[398]:
array([ 0.1 , 0.0904159 , 0.134312 , 0.123897 , 0.124283 ,
0.124283 , 0.00528558, 0.352538 , 0.00749648])
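From there, a sketch of the check described in the question: pick the row whose ITERATION is -1000000000 and test every SIGMA/OMEGA column (note that the column names in the dtype above have lost their parentheses):
    row = b[b['ITERATION'] == -1000000000]
    for name in b.dtype.names:
        if name.startswith('SIGMA') or name.startswith('OMEGA'):
            if row[name][0] < 0.001:
                print(name + ': true')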
|
Pull large amounts of data from a remote server, into a DataFrame
Question: To give as much context as I can / is needed, I'm trying to pull some data
stored on a remote postgres server (heroku) into a pandas DataFrame, using
psycopg2 to connect.
I'm interested in two specific tables, _users_ and _events_ , and the
connection works fine, because when pulling down the user data
import pandas.io.sql as sql
# [...]
users = sql.read_sql("SELECT * FROM users", conn)
after waiting a few seconds, the DataFrame is returned as expected.
<class 'pandas.core.frame.DataFrame'>
Int64Index: 67458 entries, 0 to 67457
Data columns (total 35 columns): [...]
Yet when trying to pull the bigger, heavier _events_ data straight from
ipython, after a long time, it just crashes:
In [11]: events = sql.read_sql("SELECT * FROM events", conn)
vagrant@data-science-toolbox:~$
and when trying from an iPython notebook I get the _Dead kernel_ error
> The kernel has died, would you like to restart it? If you do not restart the
> kernel, you will be able to save the notebook, but running code will not
> work until the notebook is reopened.
* * *
**Update #1:**
To get a better idea of the size of the _events_ table I'm trying to pull in,
here are the number of records and the number of attributes for each:
In [11]: sql.read_sql("SELECT count(*) FROM events", conn)
Out[11]:
count
0 2711453
In [12]: len(sql.read_sql("SELECT * FROM events LIMIT 1", conn).columns)
Out[12]: 18
* * *
**Update #2:**
Memory is definitely a bottleneck for the current implementation of
`read_sql`: when pulling down the _events_ and trying to run another instance
of iPython the result is
vagrant@data-science-toolbox:~$ sudo ipython
-bash: fork: Cannot allocate memory
* * *
**Update #3:**
I first tried with a `read_sql_chunked` implementation that would just return
the array of partial DataFrames:
def read_sql_chunked(query, conn, nrows, chunksize=1000):
start = 0
dfs = []
while start < nrows:
df = pd.read_sql("%s LIMIT %s OFFSET %s" % (query, chunksize, start), conn)
start += chunksize
dfs.append(df)
print "Events added: %s to %s of %s" % (start-chunksize, start, nrows)
# print "concatenating dfs"
return dfs
event_dfs = read_sql_chunked("SELECT * FROM events", conn, events_count, 100000)
and that works well, but when trying to concatenate the DataFrames, the kernel
dies again.
And this is after giving the VM 2GB of RAM.
Based on Andy's explanation of `read_sql` vs. `read_csv` difference in
implementation and performance, the next thing I tried was to append the
records into a CSV and then read them all into a DataFrame:
event_dfs[0].to_csv(path+'new_events.csv', encoding='utf-8')
for df in event_dfs[1:]:
df.to_csv(path+'new_events.csv', mode='a', header=False, encoding='utf-8')
Again, the writing to CSV completes successfully – a 657MB file – but reading
from the CSV never completes.
How can one approximate how much RAM would be sufficient to read say a 657MB
CSV file, since 2GB seem not to be enough?
* * *
Feels like I'm missing some fundamental understanding of either DataFrames or
psycopg2, but I'm stuck, I can't even pinpoint the bottleneck or where to
optimize.
What's the proper strategy to pull larger amounts of data from a remote
(postgres) server?
Answer: I suspect there are a couple of (related) things at play here causing slowness:
1. `read_sql` is written in python so it's a little slow (especially compared to `read_csv`, which is written in cython - and carefully implemented for speed!) and it relies on sqlalchemy rather than some (potentially much faster) C-DBAPI. _The impetus to move to sqlalchemy was to make that move easier in the future (as well as cross-sql-platform support)._
2. You may be running out of memory as too many python objects are in memory (this is related to not using a C-DBAPI), but potentially could be addressed...
I think the immediate solution is a chunk-based approach (and there is a
[feature request](https://github.com/pydata/pandas/issues/2908) to have this
work natively in pandas `read_sql` and `read_sql_table`).
EDIT: As of Pandas v0.16.2 this chunk based approach is natively implemented
in `read_sql`.
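A minimal sketch of that native chunked read: `chunksize` makes `read_sql` return an iterator of DataFrames, so each piece can be handled without holding the whole table in memory (`process` is just a placeholder for whatever per-chunk work you need):
    for chunk in pd.read_sql("SELECT * FROM events", conn, chunksize=100000):
        process(chunk)  # e.g. aggregate or append to disk instead of concatenating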
* * *
Since you're using postgres you have access to the [LIMIT and OFFSET
queries](http://www.postgresql.org/docs/8.2/static/queries-limit.html), which
makes chunking quite easy. (Am I right in thinking these aren't available in
all sql languages?)
First, get the number of rows (or an
[estimate](https://wiki.postgresql.org/wiki/Count_estimate)) in your table:
nrows = con.execute('SELECT count(*) FROM users').fetchone()[0] # also works with an sqlalchemy engine
Use this to iterate through the table (for debugging you could add some print
statements to confirm that it was working/not crashed!) and then combine the
result:
def read_sql_chunked(query, con, nrows, chunksize=1000):
    start = 0
    dfs = []  # Note: could probably make this neater with a generator/for loop
    while start < nrows:
        df = pd.read_sql("%s LIMIT %s OFFSET %s" % (query, chunksize, start), con)
        dfs.append(df)
        start += chunksize  # advance the offset, otherwise this would loop forever
    return pd.concat(dfs, ignore_index=True)
_Note: this assumes that the database fits in memory! If it doesn't you'll
need to work on each chunk (mapreduce style)... or invest in more memory!_
|
How to call a python script with ajax in cherrypy app
Question: I am trying to get the output from a python script and put it into a table in
the html of my cherrypy app.
Example app:
import string, os
import cherrypy
file_path = os.getcwd()
html = """<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="content-type">
<title>CCMF</title>
<link rel='shortcut icon' type='image/x-icon' href='img/favicon.ico' />
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script>
function b1() {
var request = $.ajax({
url: "b1.py",
type: "POST",
dataType: "text"
});
request.done(function(msg) {
$("#output").html(msg);
});
request.fail(function(jqXHR, textStatus) {
alert( "Request failed: " + textStatus );
});
}
</script>
</head>
<button onclick="b1()">call b1.py</button>
...
<td id = "output"; style="vertical-align: top; height: 90%; width: 100%;">
<--output goes here -->
</td>
...
</html>
"""
class ccmf(object):
@cherrypy.expose
def index(self):
return html
if __name__ == '__main__':
cherrypy.server.socket_host = "127.0.0.1"
cherrypy.server.socket_port = 8084
config = {
"/img": {
"tools.staticdir.on": True,
"tools.staticdir.dir": os.path.join(file_path, "img"),
}
}
cherrypy.tree.mount(ccmf(), "/", config=config)
cherrypy.engine.start()
cherrypy.engine.block()
and here's the example python script b1.py:
def b1():
op = "ajax b1 pushed"
print op
return op
b1()
The ajax get's called but returns the failure alert. I have tried GET, POST,
"text", "html", b1.py is in the same directory, no joy. All currently running
on my local box.
Any hints greatly appreciated!
Answer: You are completely misunderstanding how modern routing (CherryPy's, for
instance) works. Unlike the outdated approaches commonly employed with CGI
and Apache's mod_* (mod_php, mod_python, etc.), where the URL points directly
at the file containing the script, modern routing is an application-level
activity.
Your application receives all requests and dispatches them according to the
established method. CherryPy in that sense has two major approaches: [built-in
object tree
dispatcher](https://cherrypy.readthedocs.org/en/3.2.6/concepts/dispatching.html#default-
dispatcher) and [Routes
adapter](https://cherrypy.readthedocs.org/en/3.2.6/concepts/dispatching.html#other-
dispatchers). For most simple and mid-level cases the built-in dispatcher is
sufficient.
Basically it can look like this.
_app.py_
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import cherrypy
from cherrypy.lib.static import serve_file
path = os.path.abspath(os.path.dirname(__file__))
config = {
'global' : {
'server.socket_host' : '127.0.0.1',
'server.socket_port' : 8080,
'server.thread_pool' : 8
}
}
class App:
@cherrypy.expose
def index(self):
return serve_file(os.path.join(path, 'index.html'))
@cherrypy.expose
@cherrypy.tools.json_out()
def getData(self):
return {
'foo' : 'bar',
'baz' : 'another one'
}
if __name__ == '__main__':
cherrypy.quickstart(App(), '/', config)
_index.html_
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html; charset=utf-8'>
<title>CCMF</title>
<script type='text/javascript' src='http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js'></script>
<script type='text/javascript'>
$(document).ready(function()
{
$('button').on('click', function()
{
var request = $.ajax({'url': '/getData'});
request.done(function(response)
{
$('#foo').text(response.foo);
$('#baz').text(response.baz);
});
request.fail(function(jqXHR, textStatus)
{
alert('Request failed: ' + textStatus);
});
})
});
</script>
</head>
<body>
<button>make ajax call</button>
<h1>Foo</h1>
<div id='foo'></div>
<h1>Baz</h1>
<div id='baz'></div>
</body>
</html>
Here's [the runnable](http://runnable.com/VAbwozeDTpkHhLoJ/how-to-call-a-
python-script-with-ajax-in-cherrypy-app-for-stackoverflow) just in case.
|
Best way to store JSON data in a Python Module
Question: I have a Python module I'm developing that's includes a big JSON file as part
of the data it relies on. I want Python users to be able to import the JSON
file as a Python variable and for users of other programming languages, to be
able to use the JSON file directly.
So, what I'm trying to figure out is what's the best way to make the JSON
object "importable". Right now, my solution is in `__init__.py`:
import json
import os
with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'seals.json')) as f:
seals_data = json.load(f)
Then Python devs can call:
from my_module import seals_data
And it more or less works, but this feels weird to me, and I want to make sure
there isn't a cleaner way to make the json importable.
Answer: Your option is probably the best way to store the data as a python structure
in a module for someone to use. The only downside I see with it is that anything
importing the module has to wait for it to do IO, which is not ideal, especially
if the module is ever used for things that do not involve that data.
I would use a function (or class) to load that data lazily, deferring the IO hit
until the data is actually requested.
Here is an example:
    import json
    import os
    SEALS_DATA = None
    def get_seals_data():
        global SEALS_DATA
        if SEALS_DATA is None:
            path = os.path.join(os.path.dirname(__file__), 'seals.json')
            with open(path) as f:
                SEALS_DATA = json.load(f)
        return SEALS_DATA
After they import the module there is no cost until the first time they
call get_seals_data(). Any call after that will already have the data
loaded.
|
Python: Possible to use urlparse to parse domain from cgi bin URL
Question: I have the following **input string** :
/cgi-bin/ivw/CP/dbb_ug_sp;?r=http%3A//www.some-super-domain.de/forum/viewtopic.php%3Ff%3D2%26t%3D18564%26start%3D75&d=76756.76050130278
I wanted to use `urlparse()` to **get the domain** , but getting the `netloc`
attribute returns an empty string in this case.
How can I extract the domain (bestcase: without www)?
**Output wanted:** some-super-domain.de
Please note: Sometimes there is **no www** in above input string!
Answer: I think `urlparse` alone doesn't give you what you want; you can use this instead (with `s` being your input string):
    import re
    m = re.search(r'(?<=www\.)[a-zA-Z\-]+\.[a-zA-Z]+', s)
    print m.group(0)
result:
some-super-domain.de
try it [HERE](http://regex101.com/#python) !
so if you use `urlparse` the result is this :
s='/cgi-bin/ivw/CP/dbb_ug_sp;?r=http%3A//www.some-super-domain.de/forum/viewtopic.php%3Ff%3D2%26t%3D18564%26start%3D75&d=76756.76050130278'
from urlparse import urlparse
o = urlparse(s)
print o
result:
ParseResult(scheme='', netloc='', path='/cgi-bin/ivw/CP/dbb_ug_sp', params='', query='r=http%3A//www.some-super-domain.de/forum/viewtopic.php%3Ff%3D2%26t%3D18564%26start%3D75&d=76756.76050130278', fragment='')
So from this result you could get at the domain through `o.query`, but that
isn't what you want; it contains extra characters!
>>>print o.query
>>>r=http%3A//www.some-super-domain.de/forum/viewtopic.php%3Ff%3D2%26t%3D18564%26start%3D75&d=76756.76050130278
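Since the target URL is itself percent-encoded inside the `r` query parameter, another sketch (assuming the inner URL always arrives in `r`) is to decode that parameter and parse it a second time; this also covers inputs without `www`:
    from urlparse import urlparse, parse_qs
    inner = parse_qs(urlparse(s).query)['r'][0]
    netloc = urlparse(inner).netloc
    if netloc.startswith('www.'):
        netloc = netloc[len('www.'):]
    print netloc  # some-super-domain.de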
|
Python: Convert timedelta to int in a dataframe
Question: I would like to create a column in a pandas data frame that is an integer
representation of the number of days in a timedelta column. Is it possible to
use 'datetime.days' or do I need to do something more manual?
**timedelta column**
> 7 days, 23:29:00
**day integer column**
> 7
Answer: You could do this, where `td` is your series of timedeltas. The division
converts the nanosecond deltas into day deltas, and the conversion to int
truncates to whole days.
import numpy as np
(td / np.timedelta64(1, 'D')).astype(int)
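If `td` is a timedelta64-backed Series, newer pandas versions (treat the exact version requirement as an assumption about your install) also expose the whole-day component directly through the `dt` accessor:
    td.dt.days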
|
Python 3.4 strptime() not working
Question: I am trying to convert a string to datetime and it isn't working...
self.loadsList[loadID][5] = datetime.strptime(
self.loadsList[loadID][5]+" "+self.loadsList[loadID][6], "%x %X %z")
and it raises a Value Error.
ValueError: time data '11/08/2014 04:00:00 -0500' does not match format '%x %X %z'
What am I doing wrong? Thanks! (Python 3.4)
Answer: The default `%x` format here is the equivalent of `%m/%d/%y`, where `%y` matches a
_two digit year_:
>>> import locale
>>> locale.nl_langinfo(locale.D_FMT)
'%m/%d/%y'
Your input uses a 4 digit year instead:
>>> from datetime import datetime
>>> datetime.strptime('11/08/2014', '%x')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python3.4/_strptime.py", line 340, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 14
I'd not use `%x` or `%X`, just spell out the date and time formats; based on
your input and the use of `%x` that'd be:
'%m/%d/%Y %H:%M:%S %z'
Demo:
>>> datetime.strptime('11/08/2014 04:00:00 -0500', '%m/%d/%Y %H:%M:%S %z')
datetime.datetime(2014, 11, 8, 4, 0, tzinfo=datetime.timezone(datetime.timedelta(-1, 68400)))
It is not clear if your input represents the 8th day of November or the 11th
day of August; your use of `%x` suggests the former but I suspect you should
instead interpret the value the other way around:
'%d/%m/%Y %H:%M:%S %z'
The alternative would be to switch your Python locale from the default `C` to
`en_US` where the year _would_ use 4 digits.
|
LASSO regression result different in Matlab and Python
Question: I am now trying to learn the ADMM algorithm (Boyd 2010) for LASSO regression.
I found out a very good example on this
[page](http://www.simonlucey.com/lasso-using-admm/).
The matlab code is shown
[here](https://dl.dropboxusercontent.com/u/22893361/code/lasso_admm.m).
I tried to convert it into python language so that I could develop a better
understanding.
Here is the code:
import scipy.io as io
import scipy.sparse as sp
import scipy.linalg as la
import numpy as np
def l1_norm(x):
return np.sum(np.abs(x))
def l2_norm(x):
return np.dot(x.ravel().T, x.ravel())
def fast_threshold(x, threshold):
return np.multiply(np.sign(x), np.fmax(abs(x) - threshold, 0))
def lasso_admm(X, A, gamma):
c = X.shape[1]
r = A.shape[1]
C = io.loadmat("C.mat")["C"]
L = np.zeros(X.shape)
rho = 1e-4
maxIter = 200
I = sp.eye(r)
maxRho = 5
cost = []
for n in range(maxIter):
B = la.solve(np.dot(A.T, A) + rho * I, np.dot(A.T, X) + rho * C - L)
C = fast_threshold(B + L / rho, gamma / rho)
L = L + rho * (B - C);
rho = min(maxRho, rho * 1.1);
cost.append(0.5 * l2_norm(X - np.dot(A, B)) + gamma * l1_norm(B))
cost = np.array(cost).ravel()
return B, cost
data = io.loadmat("lasso.mat")
A = data["A"]
X = data["X"]
B, cost = lasso_admm(X, A, gamma)
I found that the loss function did not converge after 100+ iterations and that
matrix B did not tend to be sparse; the MATLAB code, on the other hand, worked
in different situations.
I have checked with different input data and compared with the MATLAB outputs,
yet I still could not find any hints.
Could anybody give it a try?
Thank you in advance.
Answer: My gut feeling as to why this is not working to your expectations is your
`la.solve()` call. `la.solve()` assumes that the matrix is square and full rank
(i.e. invertible). When you use `\` in MATLAB, what MATLAB does
under the hood is solve the system exactly if the matrix is full rank.
However, should the matrix not be this way (i.e. overdetermined or
underdetermined), the solution to the system is solved by least-squares
instead. I would suggest you modify that call so that you're using
[`lstsq`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html)
instead of `solve`. As such, simply replace your `la.solve()` call with this:
sol = la.lstsq(np.dot(A.T, A) + rho * I, np.dot(A.T, X) + rho * C - L)
B = sol[0]
Note that `lstsq` returns a whole bunch of outputs in a 4-element tuple, in
addition to the solution. The solution of the system is in the first element
of this tuple, which is why I did `B = sol[0]`. What is also returned are the
sums of residues (second element), the rank (third element) and the singular
values of the matrix you are trying to invert when solving (fourth element).
* * *
Also some peculiarities that I have noticed:
* One thing that may or may not matter is the random generation of numbers. MATLAB and Python NumPy generate random numbers differently, so this may or may not affect your solution.
* In MATLAB, Simon Lucey's code initializes `L` to be a zero matrix such that `L = zeros(size(X));`. However, in your Python code, you initialize `L` to be this way: `L = np.zeros(C.shape);`. You are using different variables to ascertain the shape of `L`. Obviously, the code wouldn't work if there was a dimension mismatch, but that's another thing that's different. Not sure if this will affect your solution either.
* * *
So far I haven't found anything out of the ordinary, so try that fix and let
me know.
|
Should import be inside or outside a Python class?
Question: # Overview
Suppose I'm building a class for general usage: I might need to import it
wherever, use it in a couple other files, etc. Should the import go before the
class, as:
import foo
class Bar():
def __init__(self):
foo.spam()
Or inside the `__init__` method, as:
class Bar():
def __init__(self):
import foo
foo.spam()
# My Analysis
## Outside
\+ Brings the `foo` into the global namespace for use throughout `Bar`
~~\- Importing`Bar` also requires you to manually import `foo`~~
(@MartijnPieters _et al._)
## Inside
\+ Avoids loading `foo` before you actually need to
\+ `foo` loads when `Bar` is instantiated ~~imported (and used)~~ (@BrenBarn)
\- `foo` is unavailable elsewhere
Answer: Usually all imports go at the top of the module. That makes it easy to see the
dependencies of a module at a glance, either by visual inspection or in a code
checker like pyflakes. Your assumption that "importing `Bar` also requires you
to manually import `foo`" is false.
The only times you would put imports inside functions or methods are when the
imports are very resource-intensive (e.g., slow) or unreliable (e.g.,
optional dependencies that may not be installed, platform-specific modules or
modules that tend to break), and client code is not expected to always call
the functions in question.
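As a minimal sketch of that exceptional case (matplotlib here is purely an illustrative heavy, optional dependency, not something from the question):
    def plot_results(data):
        # deferred import: only pay the import cost, and require the
        # dependency, when this function is actually called
        import matplotlib.pyplot as plt
        plt.plot(data)
        plt.show()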
|
OSError: [WinError 193] %1 is not a valid Win32 application
Question: I am trying to call a python file "hello.py" from within the python
interpreter with subprocess. But I am unable to resolve this error. [Python
3.4.1].
import subprocess
subprocess.call(['hello.py', 'htmlfilename.htm'])
Traceback (most recent call last):
File "<pyshell#42>", line 1, in <module>
subprocess.call(['hello.py', 'htmlfilename.htm'])
File "C:\Python34\lib\subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python34\lib\subprocess.py", line 858, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1111, in _execute_child
startupinfo)
OSError: [WinError 193] %1 is not a valid Win32 application
Also is there any alternate way to "call a python script with arguments" other
than using subprocess? Thanks in advance.
Answer: The error is pretty clear. The file `hello.py` is not an executable file. You
need to specify the executable:
subprocess.call(['python.exe', 'hello.py', 'htmlfilename.htm'])
You'll need `python.exe` to be visible on the search path, or you could pass
the full path to the executable file that is running the calling script:
import sys
subprocess.call([sys.executable, 'hello.py', 'htmlfilename.htm'])
|
Converting values within a dictionary from strings to floats in python
Question: I have a dictionary that looks something like this that was imported from a
csv file.
X = {'0':['1.2', '3.2', '4.5', '']}
I would like for all of the values within the dictionary to be floats instead
of strings but am unsure of how to do this due to the empty quotes at the tail
end of the dictionary. Any help is greatly appreciated.
Answer: A dict comprehension can convert both the keys and the values, skipping
the empty strings:
    x2 = {float(k): [float(x) for x in v if x != ''] for k, v in X.iteritems()}
|
SIGNALs are not working in PySide
Question: I wrote a simple application in Python and PySide. When I run it, SIGNALs are
not working. The application starts without errors.
from PySide.QtCore import *
from PySide.QtGui import *
import sys
class Form(QDialog):
def __init__(self, parent=None):
super(Form, self).__init__(parent)
dial = QDial()
dial.setNotchesVisible(True)
spinbox = QSpinBox()
layout = QHBoxLayout()
layout.addWidget(dial)
layout.addWidget(spinbox)
self.setLayout(layout)
self.connect(dial, SIGNAL("valueChaged(int)"), spinbox.setValue)
self.connect(spinbox, SIGNAL("valueChaged(int)"), dial.setValue)
self.setWindowTitle("Signals and Slots")
# END def __init__
# END class Form
def main():
app = QApplication(sys.argv)
form = Form()
form.show()
app.exec_()
# END def main
if __name__ == '__main__':
main()
# END if
I am using:
Pyside 1.2.2; Python 2.7.6; OS Centos; Windows 7
I am running the application with:
Sublime Text 3 and Eclipse Luna;
How can I make SIGNALs working?
Answer: Your signal name is incorrect;
**Incorrect :**
valueChaged (int)
**[Correct](http://pyqt.sourceforge.net/Docs/PyQt4/qabstractslider.html#valueChanged)
:**
valueChanged (int)
Test it, it works fine:
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
class QFormDialog (QDialog):
def __init__(self, parent = None):
super(QFormDialog, self).__init__(parent)
self.myQial = QDial()
self.myQSpinbox = QSpinBox()
self.myQHBoxLayout = QHBoxLayout()
self.myQial.setNotchesVisible(True)
self.myQHBoxLayout.addWidget(self.myQial)
self.myQHBoxLayout.addWidget(self.myQSpinbox)
self.setLayout(self.myQHBoxLayout)
self.connect(self.myQial, SIGNAL('valueChanged(int)'), self.myQSpinbox.setValue)
self.connect(self.myQSpinbox, SIGNAL('valueChanged(int)'), self.myQial.setValue)
self.setWindowTitle('Signals and Slots')
if __name__ == '__main__':
myQApplication = QApplication(sys.argv)
myQFormDialog = QFormDialog()
myQFormDialog.show()
myQApplication.exec_()
Note: PyQt4 and PySide work the same way here.
|
Determine Size of Pickled Datetime
Question: I am currently pickling a python `datetime` to be passed to a task via celery,
and am running into memory issues. I'd like to find a way to determine the
resulting size of pickling a `datetime` object so that I can compare it to
pickling the unix timestamp. I realize the timestamp will be smaller, but I
specifically want to compare the sizes of both pickled objects.
Answer: A pickled object is just an array of bytes (think ASCII encoded string). So,
use `dumps` to get the bytes and look at the length. On my machine, a pickled
`datetime` is 44 bytes. This includes some overhead, e.g., it will include a
header indicating the pickle protocol version.
import datetime
import pickle
dt = datetime.datetime.now()
size = len(pickle.dumps(dt))
print(size, 'bytes')
Also, if you use a higher protocol, the resulting pickled object should be
smaller. Try `protocol=pickle.HIGHEST_PROTOCOL` in the `dump`.
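For the comparison in the question, the same measurement works for a Unix timestamp; a small sketch (using `time.mktime` as one of several ways to get the timestamp, and exact sizes will vary by protocol and machine):
    import time
    ts = time.mktime(dt.timetuple())
    print(len(pickle.dumps(dt)), 'bytes for the datetime')
    print(len(pickle.dumps(ts)), 'bytes for the timestamp')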
|
Missing file in compiled py2exe app selenium
Question: I am trying to get my Selenium app to work. It compiles everything, but when I
open the app it gives me this:
C:\Python34\dist>browse.exe
Traceback (most recent call last):
File "browse.py", line 9, in <module>
File "C:\Python34\lib\site-packages\selenium\webdriver\firefox\webdriver.py",
line 43, in __init__
self.profile = FirefoxProfile()
File "C:\Python34\lib\site-packages\selenium\webdriver\firefox\firefox_profile
.py", line 64, in __init__
WEBDRIVER_PREFERENCES)) as default_prefs:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Python34\\dist\\lib
rary.zip\\selenium\\webdriver\\firefox\\webdriver_prefs.json'
I'm using py2exe to bundle, and Firefox as my browser driver.
Setup.py:
from distutils.core import setup
import py2exe
setup(
console=['browse.py'],
options={
"py2exe":{
"skip_archive": True,
"unbuffered": True,
"optimize": 2
}
}
)
Answer: Check the original answer: [Python - Trouble in building
executable](http://stackoverflow.com/questions/7701855/python-trouble-in-
building-executable)
You have to manually copy both webdriver.xpi and webdriver_prefs.json from
C:\Python27\Lib\site-packages\selenium\webdriver\firefox to
dist\selenium\webdriver\firefox
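Alternatively, a sketch that lets setup.py copy those two files for you via the distutils `data_files` option (the site-packages path below is an assumption; point it at wherever your selenium actually lives):
    import os
    from distutils.core import setup
    import py2exe
    firefox_dir = r'C:\Python34\Lib\site-packages\selenium\webdriver\firefox'
    setup(
        console=['browse.py'],
        data_files=[(os.path.join('selenium', 'webdriver', 'firefox'),
                     [os.path.join(firefox_dir, 'webdriver_prefs.json'),
                      os.path.join(firefox_dir, 'webdriver.xpi')])],
        options={'py2exe': {'skip_archive': True, 'unbuffered': True, 'optimize': 2}},
    )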
|
Change the default encoding for automatic str to unicode conversion
Question: When doing the following concatenation:
a = u'Hello there '
b = 'pirate ®'
c = a + b # This will raise UnicodeDecodeError
in python 2, `'pirate ®'` is automatically converted to `unicode` type through
ascii encoding. And since there is a non-ascii unicode sequence (`®`) in the
string, it will fail.
Is there a way to change this default encoding to utf8?
Answer: It is possible, although it's considered a hack. You have to reload `sys`:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
See this blog post for some explanation of the potential issues this raises:
<http://blog.startifact.com/posts/older/changing-the-python-default-encoding-
considered-harmful.html>
It may be the only option you have, but you should be aware that it can lead
to further problems. Which is why it's not a simple and easy thing to set.
|
Calling Pandas Data Frames Created with globals() Inside For Loop
Question: I am iterating through 50 files in python and dumping them each into pandas
data frames. Then from each data frame I create three new data frames based on
the values in a specific field in the original data frame. These three new
frames have new names that include the the value they were filtered on.
It works, yay! I get all my data frames!
The problem is, I'm creating these data frames using a global() call, and I do
not know how to access them without explicitly typing each individual data
frame name into a kernal.
Why do I want to do this, you may ask?
Well, I want to grab all of the data frames that end in 'cd', for example, and
append (union all) them into a final data frame. I don't want to have to
explicitly call all 50 of them. I want to loop through a list of the data
frames to accomplish this task.
Any suggestions on how to accomplish this, or rework the code?
I'm new to these more intensive processes with iPython, so change whatever.
filelist = os.listdir()
sum_list = ['CAKE', 'TWINKIES', 'DOUGHNUTS', 'CUPCAKES']
for f in filelist:
state = re.match('((\w+){2})\_', f)
state_df = str(state.group(1)) + '_df'
data = pd.read_csv(f, low_memory = False)
df = pd.DataFrame(data)
for x in sum_list:
sdo = state_df + '_' + x.lower()
globals()[sdo] = pd.DataFrame(df.loc[df['summary_level'] == x])
Answer: I think a much better way is to create your own dictionary rather than resort
to globals! Just create your own and append to some list or dictionary of
lists? (depending on the classification):
dfs = {}
for f in filelist:
...
df = pd.read_csv(f) # this returns a DataFrame
for x in sum_list:
...
dfs[sdo] = df[df.summary_level == x] # again, this return a DataFrame
You could use a defaultdict, and assign each to a sub-dictionary:
    from collections import defaultdict
    dfs = defaultdict(dict)
...
dfs[x][sdo] = ...
_i.e.`dfs['CAKE']` will be all the CAKE DataFrames._
|
python code to play a particular song with reference
Question: There is a file `1.mp3` in the location `/home/<username>/Music/`
Whatever name is given to the variable **`a`**, the code should
automatically search the specific folder and play that particular (number)
song
The code structure
import pygame
import time
a=1 # value assignment to <<a>> here
pygame.init()
pygame.mixer.music.load("/home/prabhat/Music/a.mp3") # file to be opened
pygame.mixer.music.play()
time.sleep(1)
Answer: You need to use string formatting to get the `a` in `'a.mp3'` substituted
with the actual value.
You can do it either with
pygame.mixer.music.load("/home/prabhat/Music/%s.mp3" % a)
or
pygame.mixer.music.load("/home/prabhat/Music/{}.mp3".format(a))
See [string formatting
docs](https://docs.python.org/2/library/string.html#format-string-syntax) for
more info.
|
How to configure wsgi application to migrate to django 1.7?
Question: I'm trying to upgrade from Django 1.6 to 1.7.
When running python manage.py runserver, I got the following error :
django.core.exceptions.ImproperlyConfigured: WSGI application 'myapp.wsgi.application' could not be loaded; Error importing module: 'cannot import name get_path_info'
Here's the corresponding line in my settings.py :
WSGI_APPLICATION = 'myapp.wsgi.application'
Here's my wsgi.py file :
import os
# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "myapp.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
from dj_static import Cling
application = Cling(get_wsgi_application())
Any idea to fix it ?
Answer: What version of dj_static are you using?
I upgraded to Django 1.7 and my site broke, leading me to your question.
I checked my dj_static version (I had been using 0.0.5 and noticed that the
latest release at the time of this writing is
[0.0.6](https://github.com/kennethreitz/dj-static/releases)).
Upon updating dj_static, my site appears to be functioning properly under
Django 1.7.
|
Python Socket gives "[Errno 24] Too many open files"
Question: I have the following UDP class sending arrays of data at about 100Hz
from six import string_types
import socket
import struct
def convert_data(iterable):
if isinstance(iterable, string_types):
return str(iterable)
data = tuple(iterable)
format = "{0}H".format(len(data))
print("Sending data:", format, data)
if max(data) > 2**16 - 1:
raise ValueError(max(data))
if min(data) < 0:
raise ValueError(min(data))
return struct.pack(format, *data)
class UDP(object):
def __init__(self, ip, port):
self._ip = ip
self._port = port
self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.socket.connect((ip, port))
def send_data(self, data):
message = convert_data(data)
return self.socket.sendall(message)
It gives the following error after **successfully sending** for about a
minute:
Traceback (most recent call last):
File "take_analogue_data.py", line 13, in <module>
File "take_analogue_data.py", line 8, in main
File "/home/pi/nio-integration/hardware/raspi/UDP.py", line 22, in __init__
File "/usr/lib/python2.7/socket.py", line 187, in __init__
socket.error: [Errno 24] Too many open files
I have looked for a solution. [This Stack Overflow answer suggests increasing
the number of possible
files](http://stackoverflow.com/questions/2569620/socket-accept-error-24-to-
many-open-files). I really don't think this is the solution I am looking for
though.
Is there something I can do? I was thinking that closing the connection each
time might work, but I have already played around with a bunch of things. (I
have tried `send`, `sendall`, and `sendto` \-- none have worked)
**Note: I am running Python2.6 on Raspbian Wheezy on a Raspberry Pi**
**Edit** Another module is sending the data. It could look something like
import UDP
udp = UDP.UDP(IP, PORT)
while(True):
udp.send_data(range(8))
sleep(0.01)
Answer: Likely, you are creating a new socket for every single iteration of
`while(True):`. Processes are limited to the number of file descriptors they
can have open (sockets are fds.) You can check `/etc/security/limits.conf` to
see what your limits are set to.
You should close your socket when you're done with it, or, ideally, only open
one and reuse it if possible.
You said that your other module "could look something like this." Is that code
snippet exactly what it looks like?
I doubt it, because if so that should only be making one socket. If you're
instantiating the UDP object inside the `while`, then the above is definitely
your issue.
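If the sender object really is being re-created somewhere, one hedged fix is to make each instance release its descriptor explicitly (a sketch extending the UDP class from the question; the context-manager methods are optional sugar):
    class UDP(object):
        # ... __init__ and send_data as in the question ...
        def close(self):
            # release the file descriptor as soon as this sender is done
            self.socket.close()
        def __enter__(self):
            return self
        def __exit__(self, *exc):
            self.close()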
|
HTTP 500 Error with Django Ajax
Question: I'm trying to use Django with AJAX calls, and it's giving me an HTTP 500
(a MultiValueDictKeyError) even though there seems to be nothing wrong.
I sent 3 variables: sendEmail, username, error
I was able to use a request.POST on the 3 variables, and get the following
output:
* * *
sendEmail = True
username = someUserName
error = Login
* * *
However, a webpage is returned with an HTTP 500
View.py:
def loginUser(request):
username = ""
type = ""
logger = logging.getLogger('views.logger.login')
try:
username = request.POST['username'];
logger.info("User:" + username + " in Login Page")
except MultiValueDictKeyError:
logger.info("Cannot Identify User")
try:
type = request.POST['submit']
logger.info("User:" + username + " requests:" + type)
except MultiValueDictKeyError:
logger.info("Cannot Identify User's Request")
if(type=="Login"):
try:
username = request.POST['username']
password = request.POST['password']
logger.info("UserName:" + username + " is trying to login")
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
logger.info("User is active, and logged in")
return redirect('index.html')
else:
logger.info("User is not active, and will not be logged in")
return redirect('disabled.html')
else:
logger.info("User:" + username + " is not valid");
context = {'Status': "Please sign in", 'Error': "Invalid", 'username':username}
return render(request, 'webapp/login.html', context)
except MultiValueDictKeyError:
logger.info("The user have missing forms")
context = {'Status': "Please sign in", 'Error': "Null"}
return render(request, 'webapp/login.html', context)
except Exception as e:
context = {'Status': "Please sign in", 'Error': "Null"}
return render(request, 'webapp/register.html', context)
logger.error("Error occured in Registering user");
logger.error("Error:" + str(e.args))
elif(type == "Register"):
logger.info("Redirecting user to Register Page")
return redirect('/webapp/register.html')
else:
logger.info("Startup Login Page");
context = {'Status': "Please sign in", 'Error': "Null"}
return render(request, 'webapp/login.html', context)
def ajax_sendMail(request):
logger = logging.getLogger('views.logger.sendEmail')
sendEmail = request.POST['email']
username = request.POST['username']
error = request.POST['error']
logger.info("Sending email to admins, sendEmail:" + sendEmail + ", username:" + username + ", error:" + error)
if(sendEmail == "true"):
mail_admins("User:" + username + " failed to " + error, "The time of error is at:" + datetime.datetime.now())
return HttpResponse("Success in Sending Email")
login.html:
<!DOCTYPE html>
<html>
<head>
<!-- Load css -->
{% load staticfiles %}
<link rel="stylesheet" href="{% static 'WebApp/bootstrap-3.2.0-dist/css/bootstrap.min.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'WebApp/login.css' %}"/>
<title> WebStats Login </title>
</head>
<body>
<!-- Load javascripts -->
{% load staticfiles %}
<script type="text/javascript" src="{% static 'WebApp/jquery-2.1.1.min.js' %}"></script>
<script type="text/javascript" src="{% static 'WebApp/login.js' %}"></script>
<script type="text/javascript" src="{% static 'WebApp/bootstrap-3.2.0-dist/js/bootstrap.min.js' %}"></script>
<!-- Variables that we pass to javascript -->
<script type="text/javascript">
var errorMessage = "{{ Error }}";
var username = "{{ username }}";
</script>
<!-- Inputs and Buttons -->
<div class="container">
<form class="form-signin" role="form" action="{% url 'WebApp:login'%}" method="post">
{% csrf_token %}
<h2 class="form-signin-heading">{{ Status }}</h2>
<input type="username" id="username" name="username" class="form-control" placeholder="User Name" autofocus>
<input type="password" id="password" name="password" class="form-control" placeholder="Password">
<label class="checkbox">
<input type="checkbox" value="remember-me"> Remember me
</label>
<button class="btn btn-lg btn-primary btn-block" type="submit" value="Login" name="submit" id="login">Sign in</button>
<button class="btn btn-lg btn-primary btn-block" type="submit" value="Register" name="submit" id="register">Register</button>
</form>
</div>
<!-- Modal -->
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal"><span aria-hidden="true">×</span><span class="sr-only">Close</span></button>
<h4 class="modal-title" id="myModalLabel">Error during Sign In</h4>
</div>
<div class="modal-body">
<p> Possible Reasons: </p>
<ol>
<li> Wrong Username or Password </li>
<li> User does not exist </li>
<li> Login Server is down </li>
<li> Account is Disabled </li>
<li> Check your internet cable </li>
<li> Programming error done by the Tool Owner </li>
<br>
<form>
<button type="button" id="sendErrorEmail" value="send" class="btn btn-danger" data-dismiss="modal"> Report Problem </button>
</ol>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
</body>
</html>
url.py:
from django.conf.urls import patterns, include, url
from django.contrib import admin
from django.contrib.auth.views import logout
from WebApp import views
urlpatterns = patterns('',
url(r'^index', views.index, name='index'),
url(r'^login', views.loginUser, name='login'),
url(r'^logout', views.logoutUser, name='logout'),
url(r'^register', views.registerUser, name='register'),
url(r'^sendMail', views.ajax_sendMail, name='sendMail'),
)
login.js:
var main = function()
{
if(errorMessage == "Invalid")
{
$('#myModal').modal("show");
}
else
{
$('#myModal').modal("hide");
};
$("#sendErrorEmail").click(function(event)
{
var csrftoken = getCookie('csrftoken');
event.preventDefault();
$.ajax(
{
type:"POST",
url:"sendMail/",
data:{
'email': "true",
'error': "login",
'username' : username,
'csrfmiddlewaretoken':csrftoken
}
});
$('#myModal').modal("hide");
return false;
});
};
//To get the csrf token
function getCookie(name)
{
var cookieValue = null;
if (document.cookie && document.cookie != '')
{
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++)
{
var cookie = jQuery.trim(cookies[i]);
if (cookie.substring(0, name.length + 1) == (name + '='))
{
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
$(document).ready(main);
I was able to send AJAX POST calls successfully, and inside the `ajax_sendMail` view in my views.py I was able to get my data. These are the logger details:
2014-09-04 16:04:20,901 [INFO] views.logger.login: User:adfafasdfa requests:Login
2014-09-04 16:04:20,901 [INFO] views.logger.login: UserName:adfafasdfa is trying to login
2014-09-04 16:04:21,124 [INFO] views.logger.login: User:adfafasdfa is not valid
2014-09-04 16:04:22,095 [INFO] views.logger.sendEmail: Sending email to admins, sendEmail:true, username:adfafasdfa, error:login
I was able to get sendEmail, username, and error; however, it still returns an HTTP 500 with a MultiValueDictKeyError.
MultiValueDictKeyError at /webapp/sendMail/
"'email'"
Request Method: GET
Request URL: http://127.0.0.1:8000/webapp/sendMail/
Django Version: 1.6.5
Exception Type: MultiValueDictKeyError
Exception Value:
"'email'"
Exception Location: C:\Python27\lib\site-packages\django\utils\datastructures.py in __getitem__, line 301
Python Executable: C:\Python27\python.exe
Python Version: 2.7.6
Does anyone know why this would happen? Very odd...
What is more strange is the chrome inspect element tool,
A screenshot of Chrome Inspect Element tool:

It says that it is a POST, however, when you go inside:

It shows a GET
Answer: I don't know!
:)
Really, your code looks fine.
Is it possible that you are calling the 'sendMail/' URL twice? Once with
'POST' (that actually goes through), and once with `GET` from somewhere else?
I suggest littering that `ajax_sendMail()` function with calls to logger (just
temporarily, of course) to see exactly when the exception is being raised and
if the view might be called twice for some reason.
* * *
Also, maybe add an `if request.method == 'POST':` check as the first line of the `ajax_sendMail()` body. And it's always a good idea to use `request.POST.get('some_key', default_if_not_existing)` when there is a possibility that a key might not be there. This doesn't solve your problem, but it might help debug it, as sketched below.
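A rough sketch of that defensive version (the log messages and default values are placeholders; it reuses the imports already present in your views.py):

def ajax_sendMail(request):
    logger = logging.getLogger('views.logger.sendEmail')
    if request.method != 'POST':
        # log any stray GETs so you can see where the second request comes from
        logger.info("sendMail called with %s instead of POST" % request.method)
        return HttpResponse("sendMail expects POST")
    sendEmail = request.POST.get('email', 'false')
    username = request.POST.get('username', 'unknown')
    error = request.POST.get('error', 'unknown')
    logger.info("sendEmail:%s username:%s error:%s" % (sendEmail, username, error))
    if sendEmail == "true":
        mail_admins("User:" + username + " failed to " + error,
                    "The time of error is:" + str(datetime.datetime.now()))
    return HttpResponse("Success in Sending Email")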
|
Python unit test for nested if statement
Question: So I've been posting unit test questions because I'm trying to get good at
them. I'll try to be as clear as possible. Below I have a nested if statement,
I want to mock the input for the `difficulty` variable and check for the
stdout if the condition is met.
here is my run code..
def main():
print("''''''''''''''''''''''''''''''''''''")
print("''''''''''''''''''''''''''''''''''''")
print("''' \t\t\t\t '''")
print("''' \t\t\t\t '''")
print("''' \t Pirate Games\t\t '''")
print("''' \tStart a new game?\t '''")
print("''' \t\t\t\t '''")
print("''' \t\t\t\t '''")
print("''''''''''''''''''''''''''''''''''''")
print("''''''''''''''''''''''''''''''''''''")
newGame = input("").lower()
if newGame == "yes" or "y":
print("1.Scallywag\n2.Crew\n3.Pirate")
difficulty = input("Choose ye toughness!")
if difficulty == "1":
print("TEST")
elif newGame == "no" or "n":
print("Goodbye! Come Again")
else:
print("Enter correct input!")
Now here's my unit test, but this is only for the first If Statement, which
works fine. I don't know how to go about doing the second one. Any help is
appreciated, thank you all in advance.
import unittest
from unittest.mock import patch
import io
import sys
from RunFile import main
class GetInputTest(unittest.TestCase):
@patch('builtins.input', return_value='yes')
def test_output(self,m):
saved_stdout = sys.stdout
try:
out = io.StringIO()
sys.stdout = out
main()
output = out.getvalue().strip()
self.assertEqual(output, "1.Scallywag\n2.Crew\n3.Pirate")
finally:
sys.stdout = saved_stdout
if __name__ == "__main__":
unittest.main()
Answer: You can use the
[`side_effect`](https://docs.python.org/3.5/library/unittest.mock.html#unittest.mock.Mock.side_effect)
argument to `patch` to make `input` return `"yes"` the first time it's called,
and `"1"` the second time:
class GetInputTest(unittest.TestCase):
@patch('builtins.input', side_effect=["yes", "1"])
def test_output(self,m):
saved_stdout = sys.stdout
try:
out = io.StringIO()
sys.stdout = out
main()
output = out.getvalue().strip()
# Make sure TEST appears at the end, in addition to the original list of items.
self.assertEqual(output, "1.Scallywag\n2.Crew\n3.Pirate\nTEST")
finally:
sys.stdout = saved_stdout
|
Xml parsing using xml.dom
Question: My xmls aren't like others. Here is an example of my xml:
"<msg t='sys'><body action='verChk' r='0'><ver v='153' /></body></msg>"
What I want is to get the value of `action`. How do I do that using xml.dom in Python?
Answer: Here is the code that uses xml.dom and extracts the value of the `action` attribute:
s = "<msg t='sys'><body action='verChk' r='0'><ver v='153' /></body></msg>"
from xml.dom import minidom
el = minidom.parseString(s)
el.getElementsByTagName('body')[0].attributes['action'].value
Out[4]: u'verChk'
|
Python - wrong logic in nested 'if' in a for-loop
Question: I have a script to read an Excel file whose cells A1 to A6 contain:
OK 17
OK 9
BKK 17
OK 16
OK 12
BKK 16
They are the only contents of the Excel file.
What I want to do is check which code, ‘OK’ or ‘BKK’, is in each cell, and tell me whether the code in the cell is the same as the one in the row above. For example, row 2 has ‘OK’, which matches the ‘OK’ in row 1, so it should tell me ‘OK found’ and ‘row no.2 and 1 found same code’.
However, the result of running the code below skips some rows:
from xlrd import open_workbook
the_file = open_workbook('c:\\file.xls',formatting_info=True)
the_sheet = the_file.sheet_by_index(0)
for row in range(0, the_sheet.nrows):
a = the_sheet.cell(row, 0).value
above_a = the_sheet.cell(row-1, 0).value
if a[0:2] == above_a[0:2]:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code'
if 'OK' in a:
print 'OK found'
else:
print 'BKK found'
As the result is:
row no.2 and 1 found same code
OK found
row no.5 and 4 found same code
OK found
The logic is wrong. Could you please help me? Thanks.
* * *
* * *
* * *
Sorry for the confusion. There are 6 values to be checked, so I expect 6 results; however, there are only 4, and 2 were skipped.
It can be worked out like this, but surely there is a way to simplify it?
for row in range(0, the_sheet.nrows):
a = the_sheet.cell(row, 0).value
above_a = the_sheet.cell(row-1, 0).value
if a[0:2] == above_a[0:2] and 'OK' in a:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code' + ' OK found'
if a[0:2] == above_a[0:2] and 'BKK' in a:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code' + ' BKK found'
if a[0:2] != above_a[0:2] and 'BKK' in a:
print 'BKK found'
if a[0:2] != above_a[0:2] and 'OK' in a:
print 'OK found'
Answer: I'm not sure your code actually skips rows. You just don't print anything when
a match isn't found. If you add an `else` to your outer `if` loop as follows:
from xlrd import open_workbook
the_file = open_workbook('c:\\file.xls',formatting_info=True)
the_sheet = the_file.sheet_by_index(0)
for row in range(0, the_sheet.nrows):
a = the_sheet.cell(row, 0).value
above_a = the_sheet.cell(row-1, 0).value
if a[0:2] == above_a[0:2]:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code'
if 'OK' in a:
print 'OK found'
else:
print 'BKK found'
else:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' do not match'
You should get the following results:
row no.1 and 0 do not match
row no.2 and 1 found same code
OK found
row no.3 and 2 do not match
row no.4 and 3 do not match
row no.5 and 4 found same code
OK found
row no.6 and 5 do not match
The bigger problem is that you are comparing a row with the row above by
starting at the first row (because the range in the for loop goes from 0 to
5). So the first comparison is between "OK 17" and "BKK 16" (i.e. row 0 and
row -1). You should be able to see this if you comment out the `if` loops and
tell python to `print a, above_a` within the `for` loop.
for row in range(0, the_sheet.nrows):
a = the_sheet.cell(row, 0).value
above_a = the_sheet.cell(row-1, 0).value
print a, above_a
In terms of row indices, you are comparing the following (a, above_a):
0 -1
1 0
2 1
3 2
4 3
5 4
You could fix this by starting at 0 and comparing with the row below, or more
simply, start your `for` loop at 1. That would give you the following results:
row no.2 and 1 found same code
OK found
row no.3 and 2 do not match
row no.4 and 3 do not match
row no.5 and 4 found same code
OK found
row no.6 and 5 do not match
==================================================================================
To address your edit:
Your second version of the `for` loop does better in that it includes the
cases where there is no match. But you still start your range at 0, so it is
comparing the first row (index 0) with the last row (index -1). This is not
ideal.
With regard to simplifying your if statements in the new `for` loop, you can
use `elif` and `else` instead of four `if` statements. You can also change the
last two `if` statements into a single `else` and nest an `if` to test if the
row has a "OK" or "BKK" in it. The following code is an example:
for row in range(1, the_sheet.nrows):
a = the_sheet.cell(row, 0).value
above_a = the_sheet.cell(row-1, 0).value
if a[0:2] == above_a[0:2] and 'OK' in a:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code' + ' OK found'
elif a[0:2] == above_a[0:2] and 'BKK' in a:
print 'row no.' + str(row + 1) + ' and ' + str(row) + ' found same code' + ' BKK found'
else:
if 'BKK' in a:
print 'BKK found in row %d' % row
else:
print 'OK found in row %d' % row
There is a further issue to address. The above code only gives you 5 results.
It sounds like you want to know two separate things:
1. Do the contents of a cell contain "OK" or "BKK"?
2. Do the contents of a cell match the contents of the cell above it with regard to the first question?
The issue you might be running into is that the first question involves 6
answers but the second question only involves 5. The first row doesn't have a
row above it and thus doesn't have an answer to the second question. You might
change the code to answer each question separately or to combine the two
questions into a single print statement that includes a comparison for every
row but the first one.
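For example, one way to answer both questions for every row, sketched under the assumption that each cell starts with either 'OK' or 'BKK':

for row in range(0, the_sheet.nrows):
    a = the_sheet.cell(row, 0).value
    code = 'OK' if 'OK' in a else 'BKK'
    if row == 0:
        print 'row no.1: %s found' % code
    else:
        above_a = the_sheet.cell(row - 1, 0).value
        same = 'same code as' if a[0:2] == above_a[0:2] else 'different code from'
        print 'row no.%d: %s found, %s row no.%d' % (row + 1, code, same, row)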
If I'm misunderstanding the problem you're trying to answer, please clarify
further.
|
Python sub processes block when doing blocking read from stdin in main process
Question: I have a Python multiprocessing application which starts "workers" using the
multiprocessing API. The main process is itself started by a service process
which is not written in Python. The workers may themselves start other non-
Python sub process using `subprocess.Popen`.
For clarity, this is the entire process hierarchy:
* service.exe: service process (native EXE)
* python.exe: Python main process (program below)
* python.exe: Python sub process (task function started by Process)
* subprocess.exe: Native sub process (see explanation below)
When the service process is stopped, it must tell the Python process to exit.
I am using standard input for this. The advantage is that if the service
process crashes or is killed, then standard input of the Python process is
closed, so it will exit, and there will be no orphan processes.
import multiprocessing
import time
import sys
def task():
print("Task started...")
# TODO: Start a native process here using Subprocess.popen
time.sleep(3)
print("Task ended")
if __name__ == '__main__':
process = multiprocessing.Process(target=task)
process.start()
# time.sleep(3) # "workaround"
sys.stdin.read()
print("Terminating process...")
process.terminate()
However, it seems that when I add `sys.stdin.read()`, the Python sub process
starts, but it doesn't do anything. It just seems to hang.
A (bad) workaround was to add `time.sleep(3)` before reading from standard
input. Then the program above works. However, it seems that sub processes
started by the Python sub process can still block, and they will block only if
I do the blocking read in the main process.
This problem does not occur on all systems. It was observed on one Windows 8
machine and it never occurred on another Windows machine. I am using Python
2.7.2.
My question is: How can a blocking read in the main process affect sub
processes? Shouldn't the sub process start and run independently of whatever I
do in the main process? (I only want to understand this problem. If you find a
better solution for stopping the Python process from the service process, I
will be thankful, but it's the strange blocking behavior that is giving me
nightmares)
Answer: Your subprocesses aren't hanging. One of my favorite debugging techniques to
use when I'm using the multiprocessing library is to make the subprocesses
drop text files instead of printing to stdout, so you can avoid all of the
complications of pipes, such as wondering whether or not your subprocesses
inherited the same stdin/stdout, full pipes, etc. If we modify your task to be
the following:
def task():
with open('taskfile.txt', 'w') as fo:
fo.write("Task started...")
# TODO: Start a native process here using Subprocess.popen
time.sleep(3)
fo.write("Task ended")
It produces the text file 'taskfile.txt' which contains the following:
> Task started...Task ended
Therefore, your tasks are running and exiting just fine. Main is just waiting
for input from stdin. I suspect you weren't seeing the "Task started..." note
because processes launched with `multiprocessing.Process()` have their own
stdin and stdout pipes that aren't connected to the same console as main's.
|
Ring Buffer for n-dimensional Vectors
Question: I am working on a real-time application. For this I need to store around 20 arrays per second. Each array consists of n points with their respective x and y coordinates (z may follow as well in the future).
What I came up with is some kind of ring buffer, which takes the total number of arrays (they're frames of a video, by the way) and the number of tracked points with their coordinates (this doesn't change within one execution, but can vary between executions).
My buffer is initialized with a numpy array filled with zeros:
`np.zeros((lengthOfSlices,numberOfTrackedPoints))`
However, this seems to be problematic, because I write all the points of a slice into the array at once rather than one after another. That means I can't broadcast into the array, as the shape is not correct.
Is there a numPythonic way to initialize the array with zeros and store
vectorwise afterwards?
Below you can find what I have now:
class Buffer():
def __init__(self, lengthOfSlices, numberOfTrackedPoints):
self.data = np.zeros((lengthOfSlices,numberOfTrackedPoints))
self.index = 0
def extend(self, x):
'adds array x to ring buffer'
x_index = (self.index + np.arange(x.size)) % self.data.size
self.data[x_index] = x
self.index = x_index[-1] + 1
def get(self):
'returns the first-in-first-out data in the ring buffer'
idx = (self.index + np.arange(self.data.size)) % self.data.size
return self.data[idx]
Answer: You need to reshape the array based on the length of the frame.
Simple example:
>>> import numpy as np
>>> A = np.zeros(100)
>>> B = np.reshape(A, (10,10))
>>> B[0]
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
So that's probably something like `self.data = np.reshape(self.data,
(lengthOfAFrame, 20))`
EDIT: Apparently reshaping is not your (only?) problem, you might check
[collections.deque](http://docs.python.org/library/collections.html#collections.deque)
for a python implementation of a circular buffer ([source and
example](http://stackoverflow.com/questions/4151320/efficient-circular-
buffer))
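For reference, a minimal deque-based sketch of the same buffer (the class and argument names just mirror the question and are otherwise assumptions):

import numpy as np
from collections import deque

class DequeBuffer(object):
    def __init__(self, lengthOfSlices):
        # deque discards the oldest slice automatically once maxlen is reached
        self.data = deque(maxlen=lengthOfSlices)

    def extend(self, x):
        self.data.append(np.asarray(x))

    def get(self):
        # slices from oldest to newest as a 2D array
        return np.array(self.data)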
|
Can't set character_set_results to latin1
Question: I've decided to use Django 1.7 for the first time, with Python 3. I need to be able to work with a legacy `latin1` database which contains `utf8` data. I know it sucks, but the database is so huge that it's really impossible to change this. So I tried the following:
DATABASES = {
'ENGINE' : 'django.db.backends.mysql', // using MySQL-python fork with support for py3
...
'OPTIONS' : {
'init_command': "SET character_set_results = 'latin1'",
#'read_default_file': '/etc/my.cnf.d/client.cnf', // I've also tried this one
}
}
I've also tried python-mysql-connector from Oracle with following setup
DATABASES = {
'ENGINE' : 'mysql.connector.django', // using MySQL-python fork with support for py3
'OPTIONS' : {
'option_files': ['/etc/my.cnf.d/client.cnf'],
}
}
/etc/my.cnf.d/client.cnf
[client]
init-command='SET character_set_results = "latin1"'
# password, host, username
In both cases I can connect to the database, but it seems like Django sets character_set_results back to utf8.
I've tried the following:
from django.db import connection
with connection.cursor() as c:
// I expect variable to be 'latin1'
c.execute("show variables like 'character_set_results%'")
c.fetchone() // returns ('character_set_results', 'utf8')
// here I try to set it manually
c.execute("SET character_set_results = 'latin1'")
c.execute("show variables like 'character_set_results%'")
c.fetchone() // returns ('character_set_results', 'latin1') // now it's OK
* I'm sure Django uses the `client.cnf` file and the correct `[section]`, because it contains the username/password and it successfully connects to the database
* When I use `mysql` command in the linux terminal which uses the same configuration file, everything works as expected
So I guess Django somehow forces the `character_set_results` variable to be `utf8`. Is that possible? Is there any way to solve this issue?
Thank you very much
Answer: I finally figured it out (I don't know why I always find a solution a while
after posting it to SO)
from django.db.backends.signals import connection_created

def connection_setup(**kwargs):
    conn = kwargs['connection']
    with conn.cursor() as cursor:
        cursor.execute("SET character_set_results = 'latin1'")
        cursor.close()

# the receiver has to be connected for the signal to fire
connection_created.connect(connection_setup)
I've tried it before with Oracle's `python-mysql-connector` and it threw
`RuntimeError: maximum recursion depth exceeded in comparison`
but it works with `MySQL-driver` py3 branch. I guess it can be a bug in
`python-mysql-connector` or `Django` which I'll report. Maybe this will help
somebody.
|
Getting rid of 'nan' from list value of dictionary, python
Question: I have this dictionary:
dict_new =
{'extracted_layout': [nan, nan, nan, nan, nan, nan, nan, nan, nan, 'shyamanna layout', nan, nan, nan, nan, 'm t s layout', nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 'green glen layout', nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 'h s r layout', nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 'vikas layout', 'annaiah reddy layout', nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 'andhra muniyappa layout', nan, nan, nan, nan, nan, 'lake city layout', nan, 'h s r layout'}
It has one key, `extracted_layout`, and its value is a list that is stuffed with nans. How do I get rid of them?
Answer: If `nan` is float nan, use
[`math.isnan`](https://docs.python.org/2/library/math.html#math.isnan) to
filter it out:
>>> import math
>>> nan = float('nan')
>>> nan
nan
>>> math.isnan(nan)
True
>>> math.isnan(1)
False
* * *
import math
dict_new['extracted_layout'] = [
x
for x in dict_new['extracted_layout']
if not (isinstance(x, float) and math.isnan(x))
]
|
Python Web Scrape Write Output to File
Question: I have a basic Python script which can store the output to a file. This file is difficult to parse. Is there another way to write scraped data to a file so that it can be read easily back into Python for analysis?
import requests
from bs4 import BeautifulSoup as BS
import json
data='C:/test.json'
url="http://sfbay.craigslist.org/search/sby/sss?sort=rel&query=baby"
r=requests.get(url)
soup=BS(r.content)
links=soup.find_all("p")
#print soup.prettify()
for link in links:
connections=link.text
f=open(data,'a')
f.write(json.dumps(connections,indent=1))
f.close()
Output File contains this: " $25 Sep 5 Porcelain Baby Deer $25 (sunnyvale) pic
household items - by owner "" $7500 Sep 5 GEORGE STECK BABY GRAND PLAYER PIANO
$7500 (morgan hill) map musical instruments - by
Answer: If you want to write it from python to a file, and read it back into python
later, you can use Pickle - [Pickle
Tutorial](https://wiki.python.org/moin/UsingPickle).
Pickle files are binary and will not be human-readable. If that's important to you, then you could look at yaml, which I'll admit has a bit of a learning curve, but produces nicely formatted files.
import yaml
f = open(filename, 'w')
f.write( yaml.dump(data) )
f.close()
...
stream = open(filename, 'r')
data = yaml.load(stream)
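Since the script in the question already imports `json`, the same write-then-read-back round trip also works with it; a small sketch that could drop into that script (it reuses the `links` list built there):

import json

data = [link.text for link in links]  # the list the scraper builds

with open('C:/test.json', 'w') as f:
    json.dump(data, f, indent=1)

with open('C:/test.json') as f:
    connections = json.load(f)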
|
SoundCloud python api and GAE upload larger sounds
Question: I've been testing the SoundCloud Python API and it was working great with
smaller files (< 1MB). Now, I'm trying to upload larger files (5MB+) and I'm
getting errors.
Here's the code:
import soundcloud
client = soundcloud.Client(
client_id = app.config['SOUNCLOUD_CLIENT_ID'] ,
client_secret= app.config['SOUNCLOUD_CLIENT_SECRET'],
username= app.config['SOUNCLOUD_CLIENT_USERNAME'],
password= app.config['SOUNCLOUD_CLIENT_PASSWORD'])
try:
track = client.post('/tracks', track={
'title': request.form['song_title'],
'sharing': 'public',
'asset_data': blob_reader})
except Exception, e:
logging.info(xstr(e))
Here's the error logs:
INFO 2014-09-05 18:29:27,199 connectionpool.py:657] Starting new HTTPS connection (1): api.soundcloud.com
INFO 2014-09-05 18:29:39,863 views.py:2308] HTTPSConnectionPool(host='api.soundcloud.com', port=443): Max retries exceeded with url: /tracks.json (Caused by <class 'google.appengine.dist27.gae_override.httplib.HTTPException'>: Deadline exceeded while waiting for HTTP response from URL: https://api.soundcloud.com/tracks.json)
INFO 2014-09-05 18:29:39,914 recording.py:665] Saved; key: __appstats__:064700, part: 157 bytes, full: 222910 bytes, overhead: 0.004 + 0.050; link: http://localhost:8080/_ah/stats/details?time=1409941764737
INFO 2014-09-05 18:29:39,926 module.py:640] default: "POST /scimageupload HTTP/1.1" 200 19
Answer: How long is the request taking before you get that error? Google App Engine
cuts off requests that take longer than a certain number of seconds (that
"Deadline exceeded" error, see the
[docs](https://developers.google.com/appengine/articles/deadlineexceedederrors)
for more info).
You'll probably need to move the uploading step outside of the web request.
Here are three methods GAE gives you for doing that:
* Scheduled tasks (cron):
* <https://developers.google.com/appengine/docs/python/config/cron>
* Task queue:
* <https://developers.google.com/appengine/docs/python/taskqueue/>
* Background threads:
* <https://developers.google.com/appengine/docs/python/modules/#Python_Background_threads>
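For the task queue option, a rough sketch of how the request handler could hand the work off (the worker URL and parameter names below are placeholders, not something from the GAE or SoundCloud docs):

from google.appengine.api import taskqueue

def queue_soundcloud_upload(blob_key, title):
    # push the slow SoundCloud upload to a task queue worker instead of
    # doing it inside the user-facing request
    taskqueue.add(url='/tasks/upload_track',
                  params={'blob_key': str(blob_key), 'title': title})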
|
Mocking a subprocess call in Python
Question: I have a method (`run_script`) I would like to test. Specifically, I want to test that a call to `subprocess.Popen` occurs. It would be even better to test that
`subprocess.Popen` is called with certain parameters. When I run the test
however I get `TypeError: 'tuple' object is not callable`.
How can I test my method to ensure that **subprocess** is actually being
called using mocks?
@mock.patch('subprocess.Popen')
def run_script(file_path):
process = subprocess.Popen(['myscript', -M, file_path], stdout=subprocess.PIPE)
output,err = process.communicate()
return process.returncode
def test_run_script(self, mock_subproc_popen):
mock_subproc_popen.return_value = mock.Mock(communicate=('ouput','error'), returncode=0)
am.account_manager("path")
self.assertTrue(mock_subproc_popen.called)
Answer: It seems unusual to me that you use the patch decorator over the `run_script`
function, since you don't pass a mock argument there.
How about this:
def run_script(file_path):
process = subprocess.Popen(['myscript', -M, file_path], stdout=subprocess.PIPE)
output,err = process.communicate()
return process.returncode
@mock.patch('subprocess.Popen')
def test_run_script(self, mock_subproc_popen):
process_mock = mock.Mock()
attrs = {'communicate.return_value': ('output', 'error')}
process_mock.configure_mock(**attrs)
mock_subproc_popen.return_value = process_mock
am.account_manager("path") # this calls run_script somewhere, is that right?
self.assertTrue(mock_subproc_popen.called)
Right now, your mocked subprocess.Popen seems to return a tuple, causing process.communicate() to raise `TypeError: 'tuple' object is not callable`.
Therefore it's most important to get the return_value on mock_subproc_popen
just right.
|
Optimize python loop
Question: The following loop creates a giant bottleneck in my program. Particularly
since records can be over 500k.
records = [item for sublist in records for item in sublist] #flatten the list
for rec in records:
if len(rec) > 5:
tag = '%s.%s' %(rec[4], rec[5].strip())
if tag in mydict:
mydict[tag][0] += 1
mydict[tag][1].add(rec[6].strip())
else:
mydict[tag] = [1, set(rec[6].strip())]
I don't see a way that I could do this with a dictionary/list comprehension,
and I'm not sure calling map would do me much good. Is there any way to
optimize this loop?
**Edit:** The dictionary contains information about certain operations
occurring in a program. `rec[4]` is the package which contains the operation
and `rec[5]` is the name of the operation. The raw logs contains an int
instead of the actual name, so when the log files are read into the list, the
int is looked up and replaced with the operation name. The incremental counter
counts how many times the operations was executed and the set contains the
parameters for the operation. I am using a set because I don't want duplicates
for the parameters. The strip is simply to remove white space. The existence
of this white space is unpredictable in `rec[6]`, but rather consistent in
`rec[4]` and `rec[5]`.
Answer: Instead of flattening such a huge list you can directly iterate over its
flattened iterator using
[`itertools.chain.from_iterable`](https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable).
from itertools import chain
for rec in chain.from_iterable(records):
#rest of the code
This is around 3X times faster than the equivalent nested for-loop based
genexp version as well:
In [13]: records = [[None]*500]*10000
In [14]: %%timeit
...: for rec in chain.from_iterable(records): pass
...:
10 loops, best of 3: 54.7 ms per loop
In [15]: %%timeit
...: for rec in (item for sublist in records for item in sublist): pass
...:
10 loops, best of 3: 170 ms per loop
In [16]: %%timeit #Your version
...: for rec in [item for sublist in records for item in sublist]: pass
...:
1 loops, best of 3: 249 ms per loop
|
Relative import error in Python and where to find Python module source code
Question: I'm quite new to Python. Though I understand the basic data types, control flow, etc., I still find the bigger-picture things a little difficult.
One of these is the `relative import`. I have a piece of code from a book that implements a queue structure in Python. When I run the code I get an error from the import: "ValueError: Attempted relative import in non-package".
**Here is line of the import:**
from ..exceptions import Empty
I'm in my working project folder. My question is: how can I adjust this line to make the whole piece of code work? I guess this "exceptions" module was made by the author rather than being a built-in module, and somehow the author did not include it in the current folder. Where can I find the source code of Python's built-in modules so I can take a look?
**My system is ubuntu**
Thank you.
Answer: These are _explicit relative imports_. That syntax means that the file where that line of code resides is trying to import the name `Empty` from the `exceptions` module that lives in its parent package (I'm making an educated guess about the structure, not seeing the actual dir layout; the capital leading letter suggests `Empty` is a class defined in that module, since that convention is generally reserved for classes).
See the [Module: Packages
doc](https://docs.python.org/2/tutorial/modules.html#packages) for more info,
including a specific folder-structure example with relative imports.
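As a purely illustrative sketch (the folder names here are guesses, not the book's actual layout), that import only resolves when the file lives inside a package and is run as part of it:

project/
    datastructures/            # a package: contains __init__.py
        __init__.py
        exceptions.py          # defines e.g. class Empty(Exception): pass
        queues/
            __init__.py
            array_queue.py     # contains: from ..exceptions import Empty

# run from inside project/, as a module rather than as a loose script:
# python -m datastructures.queues.array_queue

Running the file directly (`python array_queue.py`) makes it `__main__` with no parent package, which is exactly the situation that produces the "Attempted relative import in non-package" error.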
The book really should've presented the code in a self-contained directory
structure -- say, a _git_ repo you could clone -- where these intra-package
dependencies will just work. It's unlikely to be related to your system
install if it's stock (although certainly one can set _PYTHONPATH_ or use other such mechanisms that affect the environment).
What does the module structure look like?
I'd also recommend you look into
[_virtualenv_](http://virtualenv.readthedocs.org/en/latest/) in order to
sandbox your Python environments. (Although for custom code, you _may_ also
need to run a local _Pypi_ server or similar, although there are other options
- see @abarnert's comment below). It wouldn't on its own help this particular
issue, but it's a good idea in general for keeping projects and their various
package requirements isolated.
|
Sending Mail from Google App Engine
Question: I am trying to send an email from Google App Engine using the Python 2.7 library, but I keep getting Unauthorized sender in the logs. I have tried the Gmail account I created the application with as the sender; I also registered another Gmail address as a developer and tried that, but I still get Unauthorized sender. I am not sure if it matters, but I do have a domain name registered to this application.
Here is the code I am trying:
message = mail.EmailMessage()
message.sender = "[email protected]"
message.subject = "Inquiry"
message.to = "[email protected]"
message.body = "Please work"
message.send()
I have looked at other articles to no avail.
[Google Appengine sending emails: [Error] unauthorized
sender](http://stackoverflow.com/questions/11621019/google-appengine-sending-
emails-error-unauthorized-sender)
[InvalidSenderError: Unauthorized sender (Google App
Engine)](http://stackoverflow.com/questions/4217972/invalidsendererror-
unauthorized-sender-google-app-engine)
Answer:
from google.appengine.api import mail
mail.send_mail(sender="stackoverflow.com Hossam <[email protected]>",
to="rsnyder <[email protected]>",
subject="How to send an e-mail using google app engine",
body="""
Dear rsnyder:
This example shows how to send an e-mail using google app engine
Please let me know if this is what you want.
Best regards,
""")
EDIT:
Note that `sender` must be an administrator of the application, so in case you are not an administrator, follow these steps from the post [google
app engine: how to add adminstrator
account](http://stackoverflow.com/questions/14020317/google-app-engine-how-to-
add-adminstrator-account)
|
What is the proper way I can invoke firefox from a python3 program?
Question: I'm trying to start Firefox with a web page by calling it in Python 3 as an argument to os.system or os.startfile.
The internet page I want to start is <https://schwab.com>
I can't bring it up at the command line with
C:\Python34\hsf\WSC>C:\Program Files(x86)\Mozilla Firefox\firefox.exe
<https://schwab.com>
It chokes on the spaces.
But I can by using
C:\Progra~2\Mozill~1\firefox.exe <https://schwab.com>
That works fine at the command line
So I put that address as the argument to os.system in my python program, and
got the error:
'C:\Progra~2\Mozill~1' is not recognized as an internal or external command,
operable program or batch file.
I tried it in os.startfile and got the error message:
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Python34\lib\tkinter\__init__.py", line 1482, in __call__
    return self.func(*args)
  File "C:\Python34\hsf\WSC\fm.py", line 59, in Schwab
    res=os.startfile('C:\Progra~2\Mozill~1\firefox.exe https://schwab.com')
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\Progra~2\Mozill~1\x0cirefox.exe https://schwab.com'
Note that it echoed my argument correctly, but the FileNotFoundError has
inserted the string x0cire between '\' and 'firefox'
I deleted and retyped the '\f', and got the same erroroneous string inserted.
To avoid the path, I copied firefox.exe into my folder, but it won't run
outside its native milieu.
What is the proper way I can invoke firefox from a python3 program?
Answer: It depends on what you want to do with this site. If all you want to do is open the page, use the [webbrowser module](https://docs.python.org/3/library/webbrowser.html#webbrowser.open) to open the URL.
import webbrowser
webbrowser.open('https://www.schwab.com/')
If you need to do something more complicated, you can use the [Selenium](http://selenium-python.readthedocs.org/) module to interact with the page in pretty much any way you need.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.send_keys("selenium")
elem.send_keys(Keys.RETURN)
driver.close()
|
Json, urllib2 and pprint
Question: I have the following exercise:
_Use the[json](http://docs.python.org/library/json.html) module. First use
[urllib2](http://docs.python.org/howto/urllib2.html) to download this file,
then load the json as a python object and use **pprint** to make it look good
when written to the terminal._
Until now I've only worked with standard Python basics (the Codecademy course and things such as lists).
What I understand is that I have to import urllib2, apparently import json in some way, and use pprint...???
This is what I have done, but I'm not sure if I got it right...
import urllib2
response = urllib2.urlopen('https://dl.dropboxusercontent.com/u/153071/test.json')
html = response.read()
import json
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(c) #Just printing a list from earlier in the file, not sure what to print...
Answer: You don't need to import `pprint`. You can specify indentation using the json
module itself
import urllib2
import json
response = urllib2.urlopen('https://dl.dropboxusercontent.com/u/153071/test.json')
content_dict = json.loads(response.read())
print json.dumps(content_dict, indent=4)
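That said, if the exercise insists on `pprint` specifically, you can hand the same parsed object straight to it:

import pprint
pprint.pprint(content_dict)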
|
Create users on Microsoft Active Directory using a Scripting Language
Question: I need to create and/or update the student users in the school I work at. I only have access via "Remote Control", not directly on the server, and I have been told that I will be able to remotely create, update and delete users and groups.
The problem is: I must create a lot of users every half year, so I want to do that automatically. I have a database dump of the students and the classes, so I could read that out in a scripting language like Python or PHP, or in a Java/C++ program (Python would be my favourite).
I am looking for a way to create the groups and the users with a scripting language on a remote computer, and if that works I also want to create share drives and give the users/groups access to them automatically.
Each half year, the users get moved to a new class, so I must fetch a record of the users, check it against the new student list and update the users.
Does anyone know of bindings/remote-control libraries/classes for Microsoft Active Directory management for one of those programming languages?
Answer: I've used PyAD for AD-work and was satisfied with the result. Here is a short
example of creating a user.
from pyad import *
pyad.set_defaults(ldap_server="dc1.domain.com", username="service_account", password="mypassword")
ou = ADContainer.from_dn("ou=users, dc=domain, dc=com")
new_user = ADUser.create("Daniel", ou, password="Secret")
It is also possible to edit users and groups using `set_attribute` and add
users to groups. For example:
new_user.set_attribute("mail", "[email protected]")
group = ADGroup.from_dn("so-users")
group.add_member(new_user)
And to delete:
new_user.delete()
You can find the documentation at: <https://zakird.com/pyad/>
Note: I do not have access to a Windows environment so this code isn't tested,
so expect some detail to be wrong.
|
Python - Reading web page after authentication
Question: First, sorry for my english, it's not my mother tongue. Anyway, some grammar
errors will not kill you :) Hopefully.
I'm not able to get some information from a web page because of its authentication system.
The website is www.matchendirect.fr. It's a French site and there is no way to switch it to English (sorry for the inconvenience). It displays football game information.
My goal is to get the forecast data displayed in the middle of the page: there is a table of forecasts called "Pronostics des internautes", but its content is displayed only if you're logged in.
Here is my code :
import urllib2, cookielib
cookieJar = cookielib.CookieJar()
auth_url="http://www.matchendirect.fr/cgi/ajax/authentification.php?f_contexte=auth_form_action&f_email=pkwpa&f_mot_de_passe=pkw_pa"
url="http://www.matchendirect.fr/live-score/colombie-bresil.html"
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJar))
request = urllib2.Request(auth_url)
response = opener.open(request)
response = opener.open(url)
webpage=response.read()
To check whether I am logged in, I can try this:
if webpage.find("prono_stat_data")!=-1:
print("I'm logged in")
I think my cookie management isn't good...
Here are my credentials; feel free to play with them, it's obviously a fake account created only for this question.
username: pkwpa, password: pkw_pa
I hope someone can help me.
Answer: Here is what you're looking for: <http://docs.python-requests.org/en/latest/user/install/#install>. Use it like below:

from requests import session

with session() as c:
    c.get('http://www.matchendirect.fr/cgi/ajax/authentification.php?f_contexte=auth_form_action&f_email=pkwpa&f_mot_de_passe=pkw_pa')
    request = c.get('http://www.matchendirect.fr/live-score/colombie-bresil.html')
    print request.headers
    print request.text
Cheers
|
Parsing json element
Question: I am using Scrapy and a Regex to parse some none standard web source code. I
then wish to parse the first element of the dictionary returned:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.cmdline import execute
from scrapy.utils.markup import remove_tags
import time
import re
import json
import requests
class ExampleSpider(CrawlSpider):
name = "goal2"
allowed_domains = ["whoscored.com"]
start_urls = ["http://www.whoscored.com"]
download_delay = 5
rules = [Rule(SgmlLinkExtractor(allow=('\Teams'),deny=(),), follow=False, callback='parse_item')]
def parse_item(self, response):
sel = Selector(response)
titles = sel.xpath("normalize-space(//title)")
print '-' * 170
myheader = titles.extract()[0]
print '********** Page Title:', myheader.encode('utf-8'), '**********'
print '-' * 170
match1 = re.search(re.escape("DataStore.prime('stage-player-stat', defaultTeamPlayerStatsConfigParams.defaultParams , ") \
+ '(\[.*\])' + re.escape(");"), response.body)
if match1 is not None:
playerdata1 = match1.group(1)
teamid = json.loads(playerdata1[0])
print teamid
The key for the first element of 'playerdata1' is called 'TeamId'. I assumed
the above method would work, however I am getting the following error:
teamid = json.loads(playerdata1[0])
File "C:\Python27\lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Python27\lib\json\decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python27\lib\json\decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
exceptions.ValueError: Expecting object: line 1 column 1 (char 0)
Can anyone see what the issue is here?
Answer: `match1.group(1)` returns _one string_. You then index that string:
teamid = json.loads(playerdata1[0])
Here, `[0]` will give you _just the first character_ of that string.
Remove the indexing expression there to use the _whole_ string:
teamid = json.loads(playerdata1)
Now `teamid` is a list with player objects:
>>> len(teamid)
22
>>> teamid[0].keys()
[u'FirstName', u'LastName', u'KnownName', u'Field', u'GameStarted', u'AerialWon', u'TeamRegionCode', u'SecondYellow', u'ShotsBlocked', u'TotalShots', u'Assists', u'Red', u'Name', u'PositionText', u'Ranking', u'PositionLong', u'PlayerId', u'SubOff', u'Dispossesed', u'TeamId', u'TotalTackles', u'TotalLongBalls', u'Goals', u'SubOn', u'WasDribbled', u'AerialLost', u'Turnovers', u'ShotsOnTarget', u'WSName', u'Fouls', u'ManOfTheMatch', u'Height', u'TeamName', u'RegionCode', u'TotalPasses', u'TotalThroughBalls', u'Dribbles', u'DateOfBirth', u'OwnGoals', u'WasFouled', u'TotalClearances', u'Rating', u'PlayedPositionsRaw', u'Weight', u'AccurateLongBalls', u'OffsidesWon', u'AccuratePasses', u'Yellow', u'KeyPasses', u'TotalCrosses', u'AccurateCrosses', u'IsCurrentPlayer', u'Age', u'PositionShort', u'AccurateThroughBalls', u'Interceptions', u'Offsides']
|
AndroidViewClient remove package and broadcastreceiver
Question: I am using the AndroidViewClient library, and it works perfectly fine. I want to know how to remove a package and how to access the methods listed here:
<http://developer.android.com/tools/help/MonkeyDevice.html>
I tried using MonkeyRunner, but AndroidViewClient doesn't support it anymore and shows an import error.
Is there any way to use the methods listed in the link? Here is my code:
import os
import re
import time
# This must be imported before MonkeyRunner and MonkeyDevice,
# otherwise the import fails.
# PyDev sets PYTHONPATH, use it
try:
for p in os.environ['PYTHONPATH'].split(':'):
if not p in sys.path:
sys.path.append(p)
except:
pass
try:
sys.path.append(os.path.join(os.environ['ANDROID_VIEW_CLIENT_HOME'], 'src'))
except:
pass
from com.dtmilano.android.viewclient import *
package = 'com.android.vending'
activity = 'com.android.vending.AssetBrowserActivity'
component = package + "/" + activity
device, serialno = ViewClient.connectToDeviceOrExit()
vc = ViewClient(device, serialno)
vc.dump(window='-1')
device.removePackage('com.mypackage.myapp')
I get the following error :
File "test1.py", line 38, in <module>
device.removePackage('com.mypackage.myapp')
AttributeError: AdbClient instance has no attribute 'removePackage'
Answer: Unfortunately not all the methods have been ported to `AdbClient` yet. In the
meantime you can use `device.shell('pm uninstall your.package.name')`
|
Python Tkinter how to take the input from multiple Entry widgets stored in one variable
Question: my code is the following.
from tkinter import *
def TakeInput():
print(tb.get()) #This will print Entry1 input
print(tb.get()) #This will print Entry2 input
tk=Tk()
#Entry 1
tb=Entry(tk) #Both Entry1 and Entry2 are stored in the same variable: tb
tb.pack()
#Entry 2
tb=Entry(tk) #Both Entry1 and Entry2 are stored in the same variable: tb
tb.pack()
#Button
b=Button(tk,text="PrintInput",command= TakeInput)
b.pack()
tk.mainloop()
All I am trying to do is to display both entry 1 and entry 2 input when both
are assigned to the same variable.
Note that I am a Python newbie.
Answer: If you want to do it automatically, you have to watch the strings in the entry widgets as they are modified. You can do that with `StringVar`. You don't need a button: when entry1's text equals entry2's text, it will print automatically.
from tkinter import *
def TakeInput():
print(tb1.get())
print(tb2.get())
def on_entry1_changed(*args):
if sv_entry1.get() == sv_entry2.get():
TakeInput()
def on_entry2_changed(*args):
if sv_entry1.get() == sv_entry2.get():
TakeInput()
tk=Tk()
#Entry 1
sv_entry1 = StringVar()
sv_entry1.set("Entry1 text")
sv_entry1.trace("w", on_entry1_changed)
tb1=Entry(tk, textvariable=sv_entry1)
tb1.pack()
#Entry 2
sv_entry2 = StringVar()
sv_entry2.set("Entry2 text")
sv_entry2.trace("w", on_entry2_changed)
tb2=Entry(tk, textvariable=sv_entry2)
tb2.pack()
tk.mainloop()
If you want to do it by pressing a button, you have to modify the TakeInput function like this:
from tkinter import *
def TakeInput():
if tb1.get() == tb2.get():
        print(tb1.get())
tk=Tk()
#Entry 1
tb1=Entry(tk)
tb1.pack()
#Entry 2
tb2=Entry(tk)
tb2.pack()
#Button
b=Button(tk,text="PrintInput",command= TakeInput)
b.pack()
tk.mainloop()
|
Import Python codes inside HTML file
Question: As mentioned [here](http://karrigell.sourceforge.net/en/pythoninsidehtml.html), I can embed Python code inside .html files using `<%` and `%>` tags. Just to try it, I wrote the code below in Notepad and saved it as a file named test.html:
<html>
<title>
</title>
<body>
<%print ("Hello")%>
</body>
When I double-click test.html, Chrome opens with the line below at the top:
<%print ("Hello")%>
What must I do to get 'Hello' in the output?
Note: **"print"** is just an example. What ways are there to embed and run Python code in HTML files?
Answer: That page is related to Karrigell, a Python web framework; you can only mix Python and HTML files (web pages) like that if you use a Python web framework such as [web.py](http://webpy.org/), [Pylons](http://www.pylonsproject.org/), [Django](https://www.djangoproject.com), or [others](https://wiki.python.org/moin/WebFrameworks).
Browsers can only execute JavaScript code; other programming languages have to run on the server (or rely on special browser components) to produce what the browser displays.
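To make that concrete, here is a minimal server-side sketch using web.py (one of the frameworks linked above); the browser never sees the Python, only the HTML string it returns:

import web

urls = ('/', 'index')

class index:
    def GET(self):
        # this Python runs on the server; the browser only receives the HTML
        return "<html><body>Hello</body></html>"

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()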
|
Having trouble importing nltk.etree.elementtree
Question: I'm following along with Natural Language Processing with Python. When I run
from nltk.etree.ElementTree import ElementTree
I get this error:
ImportError: No module named etree.ElementTree
Why is this?
Answer: etree was removed from nltk back in 2012
(<https://github.com/nltk/nltk/issues/80>):
> StevenBird1 said, at 2011-03-31T05:10:58.000Z:
>
> Yes -- etree has been in the standard library since version 2.5. It was just
> a temporary measure to include etree for the benefit of people using 2.4.
This should work for you:
from xml.etree.ElementTree import ElementTree
|
Check if a specific Key and a value exist in a dictionary
Question: I am trying to determine if a specific key and value pair exist in a
dictionary; however, if I use the contains or has-key method, it only checks
for the key. I need it to check both the key and the specific value. Some
background: We have a total of 4 dictionaries: one for A, B, CompareList, and
ChangeList. Once A is initialized, I put A's contents into CompareList (I would compare them directly, but A and B are double hash tables, and I've tried all of the methods here but none of them work for me). So once we put A into CompareList, I compare it with the ObjectAttributes dictionary in B to see if anything changed. For example, B may have the key-value pairs shape:circle and fill:no. If CompareList had shape:circle and fill:yes, then I want only fill:yes to end up in ChangeList. The problem lies in the "if attributes.getName() not in self.CompareList:" line. Here is the code; I am running it on Python 2.7.8. Thanks in advance for any help!
class ObjectSemanticNetwork:
def __init__(self):
self.ObjectNames = {}
self.ObjectAttributes = {}
def setName(self, name):
self.ObjectNames[name] = self.ObjectAttributes
def setData(self, name, attribute):
self.ObjectAttributes[name] = attribute
def checkData(self, key):
print(key)
for key, value in self.ObjectAttributes.iteritems():
print(key)
print(value)
print("\n")
class Agent:
    def __init__(self):
self.CompareList = {}
self.ChangeListAB = {}
self.ChangeListCD = {}
def addToCompareList(self, name, value):
self.CompareList[name] = value
def addToChangeListAB(self, name, value):
self.ChangeListAB[name] = value
def addToChangeListCD(self, name, value):
self.ChangeListCD[name] = value
def CheckList(self, List, ListName):
print '-------------------------',ListName,'--------------------------------'
for key, value in List.iteritems():
print(key)
print(value)
def Solve(self,problem):
OSNAB = ObjectSemanticNetwork()
for object in problem.getFigures().get("A").getObjects():
for attributes in object.getAttributes():
self.addToCompareList(attributes.getName(), attributes.getValue())
OSNAB.ObjectNames["A"] = OSNAB.setData(attributes.getName(), attributes.getValue())
#OSNAB.checkData("A")
self.CheckList(self.CompareList,"CompareList")
for object in problem.getFigures().get("B").getObjects():
for attributes in object.getAttributes():
if attributes.getName() not in self.CompareList:
self.addToChangeListAB(attributes.getName(), attributes.getValue())
OSNAB.ObjectNames["B"] = OSNAB.setData(attributes.getName(), attributes.getValue())
# OSNAB.checkData("B")
self.CheckList(self.ChangeListAB,"ChangeList")
OSNCD = ObjectSemanticNetwork()
for object in problem.getFigures().get("C").getObjects():
for attributes in object.getAttributes():
OSNCD.ObjectNames["C"] = OSNCD.setData(attributes.getName(), attributes.getValue())
# OSNCD.checkData("C")
for object in problem.getFigures().get("1").getObjects():
for attributes in object.getAttributes():
OSNCD.ObjectNames["D"] = OSNCD.setData(attributes.getName(), attributes.getValue())
# OSNCD.checkData("D")
return "6"
Answer: Use
if key in d and d[key] == value:
Or (only in Python 3)
if (key, value) in d.items():
In Python 3 `d.items()` returns a [Dictionary view
object](https://docs.python.org/3.4/library/stdtypes.html#dictionary-view-
objects), which supports fast membership testing. In Python 2 `d.items()`
returns a list, which is both slow to create and slow to test membership against.
Python 2.7 is a special case where you can use `d.viewitems()` and get the
same thing that you get with `d.items()` in Python 3.
**Edit:** In a comment you indicate that for performance reasons you prefer
`checkKeyValuePairExistence` over `key in d and d[key] == value`. Below are
some timings showing that `checkKeyValuePairExistence` is always slower (by
about 2x on my system when the key-value pair is present 16x when it is not).
I also tested larger and smaller dictionaries and found little variation in
the timings.
>>> import random
>>> from timeit import timeit
>>> def checkKeyValuePairExistence(dic, key, value):
... try:
... return dic[key] == value
... except KeyError:
... return False
...
>>> d = {random.randint(0, 100000):random.randint(0, 100000) for i in range(1000)}
>>> setup = 'from __main__ import k, d, v, checkKeyValuePairExistence'
>>> test_try_except = 'checkKeyValuePairExistence(d, k, v)'
>>> test_k_in_d_and = 'k in d and d[k] == v'
>>> k, v = random.choice(d.items()) # to test if found
>>> timeit(test_try_except, setup=setup)
0.1984054392365806
>>> timeit(test_k_in_d_and, setup=setup)
0.10442071140778353
>>> k = -1 # test if not found
>>> timeit(test_try_except, setup=setup)
1.2896073903002616
>>> timeit(test_k_in_d_and, setup=setup)
0.07827843747497809
|
Socket Connection between django servers
Question: I am a newbie with Django and Python. What I need is to connect **more than one Django server with a socket**. One of these servers (the main server) will get a request from a mobile client via the Django REST API, and then it should pass it on to the other Django server indicated by a server ID (e.g. when the main server gets data with ID 1, it should transmit the data to server #1; if it gets data with ID 2, it should transmit it to server #2). I am looking forward to your advice.
**p.s.** HTTP requests cannot be sent to any Django server except the main one. Each of them is an intranet application and they are in different locations. The only way to send data to these servers via HTTP is to send the request to the main server with the ID of the target server.
Answer: If you are not able to send an (internal) HTTP request, not even to localhost, you can try to speak to the WSGI API of the different Django apps directly. The main app could import the other app's WSGI application object and call it with hand-built pseudo request data.
# views.py of the main server
def myview(request):
    # do some stuff
    if server_id == 1:
        from server_1_app.wsgi import application
        # environ and start_response have to be built by hand to fake a WSGI request
        response = application(environ, start_response)
    # ...
|
Python equivalent of PowerShell Get-EventLog
Question: In PowerShell you can run this command to list all of the different Event Log
folders on a server:
Get-EventLog -list
Is there a way to do this in Python? I've seen many posts about how to get
logs from a specific folder (using e.g. `win32evtlog`), but not how to
retrieve a list of all the Event Log folders. I'm running Windows Server 2008.
Answer: Figured it out.
>>> import win32evtlog
>>> x = win32evtlog.EvtOpenChannelEnum()
>>> win32evtlog.EvtNextChannelPath(x)
u'Application'
>>> win32evtlog.EvtNextChannelPath(x)
u'HardwareEvents'
>>> win32evtlog.EvtNextChannelPath(x)
u'Internet Explorer'
>>>
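To turn that into a full listing of channel names (the closest thing to `Get-EventLog -list`), here is a hedged sketch that loops until the enumerator runs out; I'm assuming it signals the end either by returning None or by raising, so the loop guards against both:

import win32evtlog

def list_event_channels():
    # collect every event log channel name exposed by the enumerator
    channels = []
    handle = win32evtlog.EvtOpenChannelEnum()
    while True:
        try:
            name = win32evtlog.EvtNextChannelPath(handle)
        except Exception:
            break
        if name is None:
            break
        channels.append(name)
    return channels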
|
Python mpmath not arbitrary precision?
Question: I'm trying to continue on my previous question, in which I'm trying to calculate Fibonacci numbers using Binet's formula. To work with arbitrary precision I found `mpmath`. However, the implementation seems to fail above a certain value. For instance, the 99th value gives:
> 218922995834555891712
This should be ([ref](http://www.maths.surrey.ac.uk/hosted-
sites/R.Knott/Fibonacci/fibtable.html)):
> 218922995834555169026
Here is my code:
from mpmath import *
def Phi():
return (1 + sqrt(5)) / 2
def phi():
return (1 - sqrt(5)) / 2
def F(n):
return (power(Phi(), n) - power(phi(), n)) / sqrt(5)
start = 99
end = 100
for x in range(start, end):
print(x, int(F(x)))
Answer: `mpmath` provides arbitrary precision (as set in `mpmath.mp.dps`), but the calculations are still inexact at that finite precision. For example, `mpmath.sqrt(5)` is not exact, so any calculation based on it will also be inexact.
To get an exact result for `sqrt(5)`, you have to use a library which supports symbolic calculation, e.g. <http://sympy.org/>.
To get an exact result for Fibonacci numbers, probably the simplest way is to use an algorithm which does only integer arithmetic. For example:
def fib(n):
if n < 0:
raise ValueError
def fib_rec(n):
if n == 0:
return 0, 1
else:
a, b = fib_rec(n >> 1)
c = a * ((b << 1) - a)
d = b * b + a * a
if n & 1:
return d, c + d
else:
return c, d
return fib_rec(n)[0]
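A quick check against the reference value quoted in the question:

print(fib(99))   # 218922995834555169026, matching the reference table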
|
Configuring Django 1.7 and Python 3 on mac osx 10.9.x
Question: I have installed the latest versions of both django and python. The default
"python" command is set to 2.7; if I want to use python 3, I have to type
"python3".
Having to type "python3" with a Django command causes problems. For example, if I type "python3 manage.py migrate", I get an error:

Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named 'django'
Django does not seem to recognize my python 3. How do I get around this? Your
help is greatly appreciated.
Answer: You need to install Django for Python 3: `pip3 install django`
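To confirm the install landed in the right interpreter, you can ask that interpreter directly (a small sketch):

pip3 install django
python3 -c "import django; print(django.get_version())"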
|