smoothing a resized image in Python
Question: I'm working my way through [How to Think Like a Computer
Scientist](http://interactivepython.org/runestone/static/thinkcspy/toc.html#),
and I've gotten stuck on the following exercise:
After you have scaled an image too much it looks blocky. One way of reducing
the blockiness of the image is to replace each pixel with the average values
of the pixels around it. This has the effect of smoothing out the changes in
color. Write a function that takes an image as a parameter and smooths the
image. Your function should return a new image that is the same as the old but
smoothed.
The tutorial provides a truncated image module, but I am supposed to be able
to solve this without using PIL. This is what I have so far:
import image
#image provided in tutorial
img = image.Image("luther.jpg")
win = image.ImageWin()
factor = 0

#resize the image
def resize(img, factor):
    W = img.getWidth()
    H = img.getHeight()
    newW = int(W * factor)
    newH = int(H * factor)
    newImage = image.EmptyImage(newW, newH)
    for col in range(newW):
        for row in range(newH):
            p = img.getPixel(col, row)
            newImage.setPixel(col * factor, row * factor, p)
    newImage.draw(win)
    img = newImage
    return img

#smooth out the image
def smooth(img):
    newImage = image.EmptyImage(img.getWidth(), img.getHeight())
    for col in range(img.getWidth()):
        for row in range(img.getHeight()):
            p1 = img.getPixel(col, row)
            p2 = img.getPixel(col+1, row)
            red1 = p1.getRed()
            red2 = p2.getRed()
            green1 = p1.getGreen()
            green2 = p2.getGreen()
            blue1 = p1.getBlue()
            blue2 = p2.getBlue()
            newRed = (red1 + red2)/2
            newGreen = (green1 + green2)/2
            newBlue = (blue1 + blue2)/2
            newPixel = image.Pixel(newRed, newGreen, newBlue)
            newImage.setPixel(col, row, newPixel)
    newImage.draw(win)
When I call the function, I get "Error: local variable red2 referenced before
assignment." I have gone line-by-line with print statements; this error
message is not returned until I try to create newPixel. Any help would be much
appreciated.
Answer: Can you print `red2`? If so, what is the output? You probably get `None` for
`red2`. Please post the rest of your code so we can see if something is wrong
with `img`.
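For reference, one likely culprit in the posted `smooth` is that `img.getPixel(col+1, row)` runs past the right edge on the last column. Below is a self-contained sketch of neighborhood averaging that clamps at the borders; it uses plain lists of `(r, g, b)` tuples as a stand-in for the tutorial's `image` module, so all names here are illustrative rather than the tutorial's API:

```python
def smooth(pixels):
    """Average each pixel with its in-bounds neighbors.

    `pixels` is a list of rows, each pixel an (r, g, b) tuple --
    a stand-in for the tutorial's image module.
    """
    H, W = len(pixels), len(pixels[0])
    out = [[None] * W for _ in range(H)]
    for row in range(H):
        for col in range(W):
            acc, n = [0, 0, 0], 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    r, c = row + dr, col + dc
                    if 0 <= r < H and 0 <= c < W:  # clamp: skip out-of-bounds neighbors
                        for i in range(3):
                            acc[i] += pixels[r][c][i]
                        n += 1
            out[row][col] = tuple(v // n for v in acc)
    return out

flat = [[(10, 10, 10)] * 3 for _ in range(3)]
print(smooth(flat)[1][1])  # a uniform image stays (10, 10, 10)
```

The same clamping idea carries over to the tutorial's API: only call `getPixel` for coordinates you have first checked to be inside the image.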
|
Using the same key in two dictionaries (Python)
Question: Here's what I have:
from pprint import pprint

Names = {}
Prices = {}
Exposure = {}

def AddName():
    company_name = input("Please enter company name: ")
    return company_name

def AddSymbol(company_name):
    stock_symbol = input("Please enter a stock symbol: ")
    Names[stock_symbol] = company_name
    return Names
^^ this updates the Names dictionary fine as {symbol:company name}
def AddPrices(stock_symbol):
    buy = float(input("Please enter buying price of stock: "))
    sell = float(input("Please enter current price of stock: "))
    Prices[stock_symbol] = buy, sell
    return Prices
^^ this generates a TypeError: unhashable type: 'dict' - what I want is it to
update the Prices dictionary like {symbol: buy price, sell price, symbol2: buy
price, sell price etc..}
def printDicts(Names, Prices):
    '''
    For debug purposes, prints out contents of dictionaries
    '''
    print( "Names is now:" )
    pprint(Names)
    print("Prices now:")
    pprint(Prices)

def main():
    company_name = AddName()
    stock_symbol = AddSymbol(company_name)
    AddPrices(stock_symbol)
    printDicts(Names, Prices)

main()
Being new to programming I'm not entirely sure how to fix this. Thanks for any
help!
Answer: Your `AddSymbol` returns `Names`, which is a dictionary. A dictionary is
mutable and unhashable, so it can't be used as a dictionary key.
Just use `return stock_symbol` in `AddSymbol`.
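A sketch of the corrected flow, with the interactive `input()` calls replaced by plain parameters so it runs non-interactively (function and variable names here are illustrative):

```python
Names = {}
Prices = {}

def add_symbol(company_name, stock_symbol):
    Names[stock_symbol] = company_name
    return stock_symbol                 # return the key, not the dict

def add_prices(stock_symbol, buy, sell):
    Prices[stock_symbol] = (buy, sell)  # a tuple keeps buy/sell together
    return Prices

symbol = add_symbol("Acme Corp", "ACME")
add_prices(symbol, 10.5, 12.0)
print(Prices)  # {'ACME': (10.5, 12.0)}
```

Because `add_symbol` now returns the string key, the later `Prices[stock_symbol] = ...` assignment hashes a string instead of a dict, and the TypeError disappears.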
|
Django development server keeps logging out
Question: I set SESSION_COOKIE_AGE to 360 in my settings.py, but it keeps
logging me out while I am developing against the dev server :((
Why is this happening, and how do I prevent it?
Thanks!
Here is my settings.py:
**settings.py**
# Django settings for quora project.
import os.path
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
    ('myname', 'myemail'),
)
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'mydb', # Or path to database file if using sqlite3.
        # The following settings are not used with sqlite3:
        'USER': '',
        'PASSWORD': '',
        'HOST': 'localhost', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
        'PORT': '', # Set to empty string for default.
    }
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'America/Chicago'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
PROJECT_ROOT = os.path.dirname(__file__)
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = os.path.join(PROJECT_ROOT, 'media')
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = '/media/'
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
    os.path.join(PROJECT_ROOT, 'static'),
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = '..........'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.Loader',
    'django.template.loaders.app_directories.Loader',
    # 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    # Uncomment the next line for simple clickjacking protection:
    # 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'blog.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'blog.wsgi.application'
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.humanize',
    # Additional
    'django.contrib.admin',
    'rest_framework',
    # Applications
    'core',
    'app_blog',
    'app_registration',
)
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}
LOGIN_URL = '/accounts/login/'
LOGIN_REDIRECT_URL = '/'
#Cookie
SESSION_COOKIE_AGE = 360
#Custom user model
AUTH_USER_MODEL = "app_registration.MyUser"
AUTH_PROFILE_MODULE = 'app_registration.MyUserProfile'
# Registration
REGISTRATION_OPEN = True
ACCOUNT_ACTIVATION_DAYS = 7
Answer: The cookie age is the problem: 360 means the session cookie is only valid
for 360 seconds (six minutes), so frequent logouts are expected. Raise it, and
check the related session settings:

#Cookie name, this can be whatever you want
SESSION_COOKIE_NAME = 'sessionid'  # use the sessionid in your views code
#the module to store sessions data
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
#age of cookie in seconds (default: 2 weeks)
SESSION_COOKIE_AGE = 24*60*60*7  # e.g. 7 days, in seconds
#whether a user's session cookie expires when the web browser is closed
SESSION_EXPIRE_AT_BROWSER_CLOSE = False
#whether the session cookie should be secure (https:// only)
SESSION_COOKIE_SECURE = False
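Since `SESSION_COOKIE_AGE` is measured in seconds, it is worth spelling out the arithmetic behind the answer's numbers:

```python
# SESSION_COOKIE_AGE is measured in seconds.
six_minutes = 360
two_weeks = 14 * 24 * 60 * 60   # Django's default session length

print(six_minutes // 60)   # 6 minutes -- hence the frequent logouts
print(two_weeks)           # 1209600
```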
|
Setting up non-blocking socket for Jython for use in Chat server
Question: I'm trying to create a Jython (actually monkeyrunner) program which receives
messages from another Python program (CPython, because it uses OpenCV).
First, I tried to implement a chat-server example and ran into a problem.
While the example uses blocking sockets with select, Jython's select cannot
handle blocking sockets.
Therefore, I put the line **'server_socket.setblocking(0)'** in when setting up the
socket, but nothing changed.
I also tried 'from select import cpython_compatible_select as select', but
it raises an AttributeError: **'function' object has no attribute 'select'**.
Below is my code
# coding: iso-8859-1
import socket, select

#Function to broadcast chat messages to all connected clients
def broadcast_data(sock, message):
    #Do not send the message to master socket and the client who has send us the message
    for socket in CONNECTION_LIST:
        if socket != server_socket and socket != sock:
            try:
                socket.send(message)
            except:
                # broken socket connection may be, chat client pressed ctrl+c for example
                socket.close()
                CONNECTION_LIST.remove(socket)

if __name__ == "__main__":
    # List to keep track of socket descriptors
    CONNECTION_LIST = []
    RECV_BUFFER = 4096  # Advisable to keep it as an exponent of 2
    PORT = 5000
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # this has no effect, why ?
    #JYTHON never supports blocking-mode socket so make it unblock
    server_socket.setblocking(0)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind(("0.0.0.0", PORT))
    server_socket.listen(10)
    # Add server socket to the list of readable connections
    CONNECTION_LIST.append(server_socket)
    print "Chat server started on port " + str(PORT)
    while 1:
        # Get the list sockets which are ready to be read through select
        #JYTHON never supports blocking-mode socket so make it unblock
        server_socket.setblocking(0)
        read_sockets, write_sockets, error_sockets = select.select(CONNECTION_LIST, [], [])
        for sock in read_sockets:
            #New connection
            if sock == server_socket:
                # Handle the case in which there is a new connection recieved through server_socket
                #JYTHON never supports blocking-mode socket so make it unblock
                server_socket.setblocking(0)
                sockfd, addr = server_socket.accept()
                CONNECTION_LIST.append(sockfd)
                #print "Client (%s, %s) connected" % addr
                broadcast_data(sockfd, "[%s:%s] entered room\n" % addr)
            #Some incoming message from a client
            else:
                # Data recieved from client, process it
                try:
                    #In Windows, sometimes when a TCP program closes abruptly,
                    # a "Connection reset by peer" exception will be thrown
                    data = sock.recv(RECV_BUFFER)
                    if data:
                        print data
                        broadcast_data(sock, "\r" + '<' + str(sock.getpeername()) + '> ' + data)
                except:
                    broadcast_data(sock, "Client (%s, %s) is offline" % addr)
                    print "Client (%s, %s) is offline" % addr
                    sock.close()
                    CONNECTION_LIST.remove(sock)
                    continue
    server_socket.close()
    #see http://www.binarytides.com/code-chat-application-server-client-sockets-python/
and my error message
C:\NVPACK\android-sdk-windows\tools\lib>monkeyrunnerUTF chatserver.py
Chat server started on port 5000
130815 17:06:17.418:S [MainThread] [com.android.monkeyrunner.MonkeyRunnerOptions
] Script terminated due to an exception
130815 17:06:17.418:S [MainThread] [com.android.monkeyrunner.MonkeyRunnerOptions
]Traceback (most recent call last):
File "C:\NVPACK\android-sdk-windows\tools\chatserver.py", line 41, in <module>
read_sockets,write_sockets,error_sockets = select.select(CONNECTION_LIST,[],
[])
File "C:\NVPACK\android-sdk-windows\tools\lib\jython-standalone-2.5.3.jar\Lib\
select.py", line 225, in native_select
File "C:\NVPACK\android-sdk-windows\tools\lib\jython-standalone-2.5.3.jar\Lib\
select.py", line 106, in register
select.error: (20000, 'socket must be in non-blocking mode')
Thank you in advance :)
Answer: **AndroidViewClient's** tests implement a **MockViewServer** using
`monkeyrunner`, setting the socket as non-blocking and using
from select import cpython_compatible_select as select
for select. Note that after this import the name `select` is bound to a
_function_, so call it directly as `select(CONNECTION_LIST, [], [])`; calling
`select.select(...)` on it is exactly what raises **'function' object has no
attribute 'select'**.
See the source code at
<https://github.com/dtmilano/AndroidViewClient/blob/master/AndroidViewClient/tests/com/dtmilano/android/mocks.py#L758>
This works on Linux and OSX (your mileage may vary with Windows).
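For what it's worth, the non-blocking `select` pattern itself can be exercised in plain CPython with `socket.socketpair()`, with no chat client involved (a sketch, not Jython-specific code):

```python
import select
import socket

# A portable sketch of the non-blocking select pattern from the question,
# using socket.socketpair() so it runs without a real network peer.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

b.send(b"hello")  # make `a` readable

# select takes (rlist, wlist, xlist[, timeout]) and returns ready sockets
readable, _, _ = select.select([a], [], [], 1.0)
data = readable[0].recv(4096)
print(data)  # b'hello'

a.close()
b.close()
```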
|
How to apply an adaptive filter in Python
Question: I would like to apply an adaptive filter in Python, but can't find any
documentation or examples online of how to implement such an algorithm. I'm
familiar with designing "static" filters using the `scipy.signal` toolbox, but
what I don't know how to do is design an adaptive filter.
To clarify: I have a recorded signal `S` which contains noise. Within this
recording there is a "true" function that I would like to access, call this
`T`. I also have an estimate of `T`. I want to design a filter such that the
error between the filtered `S` and `T` is minimised. Note that in this case a
static filter is not useful, as I am trying to filter a nonstationary signal.
Answer: Here's a basic LMS adaptive filter in Python with Numpy.
Comments are welcome, testcases most welcome.

""" lms.py: a simple python class for Least mean squares adaptive filter """
from __future__ import division
import numpy as np
__version__ = "2013-08-29 aug denis"
#...............................................................................
class LMS:
""" lms = LMS( Wt, damp=.5 ) Least mean squares adaptive filter
in:
Wt: initial weights, e.g. np.zeros( 33 )
damp: a damping factor for swings in Wt
# for t in range(1000):
yest = lms.est( X, y [verbose=] )
in: X: a vector of the same length as Wt
y: signal + noise, a scalar
optional verbose > 0: prints a line like "LMS: yest y c"
out: yest = Wt.dot( X )
lms.Wt updated
How it works:
on each call of est( X, y ) / each timestep,
increment Wt with a multiple of this X:
Wt += c X
What c would give error 0 for *this* X, y ?
y = (Wt + c X) . X
=>
c = (y - Wt . X)
--------------
X . X
Swings in Wt are damped a bit with a damping factor a.k.a. mu in 0 .. 1:
Wt += damp * c * X
Notes:
X s are often cut from a long sequence of scalars, but can be anything:
samples at different time scales, seconds minutes hours,
or for images, cones in 2d or 3d x time.
"""
# See also:
# http://en.wikipedia.org/wiki/Least_mean_squares_filter
# Mahmood et al. Tuning-free step-size adaptation, 2012, 4p
# todo: y vec, X (Wtlen,ylen)
#...............................................................................
def __init__( self, Wt, damp=.5 ):
self.Wt = np.squeeze( getattr( Wt, "A", Wt )) # matrix -> array
self.damp = damp
def est( self, X, y, verbose=0 ):
X = np.squeeze( getattr( X, "A", X ))
yest = self.Wt.dot(X)
c = (y - yest) / X.dot(X)
# clip to cmax ?
self.Wt += self.damp * c * X
if verbose:
print "LMS: yest %-6.3g y %-6.3g err %-5.2g c %.2g" % (
yest, y, yest - y, c )
return yest
#...............................................................................
if __name__ == "__main__":
import sys
filterlen = 10
damp = .1
nx = 500
f1 = 40 # chirp
noise = .05 * 2 # * swing
plot = 0
seed = 0
exec( "\n".join( sys.argv[1:] )) # run this.py n= ... from sh or ipython
np.set_printoptions( 2, threshold=100, edgeitems=10, linewidth=80, suppress=True )
np.random.seed(seed)
def chirp( n, f0=2, f1=40, t1=1 ): # <-- your test function here
# from $scipy/signal/waveforms.py
t = np.arange( n + 0. ) / n * t1
return np.sin( 2*np.pi * f0 * (f1/f0)**t )
Xlong = chirp( nx, f1=f1 )
# Xlong = np.cos( 2*np.pi * freq * np.arange(nx) )
if noise:
Xlong += np.random.normal( scale=noise, size=nx ) # laplace ...
Xlong *= 10
print 80 * "-"
title = "LMS chirp filterlen %d nx %d noise %.2g damp %.2g " % (
filterlen, nx, noise, damp )
print title
ys = []
yests = []
#...............................................................................
lms = LMS( np.zeros(filterlen), damp=damp )
for t in xrange( nx - filterlen ):
X = Xlong[t:t+filterlen]
y = Xlong[t+filterlen] # predict
yest = lms.est( X, y, verbose = (t % 10 == 0) )
ys += [y]
yests += [yest]
y = np.array(ys)
yest = np.array(yests)
err = yest - y
averr = "av %.2g += %.2g" % (err.mean(), err.std())
print "LMS yest - y:", averr
print "LMS weights:", lms.Wt
if plot:
import pylab as pl
fig, ax = pl.subplots( nrows=2 )
fig.set_size_inches( 12, 8 )
fig.suptitle( title, fontsize=12 )
ax[0].plot( y, color="orangered", label="y" )
ax[0].plot( yest, label="yest" )
ax[0].legend()
ax[1].plot( err, label=averr )
ax[1].legend()
if plot >= 2:
pl.savefig( "tmp.png" )
pl.show()
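As a sketch of just the update rule, here is the same `est` step restated in Python 3 without numpy (the test signal and names below are mine, not part of the answer):

```python
import random

# The answer's update rule in plain Python 3 (a sketch;
# `damp` is the step size, a.k.a. mu).
def lms_step(Wt, X, y, damp=0.5):
    yest = sum(w * x for w, x in zip(Wt, X))        # yest = Wt . X
    c = (y - yest) / sum(x * x for x in X)          # c = (y - yest) / (X . X)
    Wt = [w + damp * c * x for w, x in zip(Wt, X)]  # Wt += damp * c * X
    return Wt, yest

# Identify a fixed 2-tap filter y = 1*x0 + 2*x1 from noiseless samples.
random.seed(0)
Wt = [0.0, 0.0]
for _ in range(200):
    X = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1.0 * X[0] + 2.0 * X[1]
    Wt, _ = lms_step(Wt, X, y)
print(Wt)  # converges toward [1.0, 2.0]
```

Each step removes a `damp` fraction of the prediction error along the direction of the current `X`, which is why the weights drift toward the true filter taps.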
|
Returning generator from a function
Question: I'm slowly wrapping my head around Python generators.
While it's not a real-life problem for now, I'm still wondering why I can't
return a generator from a function.
When I define a function with `yield`, it acts as a generator. But if I define
it _inside_ another function and try to return that instead, I get an ordinary
function, i.e. not a generator with a `next` method.
In other words, why does the `give_gen()` approach in the code below not work?
#!/usr/bin/python
import time

def gen(d):
    n = 0
    while True:
        n = n + d
        time.sleep(0.5)
        yield n

def give_gen(d):
    def fn():
        n = 0
        while True:
            n = n + d
            time.sleep(0.5)
            yield n
    return fn

if __name__ == '__main__':
    g = give_gen(3)  # does not work
    g = gen(3)       # works well
    while True:
        print g.next()
        # AttributeError: 'function' object has no attribute 'next'
        # in case of give_gen
Why can't I return a generator from a function?
Answer: A generator function returns a generator _only when called_. Call `fn` to
create the generator object:
return fn()
or call the returned object:
g = give_gen(3)()
You _did_ call `gen()`; had you referred to _just_ `gen` without calling it
you'd have a reference to that function.
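In Python 3 the same fix looks like this (using the built-in `next()` rather than the Python 2 `.next()` method; the sleeps are omitted for brevity):

```python
# Python 3 sketch of the fix: call fn so a generator object is returned.
def give_gen(d):
    def fn():
        n = 0
        while True:
            n = n + d
            yield n
    return fn()   # <- the call is the fix

g = give_gen(3)
print(next(g))  # 3
print(next(g))  # 6
```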
|
Importing large integers from a mixed datatype file with numpy genfromtxt
Question: I have a file with the format:
1 2.5264 24106644528 astring
I would like to import the data. I am using:
>>> numpy.genfromtxt('myfile.dat',dtype=None)
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    numpy.genfromtxt('myfile.dat',skip_header=27,dtype=None)
  File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 1691, in genfromtxt
    output = np.array(data, dtype=ddtype)
OverflowError: Python int too large to convert to C long
I checked the maximum integer on my (32-bit) system:
>>> import sys
>>> sys.maxint
2147483647
Is there a way to increase the integer limit? Or can I get around my import
problem another way (without putting '.0' after all of the ints in file)?
Answer: Realised I can do this:
>>> numpy.genfromtxt('myfile.dat',dtype=['i4','f8','f8','a14'])
array((1, 2.5264, 24106644528.0, 'astring'),
dtype=[('f0', '<i4'), ('f1', '<f8'), ('f2', '<f8'), ('f3', 'S14')])
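The root cause is numpy coercing the value into a 32-bit C long, not a Python limit; Python ints are arbitrary precision. A quick stdlib demonstration, plus the observation that an explicit 64-bit integer dtype such as `'i8'` should also work in place of the `'f8'` used above:

```python
import struct

big = 24106644528  # the third column from the question's file

# Python ints are arbitrary precision; the overflow comes from numpy
# forcing the value into a C long (32 bits on the asker's system):
try:
    struct.pack("<l", big)          # "<l" is a 32-bit signed integer
    fits_32 = True
except struct.error:
    fits_32 = False
print(fits_32)                      # False: hence the OverflowError

# A 64-bit integer holds it fine, which is why an explicit 64-bit
# dtype avoids the problem:
print(struct.unpack("<q", struct.pack("<q", big))[0])  # 24106644528
```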
|
Income tax calculator and global var problems
Question: Complete beginner here. I've been trying to pick up programming in my spare
time and don't really have any interactive resources to consult. I've tried my
best to get a program working where I've tried to program an income tax
calculator. I've pasted my program in its entirety.
What I'm hoping to understand from this is why the `tax_calc()` function is
not saving variable `payable`. I've created a test line
print ('Test Tann:', tann,'Test Tmon', tmon,'Test tinc',tinc,'test payable',payable)
in order to check the var values and the only one that doesn't update is
`payable`. Is it a global var problem?
I'd also really appreciate any other advice regarding my coding. It's my first
program and I only really know how to use globals to change vars in this
regard despite a lot of experienced users expressing that the global call is
very unnecessary and messy or unpythonic. Also, whatever advice you have to
shorten or make my code more efficient is really appreciated.
from decimal import *

#Hmm, looks like I have to define all vars and dicts before functions even if I only call functions after declaration?
tinc = 0
tann = 0
tmon = 0
age = 0
payable = 0

#Define calculation for specific tax brackets
rates = {}
rates['T1'] = 0.18 * tinc
rates['T2'] = 29808 + (.25 * (tinc - 165600))
rates['T3'] = 53096 + (.30 * (tinc - 258750))
rates['T4'] = 82904 + (.35 * (tinc - 358110))
rates['T5'] = 132894 + (.38 * (tinc - 500940))
rates['T6'] = 185205 + (.40 * (tinc - 638600))

#Defines the actual range for deciding on tax brackets
tier = {}
tier['T1'] = range(0,165600)
tier['T2'] = range(165601,258750)
tier['T3'] = range(258751,358110)
tier['T4'] = range(358111,500940)
tier['T5'] = range(500941,638600)
tier['T6'] = range(638601, 5000000)

#Defines the brackets for age variable
tierage = {}
tierage['T1'] = 12080
tierage['T2'] = 12080 + 6750
tierage['T3'] = 12080 + 6750 + 2250

#Asks for whether you want to enter monthly or annual salary
def ask_choice():
    print ('Would you like to input monthly or annual salary? Please select (m/a)')
    global choice
    choice = str(input('> '))

#Asks for age
def ask_age():
    global age
    age = int(input("Please enter your age: "))

#Asks for annual salary, all inputs done in floats to allow for cents
def ask_annual():
    global tann, tinc
    tann = 0
    tann = float(input("Please enter your annual taxable income: "))
    tinc = tann
    print ('Your annual taxable income is',tinc)

#Asks for monthly salary, all inputs done in floats to allow for cents
def ask_monthly():
    global tmon, tinc
    tmon = 0
    tmon = float(input("Please enter your monthly taxable income: "))
    tinc = tmon*12
    print ('Your annual taxable income is',tinc)

#Decides on and calls on which function to ask for for asking salary
def asking():
    global error
    error = True
    #keeps looping until you enter Mm or Aa
    while error == True:
        if choice.lower() == "m":
            ask_monthly()
            error == False
            break
        elif choice.lower() == "a":
            ask_annual()
            error == False
            break
        else:
            print ("Input error, please input either 'a' to select annual or 'm' to select monthly")
            error == True
            ask_choice()

def tax_calc():
    global payable, decpayable, tinc
    if tinc in tier['T1']:
        payable = rates['T1']
        print ('You fall in tax bracket 1')
    elif tinc in tier['T2']:
        payable = rates['T2']
        print ('You fall in tax bracket 2')
    elif tinc in tier['T3']:
        payable = rates['T3']
        print ('You fall in tax bracket 3')
    elif tinc in tier['T4']:
        payable = rates['T4']
        print ('You fall in tax bracket 4')
    elif tinc in tier['T5']:
        payable = rates['T5']
        print ('You fall in tax bracket 5')
    elif tinc in tier['T6']:
        payable = rates['T6']
        print ('You fall in tax bracket 6')
    decpayable = Decimal(payable).quantize(Decimal('0.01'))
    #Decimal used specifically for money, defines two decimal places.
    print ('Tax before rebates: R',decpayable)
    print ('Test Tann:', tann,'Test Tmon', tmon,'Test tinc',tinc,'test payable',payable)

def age_calc():
    global final
    if age < 65:
        final = payable - tierage['T1']
        print('You qualify for a primary rebate')
    elif 65 <= age < 75:
        final = payable - tierage['T2']
        print('You qualify for a primary and secondary rebate')
    elif age >= 75:
        final = payable - tierage['T3']
        print('You qualify for a primary, secondary and tertiary rebate')
    decfinal = Decimal(final).quantize(Decimal('.01'))
    print ('Annual tax after rebates is: R'+str(decfinal))
    print ('Monthly tax is: R', Decimal(final/12).quantize(Decimal('.01')))
    print ('You net salary per month is therefore: ', (tinc/12 - payable),
           'or',(tinc - payable*12),'per year')

def enter_another():
    print ("Would you like to calculate tax on another amount? (y/n) ")
    yesno = input('> ')
    if yesno.lower() == "y" or yesno.lower() == "yes":
        print ('Alright, let\'s start again\n')
        ask_choice()
        asking()
        ask_age()
        tax_calc()
        age_calc()
        enter_another()
    elif yesno.lower() == "n" or yesno.lower() == "no":
        print ('Thank you for trying out this calculator')

ask_choice()
asking()
ask_age()
tax_calc()
age_calc()
enter_another()
input()
Answer: I think the global variables are causing you trouble. You have this near the
top
tinc = 0
#...
rates = {}
rates['T1'] = 0.18 * tinc
rates['T2'] = 29808 + (.25 * (tinc - 165600))
rates['T3'] = 53096 + (.30 * (tinc - 258750))
rates['T4'] = 82904 + (.35 * (tinc - 358110))
rates['T5'] = 132894 + (.38 * (tinc - 500940))
rates['T6'] = 185205 + (.40 * (tinc - 638600))
This will use a value of 0 for `tinc` to set up the `rates`. However, you have
a function later where the user inputs the taxable income (in `ask_monthly` or
`ask_annual`). You will need to change the rates you use depending on the
value tinc takes.
**EDIT**
If you change this into a function and return the dictionary, you can pass
that to whichever functions use it
def setup_rates(tinc):
    rates = {}
    rates['T1'] = 0.18 * tinc
    rates['T2'] = 29808 + (.25 * (tinc - 165600))
    rates['T3'] = 53096 + (.30 * (tinc - 258750))
    rates['T4'] = 82904 + (.35 * (tinc - 358110))
    rates['T5'] = 132894 + (.38 * (tinc - 500940))
    rates['T6'] = 185205 + (.40 * (tinc - 638600))
    return rates
Change `tax_calc` to take the rates:

def tax_calc(rates):
    #... as you were

and then change your "main" code to build and pass them in:

asking()
ask_age()
rates = setup_rates(tinc)
tax_calc(rates)
You can probably gradually refactor the functions to return the variables that
are currently global and use that in the next functions, removing the globals
slowly.
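As a sketch of where that refactor can end up, the whole bracket lookup can become data plus one pure function (bracket figures taken from the question; the names and layout here are illustrative). Plain comparisons also sidestep the `tinc in range(...)` tests, which fail for non-integer incomes:

```python
BRACKETS = [  # (lower bound, base tax, marginal rate) -- figures from the question
    (638601, 185205, .40),
    (500941, 132894, .38),
    (358111, 82904, .35),
    (258751, 53096, .30),
    (165601, 29808, .25),
    (0, 0, .18),
]

def tax_payable(tinc):
    """Annual tax before rebates; returns a value instead of printing."""
    for lower, base, rate in BRACKETS:
        if tinc >= lower:
            return base + rate * (tinc - max(lower - 1, 0))

print(tax_payable(100000))  # 18000.0 (0.18 * 100000)
print(tax_payable(200000))  # 38408.0 (29808 + 0.25 * 34400)
```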
|
Modify the list that is being iterated in python
Question: I need to update a list while it is being iterated over. Basically, I have a
list of tuples called `some_list`. Each tuple contains a bunch of strings, such
as a name and a path. What I want to do is go over every tuple, look at the name,
then find all the tuples that contain the string with an identical path and
delete them from the list.
The order does not matter, I merely wish to go over the whole list, but
whenever I encounter a tuple with a certain path, all tuples (including
oneself) should be removed from the list. I can easily construct such a list
and assign it to `some_list_updated`, but the problem seems to be that the
original list does not update...
The code has more or less the following structure:
for tup in some_list[:]:
    ...
    ...somecode...
    ...
    some_list = some_list_updated
It seems that the list does update appropriately when I print it out, but
python keeps iterating over the old list, it seems. What is the appropriate
way to go about it - if there is one? Thanks a lot!
Answer: You want to _count_ the paths using a dictionary, then use only those that
have a count of 1, then loop using a list comprehension to do the final
filter. Using a `collections.Counter()` object makes the counting part easy:
from collections import Counter
counts = Counter(tup[index_of_path] for tup in some_list)
some_list = [tup for tup in some_list if counts[tup[index_of_path]] == 1]
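Applied to concrete data, the answer's two lines look like this (the `(name, path)` tuple layout is assumed for illustration):

```python
from collections import Counter

# Concrete sketch of the answer: drop every tuple whose path occurs
# more than once, including the first occurrence.
some_list = [
    ("a.txt", "/tmp"),
    ("b.txt", "/home"),
    ("c.txt", "/tmp"),   # /tmp is duplicated -> a.txt and c.txt both go
]
index_of_path = 1

counts = Counter(tup[index_of_path] for tup in some_list)
some_list = [tup for tup in some_list if counts[tup[index_of_path]] == 1]
print(some_list)  # [('b.txt', '/home')]
```

Counting first and filtering second avoids mutating the list while iterating over it, which is the root of the original problem.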
|
What am I doing wrong pypi missing "Links for"
Question: I'm trying out PyPI to publish some libraries, so I started with a
simple project. I have the following setup.py:
import os
from distutils.core import setup

setup(
    name='event_amd',
    packages = ["event_amd"],
    description='Port for EventEmitter from nodejs',
    version='1.0.7',
    author="Borrey Kim",
    author_email="[email protected]",
    url="https://bitbucket.org/borreykim/event_amd",
    download_url="https://bitbucket.org/borreykim/event_amd/downloads/event_amd-1.0.6.tar.gz",
    keywords=['events'],
    long_description = """\
This is an initial step to port over EventEmitter of nodejs. This is done with the goal of having libraries that are cross platform so that cross communication is easier, and collected together.
"""
)
I've registered it, but `sudo pip install event_amd` gives me an error:
`DistributionNotFound: No distributions at all found for event-amd` (I'm not
sure how event_amd turns into event-amd?). Also there are no links under
<https://pypi.python.org/simple/event_amd/> (which other projects seem to have).
I was wondering if I am doing something wrong in the setup.py or what may be
causing this.
Thanks in advance.
Answer: You need to upload a source archive after registering the release: `python
setup.py register sdist upload`. (As for `event_amd` vs `event-amd`: pip
normalizes project names, treating underscores and hyphens as equivalent, so
the renaming in the error message is harmless.)
|
freeswitch python scripts errno 10 no child processes
Question: I've got an issue when running FreeSWITCH with some Python scripts inside the
dialplan that use django.db models. Whenever it starts, it causes errors:
freeswitch@ubuntu> 2013-08-15 06:56:08.094348 [ERR] mod_python.c:231 Error importing module
2013-08-15 06:56:08.094348 [ERR] mod_python.c:164 Python Error by calling script "fs_scripts.ringback": <type 'exceptions.IOError'>
Message: [Errno 10] No child processes
Exception: None
Traceback (most recent call last)
File: "/home/piotrek/lettel/fs_scripts/ringback.py", line 19, in <module>
File: "/home/piotrek/lettel/api/call.py", line 3, in <module>
File: "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 11, in <module>
File: "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 184, in inner
File: "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 42, in _setup
File: "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 93, in __init__
File: "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
File: "/home/piotrek/lettel/lettel/settings.py", line 13, in <module>
File: "/usr/local/lib/python2.7/dist-packages/djcelery/__init__.py", line 25, in <module>
File: "/usr/local/lib/python2.7/dist-packages/celery/__compat__.py", line 135, in __getattr__
File: "/usr/local/lib/python2.7/dist-packages/celery/_state.py", line 19, in <module>
File: "/usr/local/lib/python2.7/dist-packages/celery/utils/__init__.py", line 22, in <module>
File: "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 10, in <module>
File: "/usr/local/lib/python2.7/dist-packages/kombu/abstract.py", line 12, in <module>
File: "/usr/local/lib/python2.7/dist-packages/kombu/connection.py", line 24, in <module>
File: "/usr/local/lib/python2.7/dist-packages/kombu/log.py", line 8, in <module>
File: "/usr/local/lib/python2.7/dist-packages/kombu/utils/compat.py", line 68, in <module>
File: "/usr/lib/python2.7/platform.py", line 1337, in system
File: "/usr/lib/python2.7/platform.py", line 1304, in uname
File: "/usr/lib/python2.7/platform.py", line 1039, in _syscmd_uname
edit: the line that causes errors is a simple import from django.db:
from django.db import models
This whole setup is already running on some server that I don't have access to,
so there seems to be nothing wrong with the Django app or the scripts...
Any help would be appreciated because I am running out of ideas on how to solve
this problem...
Answer: I don't know your performance and scalability requirements, but a
softswitch has realtime requirements for running RTP streams and signalling, so
running Django applications under mod_python is not a good solution. Furthermore,
mod_python is not the same as running the Python interpreter directly; some
things do not work. You can check the mod_python issues here: [mod_python
issues](http://jira.freeswitch.org/issues/?jql=text%20~%20%22mod_python%22)
I would advise you to split your Python solution into a client/server
architecture. The script running under mod_python would make queries to your
Django application. That way you get rid of the complexity on the FreeSWITCH
side, stay scalable, improve performance and most probably get everything
working OK.
|
Python function that corrects a email domain
Question: Okay, I have this function **construct_email(name, domain):**
def construct_email(name, domain):
if domain == True:
print 'True'
else:
print'None'
return name + "@" + domain
This function isn't big or anything; it's supposed to output an email address.
But I also have this other function `correct_domain(domain):` that is supposed
to check the domain name that's been input into `construct_email(name, domain):`
import re
def correct_domain(domain):
if re.search(r'^\.|\.$', domain) or re.search(r'\.\.', domain):
return False
elif re.search(r'\.', domain):
return True
else:
return False
My question is, how do I do this?
Answer: If I'm understanding you correctly:
import re
def construct_email(name, domain):
if not check_domain(domain):
return False
return name + "@" + domain
def check_domain(domain):
dots = re.findall(r"\.", domain)
if (len(dots) != 1) or domain.startswith(".") or domain.endswith("."):
return False
return True
def main():
while True:
email = construct_email(raw_input("Name: "), raw_input("Domain: "))
if email:
break
print "Bad Domain, try again...\n"
print email
#other code here...
|
Embed an interactive 3D plot in PySide
Question: What is the best way to embed an interactive 3D plot in a PySide GUI? I have
looked at some examples on here of 2D plots embedded in a PySide GUI:
[Getting PySide to Work With
Matplotlib](http://stackoverflow.com/questions/6723527/getting-pyside-to-work-
with-matplotlib)
[Matplotlib Interactive Graph Embedded In
PyQt](http://stackoverflow.com/questions/16188037/matplotlib-interactive-
graph-embedded-in-pyqt)
[Python/Matplotlib/Pyside Fast Timetrace
Scrolling](http://stackoverflow.com/questions/16824718/python-matplotlib-
pyside-fast-timetrace-scrolling/16825869#16825869)
However, the functionality that I'm looking for is not quite the same. The
figure needs to rotate and zoom based on mouse input from the user in the same
way as if it were drawn in a separate window.
I'm trying to avoid having to go in manually and write functions for
transforming mouse click + move into a figure rotate and canvas repaint--even
if that's the only way, I'm not even sure how to do that. But I figure (no pun
intended) that there should be a way to reuse the functionality already
present for creating 3D plots in their own windows.
Here's my code. It works as intended, but the plot is not interactive. Any
advice is appreciated!
**EDIT:** I fixed the use of FigureCanvas according to
[tcaswell](http://stackoverflow.com/users/380231/tcaswell)'s corrections. I
also added a bit from the matplotlib [Event Handling and
Picking](http://matplotlib.org/users/event_handling.html) documentation to
show that the figure seems to be getting the events upon mouseclick.
**Final Edit:** The following code now produces the plot as desired.
# -*- coding: utf-8 -*-
from PySide import QtCore, QtGui
import numpy as np
import matplotlib
import sys
# specify the use of PySide
matplotlib.rcParams['backend.qt4'] = "PySide"
# import the figure canvas for interfacing with the backend
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg \
as FigureCanvas
# import 3D plotting
from mpl_toolkits.mplot3d import Axes3D # @UnusedImport
from matplotlib.figure import Figure
# Auto-generated code from QT Designer ----------------------------------------
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(750, 497)
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.horizontalLayout_2 = QtGui.QHBoxLayout(self.centralwidget)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.frame_2 = QtGui.QFrame(self.centralwidget)
self.frame_2.setFrameShape(QtGui.QFrame.StyledPanel)
self.frame_2.setFrameShadow(QtGui.QFrame.Raised)
self.frame_2.setObjectName("frame_2")
self.verticalLayout = QtGui.QVBoxLayout(self.frame_2)
self.verticalLayout.setObjectName("verticalLayout")
self.label = QtGui.QLabel(self.frame_2)
self.label.setObjectName("label")
self.verticalLayout.addWidget(self.label)
self.label_2 = QtGui.QLabel(self.frame_2)
self.label_2.setObjectName("label_2")
self.verticalLayout.addWidget(self.label_2)
self.lineEdit = QtGui.QLineEdit(self.frame_2)
sizePolicy = QtGui.QSizePolicy(
QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(
self.lineEdit.sizePolicy().hasHeightForWidth())
self.lineEdit.setSizePolicy(sizePolicy)
self.lineEdit.setObjectName("lineEdit")
self.verticalLayout.addWidget(self.lineEdit)
spacerItem = QtGui.QSpacerItem(
20, 40, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout.addItem(spacerItem)
self.horizontalLayout_2.addWidget(self.frame_2)
self.frame_plot = QtGui.QFrame(self.centralwidget)
self.frame_plot.setMinimumSize(QtCore.QSize(500, 0))
self.frame_plot.setFrameShape(QtGui.QFrame.StyledPanel)
self.frame_plot.setFrameShadow(QtGui.QFrame.Raised)
self.frame_plot.setObjectName("frame_plot")
self.horizontalLayout_2.addWidget(self.frame_plot)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(QtGui.QApplication.translate(
"MainWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("MainWindow",
"This is a qlabel.", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("MainWindow",
"And this is another one.", None, QtGui.QApplication.UnicodeUTF8))
self.lineEdit.setText(QtGui.QApplication.translate("MainWindow",
"Text goes here.", None, QtGui.QApplication.UnicodeUTF8))
# Auto-generated code from QT Designer ----------------------------------------
class MainWindow(QtGui.QMainWindow):
def __init__(self, parent=None):
super(MainWindow, self).__init__(parent)
        # initialize the window
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
# create the matplotlib widget and put it in the frame on the right
self.ui.plotWidget = Mpwidget(parent=self.ui.frame_plot)
class Mpwidget(FigureCanvas):
def __init__(self, parent=None):
self.figure = Figure(facecolor=(0, 0, 0))
super(Mpwidget, self).__init__(self.figure)
self.setParent(parent)
# plot random 3D data
self.axes = self.figure.add_subplot(111, projection='3d')
self.data = np.random.random((3, 100))
self.axes.plot(self.data[0, :], self.data[1, :], self.data[2, :])
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
mw = MainWindow()
mw.show()
# adjust the frame size so that it fits right after the window is shown
s = mw.ui.frame_plot.size()
mw.ui.plotWidget.setGeometry(1, 1, s.width() - 2, s.height() - 2)
sys.exit(app.exec_())
Answer: You are not using `FigureCanvas` right:
class Mpwidget(FigureCanvas):
def __init__(self, parent=None):
self.figure = Figure(facecolor=(0, 0, 0))
super(Mpwidget, self).__init__(self.figure) # this object _is_ your canvas
self.setParent(parent)
# plot random 3D data
self.axes = self.figure.add_subplot(111, projection='3d')
self.data = np.random.random((3, 100))
self.axes.plot(self.data[0, :], self.data[1, :], self.data[2, :])
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
mw = MainWindow()
mw.show()
# adjust the frame size so that it fits right after the window is shown
s = mw.ui.frame_plot.size()
mw.ui.plotWidget.setGeometry(1, 1, s.width() - 2, s.height() - 2)
sys.exit(app.exec_())
Every time you called `FigureCanvas(..)` you were attaching the figure to a
new canvas (which was not the `FigureCanvas` you were seeing), hence the
callbacks never fired (because they were listening on a `FigureCanvas` that
you couldn't see).
|
Terminating python script through emacs
Question: I am running a python interpreter through emacs. I often find myself running
python scripts and wishing I could terminate them without killing the entire
buffer. That is because I do not want to import libraries all over again...
Is there a way to tell python to stop executing a script and give me a prompt?
Answer: Try sending a keyboard interrupt, which `comint` sends to the interpreter via
`C-c` `C-c`.
I generally hold down `C-c` until the prompt returns.
|
Exception: Cannot import python-ntlm module
Question: I am using suds 0.4 and running into the error below. I read on the web that
this issue has been fixed since 0.3.8, so I am wondering what is wrong here?
File "script.py", line 532, in <module>
prism = Prism('http://prism:8000/SearchService.svc?wsdl')
File "script.py", line 31, in __init__
self.CR_soapclient = Client(self.CR_url, transport=WindowsHttpAuthenticated(username=user, password=passwd))
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/client.py", line 112, in __init__
self.wsdl = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/reader.py", line 152, in open
d = self.fn(url, self.options)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/wsdl.py", line 136, in __init__
d = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/reader.py", line 79, in open
d = self.download(url)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/reader.py", line 95, in download
fp = self.options.transport.open(Request(url))
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/transport/https.py", line 60, in open
return HttpTransport.open(self, request)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/transport/http.py", line 62, in open
return self.u2open(u2request)
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/transport/http.py", line 113, in u2open
url = self.u2opener()
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/transport/http.py", line 127, in u2opener
return u2.build_opener(*self.u2handlers())
File "/usr/local/lib/python2.7/dist-packages/suds-0.4-py2.7.egg/suds/transport/https.py", line 95, in u2handlers
raise Exception("Cannot import python-ntlm module")
Exception: Cannot import python-ntlm module
Suds version
>>> import suds
>>> print suds.__version__
0.4
Answer: As @PauloAlmeida suggested in the comment, _python-ntlm_ is missing. To
install using pip just type into your OS shell:
pip install python-ntlm
On Linux you might need `sudo` before the command.
You can also download the package from <https://pypi.python.org/pypi/python-
ntlm>.
|
Best fit from a set of curves to data points
Question: I have a set of curves `F={f1, f2, f3,..., fN}`, each of them defined through
a set of points, i.e. I don't have the _explicit_ form of the functions. So I
have a set of `N` tables like so:
#f1: x y
1.2 0.5
0.6 5.6
0.3 1.2
...
#f2: x y
0.3 0.1
1.2 4.1
0.8 2.2
...
#fN: x y
0.7 0.3
0.3 1.1
0.1 0.4
...
I also have a set of observed/measured data points `O=[p1, p2, p3,..., pM]`
where each point has `x, y` coordinates and a given weight between `[0, 1]` ,
so it looks like:
#O: x y w
0.2 1.6 0.5
0.3 0.7 0.3
0.1 0.9 0.8
...
Since `N ~ 10000` (I have a big number of functions) what I'm looking for is
an efficient (more precisely: **fast**) way to find the curve that best fits
my set of observed and _weighted_ points `O`.
I know how to find a best fit with `python` when I have the explicit form of
the functions
([scipy.optimize.curve_fit](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html)),
but how do I do that when I have the functions defined as tables?
Answer: You need two elements in order to have a fit: the data (which you already have)
and a model space (linear models, Gaussian processes, support vector regression).
In your case your model has the additional constraint that some data points
should be weighted differently than others. Maybe something like this works
for you:
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    temp = np.asarray([10, 9.6, 9.3, 9.0, 8.7])
    height = np.asarray([129, 145, 167, 190, 213])
    f = UnivariateSpline(height, temp)
Now you can evaluate `f` wherever you want:
    test_points = np.arange(120, 213, 5)
    plot(height, temp, 'o', test_points, f(test_points), 'x')

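For the actual selection problem (picking the best of the `N` tabulated curves), one possible sketch, which is my own assumption rather than part of the answer above, is to treat each tabulated curve as piecewise linear with `np.interp` and choose the curve that minimises the weighted squared error at the observed points:

```python
import numpy as np

def best_curve(curves, obs_x, obs_y, obs_w):
    """Return (index, error) of the curve minimising the weighted
    squared error at the observed points.  Each curve is an (x, y)
    pair of 1-D arrays; np.interp needs x sorted ascending, so we
    sort each table first."""
    obs_x = np.asarray(obs_x)
    obs_y = np.asarray(obs_y)
    obs_w = np.asarray(obs_w)
    best_idx, best_err = None, float('inf')
    for i, (cx, cy) in enumerate(curves):
        order = np.argsort(cx)
        pred = np.interp(obs_x, cx[order], cy[order])
        err = float(np.sum(obs_w * (obs_y - pred) ** 2))
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx, best_err
```

With `N ~ 10000` curves and `M` observed points this is a single `O(N*M log M)` pass, which should be fast enough in practice.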
|
Python subprocess get output as process finishes
Question: Currently I am getting MAC addresses from devices via Bluetooth and I pass
these mac addresses one at a time to a method that calls a subprocess and
assigns the output to a variable. With that variable I run some filtering
functions to get a value from the command called in the subprocess. I then
return that value if it finds it from the output.
What I would like to do is pass all MAC addresses to the method at once and
run them all at one time. How do I capture the output of each process and run
my filtering script as the processes complete, and at the same time be notified
if a process fails or errors?
Here is the current method, which handles one MAC at a time. Let's assume now
that I am passing a list of MAC addresses.
def getchan(mac):
a = subprocess.Popen(["sdptool", "search","--bdaddr", mac, "OPUSH"], stdout = subprocess.PIPE).communicate()[0].split()[2:]
if a==[]:
print "ERROR"
return "ERROR"
else:
count = 0
for item in a:
if item == "Channel:":
return a[count + 1]
count += 1
return "Could not find the OPUSH channel"
It should look something like
def getchan(macs):
processes = set()
for mac in macs:
processes.add(subprocess.Popen(["sdptool", "search","--bdaddr", mac, "OPUSH"], stdout = subprocess.PIPE).communicate()[0].split()[2:])
#this is where I need the help
Thank you for taking a look. Any help or clarification of Subprocesses would
be much appreciated.
Answer:
import select
import subprocess
def in_parallel(processes):
pipes = {p.stdout.fileno(): (i, p.stdout) for i, p in enumerate(processes)}
poller = select.poll()
for fd, pipe in pipes.iteritems():
poller.register(fd, select.POLLIN)
outputs = [''] * len(processes)
while pipes:
active = poller.poll()
for fd, event in active:
idx, pipe = pipes[fd]
o = pipe.read()
if o:
outputs[idx] += o
else:
poller.unregister(fd)
pipe.close()
del pipes[fd]
for p in processes:
p.wait()
return outputs
args = ['a', 'b', 'c']
processes = [subprocess.Popen(['sleep 5; echo ' + arg], stdout=subprocess.PIPE, shell=True) for arg in args]
outputs = in_parallel(processes)
print outputs
* * *
$ time python test.py
['a\n', 'b\n', 'c\n']
real 0m5.042s
user 0m0.016s
sys 0m0.016s
|
Automating Login using python mechanize
Question: so this is my first time programming ever and I'm trying to automate logging
into a website using python/mechanize. So this is my code:
import mechanize
import cookielib
# Browser
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follows refresh 0 but not hangs on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# Want debugging messages?
br.set_debug_http(True)
br.set_debug_redirects(True)
br.set_debug_responses(True)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.add_password('http://newiso.accellion.net/w', 'USERNAME', 'PASSWORD')
br.open('http://newiso.accellion.net/w')
# Show the html title
print br.title()
# Show the response headers
print br.response().info()
# Show the available forms
for f in br.forms():
print f
br.form["g_username"] = "USERNAME"
br.form["password"] = "PASSWORD"
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://USERNAME:[email protected]/').read()
and I keep getting this error:
Traceback (most recent call last):
File "/Users/dancetrina/Documents/login.py", line 45, in <module>
br.form["g_username"] = "USERNAME"
TypeError: 'NoneType' object does not support item assignment
does that mean that I can't use mechanize to type in the username/password? Or
is there something I'm missing that would make it work? Thank you so much in
advance!
Answer: `br.form` is `None` until a form has been selected, which is why the item
assignment fails. In the last (and _only_ :-) ) `for` loop, `f` is already a
form object, so you should write:
    f["g_username"] = "USERNAME"
    f["password"] = "PASSWORD"
Alternatively, call `br.select_form(nr=0)` after opening the page; `br.form`
is then set and `br.form["g_username"] = "USERNAME"` works.
|
Grabbing the output from the terminal
Question: I need to run a process in the terminal to grab the output from it.
import subprocess
subprocess.check_output(["my_util", "some_file.txt", " | grep 'Some data1' | awk '{print $2}'"])
#or
subprocess.check_output(["my_util", "full_path/some_file.txt", "| grep 'Some data1'", "| awk '{print $2}'"])
And nothing happens in REPL, while running it in the terminal manually gives
me the proper output.
update:
the output from sublime text:
my_util FAIL formats: can't open input pipe `| grep 'Sample data1'': premature EOF
my_util FAIL formats: can't open input pipe `| awk '{print $2}'': premature EOF
Traceback (most recent call last):
File "/test1.py", line 4, in <module>
"| grep 'Sample data1'", "| awk '{print $2}'"])
File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '["my_util", "full_path/some_file.txt", "| grep 'Some data1'", "| awk '{print $2}'"]' returned non-zero exit status 2
Answer: `os.system` can be used instead of `subprocess`; it runs the command through a
shell, so the pipes are interpreted correctly. Note, however, that it only
returns the exit status; the pipeline's output goes to the terminal rather than
into a variable:
    import os
    os.system("my_util some_file.txt | grep 'Some data1' | awk '{print $2}'" )
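The original `subprocess` call failed because each `|` was passed as a literal argument to `my_util` instead of being interpreted by a shell. If you do want the output back in Python, one option is to hand the whole pipeline to the shell as a single string with `shell=True`. A sketch, using `cat` as a stand-in for `my_util` and assuming a POSIX shell with `grep`/`awk` available:

```python
import subprocess

def second_field(path, pattern):
    # The whole pipeline goes to the shell as one string, so the
    # pipes are real pipes rather than literal arguments.
    cmd = "cat %s | grep '%s' | awk '{print $2}'" % (path, pattern)
    return subprocess.check_output(cmd, shell=True).decode()
```

Keep in mind that interpolating untrusted input into a shell string is unsafe; for fixed filters like these it is the simplest route, otherwise do the filtering in Python itself.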
|
error with python sympy computing integral for cosine function
Question: So I was trying an example directly from the sympy documentation and I am
getting a strange error. I am using python 3.2 with sympy 0.7.3. I have been
working in the ipython notebook, though I don't think that should make a
difference. The error is that whenever I create a "x" symbol and try to
integrate the math.cos(x), I get an error saying "can't convert expression to
float."
Here is a code example. This is taken from the [sympy
documentation](http://docs.sympy.org/dev/modules/integrals/integrals.html).
import sympy
import math
x = sympy.Symbol('x')
sympy.integrate(x**2 * math.exp(x) * math.cos(x), x)
The error message that results is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-123-84e55454fb60> in <module>()
----> 1 sympy.integrate(x**2 * math.exp(x) * math.cos(x), x)
/usr/local/lib/python3.2/dist-packages/sympy/core/expr.py in __float__(self)
242 if result.is_number and result.as_real_imag()[1]:
243 raise TypeError("can't convert complex to float")
--> 244 raise TypeError("can't convert expression to float")
245
246 def __complex__(self):
TypeError: can't convert expression to float
Any suggestions would be appreciated.
Answer: You cannot mix the symbolic mathematical expressions created by the `sympy`
library with normal functions that just calculate a value (like the ones from
the `math` library. If you're creating a symbolic expression, you should
always use the `sympy` functions (`sympy.exp`, `sympy.cos`, `sympy.log`,
etc.):
x = sympy.Symbol('x')
sympy.integrate(x**2 * sympy.exp(x) * sympy.cos(x), x)
Operators such as `*`, `+`, `-`... Are overloaded by objects in the `sympy`
library so you can use them in your expressions, but you cannot use normal
functions that directly calculate values.
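If you eventually need a numeric value out of the symbolic result, stay within `sympy` until the very end and only then convert, e.g. with `subs`/`float`. A small sketch:

```python
import sympy

x = sympy.Symbol('x')
expr = sympy.integrate(x**2 * sympy.exp(x) * sympy.cos(x), x)

# Sanity check: differentiating the antiderivative recovers the integrand.
check = sympy.simplify(sympy.diff(expr, x) - x**2 * sympy.exp(x) * sympy.cos(x))

# Numeric evaluation happens only at the very end:
value = float(expr.subs(x, 1.0))
```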
|
Python IDLE becomes slow on very large program input
Question: Why does python idle become so slow when handling very large inputs, when the
python command line does not?
For example, if I run "aman"*10000000 in python IDLE, it becomes unresponsive,
but on python cmd line, it is quick.
Answer: I had to research a bit. When I invoked idle on my machine, I saw another
python process which uses `idlelib`
~$ ps -eaf | grep -in idle
234:1000 13122 1 5 16:44 ? 00:00:01 /usr/bin/python2.7 /usr/bin/idle-python2.7
235:1000 13124 13122 3 16:44 ? 00:00:01 /usr/bin/python2.7 -c __import__('idlelib.run').run.main(True) 60839
239:1000 13146 12061 0 16:44 pts/0 00:00:00 grep --color=auto -in idle
~$
The last parameter (60839) made me think. So I looked around for `idlelib` and
got the implementation here <https://github.com/pypy/pypy/blob/master/lib-
python/2.7/idlelib/run.py#L49> The comment there says
Start the Python execution server in a subprocess
In the Python subprocess, RPCServer is instantiated with handlerclass
MyHandler, which inherits register/unregister methods from RPCHandler via
the mix-in class SocketIO.
Now things were clear to me: IDLE sends commands to the Python interpreter
over a TCP connection. Still, I was not fully convinced, so I read the complete
`Help`->`About IDLE`->`README`. It says
> IDLE executes Python code in a separate process, which is restarted for each
> Run (F5) initiated from an editor window. The environment can also be
> restarted from the Shell window without restarting IDLE.
**Conclusion**
Given such a dependency (IDLE waiting for a response over a socket), the
delay you experienced is to be expected.
|
Python 2.7.2: plistlib with itunes xml
Question: I'm reading an itunes generated xml playlist with plistib. The xml has a utf8
header.
When I read the xml with plistib, I get both unicode (e.g., 'Name':
u'Don\u2019t You Remember') and byte strings (e.g., 'Name': 'Where Eagles
Dare').
Standard advice is to decode what you read with the correct encoding as soon
as possible and use unicode within the program. However,
unicode_string.decode('utf8')
fails (as it should) with
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 3: ordinal not in range(128)
The solution would seem to be:
for name in names:
if isinstance(name, str):
name = name.decode('utf8')
# etc.
Is this the correct way of dealing with the problem? Is there a better way?
I'm on windows 7.
EDIT:
xml read with:
import plistlib
xml = plistlb.readPlist(fn)
for track in xml['Tracks']:
info = xml['Tracks'][track]
info['Name']
Produces in idle:
u'Don\u2019t You Remember'
'Where Eagles Dare'
Here's the xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Major Version</key><integer>1</integer>
<key>Minor Version</key><integer>1</integer>
<key>Date</key><date>2013-08-14T15:04:27Z</date>
<key>Application Version</key><string>10.6.3</string>
<key>Features</key><integer>5</integer>
<key>Show Content Ratings</key><true/>
<key>Music Folder</key><string>file://localhost/C:/Users/rdp/Music/iTunes/iTunes%20Media/</string>
<key>Library Persistent ID</key><string>FE28CCACD9A36C34</string>
<key>Tracks</key>
<dict>
<key>1019</key>
<dict>
<key>Track ID</key><integer>1019</integer>
<key>Name</key><string>Where Eagles Dare</string>
<key>Artist</key><string>Iron Maiden</string>
<key>Album</key><string>Piece Of Mind</string>
<key>Genre</key><string>Rock</string>
<key>Kind</key><string>MPEG audio file</string>
<key>Size</key><integer>7372755</integer>
<key>Total Time</key><integer>370128</integer>
<key>Track Number</key><integer>1</integer>
<key>Year</key><integer>1983</integer>
<key>Date Modified</key><date>2009-10-07T21:11:31Z</date>
<key>Date Added</key><date>2008-02-07T16:04:15Z</date>
<key>Bit Rate</key><integer>153</integer>
<key>Sample Rate</key><integer>44100</integer>
<key>Play Count</key><integer>4</integer>
<key>Play Date</key><integer>3414416760</integer>
<key>Play Date UTC</key><date>2012-03-12T21:06:00Z</date>
<key>Artwork Count</key><integer>1</integer>
<key>Persistent ID</key><string>FE28CCACD9A383E5</string>
<key>Track Type</key><string>File</string>
<key>Location</key><string>file://localhost/D:/music/Iron%20Maiden/Piece%20Of%20Mind/01%20Where%20Eagles%20Dare.mp3</string>
<key>File Folder Count</key><integer>-1</integer>
<key>Library Folder Count</key><integer>-1</integer>
</dict>
<key>11559</key>
<dict>
<key>Track ID</key><integer>11559</integer>
<key>Name</key><string>Don’t You Remember</string>
<key>Artist</key><string>Adele</string>
<key>Album</key><string>21</string>
<key>Genre</key><string>Pop</string>
<key>Kind</key><string>MPEG audio file</string>
<key>Size</key><integer>6120028</integer>
<key>Total Time</key><integer>229511</integer>
<key>Track Number</key><integer>4</integer>
<key>Track Count</key><integer>11</integer>
<key>Year</key><integer>2011</integer>
<key>Date Modified</key><date>2012-11-17T10:50:31Z</date>
<key>Date Added</key><date>2012-12-19T16:03:46Z</date>
<key>Bit Rate</key><integer>199</integer>
<key>Sample Rate</key><integer>44100</integer>
<key>Artwork Count</key><integer>1</integer>
<key>Persistent ID</key><string>7130C888606FB153</string>
<key>Track Type</key><string>File</string>
<key>Location</key><string>file://localhost/D:/music/Adele/21/04%20-%20Don%E2%80%99t%20You%20Remember.mp3</string>
<key>File Folder Count</key><integer>-1</integer>
<key>Library Folder Count</key><integer>-1</integer>
</dict>
</dict>
<key>Playlists</key>
<array>
<dict>
<key>Name</key><string>short</string>
<key>Playlist ID</key><integer>30888</integer>
<key>Playlist Persistent ID</key><string>166746C6572B0005</string>
<key>All Items</key><true/>
<key>Playlist Items</key>
<array>
<dict>
<key>Track ID</key><integer>11559</integer>
</dict>
<dict>
<key>Track ID</key><integer>1019</integer>
</dict>
</array>
</dict>
</array>
</dict>
</plist>
Answer: Wow this is a really weird behaviour. I would even say that this non-uniform
behaviour is a bug in the 2.X implementation of the `plistlib`. The `plistlib`
in Python 3 always returns unicode strings which is much better.
But you have to live with it :) So the answer to your question is yes. You
should protect yourself always when reading a string from a `plist`
def safe_unicode(s):
if isinstance(s, unicode):
return s
    return s.decode('utf-8', 'replace')
value = safe_unicode(info['Name'])
I added the `'replace'` error handler just in case the string is not `utf-8`
encoded. You'll get a bunch of `\ufffd` characters if it cannot be decoded. If
you'd rather get an exception, just leave it out and use `s.decode('utf-8')`.
_Update:_
When I tried with ElementTree:
from xml.etree import ElementTree as et
tree = et.parse('test.plist')
map(lambda x: x.text, tree.findall('dict/dict/dict')[1].findall('string'))
Which gave me:
[u'Don\u2019t You Remember',
'Adele',
'21',
'Pop',
'MPEG audio file',
'7130C888606FB153',
'File',
'file://localhost/D:/music/Adele/21/04%20-%20Don%E2%80%99t%20You%20Remember.mp3']
So there are unicode and byte string mixed :-/
|
Cannot convert array to floats python
Question: I'm having a problem that seems like the answer would be easily explained. I'm
struggling to convert my array elements to floats (so that I can multiply, add
on them etc)
import csv
import os
import glob
import numpy as np
def get_data(filename):
with open(filename, 'r') as f:
reader = csv.reader(f)
return list(reader)
all_data = []
path=raw_input('What is the directory?')
for infile in glob.glob(os.path.join(path, '*.csv')):
all_data.extend(get_data(infile))
a = np.array(all_data)
current_track_data=a[0:,[8,9,10,11,12]]
abs_track_data=a[0:,7]
and I get the error:
> --------------------------------------------------------------------------- ValueError Traceback (most recent call last) C:\Users\AClayton\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.3.1262.win-x86_64\lib\site-packages\IPython\utils\py3compat.pyc in execfile(fname, glob, loc)
174 else:
175 filename = fname
--> 176 exec compile(scripttext, filename, 'exec') in glob, loc
177 else:
178 def execfile(fname, *where):
>
> C:\Users\AClayton\Current\python begin\code_tester2.py in <module>()
> 18 for infile in glob.glob(os.path.join(path, '*.csv')): # performs loop for each file in the specified path with extension .csv
> 19 all_data.extend(get_data(infile))
> ---> 20 a = np.ndarray(all_data, dtype=float)
> 21
> 22 current_track_data=a[0:,[8,9,10,11,12]]
>
> ValueError: sequence too large; must be smaller than 32
Answer: Your script is not the same as the code you've posted... As the traceback of
your error shows, in line 20 you are calling `np.ndarray`. That's the [numpy
array
object](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html),
not the `np.array` [factory
function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html).
Unless you know very well what you are doing, follow the docs advice and:
> Arrays should be constructed using `array`, `zeros` or `empty` (refer to the
> See Also section below). The parameters given here refer to a low-level
> method (`ndarray(...)`) for instantiating an array.
So change your line #20 to:
a = np.array(all_data, dtype=float)
and you should be fine.
The error you are getting comes about because `ndarray` takes your first input
as the shape of the array to be created. There is a hardcoded limit on the
number of dimensions, set to 32 on my Windows system (it may be platform
dependent, I'm not sure). Your `all_data` list has more than 32 entries (or
whatever that limit is on your system), which get misinterpreted as sizes of
dimensions, and that's what triggers the error.
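To see the difference concretely: `csv.reader` hands you rows of strings, and `np.array` with `dtype=float` parses them element-wise in one step. A minimal sketch:

```python
import numpy as np

# csv.reader yields rows of strings; dtype=float converts each entry
all_data = [['1.0', '2.5'], ['3.0', '4.5']]
a = np.array(all_data, dtype=float)
```

After this, slicing such as `a[0:, [8, 9, 10, 11, 12]]` operates on real floats, so arithmetic works as expected.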
|
Are .pyds decompilable? - Python
Question: I was wondering, if I export my game as an .exe and all the extra material is
imported, turning it into a .pyd - can you decompile the .pyds? Thanks!
Thank you, people, I got my answer. For any future people who need help: .pyd
files are just shared libraries, and any tools that allow people to inspect or
disassemble a shared library will work on them as well. For more info: [How
hard to reverse engineer .pyd
files?](http://stackoverflow.com/questions/12075042/how-hard-to-reverse-
engineer-pyd-files)
Answer: In simple terms, `.pyd` files are just shared libraries (`.dll`s on Windows),
with a different name. Any tools that allow people to inspect or disassemble a
shared library will work on them as well.
|
Python: How do you iterate over a list of filenames and import them?
Question: Suppose I have a folder called "Files" which contains a number of different
python files.
path = "C:\Python27\Files"
os.chdir(path)
filelist = os.listdir(path)
print(filelist)
This gives me a list containing the names of all of the python files in the
folder "Files".
I want to import each one of these files into a larger python program, one at
a time, as a module. What is the best way to do this?
Answer: Add an `__init__.py` to the folder and you can import the files as `from Files import *`
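If you need to import the modules programmatically, one at a time by filename, rather than with a star import, `importlib` can do it. A sketch (assuming the folder contains importable modules with no clashing names):

```python
import importlib
import os
import sys

def import_all(path):
    """Import every .py file in *path*, returning {name: module}."""
    sys.path.insert(0, path)  # make the folder importable
    modules = {}
    for fname in sorted(os.listdir(path)):
        if fname.endswith('.py') and fname != '__init__.py':
            name = fname[:-3]  # strip the .py extension
            modules[name] = importlib.import_module(name)
    return modules
```

The returned dict lets the larger program look modules up by name, e.g. `modules['utils'].some_function()`.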
|
need to compute change in time between start time and end time.(Python)
Question: I need to write a program that accepts a start time and end time and computes
the change between them in minutes. For example, the start time is 4:30 PM and
end time is 9:15 PM then the change in time is 285 min. How do I accomplish
this in Python? I only need to compute for a 24 hour period
Answer: Here's what you can do:
from datetime import datetime
def compute_time(start_time, end_time):
start_datetime = datetime.strptime(start_time, '%I:%M %p')
end_datetime = datetime.strptime(end_time, '%I:%M %p')
return (start_datetime - end_datetime).seconds / 60
print compute_time('9:15 PM', '4:30 PM')
prints `285`.
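A variant with the arguments in the natural order, showing that the non-negative `timedelta.seconds` trick also covers intervals that cross midnight within the stated 24-hour window (a sketch, not part of the answer above):

```python
from datetime import datetime

def minutes_between(start_time, end_time):
    fmt = '%I:%M %p'
    delta = datetime.strptime(end_time, fmt) - datetime.strptime(start_time, fmt)
    # .seconds folds a negative delta into the 0..86399 range,
    # which is exactly the wrap-around we want for a 24 h clock.
    return delta.seconds // 60
```

For example, `minutes_between('11:00 PM', '1:00 AM')` gives 120, because the negative delta wraps around the day boundary.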
|
Rearrange a list of points to reach the shortest distance between them
Question: I have a list of 2D points for example:
1,1 2,2 1,3 4,5 2,1
The distance between these points is known (using math.hypot for example.) I
want to sort the list so that there is a minimum distance between them. I'm OK
with any possible solution order, as long as the points are in the shortest
order.
What is the most pythonic way to achieve this?
I was considering working out the distance between any item and any other
item, and choosing the smallest each time, but this would be a slow algorithm
on the lists I am working on (1,000 items would not be unusual.)
Answer: The technical question you're asking is close to "What is the [minimum
hamiltonian path](http://en.wikipedia.org/wiki/Hamiltonian_path) of a graph?"
(your tuples are vertices, and the distances between them are the edge
weights). The problem is NP-hard, so no polynomial-time algorithm is known and
your dataset had better be small. Since your graph is complete (all nodes are
connected), a hamiltonian path always exists; the hard part is finding the
shortest one.
In any case, the answer below uses brute force. It permutes all possible
paths, calculates the distance of each path, and then gets the minimum.
import itertools as it
import math

def dist(x, y):
    return math.hypot(y[0] - x[0], y[1] - x[1])

paths = list(it.permutations([(1, 2), (2, 3), (5, 6), (3, 4)]))
path_distances = [sum(dist(a, b) for a, b in zip(p[:-1], p[1:])) for p in paths]
# plain-Python argmin: index of the smallest total distance
min_index = min(range(len(path_distances)), key=path_distances.__getitem__)
print paths[min_index], path_distances[min_index]
Output:
((1, 2), (2, 3), (3, 4), (5, 6)) 5.65685424949
Note that the reverse path is an equivalent minimum
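For the ~1,000-point lists mentioned in the question, brute force is hopeless (1000! permutations). A common compromise is the greedy nearest-neighbor heuristic: repeatedly visit the closest unvisited point. It runs in O(n²) and usually gives a short, though not necessarily shortest, ordering. A sketch:

```python
import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def nearest_neighbor_order(points):
    """Greedy heuristic: start anywhere, always hop to the closest
    unvisited point. O(n^2); not guaranteed optimal."""
    remaining = list(points)
    path = [remaining.pop(0)]  # arbitrary starting point
    while remaining:
        last = path[-1]
        nxt = min(remaining, key=lambda p: dist(last, p))
        remaining.remove(nxt)
        path.append(nxt)
    return path
```

Running it on the question's points, `nearest_neighbor_order([(1, 1), (4, 5), (2, 2), (1, 3), (2, 1)])`, gives `[(1, 1), (2, 1), (2, 2), (1, 3), (4, 5)]`.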
|
words as y-values in pyplot/matplotlib
Question: I am trying to learn how to use pylab (along with the rest of its tools). I'm
currently trying to understand pyplot, but I need to create a very specific
type of plot. It's basically a line plot with words instead of numbers on the
y-axis.
Something like this:
hello | +---+
world | +---------+
+---|---|---|---|---|-->
0 1 2 3 4 5
How would I do that with any python graphic library? Bonus points if you show
me how to with pyplot or pylab suite libraries.
Thanks! Chmod
Answer: I added all the explanations into the code:
# Import the things you need
import numpy as np
import matplotlib.pyplot as plt
# Create a matplotlib figure
fig, ax = plt.subplots()
# Create values for the x axis from -pi to pi
x = np.linspace(-np.pi, np.pi, 100)
# Calculate the values on the y axis (just a raised sin function)
y = np.sin(x) + 1
# Plot it
ax.plot(x, y)
# Select the numeric values on the y-axis where you would
# you like your labels to be placed
ax.set_yticks([0, 0.5, 1, 1.5, 2])
# Set your label values (strings). The number of labels
# should be the same as the number of ticks you created in
# the previous step. See @nordev's comment
ax.set_yticklabels(['foo', 'bar', 'baz', 'boo', 'bam'])
That's it...

Or if you don't need the subplots just:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x) + 1
plt.plot(x, y)
plt.yticks([0, 0.5, 1, 1.5, 2], ['foo', 'bar', 'baz', 'boo', 'bam'])
This is just a shorter version of doing the same thing if you don't need a
figure and subplots.
|
displaying calendar items closest to today using datetime
Question: I have a dictionary of my calendar items for a month (_date_ as "key", _items_
in the form of a list as "value") that I want to print out a certain way (That
dictionary in included in the code, assigned to `dct`). I only want to display
items that are **on** or **after** the current date (i.e. today). The display
format is:
day: item1, item2
I also want those items to span only 5 lines of stdout with each line 49
characters wide (spaces included). This is necessary because the output will
be displayed in conky (app for linux).
Since a day can have multiple agenda items, the output will have to be wrapped
and printed out on more than one line. I want the code to account for that by
selecting only those days whose items can fit in 5 or less lines instead of
printing 5 days with associated items on >5 lines. For e.g.
day1: item1, item2
item3
day2: item1
day3: item1,
item2
That's 3 days on/after the current day printed on 5 lines, each line 49
characters wide. Strings exceeding 49 characters are wrapped onto a new line.
Here is the code i've written to do this:
#!/usr/bin/env python
from datetime import date, timedelta, datetime
import heapq
import re
import textwrap
pattern_string = '(1[012]|[1-9]):[0-5][0-9](\\s)?(?i)(am|pm)'
pattern = re.compile(pattern_string)
# Explanation of pattern_string:
# ------------------------------
#( #start of group #1
#1[012] # start with 10, 11, 12
#| # or
#[1-9] # start with 1,2,...9
#) #end of group #1
#: # followed by a colon (:)
#[0-5][0-9] # followed by 0..5 and 0..9, which means 00 to 59
#(\\s)? # followed by optional whitespace
#(?i) # next check is case insensitive
#(am|pm) # followed by am or pm
# The 12-hour clock format runs from 1-12, then a colon (:), followed by 00-59, and ends with am or pm.
# Time format that match:
# 1. "1:00am", "1:00 am","1:00 AM" ,
# 2. "1:00pm", "1:00 pm", "1:00 PM",
# 3. "12:50 pm"
d = date.today() # datetime.date(2013, 8, 11)
e = datetime.today() # datetime.datetime(2013, 8, 11, 5, 56, 28, 702926)
today = d.strftime('%a %b %d') # 'Sun Aug 11'
dct = {
'Thu Aug 01' : [' Weigh In'],
'Thu Aug 08' : [' 8:00am', 'Serum uric acid test', '12:00pm', 'Make Cheesecake'],
'Sun Aug 11' : [" Awais chotu's birthday", ' Car wash'],
'Mon Aug 12' : ['10:00am', 'Start car for 10 minutes'],
'Thu Aug 15' : [" Hooray! You're Facebook Free!", '10:00am', 'Start car for 10 minutes'],
'Mon Aug 19' : ['10:00am', 'Start car for 10 minutes'],
'Thu Aug 22' : ['10:00am', 'Start car for 10 minutes'],
'Mon Aug 26' : ['10:00am', 'Start car for 10 minutes'],
'Thu Aug 29' : ['10:00am', 'Start car for 10 minutes']
}
def join_time(lst):
'''Searches for a time format string in supplied list and concatenates it + the event next to it as an single item
to a list and returns that list'''
mod_lst = []
for number, item in enumerate(lst):
if re.search(pattern, item):
mod_lst.append(item + ' ' + lst[number+1]) # append the item (i.e time e.g '1:00am') and the item next to it (i.e. event)
del lst[number+1]
else:
mod_lst.append(item)
return mod_lst
def parse_date(datestring):
return datetime.strptime(datestring + ' ' + str(date.today().year), "%a %b %d %Y") # returns a datetime obj for the time string; "Sun Aug 11" = datetime.datetime(1900, 8, 11, 0, 0)
deltas = [] # holds datetime.timedelta() objs; timedelta(days, seconds, microseconds)
val_len = []
key_len = {}
for key in dct:
num = len(''.join(item for item in dct[key]))
val_len.append(num) # calculate the combined len of all items in the
# list which are the val of a key and add them to val_len
if num > 37:
key_len[key] = 2
else:
key_len[key] = 1
# val_len = [31, 9, 61, 31, 31, 49, 31, 32, 31]
# key_len = {'Sun Aug 11': 1, 'Mon Aug 12': 1, 'Thu Aug 01': 1, 'Thu Aug 15': 2, 'Thu Aug 22': 1, 'Mon Aug 19': 1, 'Thu Aug 08': 2, 'Mon Aug 26': 1, 'Thu Aug 29': 1}
counter = 0
for eachLen in val_len:
if eachLen > 37:
counter = counter + 2
else:
counter = counter + 1
# counter = 11
if counter > 5: # because we want only those 5 events in our conky output which are closest to today
n = counter - 5 # n = 6, these no of event lines should be skipped
for key in dct:
deltas.append(e - parse_date(key)) # today - key date (e.g. 'Sun Aug 11') ---> datetime.datetime(2013, 8, 11, 5, 56, 28, 702926) - datetime.datetime(1900, 8, 11, 0, 0)
# TODO: 'n' no of event lines should be skipped, NOT n no of days!
for key in sorted(dct, key=parse_date): # sorted() returns ['Thu Aug 01', 'Thu Aug 08', 'Sun Aug 11', 'Mon Aug 12', 'Thu Aug 15', 'Mon Aug 19', 'Thu Aug 22', 'Mon Aug 26', 'Thu Aug 29']
tdelta = e - parse_date(key)
if tdelta in heapq.nlargest(n, deltas): # heapq.nlargest(x, iterable[, key]); returns list of 'x' no. of largest items in iterable
pass # In this case it should return a list of top 6 largest timedeltas; if the tdelta is in
# that list, it means its not amongst the 5 events we want to print
else:
if key == today:
value = dct[key]
val1 = '${color green}' + key + '$color: '
mod_val = join_time(value)
val2 = textwrap.wrap(', '.join(item for item in mod_val), 37)
print val1 + '${color 40E0D0}' + '$color\n ${color 40E0D0}'.join(item for item in val2) + '$color'
else:
value = dct[key]
mod_val = join_time(value)
output = key + ': ' + ', '.join(item for item in mod_val)
print '\n '.join(textwrap.wrap(output, 49))
else:
for key in sorted(dct, key=parse_date):
if key == today:
value = dct[key]
val1 = '${color green}' + key + '$color: '
mod_val = join_time(value)
val2 = textwrap.wrap(', '.join(item for item in mod_val), 37)
print val1 + '${color 40E0D0}' + '$color\n ${color 40E0D0}'.join(item for item in val2) + '$color'
else:
value = dct[key]
mod_val = join_time(value)
output = key + ': ' + ', '.join(item for item in mod_val)
print '\n '.join(textwrap.wrap(output, 49))
The result is:
Thu Aug 22: 10:00am Start car for 10 minutes
Mon Aug 26: 10:00am Start car for 10 minutes
Thu Aug 29: 10:00am Start car for 10 minutes
I've commented the code heavily so it shouldn't be difficult to figure out how
it works. I'm basically calculating the days farthest away from current day
using datetime and skipping those days and their items. The code usually works
well but once in a while it doesn't. In this case the output should have been:
Mon Aug 19: 10:00am Start car for 10 minutes
Thu Aug 22: 10:00am Start car for 10 minutes
Mon Aug 26: 10:00am Start car for 10 minutes
Thu Aug 29: 10:00am Start car for 10 minutes
since these are the days after the current day (Fri 16 Aug) whose items fit in
5 lines. _How do I fix it to skip`n` no of lines rather than no of days
farthest away from today?_
I was thinking of using `key_len` dict to somehow filter the output further,
by printing the items of only **those** days whose items length sum up to < or
= 5...
I'm stuck.
Answer: It's very hard to tell what you're asking here, and your code is a huge
muddle.
However, the reason you're getting the wrong output in the given example is
very obvious, and matches the `TODO` comment in your code, so I'm going to
assume that's the only part you're asking about:
# TODO: 'n' no of event lines should be skipped, NOT n no of days!
I don't understand why you want to skip to the _last_ 5 lines after today
instead of the first 5, but I'll assume you have some good reason for that.
The easiest way to solve this is to just do them in reverse, prepend the lines
to a string instead of `print`ing them directly, stop when you've reached 5
lines, and then print the string. (This would also save the wasteful re-
building of the heap over and over, etc.)
For example, something like this:
outlines = []
for key in sorted(dct, key=parse_date, reverse=True): # sorted() returns ['Thu Aug 01', 'Thu Aug 08', 'Sun Aug 11', 'Mon Aug 12', 'Thu Aug 15', 'Mon Aug 19', 'Thu Aug 22', 'Mon Aug 26', 'Thu Aug 29']
if parse_date(key) < parse_date(today):
break
tdelta = e - parse_date(key)
if key == today:
value = dct[key]
val1 = '${color green}' + key + '$color: '
mod_val = join_time(value)
val2 = textwrap.wrap(', '.join(item for item in mod_val), 37)
outstr = val1 + '${color 40E0D0}' + '$color\n ${color 40E0D0}'.join(item for item in val2) + '$color'
outlines[:0] = outstr.splitlines()
else:
value = dct[key]
mod_val = join_time(value)
output = key + ': ' + ', '.join(item for item in mod_val)
outstr = '\n '.join(textwrap.wrap(output, 49))
outlines[:0] = outstr.splitlines()
if len(outlines) >= 5:
break
print '\n'.join(outlines)
There are a lot of ways you could simplify this. For example, instead of
passing around string representations of dates and using `parse_date` all over
the place, just pass around dates, and format them once at the end. Use string
formatting instead of 120-character multiple-concatenation expressions. Build
your data structures once and use them, instead of rebuilding them over and
over where you need them. And so on. But this should be all you need to get it
to work.
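For example, the "pass around dates, format once at the end" suggestion might look like this (a sketch; like the original `parse_date`, it assumes every key falls in the current year unless one is given):

```python
from datetime import datetime, date

def parse_date(datestring, year=None):
    """Turn a key like 'Sun Aug 11' into a real date object."""
    year = year or date.today().year
    return datetime.strptime('%s %d' % (datestring, year),
                             '%a %b %d %Y').date()

# Sort once by real dates; format back to strings only when printing
keys = ['Thu Aug 29', 'Thu Aug 01', 'Sun Aug 11']
ordered = sorted(keys, key=parse_date)
```

Working with `date` objects also makes "on or after today" a plain comparison, `parse_date(key) >= date.today()`, instead of timedelta arithmetic against a heap.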
|
Stream Json with python localy
Question: I would like to stream a JSON locally with python (so as another program read
it). Is there any package that streams in a clean way the json in a local
address? (as I used print but instead of the terminal, a local url).
Thanks
Answer: This should do it:
import SocketServer
import json
class Server(SocketServer.ThreadingTCPServer):
allow_reuse_address = True
class Handler(SocketServer.BaseRequestHandler):
def handle(self):
self.request.sendall(json.dumps({'id': 3000}))  # your JSON
server = Server(('127.0.0.1', 50009), Handler)
server.serve_forever()
Test with:
~ ᐅ curl 127.0.0.1:50009
{"id": 3000}
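You can also test it from Python itself. This sketch runs the server on an ephemeral port in a background thread and reads the JSON back over a socket (the module is named `socketserver` in Python 3):

```python
import json
import socket
import threading
try:
    import socketserver as SocketServer  # Python 3 name
except ImportError:
    import SocketServer                  # Python 2 name

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.request.sendall(json.dumps({'id': 3000}).encode('utf-8'))

# Port 0 asks the OS for any free port
server = SocketServer.ThreadingTCPServer(('127.0.0.1', 0), Handler)
t = threading.Thread(target=server.serve_forever)
t.daemon = True
t.start()

# Client side: connect, read until EOF, decode the JSON
sock = socket.create_connection(server.server_address)
chunks = []
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    chunks.append(chunk)
sock.close()
server.shutdown()
obj = json.loads(b''.join(chunks).decode('utf-8'))
```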
|
How to pass an array of integers as a parameter from javascript to python?
Question: I have a javascript code that obtains the value of several checked boxes and
inserts their value (integers) into an array:
var my_list = $('.item:checked').map(function(){return $(this).attr('name');}).get();
I want to pass this javascript array to a python function as a paramter. This
is how I am doing it right now, with ajax and jQuery:
$.ajax({
url : "{{tg.url('/foo/bar')}}",
data : {
my_list: JSON.stringify(my_list),
...
},
On the python side, I have this code, to get the parameters:
item_list = get_paramw(kw, 'my_list', unicode)
This works, but I am not receiving the array as an array of integers, but as a
single string containing the "[", "]", "," and """ symbols, which I would have
to parse and isn't the most elegant way of dealing with this whole situation I
think.
How should it be done to receive a javascript array of integers as an array of
integers in python (not as a string)?
Answer: The easiest way to send simple arbitrarily-structured data from Javascript to
Python is JSON. You've already got the Javascript side of it, in that
`JSON.stringify(my_list)`. All you need is the Python side. And it's one line
of code:
item_list_json = get_paramw(kw, 'my_list', unicode)
item_list = json.loads(item_list_json)
If `item_list` in your JS code were the Javascript array of numbers `[1, 2, 3,
4]`, `item_list` in your Python code would be the Python list of ints `[1, 2,
3, 4]`.
(Well, you also have to `import json` at the top of your script, so I guess
it's two lines.)
|
How to properly import a library (?) in Python
Question: I've been trying to use the tldextract library available here.
After many attempts, I was able to get it installed. However, now when it
comes to run the main file, the compiler says that it can't find any reference
to my library. Below the code I used and that raise the exception.
import tldextract
I appreciate this is a very basic question and it is not totally connected
with the library I'm trying to use, but I wonder if you can point me in the
right direction on how to "link" it or make sure the compiler knows that I
have that library.
As far as I understand, as long as a library is available in the site-packages
folder, that should solve the problem.
In my circumstance the file is at
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-
packages/tldextract
So in theory this should be ok, but I get the following error when I try to
use it.
Traceback (most recent call last):
File "test.py", line 12, in <module>
import tldexport
ImportError: No module named tldexport
I hope this question doesn't make you upset for it's simplicity. I'm here to
learn after all.
Thanks
Answer: Based on the traceback, the file test.py is importing a module named `tldexport`.
If that's a real dependency, install it.
If it's a typo for **tldextract**, then change it :)
|
Django ModelForm not saving data to database
Question: A Django beginner here having a lot of trouble getting forms working. Yes I've
worked through the tutorial and browsed the web a lot - what I have is mix of
what I'm finding here and at other sites. I'm using Python 2.7 and Django 1.5.
(although the official documentation is extensive it tends to assume you know
most of it already - not good as a beginners reference or even an advanced
tutorial)
I am trying to create a form for "extended" user details - eg. company name,
street address, etc but the form data is not being saved to the database.
Initially I tried to create a model extending the standard `User` model, but I
gave up on that - too much needed modifying and it was getting into nitty
gritty that was way beyond me at this stage. So instead I have created a new
model called `UserProfile`:
class UserProfile(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, unique=True)
company = models.CharField(max_length=50, blank=True)
address1 = models.CharField(max_length=50, blank=True)
address2 = models.CharField(max_length=50, blank=True)
city = models.CharField(max_length=20, blank=True)
region = models.CharField(max_length=20, blank=True)
postcode = models.CharField(max_length=10, blank=True)
country = models.CharField(max_length=20, blank=True)
phone = models.CharField(max_length=30, blank = True)
I have seen different references online as to whether I should link to the
User model with a `ForeignKey` (as above) or with a OneToOne.
I am trying to use a ModelForm (keep it simple for what should be a simple
form). Here is my forms.py
from django.forms import ModelForm
from .models import UserProfile
class UserDetailsForm(ModelForm):
class Meta:
model = UserProfile
fields = ['company','address1','address2','city','region', 'postcode','country','phone']
Here is my view:
def UserDetailsView(request):
#f = 0
if request.method == 'POST':
f = UserDetailsForm(request.POST, instance = request.user)
if f.is_valid():
f.save()
else:
f = UserDetailsForm(request.POST , instance = request.user)
print "UserDetails objects: ", (UserProfile.objects.all())
return render_to_response('plagweb/console/profile.html',
{ 'form': f},
context_instance=RequestContext(request))
(yes there is an inconsistency in UserProfile vs UserDetail - this is a
product of all my hacking and will be fixed once I get it working)
Diagnostics show `f.is_valid()` returning True. Similarly, diagnostics show
`UserProfile.objects.all()` as being empty. I tried this in the above view
after the save(), and also at the Django console.
Here is my template:
<form method="POST" action="">
<table>{{ form }}</table>
<input type="submit" value="Update" />
{% csrf_token %}
</form>
At the moment the main problem is that form data is not being saved to the
database. I do not know if is being read yet or not (once I have some data in
the database...)
One thought is that the `User` relationship might be causing a problem?
* * *
_**Addenda, following on from Daniel Roseman's useful comment/help:_**
Now the form is saving correctly (confirmed with diagnostics and command line
checks. However when I go back to the form, it is not displaying the existing
data. A form with empty data fields is displayed. As far as I can tell, I'm
passing the instance data correctly.
Is there a ModelForm setting that needs to change?
Here's the modified View:
def UserDetailsView(request):
#print "request co:", request.user.profile.company
f = UserDetailsForm(request.POST, instance = request.user.profile )
if request.method == 'POST':
if f.is_valid():
profile = f.save(commit=False)
profile.user = request.user
profile.save()
return render_to_response('plagweb/console/profile.html',
{ 'form': f},
context_instance=RequestContext(request))
The diagnostics show that `request.user.profile` is set correctly
(specifically the company field). However the form is displayed with empty
fields. The HTML source doesn't show any data values, either. As an extra
check, I also tried some template diagnostics:
<table border='1'>
{% for field in form%}
<tr>
<td>{{field.label}}</td>
<td>{{field.value}}</td>
</tr>
{% endfor%}
</table>
This lists the field labels correctly, but the values are all reported as
`None`.
For completeness, the UserProfile's user field is now defined as `user =
models.OneToOneField(settings.AUTH_USER_MODEL)` and the User.profile lambda
expression is unchanged.
Answer: I'm not sure exactly what the problem is here, but one issue certainly is that
you are passing `instance=request.user` when instantiating the form. That's
definitely wrong: `request.user` is an instance of User, whereas the form is
based on `UserProfile`.
Instead, you want to do something like this:
f = UserDetailsForm(request.POST)
if f.is_valid():
profile = f.save(commit=False)
profile.user = request.user
profile.save()
As regards ForeignKey vs OneToOne, you should absolutely be using OneToOne.
The advice on using ForeignKeys was given for a while in the run-up to the
release of Django 1 - almost five years ago now - because the original
implementation of OneToOne was poor and it was due to be rewritten. You
certainly won't find any up-to-date documentation advising the use of FK here.
**Edit after comments** The problem now is that you're instantiating the form
with the first parameter, data, even when the request is not a POST and
therefore `request.POST` is an empty dictionary. But the data parameter, even
if it's empty, takes precedence over whatever is passed as initial data.
You should go back to the original pattern of instantiating the form within
the `if` statement - but be sure not to pass `request.POST` when doing it in
the `else` clause.
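Putting the two corrections together, the view might look like this (a sketch, not tested against the asker's project; it assumes `request.user.profile` resolves to the `UserProfile` instance):

```python
def UserDetailsView(request):
    if request.method == 'POST':
        # Bind POST data only on an actual POST
        f = UserDetailsForm(request.POST, instance=request.user.profile)
        if f.is_valid():
            profile = f.save(commit=False)
            profile.user = request.user
            profile.save()
    else:
        # No data argument here, so the instance supplies the initial values
        f = UserDetailsForm(instance=request.user.profile)
    return render_to_response('plagweb/console/profile.html',
                              {'form': f},
                              context_instance=RequestContext(request))
```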
|
python Hiding raw_input
Question: so this is my code and i want to hide my password, but i dont know how. i have
looked around and none of them seem to fit in my coding, this is the current
coding. i mean i have seen show="*" and also getpass but i dont know how to
place them into this coding. im using python 2.7.3 and im coding on a
raspberry pi.
ans = True
while ans:
print("""
-------------
| 1. Shutdown |
| 2. Items |
-------------
""")
ans=raw_input("""
Please Enter A Number: """)
if ans == "1":
exit()
elif ans == "2":
pa=raw_input("""
Please Enter Password: """)
if pa == "zombiekiller":
print("""
----------------
| 1. Pi password |
| 2. Shutdown |
----------------
""")
pe=raw_input ("""
Please Enter A Number: """)
if pe == "1":
print ("""
Pi's Password Is Adminofpi""")
import time
time.sleep(1)
exit()
elif pe == "2":
exit()
else:
print("""
You Have Entered An Inccoredt Option. Terminating Programm""")
import time
time.sleep(1)
exit()
else:
print("""
You Have Entered An Inccorect Password. Terminating Programm""")
import time
time.sleep(1)
exit()
Answer: `getpass` hides the input; just replace `raw_input` with `getpass.getpass` after
importing the `getpass` module, like this:
import getpass
.
.
.
pa = getpass.getpass('Please Enter Password: ')
|
Python module for HTTP: fill in forms, retrieve result
Question: I'd like to use Python to access an HTTP website, fill out a form, submit the
form, and retrieve the result.
What modules are suitable for the task?
Answer: We cannot advise you with detailed instructions since you never gave us
details of your problem. However, most probably you want to use urllib2 to
fetch an HTML page:
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
You should then parse the form, find out all the data fields you need to send
with their names , and then create your own POST or GET request, depending on
the form type.
To send a POST request:
import urllib
import urllib2
url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
'location' : 'Northampton',
'language' : 'Python' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
To send a GET request:
import urllib2
import urllib
data = {}
data['name'] = 'Somebody Here'
data['location'] = 'Northampton'
data['language'] = 'Python'
url_values = urllib.urlencode(data)
url = 'http://www.example.com/example.cgi'
full_url = url + '?' + url_values
data = urllib2.urlopen(full_url)
|
Python logging typeerror
Question: Could you please help me, what's wrong?
import logging
if (__name__ == "__main__"):
logging.basicConfig(format='[%(asctime)s] %(levelname)s::%(module)s::%(funcName)s() %(message)s', level=logging.DEBUG)
logging.INFO("test")
And I can't run it, I've got an error:
Traceback (most recent call last):
File "/home/htfuws/Programming/Python/just-kidding/main.py", line 5, in
logging.INFO("test")
TypeError: 'int' object is not callable
Thank you very much.
Answer: [`logging.INFO` denotes](http://docs.python.org/2/howto/logging.html#logging-
advanced-tutorial) an integer constant with value of 20
> INFO Confirmation that things are working as expected.
What you need is
[`logging.info`](http://docs.python.org/2/library/logging.html#logging.info)
logging.info("test")
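Putting it together with the original snippet, the constant is still useful, but only as the `level` argument:

```python
import logging

logging.basicConfig(
    format='[%(asctime)s] %(levelname)s::%(module)s::%(funcName)s() %(message)s',
    level=logging.DEBUG)

logging.info("test")        # lowercase info() is the logging function
level_value = logging.INFO  # uppercase INFO is just the int constant, 20
```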
|
Django template dir strange behaviour
Question: I am really having problems to set TEMPLATE_DIR correctly after searching
through bunch of topics and trying various things.
Here are my project settings:
#settings.py
DEBUG = True
TEMPLATE_DEBUG = DEBUG
import os
PROJECT_PATH = os.path.realpath(os.path.dirname(__file__))
MEDIA_ROOT = ''
MEDIA_URL = ''
STATIC_ROOT = ''
STATIC_URL = '/static/'
STATICFILES_DIRS = (
PROJECT_PATH + '/static/',
)
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
)
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
)
TEMPLATE_DIRS = (
PROJECT_PATH + '/TrainingBook/templates/',
PROJECT_PATH + '/RestClient/templates/',
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'TrainingBook',
)
TEMPLATE_CONTEXT_PROCESSORS = (
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
# 'django.core.context_processors.request',
"TrainingBook.context_processors.global_context",
)
print PROJECT_PATH # /Users/Kuba/Development/University/RestClient
print STATICFILES_DIRS # ('/Users/Kuba/Development/University/RestClient/static/',)
print TEMPLATE_DIRS
# ('/Users/Kuba/Development/University/RestClient/TrainingBook/templates/',
# '/Users/Kuba/Development/University/RestClient/RestClient/templates/')
My project structure:
$ pwd
/Users/Kuba/Development/University/RestClient
$ tree
.
├── RestClient
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.pyc
│ ├── templates
│ │ ├── base.html
│ │ ├── home.html
│ │ └── login_form.html
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
├── TrainingBook
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── context_processors.py
│ ├── context_processors.pyc
│ ├── models.py
│ ├── models.pyc
│ ├── templates
│ │ ├── friends.html
│ │ ├── statistics.html
│ │ └── workouts.html
│ ├── tests.py
│ ├── views.py
│ └── views.pyc
├── manage.py
├── settings.py
├── settings.pyc
└── static
├── css
│ ├── bootstrap-glyphicons.css
│ ├── bootstrap.css
│ ├── bootstrap.min.css
│ └── main.css
└── js
├── bootstrap.js
├── bootstrap.min.js
└── jquery-1.10.2.js
I moved "settings.py" one level up in order to get PROJECT_PATH set to
"/RestClient/" instead of "/RestClient/RestClient/".
I also modified manage.py from
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "RestClient.settings")
to
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
When I run server TemplateDoesNotExist is raised and I am seeing something
strange:
Template-loader postmortem
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
/Users/Kuba/Development/University/RestClient/TrainingBook/templates/templates/home.html (File does not exist)
/Users/Kuba/Development/University/RestClient/RestClient/templates/templates/home.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
/Users/Kuba/.virtualenvs/client/lib/python2.7/site-packages/django/contrib/auth/templates/templates/home.html (File does not exist)
/Users/Kuba/Development/University/RestClient/TrainingBook/templates/templates/home.html (File does not exist)
As you can see there is "/templates/templates" even though I didn't specify it
so.
On the other hand if I switch TEMPLATE_DIRS to:
TEMPLATE_DIRS = (
PROJECT_PATH + '/TrainingBook/',
PROJECT_PATH + '/RestClient/',
)
after TemplateDoesNotExist is raised I can see that loader was looking for
templates at:
/Users/Kuba/Development/University/RestClient/RestClient/home.html
/Users/Kuba/Development/University/RestClient/TrainingBook/home.html
What did I do wrong?
EDIT: The problem was that I defined some views like this:
class Home(Base):
template_name = 'templates/home.html'
Answer: You specified `templates/home.html` instead of just `home.html`.
A template name is appended to each entry in TEMPLATE_DIRS, so with
`/foo/templates/` in TEMPLATE_DIRS the name `templates/home.html` resolves to
`/foo/templates/templates/home.html`. The template name should be just
`home.html`, which resolves to the correct path `/foo/templates/home.html`.
|
Python 2.7 decode error using UTF-8 header: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3
Question: Traceback:
Traceback (most recent call last):
File "venues.py", line 22, in <module>
main()
File "venues.py", line 19, in main
print_category(category, 0)
File "venues.py", line 13, in print_category
print_category(subcategory, ident+1)
File "venues.py", line 10, in print_category
print u'%s: %s' % (category['name'].encode('utf-8'), category['id'])
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)
Code:
# -*- coding: utf-8 -*-
# Using https://github.com/marcelcaraciolo/foursquare
import foursquare
# Prints categories and subcategories
def print_category(category, ident):
for i in range(0,ident):
print u'\t',
print u'%s: %s' % (category['name'].encode('utf-8'), category['id'])
for subcategory in category.get('categories', []):
print_category(subcategory, ident+1)
def main():
client = foursquare.Foursquare(client_id='id',
client_secret='secret')
for category in client.venues.categories()['categories']:
print_category(category, 0)
if __name__ == '__main__':
main()
Answer: The trick is to keep all your string processing completely Unicode: decode to
Unicode when reading input (files/pipes/console) and encode when writing
output. If `category['name']` is Unicode, keep it that way (remove the
`.encode('utf-8')` call).
Also, per your comment:
> However, the error still occurs when I try to do: python venues.py >
> categories.txt, but not when output goes to the terminal: python venues.py
Python can usually determine the terminal encoding and will automatically
encode to that encoding, which is why writing to the terminal works. If you
use shell redirection to output to a file, you need to tell Python the I/O
encoding you want via an environment variable, for example:
set PYTHONIOENCODING=utf8
python venues.py > categories.txt
Working example, using my US Windows console that uses `cp437` encoding. The
source code is saved in "UTF-8 without BOM". It's worth pointing out that the
_source code_ bytes are UTF-8, but declaring the source encoding and using a
Unicode string in allows Python to decode the source correctly, and encode the
`print` output automatically to the terminal using its default encoding
#coding:utf8
import sys
print sys.stdout.encoding
print u'üéâäàåçêëèïîì'
Here Python uses the default terminal encoding, but when redirected, does not
know what the encoding is, so defaults to `ascii`:
C:\>python example.py
cp437
üéâäàåçêëèïîì
C:\>python example.py >out.txt
Traceback (most recent call last):
File "example.py", line 4, in <module>
print u'üéâäàåçêëèïîì'
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-12: ordinal not in range(128)
C:\>type out.txt
None
Since we're using shell redirection, use a shell variable to tell Python what
encoding to use:
C:\>set PYTHONIOENCODING=cp437
C:\>python example.py >out.txt
C:\>type out.txt
cp437
üéâäàåçêëèïîì
We can also force Python to use another encoding, but in this case the
terminal doesn't know how to display `UTF-8`. The terminal is still decoding
the bytes in the file using `cp437`:
C:\>set PYTHONIOENCODING=utf8
C:\>python example.py >out.txt
C:\>type out.txt
utf8
üéâäàåçêëèïîì
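If setting an environment variable is not an option (say, the script is launched by another program), you can install the encoding wrapper yourself with `codecs.getwriter`. This is a minimal sketch using an in-memory byte stream; in Python 2 the same wrapper can be applied directly to the real stream with `sys.stdout = codecs.getwriter('utf8')(sys.stdout)` before any `print` statements run:

```python
import codecs
import io

# Wrap a byte stream so any Unicode written to it is UTF-8 encoded.
buf = io.BytesIO()
writer = codecs.getwriter('utf8')(buf)
writer.write(u'üéâ')
writer.flush()

# The buffer now holds the UTF-8 bytes for the three characters (2 bytes each).
print(len(buf.getvalue()))  # 6
```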
|
Running 7zip command line silently via Python
Question: I've seen plenty of questions regarding the python execution of a .exe file
using popen and mentions of using PIPE to stop output of the process.
Apologies if my terminology is incorrect; I'm very new to Python.
My main aim of this question to to add stdout=PIPE or something similar to
prevent any output showing, such as
> "Extracting filename..." This is very bad as some rars are large.
I am trying to run 7zip silently/hidden/quietly, so that the entire process can
run in the background and not interfere with current on-screen operations.
At this time, the current script works fine.
Here is my python code: **Python 2.7**
Pythonw.exe used to execute code:
pythonw.exe script.py
if ".rar" in input_file:
subprocess.Popen("C:\\Program Files (x86)\\7-Zip\\7z e " + input_file + " -o" + output_dest + " -y")
All help appreciated, thanks.
Answer: Pipe stdout and stderr to your system's null file (note that 7-Zip
expects the output directory joined directly to the `-o` switch, with no space):
import os
import subprocess

with open(os.devnull, 'w') as null:
    subprocess.Popen(['7z', 'e', input_file, '-o' + output_dest, '-y'], stdout=null, stderr=null)
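For what it's worth, on Python 3.3+ the null file handling is built in as `subprocess.DEVNULL`. A sketch using a harmless command in place of 7-Zip:

```python
import subprocess
import sys

# Run a command with all of its output discarded (Python 3.3+).
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("Extracting filename...")'],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
proc.wait()
print(proc.returncode)  # 0
```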
|
How can I make my default python homebrew?
Question: I've recently given up on macports and gone to homebrew. I'm trying to be able
to import numpy and scipy. I seem to have installed everything correctly, but
when I type python in terminal, it seems to run the default mac python.
I'm on OSX 10.8.4
I followed this post: [python homebrew by
default](http://stackoverflow.com/questions/5157678/python-homebrew-by-
default) and tried to move the homebrew directory to the front of my %PATH by
entering
export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH
then "echo $PATH" returns
/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
however when I look for where my python is by "which python", I get
/usr/bin/python
For some reason when I import numpy in interpreter it works but not so for
scipy.
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named scipy
>>>
What do I need to do to get python to run as my homebrew-installed python?
Should this fix my problem and allow me to import scipy?
Answer: Homebrew puts things in a `/usr/local/Cellar/<appname>` directory, if I'm not
mistaken. You should find Python's `bin` directory in there and put it on your
path before `/usr/bin` is reached.
For example, on my 10.8, python is located at
`/usr/local/Cellar/python/2.7.5/bin` and I put that directory before
`/usr/bin/python` in my `PATH` variable.
I do that similarly for other instances of me wanting to use homebrew version
of an app, another example being sqlite.
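To check which interpreter actually runs after adjusting `PATH`, you can also ask Python itself rather than relying on `which`:

```python
import sys

# Full path of the running interpreter and its version string.
print(sys.executable)
print(sys.version.split()[0])
```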
|
TypeError when using tkinter (python)
Question: I've written a test program to simulate my error. Here's the code:
from random import *
from tkinter import *
class match:
    def __init__(self):
        self.players = 4*[None]

    def commandos(self):
        print("show commands:")
        print("now commands for you!")

    def choice(self, choose):
        print("No choice")

class Application(Frame):
    def __init__(self, master, match):
        Frame.__init__(self, master)
        self.grid()
        self.create_widgets()
        self.match = match
        self.match.commandos()

    def create_widgets(self):
        self.submit_button = Button(self, text = "Submit", command = self.button_click)
        self.submit_button.grid(row = 2, column = 0, sticky = W)
        self.entry = Entry(self)
        self.entry.grid(row = 1, column = 1, sticky = W)

    def button_click(self):
        choose = self.entry.get()
        while choose != 'S':
            self.match.choice(choose)
            choose = input()

root = Tk()
root.title("StackQuestion")
root.geometry("250x150")
app = Application(root, match)
root.mainloop()
When I run it I get this error:
Traceback (most recent call last):
File "H:/Dropbox/Programmering/Python/stachquestion.py", line 40, in <module>
app = Application(root, match)
File "H:/Dropbox/Programmering/Python/stachquestion.py", line 22, in __init__
self.match.commandos()
TypeError: commandos() missing 1 required positional argument: 'self'
How do I fix this? I need it fixed so I can use a GUI in the tennis program
I'm working on.
Answer: You did not create a `match` object (the `()` is missing).
Replace the following line:
self.match = match
with
self.match = match()
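The difference is easy to reproduce in isolation. Calling a plain method on the class itself (rather than on an instance) fails with exactly the error from the traceback; a minimal sketch with a hypothetical `Match` class:

```python
class Match(object):
    def commandos(self):
        return "show commands"

# Through an instance, Python supplies `self` automatically.
print(Match().commandos())  # show commands

# Through the class, nothing fills in `self`, so the call fails.
try:
    Match.commandos()
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```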
|
Confused about python imports
Question: I reviewed [the Python 2.7.5
documentation](http://docs.python.org/2/tutorial/modules.html#packages). I am
having issues with my real project, but created a small test project here to
concisely reproduce the issue. Imagine a package with the following layout
stored at ~/Development/Test
Here is the structure:
Test/
__init__.py
foo.py
sub/
__init__.py
test_foo.py
And the code (`__init__.py` files are empty):
**foo.py**
def bar():
    print("hello world")
**test_foo.py**
import Test.foo
# also tried from Test import foo

def main():
    foo.bar()

if __name__ == "__main__":
    main()
When trying to run `test_foo.py` from the terminal (i.e. `python test_foo.py`)
I'm getting:
Traceback (most recent call last):
File "test_foo.py", line 1, in <module>
import Test.foo
ImportError: No module named Test.foo
I'm trying to import the main file in the package (foo.py) from the test file
in the sub module (in my real project the sub module is the unit testing
code). Oddly using [Sublime text 2](http://www.sublimetext.com/2) editor and
the plugin [python test runner](https://github.com/lyapun/sublime-
text-2-python-test-runner), I can run my individual tests just fine, but I
cannot build the test file. It gives me the above error.
Answer: Module names are case-sensitive. Use:
import Test.foo as foo
(The `as foo` is so that you can call `foo.bar` in `main`.)
* * *
You must also have `~/Development` listed in PYTHONPATH.
If using Unix and your login shell is bash, to add `~/Development` to
`PYTHONPATH` edit ~/.profile to include
export PYTHONPATH=$PYTHONPATH:$HOME/Development
Here are [instructions for
Windows](http://docs.python.org/2/using/windows.html#configuring-python).
* * *
Further suggestions for debugging:
Place
import sys
print(sys.path)
import Test
print(Test)
import Test.foo
at the top of `test_foo.py`. Please post the output.
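If editing `PYTHONPATH` is awkward, the search path can also be extended at runtime with `sys.path.insert`. A self-contained sketch using a throwaway module (not the asker's `Test` package):

```python
import os
import sys
import tempfile

# Create a module on disk, add its directory to sys.path, and import it.
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, 'foo_demo.py'), 'w') as f:
    f.write("def bar():\n    return 'hello world'\n")

sys.path.insert(0, mod_dir)
import foo_demo

print(foo_demo.bar())  # hello world
```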
|
retrieve sequence alignment score produced by emboss in biopython
Question: I'm trying to retrieve the alignment score of two sequences compared using
emboss in biopython. The only way that I know is to retrieve it from an output
text file produced by emboss. The problem is that there will be hundreds of
these files to iterate over. Is there an easier/cleaner method to retrieve the
alignment score, without resorting to that? This is the main part of the code
that I'm using.
from Bio.Emboss.Applications import StretcherCommandline
needle_cline = StretcherCommandline(asequence=,bsequence=,gapopen=,gapextend=,outfile=)
stdout, stderr = needle_cline()
Answer: I had the same problem, and after some time spent searching for a neat
solution I put up the white flag.
However, to speed up the processing of the output files significantly, I did the
following things:
1) I used the _re_ Python module (regular expressions) to extract all the data
needed.
2) I created a ramdisk for the output files. Using a ramdisk allowed all the
data to be processed and exchanged in RAM (much faster than writing and reading
the output files from a hard drive, not to mention it saves your HDD when
processing a massive number of alignments).
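For completeness, the `re` part can be quite small. This sketch assumes the EMBOSS pairwise header line has the usual `# Score: 261.5` form; the pattern may need adjusting for your own output files:

```python
import re

def parse_score(text):
    """Return the alignment score from EMBOSS-style output, or None."""
    match = re.search(r"#\s*Score:\s*([-+]?\d+(?:\.\d+)?)", text)
    return float(match.group(1)) if match else None

sample = "# Gap_penalty: 16\n# Score: 261.5\n#\n"
print(parse_score(sample))  # 261.5
```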
|
Python webpage source read with special characters
Question: I am reading a page source from a webpage, then parsing a value from that
source. There I am facing a problem with special characters.
In my Python controller file I am using `# -*- coding: utf-8 -*-`, but I am
reading a webpage source which uses `charset=iso-8859-1`.
So when I read the page content without specifying any encoding, it throws
`UnicodeDecodeError: 'utf8' codec can't decode byte 0xfc in position
133: invalid start byte`.
When I use `string.decode("iso-8859-1").encode("utf-8")`, it parses the data
without any error, but it displays the value as 'F\u00fcnke' instead
of 'Fünke'.
Please let me know how I can solve this issue. I would greatly appreciate any
suggestions.
Answer: Encoding is a PITA in Python3 for sure (and 2 in some cases as well). Try
checking these links out, they might help you:
[Python - Encoding string - Swedish
Letters](http://stackoverflow.com/questions/7315629/python-encoding-string-
swedish-letters)
[Python3 - ascii/utf-8/iso-8859-1 can't decode byte 0xe5 (Swedish
characters)](http://stackoverflow.com/questions/18260859/python3-ascii-
utf-8-iso-8859-1-cant-decode-byte-0xe5-swedish-characters)
<http://docs.python.org/2/library/codecs.html>
It would also help to see the code behind `"So when I read the page content
without specifying any encoding"`. My best guess is that your console doesn't
use utf-8 (for instance, on Windows). Your `# -*- coding: utf-8 -*-` only tells
Python what encoding the source code itself is in, not the encoding of the
actual data the code is going to parse or analyze. For instance, I can write:
# -*- coding: iso-8859-1 -*-
import time
# Här skriver jag ut tiden (Translation: Here, I print out the time)
print(time.strftime('%H:%M:%S'))
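To make the first point concrete: decoding the raw bytes with the page's declared charset yields the real text. Seeing 'F\u00fcnke' usually means you are looking at the _repr_ of the Unicode string (for example, by printing a list or JSON-encoding it) rather than printing the string itself. A minimal sketch:

```python
# Bytes as a page served with charset=iso-8859-1 would deliver them.
raw = b'F\xfcnke'

# Decode with the page's charset to get the correct Unicode text.
text = raw.decode('iso-8859-1')
print(text)  # Fünke
```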
|
Why is my Python code returning an error when I try to fetch YouTube videos for a given keyword?
Question: Whenever I try to run my code, I receive the following error: "comment_content
error! 'nonetype' object has no attribute 'href'" I am new to Python, and did
not write this code myself; it was given to me to use. My understanding is
that it was functioning properly before? Could this have to do with changes in
the YouTube Data API since it was written?
import pdb
import gdata.youtube
import gdata.youtube.service
import codecs
import time
client = gdata.youtube.service.YouTubeService()
query = gdata.youtube.service.YouTubeVideoQuery()
### the input words are here
query.vq = "4b hair"
#######
# the out put file are here
viewFile = codecs.open('views4b_hair.csv', 'w')
commentFile=codecs.open('comments4b_hair.csv', 'w')
##########
query.max_results = 50
query.start_index = 0
query.safesearch = "moderate"
#query.format = 5
query.orderby = "relevance"
#query.author = "hawaiinani"
#pdb.set_trace()
for i in range(19):
    #pdb.set_trace()
    query.start_index=str(int(query.start_index)+50)
    feed = client.YouTubeQuery(query)
    print len(feed.entry)
    youtubeid=[]
    youtubetitle=[]
    for entry in feed.entry:
        #youtubetitle.append(entry.title.text)
        youtubeid.append(entry.id.text[38:])
        print entry.id.text[38:],i
        try:
            entry_comment = client.GetYouTubeVideoEntry(video_id=entry.id.text[38:])
            comment_feed = client.GetYouTubeVideoCommentFeed(video_id=entry.id.text[38:])
            viewFile.write(','.join([entry.id.text[38:],entry_comment.published.text,
                str(entry_comment.media.duration.seconds), str(entry_comment.statistics.view_count),comment_feed.total_results.text,entry_comment.media.title.text.decode('ascii', errors='ignore').encode('ascii', 'ignore')]) + '\n')
            #videop.append("%s, %s,%s, %s, %s, %s" % (search_result["id"]["videoId"],entry.published.text,
            #    entry.media.duration.seconds, entry.statistics.view_count,comment_feed.total_results.text,entry.media.title.text))
            #
            #time.sleep(3)
        except Exception, ex:
            print 'View_content Error', ex
            time.sleep(10)
        try:
            comment_content = client.GetYouTubeVideoCommentFeed(video_id=entry.id.text[38:])
            indexh=0
            #while comment_content:
            while indexh<10:
                indexh=indexh+1
                for comment_entry in comment_content.entry:
                    pubText = comment_entry.published.text
                    #print pubText
                    titleText = comment_entry.content.text.decode('ascii', errors='ignore').encode('ascii', 'ignore')
                    #print titleText
                    #print 'Got title'
                    #pubText, titleText = comment_entry.published.text, comment_entry.title.text
                    commentFile.write(','.join([entry.id.text[38:],pubText,titleText]) + '\n'+'\n')
                    #commentFile.write(u',')
                    #commentFile.write(pubText + u',')
                    #print 'About to write title'
                    #print titleText
                    #print 'Wrote title'
                    #commentlist.append("%s, %s,%s" % (search_result["id"]["videoId"],pubText, titleText))
                comment_content=client.Query(comment_content.GetNextLink().href)
                #time.sleep(3)
            #time.sleep(3)
        except Exception, ex:
            print 'Comment_content Error!', ex
            time.sleep(5)
#pdb.set_trace()
viewFile.close()
commentFile.close()
Answer: The error occurs when `comment_content.GetNextLink()` becomes `None`. In order
to fix it, replace:
while indexh < 10:
with:
while indexh < 10 and comment_content:
also replace:
comment_content=client.Query(comment_content.GetNextLink().href)
with:
next_link = comment_content.GetNextLink()
if next_link:
    comment_content = client.Query(next_link.href)
else:
    comment_content = None
Hope that helps.
|
Why does Python print ASCII rather than Unicode despite the coding=UTF-8 declaration?
Question:
# coding=UTF-8
with open('/home/marius/dev/python/navn/list.txt') as f:
    lines = f.read().splitlines()
print lines
The file `/home/marius/dev/python/navn/list.txt` contains a list of strings
with some special characters, such as æ,ø,å,Æ,Ø,Å. In the terminal, these are
all rendered as hexadecimals. I want these to be rendered as UTF-8. How is
this done?
Answer: By decoding the data from UTF-8 to Unicode values, then having Python encode
those values back to your terminal encoding automatically:
with open('/home/marius/dev/python/navn/list.txt') as f:
    for line in f:
        print line.decode('utf8')
You can use `io.open()` and have the data decoded for you as you read:
import io

with io.open('/home/marius/dev/python/navn/list.txt', encoding='utf8') as f:
    for line in f:
        print line
|
mongodb pymongo disappearing list of items from collection
Question: So I run a local mongodb by running `$ mongod` from the terminal. I then
connect to it and create a small database with a python script using `pymongo`
:
import random
import string
import pymongo
conn = pymongo.Connection("localhost", 27017)
collection = conn.db.random_strings
strings = numbers = []
for i in range(0,1000):
    char_set = string.ascii_uppercase + string.digits
    num_set = [ str(num) for num in [0,1,2,3,4,5,6,7,8,9] ]
    strings.append( ''.join( random.sample( char_set * 6, 6 ) ) )
    numbers.append( int(''.join( random.sample( num_set * 6, 6 ) ) ) )
    collection.insert( { 'str' : strings[ i ], 'num' : numbers[ i ] } )
I now have a database with lots of random strings and numbers in it. Now comes
the thing that bugs me and I don't understand:
things = collection.find()
first_list = list( things )
second_list = list( things )
print( first_list )
print( second_list )
The first print statement returns a list of 1000 objects, while the second
print statement returns an empty list (`[]`). Why?
Answer: This line:
things = collection.find()
actually returns a `Cursor`
([docs](http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.find)):
> Returns an instance of `Cursor` corresponding to this query.
So, when you create a `list` from the `things` `Cursor`, the entire results
from the `find` query are returned and copied into `first_list`. The second
time, the `Cursor` instance stored in `things` is at the end of the results,
so, there are no more to populate `second_list`.
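The same behaviour can be reproduced without MongoDB at all, since a `Cursor` works like any other Python iterator:

```python
# An iterator yields each item exactly once; a second pass finds nothing.
things = iter([{'num': 1}, {'num': 2}, {'num': 3}])

first_list = list(things)   # consumes every result
second_list = list(things)  # the iterator is already exhausted

print(first_list)
print(second_list)  # []
```

With pymongo, either materialize the results once with `list(...)` and reuse that list, call `find()` again, or use the cursor's `rewind()` method to re-run the query.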
|
Make a python program with PySide an executable
Question: I have a python program:
import sys
from PySide.QtCore import *
from PySide.QtGui import *
from PySide.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
web.load(QUrl("http://www.google.com"))
web.show()
web.resize(650, 750)
web.setWindowTitle('Website')
sys.exit(app.exec_())
I used google.com just as an example. But when I want to make an executable of
this program with py2exe, it won't work. I get this error:

Other programs without PySide work, but with PySide it doesn't.
How can I make it work?
Answer: You need the Microsoft Visual C++ runtime.
You should take a look at this: <http://qt-
project.org/wiki/Packaging_PySide_applications_on_Windows> . In the py2exe
tutorial it explains about the runtime you should install.
|
Initialize all the classes in a module into nameless objects in a list
Question: Is there a way to initialize all classes from a python module into a list of
nameless objects?
**Example** I have a module `rules` which contains only child classes of a
class `Rule`. Because of that, I'm certain they will all implement a method
`run()` and will have an attribute `name` generated during the call to
`__init__`.
I would like to have a list of objects dynamically instantiated from those
classes. By dynamically instantiated I mean that they don't have to be named
explicitly.
The questions are:
Is it possible to iterate through all classes in a module?
Can a nameless object be instantiated?
Answer: There are at least two approaches you can take. You can get all of the
subclasses of a class by calling a class's `__subclasses__()` method. So if
your parent class is called `Rule`, you could call:
rule_list = [cls() for cls in Rule.__subclasses__()]
This will give you all subclasses of `Rule`, and will not limit them to the
ones found in a particular module.
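A self-contained sketch of this first approach, with hypothetical `Rule` subclasses standing in for the asker's module:

```python
class Rule(object):
    def run(self):
        raise NotImplementedError

class UpperRule(Rule):
    def run(self):
        return 'upper'

class LowerRule(Rule):
    def run(self):
        return 'lower'

# Instantiate every direct subclass of Rule without naming the objects.
rule_list = [cls() for cls in Rule.__subclasses__()]
print(sorted(r.run() for r in rule_list))  # ['lower', 'upper']
```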
If you have a handle to your module, you can iterate over its content. Like
so:
import rules

rule_list = []
for name in dir(rules):
    value = getattr(rules, name)
    if isinstance(value, type) and issubclass(value, Rule):
        rule_list.append(value())
Unfortunately, `issubclass` throws `TypeError` if you give it an object that
is not a class as its first argument. So you have to handle that somehow.
EDIT: dealing with the `issubclass` quirk per @Blckknght's suggestion.
|
Animate a Histogram in Python
Question: I'm trying to animate a histogram over time, and so far the code I have is the
following one:
import matplotlib.pyplot as plt
import numpy as np
import time
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)
alphab = ['A', 'B', 'C', 'D', 'E', 'F']
frequencies = [1, 44, 12, 11, 2, 10]
pos = np.arange(len(alphab))
width = 1.0 # gives histogram aspect to the bar diagram
ax.set_xticks(pos + (width / 2))
ax.set_xticklabels(alphab)
for bin_idx in np.linspace(0,1000000,100000000):
t = time.time()
#Here we just change the first bin, so it increases through the animation.
frequencies[0] = bin_idx
line1 =plt.bar(pos, frequencies, width, color='r')
plt.draw()
elapsed = time.time() - t
print elapsed
The code works, but the output shows that after some iterations it becomes much
slower than at the beginning. Is there a way to speed things up? We want to
update this in real time, and the process it runs in is pretty fast.
Also, it is important to note that I do not want a post-processing animation;
we want real-time updates, so the histogram animation example was not suitable
for this particular process.
Thanks
Answer: If you have a newer version of Matplotlib there is an
[`animations.FuncAnimation`
class](http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation)
which can help reduce some of the boiler-plate code. ([See this
page](http://matplotlib.org/examples/animation/histogram.html) for an
example.) It is pretty fast (~ **52 frames per second**):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import timeit
clock = timeit.default_timer
fig, ax = plt.subplots()
alphab = ['A', 'B', 'C', 'D', 'E', 'F']
frequencies = [1, 44, 12, 11, 2, 10]
pos = np.arange(len(alphab))
width = 1.0 # gives histogram aspect to the bar diagram
ax.set_xticks(pos + (width / 2))
ax.set_xticklabels(alphab)
rects = plt.bar(pos, frequencies, width, color='r')
start = clock()
def animate(arg, rects):
frameno, frequencies = arg
for rect, f in zip(rects, frequencies):
rect.set_height(f)
print("FPS: {:.2f}".format(frameno / (clock() - start)))
def step():
for frame, bin_idx in enumerate(np.linspace(0,1000000,100000000), 1):
#Here we just change the first bin, so it increases through the animation.
frequencies[0] = bin_idx
yield frame, frequencies
ani = animation.FuncAnimation(fig, animate, step, interval=10,
repeat=False, blit=False, fargs=(rects,))
plt.show()
* * *
If you don't have a newer version of Matplotlib, here is the older way to do
it. It is slightly slower (~ **45 frames per second**):
Don't call `plt.bar` with each iteration of the loop. Instead, call it just
once, save the `rects` return value, and then call `set_height` to modify the
height of those `rects` on subsequent iterations of the loop. This trick (and
others) is explained in the [Matplotlib Animations
Cookbook](http://www.scipy.org/Cookbook/Matplotlib/Animations).
import sys
import matplotlib as mpl
mpl.use('TkAgg') # do this before importing pyplot
import matplotlib.pyplot as plt
import numpy as np
import timeit
clock = timeit.default_timer
fig, ax = plt.subplots()
alphab = ['A', 'B', 'C', 'D', 'E', 'F']
frequencies = [1, 44, 12, 11, 2, 10]
pos = np.arange(len(alphab))
width = 1.0 # gives histogram aspect to the bar diagram
ax.set_xticks(pos + (width / 2))
ax.set_xticklabels(alphab)
def animate():
start = clock()
rects = plt.bar(pos, frequencies, width, color='r')
for frameno, bin_idx in enumerate(np.linspace(0,1000000,100000000), 2):
#Here we just change the first bin, so it increases through the animation.
frequencies[0] = bin_idx
# rects = plt.bar(pos, frequencies, width, color='r')
for rect, f in zip(rects, frequencies):
rect.set_height(f)
fig.canvas.draw()
print("FPS: {:.2f}".format(frameno / (clock() - start)))
win = fig.canvas.manager.window
win.after(1, animate)
plt.show()
For comparison, adding `plt.clf` to your original code, on my machine reaches
about **12 frames per second**.
* * *
Some comments about timing:
You won't get accurate measurements by calculating the very small time
differences with each pass through the loop. The time resolution of
`time.time()` \-- at least on my computer -- is not great enough. You'll get
more accurate measurements by measuring one starting time and calculating the
large time difference between the start time and the current time, and then
dividing by the number of frames.
I also changed `time.time` to `timeit.default_timer`. The two are the same on
Unix computers, but `timeit.default_timer` is set to `time.clock` on Windows
machines. Thus `timeit.default_timer` chooses the more accurate timer for each
platform.
|
Click on any one of "1 2 3 4 5 ..." on a page by using Selenium in Python (e.g., Splinter):
Question: I have HTML that looks like the three following sample statements:
<a href="javascript:__doPostBack('ctl00$FormContent$gvResults','Page$10')">...</a>
<a href="javascript:__doPostBack('ctl00$FormContent$gvResults','Page$12')">12</a>
<a href="javascript:__doPostBack('ctl00$FormContent$gvResults','Page$13')">13</a>
(I'd presently be on pg. 11.)
I don't know the Py/Selenium/Splinter syntax for selecting one of the page
numbers in a list and clicking on it to go to that page. (Also, I need to be
able to identify the element in the argument as, for example, 'Page$10' or
'Page$12', as seen in the __doPostBack notation. Maybe just a 'next page', in
so many words, would be fine, but I don't even know how to do that.)
Thank you for any help.
**UPDATE II:** Here's the code I have to work from:
import time
import win32ui
import win32api
import win32con
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from ctypes import *
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('http://[site]');
**UPDATE III:**
Traceback (most recent call last):
File "montpa_05.py", line 47, in <module>
continue_link = driver.find_element_by_link_text('4')
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", l
ine 246, in find_element_by_link_text
return self.find_element(by=By.LINK_TEXT, value=link_text)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", l
ine 680, in find_element
{'using': by, 'value': value})['value']
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", l
ine 165, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py"
, line 164, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: u'no such element\n
(Session info: chrome=28.0.1500.95)\n (Driver info: chromedriver=2.2,platform=
Windows NT 6.1 SP1 x86_64)'
Answer: The `<a>` element is defined as a link, which means you can select it by
link text.
I don't know Python, but the Java syntax would be `By.linkText(##)`, where `##`
is the number you want to click on.
|
sign() much slower in python than matlab?
Question: I have a function in Python that basically takes the sign of an array
(75, 150), for example. I'm coming from Matlab, and the execution time looks
more or less the same except for this function. I'm wondering whether sign() is
just slow, and whether you know an alternative that does the same thing.
Thanks,
Answer: I can't tell you if this is faster or slower than Matlab, since I have no idea
what numbers you're seeing there (you provided no quantitative data at all).
However, as far as alternatives go:
import numpy as np
a = np.random.randn(75, 150)
aSign = np.sign(a)
Testing using `%timeit` in [`IPython`](http://ipython.org/):
In [15]: %timeit np.sign(a)
10000 loops, best of 3: 180 µs per loop
Because the loop over the array (and what happens inside it) is implemented in
optimized C code rather than generic Python code, it tends to be about an
order of magnitude faster—in the same ballpark as Matlab.
* * *
Comparing the exact same code as a numpy vectorized operation vs. a Python
loop:
In [276]: %timeit [np.sign(x) for x in a]
1000 loops, best of 3: 276 us per loop
In [277]: %timeit np.sign(a)
10000 loops, best of 3: 63.1 us per loop
So, only 4x as fast here. (But then `a` is pretty small here.)
|
how to get the integer value of a single pyserial byte in python
Question: I'm using pyserial in python 2.7.5 which according to the
[docs](http://pyserial.sourceforge.net/pyserial_api.html):
> read(size=1) Parameters: size – Number of bytes to read. Returns:
> Bytes read from the port. Read size bytes from the serial port. If a
> timeout is set it may return less characters as requested. With no timeout
> it will block until the requested number of bytes is read.
>
> Changed in version 2.5: Returns an instance of bytes when available (Python
> 2.6 and newer) and str otherwise.
Mostly I want to use the values as hex values and so when I use them I use the
following code:
ch = ser.read()
print ch.encode('hex')
This works no problem.
But now I'm trying to read just ONE value as an integer, because it's read in
as a string from serial.read, I'm encountering error after error as I try to
get an integer value.
For example:
print ch
prints nothing because it's an invisible character (in this case chr(0x02)).
print int(ch)
raises an error
ValueError: invalid literal for int() with base 10: '\x02'
trying `print int(ch,16)`, `ch.decode()`, `ch.encode('dec')`, `ord(ch)`,
`unichr(ch)` all give errors (or nothing).
In fact, the only way I have got it to work is converting it to hex, and then
back to an integer:
print int(ch.encode('hex'),16)
this returns the expected `2`, but I know I am doing it the wrong way. How do
I convert a a chr(0x02) value to a 2 more simply?
Believe me, I have searched and am finding ways to do this in python 3, and
work-arounds using imported modules. Is there a native way to do this without
resorting to importing something to get one value?
edit: I have tried `ord(ch)` but it is returning `90` and I KNOW the value is
`2`, 1) because that's what I'm expecting, and 2) because when I get an error,
it tells me (as above)
Here is the code I am using that generates `90`
count = ser.read(1)
print "count:",ord(ch)
the output is `count: 90`
and as soon as I cut and pasted that code above I saw the error `count !=
ch`!!!!
Thanks
Answer: Use the `ord` function. What you have in your input is a `chr(2)` (which, as a
constant, can also be expressed as '\x02').
i= ord( chr(2) )
i= ord( '\x02' )
would both store the integer 2 in variable `i`.
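If you later need multi-byte values from the port, the `struct` module generalizes this idea; for a single unsigned byte it agrees with `ord`:

```python
import struct

ch = b'\x02'

# Both give the integer value of a single byte.
print(ord(ch))                    # 2
print(struct.unpack('B', ch)[0])  # 2

# struct also handles multi-byte reads, e.g. a little-endian 16-bit count:
print(struct.unpack('<H', b'\x02\x01')[0])  # 258
```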
|
How do I filter out nested cases in Python
Question: I have an ASCII plain-text input file with main cases and nested
cases, as shown below. I want to compare the instances starting with '$'
between the details and @ExtendedAttr = nvp_add functions in the input file
for each case under switch($specific-trap). But when I run the script in the
Python Script section below, all nested cases are printed out as well. I don't
want the nested cases printed; the script should only consider the cases
directly under switch($specific-trap). How should I do this? Help appreciated:
Input file:
************
case ".1.3.6.1.4.1.27091.2.9": ### - Notifications from JNPR-TIMING-MIB (1105260000Z)
log(DEBUG, "<<<<< Entering... juniper-JNPR-TIMING-MIB.include.snmptrap.rules
>>>>>")
@Agent = "JNPR-TIMING-MIB"
@Class = "40200"
$OPTION_TypeFieldUsage = "3.6"
switch($specific-trap)
{
case "1": ### trapMsgNtpStratumChange
##########
# $1 = trapAttrSource
# $2 = trapAttrSeverity
##########
$trapAttrSource = $1
$trapAttrSeverity = lookup($2, TrapAttrSeverity)
$OS_EventId = "SNMPTRAP-juniper-JNPR-TIMING-MIB-trapMsgNtpStratumChange"
@AlertGroup = "NTP Stratum Status"
@AlertKey = "Source: " + $trapAttrSource
@Summary = "NTP Stratum Changes" + " ( " + @AlertKey + " ) "
switch($2)
{
case "1":### clear
$SEV_KEY = $OS_EventId + "_clear"
@Summary = "End of: " + @Summary
$DEFAULT_Severity = 1
$DEFAULT_Type = 2
$DEFAULT_ExpireTime = 0
case "2":### none
$SEV_KEY = $OS_EventId + "_none"
$DEFAULT_Severity = 2
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "3":### minor
$SEV_KEY = $OS_EventId + "_minor"
$DEFAULT_Severity = 3
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "4":### major
$SEV_KEY = $OS_EventId + "_major"
$DEFAULT_Severity = 4
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "5":### critical
$SEV_KEY = $OS_EventId + "_critical"
$DEFAULT_Severity = 5
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
default:
$SEV_KEY = $OS_EventId + "_unknown"
$DEFAULT_Severity = 2
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
}
update(@Severity)
$trapAttrSeverity = $trapAttrSeverity + " ( " + $2 + " )"
@Identifier = @Node + " " + @AlertKey + " " + @AlertGroup + " " +
$DEFAULT_Type + " " + @Agent + " " + @Manager + " " + $specific-trap
if(match($OPTION_EnableDetails, "1") or match($OPTION_EnableDetails_juniper,
"1")) {
details($trapAttrSource,$trapAttrSeverity)
}
@ExtendedAttr = nvp_add(@ExtendedAttr, "trapAttrSource", $trapAttrSource,
"trapAttrSeverit")
case "2": ### trapMsgNtpLeapChange
##########
# $1 = trapAttrSource
# $2 = trapAttrSeverity
##########
$trapAttrSource = $1
$trapAttrSeverity = lookup($2, TrapAttrSeverity)
$OS_EventId = "SNMPTRAP-juniper-JNPR-TIMING-MIB-trapMsgNtpLeapChange"
@AlertGroup = "NTP Leap Status"
@AlertKey = "Source: " + $trapAttrSource
@Summary = "NTP Leap Changes" + " ( " + @AlertKey + " ) "
switch($2)
{
case "1":### clear
$SEV_KEY = $OS_EventId + "_clear"
@Summary = "End of: " + @Summary
$DEFAULT_Severity = 1
$DEFAULT_Type = 2
$DEFAULT_ExpireTime = 0
case "2":### none
$SEV_KEY = $OS_EventId + "_none"
$DEFAULT_Severity = 2
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "3":### minor
$SEV_KEY = $OS_EventId + "_minor"
$DEFAULT_Severity = 3
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "4":### major
$SEV_KEY = $OS_EventId + "_major"
$DEFAULT_Severity = 4
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
case "5":### critical
$SEV_KEY = $OS_EventId + "_critical"
$DEFAULT_Severity = 5
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
default:
$SEV_KEY = $OS_EventId + "_unknown"
$DEFAULT_Severity = 2
$DEFAULT_Type = 1
$DEFAULT_ExpireTime = 0
}
update(@Severity)
$trapAttrSeverity = $trapAttrSeverity + " ( " + $2 + " )"
@Identifier = @Node + " " + @AlertKey + " " + @AlertGroup + " " +
$DEFAULT_Type + " " + @Agent + " " + @Manager + " " + $specific-trap
if(match($OPTION_EnableDetails, "1") or match($OPTION_EnableDetails_juniper,
"1")) {
details($trapAttrSource,$trapAttrSeverity)
}
@ExtendedAttr = nvp_add(@ExtendedAttr, "trapAttrSource", $trapAttrSource,
"trapAttrSeverity", $trapAttrSeverity)
Below is the code I use, suggested by Vaibhav Aggarwal, one of the members
here on Stack Overflow.
Python Script
**************
import re
caselines_index = []
cases = []
readlines = []
def read(in_file):
global cases
global caselines_index
global readlines
with open(in_file, 'r') as file:
for line in file.readlines():
readlines.append(line.strip())
for line in readlines:
case_search = re.search("case\s\".+?\"\:\s", line)
if case_search:
caselines_index.append(readlines.index(line))
#print caselines_index
caselines_index_iter = iter(caselines_index)
int_line_index = int(next(caselines_index_iter))
int_next_index = int(next(caselines_index_iter))
while True:
try:
case_text = ' '.join(readlines[int_line_index:int_next_index]).strip()
case = [readlines[int_line_index].strip(), case_text]
cases.append(case)
int_line_index = int_next_index
int_next_index = int(next(caselines_index_iter))
except StopIteration:
case_text = ' '.join(readlines[int_line_index:len(readlines) - 1]).strip()
case = [readlines[int_line_index].strip(), case_text]
cases.append(case)
break
def work():
MATCH = 1
for case_list in cases:
details = []
nvp_add = []
caseline = case_list[0].strip()
nvp = re.findall("details\(.+?\)", case_list[1].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
details.append(result)
nvp = re.findall("nvp_add\(.+?\)", case_list[1].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
nvp_add.append(result)
missing_from_details, missing_from_nvp_add = [], []
missing_from_details = [o for o in nvp_add if o not in set(details)]
missing_from_nvp_add = [o for o in details if o not in set(nvp_add)]
if missing_from_nvp_add or missing_from_details:
MATCH = 0
print caseline + " LINE - " + str(readlines.index(caseline) + 1)
for mismatch in missing_from_details:
print "Missing from details:"
print mismatch
for mismatch in missing_from_nvp_add:
print "Missing from nvp_add:"
print mismatch
print "\n"
if MATCH == 1:
print "MATCH"
else:
print "MISMATCHES"
def main():
in_file = "C:/target1.txt"
read(in_file)
work()
if __name__=="__main__":
main()
Answer:
import re
from sys import stdout
#stdout = open("result.txt", 'w+')
def read(in_file):
cases = []
caselines_index = []
readlines = []
readlines_num = []
with open(in_file, 'r') as file:
readfile = file.read().strip()
for line in readfile.split('\n'):
readlines_num.append(line.strip())
regex = re.compile("switch\(\$\d\).+?\}", re.DOTALL)
readfile = re.sub(regex, ' ', readfile)
for line in readfile.split('\n'):
readlines.append(line.strip())
for line in readlines:
case_search = re.search("case\s\".+?\"\:\s", line)
if case_search:
caselines_index.append(readlines.index(line))
#print caselines_index
caselines_index_iter = iter(caselines_index)
try:
int_line_index = int(next(caselines_index_iter))
except:
print "No cases found"
try:
int_next_index = int(next(caselines_index_iter))
except:
int_next_index = len(readlines) - 1
while True:
try:
case_text = ' '.join(readlines[int_line_index:int_next_index]).strip()
match1 = re.search("nvp_add", case_text)
match2 = re.search("details", case_text)
if match1 or match2:
case = [readlines[int_line_index].strip(), readlines_num.index(readlines[int_line_index]) + 1, case_text]
cases.append(case)
int_line_index = int_next_index
int_next_index = int(next(caselines_index_iter))
except StopIteration:
case_text = ' '.join(readlines[int_line_index:len(readlines) - 1]).strip()
case = [readlines[int_line_index].strip(), readlines_num.index(readlines[int_line_index]), case_text]
cases.append(case)
break
return cases
def work(cases):
MATCH = 1
for case_list in cases:
details = []
nvp_add = []
caseline = case_list[0].strip()
nvp = re.findall("details\(.+?\)", case_list[2].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
details.append(result)
nvp = re.findall("nvp_add\(.+?\)", case_list[2].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
nvp_add.append(result)
missing_from_details, missing_from_nvp_add = [], []
missing_from_details = [o for o in nvp_add if o not in set(details)]
missing_from_nvp_add = [o for o in details if o not in set(nvp_add)]
if missing_from_nvp_add or missing_from_details:
MATCH = 0
print caseline + " LINE - " + str(case_list[1] + 1)
for mismatch in missing_from_details:
print "Missing from details:"
print mismatch
for mismatch in missing_from_nvp_add:
print "Missing from nvp_add:"
print mismatch
print "\n"
if MATCH == 1:
print "MATCH"
else:
print "MISMATCHES"
def main():
in_file = "target1.txt"
cases = read(in_file)
work(cases)
if __name__=="__main__":
main()
This will filter out all the switches that are nested. This will only work in
your case with your input file.
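The nested-switch filtering hinges on a single substitution; a tiny standalone check (three made-up case lines, not from the real rules file) shows the non-greedy `re.DOTALL` pattern removing the inner `switch($2) { ... }` block so its cases never reach `caselines_index`:

```python
import re

text = '''case "1": outer start
switch($2)
{
case "2": nested, should be ignored
}
case "3": outer end'''

# the non-greedy DOTALL match spans the whole nested switch block
stripped = re.sub(r"switch\(\$\d\).+?\}", " ", text, flags=re.DOTALL)
cases = [line for line in stripped.split("\n")
         if re.search(r'case\s".+?":\s', line)]
```

Only the two outer cases survive; the nested one is gone along with its enclosing `switch`.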
|
Python script couldn't detect mismatch when $instance deleted in only one of the nvp_add values in double if-else input statement
Question: This is a continuation of the Stack Overflow question below, about
filtering out nested cases in Python: [How to compare the
attributes start with $ in 2 functions and display match or
mismatch](http://stackoverflow.com/questions/18325605/how-to-compare-the-
attributes-start-with-in-2-functions-and-display-match-or-m/18325700#18325700)
When I delete one of the `$apsChanConfigNumber` entries from `nvp_add` in the first
`if` block, the compare script from the link above couldn't detect the mismatch;
there are 2 `nvp_add` calls under this case. How can I resolve this issue?
The input file (ASCII plain text) contains the text below:
* * *
if(exists($snmpTrapEnterprise))
{
if(match($OPTION_EnableDetails, "1") or
match($OPTION_EnableDetails_juniper, "1")) {
details($snmpTrapEnterprise,$apsChanStatusSwitchovers,$apsChanStatusCurrent,$apsChanConfigGroupName,$apsChanConfigNumber)
}
@ExtendedAttr = nvp_add(@ExtendedAttr, "snmpTrapEnterprise", $snmpTrapEnterprise, "apsChanStatusSwitchovers", $apsChanStatusSwitchovers, "apsChanStatusCurrent", $apsChanStatusCurrent,
"apsChanConfigGroupName", , "apsChanConfigNumber",)
}
else
{
if(match($OPTION_EnableDetails, "1") or
match($OPTION_EnableDetails_juniper, "1")) {
details($apsChanStatusSwitchovers,$apsChanStatusCurrent,$apsChanConfigGroupName,$apsChanConfigNumber)
}
@ExtendedAttr = nvp_add(@ExtendedAttr, "apsChanStatusSwitchovers",
$apsChanStatusSwitchovers, "apsChanStatusCurrent", $apsChanStatusCurrent,
"apsChanConfigGroupName", $apsChanConfigGroupName,
"apsChanConfigNumber", $apsChanConfigNumber)
}
Answer: Since I've started, I'll finish :P
import re
import sys
from collections import Counter
#sys.stdout = open("result.txt", 'w+')
def intersect(list1, list2):
    # Note: unused below (Counter handles duplicates instead).
    # Iterate over a copy so removing items doesn't skip elements.
    for o in list1[:]:
        if o in list2:
            list1.remove(o)
            list2.remove(o)
    return list1, list2
def read(in_file):
cases = []
caselines_index = []
readlines = []
readlines_num = []
with open(in_file, 'r') as file:
readfile = file.read().strip()
for line in readfile.split('\n'):
readlines_num.append(line.strip())
regex = re.compile("switch\(\$\d\).+?\}", re.DOTALL)
readfile = re.sub(regex, ' ', readfile)
for line in readfile.split('\n'):
readlines.append(line.strip())
for line in readlines:
case_search = re.search("case\s\".+?\"\:\s", line)
if case_search:
caselines_index.append(readlines.index(line))
#print caselines_index
caselines_index_iter = iter(caselines_index)
try:
int_line_index = int(next(caselines_index_iter))
except:
print "No cases found"
try:
int_next_index = int(next(caselines_index_iter))
except:
int_next_index = len(readlines) - 1
while True:
try:
case_text = ' '.join(readlines[int_line_index:int_next_index]).strip()
match1 = re.search("nvp_add", case_text)
match2 = re.search("details", case_text)
if match1 or match2:
case = [readlines[int_line_index].strip(), readlines_num.index(readlines[int_line_index]) + 1, case_text]
cases.append(case)
int_line_index = int_next_index
int_next_index = int(next(caselines_index_iter))
except StopIteration:
case_text = ' '.join(readlines[int_line_index:len(readlines) - 1]).strip()
case = [readlines[int_line_index].strip(), readlines_num.index(readlines[int_line_index]), case_text]
cases.append(case)
break
return cases
def work(cases):
MATCH = 1
for case_list in cases:
details = []
nvp_add = []
caseline = case_list[0].strip()
nvp = re.findall("details\(.+?\)", case_list[2].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
details.append(result)
nvp = re.findall("nvp_add\(.+?\)", case_list[2].strip())
for item in nvp:
result_list = re.findall("(\$.+?)[\,\)]", item)
for result in result_list:
if "$*" not in result:
nvp_add.append(result)
nvp_add_c = Counter(nvp_add)
details_c = Counter(details)
missing_from_details = list((nvp_add_c - details_c).elements())
missing_from_nvp_add = list((details_c - nvp_add_c).elements())
if missing_from_nvp_add or missing_from_details:
MATCH = 0
print caseline + " LINE - " + str(case_list[1] + 1)
for mismatch in missing_from_details:
print "Missing from details:"
print mismatch
for mismatch in missing_from_nvp_add:
print "Missing from nvp_add:"
print mismatch
print "\n"
if MATCH == 1:
print "MATCH"
else:
print "MISMATCHES"
def main():
in_file = "target1.txt"
cases = read(in_file)
work(cases)
if __name__=="__main__":
main()
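The decisive change here is the `Counter` (multiset) difference. A set collapses duplicates, so when one of two identical `$apsChanConfigNumber` entries is deleted from `nvp_add`, a set-based check never notices; counting occurrences does:

```python
from collections import Counter

details = ['$apsChanConfigGroupName', '$apsChanConfigNumber', '$apsChanConfigNumber']
nvp_add = ['$apsChanConfigGroupName', '$apsChanConfigNumber']  # one occurrence deleted

# set-based check from the earlier script misses the missing duplicate
missing_set = [o for o in details if o not in set(nvp_add)]

# Counter subtraction keeps multiplicity, so the lost duplicate shows up
missing_counter = list((Counter(details) - Counter(nvp_add)).elements())
```

`missing_set` comes back empty while `missing_counter` reports the deleted occurrence.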
|
Making a Queue for a function so it only runs once at a time in python
Question: I have a function that runs in multiple threads, all of which write to the same
log file. How can I make this function (maybe with a function decorator) add its
log-file writes to a queue? Small example:
#!/usr/bin/python
import thread
import time
# Define a function for the thread
def print_time( threadName, delay):
    count = 0
    while count < 5:
        time.sleep(delay)
        count += 1
        writeToLog(threadName, time.ctime(time.time()))
        print "%s: %s" % ( threadName, time.ctime(time.time()) )
# Create two threads as follows
try:
    thread.start_new_thread( print_time, ("Thread-1", 2, ) )
    thread.start_new_thread( print_time, ("Thread-2", 4, ) )
except:
    print "Error: unable to start thread"
def writeToLog(threadName, time):
    self.fileWriter = open("log.txt", "w")
    self.fileWriter.write("ThreadName: " + threadName + "\n")
    self.fileWriter.write("Time: " + time + "\n")
    self.fileWriter.close()
How can I make this writeToLog function add to a queue when executed? Right now,
every time both threads call the writeToLog function, I get an error because the
other thread's writeToLog call has already closed the file.
When I use a global variable for this writer, which is closed at the end, I
get output like this:
ThreadName: thread1
ThreadName: thread2
Time: 9:50AM
Time: 9:50AM
And the output I always want has to look like this:
ThreadName: Thread-1
Time: 9:50AM
ThreadName: Thread-2
Time: 9:50AM
Answer: Concurrent access to a shared resource is a well-known problem. Python threads
provide mechanisms to avoid these issues. Use Python locks:
<http://docs.python.org/2/library/threading.html#lock-objects> Locks are used
to synchronize access to a shared resource:
lock = Lock()
lock.acquire() # will block if lock is already held
... access shared resource
lock.release()
More information : <http://effbot.org/zone/thread-synchronization.htm>
Search for "Python synchronization"
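Applied to the question's `writeToLog`, a minimal sketch might look like this. Two things change besides the lock: the file is opened in append mode (`"a"`), since the question's mode `"w"` truncates the log on every call, and both lines of an entry are written while the lock is held so they stay together. `demo_log.txt` is a stand-in path:

```python
import threading

log_lock = threading.Lock()
LOG_PATH = "demo_log.txt"  # stand-in for the question's log.txt

def write_to_log(thread_name, time_str):
    # Only one thread at a time may open/write/close the file, so each
    # entry's ThreadName/Time pair always lands together.
    with log_lock:
        with open(LOG_PATH, "a") as f:  # "a": append instead of truncating
            f.write("ThreadName: " + thread_name + "\n")
            f.write("Time: " + time_str + "\n")

threads = [threading.Thread(target=write_to_log, args=("Thread-%d" % i, "9:50AM"))
           for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The `with log_lock:` form acquires and releases the lock automatically, even if the write raises an exception.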
|
Size of objects in memory during an IPython session (with Guppy?)
Question: I recall [reading](http://stackoverflow.com/questions/563840/how-can-i-check-
the-memory-usage-of-objects-in-ipython?rq=1) that it is hard to pin down the
exact memory usage of objects in Python. However, that thread is from 2009,
and since then I have read about various memory profilers in Python (see the
examples in [this thread](http://stackoverflow.com/questions/1331471/in-
memory-size-of-python-stucture)). Also, IPython has matured substantially in
recent months (version 1.0 was released a few days ago).
IPython already has a magic called `whos`, that prints the variable names,
their types and some basic Data/Info.
In a similar fashion, is there any way to get the size in memory of each of
the objects returned by `who` ? Any utilities available for this purpose
already in IPython?
# Using Guppy
[Guppy](http://guppy-pe.sourceforge.net/) (suggested in [this
thread](http://stackoverflow.com/questions/110259/which-python-memory-
profiler-is-recommended)) has a command that allows one to get the
**cumulative** memory usage **per object type**, but unfortunately:
1. It does not show memory usage **per object**
2. It prints the sizes in bytes (not in human readable format)
For the second one, it may be possible to apply `bytes2human` from [this
answer](http://stackoverflow.com/a/13449587/283296), but I would need to first
collect the output of `h.heap()` in a format that I can parse.
But for the first one (the most important one), is there any way to have Guppy
show memory usage **per object**?
In [6]: import guppy
In [7]: h = guppy.hpy()
In [8]: h.heap()
Out[8]:
Partition of a set of 2871824 objects. Total size = 359064216 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 522453 18 151469304 42 151469304 42 dict (no owner)
1 451503 16 36120240 10 187589544 52 numpy.ndarray
2 425700 15 34056000 9 221645544 62 sklearn.grid_search._CVScoreTuple
3 193439 7 26904688 7 248550232 69 unicode
4 191061 7 22696072 6 271246304 76 str
5 751128 26 18027072 5 289273376 81 numpy.float64
6 31160 1 12235584 3 301508960 84 list
7 106035 4 9441640 3 310950600 87 tuple
8 3300 0 7260000 2 318210600 89 dict of 0xb8670d0
9 1255 0 3788968 1 321999568 90 dict of module
<1716 more rows. Type e.g. '_.more' to view.>
Answer: Why not use something like:
h.heap().byid
But this will only show you immediate sizes (i.e. not the total size of a list
including the other lists it might refer to).
If you have a particular object you wish to get the size of you can use:
h.iso(object).domisize
To find the approximate amount of memory that would be freed if it were deleted.
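If Guppy proves awkward, a rough stdlib-only alternative is `sys.getsizeof`, which reports a per-object size in bytes. Note it is shallow: it does not follow references, so a list's figure excludes the objects it contains. A hypothetical helper over a namespace dict like the one `who` inspects:

```python
import sys

def object_sizes(namespace):
    """Shallow size in bytes of each public name in a namespace dict."""
    return {name: sys.getsizeof(obj)
            for name, obj in namespace.items()
            if not name.startswith('_')}

# demo against a made-up namespace
sizes = object_sizes({'xs': list(range(1000)), 'n': 42})
```

In IPython you could pass `get_ipython().user_ns` (or plain `globals()`) as the namespace; pairing the byte counts with a `bytes2human`-style formatter covers the human-readable part.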
|
Linking a SWIG wrapper with other libraries
Question: I have a C++ function that I want to call from Python. The function itself is
pretty simple, but it involves an IPC call that can only be done in C++. To
compile that C++ code requires linking a ton of other libraries in. I'm trying
to use SWIG for this. I have a Makefile that looks like this:
all: swig object shared
object: swig
${CC} -c ${MODULE}_wrap.cxx ${INCLUDES}
shared:
${CC} -Wl,--verbose -shared ${MODULE}_wrap.o -o _${MODULE}.so
swig:
${SWIG} -c++ -python ${MODULE}.i
With this, everything compiles fine, but then importing my module in Python
gives me "undefined symbol" errors. If I change the `shared` line to:
shared:
${CC} -Wl,--verbose -shared ${MODULE}_wrap.o -o _${MODULE}.so ${LIBS}
it fails to compile with `collect2: ld returned 1 exit status` but doesn't
tell me exactly what the error is. The verbose linker output has a ton of
lines that say "attempt to open [some lib] failed" but a handful of those
lines say "succeeded". Could it just be the ones that say failed are
preventing linking from happening? Is what I'm trying to do even possible with
SWIG modules?
Answer: Yes, it is possible. You need to make sure the shared libraries your wrapped
C/C++ module depends on are resolved when Python loads the module. Check the
library (Swig generated) with `ldd` to see what libs it depends on. It is best
to set up rpath (runtime path; something like `-Wl,-rpath=/path/to/your/libs`
when linking with gcc) to point to those libs' location(s) when you link your module.
Alternatively you can set `LD_LIBRARY_PATH` appropriately before running
Python.
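Folding that advice back into the question's Makefile, the `shared` rule might look something like the following sketch; the `-L` path and rpath value are placeholders for wherever the IPC libraries actually live, and `${LIBS}` is the question's own list of `-l` flags:

```make
shared:
	${CC} -Wl,--verbose -shared ${MODULE}_wrap.o -o _${MODULE}.so \
	      ${LIBS} -L/path/to/your/libs -Wl,-rpath=/path/to/your/libs
```

Afterwards, `ldd _${MODULE}.so` should show every dependency resolving to a real path rather than "not found".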
|
Add images dynamically based on random number
Question: I'm working on a simple mathematics learning program for my 4-year-old daughter
with the help of images. Based on a random number that we can call X, a for
loop will run X times and print an image X times. The
image will be selected from the list by another random number that we can call
Y.
If X is 2, image Y will be printed to the screen 2 times using the for loop.
The problem is I don't know how to do it. :P If anyone can help me with this I
would be grateful! I'm using Python 3.2 and Tkinter.
Here is example code for my image list.
self.imageListRegular = []
self.imageListRegular.append(ImageTk.PhotoImage(file="Bilder/Bird-icon.png"))
self.imageListRegular.append(ImageTk.PhotoImage(file="Bilder/elephant-icon.png"))
self.imageListRegular.append(ImageTk.PhotoImage(file="Bilder/fish-icon.png"))
self.imageListRegular.append(ImageTk.PhotoImage(file="Bilder/ladybird-icon.png"))
self.imageListRegular.append(ImageTk.PhotoImage(file="Bilder/zebra-icon.png"))
Sincerely, Robert
Answer: use the python `random` module
import random
image = random.choice(self.imageListRegular) #this is your 'Y' variable
times = random.randint(1, 4) #this is your 'X' variable
Then do your for loop, which I imagine looks something like this (I don't have
Tkinter, so I can't test the code. This is derived from a sample
[here](http://www.wadsworth.org/spider_doc/spider/docs/python/spipylib/tkinter.html)):
import Tkinter
root = Tkinter.Tk()
for i in xrange(times):
    Tkinter.Label(root, image=image).pack()
root.mainloop() # Start the GUI
`random.choice` returns a random element in a given sequence, so
`random.choice(["apples", "bananas", "oranges"])` would return either
"apples", "bananas", or "oranges"
`random.randint(low, high)` will return a random integer between low and high,
including low and high. So if you wanted to display the image between 1 and 4
times, `random.randint(1, 4)` would do the trick.
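Since the question targets Python 3.2, note two differences from the snippet above: the module is `tkinter` (lowercase) and `xrange` is just `range`. The selection logic itself can be checked without a GUI; here animal names stand in for the `PhotoImage` objects:

```python
import random

# stand-ins for self.imageListRegular
image_list = ["bird", "elephant", "fish", "ladybird", "zebra"]

image = random.choice(image_list)  # Y: which picture to show
times = random.randint(1, 4)       # X: how many copies, 1..4 inclusive
to_show = [image] * times          # one Label would be packed per entry
```

In the Tkinter version, the loop body would create `tkinter.Label(root, image=image).pack()` once per entry of `to_show`.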
|
python pyodbc - connecting to sql server 2008 on windows/python2.7 but not on centOS6.32/python2.6.6
Question: I have the following code:
import pyodbc
cnxn = pyodbc.connect("DRIVER={SQL Server};"
                      + "SERVER=something.example.com;"
                      + "DATABASE=something;")
cursor = cnxn.cursor()
name = ('Smith, Joe', )
cursor.execute('SELECT id FROM Users WHERE displayname=?', name)
rows = cursor.fetchall()
for row in rows:
    print row
The code executes as desired on windows/python2.7. However, when I try to run
it on linux, I get the following error:
Traceback (most recent call last):
File "/something/script.py", line 125, in <module>
main()
File "/something/script.py", line 77, in main
+"DATABASE=something;")
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnectW)')
The traceback seems to indicate that the `DRIVER` entry is missing, which
isn't the case. Is this a version difference? What is the issue with pyodbc?
EDIT: contents of /etc/odbcinst.ini:
# Example driver definitions
# Driver from the postgresql-odbc package
# Setup from the unixODBC package
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbc.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
# Driver from the mysql-connector-odbc package
# Setup from the unixODBC package
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/libmyodbc5.so
Setup = /usr/lib/libodbcmyS.so
Driver64 = /usr/lib64/libmyodbc5.so
Setup64 = /usr/lib64/libodbcmyS.so
FileUsage = 1
Answer: You don't have an ODBC driver for SQL Server configured; you need to install
and configure one. The drivers section on
[unixodbc](http://www.unixodbc.org/drivers.html)'s webpage suggests
[freetds](http://www.freetds.org/), alternatively you could also try
microsoft's own [implementation](http://technet.microsoft.com/en-
us/library/hh568451.aspx).
FreeTDS should be installable using `yum`.
The basic configuration then should look something like this:
`/etc/odbcinst.ini`:
[SQL Server]
Description = FreeTDS driver for SQL Server
Driver = /usr/lib/libtdsodbc.so
Driver64 = /usr/lib64/libtdsodbc.so
Now you should already be able to connect. For more detailed information on
configuration, look [here](http://www.freetds.org/userguide/).
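The `DRIVER={...}` value in the connection string must match an `odbcinst.ini` section name exactly, so with a `[SQL Server]` section registered the question's connection string keeps working unchanged. FreeTDS typically also wants an explicit `PORT`, and since it cannot use Windows integrated authentication, explicit credentials (hypothetical here) are usually needed:

```python
# assembled connection string; UID/PWD and the port are illustrative
conn_str = (
    "DRIVER={SQL Server};"       # must match the odbcinst.ini section name
    "SERVER=something.example.com;"
    "PORT=1433;"                 # default SQL Server port
    "DATABASE=something;"
    "UID=someuser;PWD=secret;"   # hypothetical SQL-auth credentials
)
# cnxn = pyodbc.connect(conn_str)  # uncomment with a reachable server
```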
Edit:
Alternatively, there are also other ways to connect to SQL Server from
Python, like [python-tds](https://pypi.python.org/pypi/python-tds),
[pymssql](https://pypi.python.org/pypi/pymssql) and
[more...](http://wiki.python.org/moin/SQL%20Server)
|
Python update json file
Question: I have a Python utility I wrote that does some WMI monitoring; the data is
written in the following format:
example1
CPU = [{'TS':'2013:12:03:30','CPUVALUES':['0','1','15']}]
Now I occasionally need to update the data, which will eventually look like the following:
CPU = [
{'TS':'2013:12:03:30','CPUVALUES':['0','1','15']},
{'TS':'2013:14:00:30','CPUVALUES':['0','75','15']}
]
Any suggestions on how to accomplish that?
Please advise.
Thanks
Answer: You can either read, parse and modify the file every time you need to add new
data to it and that would look like:
import json
def append(filename, obj):
    with open(filename, 'rb') as f:
        data = json.load(f, encoding='utf-8')
    data.append(obj)
    with open(filename, 'wb') as f:
        json.dump(data, f, encoding='utf-8')
But that could be very slow, especially if you have a large file, since you'll
have to read the whole file into memory every time, deserialize it, append,
serialize it again, and write it down...
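A self-contained round trip of that read-modify-write approach, using a plain JSON array file in a temp directory; note that the `CPU = ` prefix shown in the question is not valid JSON, so it would have to be stripped before `json.load` and re-added on write:

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cpu.json")

# seed the file with one sample entry
with open(path, "w") as f:
    json.dump([{"TS": "2013:12:03:30", "CPUVALUES": ["0", "1", "15"]}], f)

def append_entry(filename, obj):
    # read the whole array, append, write it back out
    with open(filename) as f:
        data = json.load(f)
    data.append(obj)
    with open(filename, "w") as f:
        json.dump(data, f, indent=4)

append_entry(path, {"TS": "2013:14:00:30", "CPUVALUES": ["0", "75", "15"]})
```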
If you need the extra speed, you could do a little hackery by just appending the
new data to the file:
import io
import json
def append(filename, data):
    with open(filename, 'r+b') as f:
        f.seek(-2, 2)
        f.write(b',\n')
        f.write(b' ' + json.dumps(data).encode('utf-8'))
        f.write(b'\n]')
This code will open the file, move before the last `\n]`, append `,\n`, dump
the new data and add the final `\n]`. You just have to be careful not to have
a newline at the end of the file, because that would mess up things. But if
you need to have a newline at the end, then you'll just move to `-3` and at
the last write append `b'\n]\n'`.
**Note:** This code assumes that you use UNIX line endings, for Windows line
endings you would have to change the moves and the `\n`.
Example IPython session:
In [29]: %%file test.json
CPU = [
{"TS": "2013:12:03:30", "CPUVALUES": ["0", "1", "15"]},
{"TS": "2013:14:00:30", "CPUVALUES": ["0", "75", "15"]}
]
In [30]: !cat test.json
CPU = [
{"TS": "2013:12:03:30", "CPUVALUES": ["0", "1", "15"]},
{"TS": "2013:14:00:30", "CPUVALUES": ["0", "75", "15"]}
]
In [31]: append('test.json', {'TS':'2013:14:00:30','CPUVALUES':['0','80','15']})
In [32]: !cat test.json
CPU = [
{"TS": "2013:12:03:30", "CPUVALUES": ["0", "1", "15"]},
{"TS": "2013:14:00:30", "CPUVALUES": ["0", "75", "15"]},
{"TS": "2013:14:00:30", "CPUVALUES": ["0", "80", "15"]}
]
|
Can't call python script with "python" command
Question: I normally program in Java, but started learning Python for a course I'm
taking.
I couldn't really start the first exercise because the command
python count_freqs.py gene.train > gene.counts
didn't work, I keep getting "`incorrect syntax`" messages. I tried solving
this looking at dozens of forums but nothing works, and I'm going crazy.
import count_freqs
ran without errors, but I can't do anything with it. When I try running
something involving the file `gene.train` I get "`gene is not defined`".
Can anyone tell me what I'm doing wrong? Thanks.
Answer: Type `which python` at the command prompt to see whether the Python executable is
on your path. If it isn't, either Python isn't installed or you need to amend your path
to include it.
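One other likely cause, given the symptoms: typing that command at the Python `>>>` prompt rather than at the operating-system shell produces exactly an "invalid syntax" error, and the later "`gene is not defined`" message is what the REPL says when you type `gene.train` as an expression. The command belongs at the shell prompt. The demo below fakes the course files with throwaway ones, since `count_freqs.py` and `gene.train` ship with the course:

```shell
# stand-ins for the course's count_freqs.py and gene.train
printf 'import sys\nfor line in open(sys.argv[1]): print(line.strip())\n' > demo_freqs.py
printf 'gene A\ngene B\n' > demo.train

# run from the OS shell, NOT from inside the Python interpreter
# (use "python" instead of "python3" if that is your interpreter's name)
python3 demo_freqs.py demo.train > demo.counts
cat demo.counts
```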
|
Multiprocessing with python3 only runs once
Question: I have a problem running multiple processes in Python 3.
My program does the following:
1. Takes entries from an SQLite database and passes them to an input_queue.
2. Creates multiple processes that take items off the input_queue, run them through a function and output the result to the output queue.
3. Creates a thread that takes items off the output_queue and prints them. (This thread is obviously started before the first 2 steps.)
My problem is that currently the 'function' in step 2 is only run as many
times as the number of processes set, so for example if you set the number of
processes to 8, it only runs 8 times then stops. I assumed it would keep
running until it took all items off the input_queue.
Do I need to rewrite the function that takes the entries out of the database
(step 1) into another process and then pass its output queue as an input queue
for step 2?
Edit: Here is an example of the code; I used a list of numbers as a substitute
for the database entries, as it still performs the same way. I have 300 items
in the list and I would like it to process all 300 items, but at the moment it
just processes 10 (the number of processes I have assigned).
#!/usr/bin/python3
from multiprocessing import Process,Queue
import multiprocessing
from threading import Thread
## This is the class that would be passed to the multi_processing function
class Processor:
    def __init__(self,out_queue):
        self.out_queue = out_queue
    def __call__(self,in_queue):
        data_entry = in_queue.get()
        result = data_entry*2
        self.out_queue.put(result)
#Performs the multiprocessing
def perform_distributed_processing(dbList,threads,processor_factory,output_queue):
    input_queue = Queue()
    # Create the Data processors.
    for i in range(threads):
        processor = processor_factory(output_queue)
        data_proc = Process(target = processor,
                            args = (input_queue,))
        data_proc.start()
    # Push entries to the queue.
    for entry in dbList:
        input_queue.put(entry)
    # Push stop markers to the queue, one for each thread.
    for i in range(threads):
        input_queue.put(None)
    data_proc.join()
    output_queue.put(None)
if __name__ == '__main__':
    output_results = Queue()
    def output_results_reader(queue):
        while True:
            item = queue.get()
            if item is None:
                break
            print(item)
    # Establish results collecting thread.
    results_process = Thread(target = output_results_reader,args = (output_results,))
    results_process.start()
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    # Perform multi processing
    perform_distributed_processing(dbList,10,Processor,output_results)
    # Wait for it all to finish.
    results_process.join()
Answer: Don't try to rewrite the whole multiprocessing library again. I think you can
use any of the `multiprocessing.Pool` methods, depending on your needs; if this is
a batch job you can even use the synchronous `multiprocessing.Pool.map()`.
Instead of pushing to an input queue, you just write a generator (or pass a
sequence) that yields the inputs to the workers.
|
'datetime.datetime' object has no attribute 'microseconds'
Question: I am writing a script in _Python_ and I need to know how many milliseconds
elapse between two points in my code.
I set a global variable when the program starts, like this:
from datetime import datetime
a=datetime.now()
When I need to know how many milliseconds have passed, I execute this:
b=datetime.now()
print (b.microseconds-a.microseconds)*1000
However I get this error:
AttributeError: 'datetime.datetime' object has no attribute 'microseconds'
What's wrong? How can I fix this?
Answer: It is `microsecond`, without an "s" at the end
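Note that even with the attribute name fixed, `b.microsecond - a.microsecond` does not measure elapsed time: the `microsecond` field wraps back to 0 every second (and the difference can go negative across a second boundary), and microseconds convert to milliseconds by dividing by 1000, not multiplying. Subtracting the two datetimes gives a `timedelta`, which handles all of this:

```python
from datetime import datetime

a = datetime.now()
# ... code being timed ...
b = datetime.now()

delta = b - a                                # a timedelta object
elapsed_ms = delta.total_seconds() * 1000.0  # full elapsed span, in ms
```

`timedelta.total_seconds()` is available from Python 2.7 / 3.2 onward.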
|
UnknownJavaServerError when trying to upload data to the Google app engine data store
Question: I am trying to follow the Google app engine
[tutorial](https://cloud.google.com/resources/articles/how-to-build-mobile-
app-with-app-engine-backend-tutorial#tcbc)
This code runs on my local development server. When I execute:
appcfg.py upload_data --config_file bulkloader.yaml --url=http://localhost:8888/remote_api --filename places.csv --kind=Place -e [email protected]
I get an UnknownJavaServerError. Any ideas why this is happening? [My OS is
Windows, Python version is 2.7]
This is the full output I get:
C:\EclipseWorkspace\Android\MobileAssistant2-AppEngine\src>appcfg.py upload_data --config_file=bulkloader.yaml --filename=places.csv --kind=Place --url=http://localhost:8888/remote_api -e [email protected]
08:46 PM Uploading data records.
[INFO ] Logging to bulkloader-log-20130821.204602
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
Password for [email protected]:
[INFO ] Opening database: bulkloader-progress-20130821.204602.sql3
[INFO ] Connecting to localhost:8888/remote_api
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 171, in <module>
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 167, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4282, in <module>
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4273, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2409, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4003, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3815, in PerformUpload
run_fn(args)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3706, in RunBulkloader
sys.exit(bulkloader.Run(arg_dict))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 4395, in Run
return _PerformBulkload(arg_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 4260, in _PerformBulkload
loader.finalize()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\bulkload\bulkloader_config.py", line 382, in finalize
self.reserve_keys(self.keys_to_reserve)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1228, in ReserveKeys
datastore._GetConnection()._reserve_keys(ConvertKeys(keys))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1880, in _reserve_keys
self._async_reserve_keys(None, keys).get_result()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 838, in get_result
results = self.__rpcs[0].get_result()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 612, in get_result
return self.__get_result_hook(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1921, in __reserve_keys_hook
self.check_rpc_success(rpc)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1234, in check_rpc_success
rpc.check_success()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 578, in check_success
self.__rpc.CheckSuccess()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 236, in _MakeRealSyncCall
raise UnknownJavaServerError("An unknown error has occured in the "
google.appengine.ext.remote_api.remote_api_stub.UnknownJavaServerError: An unknown error has occured in the Java remote_api handler for this call.
My source files are given below-
* * *
web.xml file:
<?xml version="1.0" encoding="utf-8" standalone="no"?><web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.5" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<servlet>
<servlet-name>SystemServiceServlet</servlet-name>
<servlet-class>com.google.api.server.spi.SystemServiceServlet</servlet-class>
<init-param>
<param-name>services</param-name>
<param-value>com.google.samplesolutions.mobileassistant.CheckInEndpoint,com.google.samplesolutions.mobileassistant.DeviceInfoEndpoint,com.google.samplesolutions.mobileassistant.MessageEndpoint,com.google.samplesolutions.mobileassistant.PlaceEndpoint</param-value>
</init-param>
</servlet>
<servlet-mapping>
<servlet-name>SystemServiceServlet</servlet-name>
<url-pattern>/_ah/spi/*</url-pattern>
</servlet-mapping>
<servlet>
<display-name>Remote API Servlet</display-name>
<servlet-name>RemoteApiServlet</servlet-name>
<servlet-class>com.google.apphosting.utils.remoteapi.RemoteApiServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>RemoteApiServlet</servlet-name>
<url-pattern>/remote_api</url-pattern>
</servlet-mapping>
</web-app>
bulkloader.yaml file:
#!/usr/bin/python
#
# Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
python_preamble:
- import: base64
- import: re
- import: google.appengine.ext.bulkload.transform
- import: google.appengine.ext.bulkload.bulkloader_wizard
- import: google.appengine.ext.db
- import: google.appengine.api.datastore
- import: google.appengine.api.users
transformers:
- kind: Offer
connector: csv
connector_options:
property_map:
- property: __key__
external_name: key
export_transform: transform.key_id_or_name_as_string
- property: description
external_name: description
# Type: String Stats: 2 properties of this type in this kind.
- property: title
external_name: title
# Type: String Stats: 2 properties of this type in this kind.
- property: imageUrl
external_name: imageUrl
- kind: Place
connector: csv
connector_options:
property_map:
- property: __key__
external_name: key
export_transform: transform.key_id_or_name_as_string
- property: address
external_name: address
# Type: String Stats: 6 properties of this type in this kind.
- property: location
external_name: location
# Type: GeoPt Stats: 6 properties of this type in this kind.
import_transform: google.appengine.api.datastore_types.GeoPt
- property: name
external_name: name
# Type: String Stats: 6 properties of this type in this kind.
- property: placeId
external_name: placeId
# Type: String Stats: 6 properties of this type in this kind.
- kind: Recommendation
connector: csv
connector_options:
property_map:
- property: __key__
external_name: key
export_transform: transform.key_id_or_name_as_string
- property: description
external_name: description
# Type: String Stats: 4 properties of this type in this kind.
- property: title
external_name: title
# Type: String Stats: 4 properties of this type in this kind.
- property: imageUrl
external_name: imageUrl
- property: expiration
external_name: expiration
import_transform: transform.import_date_time('%m/%d/%Y')
places.csv file:
name,placeId,location,key,address
A store at City1 Shopping Center,store101,"47,-122",1,"Some address of the store in City 1"
A big store at Some Mall,store102,"47,-122",2,"Some address of the store in City 2"
* * *
Thanks!
Answer: There is a
[bug](https://code.google.com/p/googleappengine/issues/detail?id=9666) in
Google_appengine 1.8.2 and 1.8.3. Downgrade to version 1.8.1 to work around the
bug. Verified on Windows 8 x64 with Python 2.7.5 x64.
|
More efficient for loops in Python (single line?)
Question: I put together this code which generates a string of 11 random printable ascii
characters:
import random
foo=[]
for n in range(11):
foo.append(chr(random.randint(32,126)))
print "".join(foo)
It works fine, but I can't help feel that there might be a more efficient way
than calling "append" 11 times. Any tips in making it more Pythonic?
Answer: Use a list comprehension:
foo = [chr(random.randint(32,126)) for _ in xrange(11)]
You can combine that with `str.join()`:
print ''.join([chr(random.randint(32,126)) for _ in xrange(11)])
I've used `xrange()` here since you don't need the list produced by `range()`;
only the sequence.
Quick demo:
>>> import random
>>> ''.join([chr(random.randint(32,126)) for _ in xrange(11)])
'D}H]qxfD6&,'
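Another common variant (a sketch, equivalent in effect) precomputes the printable-ASCII alphabet once and uses `random.choice`:

```python
import random

# chr(32)..chr(126) covers the same printable-ASCII range as randint(32, 126).
alphabet = ''.join(chr(c) for c in range(32, 127))
password = ''.join(random.choice(alphabet) for _ in range(11))
print(password)
```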
|
Trying to understand this simple python code
Question: I was reading Jeff Knupp's blog and I came across this easy little script:
import math
def is_prime(n):
if n > 1:
if n == 2:
return True
if n % 2 == 0:
return False
for current in range(3, int(math.sqrt(n) + 1), 2):
if n % current == 0:
return False
return True
return False
print(is_prime(17))
(note: I added the import math at the beginning. You can see the original
here: <http://www.jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-
generators-explained/>)
This is all pretty straightforward and I get the majority of it, but I'm not
sure what's going on with his use of the range function. I haven't ever used
it this way or seen anyone else use it this way, but then I'm a beginner. What
does it mean for the range function to have three parameters, and how does
this accomplish testing for primeness?
Also (and apologies if this is a stupid question), but the very last 'return
False' statement. That is there so that if a number is passed to the function
that is less than one (and thus not able to be prime), the function won't even
waste its time evaluating that number, right?
Answer: [The third argument is the step.](http://docs.python.org/3/library/functions.html#func-
range) With a start of 3 and a step of 2, the loop iterates through every odd
number less than or equal to the square root of the input (3, 5, 7, etc.),
which is enough to find any odd divisor. And yes, the final `return False`
handles any input of 1 or less, which can never be prime.
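A quick illustration of the three-argument form (the second line uses a made-up input of 101):

```python
import math

# The three-argument form is range(start, stop, step).
print(list(range(3, 12, 2)))                      # [3, 5, 7, 9, 11]

# The candidates actually trial-divided for n = 101:
n = 101
print(list(range(3, int(math.sqrt(n) + 1), 2)))   # [3, 5, 7, 9]
```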
|
Python, pdb, adding breakpoint which only breaks once
Question: I sometimes set breakpoints in deep loop code as follows:
import pdb; pdb.set_trace()
If I press `c` then it continues, but breaks again on the next iteration of
the loop. Is there a way of clearing this breakpoint from within pdb? The `b`
command doesn't list it.
Or is there a one liner I can insert into my Python source file that will set
a 'soft' breakpoint that can be cleared?
Or ideally a one liner that sets the trace, then clears itself?
* * *
Edit: I'd be interested in any editor that lets you set breakpoints.
I currently run my script from emacs as follows:
M-x pdb
Run ~/.virtualenvs/.../python2.7/pdb.py (like this):
~/.virtualenvs/.../python2.7/pdb.py ~/start-myserver.py
Answer: Instead of setting the breakpoint using `set_trace`, you could set up and run
the debugger manually. `pdb.Pdb.set_break()` takes a `temporary` argument
which causes the breakpoint to be cleared the first time it is hit.
import pdb
def traced_function():
for i in range(4):
print(i) # line 5
if __name__ == '__main__':
import pdb
p = pdb.Pdb()
# set break on the print line (5):
p.set_break(__file__, 5, temporary=True)
p.run('traced_function()')
example output:
$ python pdb_example.py
> <string>(1)<module>()
(Pdb) c
Deleted breakpoint 1 at /tmp/pdb_example.py:5
> /tmp/test.py(5)traced_function()
-> print(i) # line 5
(Pdb) c
0
1
2
3
The same could be achieved by running the program under `pdb` from the command
line, but setting it up like this allows you to preserve the breakpoints
between invocations and not lose them when exiting the debugger session.
|
OpenCV crash on OS X when reading USB cam in separate process
Question: I'm running OpenCV 2.4.5 via the cv2 python bindings, using OS X (10.8.4). I'm
trying to capture images from a USB webcam in a separate process via the
multiprocessing module. Everything seems to work if I use my laptop's (2011
macbook air) internal webcam, but when I attempt to read from a usb webcam
(Logitech C920), I get a crash (no crash when I use the USB cam without the
multiprocessing encapsulation). The crash log is
[here](https://gist.github.com/mike-lawrence/6306597). Code I'm using that
will reliably reproduce the crash is below. Getting this working is pretty
mission-critical for me, so any help would be greatly appreciated!
import multiprocessing
import cv2 #doesn't matter if you import here or in cam()
def cam():
vc = cv2.VideoCapture(0) #modify 0/1 to toggle between USB and internal camera
while True:
junk,image = vc.read()
camProcess = multiprocessing.Process( target=cam )
camProcess.start()
while True:
pass
Answer: Your problem stems from the way Python spawns its subprocesses using `os.fork`.
The OpenCV video backend on Mac uses QTKit, which uses CoreFoundation; these
parts of OS X are not safe to run in a forked subprocess: sometimes they just
complain, sometimes they crash.
You need to create the subprocess without using `os.fork`. This can be
achieved on Python 2.7.
You need to use billiard
(<https://github.com/celery/billiard/tree/master/billiard>). It serves as a
replacement for Python's multiprocessing and has some very useful improvements.
from billiard import Process, forking_enable
import cv2 #does matter where this happens when you don't use fork
def cam():
vc = cv2.VideoCapture(0) #modify 0/1 to toggle between USB and internal camera
while True:
junk,image = vc.read()
forking_enable(0) # Is all you need!
camProcess = Process( target=cam )
camProcess.start()
while True:
pass
Alright, let's add a more complete example:
from billiard import Process, forking_enable
def cam(cam_id):
import cv2 #doesn't matter if you import here or in cam()
vc = cv2.VideoCapture(cam_id) #modify 0/1 to toggle between USB and internal camera
while True:
junk,image = vc.read()
cv2.imshow("test",image)
k = cv2.waitKey(33)
if k==27: # Esc key to stop
break
def start():
forking_enable(0) # Is all you need!
camProcess = Process(target=cam, args=(0,))
camProcess.start()
if __name__ == '__main__':
start()
cam(1)
You need two cameras attached for this: it should open a window and run each
camera in a separate process (one on the main process, one in a spawned one). I
use this strategy to stream images from multiple cameras at once, each in its
own Python process.
|
How to remove blank lines in text file python?
Question: In my python script, I write specific columns from a text_file to a
new_text_file separated by `,` because the new_text_file will later become a
csv_file. There are white space lines left over in the new_text_file because
of lines I skipped writing over that need to be removed from the file.
I can't use `.strip()` or `.rstrip()` because I get the error:
`AttributeError: '_io.TextIOWrapper' object has no attribute 'strip'`.
I can't use `ip_file.write("".join(line for line in ip_file if not
line.isspace()))` because I get the error: `UnsupportedOperation: not
readable`.
I also tried importing `sys` and `re`, and have tried every other answer found
on this site, but it still returns errors.
My code is:
for ip in open("list.txt"):
with open(ip.strip()+".txt", "a") as ip_file:
for line in open("data.txt"):
new_line = line.split(" ")
if "blocked" in new_line:
if "src="+ip.strip() in new_line:
#write columns to new text file
ip_file.write(", " + new_line[11])
ip_file.write(", " + new_line[12])
try:
ip_file.write(", " + new_line[14] + "\n")
except IndexError:
pass
The resulting ip_file looks like:
, dst=00.000.00.000, proto=TCP, dpt=80
, dst=00.000.00.000, proto=TCP, dpt=80
, dst=00.000.00.000, proto=TCP, dpt=80
, dst=00.000.00.000, proto=TCP, dpt=80
, dst=00.000.00.000, proto=TCP, dpt=80
I was coding under the last line of the above script, within the loops. The
`new_text_file` is `ip_file` in my script and everything must be in Python.
**Question:** Is there another way to remove the blank lines in `ip_file`? OR
prevent them from ever being written?
Answer: I think I understand what you're saying. Try making these changes:
for line in open("data.txt"):
new_line = line.rstrip().split()
^^^^^^^^^
if "blocked" in new_line:
if "src="+ip.strip() in new_line:
#write columns to new text file
ip_file.write(", " + new_line[11])
ip_file.write(", " + new_line[12])
try:
ip_file.write(", " + new_line[14])
# ^^^^
except IndexError:
pass
ip_file.write("\n")
#
It seems that the problem was that when `new_line[14]` existed, it already
contained a newline, so you were appending two newlines. The above code
rstrips any newline off the line before you split it, then unconditionally
appends a single newline at the end of the inner for loop.
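A minimal illustration of why the stray newline showed up, using a made-up log line:

```python
line = "src=1.2.3.4 dst=5.6.7.8 proto=TCP\n"

# split(" ") leaves the trailing newline attached to the last field...
print(line.split(" ")[-1])        # 'proto=TCP\n'

# ...while rstripping first yields a clean field; split() with no
# argument also collapses runs of whitespace.
print(line.rstrip().split()[-1])  # 'proto=TCP'
```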
|
Sort dict by sub-value in Python
Question: I spent some time reading on SOF and am having issues solving this problem. I
cannot seem to find how to get the following data structure sorted by the sub-
value:
data = {}
data[1] = {name: "Bob", ...}
data[2] = {name: "Carl", ...}
data[3] = {name: "Alice", ...}
I need to get this data into some form of a list/tuple/order dict structure
which is alphabetized so that the final result is something like this:
finalData = [{name: "Alice", ...}, {name: "Bob", ...}, {name: "Carl", ...}]
Thanks.
Answer: Do you mean something like
sorted(data.values(), key=itemgetter(name))
* * *
>>> from operator import itemgetter
>>> data = {}
>>> name = 'name'
>>>
>>> data[1] = {name: "Bob"}
>>> data[2] = {name: "Carl"}
>>> data[3] = {name: "Alice"}
>>>
>>> sorted(data.values(), key=itemgetter(name))
[{'name': 'Alice'}, {'name': 'Bob'}, {'name': 'Carl'}]
|
Python - Traceback, how to show filename of imported
Question: I've got the following:
try:
package_info = __import__('app') #app.py
except:
print traceback.extract_tb(sys.exc_info()[-1])
print traceback.tb_lineno(sys.exc_info()[-1])
And what i get from this is:
[('test.py', 18, '<module>', 'package_info = __import__(\'app\')')]
18
Now this is almost what i want, this is where the actual error begins but i
need to follow this through and get the actual infection, that is `app.py`
containing an `ä` on row 17 not 18 for instance.
Here's my actual error message if untreated:
> Non-ASCII character '\xc3' in file C:\app.py on line 17, but no encoding
> declared; see <http://www.python.org/peps/pep-0263.html> for details",
> ('C:\app.py', 17, 0, None)), )
I've found some examples but all of them show the point of impact and not the
actual cause to the problem, how to go about this (pref Python2 and Python3
cross-support but Python2 is more important in this scenario) to get the
filename, row and cause of the problem in a similar manner to the tuple above?
Answer: Catch the specific exception and see what information it has. The message is
formatted from the exception object's parameters, so it's a good bet that the
information is there. In this case, `SyntaxError` includes a `filename` attribute.
try:
package_info = __import__('app') #app.py
except SyntaxError, e:
print traceback.extract_tb(sys.exc_info()[-1])
print traceback.tb_lineno(sys.exc_info()[-1])
print e.filename
|
Regex to match only letters between two words
Question: Say that I've these two strings:
Ultramagnetic MC's
and
Ultramagnetic MC’s <-- the apostrophe is a different char
in Python, but generally speaking, how do I write a regex to match the first
string letters against the second one?
I mean I'd like to match only letters between two strings and ignore special
characters, so I'd be able to match `Ultramagnetic MCs` in a string like this:
"Ultramagnetic Mc!s"
Answer: I guess you're looking for something like this:
import re
def equal_letters(x, y):
return re.sub(r'\W+', '', x) == re.sub(r'\W+', '', y)
>>> equal_letters("Ultramagnetic MC's", "Ultramagnetic MC’s")
True
>>> equal_letters("Ultramagnetic MC's", "Ultramagnetic Foo")
False
|
Splitting or stripping a variable number of characters from a line of text in Python?
Question: I have a large amount of data of this type:
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "[email protected]"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
I wrote a for loop in python which reads through the file, creating a
dictionary for each array and storing elements like thus:
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
I know my code is terrible, but I am fairly new to Python. As you can see, the
bulk of my project is complete. What's left is to strip away the code tags
from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to
become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is
always different.. I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7.
Answer: **BAD SOLUTION Based on current question**
but to answer your question just use
info_string = lines[i + 1]
value_str = info_string.split(" ",1)[-1].strip(" \"")
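For example, applied to one of the `string(...)` lines from the question:

```python
# Split off the "string(NN)" prefix, then strip quotes and spaces.
info_string = 'string(15) "Ivor Abeysekera"'
value_str = info_string.split(" ", 1)[-1].strip(' "')
print(value_str)  # Ivor Abeysekera
```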
**BETTER SOLUTION**
Do you have access to the PHP generating that? If you do, just do `echo
json_encode($data);` instead of using `var_dump`.
If instead you have it output JSON, the output will look like

    {"variable":"value","variable2":"value2"}
you can then read it in like
import json
import requests  # third-party library, needed for the fetch below
json_str = requests.get("http://url.com/json_dump").text # or however you get the original text
data = json.loads(json_str)
print data
|
ipython pandas plot does not show
Question: I am using the anaconda distribution of ipython/Qt console. I want to plot
things inline so I type the following from the ipython console:
%pylab inline
Next I type the tutorial at (<http://pandas.pydata.org/pandas-
docs/dev/visualization.html>) into ipython...
import matplotlib.pyplot as plt
import pandas as pd
ts = pd.Series(randn(1000), index = pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
... and this is all that i get back:
<matplotlib.axes.AxesSubplot at 0x109253410>
But there is no plot. What could be wrong? Is there another command that I
need to supply? The tutorial suggests that that is all that I need to type.
Answer: Plots are not displayed until you run

    plt.show()
|
PLS-DA algorithm in python
Question: Partial Least Squares (PLS) algorithm is implemented in the scikit-learn
library, as documented here: <http://scikit-
learn.org/0.12/auto_examples/plot_pls.html> In the case where y is a binary
vector, a variant of this algorithm is being used, the Partial least squares
Discriminant Analysis (PLS-DA) algorithm. Does the PLSRegression module in
sklearn.pls implements also this binary case? If not, where can I find a
python implementation for it? In my binary case, I'm trying to use the
PLSRegression:
pls = PLSRegression(n_components=10)
pls.fit(x, y)
x_r, y_r = pls.transform(x, y, copy=True)
In the transform function, the code gets exception in this line:
y_scores = np.dot(Yc, self.y_rotations_)
The error message is "ValueError: matrices are not aligned". Yc is the
normalized y vector, and self.y_rotations_ = [1.]. In the fit function,
self.y_rotations_ = np.ones(1) if the original y is a univariate vector
(y.shape[1](http://scikit-learn.org/0.12/auto_examples/plot_pls.html)=1).
Answer: PLS-DA is really a "trick" to use PLS for categorical outcomes instead of the
usual continuous vector/matrix. The trick consists of creating a dummy
identity matrix of zeros/ones which represents membership to each of the
categories. So if you have a binary outcome to be predicted (i.e. male/female
, yes/no, etc) your dummy matrix will have TWO columns representing the
membership to either category.
For example, consider the outcome gender for four people: 2 males and 2
females. The dummy matrix should be coded as :
import numpy as np
dummy=np.array([[1,1,0,0],[0,0,1,1]]).T
, where each column represents the membership to the two categories (male,
female)
Then your model for data in variable Xdata ( shape 4 rows,arbitrary columns )
would be:
myplsda=PLSRegression().fit(X=Xdata,Y=dummy)
The predicted categories can be extracted from comparison of the two indicator
variables in mypred:
mypred= myplsda.predict(Xdata)
For each row/case the predicted gender is that with the highest predicted
membership.
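That last step can be sketched as a row-wise argmax over the dummy columns (the prediction values below are made up, not real model output):

```python
import numpy as np

# Hypothetical predicted memberships for two cases, one column per category.
mypred = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
labels = np.array(["male", "female"])

# For each row, pick the category with the highest predicted membership.
predicted = labels[mypred.argmax(axis=1)]
print(predicted)  # ['male' 'female']
```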
|
Executing shell command from python
Question: I am trying to compile a set of lines and execute them and append the output
to text file. Instead of writing the same thing, I used a python script to
compile and execute in background.
import subprocess
subprocess.call(["ifort","-openmp","mod1.f90","mod2.f90","pgm.f90","-o","op.o"])
subprocess.call(["nohup","./op.o",">","myout.txt","&"])
The program `pgm.f90` is getting compiled using the ifort compiler, but the
output is not getting appended to `myout.txt`. Instead it is appending output
to `nohup.out` and the program is **not** running in the background even after
specifying `"&"` in the python script.
What obvious error have I made here?
Thanks in advance
Answer: You can call a subprocess as if you were in the shell by using `Popen()` with
the argument `shell=True`:
subprocess.Popen("nohup ./op.o > myout.txt &", shell=True)
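Note that `>` and `&` are shell features, which is why passing them as plain list arguments did nothing. An alternative sketch that skips the shell entirely and hands the child a real file for stdout (shown here with `echo` standing in for `./op.o`):

```python
import subprocess

# Open the log file ourselves and let the child write straight into it;
# no shell redirection needed. Mode "a" appends, like ">>" would.
with open("myout.txt", "a") as out:
    p = subprocess.Popen(["echo", "hello"], stdout=out)
    p.wait()
```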
|
finding and Importing xml file into python
Question: I am attempting to import an xml file into python with minidom.
>>> from xml.dom import minidom
>>> import os
>>> xmldoc = minidom.parse('c/Users/WFD/Documents/VXWorks/XML_Parasing_Python')
and python cannot find this file even though I have specifically spelled out
where it is. I am new to python so if there is a glaring error I apologize but
can anyone explain how to import an xml file in mydocments? Here is the error:
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
xmldoc = minidom.parse('c/Users/WFD/Documents/VXWorks/XML_Parasing_Python')
File "C:\Python27\lib\xml\dom\minidom.py", line 1920, in parse
return expatbuilder.parse(file)
File "C:\Python27\lib\xml\dom\expatbuilder.py", line 922, in parse
fp = open(file, 'rb')
IOError: [Errno 2] No such file or directory: 'c/Users/WFD/Documents/VXWorks/XML_Parasing_Python'
Thanks!
Answer: It looks as if the path you are giving is not correct: for Windows, the proper
way to format an absolute file path is:
blah("C:/Users/WFD/Documents/VXWorks/XML_Parasing_Python")
Also, `XML_Parasing_Python` may be misspelled; it probably should be
`XML_Parsing_Python`.
You can use the [`os.path`
module](http://docs.python.org/2/library/os.path.html#os.path.expanduser) for
more cross-platform scripts.
|
eclipse,python, NameError: name <MyModule> is not defined
Question: I create the following package in eclipse via `PyDev`:
class Repository(object):
'''
classdocs
'''
def __init__(self):
'''
Constructor
'''
print("salaam")
class Materials(Repository):
'''
'''
def __init__(self):
'''
constructor
'''
My main file is:
if __name__ == '__main__':
pass
import repository;
x = Repository();
When i run my application, i get the following error:
x = Repository();
NameError: name 'Repository' is not defined
Of course, i got a warning on importing my module.
I know my import and relation of my main file and my package or eclipse
configuration have problem.

Answer: first of all, when you import like this, you can only refer to your class as
either `repository.Repository` or `repository.repository.Repository`,
depending on whether you import the module or the package.
second, what you import depends on where eclipse thinks you are. You can check
that with
import os
print(os.getcwd())
at the top of your main script.
third, if you want to import your package like this, you should put it in your
search path. You can do that by placing it in site-packages, or for instance
by adding
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
at the top of your main script
additionally, you might want to avoid confusion by giving your module a
different name than the package (or the other way round)
(and a little nitpick: `__init__` is not the constructor, merely an
initializing routine).
|
How to encode nested Python Protobuf
Question: Been stumped on this for a while and pulling what is left of my hair out.
Sending non-nested Protobufs from Python to Java and Java to Python without an
issue with WebSockets. My problem is sending a nested version over a
WebSocket. I believe my issue is on the Python encoding side.
Your guidance is appreciated.
.proto file
message Response {
// Reflect back to caller
required string service_name = 1;
// Reflect back to caller
required string method_name = 2;
// Who is responding
required string client_id = 3;
// Status Code
required StatusCd status_cd = 4;
// RPC response proto
optional bytes response_proto = 5;
// Was callback invoked
optional bool callback = 6 [default = false];
// Error, if any
optional string error = 7;
//optional string response_desc = 6;
}
message HeartbeatResult {
required string service = 1;
required string timestamp = 2;
required float status_cd = 3;
required string status_summary = 4;
}
A Heartbeat result is supposed to get sent in the `response_proto` field of the
Response Protobuf. I am able to do this in Java to Java but Python to Java is
not working.
I've included two variations of the python code. Neither of which works.
def GetHeartbeat(self):
print "GetHeartbeat called"
import time
ts = time.time()
import datetime
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
heartbeatResult = rpc_pb2.HeartbeatResult()
heartbeatResult.service = "ALERT_SERVICE"
heartbeatResult.timestamp = st
heartbeatResult.status_cd = rpc_pb2.OK
heartbeatResult.status_summary = "OK"
response = rpc_pb2.Response()
response.service_name = ""
response.method_name = "SendHeartbeatResult"
response.client_id = "ALERT_SERVICE"
response.status_cd = rpc_pb2.OK
response.response_proto = str(heartbeatResult).encode('utf-8')
self.sendMessage(response.SerializeToString())
print "GetHeartbeat finished"
def GetHeartbeat2(self):
print "GetHeartbeat called"
import time
ts = time.time()
import datetime
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
heartbeatResult = rpc_pb2.HeartbeatResult()
heartbeatResult.service = "ALERT_SERVICE"
heartbeatResult.timestamp = st
heartbeatResult.status_cd = rpc_pb2.OK
heartbeatResult.status_summary = "OK"
response = rpc_pb2.Response()
response.service_name = ""
response.method_name = "SendHeartbeatResult"
response.client_id = "ALERT_SERVICE"
response.status_cd = rpc_pb2.OK
response.response_proto = heartbeatResult.SerializeToString()
self.sendMessage(response.SerializeToString())
print "GetHeartbeat finished"
Errors on the Java server side are:
(GetHeartbeat) Protocol message end-group tag did not match expected tag
and
(GetHeartbeat2)
Message: [org.java_websocket.exceptions.InvalidDataException: java.nio.charset.MalformedInputException: Input length = 1
at org.java_websocket.util.Charsetfunctions.stringUtf8(Charsetfunctions.java:80)
at org.java_websocket.WebSocketImpl.deliverMessage(WebSocketImpl.java:561)
at org.java_websocket.WebSocketImpl.decodeFrames(WebSocketImpl.java:328)
at org.java_websocket.WebSocketImpl.decode(WebSocketImpl.java:149)
at org.java_websocket.server.WebSocketServer$WebSocketWorker.run(WebSocketServer.java:593)
Caused by: java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:277)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:798)
at org.java_websocket.util.Charsetfunctions.stringUtf8(Charsetfunctions.java:77)
Answer: Solution
Also posted this question on protobuf group
Credit to Christopher Head and Ilia Mirkin for providing input on the google
group
<https://groups.google.com/forum/#!topic/protobuf/Cp7zWiWok9I>
response.response_proto = base64.b64encode(heartbeatResult.SerializeToString())
self.sendMessage(response.SerializeToString())
FYI, Ilia also suggested base64 encoding the entire message but this seems to
be working at the moment.
|
Python program to manage python script as child
Question: I am looking for a python equivalent of following:
until python program.py; do
echo "Crashed...Restarting..." >&2
sleep 1
done
Also, I need to kill program.py when the parent program is killed. Any
suggestions?
Answer: Modules `subprocess` and `psutil` should provide most (if not all) you need.
from __future__ import print_function  # needed for the file= keyword on Python 2
import sys, subprocess, time

while True:
    retCode = subprocess.call(["python", "program.py"])
    if retCode == 0:
        break
    print('Crashed...Restarting...', file=sys.stderr)
    time.sleep(1)  # mirrors the "sleep 1" in the shell loop
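To also kill `program.py` when the parent dies (the second part of the question), one sketch is to register a cleanup hook that terminates the child. Note `atexit` only covers normal interpreter shutdown; a SIGTERM handler would be needed for hard kills:

```python
import atexit
import subprocess
import sys
import time

def supervise(cmd, delay=1.0):
    """Restart cmd until it exits with status 0, terminating the
    child if the parent interpreter shuts down first."""
    child = None

    def cleanup():
        # Runs at interpreter exit; kill the child if still alive.
        if child is not None and child.poll() is None:
            child.terminate()

    atexit.register(cleanup)
    while True:
        child = subprocess.Popen(cmd)
        if child.wait() == 0:
            return
        sys.stderr.write("Crashed...Restarting...\n")
        time.sleep(delay)

# Demo with a trivially succeeding command instead of program.py:
supervise([sys.executable, "-c", "import sys; sys.exit(0)"])
```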
|
Why is my python script that runs the adb shell monkey command crashing for large values of events?
Question: I have written a small python function that runs an adb shell monkey -p -v
command along with an adb logcat command using subprocess.popen. For values
larger than 100, this program crashes and I'm not sure why.
here is monkey_runner.py
import os, subprocess
def run_monkey_process(package, num_commands):
monkeycmd = "adb shell monkey -p " + package + " -v " + num_commands
monkeyprocess = subprocess.Popen(monkeycmd, stdout=subprocess.PIPE)
logcatcmd = "adb logcat"
logcatprocess = subprocess.Popen(logcatcmd, stdout=subprocess.PIPE)
monkeystring = monkeyprocess.communicate(input=None)[0]
logcatstring = logcatprocess.communicate(input=None)[0]
monkeyreturncode = monkeyprocess.poll()
logcatreturncode = logcatprocess.poll()
if(monkeyreturncode >=0):
monkeyprocess.kill()
logcatprocess.kill()
return monkeystring, logcatstring
else:
print 'command failure'
return 'you', 'fail'
I can put my gui code here too, but it's nothing special.
Here is the stacktrace
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1470, in __call__
return self.func(*args)
File "C:\Users\brandon.dalesandro\Desktop\Zonar\mankey\monkey_runner_gui.py", line 25, in goCallBack
returned = run_monkey_process(package, num)
File "C:\Users\brandon.dalesandro\Desktop\Zonar\mankey\monkey_runner.py", line 8, in run_monkey_process
logcatstring = logcatprocess.communicate(input=None)[0]
File "C:\Python27\lib\subprocess.py", line 798, in communicate
stdout = _eintr_retry_call(self.stdout.read)
File "C:\Python27\lib\subprocess.py", line 478, in _eintr_retry_call
return func(*args)
KeyboardInterrupt
Answer: This _might_ be your problem, but it's hard to tell without more information…
You've given the subprocess a pipe, but you don't read from it until after
it's finished. The
[warning](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.poll)
in the docs explains why this is bad:
> This will deadlock when using `stdout=PIPE` and/or `stderr=PIPE` and the
> child process generates enough output to a pipe such that it blocks waiting
> for the OS pipe buffer to accept more data. Use `communicate()` to avoid
> that.
It makes perfect sense that it would work for small numbers of commands—when
there's not enough data to fill the pipe buffer—but hang for larger numbers.
Looping around `poll` instead of calling `wait` doesn't help anything; all it
does is burn 100% CPU for no reason. You're still not reading from the pipe.
And calling `communicate` after the process has finished doesn't help either.
If the pipe has filled up, the subprocess will be blocked forever, `poll` will
never return a value, and you won't ever even get to `communicate`.
And since `communicate` already does its own `wait`, it's really all you need:
monkeyprocess = subprocess.Popen(monkeycmd, stdout=subprocess.PIPE, bufsize=1)
monkeystring = monkeyprocess.communicate(input=None)[0]
returncode = monkeyprocess.returncode
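To see that `communicate()` really avoids the deadlock, here is a self-contained sketch: it spawns a child that produces far more output than an OS pipe buffer (~64 KiB on Linux) can hold. Reading it with `communicate()` finishes cleanly, where a `poll()` loop that never drains the pipe would hang forever:

```python
import subprocess
import sys

# Child writes ~1 MB to stdout -- more than enough to fill any OS pipe buffer
child = subprocess.Popen(
    [sys.executable, "-c", "print('x' * 1000000)"],
    stdout=subprocess.PIPE)
out, _ = child.communicate()  # drains the pipe while waiting, so no deadlock
print(len(out) >= 1000000)    # True
print(child.returncode)       # 0
```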
|
Most appropriate way to combine features of a class to another?
Question: Hey guys I'm new here but hope my question is clear.
My code is written in Python. I have a base class representing a general
website, this class holds some basic methods to fetch the data from the
website and save it. That class is extended by many many other classes each
representing a different website each holding attributes specific to that
website; each subclass uses the base class methods to fetch the data. All
sites need their data parsed, and many sites share the same parsing
functionality, so I created several parsing classes that hold the
functionality and properties for the different parsing methods (I have about
six). I started to think about the best way to integrate those
classes with the website classes that need them.
At first I thought that each website class would hold a class variable with
the parser class that corresponds to it but then I thought there must be some
better way to do it.
I read a bit and thought I might be better off relying on Mixins to integrate
the parsers for each website. But while that would work, it doesn't "sound"
right: the website class has no business inheriting from the parser class
(even though it is only a Mixin and not meant as full class inheritance),
since the two aren't related in any way except that the
website uses the parser's functionality.
Then I thought I might rely on some dependency injection code I saw for python
to inject the parser to each website but it sounded a bit of an overkill.
So I guess my question basically is: when is it best to use each approach (in
my project, and in any project really)? They all do the job, but none
seems like the best fit.
Thank you for any help you may offer, I hope I was clear.
Adding a small mock example to illustrate:
class BaseWebsite():
def fetch(): # Shared by all subclasses websites
....
def save(): # Shared by all subclasses websites
....
class FirstWebsite(BaseWebsite): # Uses parsing method one
....
class SecondWebsite(BaseWebsite): # Uses parsing method one
....
class ThirdWebsite(BaseWebsite): # Uses parsing method two
....
and so forth
Answer: I think your problem is that you're using subclasses where you should be using
instances.
From your description, there's one class for each website, with a bunch of
attributes. Presumably you create singleton instances of each of the classes.
There's rarely a good reason to do this in Python. If each website needs
different data—a base URL, a parser object/factory/function, etc.—you can just
store it in instance attributes, so each website can be an instance of the
same class.
If the websites actually need to, say, override base class methods in
different ways, then it makes sense for them to be different classes (although
even there, you should consider whether moving that functionality into
external functions or objects that can be used by the websites, as you already
have with the parser). But if not, there's no good reason to do this.
Of course I could be wrong here, but the fact that you defined old-style
classes, left the `self` parameter out of your methods, talked about class
attributes, and generally used Java terminology instead of Python terminology
makes me think that this mistake isn't too unlikely.
In other words, what you want is:
class Website:
def __init__(self, parser, spam, eggs):
self.parser = parser
# ...
def fetch(self):
data = # ...
soup = self.parser(data)
# ...
first_website = Website(parser_one, urls[0], 23)
second_website = Website(parser_one, urls[1], 42)
third_website = Website(parser_two, urls[2], 69105)
* * *
Let's say you have 20 websites. If you're creating 20 subclasses, you're
writing half a dozen lines of boilerplate for each, and there's a whole lot
you can get wrong with the details which may be painful to debug. If you're
creating 20 instances, it's just a few characters of boilerplate, and a lot
less to get wrong:
websites = [Website(parser_one, urls[0], 23),
Website(parser_two, urls[1], 42),
# ...
]
Or you can even move the data to a data file. For example, a CSV like this:
url,parser,spam
http://example.com/foo,parser_one,23
http://example.com/bar,parser_two,42
…
You can edit this more easily—or even use a spreadsheet program to do it—with
no need for any extraneous typing. And you can import it into Python with a
couple lines of code:
import csv

with open('websites.csv') as f:
    websites = [Website(**row) for row in csv.DictReader(f)]
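One wrinkle with the CSV approach: `csv.DictReader` yields every field as a string, so the parser column has to be mapped back to a real object. A minimal sketch (the registry and the parser callables here are made up for illustration):

```python
import csv
import io

# Hypothetical registry mapping names found in the CSV to parser callables
PARSERS = {"parser_one": str.upper, "parser_two": str.lower}

class Website:
    def __init__(self, url, parser, spam):
        self.url = url
        self.parser = PARSERS[parser]  # look the callable up by name
        self.spam = int(spam)          # CSV fields arrive as strings

data = io.StringIO("url,parser,spam\n"
                   "http://example.com/foo,parser_one,23\n"
                   "http://example.com/bar,parser_two,42\n")
websites = [Website(**row) for row in csv.DictReader(data)]
print(websites[0].parser("abc"))  # ABC
print(websites[1].spam)           # 42
```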
|
A fast method for calculating the probabilities of items in a distribution using python
Question: Is there a quick method or function that automatically computes
probabilities of items in a distribution without importing random?
For instance, consider the following distribution (dictionary):
y = {"red":3, "blue":4, "green":2, "yellow":5}
1. I would like to compute the probability of picking each item.
2. I would also like to compute the probability of picking one red and two greens.
Any suggestions?
Answer: For the frequencies:
y = {"red":3, "blue":4, "green":2, "yellow":5}
total = float(sum(y.values()))
frequencies = {key: value / total for (key, value) in y.items()}
And the probability of a given combination (assuming independent picks, i.e.
with replacement) is the product of the individual probabilities.
combination = ["red", "green", "green"]
prob = 1. # initialized to 1
for ii in combination:
prob *= frequencies[ii]
print prob
0.00437317784257
Does that sound reasonable?
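For what it's worth, the same computation in a compact Python 3 form (using true division, so no `float()` cast is needed):

```python
from functools import reduce
from operator import mul

y = {"red": 3, "blue": 4, "green": 2, "yellow": 5}
total = sum(y.values())                      # 14 items in the bag
freq = {k: v / total for k, v in y.items()}  # probability of each colour
combination = ["red", "green", "green"]
prob = reduce(mul, (freq[c] for c in combination))
print(round(prob, 6))  # 0.004373
```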
|
redirecting python logging messages with streams
Question: I want to redirect logging messages to some handling method (e.g. in order to
save all messages in a queue). Currently I'm trying to make use of
logging.StreamHandler in order to write to a StringIO and then read it
somewhere else. In my case this might be a thread continuously reading from
the stream but it could also be a callback method being called on every log
entry.
import threading
import time
import sys
import logging
from StringIO import StringIO
# this thread shall read from a stream
# continuously and
def tread_fn( stream ):
while not stream.eof(): <=== this is not valid but my current approach
l = stream.readline()
do_something( l )
stream = StringIO()
handler = logging.StreamHandler(stream)
log = logging.getLogger()
log.setLevel( logging.INFO )
# replace all log handlers
for handler in log.handlers:
log.removeHandler(handler)
log.addHandler(handler)
thread = threading.Thread(target = tread_fn, args=[stream])
thread.start()
for i in range(3):
time.sleep(1)
log.error("test") <=== should be handled line by line
I feel like I have overlooked an obvious and simple best practice, but
I've been struggling with this for a while now :) Maybe I don't need streams at all, but
currently I'm failing even to write to a stream and read from it somewhere
else. So in short my questions are:
* how is the main goal achieved the python way?
* how do I write strings to a stream and continuously read from it in another thread?
Answer: You've asked two questions in one - they should be separate questions. Your
main goal can be achieved using e.g. a `QueueHandler`, available in Python 3.2
and later but also available for earlier Python versions through the
[`logutils`](https://pypi.python.org/pypi/logutils) project.
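For the queue-based goal, here is a minimal sketch using Python 3's `logging.handlers.QueueHandler` (the `logutils` backport has essentially the same API). Note that the consumer side receives `LogRecord` objects rather than formatted strings:

```python
import logging
import queue
from logging.handlers import QueueHandler

log_queue = queue.Queue()
root = logging.getLogger()
root.setLevel(logging.INFO)
root.handlers = [QueueHandler(log_queue)]  # replace all existing handlers

root.error("test")

record = log_queue.get_nowait()  # consumer side: a LogRecord, not a string
print(record.getMessage())       # test
print(record.levelname)          # ERROR
```

A consumer thread (or `logging.handlers.QueueListener`) can then pull records from `log_queue` and do whatever per-entry handling is needed.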
|
Python record audio on detected sound
Question: I am looking to have a python script run in the background and use pyaudio to
record sound files when the threshold of the microphone has reached a certain
point. This is for a monitor on a two-way radio network, so we only want
to record transmitted audio.
Tasks in mind:
* Record audio input on a n% gate threshold
* stop recording after so many seconds of silence
* keep recording for so many seconds after audio
* Phase 2: input data into MySQL database to search the recordings
I am looking at a file structure similar to the following:
/home/Recodings/2013/8/23/12-33.wav would be a recording of the transmission on
23/08/2013 @ 12:33.
I have used the code from
[Detect and record a sound with
python](http://stackoverflow.com/questions/2668442/detect-and-record-a-sound-
with-python)
I am at a bit of a loss as to where to go from here, and a little guidance would
be greatly appreciated.
Thank you.
Answer: Some time ago I wrote code covering some of these steps:
* `Record audio input on a n% gate threshold`
A: Keep a Boolean variable for "Silence", set an RMS threshold, and calculate the
[RMS](http://en.wikipedia.org/wiki/Root_mean_square) of each chunk to decide
whether Silence is True or False.
* `stop recording after so many seconds of silence`
A: You need to calculate a timeout. Take the frame rate, the chunk size,
and how many seconds you want, and compute the timeout as (FrameRate /
chunk * Max_Seconds).
* `keep recording for so many seconds after audio`
A: When Silence becomes False (i.e. RMS > Threshold), keep the last chunk of
audio data (LastBlock) and just keep recording :-)
* `Phase 2: input data into MySQL database to search the recordings`
A: This step is up to you
Source code:
import sys  # needed for sys.exit() below
import pyaudio
import math
import struct
import wave
#Assuming Energy threshold upper than 30 dB
Threshold = 30
SHORT_NORMALIZE = (1.0/32768.0)
chunk = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
swidth = 2
Max_Seconds = 10
TimeoutSignal=((RATE / chunk * Max_Seconds) + 2)
silence = True
FileNameTmp = '/home/Recodings/2013/8/23/12-33.wav'
Time=0
all =[]
def GetStream(chunk):
return stream.read(chunk)
def rms(frame):
count = len(frame)/swidth
format = "%dh"%(count)
shorts = struct.unpack( format, frame )
sum_squares = 0.0
for sample in shorts:
n = sample * SHORT_NORMALIZE
sum_squares += n*n
rms = math.pow(sum_squares/count,0.5);
return rms * 1000
def WriteSpeech(WriteData):
stream.stop_stream()
stream.close()
p.terminate()
wf = wave.open(FileNameTmp, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(WriteData)
wf.close()
def KeepRecord(TimeoutSignal, LastBlock):
all.append(LastBlock)
for i in range(0, TimeoutSignal):
try:
data = GetStream(chunk)
except:
continue
#I changed here (new indent)
all.append(data)
print "end record after timeout";
data = ''.join(all)
print "write to File";
WriteSpeech(data)
silence = True
Time=0
listen(silence,Time)
def listen(silence,Time):
print "waiting for Speech"
while silence:
try:
input = GetStream(chunk)
except:
continue
rms_value = rms(input)
if (rms_value > Threshold):
silence=False
LastBlock=input
print "hello ederwander I'm Recording...."
KeepRecord(TimeoutSignal, LastBlock)
Time = Time + 1
if (Time > TimeoutSignal):
print "Time Out No Speech Detected"
sys.exit()
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
channels = CHANNELS,
rate = RATE,
input = True,
output = True,
frames_per_buffer = chunk)
listen(silence,Time)
|
Using adb sendevent in python
Question: I am running into a strange issue, running `adb shell sendevent x x x`
commands from commandline works fine, but when I use any of the following:
`subprocess.Popen(['adb', 'shell', 'sendevent', 'x', 'x','x'])`
`subprocess.Popen('adb shell sendevent x x x', shell=True)`
`subprocess.call(['adb', 'shell', 'sendevent', 'x', 'x','x'])`
They all fail - the simulated touch event that works in a shell script does not
work properly when called through python. Furthermore, I tried `adb push`-ing the
shell script to the device, and using `adb shell /system/sh /sdcard/script.sh`
I was able to run it successfully, but when I try to run that commandline
through python, the script fails.
What's even stranger is that the script runs, but it does not
seem to execute the `sleep 1` command halfway through the script; `echo`
commands work, but `sendevent` commands don't seem to.
Doesn't even seem possible, but there it is. How do I run a set of `adb shell
sendevent x x x` commands through python?
Answer: * `sendevent` takes 4 parameters
* `args` for `Popen` should be `['adb', 'shell', 'sendevent /dev/input/eventX type code value']` \- do not split the remote command
* timings are important for `sendevent` sequences and `adb shell` call itself is kind of expensive - so using shell script on the device works better
* pay attention to the newline characters in your shell scripts - make sure it's unix style (single `\n` instead of the `\r\n`)
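To make the second bullet concrete, here is a sketch of how the argument list should be built. The device path and event values are made-up placeholders (capture your own with `adb shell getevent`), and the actual call is commented out since it requires a connected device:

```python
import subprocess  # used only if you uncomment the call below

# Hypothetical event: substitute the values you captured with `getevent`
remote_cmd = "sendevent /dev/input/event2 3 53 400"

# Split only the local part; the remote command stays ONE argument
argv = ["adb", "shell", remote_cmd]
print(argv[2])  # sendevent /dev/input/event2 3 53 400

# subprocess.call(argv)  # would run it against a connected device
```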
|
Console Program in C++
Question: I was recently messing around with c++ console programming. I was wondering if
there was a way to make text appear on the console for a specific amount of
time, then go to some more text. Essentially, I'm trying to create a timer
object. Or if you're familiar with python, it would be something like
import time
print "Hello World"
time.sleep(2)
print "Hello Again World"
time.sleep(2)
If someone could help me with this, I would appreciate it, thanks in advance.
Answer: Before C++11 there was no standard way to do it; you had to use a system library
or a cross-platform library that wraps system libraries for you. In C++11,
the `<thread>` and `<chrono>` libraries get it done.
#include <iostream>
#include <chrono>
#include <thread>
int main()
{
std::cout << "Hello world" << std::endl;
std::chrono::milliseconds twoSeconds( 2000 );
std::this_thread::sleep_for( twoSeconds);
std::cout << "Hello Again World" << std::endl;
}
|
Is there a faster way to test if two lists have the exact same elements than Pythons built in == operator?
Question: If I have two lists, each 800 elements long and filled with integers. Is there
a faster way to compare that they have the exact same elements (and short
circuit if they don't) than using the built in `==` operator?
a = [6,2,3,88,54,-486]
b = [6,2,3,88,54,-486]
a == b
>>> True
Anything better than this?
I'm curious only because I have a _giant_ list of lists to compare.
Answer: Let's not assume, but run some tests!
The set-up:
>>> import time
>>> def timeit(l1, l2, n):
start = time.time()
for i in xrange(n):
l1 == l2
end = time.time()
print "%d took %.2fs" % (n, end - start)
Two giant equal lists:
>>> hugeequal1 = [10]*30000
>>> hugeequal2 = [10]*30000
>>> timeit(hugeequal1, hugeequal2, 10000)
10000 took 3.07s
Two giant lists where the first element is not equal:
>>> easydiff1 = [10]*30000
>>> easydiff2 = [10]*30000
>>> easydiff2[0] = 0
>>> timeit(easydiff1, easydiff2, 10000)
10000 took 0.00s
>>> timeit(easydiff1, easydiff2, 1000000)
1000000 took 0.14s
So it appears the built-in list equality operator does indeed do the short-
circuiting.
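As a cross-check that doesn't rely on timing, one can instrument `__eq__` and count how many element comparisons actually happen (a small sketch; this relies on CPython calling `__eq__` pairwise):

```python
# Count how many element comparisons list equality performs
class Counted:
    calls = 0
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        Counted.calls += 1
        return self.v == other.v

a = [Counted(i) for i in range(100)]
b = [Counted(i) for i in range(100)]
b[0].v = -1              # make the very first pair unequal
print(a == b)            # False
print(Counted.calls)     # 1 -- the comparison stopped immediately
```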
EDIT: Interestingly, using the `array.array` module doesn't make it any
faster:
>>> import array
>>> timeit(hugeequal1, hugeequal2, 1000)
1000 took 0.30s
>>> timeit(array.array('l', hugeequal1), array.array('l', hugeequal2), 1000)
1000 took 1.11s
`numpy` does get you a good speed-up, though:
>>> import numpy
>>> timeit(hugeequal1, hugeequal2, 10000)
10000 took 3.01s
>>> timeit(numpy.array(hugeequal1), numpy.array(hugeequal2), 10000)
10000 took 1.11s
|
Python pandas timeseries resample giving unexpected results
Question: The data here is for a bank account with a running balance. I want to resample
the data to only use the end of day balance, so the last value given for a
day. There can be multiple data points for a day, representing multiple
transactions.
In [1]: from StringIO import StringIO
In [2]: import pandas as pd
In [3]: import numpy as np
In [4]: print "Pandas version", pd.__version__
Pandas version 0.12.0
In [5]: print "Numpy version", np.__version__
Numpy version 1.7.1
In [6]: data_string = StringIO(""""Date","Balance"
...: "08/09/2013","1000"
...: "08/09/2013","950"
...: "08/09/2013","930"
...: "08/06/2013","910"
...: "08/02/2013","900"
...: "08/01/2013","88"
...: "08/01/2013","87"
...: """)
In [7]: ts = pd.read_csv(data_string, parse_dates=[0], index_col=0)
In [8]: print ts
Balance
Date
2013-08-09 1000
2013-08-09 950
2013-08-09 930
2013-08-06 910
2013-08-02 900
2013-08-01 88
2013-08-01 87
I expect "2013-08-09" to be 1000, but definitely not the 'middle' number 950.
In [10]: ts.Balance.resample('D', how='last')
Out[10]:
Date
2013-08-01 88
2013-08-02 900
2013-08-03 NaN
2013-08-04 NaN
2013-08-05 NaN
2013-08-06 910
2013-08-07 NaN
2013-08-08 NaN
2013-08-09 950
Freq: D, dtype: float64
I expect "2013-08-09" to be 930, or "2013-08-01" to be 88.
In [12]: ts.Balance.resample('D', how='first')
Out[12]:
Date
2013-08-01 87
2013-08-02 900
2013-08-03 NaN
2013-08-04 NaN
2013-08-05 NaN
2013-08-06 910
2013-08-07 NaN
2013-08-08 NaN
2013-08-09 1000
Freq: D, dtype: float64
Am I missing something here? Does resampling with 'first' and 'last' not work
the way I'm expecting it to?
Answer: To be able to resample your data, pandas first has to sort it. So if you load
your data and sort it by index, you get the following:
>>> pd.read_csv(data_string, parse_dates=[0], index_col=0).sort_index()
Balance
Date
2013-08-01 87
2013-08-01 88
2013-08-02 900
2013-08-06 910
2013-08-09 1000
2013-08-09 930
2013-08-09 950
Which explains why you got the results you got. @Jeff explained why the order
is "arbitrary", and according to your comment the solution is to use the
stable `mergesort` algorithm when sorting the data before resampling:
>>> df = pd.read_csv(data_string, parse_dates=[0],
index_col=0).sort_index(kind='mergesort')
>>> df.Balance.resample('D',how='last')
2013-08-01 88
2013-08-02 900
2013-08-03 NaN
2013-08-04 NaN
2013-08-05 NaN
2013-08-06 910
2013-08-07 NaN
2013-08-08 NaN
2013-08-09 1000
>>> df.Balance.resample('D', how='first')
2013-08-01 87
2013-08-02 900
2013-08-03 NaN
2013-08-04 NaN
2013-08-05 NaN
2013-08-06 910
2013-08-07 NaN
2013-08-08 NaN
2013-08-09 930
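The key property is *stability*: a stable sort keeps rows that share the same key in their original file order, which is what makes `first`/`last` deterministic. A standalone illustration with plain Python, whose built-in sort (Timsort) gives the same guarantee as `mergesort`:

```python
rows = [("2013-08-09", 1000), ("2013-08-06", 910),
        ("2013-08-09", 950), ("2013-08-09", 930)]
ordered = sorted(rows, key=lambda r: r[0])  # Timsort, like mergesort, is stable
# The three 2013-08-09 rows keep their original relative order
print([bal for day, bal in ordered if day == "2013-08-09"])  # [1000, 950, 930]
```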
|
SQLAlchemy order_by formula result
Question: I am a novice in Python. Based on
[this](http://stackoverflow.com/questions/592209/find-closest-numeric-value-
in-database/) SO post, I created a SQL query using PYODBC to search a MSSQL
table of historic option prices and select the option symbol with a strike
value closest to the desired value I specified. However, I am now trying to
teach myself OOP by re-factoring this program, and to that end I am trying to
implement the ORM in SQLAlchemy.
I cannot figure out how to implement a calculated Order_By statement. I don't
think a calculated column would work because desired_strike is an argument
that that is specified by the user(me) at each method call.
Here is the (simplified) original code:
import pyodbc
def get_option_symbol(stock, entry_date, exp_date, desired_strike):
entry_date = entry_date.strftime('%Y-%m-%d %H:%M:%S')
exp_date = exp_date.strftime('%Y-%m-%d %H:%M:%S')
cursor.execute("""select top(1) optionsymbol
from dbo.options_pricestore
where underlying=?
and quotedate=?
and expiration=?
and exchange='*'
and option_type=?
order by abs(strike - ?)""",
stock,
entry_date,
exp_date,
desired_strike,
)
row = cursor.fetchone()
return row
Maybe not the most Pythonic, but it worked. I am now encapsulating my formerly
procedural code into classes and switching to SQLAlchemy's ORM, except that in this
one case I cannot figure out how to represent abs(strike - desired_strike) in
the order_by clause. I have not used lambda functions much in the past, but
here is what I came up with:
import sqlalchemy
class Option(Base):
__tablename__= 'options_pricestore'
<column definitions go here>
def get_option_symbol(stock, entry_date, exp_date, desired_strike):
entry_date = entry_date.strftime('%Y-%m-%d %H:%M:%S')
exp_date = exp_date.strftime('%Y-%m-%d %H:%M:%S')
qry = session.query(Option.optionsymbol).filter(and_
(Option.underlying == stock,
Option.quotedate == entry_date,
Option.expiration == exp_date,
Option.option_type== "put",
Option.exchange == "*")
).order_by(lambda:abs(Option.strike - desired_strike))
return qry
I get "ArgumentError: SQL expression object or string expected" - Any help
would be greatly appreciated.
Answer: `order_by` wants a SQL expression or a string, not a Python lambda - give it a string:
qry = session.query(Option.optionsymbol).filter(and_
(Option.underlying == stock,
Option.quotedate == entry_date,
Option.expiration == exp_date,
Option.option_type== "put",
Option.exchange == "*")
).order_by('abs(strike - %d)' % desired_strike)
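As an aside, `order_by` also accepts a constructed SQL expression, which keeps the value as a bound parameter instead of interpolating it into the string. A sketch using `sqlalchemy.func` (shown with a standalone `column` rather than the poster's mapped class):

```python
from sqlalchemy import column, func

desired_strike = 100  # example value
expr = func.abs(column('strike') - desired_strike)
print(str(expr))  # compiles to something like: abs(strike - :strike_1)
```

In the ORM query above this would read `.order_by(func.abs(Option.strike - desired_strike))`.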
|
Python/Django: How to convert utf-16 str bytes to unicode?
Question: Fellows,
I am unable to parse a unicode text file submitted using django forms. Here
are the quick steps I performed:
1. Uploaded a text file ( encoding: utf-16 ) ( File contents: `Hello World 13` )
2. On server side, received the file using `filename = request.FILES['file_field']`
3. Going line by line: `for line in filename: yield line`
4. `type(filename)` gives me `<class 'django.core.files.uploadedfile.InMemoryUploadedFile'>`
5. `type(line)` is `<type 'str'>`
6. `print line` : `'\xff\xfeH\x00e\x00l\x00l\x00o\x00 \x00W\x00o\x00r\x00l\x00d\x00 \x001\x003\x00'`
7. `codecs.BOM_UTF16_LE == line[:2]` returns `True`
8. **Now**, I want to reconstruct the unicode or ASCII string, like "Hello World 13", so that I can parse the integer from the line.
One of the ugliest ways of doing this is to retrieve the bytes using `line[-5:]` (=
`'\x001\x003\x00'`) and construct the number from `line[-5:][1]` and `line[-5:][3]`.
I am sure there must be a better way. Please help.
Thanks in advance!
Answer: Use
[`codecs.iterdecode()`](http://docs.python.org/2/library/codecs.html#codecs.iterdecode)
to decode the object on the fly:
from codecs import iterdecode
for line in iterdecode(filename, 'utf16'): yield line
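Step 8 can also be done directly with `str.decode`/`bytes.decode`, which consumes the BOM for you. A standalone sketch using the exact bytes from step 6:

```python
raw = b'\xff\xfeH\x00e\x00l\x00l\x00o\x00 \x00W\x00o\x00r\x00l\x00d\x00 \x001\x003\x00'
text = raw.decode('utf-16')           # the BOM selects the byte order and is consumed
print(text)                           # Hello World 13
number = int(text.rsplit(None, 1)[-1])  # grab the trailing integer
print(number)                         # 13
```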
|
Memory usage keep growing with Python's multiprocessing.pool
Question: Here's the program:
#!/usr/bin/python
import multiprocessing
def dummy_func(r):
pass
def worker():
pass
if __name__ == '__main__':
pool = multiprocessing.Pool(processes=16)
for index in range(0,100000):
pool.apply_async(worker, callback=dummy_func)
# clean up
pool.close()
pool.join()
I found that memory usage (both VIRT and RES) kept growing until close()/join();
is there any way to get rid of this? I tried maxtasksperchild with 2.7,
but it didn't help either.
I have a more complicated program that calls apply_async() ~6M times; around the
~1.5M mark I already had 6G+ RES. To rule out all other factors, I simplified
the program to the version above.
**EDIT:**
Turned out this version works better, thanks for everyone's input:
#!/usr/bin/python
import multiprocessing
ready_list = []
def dummy_func(index):
global ready_list
ready_list.append(index)
def worker(index):
return index
if __name__ == '__main__':
pool = multiprocessing.Pool(processes=16)
result = {}
for index in range(0,1000000):
result[index] = (pool.apply_async(worker, (index,), callback=dummy_func))
for ready in ready_list:
result[ready].wait()
del result[ready]
ready_list = []
# clean up
pool.close()
pool.join()
I didn't put any lock there, as I believe the main process is single-threaded
(the callback is more or less an event-driven thing, per the docs I read).
I changed v1's index range to 1,000,000, the same as v2, and ran some tests - oddly,
v2 is even ~10% faster than v1 (33s vs 37s); maybe v1 was doing
too much internal list maintenance. v2 is definitely the winner on memory
usage: it never went over 300M (VIRT) and 50M (RES), while v1 used to reach
370M/120M, the best being 330M/85M. All numbers are from just 3-4 test runs,
for reference only.
Answer: Use `map_async` instead of `apply_async` to avoid excessive memory usage.
For your first example, change the following two lines:
for index in range(0,100000):
pool.apply_async(worker, callback=dummy_func)
to
pool.map_async(worker, range(100000), callback=dummy_func)
It will finish in a blink before you can see its memory usage in `top`. Change
the list to a bigger one to see the difference. But note `map_async` will
first convert the iterable you pass to it to a list to calculate its length if
it doesn't have `__len__` method. If you have an iterator of a huge number of
elements, you can use `itertools.islice` to process them in smaller chunks.
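The `itertools.islice` chunking mentioned above can look like this (a sketch; you would feed each block to `pool.map_async` in turn instead of printing it):

```python
from itertools import islice

def chunks(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        block = list(islice(it, size))
        if not block:
            return
        yield block

print(list(chunks(range(10), 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```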
I had a memory problem in a real-life program with much more data and finally
found the culprit was `apply_async`.
P.S., with respect to memory usage, your two examples have no obvious
difference.
|
How to install python 2.7.5 as 64bit?
Question: When downloading the python 2.7.5 [here](http://www.python.org/getit/), I
download the python installer with the link "Python 2.7.5 Mac OS X
64-bit/32-bit x86-64/i386 Installer (for Mac OS X 10.6 and later [2])".
Installed the python, I cd the directory
"/Library/Frameworks/Python.framework/Versions/2.7" and execute the following
python code:
import sys
print sys.maxint
and I get 2147483647, which means I am running the 32-bit version. How
can I install the 64-bit version of Python?
Answer: Make sure you are really running the Python you think you are. `cd
/Library/Frameworks/Python.framework/Versions/2.7` doesn't help by itself. If
you did not change any of the default installer options,
`/Library/Frameworks/Python.framework/Versions/2.7/bin` should now be first in
your shell `PATH` (you need to open a new terminal window after installing to
see this) and there should now be `python` and `python2.7` links in
`/usr/local/bin` to the new Python.
$ which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
$ python
Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxsize
9223372036854775807
>>> sys.maxint
9223372036854775807
|
Most labels update well, except for one
Question: I apologize for pasting all of my code. I'm at a loss as to how I should post
this question. I did look for other answers throughout this week, but I cannot
for the life of me figure this out. I know that there is more I have to do to
get this program to work, but I'm just trying to get the team_at_play label to
update. I've tried printing the variables and team_at_play.set() and
team_at_play = team_b.get(). The print team_at_play.get() shows the team whose
turn it is, but the label does not update.
Also, have I put the functions, like coin_toss(), etc. in the right place with
respect to the mainloop?
Here is a link to a text file that can be loaded from the menu item: "Abrir
Capitulo":
<http://www.mariacarrillohighschool.com/Teachers/JamesBaptista/Spanish7-8/ClassDocuments/Handouts/expres_1_1.txt>
Any help would be greatly appreciated!
#!/usr/local/bin/env python
# -*- coding: utf-8 -*-
## Recall Game
"""This will be a timed game of vocabulary recall that will focus on
all the previous units studied. If possible, the sessions will have
a time limit, so teams might compete against one another.
Teams will win points according to how many answers are given correctly.
Questions will be randomly selected."""
import sys
import time
import csv
import StringIO
import random
import string
from Tkinter import *
import ttk
import tkFont
def score_calc (answer):
##Returns a point value for the expression in play
score = 0
##print ('Answer'), answer
for i in answer: ##Thanks to user2357112
score += 1
return score
master = Tk()
master.title("Araña en la Cabaña")
master.geometry=("800x800+200+200")
master.configure(background="black")
from random import choice
d={}
team_A= StringVar()
team_B= StringVar()
team_A_score= IntVar()
team_B_score= IntVar()
team_at_play = StringVar()
pregunta = StringVar()
answer = StringVar()
turn = 0
correct_answer = StringVar()
feedback=StringVar()
correct = ['!Sí!', '¡Muy bien!', '¡Excelente!', '¡Fabuloso!']
incorrect =['¡Caramba!', '¡Ay, ay, ay!', '¡Uy!']
def select_expression():
## Returns at random an expression
##print ('select_expression beginning')
# print len(d)
selected_question = ''
global pregunta
print ('select_expression at work')
try:
selected_question =random.choice(d.keys()) ##Problem selecting random key
pregunta.set(selected_question)
print 'Pregunta =', pregunta.get()
answer.set(d[selected_question])
print 'Answer =', answer.get()
##return pregunta Thanks to user2357112
##return answer Thanks to user2357112
except IndexError:
print ('Error')
pass
##print pregunta
def coin_toss ():
## Tosses a coin to see who goes first. Returns even or odd integer
print ('Coin toss at work.')
from random import randint
coin_toss = randint(0,1)
if coin_toss == 0:
turn = 3
if coin_toss == 1:
turn = 4
return turn
def player_turn():
## Prompts players or teams during turns. Updates scoreboard.
print ('Player_turn() at work.')
global team_at_play
global turn
while turn < 1:
turn = coin_toss()
team_A_score.set(0)
team_B_score.set(0)
print 'turn =', turn
if turn %2== 0:
print 'turn=',turn
print ('Team_B:'), team_B.get()
team_at_play= team_B.get()
print 'Team_at_play:', team_at_play
select_expression()
if turn %2!= 0:
print 'Turn=', turn
print ('Team_A:'), team_A.get()
team_at_play= team_A.get()
print 'Team_at_play:', team_at_play
select_expression()
def nombrar_equipos():
nombrar_equipos = Toplevel()
##Dialog box for entering the team names.
nombrar_equipos.title("Nombrar los Equipos")
first_team_label = Label(nombrar_equipos,text="El primer equipo:")
first_team_label.grid(column=0, row=1)
second_team_label = Label(nombrar_equipos,text="El segundo equipo:")
second_team_label.grid(column=0, row=0)
team_A_entry = Entry(nombrar_equipos,width =20, textvariable=team_A)
team_A_entry.grid(column=1, row=0)
team_A_entry.focus_set()
team_B_entry = Entry(nombrar_equipos, width =20, textvariable=team_B)
team_B_entry.grid(column=1, row=1)
entregar_button=Button(nombrar_equipos, text ="Entregar", command=nombrar_equipos.destroy)
entregar_button.grid(column=1,row=2)
def abrir_capitulo():
##Dialog box for selecting the chapter to be loaded.
#this will hide the main window
import tkFileDialog
WORDLIST_FILENAME = tkFileDialog.askopenfilename(parent=master,title="Archivo para abrir", defaultextension=".txt")
global d
d = {}
with open(WORDLIST_FILENAME) as fin:
rows = (line.split('\t') for line in fin)
d = {row[0]:row [1] for row in rows}
for k in d:
d[k]= d[k].strip('\n')
## print ('Line 68')
inv_d = {v:k for k, v in d.items()}
##print inv_d
d.update(inv_d)
print d
print ('¡'), len(d), ('expresiones cargadas!')
return d
def check_response(*args):
##checks a team's answer, rewards points if correct.
if team_at_play.get() == team_A.get():
if team_answer==answer:
team_A_score.set(team_A_score.get() + score_calc (d[pregunta]))
turn += 1
if team_answer != answer:
turn += 1
if team_at_play.get() == team_B.get():
if team_answer==answer:
team_B_score.set(team_B_score.get() + score_calc (d[pregunta]))
turn += 1
if team_answer != answer:
turn += 1
class App:
    def __init__(self, master):
        frame = Frame(master)
        master.puntuacion = Label(master, text="Araña en la Cabaña", font=("American Typewriter", 30), bg="black", fg="red", justify=CENTER)
        master.puntuacion.grid(row=0, column=2)
        master.team_A_label = Label(master, textvariable=team_A, font=("American Typewriter", 24), bg="black", fg="red")
        master.team_A_label.grid(row=1, column=1)
        master.team_B_label = Label(master, textvariable=team_B, font=("American Typewriter", 24), bg="black", fg="red")
        master.team_B_label.grid(row=1, column=3)
        master.team_A_score_label = Label(master, textvariable=team_A_score, font=("04B", 24), bg="black", fg="yellow").grid(row=2, column=1)
        # team_A_score_label = tkFont.Font(family="Times", size=20, weight=bold, color=red)
        master.team_B_score_label = Label(master, textvariable=team_B_score, font=("04B", 24), bg="black", fg="yellow")
        master.team_B_score_label.grid(row=2, column=3)
        master.team_at_play_label = Label(master, textvariable=team_at_play, font=("American Typewriter", 24), fg="yellow", bg="black")
        master.team_at_play_label.grid(row=4, column=2)
        master.pregunta_start = Label(master, text="¿Cómo se traduce....?", font=("American Typewriter", 24), fg="blue", bg="black")
        master.pregunta_start.grid(row=6, column=2)
        master.pregunta_finish = Label(master, textvariable=pregunta, font=("American Typewriter", 24), fg="green", bg="black")
        master.pregunta_finish.grid(row=7, column=2)
        master.team_answer = Entry(master, width=50)
        master.team_answer.grid(row=8, column=2)
        master.team_answer.focus_set()
        master.feedback_label = Label(textvariable=feedback, font=("American Typewriter", 24), fg="green", bg="black")
        master.feedback_label.grid(row=9, column=2)
        respond_button = Button(master, text="Responder", bg="black", command=check_response, justify=CENTER, borderwidth=.001)
        respond_button.grid(row=10, column=3)
        master.bind("<Return>", check_response)
        continue_button = Button(master, text="Adelante", bg="black", command=player_turn)
        continue_button.grid(row=10, column=4)
        menubar = Menu(master)
        filemenu = Menu(menubar, tearoff=0)
        filemenu.add_command(label="Nombrar Equipos", command=nombrar_equipos)
        filemenu.add_command(label="Abrir Capítulo", command=abrir_capitulo)
        filemenu.add_separator()
        filemenu.add_command(label="Cerrar", command=master.quit)
        menubar.add_cascade(label="Archivo", menu=filemenu)
        master.config(menu=menubar)
        master.columnconfigure(0, weight=1)
        master.rowconfigure(0, weight=1)

app = App(master)
master.mainloop()
Answer: I had to make a change to player_turn(). I experimented with changing the
textvariable for different labels in the master frame and decided that the
problem was isolated to player_turn. Then I looked at the difference between
that function and the other functions I had written. There might be a better way
to set one StringVar equal to another, but I came up with this: create an
intermediate variable and set team_at_play from it.
def player_turn():
    ## Prompts players or teams during turns. Updates scoreboard.
    print ('Player_turn() at work.')
    # global team_at_play
    global turn
    while turn < 1:
        turn = coin_toss()
        team_A_score.set(0)
        team_B_score.set(0)
    print 'turn =', turn
    if turn % 2 == 0:
        print 'turn =', turn
        print 'Team_B:', team_B.get()
        playing_team = team_B.get()
        team_at_play.set(playing_team)
        print 'Team_at_play:', team_at_play.get()
        select_expression()
    if turn % 2 != 0:
        print 'Turn =', turn
        print 'Team_A:', team_A.get()
        playing_team = team_A.get()
        team_at_play.set(playing_team)
        print 'Team_at_play:', team_at_play.get()
        select_expression()
|
Protocol Buffers - Python - Issue with tutorial
Question: **Context**
* I'm working through this tutorial: <https://developers.google.com/protocol-buffers/docs/pythontutorial>
* I've created files by copy and pasting from the above tutorial
**Issue**
When I run the file below in `python launcher`, nothing happens:
#! /usr/bin/python

import addressbook_pb2
import sys

# Iterates though all people in the AddressBook and prints info about them.
def ListPeople(address_book):
    for person in address_book.person:
        print "Person ID:", person.id
        print "  Name:", person.name
        if person.HasField('email'):
            print "  E-mail address:", person.email
        for phone_number in person.phone:
            if phone_number.type == addressbook_pb2.Person.MOBILE:
                print "  Mobile phone #: ",
            elif phone_number.type == addressbook_pb2.Person.HOME:
                print "  Home phone #: ",
            elif phone_number.type == addressbook_pb2.Person.WORK:
                print "  Work phone #: ",
            print phone_number.number

# Main procedure: Reads the entire address book from a file and prints all
# the information inside.
if len(sys.argv) != 2:
    print "Usage:", sys.argv[0], "ADDRESS_BOOK_FILE"
    sys.exit(-1)

address_book = addressbook_pb2.AddressBook()

# Read the existing address book.
f = open(sys.argv[1], "rb")
address_book.ParseFromString(f.read())
f.close()

ListPeople(address_book)
**Result:**

**Question**
* What steps should I take to work out the issue?
Answer: That's not "nothing happens". You got an error message showing that you didn't
call the program correctly. Specifically, you didn't pass it an address book
file to use.
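The message comes from the script's own `sys.argv` check, which requires exactly one argument: the path to an address book file (one written by the tutorial's `add_person.py`). A minimal standalone sketch of that check — the function name and return convention here are illustrative, not from the tutorial:

```python
import sys


def require_one_arg(argv):
    # Mirrors the tutorial script's check: argv[0] is the program
    # name, argv[1] must be the address book file path.
    if len(argv) != 2:
        return "Usage: %s ADDRESS_BOOK_FILE" % argv[0]
    return argv[1]


print(require_one_arg(["list_people.py"]))
print(require_one_arg(["list_people.py", "addressbook.data"]))
```

So rather than launching the file directly, run it from a terminal and pass the file, e.g. `python list_people.py ADDRESS_BOOK_FILE`.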
|
script working only in spyder console
Question: Hi everybody, I usually use Spyder to write Python, and I wrote these
simple lines of code for plotting a graph, but I can't understand why it
doesn't work properly when I run it as a script, while if I copy and paste the
lines into the Python console it works perfectly. This is the code:
import matplotlib.pyplot as plt
import numpy as np
z=np.arange(0,250,1)
f_z1=np.append([z[0:100]*0],[z[100:150]/2500 -(1/25) ])
f_z3=np.append(f_z2,[z[200:] *0])
plt.plot(z,f_z3)
I'd like to understand why I have this problem. Thanks for the help.
Answer: Division in Python < 3 works differently from what you may expect if you are
used to for instance Matlab. So from a standard Python console you will get
this (dividing integers results in an integer):
>>> 1/2
0
This has been changed in Python 3. To get the new behaviour put
from __future__ import division
above all the other imports in your script. Alternatively you could force
floating point behaviour as follows:
>>> 1./2.
0.5
The reason your code works in the Spyder console is that the console already
applies this import automatically. In your script, `z = np.arange(0, 250, 1)`
is an integer array, so without the import the expression
`z[100:150]/2500 - (1/25)` is evaluated with integer division and comes out
all zeros.
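A quick sketch of the difference (the `__future__` import is a no-op on Python 3, but on Python 2 it switches `/` to true division):

```python
from __future__ import division

# With true division, / on two integers returns a float ...
print(1 / 2)    # 0.5

# ... and // is the explicit floor-division operator, which keeps
# the old integer behaviour when you actually want it.
print(1 // 2)   # 0
```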
|
selenium python click on element nothing happens
Question: I am trying to click on the Gmail link on the Google frontpage in Selenium
with the WebDriver on Python. My code basically replicates the one found here:
[Why Can't I Click an Element in Selenium?](http://stackoverflow.com/questions/16511059/why-cant-i-click-an-element-in-selenium)
My Code:
import selenium.webdriver as webdriver
firefox = webdriver.Firefox()
firefox.get("http://www.google.ca")
element = firefox.find_element_by_xpath(".//a[@id='gb_23']")
element.click()
The webdriver loads the page and then nothing happens. I've also tried using
ActionChains with move_to_element(element) and click(element), then perform(),
but nothing happens either.
Answer: Use `find_element_by_id` method:
element = firefox.find_element_by_id("gb_23")
element.click()
or correct your xpath to:
"//a[@id='gb_23']"
[Here you have nice
tutorial.](http://zvon.org/xxl/XPathTutorial/General/examples.html)
|
ImportError: cannot import name PyJavaClass
Question: I checked [my old script](http://code.activestate.com/recipes/502222-creating-java-class-description-files/?in=user-4028109),
written in 2007 in Python/Jython, and it throws the error:
ImportError: cannot import name PyJavaClass
What happened to this class? I use Xubuntu 13.04 with Jython 2.5.2.
Answer: `PyJavaClass` was part of Jython 2.2:
<https://bitbucket.org/jython/jython/src/bed9f9de4ef3c6d38bc009409c95ebfc55e0c7d0/src/org/python/core?at=2.2>.
It is gone in Jython 2.5. Now there is `PyJavaType` instead. See
* <http://www.jython.org/javadoc/index.html>
* <https://bitbucket.org/jython/jython/commits/a173ad16080621b6d7a29fb764087758eb453ba1>
I cannot find anything about this change in the release notes
(<http://www.jython.org/latest.html>).
|
Python: how do I create a list of combinations from a series of ranges of numbers
Question: For a list of numerical values of length n, e.g. `[1, 3, 1, 2, ...]`, I would
like to create the list of all possible combinations of values drawn from
`range(x+1)`, where x is a value from the list. The output might look something
like this:
for list [1, 3, 2], return all possible lists of range(x+1) values:
# the sequence of the list is unimportant
[
[0,0,0],[1,0,0],[0,1,0],[0,2,0],[0,3,0],[0,0,1],[0,0,2],[1,1,0],
[1,2,0],[1,3,0],[1,0,1],[1,0,2],[0,1,1],[0,2,1],[0,3,1],[0,1,2],
[0,2,2],[0,3,2],[1,1,1],[1,2,1],[1,3,1],[1,1,2],[1,2,2],[1,3,2]
]
So in this example I am looking for all variations of `[e1, e2, e3]` with `e1
in [0,1]`, `e2 in [0,1,2,3]` and `e3 in [0,1,2]`.
Answer: Python's itertools module has a tool that does what you need:
import itertools
p = itertools.permutations([0, 1, 2, 3])
p_as_list = list(p)
Edit: As your needs are fairly specific, you could benefit from having your own
function that does something like this one (note I haven't got the
implementation down just yet; maybe someone might refine it):
def magic_permutations(*args):
    lists = []
    larg = len(args)
    for i in range(larg):
        lists.append([])
    i = 0
    for nums in args:
        for num in nums:
            if i >= larg:
                i = 0
            lists[i].append(num)
            i += 1
    return lists
Edit: I misunderstood your question the first time, so I apologize for that.
I'll leave the above here all the same.
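For the exact output asked about, `itertools.product` over the per-value ranges (rather than `permutations`) generates every combination directly — a minimal sketch, with a function name of my own choosing:

```python
import itertools


def range_combinations(values):
    # One range(x + 1) per input value; product() yields every way
    # of picking one element from each of those ranges.
    ranges = [range(x + 1) for x in values]
    return [list(combo) for combo in itertools.product(*ranges)]


result = range_combinations([1, 3, 2])
print(len(result))          # 2 * 4 * 3 = 24 combinations
print([1, 3, 2] in result)  # True
```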
|