Optionally replacing a substring in Python
Question: My list of replacements is in the following format:

lstrep = [('A',('aa','aA','Aa','AA')),('I',('ii','iI','Ii','II')),.....]

What I want to achieve is to optionally replace each occurrence of the letter
with all the possible replacements. The original input word should also be a
member of the output list. e.g.
input - `DArA`
Expected output -
['DArA','DaarA','Daaraa','DAraa','DaArA','DAraA','DaAraA','DAarA','DAarAa', 'DArAa','DAArA','DAArAA','DArAA']
My try was

lstrep = [('A',('aa','aA','Aa','AA'))]

def alte(word,lstrep):
    output = [word]
    for (a,b) in lstrep:
        for bb in b:
            output.append(word.replace(a,bb))
    return output

print alte('DArA',lstrep)
The output I received was `['DArA', 'Daaraa', 'DaAraA', 'DAarAa', 'DAArAA']`,
i.e. all occurrences of 'A' were replaced by 'aa', 'aA', 'Aa' and 'AA'
respectively. What I want is all permutations of the optional replacements.
Answer: `itertools.product` will give all of the permutations. You can build up a list
of substitutions and then let it handle the permutations. One detail: for the
replacements to be truly _optional_ (so that partially replaced words like
`DaarA` appear), keeping the original character has to be one of the choices
at each position.

import itertools

lstrep = [('A',('aa','aA','Aa','AA')),('I',('ii','iI','Ii','II'))]
input_str = 'DArA'

# make substitution list a dict for easy lookup
lstrep_map = dict(lstrep)

# a substitution is an index plus a string to substitute. build a
# list of subs [[(index1, orig1), (index1, sub1), ...], ...] for all
# characters in lstrep_map; the unchanged character comes first, which
# makes every replacement optional (and keeps the input word itself
# in the output).
subs = []
for i, c in enumerate(input_str):
    if c in lstrep_map:
        subs.append([(i, c)] + [(i, sub) for sub in lstrep_map[c]])

# build output by applying each combination of recorded subs
out = []
for sub in itertools.product(*subs):
    # make input a list for easy substitution
    input_list = list(input_str)
    for i, cc in sub:
        input_list[i] = cc
    out.append(''.join(input_list))

print(out)
* * *
Python Help | Read data from text file from the bottom
Question: Can someone have a look at my program? There's an error which I cannot
identify and it's really confusing me. Also, please explain to me the problem
so that I can understand how to use the function and program in another
situation.
input_name = input("\nPlease enter students name » ").strip()
datafile = '1.txt'

while open(datafile, 'r') as f:
    data = {}
    for line in f:
        name, value = line.split('=')
        name = name.strip
        value = str(value)
        data.setdefault(name, []).append(value)
    else:
        break

avg = {}
for name, scores in data.items():
    last_scores = scores[-3:]
    avg[name] = sum(last_scores) / len(last_scores)

print("\n", input_name,"'s average score is", avg(input_name))
Answer: ## Single steps
Read all data into a dict:

data = {}
for line in f:
    name, value = line.split('=')
    name = name.strip()
    value = int(value)
    data.setdefault(name, []).append(value)
The content of `data`:

{'Chris': [9, 9],
 'John': [6, 4, 5],
 'Sarah': [4, 7],
 'Tanzil': [4, 4, 10, 5, 3],
 'Tom': [2]}
Now you can calculate the average:

last_scores = data['Tanzil'][-3:]
avg = sum(last_scores) / len(last_scores)

>>> avg
6.0
or averages for all:

avg = {}
for name, scores in data.items():
    last_scores = scores[-3:]
    avg[name] = sum(last_scores) / len(last_scores)
now `avg` holds:

{'Chris': 9.0, 'John': 5.0, 'Sarah': 5.5, 'Tanzil': 6.0, 'Tom': 2.0}
Show all results:

for name, value in avg.items():
    print(name, value)

prints:

Tanzil 6.0
Chris 9.0
Sarah 5.5
John 5.0
Tom 2.0
or nicely ordered from highest to lowest average score:

from operator import itemgetter

for name, value in sorted(avg.items(), key=itemgetter(1), reverse=True):
    print(name, value)

prints:

Chris 9.0
Tanzil 6.0
Sarah 5.5
John 5.0
Tom 2.0
## Full program

The main fixes compared to your version: open the file with `with` (not
`while`), call `name.strip()` (you were missing the parentheses, so `name`
became the method object itself), convert `value` with `int()` so the scores
can be summed, and index the dict with `avg[input_name]` instead of calling it.

input_name = input("\nPlease enter students name » ").strip()
datafile = '1.txt'

with open(datafile, 'r') as f:
    data = {}
    for line in f:
        name, value = line.split('=')
        name = name.strip()
        value = int(value)
        data.setdefault(name, []).append(value)

avg = {}
for name, scores in data.items():
    last_scores = scores[-3:]
    avg[name] = sum(last_scores) / len(last_scores)

print("\n", input_name, "'s average score is", avg[input_name])
* * *
AttributeError: 'NoneType' object has no attribute 'to_dict'
Question: I am developing an API using Google App Engine in Python. I am having trouble
sending a GET request to a particular URL: I get the `'NoneType' object has no
attribute 'to_dict'` error. The trouble comes in at `out = client.to_dict()` in
apiClient.py, which is routed to in main.py by

app.router.add(webapp2.Route(r'/api/client/<clientid:[0-9]+><:/?>', 'apiClient.Client'))

I do not understand why `ndb.Key(db_defs.Client, int(kwargs['clientid'])).get()`
is returning None.

apiClient.py:
import webapp2
from google.appengine.ext import ndb
import db_defs
import json

class Client(webapp2.RequestHandler):
    # returns all or a specified client(s)
    def get(self, **kwargs):
        if 'application/json' not in self.request.accept:
            self.response.status = 406
            self.response.status_message = "Not acceptable: json required"
            return
        if 'clientid' in kwargs:
            client = ndb.Key(db_defs.Client, int(kwargs['clientid'])).get()
            out = client.to_dict()
            self.response.write(json.dumps(out))
        else:
            q = db_defs.Client.query()
            keys = q.fetch(keys_only=True)
            results = { 'keys' : [x.id() for x in keys]}
            self.response.write(json.dumps(results))
db_defs.py:

from google.appengine.ext import ndb

# http://stackoverflow.com/questions/10077300/one-to-many-example-in-ndb
class Model(ndb.Model):
    def to_dict(self):
        d = super(Model, self).to_dict()
        d['key'] = self.key.id()
        return d

class Pet(Model):
    name = ndb.StringProperty(required=True)
    type = ndb.StringProperty(choices=set(["cat", "dog"]))
    breed = ndb.StringProperty(required=False)
    weight = ndb.IntegerProperty(required=False)
    spayed_or_neutered = ndb.BooleanProperty()
    photo = ndb.BlobProperty()
    owner = ndb.KeyProperty(kind='Client')

class Client(Model):
    lname = ndb.StringProperty(required=True)
    fname = ndb.StringProperty(required=False)
    phone = ndb.StringProperty(required=False)
    email = ndb.StringProperty(required=False)
    staddr = ndb.StringProperty(required=False)
    pets = ndb.KeyProperty(kind='Pet', repeated=True, required=False)

    def to_dict(self):
        d = super(Client, self).to_dict()
        d['pets'] = [p.id() for m in d['pets']]
        return d
EDIT:
When I do a GET request to <http://localhost:8080/api/client/> I get a list of
client ids:
> {"keys": [4679521487814656, 4855443348258816, 5136918324969472,
> 5242471441235968, 5277655813324800, 5559130790035456, 5699868278390784,
> 5805421394657280, 6051711999279104, 6368371348078592, 6544293208522752,
> 6614661952700416, 6685030696878080]}
which I have verified are the same as those present in the GAE Datastore
Viewer.
But when I do a GET request to
<http://localhost:8080/api/client/4679521487814656> I get the NoneType Error.
Answer: `client` is set to `None`, which is not an object with a `to_dict()` method.
`client` is `None` because the following expression returned `None`:

client = ndb.Key(db_defs.Client, int(kwargs['clientid'])).get()

i.e. there is no `Client` entity with that key.
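Whatever the underlying cause, the handler should not assume the entity exists;
a minimal guard (the 404 handling here is my own sketch):

client = ndb.Key(db_defs.Client, int(kwargs['clientid'])).get()
if client is None:
    # no entity with that key -- report it instead of crashing
    self.response.status = 404
    self.response.write(json.dumps({'error': 'client not found'}))
    return
out = client.to_dict()
self.response.write(json.dumps(out))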
* * *
K-means clustering using sklearn.cluster
Question: I came across this tutorial on K-means clustering, [Unsupervised Machine
Learning: Flat Clustering](https://pythonprogramming.net/flat-clustering-machine-learning-python-scikit-learn/), and below is the code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
from sklearn.cluster import KMeans

X = np.array([[1,2],[5,8],[1.5,1.8],[1,0.6],[9,11]])

kmeans = KMeans(n_clusters=3)
kmeans.fit(X)

centroid = kmeans.cluster_centers_
labels = kmeans.labels_

print(centroid)
print(labels)

colors = ["g.","r.","c."]

for i in range(len(X)):
    print("coordinate:", X[i], "label:", labels[i])
    plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize=10)

plt.scatter(centroid[:,0], centroid[:,1], marker="x", s=150, linewidths=5, zorder=10)
plt.show()
In this example, the array has only 2 features: `[1,2]`, `[5,8]`, `[1.5,1.8]`, etc.
I have tried to replace `X` with a 10 x 750 matrix (750 features) stored in
an `np.array()`. The graph it created just does not make any sense.
How could I alter the above code to solve my problem?
Answer: Visualizing 750 dimensions is hard.
Figure out how to visualize your data _independently_ of k-means.
But don't expect k-means to return meaningful results on such data... it is
very sensitive to preprocessing and normalization, and most likely your 750
dimensions are not all on the same continuous numerical scale.
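As a minimal sketch of one common approach (my own suggestion, not from the
tutorial): standardize the features, cluster, and project to 2-D with PCA
purely for plotting:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(10, 750)  # stand-in for your 10 x 750 matrix

# put all 750 dimensions on a comparable scale first
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=3)
labels = kmeans.fit_predict(X_scaled)

# project to 2 dimensions only for visualization
X_2d = PCA(n_components=2).fit_transform(X_scaled)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels)
plt.show()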
* * *
from scipy.integrate import odeint returns error
Question: I ran this command `from scipy.integrate import odeint` but I get the errors
below. I am new to python. I have installed `scipy` and `numpy` but I don't
have any idea what more is missing to run this. Please help.
Traceback (most recent call last):
  File "<pyshell#8>", line 1, in <module>
    from scipy.integrate import odeint
  File "C:\Python34\lib\site-packages\scipy\integrate\__init__.py", line 51, in <module>
    from .quadrature import *
  File "C:\Python34\lib\site-packages\scipy\integrate\quadrature.py", line 6, in <module>
    from scipy.special.orthogonal import p_roots
  File "C:\Python34\lib\site-packages\scipy\special\__init__.py", line 601, in <module>
    from ._ufuncs import *
ImportError: DLL load failed: The specified module could not be found.
Answer: You need to install NumPy from the following link, which provides a `numpy+mkl`
build linked against the Intel® Math Kernel Library:

link: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
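Download the wheel that matches your Python version and architecture, then
install it with pip. The filename below is only an illustration (a cp34 build
for the Python 3.4 shown in your traceback); yours will differ:

pip install numpy-1.10.1+mkl-cp34-none-win_amd64.whl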
* * *
Django 1.8: django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet
Question: I have read other questions about this issue, but could not fix it from those
answers.
What I am doing: on my local PC I have Django 1.8.5 and several apps, and all is
working properly. I upgraded to 1.8.5 from 1.6.10 some time ago. But when
I install Django 1.8.5 on the production server I get errors like
`django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet` (full
traceback below).
Strangely, the same code works on localhost. Why?
My guess why this happens: as pointed out in [Django 1.7 throws
django.core.exceptions.AppRegistryNotReady: Models aren't loaded
yet](http://stackoverflow.com/questions/25537905/django-1-7-throws-django-core-exceptions-appregistrynotready-models-arent-load), it can be an issue with the
Pinax account module <https://github.com/pinax/django-user-accounts>, but I
tried that fix and the error still persists.
**Also, if I downgrade Django back to 1.6.10 on the server, everything works,
even registration and authorization for users!** Please help me fix these
errors on Django 1.8.5. I am attaching settings.py:
# -*- coding: utf-8 -*-
# Django settings for engine project.
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = ['engine.domain.com']
ADMINS = (
    ('Joomler', '[email protected]'),
)
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'gdb',          # Or path to database file if using sqlite3.
        'USER': 'postgres',     # Not used with sqlite3.
        'PASSWORD': '111111',   # Not used with sqlite3.
        'HOST': 'localhost',    # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',             # Set to empty string for default. Not used with sqlite3.
    }
}
TIME_ZONE = 'Europe/Moscow'
LANGUAGE_CODE = 'ru-ru'
SITE_ID = int(os.environ.get("SITE_ID", 1))
USE_I18N = True
USE_L10N = True
MEDIA_ROOT = os.path.join(PROJECT_ROOT, '..', 'media')
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(PROJECT_ROOT, '..', 'static')
# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'
DOMAIN_URL = '/post/'
# Additional locations of static files
STATICFILES_DIRS = (
    os.path.join(PROJECT_ROOT, 'static'),
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
    #'djangobower.finders.BowerFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'cr@-7=ymmhr68y6xjhb#h$!y054baa(h)8e$q0t+oizv&e0o)j'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.Loader',
    'django.template.loaders.app_directories.Loader',
    # 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    # Uncomment the next line for simple clickjacking protection:
    # 'django.middleware.clickjacking.XFrameOptionsMiddleware',
    "account.middleware.LocaleMiddleware",
    "account.middleware.TimezoneMiddleware",
)
CRISPY_TEMPLATE_PACK = 'bootstrap3'
from django.conf.global_settings import TEMPLATE_CONTEXT_PROCESSORS as TCP
TEMPLATE_CONTEXT_PROCESSORS = TCP + (
    'django.core.context_processors.request',
    'django.core.context_processors.csrf',
    "account.context_processors.account",
)
ROOT_URLCONF = 'engine.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'engine.wsgi.application'
TEMPLATE_DIRS = (
    #os.path.join(PROJECT_ROOT, 'templates'),
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    os.path.join(PROJECT_PATH, '..', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'article', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'loginsys', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'books', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'booksin', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'vkontakte_groups', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'vk_wall', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'registration', 'templates'),
    os.path.join(PROJECT_PATH, '..', 'account', 'templates'),
)
INSTALLED_APPS = (
    'suit',
    'crispy_forms',
    'django.contrib.sites',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_countries',
    #'debug_toolbar',
    'article',
    #'loginsys', #commented because use registration and django-profile
    #'registration',
    #'userprofile', #https://github.com/alekam/django-profile/blob/master/INSTALL.txt
    #'demoprofile',
    #'south', #removed in django 1.8
    #'books',
    #'booksin',
    'contact',
    'vk_wall',
    # theme
    "bootstrapform",
    "pinax_theme_bootstrap",
    # external
    "account",
    "metron",
    "pinax.eventlog",
    # 'oauth_tokens',
    # 'taggit',
    # 'vkontakte_api',
    # 'vkontakte_places',
    # 'vkontakte_users',
    # 'vkontakte_groups',
    # 'vkontakte_wall',
    # 'vkontakte_board',
    # 'm2m_history',
    #'tastypie',
    #'rest_framework',
    #'blog',
    #'regme',
    #'django_nvd3',
    #'djangobower',
    #'demoproject',
)
# Site information (domain and name) for use in activation mail messages
SITE = {'domain': 'site.domain.com', 'name': 'site'}
#EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# A sample logging configuration. The only tangible logging
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
DEFAULT_FROM_EMAIL = '[email protected]'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = '[email protected]'
EMAIL_HOST_PASSWORD = '11111111111'
EMAIL_PORT = 587
ACCOUNT_OPEN_SIGNUP = True
ACCOUNT_EMAIL_UNIQUE = True
ACCOUNT_EMAIL_CONFIRMATION_REQUIRED = False
ACCOUNT_LOGIN_REDIRECT_URL = "/vk/projects/" # "/settings/"
ACCOUNT_LOGOUT_REDIRECT_URL = "/"
ACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS = 2
ACCOUNT_USE_AUTH_AUTHENTICATE = True
AUTHENTICATION_BACKENDS = [
    "account.auth_backends.UsernameAuthenticationBackend",
]
Here is the full traceback from server error log:
[Sun Dec 06 15:00:00.097892 2015] [:error] [pid 20184:tid 140014929971072] Exception ignored in: <module 'threading' from '/usr/lib/python3.4/threading.py'>
[Sun Dec 06 15:00:00.097973 2015] [:error] [pid 20184:tid 140014929971072] Traceback (most recent call last):
[Sun Dec 06 15:00:00.098003 2015] [:error] [pid 20184:tid 140014929971072] File "/usr/lib/python3.4/threading.py", line 1288, in _shutdown
[Sun Dec 06 15:00:00.099232 2015] [:error] [pid 20184:tid 140014929971072] assert tlock is not None
[Sun Dec 06 15:00:00.099279 2015] [:error] [pid 20184:tid 140014929971072] AssertionError:
[Sun Dec 06 15:00:00.121192 2015] [:error] [pid 20183:tid 140014929971072] Exception ignored in: <module 'threading' from '/usr/lib/python3.4/threading.py'>
[Sun Dec 06 15:00:00.121320 2015] [:error] [pid 20183:tid 140014929971072] Traceback (most recent call last):
[Sun Dec 06 15:00:00.121355 2015] [:error] [pid 20183:tid 140014929971072] File "/usr/lib/python3.4/threading.py", line 1288, in _shutdown
[Sun Dec 06 15:00:00.122557 2015] [:error] [pid 20183:tid 140014929971072] assert tlock is not None
[Sun Dec 06 15:00:00.122599 2015] [:error] [pid 20183:tid 140014929971072] AssertionError:
[Sun Dec 06 15:00:00.181301 2015] [:error] [pid 20184:tid 140014929971072] Exception ignored in: <module 'threading' from '/usr/lib/python3.4/threading.py'>
[Sun Dec 06 15:00:00.181371 2015] [:error] [pid 20184:tid 140014929971072] Traceback (most recent call last):
[Sun Dec 06 15:00:00.181402 2015] [:error] [pid 20184:tid 140014929971072] File "/usr/lib/python3.4/threading.py", line 1288, in _shutdown
[Sun Dec 06 15:00:00.182642 2015] [:error] [pid 20184:tid 140014929971072] assert tlock is not None
[Sun Dec 06 15:00:00.182712 2015] [:error] [pid 20184:tid 140014929971072] AssertionError:
[Sun Dec 06 15:00:00.438789 2015] [mpm_event:notice] [pid 20180:tid 140014929971072] AH00491: caught SIGTERM, shutting down
[Sun Dec 06 15:00:01.196734 2015] [:warn] [pid 20557:tid 139670936528768] mod_wsgi: Compiled for Python/3.4.0.
[Sun Dec 06 15:00:01.196827 2015] [:warn] [pid 20557:tid 139670936528768] mod_wsgi: Runtime using Python/3.4.3.
[Sun Dec 06 15:00:01.197563 2015] [mpm_event:notice] [pid 20557:tid 139670936528768] AH00489: Apache/2.4.7 (Ubuntu) mod_wsgi/3.4 Python/3.4.3 configured -- resuming normal operations
[Sun Dec 06 15:00:01.197591 2015] [core:notice] [pid 20557:tid 139670936528768] AH00094: Command line: '/usr/sbin/apache2'
[Sun Dec 06 15:00:09.294007 2015] [:error] [pid 20561:tid 139670650316544] Internal Server Error: /vk/projects/
[Sun Dec 06 15:00:09.294064 2015] [:error] [pid 20561:tid 139670650316544] Traceback (most recent call last):
[Sun Dec 06 15:00:09.294070 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py", line 108, in get_response
[Sun Dec 06 15:00:09.294076 2015] [:error] [pid 20561:tid 139670650316544] response = middleware_method(request)
[Sun Dec 06 15:00:09.294114 2015] [:error] [pid 20561:tid 139670650316544] File "/var/www/engine/account/middleware.py", line 29, in process_request
[Sun Dec 06 15:00:09.294120 2015] [:error] [pid 20561:tid 139670650316544] translation.activate(self.get_language_for_user(request))
[Sun Dec 06 15:00:09.294125 2015] [:error] [pid 20561:tid 139670650316544] File "/var/www/engine/account/middleware.py", line 20, in get_language_for_user
[Sun Dec 06 15:00:09.294130 2015] [:error] [pid 20561:tid 139670650316544] if request.user.is_authenticated():
[Sun Dec 06 15:00:09.294135 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 225, in inner
[Sun Dec 06 15:00:09.294141 2015] [:error] [pid 20561:tid 139670650316544] self._setup()
[Sun Dec 06 15:00:09.294145 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 365, in _setup
[Sun Dec 06 15:00:09.294150 2015] [:error] [pid 20561:tid 139670650316544] self._wrapped = self._setupfunc()
[Sun Dec 06 15:00:09.294155 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 22, in <lambda>
[Sun Dec 06 15:00:09.294160 2015] [:error] [pid 20561:tid 139670650316544] request.user = SimpleLazyObject(lambda: get_user(request))
[Sun Dec 06 15:00:09.294176 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 10, in get_user
[Sun Dec 06 15:00:09.294182 2015] [:error] [pid 20561:tid 139670650316544] request._cached_user = auth.get_user(request)
[Sun Dec 06 15:00:09.294187 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 167, in get_user
[Sun Dec 06 15:00:09.294192 2015] [:error] [pid 20561:tid 139670650316544] user_id = _get_user_session_key(request)
[Sun Dec 06 15:00:09.294196 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 59, in _get_user_session_key
[Sun Dec 06 15:00:09.294201 2015] [:error] [pid 20561:tid 139670650316544] return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
[Sun Dec 06 15:00:09.294206 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 150, in get_user_model
[Sun Dec 06 15:00:09.294210 2015] [:error] [pid 20561:tid 139670650316544] return django_apps.get_model(settings.AUTH_USER_MODEL)
[Sun Dec 06 15:00:09.294215 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 199, in get_model
[Sun Dec 06 15:00:09.294236 2015] [:error] [pid 20561:tid 139670650316544] self.check_models_ready()
[Sun Dec 06 15:00:09.294241 2015] [:error] [pid 20561:tid 139670650316544] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 131, in check_models_ready
[Sun Dec 06 15:00:09.294246 2015] [:error] [pid 20561:tid 139670650316544] raise AppRegistryNotReady("Models aren't loaded yet.")
[Sun Dec 06 15:00:09.294253 2015] [:error] [pid 20561:tid 139670650316544] django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
[Sun Dec 06 15:00:09.294263 2015] [:error] [pid 20561:tid 139670650316544]
[Sun Dec 06 15:00:09.321835 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] mod_wsgi (pid=20561): Exception occurred processing WSGI script '/usr/django/engine.wsgi'.
[Sun Dec 06 15:00:09.321920 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] Traceback (most recent call last):
[Sun Dec 06 15:00:09.321989 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py", line 108, in get_response
[Sun Dec 06 15:00:09.321998 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] response = middleware_method(request)
[Sun Dec 06 15:00:09.322033 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/var/www/engine/account/middleware.py", line 29, in process_request
[Sun Dec 06 15:00:09.322042 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] translation.activate(self.get_language_for_user(request))
[Sun Dec 06 15:00:09.322073 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/var/www/engine/account/middleware.py", line 20, in get_language_for_user
[Sun Dec 06 15:00:09.322110 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] if request.user.is_authenticated():
[Sun Dec 06 15:00:09.322153 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 225, in inner
[Sun Dec 06 15:00:09.322162 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self._setup()
[Sun Dec 06 15:00:09.322217 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 365, in _setup
[Sun Dec 06 15:00:09.322226 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self._wrapped = self._setupfunc()
[Sun Dec 06 15:00:09.322261 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 22, in <lambda>
[Sun Dec 06 15:00:09.322269 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] request.user = SimpleLazyObject(lambda: get_user(request))
[Sun Dec 06 15:00:09.322300 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 10, in get_user
[Sun Dec 06 15:00:09.322308 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] request._cached_user = auth.get_user(request)
[Sun Dec 06 15:00:09.322339 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 167, in get_user
[Sun Dec 06 15:00:09.322347 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] user_id = _get_user_session_key(request)
[Sun Dec 06 15:00:09.322379 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 59, in _get_user_session_key
[Sun Dec 06 15:00:09.322387 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
[Sun Dec 06 15:00:09.322436 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 150, in get_user_model
[Sun Dec 06 15:00:09.322445 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] return django_apps.get_model(settings.AUTH_USER_MODEL)
[Sun Dec 06 15:00:09.322477 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 199, in get_model
[Sun Dec 06 15:00:09.322485 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self.check_models_ready()
[Sun Dec 06 15:00:09.322515 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 131, in check_models_ready
[Sun Dec 06 15:00:09.322523 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] raise AppRegistryNotReady("Models aren't loaded yet.")
[Sun Dec 06 15:00:09.322551 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
[Sun Dec 06 15:00:09.322569 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192]
[Sun Dec 06 15:00:09.322575 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] During handling of the above exception, another exception occurred:
[Sun Dec 06 15:00:09.322580 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192]
[Sun Dec 06 15:00:09.322593 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] Traceback (most recent call last):
[Sun Dec 06 15:00:09.322810 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/wsgi.py", line 189, in __call__
[Sun Dec 06 15:00:09.322820 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] response = self.get_response(request)
[Sun Dec 06 15:00:09.322858 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py", line 218, in get_response
[Sun Dec 06 15:00:09.322867 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
[Sun Dec 06 15:00:09.322899 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py", line 261, in handle_uncaught_exception
[Sun Dec 06 15:00:09.322908 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] return debug.technical_500_response(request, *exc_info)
[Sun Dec 06 15:00:09.323343 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 97, in technical_500_response
[Sun Dec 06 15:00:09.323355 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] html = reporter.get_traceback_html()
[Sun Dec 06 15:00:09.323391 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 383, in get_traceback_html
[Sun Dec 06 15:00:09.323399 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] c = Context(self.get_traceback_data(), use_l10n=False)
[Sun Dec 06 15:00:09.323430 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 328, in get_traceback_data
[Sun Dec 06 15:00:09.323438 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] frames = self.get_traceback_frames()
[Sun Dec 06 15:00:09.323468 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 501, in get_traceback_frames
[Sun Dec 06 15:00:09.323484 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] 'vars': self.filter.get_traceback_frame_variables(self.request, tb.tb_frame),
[Sun Dec 06 15:00:09.323521 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 234, in get_traceback_frame_variables
[Sun Dec 06 15:00:09.323530 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] cleansed[name] = self.cleanse_special_types(request, value)
[Sun Dec 06 15:00:09.323560 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/views/debug.py", line 189, in cleanse_special_types
[Sun Dec 06 15:00:09.323568 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] if isinstance(value, HttpRequest):
[Sun Dec 06 15:00:09.323599 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 225, in inner
[Sun Dec 06 15:00:09.323607 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self._setup()
[Sun Dec 06 15:00:09.323637 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 365, in _setup
[Sun Dec 06 15:00:09.323645 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self._wrapped = self._setupfunc()
[Sun Dec 06 15:00:09.323675 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 22, in <lambda>
[Sun Dec 06 15:00:09.323683 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] request.user = SimpleLazyObject(lambda: get_user(request))
[Sun Dec 06 15:00:09.323713 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/middleware.py", line 10, in get_user
[Sun Dec 06 15:00:09.323721 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] request._cached_user = auth.get_user(request)
[Sun Dec 06 15:00:09.323751 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 167, in get_user
[Sun Dec 06 15:00:09.323758 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] user_id = _get_user_session_key(request)
[Sun Dec 06 15:00:09.323787 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 59, in _get_user_session_key
[Sun Dec 06 15:00:09.323795 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
[Sun Dec 06 15:00:09.323825 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/contrib/auth/__init__.py", line 150, in get_user_model
[Sun Dec 06 15:00:09.323833 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] return django_apps.get_model(settings.AUTH_USER_MODEL)
[Sun Dec 06 15:00:09.323863 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 199, in get_model
[Sun Dec 06 15:00:09.323870 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] self.check_models_ready()
[Sun Dec 06 15:00:09.323899 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 131, in check_models_ready
[Sun Dec 06 15:00:09.323915 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] raise AppRegistryNotReady("Models aren't loaded yet.")
[Sun Dec 06 15:00:09.323940 2015] [:error] [pid 20561:tid 139670650316544] [client 87.117.12.160:5192] django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
EDIT: I use the Pinax account app (<https://github.com/pinax/django-user-accounts>);
in that app there are several places that call `get_user_model`. views.py:
def create_user(self, form, commit=True, **kwargs):
    user = get_user_model()(**kwargs)
    username = form.cleaned_data.get("username")
    if username is None:
        username = self.generate_username(form)
    user.username = username
    user.email = form.cleaned_data["email"].strip()
    password = form.cleaned_data.get("password")
    if password:
        user.set_password(password)
    else:
        user.set_unusable_password()
    if commit:
        user.save()
    return user
and more:
def send_email(self, email):
    User = get_user_model()
    protocol = getattr(settings, "DEFAULT_HTTP_PROTOCOL", "http")
    current_site = get_current_site(self.request)
    email_qs = EmailAddress.objects.filter(email__iexact=email)
    for user in User.objects.filter(pk__in=email_qs.values("user")):
        uid = int_to_base36(user.id)
        token = self.make_token(user)
        password_reset_url = "{0}://{1}{2}".format(
            protocol,
            current_site.domain,
            reverse("account_password_reset_token", kwargs=dict(uidb36=uid, token=token))
        )
        ctx = {
            "user": user,
            "current_site": current_site,
            "password_reset_url": password_reset_url,
        }
        hookset.send_password_reset_email([user.email], ctx)
and a third:

def get_user(self):
    try:
        uid_int = base36_to_int(self.kwargs["uidb36"])
    except ValueError:
        raise Http404()
    return get_object_or_404(get_user_model(), id=uid_int)
Answer: Try upgrading to Django 1.7 first and check the deprecation warnings; they
should point you to the guilty module quickly.
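One thing worth checking on the server (an assumption on my part, since your
wsgi file is not shown): under mod_wsgi, Django 1.7+ requires the wsgi script
to build the application with `get_wsgi_application()`, which calls
`django.setup()` and populates the app registry before the first request; the
old pre-1.7 `WSGIHandler()` style raises exactly this `AppRegistryNotReady`
error. A minimal `engine.wsgi` would look like:

import os

# point at the project settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "engine.settings")

# get_wsgi_application() calls django.setup(), which loads the app
# registry; the old django.core.handlers.wsgi.WSGIHandler() skips it
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()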
* * *
filter strings by regex in a list
Question: I'd like to filter a list of strings in python by using regex. In the
following case, keeping only the files with a '.npy' extension.
The code that doesn't work:
import re

files = [ '/a/b/c/la_seg_x005_y003.png',
          '/a/b/c/la_seg_x005_y003.npy',
          '/a/b/c/la_seg_x004_y003.png',
          '/a/b/c/la_seg_x004_y003.npy',
          '/a/b/c/la_seg_x003_y003.png',
          '/a/b/c/la_seg_x003_y003.npy', ]

regex = re.compile(r'_x\d+_y\d+\.npy')
selected_files = filter(regex.match, files)
print(selected_files)
The same regex works for me in Ruby:
selected = files.select { |f| f =~ /_x\d+_y\d+\.npy/ }
What's wrong with the Python code?
Answer: Just use `search`, since `match` only matches at the _beginning_ of the
string, while `search` matches anywhere in the string.
import re

files = [ '/a/b/c/la_seg_x005_y003.png',
          '/a/b/c/la_seg_x005_y003.npy',
          '/a/b/c/la_seg_x004_y003.png',
          '/a/b/c/la_seg_x004_y003.npy',
          '/a/b/c/la_seg_x003_y003.png',
          '/a/b/c/la_seg_x003_y003.npy', ]

regex = re.compile(r'_x\d+_y\d+\.npy')
selected_files = filter(regex.search, files)
print(selected_files)
Output:

['/a/b/c/la_seg_x005_y003.npy', '/a/b/c/la_seg_x004_y003.npy', '/a/b/c/la_seg_x003_y003.npy']
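Note that the output above is from Python 2. In Python 3, `filter` returns a
lazy iterator, so wrap it in `list` to materialize the result:

# Python 3: filter() is lazy, so build the list explicitly
selected_files = list(filter(regex.search, files))
print(selected_files)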
* * *
HDF5 library and header mismatch error
Question: I am using Anaconda on Ubuntu x64. When I run a simple python program, I get
this error message and a [python] kernel dump. I have seen other questions
with similar problems, but all the answers I see don't resolve my issue. I
have tried removing and reinstalling h5py with both `pip` and `conda`, but I
get the same error:
idf@DellInsp:~/Documents/Projects/python3$ python3 testtables.py
With this code inside testtables.py
import tables
tables.test()
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
PyTables version: 3.2.2
HDF5 version: 1.8.11
NumPy version: 1.10.1
Numexpr version: 2.4.4 (not using Intel's VML/MKL)
Zlib version: 1.2.8 (in Python interpreter)
Blosc version: 1.4.4 (2015-05-05)
Blosc compressors: blosclz (1.0.2.1), lz4 (1.2.0), lz4hc (1.2.0), snappy (1.1.1), zlib (1.2.8)
Cython version: 0.23.4
Python version: 3.5.0 |Anaconda 2.4.0 (64-bit)| (default, Oct 19 2015, 21:57:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
Platform: Linux-3.19.0-39-lowlatency-x86_64-with-debian-jessie-sid
Byte-ordering: little
Detected cores: 4
Default encoding: utf-8
Default FS encoding: utf-8
Default locale: (en_US, UTF-8)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Performing only a light (yet comprehensive) subset of the test suite.
If you want a more complete test, try passing the --heavy flag to this script
(or set the 'heavy' parameter in case you are using tables.test() call).
The whole suite will take more than 4 hours to complete on a relatively
modern CPU and around 512 MB of main memory.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
/home/idf/anaconda3/lib/python3.5/site-packages/tables/filters.py:292: FiltersWarning: compression library ``bzip2`` is not available; using ``zlib`` instead
% (complib, default_complib), FiltersWarning)
/home/idf/anaconda3/lib/python3.5/site-packages/tables/filters.py:292: FiltersWarning: compression library ``lzo`` is not available; using ``zlib`` instead
% (complib, default_complib), FiltersWarning)
/home/idf/anaconda3/lib/python3.5/site-packages/tables/atom.py:570: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
for arg in inspect.getargspec(self.__init__)[0]
Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.8.11, library is 1.8.15
SUMMARY OF THE HDF5 CONFIGURATION
=================================
General Information:
-------------------
HDF5 Version: 1.8.15-patch1
Configured on: Wed Oct 14 16:46:37 CDT 2015
Configured by: [email protected]
Configure mode: production
Host system: x86_64-unknown-linux-gnu
Uname information: Linux centos5x64.corp.continuum.io 2.6.18-400.1.1.el5 #1 SMP Thu Dec 18 00:59:53 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
Byte sex: little-endian
Libraries: shared
Installation point: /home/ilan/minonda/envs/_build
Compiling Options:
------------------
Compilation Mode: production
C Compiler: /usr/bin/gcc ( gcc (GCC) 4.1.2 20080704 )
CFLAGS:
H5_CFLAGS: -std=c99 -pedantic -Wall -Wextra -Wundef -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-qual -Wcast-align -Wwrite-strings -Wconversion -Waggregate-return -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wredundant-decls -Wnested-externs -Winline -Wno-long-long -Wfloat-equal -Wmissing-format-attribute -Wmissing-noreturn -Wpacked -Wdisabled-optimization -Wformat=2 -Wunreachable-code -Wendif-labels -Wdeclaration-after-statement -Wold-style-definition -Winvalid-pch -Wvariadic-macros -Wnonnull -Winit-self -Wmissing-include-dirs -Wswitch-default -Wswitch-enum -Wunused-macros -Wunsafe-loop-optimizations -Wc++-compat -Wvolatile-register-var -O3 -fomit-frame-pointer -finline-functions
AM_CFLAGS:
CPPFLAGS:
H5_CPPFLAGS: -D_DEFAULT_SOURCE -D_BSD_SOURCE -D_GNU_SOURCE -D_POSIX_C_SOURCE=200112L -DNDEBUG -UH5_DEBUG_API
AM_CPPFLAGS: -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE
Shared C Library: yes
Static C Library: no
Statically Linked Executables: no
LDFLAGS:
H5_LDFLAGS:
AM_LDFLAGS:
Extra libraries: -lrt -lz -ldl -lm
Archiver: ar
Ranlib: ranlib
Debugged Packages:
API Tracing: no
Languages:
----------
Fortran: no
C++: yes
C++ Compiler: /usr/bin/g++ ( g++ (GCC) 4.1.2 20080704 )
C++ Flags:
H5 C++ Flags:
AM C++ Flags:
Shared C++ Library: yes
Static C++ Library: no
Features:
---------
Parallel HDF5: no
High Level library: yes
Threadsafety: no
Default API Mapping: v18
With Deprecated Public Symbols: yes
I/O filters (external): deflate(zlib)
MPE: no
Direct VFD: no
dmalloc: no
Clear file buffers before write: yes
Using memory checker: no
Function Stack Tracing: no
Strict File Format Checks: no
Optimization Instrumentation: no
Bye...
Aborted (core dumped)
idf@DellInsp:~/Documents/Projects/python3$
Answer: I removed and then reinstalled the latest version of Anaconda. This resolved
the issue.
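If a full reinstall feels too drastic, it may be worth first trying to realign
the HDF5 stack inside the existing install (an untested alternative; `pytables`
and `h5py` are the conda package names):

conda remove pytables h5py
conda install pytables h5py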
* * *
Python using numpy ~isnan()
Question: I'm new to Python. I'm trying to reduce the following piece of code:

import numpy as np

data = np.loadtxt('logfilename.txt')
x = data[:,0]
x = x[~np.isnan(x)]

to something like this:

import numpy as np

data = np.loadtxt('logfilename.txt')
x = data[~np.isnan(data[:,0])]

But that doesn't work. Can anyone help me?
Very best
Answer: Following your code, you need to replace `x` with `data[:,0]`, so it should be:

x = data[:,0][~np.isnan(data[:,0])]

That's repetitive, why would you want that? Stay with the first one!
(Your attempt runs, but it does something different: `data[~np.isnan(data[:,0])]`
uses the mask to select whole _rows_ of the 2-D array, not the values of the
first column.)
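To see the difference on a tiny made-up array (purely illustrative):

import numpy as np

data = np.array([[1.0, 2.0],
                 [np.nan, 3.0]])

print(data[~np.isnan(data[:, 0])])        # 2-D, keeps whole rows:    [[ 1.  2.]]
print(data[:, 0][~np.isnan(data[:, 0])])  # 1-D, first-column values: [ 1.]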
* * *
TypeError when importing Python's decimal module
Question: What causes the TypeError when importing the decimal module?
[Michael@devserver MyScripts]$ cat decTest.py
from decimal import *
#item = Decimal( 0.70 )
[Michael@devserver MyScripts]$ python3.3 decTest.py
Traceback (most recent call last):
  File "decTest.py", line 1, in <module>
    from decimal import *
  File "/usr/local/lib/python3.3/decimal.py", line 433, in <module>
    import threading
  File "/usr/local/lib/python3.3/threading.py", line 6, in <module>
    from time import sleep as _sleep
  File "/var/www/python/ineasysteps/MyScripts/time.py", line 3, in <module>
    today = datetime.today()
TypeError: an integer is required (got type datetime.time)
[Michael@devserver MyScripts]$
Answer: You have a file named "time.py" in your own folder, which shadows the built-in
time module. Notice how the stack trace shows the threading module importing
"time" and landing in /var/www/python/ineasysteps/MyScripts/time.py instead.
Rename your "time.py" to something that is not the name of a built-in module.
* * *
Pickling method descriptor objects in python
Question: I am trying to pickle a `method_descriptor`.
Pickling with `pickle` or `cloudpickle` fails:
Python 2.7.10 |Continuum Analytics, Inc.| (default, Oct 19 2015, 18:04:42)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import pickle, cloudpickle
>>> pickle.dumps(set.union)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/pickle.py", line 1374, in dumps
    Pickler(file, protocol).dump(obj)
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: cannot pickle method_descriptor objects
>>> cloudpickle.dumps(set.union)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 602, in dumps
    cp.dump(obj)
  File "/home/pmd/anaconda3/envs/python2/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 111, in dump
    raise pickle.PicklingError(msg)
pickle.PicklingError: Could not pickle object as excessively deep recursion required.
Importing `dill` somehow makes `pickle` work, as shown below:
>>> import dill
>>> pickle.dumps(set.union)
'cdill.dill\n_getattr\np0\n(c__builtin__\nset\np1\nS\'union\'\np2\nS"<method \'union\' of \'set\' objects>"\np3\ntp4\nRp5\n.'
>>> f = pickle.loads(pickle.dumps(set.union))
>>> set.union(set([1,2]), set([3]))
set([1, 2, 3])
>>> f(set([1,2]), set([3]))
set([1, 2, 3])
The issue in `cloudpickle` remains even after the `dill` import:
>>> cloudpickle.dumps(set.union)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pmd/anaconda3/envs/python2/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 602, in dumps
cp.dump(obj)
File "/home/pmd/anaconda3/envs/python2/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 111, in dump
raise pickle.PicklingError(msg)
pickle.PicklingError: Could not pickle object as excessively deep recursion required.
In my application I rely on `cloudpickle` to handle functions with globals. So
my question is, how can I get `cloudpickle` to work for `method_descriptor`
objects in Python 2.7?
EDIT: I noticed that the same issue occurs in Python 3.3, but is not present
in Python 3.5.
Answer: I'm the `dill` author. When you do `import dill`, it injects the serialization
registry from `dill` into `pickle` (basically, puts all the `copy_reg`-type
knowledge from `dill` into the `pickle` registry).
>>> import pickle
>>> pickle.Pickler.dispatch
{<type 'function'>: <function save_global at 0x105d0c7d0>, <type 'dict'>: <function save_dict at 0x105d0c668>, <type 'int'>: <function save_int at 0x105d0c230>, <type 'long'>: <function save_long at 0x105d0c2a8>, <type 'list'>: <function save_list at 0x105d0c578>, <type 'str'>: <function save_string at 0x105d0c398>, <type 'unicode'>: <function save_unicode at 0x105d0c410>, <type 'instance'>: <function save_inst at 0x105d0c758>, <type 'type'>: <function save_global at 0x105d0c7d0>, <type 'NoneType'>: <function save_none at 0x105d0c140>, <type 'bool'>: <function save_bool at 0x105d0c1b8>, <type 'tuple'>: <function save_tuple at 0x105d0c488>, <type 'float'>: <function save_float at 0x105d0c320>, <type 'classobj'>: <function save_global at 0x105d0c7d0>, <type 'builtin_function_or_method'>: <function save_global at 0x105d0c7d0>}
>>> import dill
>>> pickle.Pickler.dispatch
{<class '_pyio.BufferedReader'>: <function save_file at 0x106c8b848>, <class '_pyio.TextIOWrapper'>: <function save_file at 0x106c8b848>, <type 'operator.itemgetter'>: <function save_itemgetter at 0x106c8b578>, <type 'weakproxy'>: <function save_weakproxy at 0x106c8c050>, <type 'NoneType'>: <function save_none at 0x105d0c140>, <type 'str'>: <function save_string at 0x105d0c398>, <type 'file'>: <function save_file at 0x106c8b8c0>, <type 'classmethod'>: <function save_classmethod at 0x106c8c230>, <type 'float'>: <function save_float at 0x105d0c320>, <type 'instancemethod'>: <function save_instancemethod0 at 0x106c8ba28>, <type 'cell'>: <function save_cell at 0x106c8bb18>, <type 'member_descriptor'>: <function save_wrapper_descriptor at 0x106c8bc08>, <type 'slice'>: <function save_slice at 0x106c8bc80>, <type 'dict'>: <function save_module_dict at 0x106c8b410>, <type 'long'>: <function save_long at 0x105d0c2a8>, <type 'code'>: <function save_code at 0x106c8b320>, <type 'type'>: <function save_type at 0x106c8c0c8>, <type 'xrange'>: <function save_singleton at 0x106c8bde8>, <type 'builtin_function_or_method'>: <function save_builtin_method at 0x106c8b9b0>, <type 'classobj'>: <function save_classobj at 0x106c8b488>, <type 'weakref'>: <function save_weakref at 0x106c8bed8>, <type 'getset_descriptor'>: <function save_wrapper_descriptor at 0x106c8bc08>, <type 'weakcallableproxy'>: <function save_weakproxy at 0x106c8c050>, <class '_pyio.BufferedRandom'>: <function save_file at 0x106c8b848>, <type 'int'>: <function save_int at 0x105d0c230>, <type 'list'>: <function save_list at 0x105d0c578>, <type 'functools.partial'>: <function save_functor at 0x106c8b7d0>, <type 'bool'>: <function save_bool at 0x105d0c1b8>, <type 'function'>: <function save_function at 0x106c8b398>, <type 'thread.lock'>: <function save_lock at 0x106c8b500>, <type 'super'>: <function save_functor at 0x106c8b938>, <type 'staticmethod'>: <function save_classmethod at 0x106c8c230>, <type 'module'>: <function save_module at 0x106c8bf50>, <type 'method_descriptor'>: <function save_wrapper_descriptor at 0x106c8bc08>, <type 'operator.attrgetter'>: <function save_attrgetter at 0x106c8b5f0>, <type 'wrapper_descriptor'>: <function save_wrapper_descriptor at 0x106c8bc08>, <type 'numpy.ufunc'>: <function save_numpy_ufunc at 0x106c8bcf8>, <type 'method-wrapper'>: <function save_instancemethod at 0x106c8baa0>, <type 'instance'>: <function save_inst at 0x105d0c758>, <type 'cStringIO.StringI'>: <function save_stringi at 0x106c8b6e0>, <type 'unicode'>: <function save_unicode at 0x105d0c410>, <class '_pyio.BufferedWriter'>: <function save_file at 0x106c8b848>, <type 'property'>: <function save_property at 0x106c8c140>, <type 'ellipsis'>: <function save_singleton at 0x106c8bde8>, <type 'tuple'>: <function save_tuple at 0x105d0c488>, <type 'cStringIO.StringO'>: <function save_stringo at 0x106c8b758>, <type 'NotImplementedType'>: <function save_singleton at 0x106c8bde8>, <type 'dictproxy'>: <function save_dictproxy at 0x106c8bb90>}
`cloudpickle` has (slightly) different pickling functions than `dill`, and if
you are using `cloudpickle`, it pushes its own serialization functions into the
`pickle` registry. If you want to get `cloudpickle` to work for you, you might
be able to monkeypatch a solution… essentially install a module within your
application that does `import dill as cloudpickle` (nice reference:
<http://blog.dscpl.com.au/2015/03/safely-applying-monkey-patches-in-python.html>)…
but that would replace the entire use of `cloudpickle` with `dill` in your
application context. You could also try a monkeypatch along these lines:
>>> # first import dill, which populates itself into pickle's dispatch
>>> import dill
>>> import pickle
>>> # save the method_descriptor entry that dill registered
>>> MethodDescriptorType = type(type.__dict__['mro'])
>>> MethodDescriptorWrapper = pickle.Pickler.dispatch[MethodDescriptorType]
>>> # cloudpickle does the same, so let it update the dispatch table
>>> import cloudpickle
>>> # now, put the saved method_descriptor entry back in
>>> pickle.Pickler.dispatch[MethodDescriptorType] = MethodDescriptorWrapper
Note that if you are going to use `cloudpickle.dumps` directly, you'd have to
overload the registry in `cloudpickle` directly by doing the above monkeypatch
on `cloudpickle.CloudPickler.dispatch`.
I don't guarantee that it _will_ work, nor do I guarantee that it won't screw
up other objects from `cloudpickle` (essentially, I haven't tried it), but
it's a potential route to replacing the offending `cloudpickle` wrapper with
the one from `dill`.
If you want the short answer, I'd say (at least for this case) use `dill`. ;)
* * *
**EDIT** with regard to `copyreg`:
Here's what's in `dill`:

def _getattr(objclass, name, repr_str):
    # hack to grab the reference directly
    try:
        attr = repr_str.split("'")[3]
        return eval(attr+'.__dict__["'+name+'"]')
    except:
        attr = getattr(objclass, name)
        if name == '__dict__':
            attr = attr[name]
        return attr

which is used to register a function with a lower-level reduce function
(directly on the pickler instance); `obj` is the object to pickle:

pickler.save_reduce(_getattr, (obj.__objclass__, obj.__name__, obj.__repr__()), obj=obj)
I believe this translates to a reduce method (used directly in
`copyreg.pickle`) like this:

def _reduce_method_descriptor(obj):
    return _getattr, (obj.__objclass__, obj.__name__, obj.__repr__())
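For completeness, a sketch of how such a reduce function could be registered
with the standard machinery (my own illustration, not code from `dill`; the
module is spelled `copy_reg` in Python 2 and `copyreg` in Python 3):

import copy_reg  # `copyreg` on Python 3

# register the reducer for the method_descriptor type so that the
# stock pickle machinery can serialize objects like set.union
method_descriptor = type(set.union)
copy_reg.pickle(method_descriptor, _reduce_method_descriptor)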
* * *
Random values in dictionary- python
Question: I have this `OrderedDict`:

(
    [
        ('Joe', ['90', '0']),
        ('Alice', ['100', '0']),
        ('Eva', ['90', '5']),
        ('Clare', ['100', '10']),
        ('Bob', ['100', '5'])
    ]
)

I have to create a function that takes the `OrderedDict` as an argument,
generates a random performance value for each name using a normal distribution,
and returns these in an `OrderedDict` with names as keys.
The result should look like this:

([("Joe",91.362718),("Bob",100.0)...)
Answer: Do you mean you would like to have something like this:

from collections import OrderedDict
from scipy.stats import norm

ordered_dict = OrderedDict(
    [
        ('Joe', ['90', '0']),
        ('Alice', ['100', '0']),
        ('Eva', ['90', '5']),
        ('Clare', ['100', '10']),
        ('Bob', ['100', '5'])
    ]
)

def draw_normal(input_dict):
    result = []
    for key in input_dict:
        loc, scale = input_dict[key]
        random_number = norm.rvs(loc=float(loc), scale=float(scale))
        result.append((key, random_number))
    return OrderedDict(result)

print draw_normal(ordered_dict)

The output:

OrderedDict([('Joe', 90.0), ('Alice', 100.0), ('Eva', 93.55249306218722), ('Clare', 105.280646961399), ('Bob', 104.29299844957707)])
* * *
I am trying to edit a function to accept user input options 1, 2, 3, or 4 through pygame interaction, based on Python 3 knowledge
Question: I was wondering if anyone had an example of code where the game asks a
question and the user has to input 1, 2, 3, or 4. I am not sure how to accept
user input in pygame and then proceed to the next question. I have code that
asks the user to press any key to proceed, but I am specifically aiming for a
particular input. I have this function but am not sure how to change it to
select a number/string 1, 2, 3, or 4. I am also trying to have that input
number play against a computer pick made with random.randint, to see if the
player matches the number and wins. I am doing this through pygame, using
graphics and such. Thanks in advance.
import pygame, random, sys
from pygame.locals import *

mainClock = pygame.time.Clock()

def waitForPlayerToSelect():
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                leave()
            if event.type == KEYDOWN:
                if event.key == K_ESCAPE: # pressing escape quits
                    leave()
                return
Answer: Welcome to Stack Overflow! This is a great site for asking questions. The idea
of Stack Overflow is to come here after you have done as much looking as you
can. A quick search on Stack Overflow for `'pygame keyboard input'` led me to
this answer, which answers your question:
<http://stackoverflow.com/a/29068383/2605424>
For the future, please refer to [www.pygame.org](https://www.pygame.org/docs/);
its docs include lots of information about this exact thing.
However, this time I will give you further insight into how you may accomplish
what you are wanting.
import pygame, random, sys
from pygame.locals import *

mainClock = pygame.time.Clock()

def waitForPlayerToSelect():
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                leave()
            if event.type == KEYDOWN:
                if event.key == K_ESCAPE: # pressing escape quits
                    leave()
                elif event.key == K_1:
                    return 1
                elif event.key == K_2:
                    return 2
                return
For this to work, from the place where you call it just use something similar
to:
playerSelection = waitForPlayerToSelect()
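To tie this into the random-number game from the question, a minimal sketch
(the winning rule here is just an assumption for illustration) could be:

    import random

    playerSelection = waitForPlayerToSelect()  # returns 1, 2, 3 or 4
    computerNumber = random.randint(1, 4)      # note: randint, not "randit"
    if playerSelection == computerNumber:
        print("You matched the number and win!")
    else:
        print("No match - the computer drew %d" % computerNumber)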
|
PyCharm ImportError
Question: New to Python3.5 (and MacOS) and have begun by using the PyCharm IDE. I'm
using an example from [Download file from web in Python
3](http://stackoverflow.com/questions/7243750/download-file-from-web-in-
python-3) to download a file, but it fails at the first statement:
import urllib.request
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read() # a `bytes` object
text = data.decode('utf-8')
/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 /Users/nevd/PycharmProjects/py3/download.py
Traceback (most recent call last):
File "/Users/nevd/PycharmProjects/py3/download.py", line 4, in <module>
import urllib.request
File "/Users/nevd/PycharmProjects/py3/urllib.py", line 9, in <module>
import urllib.request
ImportError: No module named 'urllib.request'; 'urllib' is not a package
However the code works OK from Terminal, so I assume the problem is a PyCharm
configuration issue. Under `PyCharm File>Default Settings>Default
Project>Project Interpreter` the project interpreter is 3.5.0 but below that I
only see 2 Packages: `pip 7.1.2` & `setuptools 18.2`.
Mac OS X v10.11.1 (El Capitan) and PyCharm Community 5.0.1
Answer: You don't have to change any settings. Just rename your file from `urllib.py`
to something else: your script `/Users/nevd/PycharmProjects/py3/urllib.py`
shadows the standard library's `urllib` package, so `import urllib.request`
picks up your own file instead of the library (as the traceback shows).
> Rename your file
|
how to handle command line input options in python?
Question: I need a script which can take input via the command line. The input will be a
file name, and a default will be used if no input is provided. The script will
also check for the existence of the file.
Name of the script is : **wrapper.py**
input example :
python wrapper.py -config abc.txt
my try:
    import argparse

    commandLineArgumentParser = argparse.ArgumentParser()
    commandLineArgumentParser.add_argument("-config", "--configfile", help="first Name")
    commandLineArguments = commandLineArgumentParser.parse_args()
    config = commandLineArguments.configfile
Answer: You should put you code which loads configuration into a function. Then you
should pass file name to it and return all configuration data.
So, start with:
    import ConfigParser  # configparser in Python 3

    def read_configuration_from_file(filename):
        config_parser = ConfigParser.ConfigParser()
        config_parser.read(filename)
Once you read information from config_parser, you have to figure out how to
return the configuration. There are many options. You could return a
dictionary, or you could create your own object which contains configuration.
Option 1 (dictionary):
return {'first_name': first_name, 'last_name': last_name}
Option 2 (configuration data class):
class ConfigurationData(object):
def __init__(self, first_name, last_name):
self.first_name = first_name
self.last_name = last_name
and then you just `return ConfigurationData(first_name=first_name,
last_name=last_name)`.
You can also merge it all and make the _read_ function part of the class:
class ConfigurationData(object):
def __init__(self, first_name, last_name):
self.first_name = first_name
self.last_name = last_name
@staticmethod
def read_configuration_from_file(filename):
config_parser = ConfigParser.ConfigParser()
config_parser.read(filename)
...
return ConfigurationData(first_name=first_name, last_name=last_name)
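For the command-line handling itself, a minimal sketch with a default value and
an existence check (the default `abc.txt` mirrors the question's example) could
look like:

    import argparse
    import os.path

    parser = argparse.ArgumentParser()
    parser.add_argument("-config", "--configfile", default="abc.txt",
                        help="configuration file name")
    args = parser.parse_args()

    if not os.path.isfile(args.configfile):
        raise SystemExit("No such file: %s" % args.configfile)
    config = ConfigurationData.read_configuration_from_file(args.configfile)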
|
Why is running a .compute() in dask causing "Fatal Python error: GC object already tracked"
Question: I am running Windows 10 with Jupyter notebook version 4.0.6 with Python 2.7.10
and Anaconda 2.4.0 (64-bit)
I am following a blog/tutorial at
<https://jakevdp.github.io/blog/2015/08/14/out-of-core-dataframes-in-python/>
:
from dask import dataframe as dd
columns = ["name", "amenity", "Longitude", "Latitude"]
data = dd.read_csv("POIWorld.csv", usecols=columns)
with_name = data[data.name.notnull()]
with_amenity = data[data.amenity.notnull()]
is_starbucks = with_name.name.str.contains('[Ss]tarbucks')
is_dunkin = with_name.name.str.contains('[Dd]unkin')
starbucks = with_name[is_starbucks]
dunkin = with_name[is_dunkin]
dd.compute(starbucks.name.count(), dunkin.name.count())
This last statement causes an error to come up in my command prompt session
running Jupyter as follows:
> Fatal Python error: GC object already tracked
Reading similar questions it could be a possible issue in the source code for
dask dealing with Python handling memory, I'm hoping I'm just missing
something.
I had a previous issue with headers and dask in this tutorial and had to run:
pip install git+https://github.com/blaze/dask.git --upgrade
Similar questions that do not help:
[Fatal Python error: GC object already
tracked](http://stackoverflow.com/questions/33480124/fatal-python-error-gc-
object-already-tracked)
[Debugging Python Fatal Error: GC Object already
Tracked](http://stackoverflow.com/questions/23178606/debugging-python-fatal-
error-gc-object-already-tracked)
Answer: Some versions of Pandas do not handle multiple threads well, especially for
`pandas.read_csv`. These are fixed in recent versions of Pandas so this
problem can probably be resolved by one of the following:
conda install pandas
pip install pandas --upgrade
|
how can I add field in serializer?
Question: Below is my `serializer.py` file:
from rest_framework import serializers
class TaskListSerializer(serializers.Serializer):
id = serializers.CharField()
user_id = serializers.CharField()
status = serializers.CharField()
name = serializers.CharField()
In Python shell, I input this:
>>> from serializer import TaskListSerializer as ts
>>> result = ts({'id':1, 'user_id': 1, 'status':2, 'name': 'bob'})
>>> result.data
{'status': u'2', 'user_id': u'1', 'id': u'1', 'name': u'bob'}
Now I want to do this:
First, the input does not change; it is still `{'id':1, 'user_id': 1, 'status':2,
'name': 'bob'}`.
But I want to add a `field` and change `name: bob` to `jim` in `serializer.py`
and make the output look like this:
`{'status': u'2', 'user_id': u'1', 'id': u'1', 'name': u'jim', u'age': '15'}`
How can I do it in `serializer.py`?
Answer: Use serializers.SerializerMethodField()
class TaskListSerializer(serializers.ModelSerializer):
complex_things = serializers.SerializerMethodField()
def get_complex_things(self, obj):
result_of_complex_things = 2 + 2
return result_of_complex_things
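Applied to the question's serializer, a minimal sketch could look like this
(hard-coding 'jim' and '15' purely for illustration):

    from rest_framework import serializers

    class TaskListSerializer(serializers.Serializer):
        id = serializers.CharField()
        user_id = serializers.CharField()
        status = serializers.CharField()
        name = serializers.SerializerMethodField()
        age = serializers.SerializerMethodField()

        def get_name(self, obj):
            return 'jim'  # replace the incoming name

        def get_age(self, obj):
            return '15'   # field that does not exist in the input

By default a `SerializerMethodField` looks for a method named
`get_<field_name>` on the serializer.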
|
install pyshark on python 3.5
Question: I've installed Python 3.5 on Mac OS X (El Capitan). I want to import the
pyshark module in Python, but I get an error. I installed the pyshark
requirements (logbook, lxml, trollius, py) but I still couldn't import the
pyshark module.
pip3 list >>
asyncio (3.4.3)
futures (3.0.3)
Logbook (0.12.3)
lxml (3.5.0)
pip (7.1.2)
py (1.4.30)
pyshark (0.3.6)
setuptools (18.2)
trollius (2.0)
Error when importing the pyshark module:
>>> import pyshark
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pyshark/__init__.py", line 1, in <module>
from pyshark.capture.live_capture import LiveCapture
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pyshark/capture/live_capture.py", line 1, in <module>
from pyshark.capture.capture import Capture
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pyshark/capture/capture.py", line 6, in <module>
import trollius as asyncio
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/__init__.py", line 21, in <module>
from .base_events import *
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/base_events.py", line 39, in <module>
from . import coroutines
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/coroutines.py", line 15, in <module>
from . import futures
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/futures.py", line 116, in <module>
class Future(object):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/futures.py", line 426, in Future
__await__ = __iter__ # make compatible with 'await' expression
NameError: name '__iter__' is not defined
Importing the trollius module also gives an error:
>>> import trollius
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/__init__.py", line 21, in <module>
from .base_events import *
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/base_events.py", line 39, in <module>
from . import coroutines
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/coroutines.py", line 15, in <module>
from . import futures
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/futures.py", line 116, in <module>
class Future(object):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/trollius/futures.py", line 426, in Future
__await__ = __iter__ # make compatible with 'await' expression
NameError: name '__iter__' is not defined
Could anyone help me out? I can't figure out what the problem is. Thanks in
advance.
Answer: This is reported as a bug in <https://github.com/haypo/trollius/issues/4>
Until it is resolved you can temporalily resolve it by installing [an
unofficial development
version](https://github.com/ludios/trollius/commit/f757a29815c2c9f5c3a691135ff758649fe84098)
from ludios by doing:
pip install -U git+https://github.com/ludios/trollius@f757a29815c2c9f5c3a691135ff758649fe84098#egg=trollius
Remember that this is a temporary and unofficial solution only. So hit the
"subscribe" button on the issue page:
<https://github.com/haypo/trollius/issues/4> to subscribe to notifications.
When the issue is officially resolved, use the official solution.
|
how to check whether raw_input is integer, string and date in python
Question: I want to check whether my user input is an integer, a string or a date.
data1 = raw_input("data = ")
print(data1)
if isinstance(data1, datetime.datetime.strptime(data1, '%Y-%m-%d')):
print("date it is")
elif isinstance(data1, int):
print("int it is")
elif isinstance(data1, basestring):
print("str it is")
But it is showing:
time data '10' does not match format '%Y-%m-%d'
Answer: 1. [`datetime.datetime.strptime()`](https://docs.python.org/2/library/datetime.html#datetime.datetime) will raise `ValueError` if the string does not match the format.
2. [`raw_input()`](https://docs.python.org/2/library/functions.html#raw_input) always return string object, so [`isinstance(data1, int)`](https://docs.python.org/2/library/functions.html#isinstance) always is `False` if you don't covert the `data1` to int object before check.
3. I'd suggest use [`try...except`](https://docs.python.org/2/tutorial/errors.html#handling-exceptions) to catch the `ValueError` like the following code:
import datetime
data1 = raw_input("data = ")
print(data1)
try:
datetime.datetime.strptime(data1, '%Y-%m-%d')
except ValueError:
try:
int(data1)
except ValueError:
print("str it is")
else:
print("int it is")
else:
print("date it is")
Demo:
kevin@Arch ~> python2 input_check.py
data = 2014-01-02
2014-01-02
date it is
kevin@Arch ~> python2 input_check.py
data = 12
12
int it is
kevin@Arch ~> python2 input_check.py
data = foobar
foobar
str it is
|
Merging duplicate lists and deleting a field in each list depending on the value in Python
Question: I am still a beginner in Python. I have a tuple to be filtered, merged and
sorted. The tuple looks like this:
id, ts,val
tup = [(213,5,10.0),
(214,5,20.0),
(215,5,30.0),
(313,5,60.0),
(314,5,70.0),
(315,5,80.0),
(213,10,11.0),
(214,10,21.0),
(215,10,31.0),
(313,10,61.0),
(314,10,71.0),
(315,10,81.0),
(315,15,12.0),
(314,15,22.0),
(215,15,32.0),
(313,15,62.0),
(214,15,72.0),
(213,15,82.0)] and so on
Description of the list: the first column (id) can have only these 6 values:
213, 214, 215, 313, 314, 315, but in any order. The second column (ts) has the
same value for every group of 6 rows. The third column (val) holds arbitrary
floating point values.
Now my final result should be something like this:
result = [(5,10.0,20.0,30.0,60.0,70.0,80.0),
(10,11.0,21.0,31.0,61.0,71.0,81.0),
(15,82.0,72.0,32.0,62.0,22.0,12.0)]
That is the first column in each row is to be deleted. There should be only
one unique row for each unique value in the second column. so the order of
each result row should be:
(ts,val corresponding to id 213,val corresponding to 214, corresponding to id 215,val corresponding to 313,corresponding to id 314,val corresponding to 315)
Note: I am restricted to using only the standard Python libraries, so pandas
and numpy cannot be used.
I tried a lot of possibilities but couldn't solve it. Please help me do this.
Thanks in advance.
Answer: You can use
[itertools.groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby)
    from itertools import groupby

    result = []
    for ts, g in groupby(lst, lambda x: x[1]):
        # sort each ts-group by id, then keep only the val column
        group = [ts] + map(lambda x: x[-1], sorted(g, key=lambda x: x[0]))
        result.append(tuple(group))
    print result
Output:
[(5, 10.0, 20.0, 30.0, 60.0, 70.0, 80.0),
(10, 11.0, 21.0, 31.0, 61.0, 71.0, 81.0),
(15, 82.0, 72.0, 32.0, 62.0, 22.0, 12.0)]
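One caveat worth knowing: `itertools.groupby` only groups _consecutive_ items, so this relies on the rows already arriving in blocks that share the same ts value (as they do here). If that ordering is not guaranteed, sort first with `lst.sort(key=lambda x: x[1])`.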
|
Python: appending a counter to a csv file
Question: I am working on a project with data (CSV) I gathered from last.fm. In the
dataset there are four columns: the first is the artist, the second the album,
the third is the song name and the fourth is the date at which I scrobbled the
track to last.fm. I have already found a way of counting the number of
occurrences of each artist, album and song, but I would like to append this
data to each data row, so I would end up with a CSV file that has 7 columns.
In each row I want to add the number of times that the song, artist and album
appear in the dataset. I just cannot figure out how to do this; I have a hard
time getting the right artist out of the counter. Can someone help?
import csv
import collections
artists = collections.Counter()
album = collections.Counter()
song = collections.Counter()
with open('lastfm.csv') as input_file:
for row in csv.reader(input_file, delimiter=';'):
artists[row[0]] += 1
album[row[1]] += 1
song[row[2]] += 1
for row in input_file:
row[4] = artists(row[0])
Answer: Assuming that the input file isn't enormous, you can just reiterate over your
input file a second time and write the lines out with the counts appended,
like so:
import csv
import collections
artists = collections.Counter()
album = collections.Counter()
song = collections.Counter()
with open('lastfm.csv') as input_file:
for row in csv.reader(input_file, delimiter=';'):
artists[row[0]] += 1
album[row[1]] += 1
song[row[2]] += 1
with open('output.csv', 'w') as output_file:
writer = csv.writer(output_file, delimiter=';')
with open('lastfm.csv', 'r') as input_file:
for row in csv.reader(input_file, delimiter=';'):
writer.writerow(row + [song[row[2]], artists[row[0]], album[row[1]]])
|
struct.pack is much slower in python 2.6 when working with numpy arrays
Question: # The basic question
Hello everyone. I believe that I found an issue with python 2.6, struct.pack,
and numpy arrays. The issue is that the following code is incredibly slow when
I run it using python 2.6 (but it is sufficiently fast when I run it using
python 2.7 or 2.5).
import numpy as np
import struct
x = np.mat(np.random.randint(0,3000,(1000,1000)))
z = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))
I need to be able to run something similar to this numerous times for an
application that I am working on and right now my only option is to really run
it using python 2.6. I am asking if there is a faster way that I can do this
using python 2.6 or if I should just spend the time trying to get a system
administrator to install python 2.7 or 3.0 for me.
As far as why I need to be able to run on python 2.6 and how I figured out
that this line of code was the issue see below.
# The long story
I am working on a project where I am using python to modify some files that
are input into another program so that I can run a Monte Carlo simulation on
that program. Everything was going great, I wrote all the code using python 3,
made sure it also all worked in python 2.7 and ran a couple test cases on my
underpowered computer. Things went pretty well. It took about 30 seconds to
run a single test case. Then I went to port my analysis to a server that we
have that is much faster/more powerful than my laptop.
I got everything working on the server but have run into a huge problem. It
now takes 30 minutes to run through a single test case. After a little
investigation I found out that the server only has python version 2.6
installed and I believe that this is the problem. Further, I believe that the
following line in particular is the issue
z = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))
where x is a numpy matrix, numpy has been imported as np, and struct has also
been imported. The reason I believe that this line is the issue follows.
First, when running this line in python 2.6 I get the following warning:
DeprecationWarning: struct integer overflow masking is deprecated
despite the fact that I am absolutely certain that the contents of the array I
am trying to pack falls within the bounds of an unsigned short integer.
Running this same exact code using Python 2.7 and 3.5, I do not get the
deprecation warning.
Further, I attempted a short test using time it. I ran the following code
using python 2.7 and 3.0 on my local machine and got the following results:
python3 -m timeit "import numpy as np;import struct;x=np.mat(np.random.randint(0,3200,(1000,1000)));z = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))"
10 loops, best of 3: 467 msec per loop
python2.7 -m timeit "import numpy as np;import struct;x=np.mat(np.random.randint(0,3200,(1000,1000)));z = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))"
10 loops, best of 3: 468 msec per loop
which was pretty good. I then tried to run that using python 2.6
python2.6 -m timeit "import numpy as np;import struct;x=np.mat(np.random.randint(0,3200,(1000,1000)));z = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))"
10 loops, best of 3: 34.4 sec per loop
which is an absurd 7350% increase in time...
The question I'm asking is: is there some way that I can get around this
bottleneck using Python 2.6, or should I just try to get the system managers
for the server to upgrade to Python 3.5 or 2.7?
Answer: Why not avoid the `struct` module altogether and let `numpy` handle the binary
conversion for you?
For example:
import numpy as np
x = np.random.randint(0, 3200, (1000, 1000))
z = x.astype('<u2').tostring()
`'<u2'` specifies little-endian (`<`) unsigned int (`u`) 16-bit (`2`). It's
identical to `<H`.
Just to prove that they're identical:
z1 = x.astype('<u2').tostring()
z2 = struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))
assert z1 == z2
And for timings (note: this is on python 2.7), the numpy version is ~100x
faster:
In [7]: %timeit x.astype('<u2').tostring()
100 loops, best of 3: 1.33 ms per loop
In [8]: %timeit struct.pack('<'+'H'*x.size,*np.asarray(x).reshape(-1).astype(int))
1 loops, best of 3: 118 ms per loop
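As a side note, recent NumPy versions also expose this method as `tobytes()`, an alias with identical behavior to `tostring()`.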
|
For list of dicts: merge same key values, sum another diff. values, count each merge iteration +1
Question: I have list of dicts(or tuples), where:
if tuples:
comment_id, user_id, comment_date, comment_time, comment_likes
('51799', '112801710', '2015-12-07', '00:03:21', '0'),
('51761', '112801710', '2015-12-06', '19:31:46', '3'),
('51764', '112801710', '2015-12-06', '19:54:19', '0'),
('51741', '112801710', '2015-12-06', '14:17:34', '2'),
('51768', '52879933', '2015-12-06', '20:03:34', '0'),
('51766', '52879933', '2015-12-06', '21:33:34', '0'),
or can be converted to dict like:
{'comm_count': 1, 'user_id': '217407103', 'likes': 0},
* **comment_id** \- is always unique and cannot meet twice in this list,
* **user_id** \- is not unique for this list, it can be there as much times as comments were left in the set of posts (naturally I wanted to use this as counter)
* **comment_date** and **comment_time** \- can be ignored, needed to sel from db,
* **comment_likes** \- how much likes each comment has.
The task: make one list of tuples or dictionaries in which each 'user_id'
appears only once, together with the summed 'likes' and the number of times a
comment with that user id was found in the list.
To clarify, this is an expected result:
user_id, comment_likes, comments_left
('112801710', '5', '4'),
('52879933', '0', '2')
Somehow I made some different sets, but they does not work as expected.
**Examples of code:**
for row in results:
user_id = row[1] # Get user id ['39411753']
comm_id = row[0] # Get post id ['51 575']
comm_likes = row[4] # Get post likes ['2']
comm_likes = int(comm_likes)
all_users_id_comments.append(user_id)
if user_id not in temp_list:
comm_count = 1
temp_list.append(user_id)
user_ids_dict = {'user_id':user_id,'likes':comm_likes,'comm_count':comm_count}
result_dicts_list.append(user_ids_dict)
if user_id in temp_list:
for item in result_dicts_list:
if item['user_id'] == user_id:
item['comm_count'] += 1
item['likes'] += comm_likes
This builds a list where each user_id appears only once, plus a dict with
values for each user. It then checks the list of seen ids and, when an id
appears a second time, updates the key values. But the result is not correct;
I've lost something important.
Another good way to sort:
merged = {}
for dict in user_comments_list_dicts:
for key,value in dict.items():
if key not in merged:
merged [key] = []
merged [key].append(value)
print(merged)
It makes one dict keyed on user_id, with a list of dicts for each comment the
user left:
'144964044': [
{'comm_id': '51640', 'likes': '0'},
{'comm_id': '51607', 'likes': '0'},
{'comm_id': '51613', 'likes': '0'},
{'comm_id': '51591', 'likes': '1'},
{'comm_id': '51592', 'likes': '0'},
{'comm_id': '51317', 'likes': '0'},
{'comm_id': '51319', 'likes': '0'},
{'comm_id': '51323', 'likes': '0'}
],
But I can't look up the value for '144964044': it shows me just the key
'144964044' and not that list, which also confuses me.
It would be great to solve this with Python, but this case could probably also
be solved on the SQL side; maybe I could UPDATE each row where a user_id is
found two or more times, sum its likes and add +1 to comments_count for each.
Python folks also advised me to use comprehensions, sets, or key/value lookups;
I tried them all and still got no result.
Trying to be a conscientious novice, I followed the advice about MySQL queries
and found this way:
"""SELECT SUM(comment_likes) AS value_sum, comment_user_id, COUNT(*)
FROM pub_comments_weekly
GROUP BY comment_user_id"""
This will show something like:
((7.0, '-80849532', 3),
(0.0, '100072457', 1),
(4.0, '10224064', 7),
(6.0, '10872377', 27),
(1.0, '111612257', 5),
(10.0, '112801710', 10),
(0.0, '112983834', 2),
(3.0, '11374187', 2),
(0.0, '11558683', 1),
(0.0, '118422944', 1),
(0.0, '119641064', 20),
(1.0, '119991466', 7),
(1.0, '121321268', 1),
(0.0, '12542463', 3))...
where: (likes, user_id, comments)
Thanks for helping!
Answer: Counting and summing are most efficiently done in the database, using the
COUNT and SUM functions with GROUP BY.
If for some reason it is necessary to do it in Python, a dictionary would be my
choice over tuples. I also recommend using a dictionary of dictionaries for the
result data structure, as it will make accessing it easier.
    # avoid naming the input 'list', which shadows the built-in type
    comments = [{'comment_id': '51799', 'user_id': '112801710', 'comment_date': '2015-12-07', 'comment_time': '00:03:21', 'comment_likes': '0'},
                {'comment_id': '51761', 'user_id': '112801710', 'comment_date': '2015-12-06', 'comment_time': '19:31:46', 'comment_likes': '3'},
                {'comment_id': '51764', 'user_id': '112801710', 'comment_date': '2015-12-06', 'comment_time': '19:54:19', 'comment_likes': '0'},
                {'comment_id': '51741', 'user_id': '112801710', 'comment_date': '2015-12-06', 'comment_time': '14:17:34', 'comment_likes': '2'},
                {'comment_id': '51768', 'user_id': '52879933', 'comment_date': '2015-12-06', 'comment_time': '20:03:34', 'comment_likes': '0'},
                {'comment_id': '51766', 'user_id': '52879933', 'comment_date': '2015-12-06', 'comment_time': '21:33:34', 'comment_likes': '0'}]

    def combine(comment_list):
        result = {}
        for item in comment_list:
            res_item = result.get(item['user_id'], None)
            if not res_item:
                res_item = {'comment_likes': int(item['comment_likes']), 'comments_left': 1}
            else:
                res_item['comment_likes'] += int(item['comment_likes'])
                res_item['comments_left'] += 1
            result[item['user_id']] = res_item
        print result

    combine(comments)
result:
{'112801710': {'comment_likes': 5, 'comments_left': 4}, '52879933': {'comment_likes': 0, 'comments_left': 2}}
hope this helped.
|
Accessing and altering a global array using python joblib
Question: I am attempting to use joblib in python to speed up some data processing but
I'm having issues trying to work out how to assign the output into the
required format. I have tried to generate a, perhaps overly simplistic, code
which shows the issues that I'm encountering:
from joblib import Parallel, delayed
import numpy as np
def main():
print "Nested loop array assignment:"
regular()
print "Parallel nested loop assignment using a single process:"
par2(1)
print "Parallel nested loop assignment using multiple process:"
par2(2)
def regular():
# Define variables
a = [0,1,2,3,4]
b = [0,1,2,3,4]
# Set array variable to global and define size and shape
global ab
ab = np.zeros((2,np.size(a),np.size(b)))
# Iterate to populate array
for i in range(0,np.size(a)):
for j in range(0,np.size(b)):
func(i,j,a,b)
# Show array output
print ab
def par2(process):
# Define variables
a2 = [0,1,2,3,4]
b2 = [0,1,2,3,4]
# Set array variable to global and define size and shape
global ab2
ab2 = np.zeros((2,np.size(a2),np.size(b2)))
# Parallel process in order to populate array
Parallel(n_jobs=process)(delayed(func2)(i,j,a2,b2) for i in xrange(0,np.size(a2)) for j in xrange(0,np.size(b2)))
# Show array output
print ab2
def func(i,j,a,b):
# Populate array
ab[0,i,j] = a[i]+b[j]
ab[1,i,j] = a[i]*b[j]
def func2(i,j,a2,b2):
# Populate array
ab2[0,i,j] = a2[i]+b2[j]
ab2[1,i,j] = a2[i]*b2[j]
# Run script
main()
The ouput of which looks like this:
Nested loop array assignment:
[[[ 0. 1. 2. 3. 4.]
[ 1. 2. 3. 4. 5.]
[ 2. 3. 4. 5. 6.]
[ 3. 4. 5. 6. 7.]
[ 4. 5. 6. 7. 8.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 2. 3. 4.]
[ 0. 2. 4. 6. 8.]
[ 0. 3. 6. 9. 12.]
[ 0. 4. 8. 12. 16.]]]
Parallel nested loop assignment using a single process:
[[[ 0. 1. 2. 3. 4.]
[ 1. 2. 3. 4. 5.]
[ 2. 3. 4. 5. 6.]
[ 3. 4. 5. 6. 7.]
[ 4. 5. 6. 7. 8.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 2. 3. 4.]
[ 0. 2. 4. 6. 8.]
[ 0. 3. 6. 9. 12.]
[ 0. 4. 8. 12. 16.]]]
Parallel nested loop assignment using multiple process:
[[[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]]
From Google and the Stack Overflow search function, it appears that when using
joblib the global array isn't shared between the subprocesses. I'm uncertain
whether this is a limitation of joblib or if there is a way to get around it.
In reality my script is surrounded by other bits of code that rely on the
final output of this global array being in a (4,_x_ ,_x_) format where _x_ is
variable (but typically ranges in the 100s to several thousands). This is my
current reason for looking at parallel processing as the whole process can
take up to 2 hours for _x_ = 2400.
The use of joblib isn't necessary (but I like the nomenclature and simplicity)
so feel free to suggest simple alternative methods, ideally keeping in mind
the requirements of the final array. I'm using python 2.7.3, and joblib 0.7.1.
Answer: I was able to resolve the issues with this simple example using numpy's
memmap. I was still having issues after using memmap and following the
examples on the [joblib documentation
webpage](https://pythonhosted.org/joblib/parallel.html) but I upgraded to the
latest joblib version (0.9.3) via pip and it all runs smoothly. Here is the
working code:
from joblib import Parallel, delayed
import numpy as np
import os
import tempfile
import shutil
def main():
print "Nested loop array assignment:"
regular()
print "Parallel nested loop assignment using numpy's memmap:"
par3(4)
def regular():
# Define variables
a = [0,1,2,3,4]
b = [0,1,2,3,4]
# Set array variable to global and define size and shape
global ab
ab = np.zeros((2,np.size(a),np.size(b)))
# Iterate to populate array
for i in range(0,np.size(a)):
for j in range(0,np.size(b)):
func(i,j,a,b)
# Show array output
print ab
def par3(process):
# Creat a temporary directory and define the array path
path = tempfile.mkdtemp()
ab3path = os.path.join(path,'ab3.mmap')
# Define variables
a3 = [0,1,2,3,4]
b3 = [0,1,2,3,4]
# Create the array using numpy's memmap
ab3 = np.memmap(ab3path, dtype=float, shape=(2,np.size(a3),np.size(b3)), mode='w+')
# Parallel process in order to populate array
Parallel(n_jobs=process)(delayed(func3)(i,a3,b3,ab3) for i in xrange(0,np.size(a3)))
# Show array output
print ab3
# Delete the temporary directory and contents
try:
shutil.rmtree(path)
except:
print "Couldn't delete folder: "+str(path)
def func(i,j,a,b):
# Populate array
ab[0,i,j] = a[i]+b[j]
ab[1,i,j] = a[i]*b[j]
def func3(i,a3,b3,ab3):
# Populate array
for j in range(0,np.size(b3)):
ab3[0,i,j] = a3[i]+b3[j]
ab3[1,i,j] = a3[i]*b3[j]
# Run script
main()
Giving the following results:
Nested loop array assignment:
[[[ 0. 1. 2. 3. 4.]
[ 1. 2. 3. 4. 5.]
[ 2. 3. 4. 5. 6.]
[ 3. 4. 5. 6. 7.]
[ 4. 5. 6. 7. 8.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 2. 3. 4.]
[ 0. 2. 4. 6. 8.]
[ 0. 3. 6. 9. 12.]
[ 0. 4. 8. 12. 16.]]]
Parallel nested loop assignment using numpy's memmap:
[[[ 0. 1. 2. 3. 4.]
[ 1. 2. 3. 4. 5.]
[ 2. 3. 4. 5. 6.]
[ 3. 4. 5. 6. 7.]
[ 4. 5. 6. 7. 8.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 2. 3. 4.]
[ 0. 2. 4. 6. 8.]
[ 0. 3. 6. 9. 12.]
[ 0. 4. 8. 12. 16.]]]
A few of my thoughts to note for any future readers:
* On small arrays, the time taken to prepare the parallel environment (generally referred to as overhead) means that this runs slower than the simple for loop.
* Comparing a larger array eg. setting _a_ and _a3_ to `np.arange(0,10000)`, and _b_ and _b3_ to `np.arange(0,1000)` gave times of 12.4s for the "regular" method and 7.7s for the joblib method.
* The overheads meant that it was faster to let each core perform the inner _j_ loop (see func3). This makes sense since I'm only starting 10,000 processes rather than starting 10,000,000
processes each of which would need setting up.
|
how to save pil cropped image to image field in django
Question: I am trying to save a cropped image to a model. I am getting the following
error:
> Traceback (most recent call last): File "/mypath/lib/python2.7/site-
> packages/django/core/handlers/base.py", line 132, in get_response response =
> wrapped_callback(request, *callback_args, **callback_kwargs) File
> "/mypath/lib/python2.7/site-packages/django/contrib/auth/decorators.py",
> line 22, in _wrapped_view return view_func(request, *args, **kwargs) File
> "/mypath/views.py", line 236, in player_edit player.save() File
> "/mypath/lib/python2.7/site-packages/django/db/models/base.py", line 734, in
> save force_update=force_update, update_fields=update_fields) File
> "/mypath/lib/python2.7/site-packages/django/db/models/base.py", line 762, in
> save_base updated = self._save_table(raw, cls, force_insert, force_update,
> using, update_fields) File "/mypath/lib/python2.7/site-
> packages/django/db/models/base.py", line 824, in _save_table for f in
> non_pks] File "/mypath/lib/python2.7/site-
> packages/django/db/models/fields/files.py", line 313, in pre_save if file
> and not file._committed: File "/mypath/lib/python2.7/site-
> packages/PIL/Image.py", line 512, in **getattr** raise AttributeError(name)
> AttributeError: _committed
My view which handles the form submit looks like this:
if request.method == 'POST':
form = PlayerForm(request.POST, request.FILES, instance=current_player)
if form.is_valid():
temp_image = form.cleaned_data['profile_image2']
player = form.save()
cropped_image = cropper(temp_image, crop_coords)
player.profile_image = cropped_image
player.save()
return redirect('player')
The crop function looks like this:
from PIL import Image
import Image as pil
def cropper(original_image, crop_coords):
original_image = Image.open(original_image)
original_image.crop((0, 0, 165, 165))
original_image.save("img5.jpg")
return original_image
Is this correct process to save the cropped image to the model. If so, why am
I getting the above error?
Thanks!
Answer: The crop function should look like this:

    from PIL import Image
    from django.core.files.base import ContentFile
    import StringIO

    def cropper(original_image, crop_coords):
        img_io = StringIO.StringIO()
        original_image = Image.open(original_image)
        # crop() returns a new image; it does not modify the original in place
        cropped_image = original_image.crop((0, 0, 165, 165))
        cropped_image.save(img_io, format='JPEG', quality=100)
        img_content = ContentFile(img_io.getvalue(), 'img5.jpg')
        return img_content
|
loop a range() Function with a time delay- Python
Question: How to loop a range() Function x number of times with a time delay in Python?
for example, for i in range(4,9) repeat it for 3 times and each time stop 5
second before starting the new count. The answer should be like this:
"0 second delay" (4,5,6,7,8,9)
"5 second delay" (4,5,6,7,8,9)
"10 second delay" (4,5,6,7,8,9)
Answer: How about this?
from time import sleep
N = 3 # number of times you want to repeat
for d in xrange(N):
sleep(d * 5)
for x in xrange(4, 10):
# do something
**Note**: This assumes Python 2.x (`xrange`); on Python 3.x use the regular
`range` instead.
|
SQLite triggers & datetime defaults in SQL DDL using Peewee in Python
Question: I have a SQLite table defined like so:
create table if not exists KeyValuePair (
key CHAR(255) primary key not null,
val text not null,
fup timestamp default current_timestamp not null, -- time of first upload
lup timestamp default current_timestamp not null -- time of last upload
);
create trigger if not exists entry_first_insert after insert
on KeyValuePair
begin
update KeyValuePair set lup = current_timestamp where key = new.key;
end;
create trigger if not exists entry_last_updated after update of value
on KeyValuePair
begin
update KeyValuePair set lup = current_timestamp where key = old.key;
end;
I'm trying to write a `peewee.Model` for this table in Python. This is what I
have so far:
import peewee as pw
db = pw.SqliteDatabase('dhm.db')
class BaseModel(pw.Model):
class Meta:
database = db
class KeyValuePair(BaseModel):
key = pw.FixedCharField(primary_key=True, max_length=255)
val = pw.TextField(null=False)
fup = pw.DateTimeField(
verbose_name='first_updated', null=False, default=datetime.datetime.now)
lup = pw.DateTimeField(
verbose_name='last_updated', null=False, default=datetime.datetime.now)
db.connect()
db.create_tables([KeyValuePair])
When I inspect the SQL produced by the last line I get:
CREATE TABLE "keyvaluepair" (
"key" CHAR(255) NOT NULL PRIMARY KEY,
"val" TEXT NOT NULL,
"fup" DATETIME NOT NULL,
"lup" DATETIME NOT NULL
);
So I have two questions at this point:
1. I've been unable to find a way to achieve the behavior of the `entry_first_insert` and `entry_last_updated` triggers. Does `peewee` support triggers? If not, is there a way to just create a table from a .sql file rather than the `Model` class definition?
2. Is there a way to make the default for `fup` and `lup` propogate to the SQL definitions?
Answer: I've figured out a proper answer to both questions. This solution actually
enforces the desired triggers and default timestamps in the SQL DDL.
First we define a convenience class to wrap up the SQL for a trigger. There is
a more proper way to do this with the `peewee.Node` objects, but I didn't have
time to delve into all of that for this project. This `Trigger` class simply
provides string formatting to output proper sql for trigger creation.
class Trigger(object):
"""Trigger template wrapper for use with peewee ORM."""
_template = """
{create} {name} {when} {trigger_op}
on {tablename}
begin
{op} {tablename} {sql} where {pk} = {old_new}.{pk};
end;
"""
def __init__(self, table, name, when, trigger_op, op, sql, safe=True):
self.create = 'create trigger' + (' if not exists' if safe else '')
self.tablename = table._meta.name
self.pk = table._meta.primary_key.name
self.name = name
self.when = when
self.trigger_op = trigger_op
self.op = op
self.sql = sql
self.old_new = 'new' if trigger_op.lower() == 'insert' else 'old'
def __str__(self):
return self._template.format(**self.__dict__)
Next we define a class `TriggerTable` that inherits from the `BaseModel`. This
class overrides the default `create_table` to follow table creation with
trigger creation. If any triggers fail to create, the whole create is rolled
back.
class TriggerTable(BaseModel):
"""Table with triggers."""
@classmethod
def triggers(cls):
"""Return an iterable of `Trigger` objects to create upon table creation."""
return tuple()
@classmethod
def new_trigger(cls, name, when, trigger_op, op, sql):
"""Create a new trigger for this class's table."""
return Trigger(cls, name, when, trigger_op, op, sql)
@classmethod
def create_table(cls, fail_silently=False):
"""Create this table in the underlying database."""
super(TriggerTable, cls).create_table(fail_silently)
for trigger in cls.triggers():
try:
cls._meta.database.execute_sql(str(trigger))
except:
cls._meta.database.drop_table(cls, fail_silently)
raise
The next step is to create a class `BetterDateTimeField`. This `Field` object
overrides the default `__ddl__` to append a "DEFAULT current_timestamp" string
if the `default` instance variable is set to the `datetime.datetime.now`
function. There are certainly better ways to do this, but this one captures
the basic use case.
class BetterDateTimeField(pw.DateTimeField):
"""Propogate defaults to database layer."""
def __ddl__(self, column_type):
"""Return a list of Node instances that defines the column."""
ddl = super(BetterDateTimeField, self).__ddl__(column_type)
if self.default == datetime.datetime.now:
ddl.append(pw.SQL('DEFAULT current_timestamp'))
return ddl
Finally, we define the new and improved `KeyValuePair` Model, incorporating
our trigger and datetime field improvements. We conclude the Python code by
creating the table.
class KeyValuePair(TriggerTable):
"""DurableHashMap entries are key-value pairs."""
key = pw.FixedCharField(primary_key=True, max_length=255)
val = pw.TextField(null=False)
fup = BetterDateTimeField(
verbose_name='first_updated', null=False, default=datetime.datetime.now)
lup = BetterDateTimeField(
verbose_name='last_updated', null=False, default=datetime.datetime.now)
@classmethod
def triggers(cls):
return (
cls.new_trigger(
'kvp_first_insert', 'after', 'insert', 'update',
'set lup = current_timestamp'),
cls.new_trigger(
'kvp_last_udpated', 'after', 'update', 'update',
'set lup = current_timestamp')
)
KeyValuePair.create_table()
Now the schema is created properly:
sqlite> .schema keyvaluepair
CREATE TABLE "keyvaluepair" ("key" CHAR(255) NOT NULL PRIMARY KEY, "val" TEXT NOT NULL, "fup" DATETIME NOT NULL DEFAULT current_timestamp, "lup" DATETIME NOT NULL DEFAULT current_timestamp);
CREATE TRIGGER kvp_first_insert after insert
on keyvaluepair
begin
update keyvaluepair set lup = current_timestamp where key = new.key;
end;
CREATE TRIGGER kvp_last_udpated after update
on keyvaluepair
begin
update keyvaluepair set lup = current_timestamp where key = old.key;
end;
sqlite> insert into keyvaluepair (key, val) values ('test', 'test-value');
sqlite> select * from keyvaluepair;
test|test-value|2015-12-07 21:58:05|2015-12-07 21:58:05
sqlite> update keyvaluepair set val = 'test-value-two' where key = 'test';
sqlite> select * from keyvaluepair;
test|test-value-two|2015-12-07 21:58:05|2015-12-07 21:58:22
|
NameError in Python 3.4.3
Question: When I'm trying this with the following configuration :
* VirtualEnv with python3.4.3
* Running on an online IDE
When I'm trying this :
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
try:
html = urlopen("http://www.pythonscraping.com/pages/pages1.html")
if html is None:
print("url not found")
else:
except HTTPError as e:
print("test")
else:
bsObj = BeautifulSoup(html.read())
print(bsObj)
I got the following error :
~/workspace/scrapingEnv $ python test2.py
File "test2.py", line 7
if html is None:
^
SyntaxError: invalid syntax
What am I doing wrong?
Answer: Thanks for the hints, I found a way to get around my problem:

    from urllib.request import urlopen
    from urllib.error import HTTPError
    from urllib.error import URLError
    from bs4 import BeautifulSoup

    try:
        html = urlopen("http://www.pythonscrapng.com/pages/pages1.html")
    except HTTPError as e:
        print("test")
    except URLError as j:
        print("No URL")
    else:
        # parse only when no exception was raised
        bsObj = BeautifulSoup(html.read())
        print(bsObj)
|
python get-pip.py return invalid syntax form shutil.py
Question: OK, I need help here.
I have Windows 7, and I'm stuck with Python 3.3 because my company sucks. I
have been trying to install pip via
> python get-pip.py
I am using the get-pip.py from <https://bootstrap.pypa.io/get-pip.py>, which is
the redirect from the pip Python page.
When I go to the command prompt to install this, I keep getting the same error
message:
    Traceback (most recent call last):
      File "get-pip.py", line 25, in <module>
        import shutil
      File "C:\Python33\Lib\shutil.py", line 85
        def copyfile(src, dst, *, follow_symlinks=True):
                                ^
    SyntaxError: invalid syntax
I have checked shutil.py against the source code and mine is no different.
Can someone please help? I can't find anything that will make this issue
go away.
Answer: An alternate way: Try this in powershell:
> (Invoke-WebRequest https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py).Content | python -
Source: <https://pypi.python.org/pypi/setuptools/3.3#windows-7-or-graphical-
install>
Then you can do this: `easy_install pip3`
|
Is there a way to redirect stderr to file in Jupyter?
Question: There was a redirect_output function in IPython.utils, and there was a
%%capture magic function, but these are now gone, and [this
thread](http://stackoverflow.com/questions/14571090/ipython-redirecting-
output-of-a-python-script-to-a-file-like-bash) on the topic is now outdated.
I'd like to do something like the following:
from IPython.utils import io
from __future__ import print_function
with io.redirect_output(stdout=False, stderr="stderr_test.txt"):
while True:
print('hello!', file=sys.stderr)
Thoughts? For more context, I am trying to capture the output of some ML
functions that run for hours or days, and output a line every 5-10 seconds to
stderr. I then want to take the output, munge it, and plot the data.
Answer: You could probably try replacing `sys.stderr` with some other file descriptor
the same way as suggested [here](http://stackoverflow.com/a/31153046/232371).
import sys
oldstderr = sys.stderr
sys.stderr = open('log.txt', 'w')
# do something
sys.stderr = oldstderr
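If you want the old context-manager feel back, a small hand-rolled sketch (not
an IPython API) along these lines should work:

    import sys
    from contextlib import contextmanager

    @contextmanager
    def stderr_to_file(path):
        old_stderr = sys.stderr
        with open(path, 'w') as f:
            sys.stderr = f
            try:
                yield
            finally:
                sys.stderr = old_stderr  # always restore, even on error

    with stderr_to_file('stderr_test.txt'):
        print('hello!', file=sys.stderr)

On Python 3.5+ the standard library also provides `contextlib.redirect_stderr`,
which does the same job for code that writes to `sys.stderr`.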
|
Determining if data in a txt file obeys certain statistics
Question: I'm working with a Geiger counter which can be hooked up to a computer and
which records its output in the form of a .txt file, NC.txt, in which it
records the time since starting and the 'value' of the radiation it recorded.
My script so far looks like this:
import pylab
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
x1 = []
y1 = []
#Define a dictionary: counts
f = open("NC.txt", "r")
for line in f:
line = line.strip()
parts = line.split(",") #the columns are separated by commas and spaces
time = float(parts[1]) #time is recorded in the second column of NC.txt
value = float(parts[2]) #and the value it records is in the third
x1.append(time)
y1.append(value)
f.close()
xv = np.array(x1)
yv = np.array(y1)
#Statistics
m = np.mean(yv)
d = np.std(yv)
#Strip out background radiation
trueval = yv - m
#Basic plot of counts
num_bins = 10000
plt.hist(trueval,num_bins)
plt.xlabel('Value')
plt.ylabel('Count')
plt.show()
So this code so far will just create a simple histogram of the radiation
counts centred at zero, so the background radiation is ignored.
What I want to do now is perform a chi-squared test to see how well the data
fits, say, Poisson statistics (and then go on to compare it with other
distributions later). I'm not really sure how to do that. I have access to
scipy and numpy, so I feel like this should be a simple task, but just
learning python as I go here, so I'm not a terrific programmer.
Does anyone know of a straightforward way to do this?
Edit for clarity: I'm not asking so much about if there is a chi-squared
function or not. I'm more interested in how to compare it with other
statistical distributions.
Thanks in advance.
Answer: You can use the SciPy library; [here is the
documentation](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.chisquare.html)
with examples.
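For the Poisson case specifically, a minimal sketch using
`scipy.stats.chisquare` might look like this (it assumes `yv` holds
non-negative integer counts, and estimates the Poisson mean from the data
itself):

    import numpy as np
    from scipy import stats

    # tally how often each count value occurs in the data
    values, observed = np.unique(yv.astype(int), return_counts=True)

    # expected frequencies under a Poisson law with the sample mean
    expected = stats.poisson.pmf(values, yv.mean()) * len(yv)
    expected *= observed.sum() / float(expected.sum())  # chisquare wants matching totals

    # ddof=1 because the mean was estimated from the same data
    chi2, p = stats.chisquare(observed, f_exp=expected, ddof=1)
    print("chi-squared = %.2f, p-value = %.4f" % (chi2, p))

The same pattern works for other candidate distributions: swap `poisson.pmf`
for the pmf (or binned pdf) of whichever distribution you want to compare
against.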
|
Classifying numbers in python
Question: I have tried to classify numbers in Python from 0.0 to 1.0, but all numbers
end up in the else condition. I need each number to land in the appropriate
bin, but even when the number is 0.3 or 0.2 it goes to the last bin. What am I
doing wrong?
#!/usr/bin/python
import csv
import sys
import re
bin1=0;
bin2=0;
bin3=0;
bin4=0;
bin5=0;
bin6=0;
bin7=0;
bin8=0;
bin9=0;
bin10=0;
def Read():
with open (file_name) as csvfile:
readCSV = csv.reader (csvfile, delimiter=',')
for row in readCSV:
for num in range(1,2):
print row[num]
#Include in 10 bins
if 0.0 <= row[num] < 0.1:
bin1+=1;
print "bin1";
elif 0.1 <= row[num] < 0.2:
bin2+=1;
print "bin2"
elif 0.2 <= row[num] < 0.3:
bin3+=1;
print "bin3"
elif 0.3 <= row[num] < 0.4:
bin4+=1;
print "bin4"
elif 0.4 <= row[num] < 0.5:
bin5+=1;
print "bin5"
elif 0.5 <= row[num] < 0.6:
bin6+=1;
print bin6;
print "bin6"
elif 0.6 <= row[num] < 0.7:
bin7+=1;
print bin7;
print "bin7"
elif 0.7 <= row[num] < 0.8:
bin8+=1;
print "bin8"
elif 0.8 <= row[num] < 0.9:
bin9+=1;
print "bin9"
else:
print "bin10"
file_name = str(sys.argv[1])
print file_name
Read()
What is wrong with this simple classifier?
Thank you
Cheers
Answer: It is wrong because you are reading strings from the CSV file. In Python 2,
comparing a string to a number does not compare values at all: objects of
different types are ordered by an arbitrary but consistent rule (essentially by
type name, with numbers sorting before everything else), so every string
compares greater than every float and falls through to the last bin.
Solution: convert `row[num]` into a number before comparing it to floating
point values.
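A sketch of the fix inside the loop:

    value = float(row[num])   # row[num] is a string; convert it first
    if 0.0 <= value < 0.1:
        bin1 += 1
    elif 0.1 <= value < 0.2:
        bin2 += 1
    # ... and so on for the remaining bins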
|
ArrayField returns wrong value
Question: I am using PostgreSQL with Django and am trying to use
ArrayField(CharField())
Neither storing nor retrieving values raises any exceptions, but attempting to
store `["string", "another_string", "string with whitespaces", "str"]` and
then retrieve it returns
'{string,another_string,"string with whitespaces",str}'
This issue does not occur when using `ArrayField(IntegerField())` or
`ArrayField(ArrayField(CharField()))`
While I know I could just use JSON or nest the list in another list to get
[[strings]] which would be correctly read, I'd like to know why this is
happening.
* * *
EDIT: As it turns out, using `ArrayField(ArrayField(CharField()))` doesn't
work either:
Python 3.3.2 (default, Mar 20 2014, 20:25:51)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from api.models import Game
>>> g = Game.objects.all()[0]
>>> g.p1hand = [["a", "b", "c d"]]
>>> g.p1hand
[['a', 'b', 'c d']]
>>> g.save()
>>> g = Game.objects.all()[0]
>>> g.p1hand
'{{a,b,"c d"}}'
>>>
I have no idea why it works in a single instance of
`ArrayField(ArrayField(CharField()))`
* * *
EDIT: With regards to @LongBeard_Boldy, this is what another instance of
`ArrayField(ArrayField(CharField()))` returns:
>>> g.game_objects
[['Test', '3', '3', '0', 'building', '5', '2', '2', '0'], ....]
Answer: I had the same problem and it ended up being a problem with the migrations. I
had included the field in the models.py file but had not migrated, so Django
sort of understood how to process the lists but not how to retrieve them.
As soon as I migrated the database, everything worked perfectly.
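For reference, the usual commands are (with `yourapp` as a placeholder for the
app name):

    python manage.py makemigrations yourapp
    python manage.py migrate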
|
selenium facebook comment reply
Question: Trying to reply to facebook comments using selenium and python. I've been able
to select the field using
find_elements_by_css_selector(".UFIAddCommentInput")
But I can't post text using the send_keys method. Heres a simplified structure
of the comment html for facebook:
<div><input tabindex="-1" name="add_comment_text">
<div class="UFIAddCommentInput _1osb _5yk1"><div class="_5yk2">
<div class="_5yw9"><div class="_5ywb">
<div class="_3br6">Write a comment...</div></div>
<div class="_5ywa">
<div title="Write a comment..." role="combobox"
class="_54z"contenteditable="true">
<div data-contents="true">
<div class="_209g _2vxa">
Answer: It works perfectly fine. The only catch is that you must clear the `div` every
time before you begin to type a new comment.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
f = webdriver.Firefox()
f.get("https://facebook.com")
# Skipping the logging in and going to a particular post part.
# The selector I used correspond to id of the post and
# class of the div that contains editable text part
e = f.find_element_by_css_selector("div#u_jsonp_8_q div._54-z")
e.send_keys("bob")
e.send_keys(Keys.ENTER)
e.clear()
e.send_keys("kevin")
e.send_keys(Keys.ENTER)
e.clear()
|
Python find text in file between quotation marks
Question: I'm trying to capture the text within quotation marks and set it as variables
so I can change them at a later stage. I know how to do this in a bash shell,
but I'm at a loss how to do it in Python.
I've started with this, and I was hoping someone could point out where my
mistakes are.
import re
input = open(filename, 'r')
quotes = re.findall(r'"[^"]*"', input.read(), re.U)
print quotes
Sadly though, this outputs:
['"test1"', '"test2"']
while I'm looking for:
value1 = test1
value2 = test2
In Bash I used this (but I obviously can't use it this way!):
i=0
regex='"([^"]*)"'
while read line
do
if [[ $line =~ $regex ]]; then
printf -v "text$i" '%s' "${BASH_REMATCH[1]}"
i=$((i + 1))
fi
done < filename
echo "value1: $text0"
echo "value2: $text1"
Answer: Use a non-capturing group `(?:...)`, like this:
In [18]: re.findall('(?:")([^"]*)(?:")', '''hello "foo" "bar" haha''')
Out[18]: ['foo', 'bar']
Or use zero-width lookaround assertions (`(?<=...)` etc.), which match without consuming:
In [14]: re.findall('(?<=")[^"]*(?=")', '''hello "foo" "bar" haha''')
Out[14]: ['foo', ' ', 'bar']
The latter has a side-effect of also selecting `" "` between `"foo"` and
`"bar"`.
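For the exact output the question asks for, a plain capturing group is enough;
when the pattern contains exactly one group, `re.findall` returns just the
group's contents:

    In [20]: re.findall(r'"([^"]*)"', '''hello "foo" "bar" haha''')
    Out[20]: ['foo', 'bar']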
|
python, matplotlib: specgram data array values does not match specgram plot
Question: I am using matplotlib.pyplot.specgram and matplotlib.pyplot.pcolormesh to make
spectrogram plots of a seismic signal.
_Background information -The reason for using pcolormesh is that I need to do
arithmitic on the spectragram data array and then replot the resulting
spectrogram (for a three-component seismogram - east, north and vertical - I
need to work out the horizontal spectral magnitude and divide the vertical
spectra by the horizontal spectra). It is easier to do this using the
spectrogram array data than on individual amplitude spectra_
I have found that the plots of the spectrograms after doing my arithmetic have
unexpected values. Upon further investigation it turns out that the
spectrogram plot made using the pyplot.specgram method has different values
compared to the spectrogram plot made using pyplot.pcolormesh and the returned
data array from the pyplot.specgram method. Both plots/arrays should contain
the same values; I cannot work out why they do not.
Example: The plot of
plt.subplot(513)
PxN, freqsN, binsN, imN = plt.specgram(trN.data, NFFT = 20000, noverlap = 0, Fs = trN.stats.sampling_rate, detrend = 'mean', mode = 'magnitude')
plt.title('North')
plt.xlabel('Time [s]')
plt.ylabel('Frequency [Hz]')
plt.clim(0, 150)
plt.colorbar()
#np.savetxt('PxN.txt', PxN)
looks different to the plot of
plt.subplot(514)
plt.pcolormesh(binsZ, freqsZ, PxN)
plt.clim(0,150)
plt.colorbar()
even though the "PxN" data array (that is, the spectrogram data values for
each segment) is generated by the first method and re-used in the second.
Is anyone aware why this is happening?
P.S. I realise that my value for NFFT is not a square number, but it's not
important at this stage of my coding.
P.P.S. I am not aware of what the "imN" array (fourth returned variable from
pyplot.specgram) is and what it is used for....
Answer: First off, let's show an example of what you're describing, so that other folks can reproduce it:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# Brownian noise sequence
x = np.random.normal(0, 1, 10000).cumsum()
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(8, 10))
values, ybins, xbins, im = ax1.specgram(x, cmap='gist_earth')
ax1.set(title='Specgram')
fig.colorbar(im, ax=ax1)
mesh = ax2.pcolormesh(xbins, ybins, values, cmap='gist_earth')
ax2.axis('tight')
ax2.set(title='Raw Plot of Returned Values')
fig.colorbar(mesh, ax=ax2)
plt.show()
[](http://i.stack.imgur.com/JwVg4.png)
## Magnitude Differences
You'll immediately notice the difference in magnitude of the plotted values.
By default, `plt.specgram` doesn't plot the "raw" values it returns. Instead,
it scales them to decibels (in other words, it plots the `10 * log10` of the
amplitudes). If you'd like it not to scale things, you'll need to specify
`scale="linear"`. However, for looking at frequency composition, a log scale
is going to make the most sense.
With that in mind, let's mimic what `specgram` does:
plotted = 10 * np.log10(values)
fig, ax = plt.subplots()
mesh = ax.pcolormesh(xbins, ybins, plotted, cmap='gist_earth')
ax.axis('tight')
ax.set(title='Plot of $10 * log_{10}(values)$')
fig.colorbar(mesh)
plt.show()
[](http://i.stack.imgur.com/c498I.png)
## Using a Log Color Scale Instead
Alternatively, we could use a log norm on the image and get a similar result,
but communicate that the color values are on a log scale more clearly:
from matplotlib.colors import LogNorm
fig, ax = plt.subplots()
mesh = ax.pcolormesh(xbins, ybins, values, cmap='gist_earth', norm=LogNorm())
ax.axis('tight')
ax.set(title='Log Normalized Plot of Values')
fig.colorbar(mesh)
plt.show()
[](http://i.stack.imgur.com/AXKEJ.png)
## `imshow` vs `pcolormesh`
Finally, note that the examples we've shown have had no interpolation applied,
while the original `specgram` plot did. `specgram` uses `imshow`, while we've
been plotting with `pcolormesh`. In this case (regular grid spacing) we can
use either.
Both `imshow` and `pcolormesh` are very good options, in this case.
However,`imshow` will have significantly better performance if you're working
with a large array. Therefore, you might consider using it instead, even if
you don't want interpolation (e.g. `interpolation='nearest'` to turn off
interpolation).
As an example:
extent = [xbins.min(), xbins.max(), ybins.min(), ybins.max()]
fig, ax = plt.subplots()
mesh = ax.imshow(values, extent=extent, origin='lower', aspect='auto',
cmap='gist_earth', norm=LogNorm())
ax.axis('tight')
ax.set(title='Log Normalized Plot of Values')
fig.colorbar(mesh)
plt.show()
[](http://i.stack.imgur.com/1qePp.png)
|
Leave dates as strings using read_excel function from pandas in python
Question: **Python 2.7.10**
**Tried pandas 0.17.1 -- function read_excel**
**Tried pyexcel 0.1.7 + pyexcel-xlsx 0.0.7 -- function get_records()**
When using pandas in Python is it possible to read excel files (formats:
_xls|xlsx_) and leave columns containing **date** or **date + time** values as
**strings** rather than **_auto-converting_** to `datetime.datetime` or
`timestamp` types?
If this is not possible using pandas can someone suggest an alternate
method/library to read _xls|xlsx_ files and leave date column values as
strings?
For the **pandas** solution attempts the `df.info()` and resultant date column
types are shown below:
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 117 entries, 0 to 116
Columns: 176 entries, Mine to Index
dtypes: datetime64[ns](2), float64(145), int64(26), object(3)
memory usage: 161.8+ KB
>>> type(df['Start Date'][0])
Out[6]: pandas.tslib.Timestamp
>>> type(df['End Date'][0])
Out[7]: pandas.tslib.Timestamp
**Attempt/Approach 1:**
def read_as_dataframe(filename, ext):
import pandas as pd
if ext in ('xls', 'xlsx'):
# problem: date columns auto converted to datetime.datetime or timestamp!
df = pd.read_excel(filename) # unwanted - date columns converted!
return df, name, ext
**Attempt/Approach 2:**
import pandas as pd
# import datetime as datetime
# parse_date = lambda x: datetime.strptime(x, '%Y%m%d %H')
parse_date = lambda x: x
elif ext in ('xls', 'xlsx', ):
df = pd.read_excel(filename, parse_dates=False)
date_cols = [df.columns.get_loc(c) for c in df.columns if c in ('Start Date', 'End Date')]
# problem: date columns auto converted to datetime.datetime or timestamp!
df = pd.read_excel(filename, parse_dates=date_cols, date_parser=parse_date)
I have also tried the pyexcel library, but it shows the same auto-magic
conversion behavior:
**Attempt/Approach 3:**
import pyexcel as pe
import pyexcel.ext.xls
import pyexcel.ext.xlsx
t0 = time.time()
if ext == 'xlsx':
records = pe.get_records(file_name=filename)
for record in records:
print("start date = %s (type=%s), end date = %s (type=%s)" %
(record['Start Date'],
str(type(record['Start Date'])),
record['End Date'],
str(type(record['End Date'])))
)
Answer: For attempt 3, you could have added a few formatting lines which [pyexcel
offers](http://pyexcel.readthedocs.org/en/latest/tutorial03.html#convert-a-
column-of-numbers-to-strings):
import pyexcel as pe
import pyexcel.ext.xlsx
t0 = time.time()
if ext == 'xlsx':
sheet = pe.get_sheet(file_name=filename) # <<-- difference starts here
sheet.name_columns_by_row(0) # so that 'Start Date' refers the whole column
sheet.column.format('Start Date', lambda d: d.isoformat()) # format it
sheet.column.format('End Date', lambda d: d.isoformat())
records = sheet.to_records() # then get the records out <<-- ends here
for record in records:
print("start date = %s (type=%s), end date = %s (type=%s)" %
(record['Start Date'],
str(type(record['Start Date'])),
record['End Date'],
str(type(record['End Date'])))
)
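As a side note, a pandas-only sketch using the `converters` argument of
`read_excel` may also do the job, since converters are applied per cell and
`str` turns each already-parsed date back into a string (this assumes the
columns are named 'Start Date' and 'End Date' as in the question):

    import pandas as pd

    # hypothetical sketch: stringify the date columns while reading
    df = pd.read_excel(filename,
                       converters={'Start Date': str, 'End Date': str})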
|
if statement for a subprocess python not working
Question: I've tried to create a little app that plays a sound when you lose
connectivity for an extended period and plays another when the connection is
established. Useful for wireless connections.
I'm still new to Python :) trying little projects to improve my knowledge. If
you do answer, I would be very grateful if you could include any information
about how to use subprocess.
I've defined the subprocess but I'm not sure how to word my if statement so it
loops from one function to the other. I.e. function 1: if ping loss > 15
pings, play a sound and move on to function 2; function 2: if ping success >
15 pings, play a sound and move back to function 1. And so on.
I've yet to wrap the program in a loop, at this point I'm just trying to get
the ping to work with the if statement.
So right now the application just continuously loop pings.
import os
import subprocess
import winsound
import time
def NetFail():
winsound.Beep(2000 , 180), winsound.Beep(1400 , 180)
def NetSucc():
winsound.Beep(1400 , 250), winsound.Beep(2000 , 250),
ips=[]
n = 1
NetSuccess = 10
NetFailure = 10
PinSuc = 0
PinFail = 0
x = '8.8.8.8'
ips.append(x)
for ping in range(0,n):
ipd=ips[ping]
def PingFailure():
while PinFail < NetSuccess:
res = subprocess.call(['ping', '-n', '10', ipd])
if ipd in str(res):
PingSuccess()
else:
print ("ping to", ipd, "failed!"), NetFail()
def PingSuccess():
while PinFail < NetFailure: # This needs to be cleaned up so it doesn't interfere with the other function
res = subprocess.call(['ping', '-n', '10', ipd])
if ipd in str(res):
PingFail()
else:
print ("ping to", ipd, "successful!"), NetSucc()
Answer: As you use the command `ping -n 10 ip`, I assume that you are using a Windows
system, as on Linux (or other Unix-like) it would be `ping -c 10 ip`.
Unfortunately, on Windows `ping` always return 0, so you cannot use the return
value to know whether peer was reached. And even the output is not very
clear...
So you should:
* run in a `cmd` console the command `ping -n 1 ip` with an accessible and inaccessible ip, note the output and identify the differences. On my (french) system, it writes _Impossible_ , I suppose that you should get _Unable_ or the equivalent in your locale
* start the `ping` from Python with `subprocess.Popen` redirecting the output to a pipe
* get the output (and error output) from the command with `communicate`
* search for the `Unable` word in output.
Code could be like:
errWord = 'Unable' # replace with what your locale defines...
p = subprocess.Popen([ 'ping', '-n', '1', ipd],
stdout = subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if errWord in out:
# process network disconnected
else:
# process network connected
Alternatively, you could search [pypi](https://pypi.python.org/) for a pure
Python implementation of ping such as py-ping ...
Anyway, I would not use two functions in flip-flop because it will be harder
if you later wanted to test connectivity to multiple IPs. I would rather use
an class
class IP(object):
UNABLE = "Unable" # word indicating unreachable host
MAX = 15 # number of success/failure to record new state
def __init__(self, ip, failfunc, succfunc, initial = True):
self.ip = ip
self.failfunc = failfunc # to warn of a disconnection
self.succfunc = succfunc # to warn of a connection
self.connected = initial # start by default in connected state
self.curr = 0 # number of successive alternate states
def test(self):
p = subprocess.Popen([ 'ping', '-n', '1', self.ip],
stdout = subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if self.UNABLE in out:
if self.connected:
self.curr += 1
else:
self.curr = 0 # reset count
else:
if not self.connected:
self.curr += 1
else:
self.curr = 0 # reset count
if self.curr >= self.MAX: # state has changed
self.connected = not self.connected
self.curr = 0
if self.connected: # warn for new state
self.succfunc(self)
else:
self.failfunc(self)
Then you can iterate over a list of IP objects, repeatedly calling
`ip.test()`, and you will be warned for state changes
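A minimal sketch of that polling loop (the callbacks and the IP address are
placeholders):

    import time

    def on_fail(ip):
        print("lost connection to %s" % ip.ip)

    def on_succ(ip):
        print("connection to %s restored" % ip.ip)

    ips = [IP('8.8.8.8', on_fail, on_succ)]
    while True:
        for ip in ips:
            ip.test()
        time.sleep(1)  # poll once per second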
|
Connecting docker instance with python notebook to docker instance with Neo4J
Question: I am running a Jupyter notebook docker instance
(<https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook>)
and I've installed py2neo on it.
I am also running a docker container with Neo4J installed with port 7474
exposed.
The problem is I cannot seem to connect to the REST interface of the Neo4J
docker from the notebook docker. I think the problem is related to the
definition of localhost.
What worked so far. I used only the Neo4J docker and start a local notebook
(ipython notebook) then the following works:
import py2neo
from py2neo import Node, Relationship, Graph, authenticate
authenticate("http://localhost:7474", "neo4j", "admin")
graph = Graph('http://localhost:7474/db/data')
graph.cypher.execute('match (y:Year) return y')
The same code doesn't work in the notebook which is running in a separate
docker container since the definition of localhost is not the same. But now I
don't understand what it should be:
I've used **docker inspect** on the Neo4J container and used the following two
in an attempt to find the address corresponding to my localhost:
* "Gateway": "xxx.yy.42.1"
* "IPAddress": "xxx.yy.0.3"
But both of them result in `ClientError: 401 Unauthorized`
Any suggestion on how to overcome this issue? (Note that my current docker
version is 1.7.1, which does not support networks yet, but I could obviously
upgrade if that's necessary.)
Answer:
    graph = Graph('https://' + username + ':' + pwd + '@' + ip_neo + ':7473/db/data')
This seems to work. Note that you need port 7473, which is the standard HTTPS
port. I had no success getting the `authenticate` approach to work.
For `ip_neo` I inspect the neo4j docker instance:
sudo docker inspect neo4j | grep "Gateway"
|
Merging PDF's with PyPDF2 with inputs based on file iterator
Question: I have two folders with PDF's of identical file names. I want to iterate
through the first folder, get the first 3 characters of the filename, make
that the 'current' page name, then use that value to grab the 2 corresponding
PDF's from both folders, merge them, and write them to a third folder.
The script below works as expected for the first iteration, but after that,
the subsequent merged PDF's include all the previous ones (ballooning quickly
to 72 pages within 8 iterations).
Some of this could be due to poor code, but I can't figure out where that is,
or how to clear the inputs/outputs that could be causing the failure to write
only 2 pages per iteration:
import os
from PyPDF2 import PdfFileMerger
merger = PdfFileMerger()
rootdir = 'D:/Python/Scatterplots/BoundaryEnrollmentPatternMap'
for subdir, dirs, files in os.walk(rootdir):
for currentPDF in files:
#print os.path.join(file[0:3])
pagename = os.path.join(currentPDF[0:3])
print "pagename is: " + pagename
print "File is: " + pagename + ".pdf"
input1temp = 'D:/Python/Scatterplots/BoundaryEnrollmentPatternMap/' + pagename + '.pdf'
input2temp = 'D:/Python/Scatterplots/TraditionalScatter/' + pagename + '.pdf'
input1 = open(input1temp, "rb")
input2 = open(input2temp, "rb")
merger.append(fileobj=input1, pages=(0,1))
merger.append(fileobj=input2, pages=(0,1))
outputfile = 'D:/Python/Scatterplots/CombinedMaps/Sch_' + pagename + '.pdf'
print merger.inputs
output = open(outputfile, "wb")
merger.write(output)
output.close()
#clear all inputs - necessary?
outputfile = []
output = []
merger.inputs = []
input1temp = []
input2temp = []
input1 = []
input2 = []
print "done"
My code / work is based on this sample:
<https://github.com/mstamy2/PyPDF2/blob/master/Sample_Code/basic_merging.py>
Answer: I think that the error is that `merger` is initialized before the loop, so it
accumulates all the documents. Try moving the line `merger = PdfFileMerger()`
into the loop body; a corrected sketch follows the notes below.
`merger.inputs = []` doesn't seem to help in this case.
There are a few notes about your code:
* `input1 = []` doesn't close the file; it just rebinds the name, leaving the file open in the program. You should call `input1.close()` instead.
* `[]` means an empty list. It is better to use `None` if a variable should not contain any meaningful value.
* To remove a variable (e.g. `output`), use `del output`.
* After all, clearing the variables is not necessary; they will be freed by the garbage collector.
* Use [os.path.join](https://docs.python.org/2/library/os.path.html#os.path.join) to create `input1temp` and `input2temp`.
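Putting those notes together, a sketch of the corrected loop (paths taken
from the question) might look like:

    import os
    from PyPDF2 import PdfFileMerger

    rootdir = 'D:/Python/Scatterplots/BoundaryEnrollmentPatternMap'
    scatterdir = 'D:/Python/Scatterplots/TraditionalScatter'
    outdir = 'D:/Python/Scatterplots/CombinedMaps'

    for subdir, dirs, files in os.walk(rootdir):
        for currentPDF in files:
            pagename = currentPDF[0:3]
            merger = PdfFileMerger()  # a fresh merger per pair, so nothing accumulates
            input1 = open(os.path.join(rootdir, pagename + '.pdf'), "rb")
            input2 = open(os.path.join(scatterdir, pagename + '.pdf'), "rb")
            merger.append(fileobj=input1, pages=(0, 1))
            merger.append(fileobj=input2, pages=(0, 1))
            output = open(os.path.join(outdir, 'Sch_' + pagename + '.pdf'), "wb")
            merger.write(output)
            output.close()
            input1.close()
            input2.close()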
|
Numpy Matrix extension such as below:
Question: My question concerns matrix manipulation in numpy;
A=([[2, 3, 4, 2, 1, 3, 4, 1, 3, 2 ]])
In this matrix the biggest value is 4, as you see. I want to obtain a matrix
as below; this matrix has 4 columns and 10 rows (10x4), because I have 10
observations
B=([[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 1, 0, 0],
[1, 0, 0, 0],
[0, 0, 1, 0],
         [0, 0, 0, 1],
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0]])
The first row, **second** column of the B matrix should be 1 and the other
elements of that row should be 0, because the first element of A is 2.
Similarly, in the second row the third column should be 1 and the other row
elements should be 0, because the second element of A is 3, and so on...
How can I write **Python** (numpy) code which gives this B matrix as output?
It is very important for me, please HELP....
Answer: It looks like you just want to match `A` with the list (or array) `[1,2,3,4]`,
and mark the appropriate column
In [110]: A=np.array([2, 3, 4, 2, 1, 3, 4, 1, 3, 2 ])
Use broadcasting to make a 2d true/false array of matches
In [111]: (A[:,None]==np.arange(1,5))
Out[111]:
array([[False, True, False, False],
[False, False, True, False],
[False, False, False, True],
[False, True, False, False],
[ True, False, False, False],
[False, False, True, False],
[False, False, False, True],
[ True, False, False, False],
[False, False, True, False],
[False, True, False, False]], dtype=bool)
convert the T/F to 1/0 integers:
In [112]: (A[:,None]==np.arange(1,5)).astype(int)
Out[112]:
array([[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 1, 0, 0],
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0]])
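As a side note, the same 0/1 matrix can also be built by fancy-indexing an
identity matrix (this assumes, as here, that the values in `A` run from 1 to 4):

    # row i of eye(4) is the one-hot vector for value i+1
    B = np.eye(4, dtype=int)[A - 1]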
|
Train departures board ticker Tkinter
Question: I have created a full screen departures board for my local train station using
python and the Tkinter GUI and I have hooked it up to a raspberry pi.
I have limited the number of characters for the display of the train's
destination to 13 to save space. I would like to make this scroll in a ticker
style so that destinations longer than 13 characters can be read. Please let
me know if you have any ideas for making the train destinations scroll along.
I have blanked out my API key for the national rail database on the code
below. If you would like it to test the program, please ask.
import Tkinter as tk
from nredarwin.webservice import DarwinLdbSession
from datetime import datetime
top = tk.Tk()
top.title("Departure Board for Harringay Station")
top.configure(background='black')
def callback():
darwin_session = DarwinLdbSession(wsdl='https://lite.realtime.nationalrail.co.uk/OpenLDBWS/wsdl.aspx?ver=2015-05-14', api_key = 'xxxxx-xxxxx-xxxxxx-xxxxxxx')
crs_code = "HGY"
board = darwin_session.get_station_board(crs_code)
tex.delete(1.0, tk.END)
s = "\nNext departures for %s" % (board.location_name)
t = """
-------------------------------
|P| DEST |SCHED| DUE |
------------------------------- \n"""
tex.insert(tk.END, s + t, 'tag-center')
tex.see(tk.END)
for service in board.train_services:
u = ("|%1s|%14s|%5s|%7s|\n" %(service.platform or "", service.destination_text[:13], service.std, service.etd[:7]))
tex.insert(tk.END, u, 'tag-center')
v = "--------------------------------\n"
tex.insert(tk.END, v, 'tag-center')
tex.after(10000, callback)
def tick():
tex2.delete(1.0, tk.END)
now = datetime.now()
tex2.insert(tk.END, "%s %s %s %s %s:%s" %(now.strftime('%A'), now.strftime('%d'), now.strftime('%b'), now.strftime('%Y'), now.strftime('%H'), now.strftime('%M')), 'tag-center')
tex2.after(1000, tick)
def close_window ():
top.destroy()
button = tk.Button(top, text = "Exit", highlightthickness=0, command = close_window, bg = "black", fg = "orange")
button.pack(side = tk.TOP, anchor = tk.NE)
tex2 = tk.Text(master=top, font = "Courier 28 bold", highlightthickness=0, cursor = 'none', insertwidth=0, height = 1, bg = "black", fg = "orange", borderwidth = 0)
tex2.pack(side = tk.TOP)
tex2.tag_configure('tag-center', justify='center')
tex = tk.Text(master=top, font = "Courier 25 bold", highlightthickness=0, cursor = 'none', bg = "black", fg = "orange", borderwidth = 0)
tex.pack()
tex.tag_configure('tag-center', justify='center')
w, h = top.winfo_screenwidth(), top.winfo_screenheight()
top.overrideredirect(1)
top.geometry("%dx%d+0+0" % (w, h))
callback()
tick()
top.mainloop()
This produces the following output (click below for picture):
[Tkinter train departures board](http://i.stack.imgur.com/6cVK3.jpg)
As you can see, "Hertford North" and "Welwyn Garden City" do not fit in the
space provided. I want these names to tick across in a continuous loop within
the current text widget.
Apologies for the messy script, I'm a bit of a noob
Answer: Here is an example of how to make a ticker from Dani Web:
''' Tk_Text_ticker102.py
using Tkinter to create a marquee/ticker
uses a display width of 20 characters
not superbly smooth but good enough to read
tested with Python27 and Python33 by vegaseat 04oct2013
'''
import time
try:
# Python2
import Tkinter as tk
except ImportError:
# Python3
import tkinter as tk
root = tk.Tk()
# width --> width in chars, height --> lines of text
text_width = 20
text = tk.Text(root, width=text_width, height=1, bg='yellow')
text.pack()
    # use a fixed-width font so spaces have the same width as other characters
text.config(font=('courier', 48, 'bold'))
s1 = "We don't really care why the chicken crossed the road. "
s2 = "We just want to know if the chicken is on our side of the "
s3 = "road or not. The chicken is either for us or against us. "
s4 = "There is no middle ground here. (George W. Bush)"
# pad front and end of text with spaces
s5 = ' ' * text_width
# concatenate it all
s = s5 + s1 + s2 + s3 + s4 + s5
for k in range(len(s)):
# use string slicing to do the trick
ticker_text = s[k:k+text_width]
text.insert("1.1", ticker_text)
root.update()
# delay by 0.22 seconds
time.sleep(0.22)
root.mainloop()
P.S. This is not my code; it is
[vegaseat's](https://www.daniweb.com/members/19440/vegaseat) code
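One caveat: `time.sleep` blocks the Tk event loop, so for the departures
board (which already schedules updates with `.after()`) a non-blocking
variant along these lines may fit better. This is an untested sketch that
replaces the `for` loop at the end, reusing the `root`, `text`, `s` and
`text_width` names from above (it assumes `s` is longer than `text_width`):

    def scroll(k=0):
        text.delete("1.0", tk.END)
        text.insert("1.0", s[k:k + text_width])
        # advance one character every 220 ms, wrapping around at the end
        root.after(220, scroll, (k + 1) % (len(s) - text_width))

    scroll()
    root.mainloop()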
|
Unable to install openfst python library on Mac OS X Yosemite
Question: I have been trying to install openfst python library for the last week,
however I am stuck. I have read all similar questions on stack overflow and
other websites but none of the instructions work. I have the latest Xcode
installed, using
brew install openfst
I also installed openfst, however when I want to install the python library by
writing:
pip install openfst
in the terminal, I get:
Collecting openfst
Using cached openfst-1.5.0.tar.gz
Building wheels for collected packages: openfst
Running setup.py bdist_wheel for openfst
Complete output from command /Users/ali/anaconda/bin/python -c "import setuptools;__file__='/private/var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-build-Jqe8Nu/openfst/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/tmpFNyllkpip-wheel-:
running bdist_wheel
running build
running build_ext
building 'fst' extension
creating build
creating build/temp.macosx-10.5-x86_64-2.7
gcc -fno-strict-aliasing -I/Users/ali/anaconda/include -arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/ali/anaconda/include/python2.7 -c fst.cc -o build/temp.macosx-10.5-x86_64-2.7/fst.o -std=c++11 -Wno-unneeded-internal-declaration -Wno-unused-function
In file included from fst.cc:241:
/usr/local/include/fst/util.h:24:10: fatal error: 'unordered_map' file not found
#include <unordered_map>
^
1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for openfst
Failed to build openfst
Installing collected packages: openfst
Running setup.py install for openfst
Complete output from command /Users/ali/anaconda/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-build-Jqe8Nu/openfst/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-oi7XrR-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'fst' extension
gcc -fno-strict-aliasing -I/Users/ali/anaconda/include -arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/ali/anaconda/include/python2.7 -c fst.cc -o build/temp.macosx-10.5-x86_64-2.7/fst.o -std=c++11 -Wno-unneeded-internal-declaration -Wno-unused-function
In file included from fst.cc:241:
/usr/local/include/fst/util.h:24:10: fatal error: 'unordered_map' file not found
#include <unordered_map>
^
1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/Users/ali/anaconda/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-build-Jqe8Nu/openfst/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-oi7XrR-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/36/0m4j84pd49l55mvcqmbqt3z00000gn/T/pip-build-Jqe8Nu/openfst
Could someone please help me?
Answer: I would suggest compiling OpenFST from source with Python enabled. It's pretty
easy:
wget http://www.openfst.org/twiki/pub/FST/FstDownload/openfst-1.5.3.tar.gz
tar zxvf openfst-1.5.3.tar.gz
cd openfst-1.5.3
./configure --enable-python
make
sudo make install
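If I remember correctly, the Python extension installed by this source build
is named `pywrapfst` (unlike the `fst` module the pip package would have
provided), so a quick sanity check from Python is:

    import pywrapfst  # should import without errors after `sudo make install`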
|
Can't find Numpy in Windows 8
Question: I tried to run a code that imports Numpy, but it shown
Traceback (most recent call last):
File "C:\Users\MarcosPaulo\Dropbox\Marcos\Desacelerador Zeeman\Simulação da desaceleração atômica\Programa\Dy_deceleration.py", line 4, in <module>
import numpy as np
ImportError: No module named numpy
I assure you that I have Python 2.7 and Numpy 1.1 installed on my laptop.
Answer: [Try this download
location](http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy). It worked for me.
In **cmd**, run `pip install numpy-1.10.2+mkl-cp27-none-win32.whl` (or use
conda), substituting whichever wheel you downloaded.
**EDIT:** Restart your python shell and **import sys**. Also make sure your
installation location is in your `sys.path` by printing it out to the screen.
It should include `'C:\\Python27\\lib\\site-packages'`.
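For example:

    import sys
    print(sys.path)  # the site-packages folder holding numpy must appear here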
After that, run the python **command line** (shell) **outside the `numpy`
directory** and execute `import numpy; numpy.test()` a couple of times until
you get no failures. It might take a while to test.
|
NameError name 'request' is not defined when trying to check for cookie -Python
Question: I'm encountering the following problem when I'm trying to check if a cookie
exists.
import requests
username = raw_input('Please enter your username: ')
password = raw_input('Please enter your password: ')
DATA = {
"url": "http://example.com/index.php",
"action": "do_login",
"submit": "Login",
"quick_login": "1",
"quick_username": (username),
"quick_password": (password),
}
requests.post("http://example.com/member.php", data=DATA)
if 'mybbuser' in request.COOKIES.keys():
print("Correct")
else:
print("Wrong")
The error I'm getting:
NameError: name 'request' is not defined
Answer: Perhaps you need to store the response and look for the cookies there:

    resp = requests.post("http://example.com/member.php", data=DATA)
    if 'mybbuser' in resp.cookies:  # membership test avoids a KeyError
        print("Correct")
|
Passing a variable from Excel to Python with XLwings
Question: I am trying write a simple user defined function in Python that I pass a value
to from `Excel` via `Xlwings`. I ran across some examples with an Add-in that
you need to import user defined functions, but that seems overly complex.
Why isn't my example working?
VBA:
Function Hello(name As String) As String
RunPython ("import Test; Test.sayhi(name)")
End Function
Python (`Test.py`):
from xlwings import Workbook, Range
def sayhi(name):
wb = Workbook.caller()
return 'Hello {}'.format(name)
Error:
NameError: name 'name' is not defined
Answer: Make sure you're supplying the argument correctly:
RunPython ("import Test; Test.sayhi('" & name & "')")
|
PythonPath and Python Config Scripts
Question: I need some major help and am a bit scared since I do not want to mess up my
computer! I am on a Macbook Air running OSX 10.10.5. So I was following a
tutorial to help me learn Django. The tutorial isn't important. What is
important is that when doing it I changed my $PYTHONPATH to this:
export
PYTHONPATH=$PYTHONPATH:/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-
packages
Then I got scared with a homebrew warning here it is:
Warning: "config" scripts exist outside your system or Homebrew directories.
`./configure` scripts often look for *-config scripts to determine if software
packages are installed, and what additional flags to use when compiling and
linking.
Having additional scripts in your path can confuse software installed via
Homebrew if the config script overrides a system or Homebrew provided script
of the same name. We found the following "config" scripts:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-config
Warning: Your XQuartz (2.7.7) is outdated Please install XQuartz 2.7.8:
<https://xquartz.macosforge.org>
Warning: Python is installed at /Library/Frameworks/Python.framework
Homebrew only supports building against the System-provided Python or a brewed
Python. In particular, Pythons installed to /Library can interfere with other
software installs.
I got scared that I had messed something up because of two things: first, the
message relating to config scripts, and then this one:
Warning: Python is installed at /Library/Frameworks/Python.framework
Homebrew only supports building against the System-provided Python or a brewed
Python. In particular, Pythons installed to /Library can interfere with other
software installs.
I did my research and here are the links I found:
[Repairing mysterious Python config scripts outside of the
system](http://stackoverflow.com/questions/17030490/repairing-mysterious-
python-config-scripts-outside-of-the-system)
[Homebrew Warnings additional "config" scripts in
python](http://stackoverflow.com/questions/34030890/homebrew-warnings-
additional-config-scripts-in-python)
The first one says to clean my path but I have no idea how to do that and the
second has no answers.
Any help would be greatly appreciated since I don't want to use my computer
until I can make sure everything is fixed!
EDIT: Will using export $PATH = /usr/local/bin fix my issue? I got that from
this link: <http://apple.stackexchange.com/questions/96308/python-
installation-messed-up>
Answer: As per my second comment: your PATH and PYTHONPATH depend on what you are
using. You wouldn't need PYTHONPATH at all if you install the necessary
packages for the particular Python you're using (using, for example, the
complementary pip); and you can amend PATH to include that Python executable,
if it's not already on PATH.
For example, I use Homebrew Python. My default PATH already includes
`/usr/local/bin`, and I use `/usr/local/bin/pip` to install packages for that
particular Python. No need for PYTHONPATH, everything works if I make sure I
use `/usr/local/bin/python`.
The catch with this is that `/usr/bin/python` is likely to be found earlier on
your PATH than `/usr/local/bin/python`. That will cause problems. Either use
the full path, `/usr/local/bin/python`, or set an alias (which is shorter to
type).
In fact, this way I'm running Python 2.7, 3.4 and 3.5 all in `/usr/local/bin`,
all with aliases. And I still have my system Python at `/usr/bin/python` for
system scripts. (The tricky part with multiple versions is pip: I've made
multiple copies of pip, each named differently, each with a different hash-
bang as the first line. Alternatively, I could run `/usr/local/bin/pythonx.y
/usr/local/bin/pip` and the correct `pip` is used.)
* * *
In short:
* unset PYTHONPATH
* make sure `/usr/local/bin` is included in PATH, but don't necessary set it at the front of PATH
* remove Homebrew Python
The following depends on if you want to use Homebrew:
* if you want to use the latest Python version(s), (re)install Python 2 (and 3; just try it out) with Homebrew.
* Make aliases, if necessary, for `/usr/local/bin/python2.7` and corresponding `pip`. (Ditto Python 3.)
* Install all packages with that pip. Or, if you're using `setup.py`, run it with the appropriate Python executable.
Something similar goes if you like to use, e.g., Anaconda Python.
If you attempt to install some binary package (e.g., through an installer),
you're bound to mess up things. Don't do it, use the appropriate pip.
|
Running a python script from another script repeats forever
Question: I am having trouble running a second script from a first script in python. To
simplify what I'm doing the following code illustrates what I'm submitting
file1.py:
from os import system
x = 1
system('python file2.py')
file2.py:
from file1 import x
print x
The trouble is that when I run file1.py x is printed forever until
interrupted. Is there something I am doing wrong?
Answer: `from file1 import x` imports the whole file1. When it is imported, all of it
is evaluated, including `system('python file2.py')`.
You can prevent the recursion this way:
if __name__ == "__main__":
system('python file2.py')
That will solve your current problem, however, it does not look like it is
doing anything useful.
You should choose one of two options:
1. If file2 has access to file1, remove the `system` call completely and simply execute file2 directly.
2. If file2 does not have access to file1 and you have to start the file2 process from file1, then simply pass the arguments to it through the command line and don't import file1 from file2.
`system('python file2.py %s' % x)`
(or, a bit better, use `subprocess.call(['python', 'file2.py', str(x)])`; note that the arguments must be strings)
In file2, you can then access the value of `x` as:

    import sys
    x = sys.argv[1]
|
Argparse suggests nonsensical order in help text usage line
Question: This program:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('files', metavar='INPUT', nargs='*',
help='File(s) containing words to include. If none given, stdin will be used.')
parser.add_argument('-x', '--exclude', nargs='*',
help='File(s) containing words to exclude.')
args = parser.parse_args()
print args.files
print args.exclude
produces this output when run in Python 2.7.9:
$ python prog.py --help
usage: prog.py [-h] [-x [EXCLUDE [EXCLUDE ...]]] [INPUT [INPUT ...]]
positional arguments:
INPUT File(s) containing words to include. If
none given, stdin will be used.
optional arguments:
-h, --help show this help message and exit
-x [EXCLUDE [EXCLUDE ...]], --exclude [EXCLUDE [EXCLUDE ...]]
File(s) containing words to exclude.
However, that "help" output instructs the user to use a nonsensical ordering
for the arguments. It is nonsensical because if the `-x` option is used, then
no `INPUT` arguments will be detected.
Argparse ought instead to advise the user to use this ordering:
usage: prog.py [-h] [INPUT [INPUT ...]] [-x [EXCLUDE [EXCLUDE ...]]]
Two questions:
1. Is this a bug in `argparse`? (I think it is.)
2. Regardless of whether it is a bug, how can I fix it so that `$ python prog.py --help` will output the help text I desire (see above), preferably in as [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) a way as possible?
Answer: The simplest way is to add `usage="..."` to `argparse.ArgumentParser()`.
By viewing the source of argparse, I found a way to resort arguments which
might be a little bit dirty:
class MyHelpFormatter(argparse.HelpFormatter):
def _format_actions_usage(self, actions, groups):
actions.sort(key=lambda a: bool(a.option_strings and a.nargs != 0))
return super(MyHelpFormatter, self)._format_actions_usage(actions, groups)
parser = argparse.ArgumentParser(formatter_class = MyHelpFormatter)
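With that formatter, the generated usage line should come out in the order
the question asks for, along the lines of:

    usage: prog.py [-h] [INPUT [INPUT ...]] [-x [EXCLUDE [EXCLUDE ...]]]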
|
Python code not deleting record from database
Question: This is the code I have to delete a record from two tables in my database that
share the same ID code and I'm not too sure where I've gone wrong. Anything
missing? I've checked this a million times
def deletePhoto(photoID):
"""
Middleware function to delete a photo post
"""
#connect to the database
conn, cursor = getConnectionAndCursor()
#create sql to delete from the ratings table
sql = """
DELETE
FROM ratings
WHERE photoID= %s
"""
#set the parameters
parameters = (photoID)
#execute the sql
cursor.execute(sql, parameters)
#create sql to delete from the photo table
sql = """
DELETE
FROM photo
WHERE photoID = %s
"""
#set the parameters
parameters = (photoID)
#execute the sql
cursor.execute(sql, parameters)
#fetch the data
data = cursor.rowcount
#clean up
conn.commit()
cursor.close()
conn.close()
Answer: You might try adding a sleep after your executes; it can take some time for
the server to process your query:

    import time
    time.sleep(x)

where `x` is in seconds.
|
PyQt4: How to pause a Thread until a signal is emitted?
Question: I have the following pyqtmain.py:
#!/usr/bin/python3
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from pyqtMeasThread import *
class MainWindow(QMainWindow):
def __init__(self, parent=None):
self.qt_app = QApplication(sys.argv)
QMainWindow.__init__(self, parent)
buttonWidget = QWidget()
rsltLabel = QLabel("Result:")
self.rsltFiled = QLineEdit()
self.buttonStart = QPushButton("Start")
verticalLayout = QVBoxLayout(buttonWidget)
verticalLayout.addWidget(rsltLabel)
verticalLayout.addWidget(self.rsltFiled)
verticalLayout.addWidget(self.buttonStart)
butDW = QDockWidget("Control", self)
butDW.setWidget(buttonWidget)
self.addDockWidget(Qt.LeftDockWidgetArea, butDW)
self.mthread = QThread() # New thread to run the Measurement Engine
self.worker = MeasurementEngine() # Measurement Engine Object
self.worker.moveToThread(self.mthread)
self.mthread.finished.connect(self.worker.deleteLater) # Cleanup after thread finished
self.worker.measure_msg.connect(self.showRslt)
self.buttonStart.clicked.connect(self.worker.run)
# Everything configured, start the worker thread.
self.mthread.start()
def run(self):
""" Show the window and start the event loop """
self.show()
self.qt_app.exec_() # Start event loop
@pyqtSlot(str)
def showRslt(self, mystr):
self.rsltFiled.setText(mystr)
def main():
win = MainWindow()
win.run()
if __name__ == '__main__':
main()
And another thread script performing the actual measurement:
from PyQt4.QtCore import *
import time
class MeasurementEngine(QObject):
measure_msg = pyqtSignal(str)
def __init__(self):
QObject.__init__(self) # Don't forget to call base class constructor
@pyqtSlot()
def run(self):
self.measure_msg.emit('phase1')
time.sleep(2) # here I would like to make it as an interrupt
self.measure_msg.emit('phase2')
What this code does now is that after the Start button is pressed, the
function run in the thread will be executed. However, actually in the function
run, there are two phases of the measurement. Right now I used an time delay.
But what I would like to implement actually is that after the 'phase1'
measurement is done. A message box will be popped up, and at the same time,
the thread will be paused/held. Until the user closed the message box, then
the thread function will be resumed.
Answer: Use a `QWaitCondition` from the `QtCore` module. Using a mutex lock, you set
the background thread to wait/sleep until the foreground thread wakes it back
up. Then it will continue doing its work from there.
#!/usr/bin/python3
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from pyqtMeasThread import *
class MainWindow(QMainWindow):
def __init__(self, parent=None):
self.qt_app = QApplication(sys.argv)
QMainWindow.__init__(self, parent)
buttonWidget = QWidget()
rsltLabel = QLabel("Result:")
self.rsltFiled = QLineEdit()
self.buttonStart = QPushButton("Start")
verticalLayout = QVBoxLayout(buttonWidget)
verticalLayout.addWidget(rsltLabel)
verticalLayout.addWidget(self.rsltFiled)
verticalLayout.addWidget(self.buttonStart)
butDW = QDockWidget("Control", self)
butDW.setWidget(buttonWidget)
self.addDockWidget(Qt.LeftDockWidgetArea, butDW)
self.mutex = QMutex()
self.cond = QWaitCondition()
self.mthread = QThread() # New thread to run the Measurement Engine
self.worker = MeasurementEngine(self.mutex, self.cond) # Measurement Engine Object
self.worker.moveToThread(self.mthread)
self.mthread.finished.connect(self.worker.deleteLater) # Cleanup after thread finished
self.worker.measure_msg.connect(self.showRslt)
self.buttonStart.clicked.connect(self.worker.run)
# Everything configured, start the worker thread.
self.mthread.start()
def run(self):
""" Show the window and start the event loop """
self.show()
self.qt_app.exec_() # Start event loop
# since this is a slot, it will always get run in the event loop in the main thread
@pyqtSlot(str)
def showRslt(self, mystr):
self.rsltFiled.setText(mystr)
msgBox = QMessageBox(parent=self)
msgBox.setText("Close this dialog to continue to Phase 2.")
msgBox.exec_()
self.cond.wakeAll()
def main():
win = MainWindow()
win.run()
if __name__ == '__main__':
main()
And:
from PyQt4.QtCore import *
import time
class MeasurementEngine(QObject):
measure_msg = pyqtSignal(str)
def __init__(self, mutex, cond):
QObject.__init__(self) # Don't forget to call base class constructor
self.mtx = mutex
self.cond = cond
@pyqtSlot()
def run(self):
# NOTE: do work for phase 1 here
self.measure_msg.emit('phase1')
self.mtx.lock()
try:
self.cond.wait(self.mtx)
# NOTE: do work for phase 2 here
self.measure_msg.emit('phase2')
finally:
self.mtx.unlock()
Your timing is a little bit off in all this though. You create the app and
start the thread before you even show your window. Thus, the message box will
pop up **_before_** the main window even pops up. To get the right sequence of
events, you should start your thread as part of the `run` method of your
MainWindow, **_after_** you have already made the main window visible. If you
want the wait condition to be separate from the setting of the messages, you
may need a separate signal and slot to deal with that.
|
Select records based on the specific index string value and then remove subsequent fields by python
Question: I have a .csv file named `fileO.csv` that contains many records. Some records
are required and some are not. I find that the required records contain the
string “Mi”, which does not exist in the unnecessary records. So, I want to
select the required records based on the string value “Mi” appearing in any
field of a record.
Finally, for each selected record, I want to delete the field that contains
“Mi” and all subsequent fields. Any suggestion and advice is appreciated.
Optional:
1. In addition, I want to delete the first column.
2. Split column BB into two columns named a_id and b_id. Separate the value by _ (underscore); the left side will go to a_id and the right side to b_id.
My `fileO.csv` is as follows:
AA BB CC DD EE FF GG
1 1_1.csv (=0 =10" 27" =57 "Mi"
0.97 0.9 0.8 NaN 0.9 od 0.2
2 1_3.csv (=0 =10" 27" "Mi" 0.5
0.97 0.5 0.8 NaN 0.9 od 0.4
3 1_6.csv (=0 =10" "Mi" =53 cnt
0.97 0.9 0.8 NaN 0.9 od 0.6
4 2_6.csv No Bi 000 000 000 000
5 2_8.csv No Bi 000 000 000 000
6 6_9.csv less 000 000 000 000
7 7_9.csv s(=0 =26" =46" "Mi" 121
My expected results file (outFile.csv):
a_id b_id CC DD EE FF GG
1 1 0 10 27 57
1 3 0 10 27
1 6 0 10
7 9 0 26 46
Answer: The following approach should work fine using the Python `csv` module:
import csv
import re
import string
output_header = ['a_id', 'b_id', 'CC', 'DD', 'EE', 'FF', 'GG']
sanitise_table = string.maketrans("","")
nodigits_table = sanitise_table.translate(sanitise_table, string.digits)
def find_mi(row):
for index, col in enumerate(row):
if col.find('Mi') != -1:
return index
return -1
def sanitise_cell(cell):
return cell.translate(sanitise_table, nodigits_table) # Keep digits
f_input = open('fileO.csv', 'rb')
f_output = open('outFile.csv', 'wb')
csv_input = csv.reader(f_input)
csv_output = csv.writer(f_output)
input_header = next(f_input)
csv_output.writerow(output_header)
for row in csv_input:
#print '%2d %s' % (len(row), row)
if len(row) >= 2:
            bb = re.match(r'(\d+)_(\d+)\.csv', row[1])
mi = find_mi(row)
if bb and mi != -1:
row[:] = row[:mi] + [''] * (len(row) - mi)
row[:] = [sanitise_cell(col) for col in row]
row[0] = bb.group(1)
row[1] = bb.group(2)
csv_output.writerow(row)
f_input.close()
f_output.close()
`outFile.csv` will contain the following:
a_id,b_id,CC,DD,EE,FF,GG
1,1,0,10,27,57,
1,3,0,10,27,,
1,6,0,10,,,
7,9,0,26,46,,
Tested using Python 2.6.6
|
Python gaussian fit on simulated gaussian noisy data
Question: I need to interpolate data coming from an instrument using a gaussian fit. To
this end I thought about using the `curve_fit` function from `scipy`. Since
I'd like to test this functionality on fake data before trying it on the
instrument I wrote the following code to generate noisy gaussian data and to
fit it:
from scipy.optimize import curve_fit
import numpy
import pylab
# Create a gaussian function
def gaussian(x, a, b, c):
val = a * numpy.exp(-(x - b)**2 / (2*c**2))
return val
# Generate fake data.
zMinEntry = 80.0*1E-06
zMaxEntry = 180.0*1E-06
zStepEntry = 0.2*1E-06
x = numpy.arange(zMinEntry,
zMaxEntry,
zStepEntry,
dtype = numpy.float64)
n = len(x)
meanY = zMinEntry + (zMaxEntry - zMinEntry)/2
sigmaY = 10.0E-06
a = 1.0/(sigmaY*numpy.sqrt(2*numpy.pi))
y = gaussian(x, a, meanY, sigmaY) + a*0.1*numpy.random.normal(0, 1, size=len(x))
# Fit
popt, pcov = curve_fit(gaussian, x, y)
# Print results
print("Scale = %.3f +/- %.3f" % (popt[0], numpy.sqrt(pcov[0, 0])))
print("Offset = %.3f +/- %.3f" % (popt[1], numpy.sqrt(pcov[1, 1])))
print("Sigma = %.3f +/- %.3f" % (popt[2], numpy.sqrt(pcov[2, 2])))
pylab.plot(x, y, 'ro')
pylab.plot(x, gaussian(x, popt[0], popt[1], popt[2]))
pylab.grid(True)
pylab.show()
Unfortunately this does not work properly; the output of the code is the
following:
Scale = 6174.816 +/- 7114424813.672
Offset = 429.319 +/- 3919751917.830
Sigma = 1602.869 +/- 17923909301.176
And the plotted result is (blue is the fit function, red dots is the noisy
input data):
[](http://i.stack.imgur.com/MCqXY.png)
I also tried to look at
[this](http://stackoverflow.com/questions/19206332/gaussian-fit-for-python)
answer, but couldn't figure out where my problem is. Am I missing something
here? Or am I using the `curve_fit` function in the wrong way? Thanks in
advance!
Answer: I agree with Olaf in so far as it is a question of scale. The optimal
parameters differ by many orders of magnitude. However, scaling the parameters
with which you generated your toy data does not seem to solve the problem for
your actual application.
[`curve_fit`](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.optimize.curve_fit.html)
uses
[`lestsq`](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.optimize.leastsq.html#scipy.optimize.leastsq),
which numerically approximates the Jacobian, where numerical problems arise
because of the differences in scale (try to use the `full_output` keyword in
`curve_fit`).
In my experience it is often best to use
[`fmin`](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.optimize.fmin.html)
which does not rely on approximated derivatives but uses only function values.
You now have to write your own least-squares function that is to be optimized.
Starting values are still important. In your case you can make sufficiently
good guesses by taking the maximum amplitude for `a` and the corresponding
x-values for `b` and `c`.
In code, it looks like this:
from scipy.optimize import curve_fit,fmin
import numpy
import pylab
# Create a gaussian function
def gaussian(x, a, b, c):
val = a * numpy.exp(-(x - b)**2 / (2*c**2))
return val
# Generate fake data.
zMinEntry = 80.0*1E-06
zMaxEntry = 180.0*1E-06
zStepEntry = 0.2*1E-06
x = numpy.arange(zMinEntry,
zMaxEntry,
zStepEntry,
dtype = numpy.float64)
n = len(x)
meanY = zMinEntry + (zMaxEntry - zMinEntry)/2
sigmaY = 10.0E-06
a = 1.0/(sigmaY*numpy.sqrt(2*numpy.pi))
y = gaussian(x, a, meanY, sigmaY) + a*0.1*numpy.random.normal(0, 1, size=len(x))
print a, meanY, sigmaY
# estimate starting values from the data
a = y.max()
    b = x[numpy.argmax(y)]
c = b
# define a least squares function to optimize
def minfunc(params):
return sum((y-gaussian(x,params[0],params[1],params[2]))**2)
# fit
popt = fmin(minfunc,[a,b,c])
# Print results
print("Scale = %.3f" % (popt[0]))
print("Offset = %.3f" % (popt[1]))
print("Sigma = %.3f" % (popt[2]))
pylab.plot(x, y, 'ro')
pylab.plot(x, gaussian(x, popt[0], popt[1], popt[2]),lw = 2)
pylab.xlim(x.min(),x.max())
pylab.grid(True)
pylab.show()
[](http://i.stack.imgur.com/tk92z.png)
|
Using class attributes/methods across different threads - Python
Question: I'll do my best to explain this issue in a clear way, it's come up as part of
a much larger piece of software I'm developing for an A level project - a
project that aims to create a simple version of a graphical programming system
(think scratch made by monkeys with about 7 commands).
My trouble currently stems from the need to have an execution function running
on a unique thread that is capable of interacting with a user interface that
shows the results of executing the code blocks made by the user (written using
the Tkinter libraries) on the main thread. This function is designed to go
through a dynamic list that contains information on the user's "code" in a
form that can be looped through and dealt with "line by line".
The issue occurs when the execution begins, and the threaded function attempts
to call a function that is part of the user interface class. I have limited
understanding of multi threading, so it's all too likely that I am breaking
some important rules and doing things in ways that don't make sense, and help
here would be great.
I have previously come close to the functionality I am after, but always
with some errors coming up in different ways (mostly due to my original
attempts opening a tkinter window in a second thread... a bad idea).
As far as I'm aware my current code works in terms of opening a second thread,
opening the UI in the main thread, and beginning to run the execution function
in the second thread. In order to explain this issue, I have created a small
piece of code that works on the same basis, and produces the same "none type"
error, I would use the original code, but it's bulky, and a lot more annoying
to follow than below:
from tkinter import *
import threading
#Represents what would be my main code
class MainClass():
#attributes for instances of each of the other classes
outputUI = None
threadingObject = None
#attempt to open second thread and the output ui
def beginExecute(self):
self.threadingObject = ThreadingClass()
self.outputUI = OutputUI()
#called by function of the threaded class, attempts to refer to instance
#of "outputUI" created in the "begin execute" function
def execute(self):
return self.outputUI.functionThatReturns()
#class for the output ui - just a blank box
class OutputUI():
#constructor to make a window
def __init__(self):
root = Tk()
root.title = ("Window in main thread")
root.mainloop()
#function to return a string when called
def functionThatReturns(self):
return("I'm a real object, look I exist! Maybe")
#inherits from threading library, contains threading... (this is where my
#understanding gets more patchy)
class ThreadingClass(threading.Thread):
#constructor - create new thread, run the thread...
def __init__(self):
threading.Thread.__init__(self)
self.start()
#auto called by self.start() ^ (as far as I'm aware)
def run(self):
#attempt to run the main classes "execute" function
print(mainClass.execute())
#create instance of the main class, then attempt execution of some
#threading
mainClass = MainClass()
mainClass.beginExecute()
When this code is run, it produces the following result:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 920, in _bootstrap_inner
self.run()
File "H:/Programs/Python/more more threading tests.py", line 33, in run
print(mainClass.execute())
File "H:/Programs/Python/more more threading tests.py", line 14, in execute
return self.outputUI.functionThatReturns()
AttributeError: 'NoneType' object has no attribute 'functionThatReturns'
I guess it should be noted that the tkinter window opens as I hoped, and the
threading class does what it's supposed to, but does not appear to be aware of
the existence of the output UI. I assume this is due to some part of object
orientation and threading of which I am woefully under-informed.
So, is there a way in which I can call the function in the output ui from the
threaded function? Or is there a work around to something similar? It should
be noted that I didn't put the creation of the output window in the **init**
function of the main class, as I need to be able to create the output window
and start threading etc as a result of another input.
Sorry if this doesn't make sense, shout at me and I'll try and fix it, but
help would be greatly appreciated, cheers.
Answer: The problem here is your order of operations. You create a `ThreadingClass` in
`MainClass.beginExecute()`, which calls `self.start()`. That calls `run` which
ultimately tries to call a function on `main.outputUI`. But `main.outputUI`
hasn't been initialized yet (that's the _next_ line in `beginExecute()`). You
could probably make it work just by reordering those two lines. But you
probably don't want to be calling `self.start()` in the thread's `__init__`.
That seems like poor form.
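For illustration, a minimal sketch of that restructuring (this assumes
`OutputUI` is also reworked so that its constructor does not block in
`mainloop()` before returning):

    class ThreadingClass(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            # no self.start() here; the caller decides when the thread starts

        def run(self):
            print(mainClass.execute())

    class MainClass():
        # ... as before ...
        def beginExecute(self):
            self.outputUI = OutputUI()              # create the UI object first
            self.threadingObject = ThreadingClass()
            self.threadingObject.start()            # start the worker afterwards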
|
tsplot out of bounds error in Python seaborn
Question: i'm trying to plot time series data as points in seaborn, colored by
condition. i tried the following:
import matplotlib
import matplotlib.pylab as plt
import seaborn as sns
import pandas
df = pandas.DataFrame({"t": [0, 1],
"y": [1, 1],
"c": ["A", "B"]})
colors = {"A": "r", "B": "g"}
fig = plt.figure()
# this fails
sns.tsplot(time="t", value="y", condition="c", unit="c",
data=df, err_style="unit_points", interpolate=False,
color=colors)
plt.show()
the error is:
x_diff = x[1] - x[0]
IndexError: index 1 is out of bounds for axis 0 with size 1
however if i plot the data as:
# this works
sns.pointplot(x="t", y="y", hue="c", join=False, data=df)
then it works. `pointplot` treats time as categorical data though which is not
right for this. how can this be done with `tsplot`? it should give the same
result as `pointplot` except the x-axis (`t`) should scale quantitatively like
time and not as categorical.
**update**
here's a revised example that shows `tsplot` fails even when there are
multiple observations for most of the labels. in this df 2 out of 3 conditions
have multiple observations, but 1 condition that doesn't is enough to cause
the error:
df = pandas.DataFrame({"t": [0, 1.1, 2.9, 3.5, 4.5, 5.9],
"y": [1, 1, 1, 1, 1, 1],
"c": ["A", "A", "B", "B", "B", "C"]})
colors = {"A": "r", "B": "g", "C": "k"}
print df
fig = plt.figure()
# this works but time axis is wrong
#sns.pointplot(x="t", y="y", hue="c", join=False, data=df)
# this fails
sns.tsplot(time="t", value="y", condition="c", unit="c",
data=df, err_style="unit_points", interpolate=False,
color=colors)
plt.show()
@mwaskom suggested making an ordinary plot. doing that manually is difficult,
error prone, and duplicates work that seaborn already does. seaborn already has
a way to plot and facet data by various features in dataframes and i don't
want to reproduce this code. here's a way to do it manually, which is
cumbersome:
# solution using plt.subplot
# cumbersome and error prone solution
# the use of 'set' makes the order non-deterministic
for l in set(df["c"]):
subset = df[df["c"] == l]
plt.plot(subset["t"], subset["y"], "o", color=colors[l], label=l)
basically i am looking for something like `sns.pointplot` that uses numeric,
rather than categorical x-axis. does seaborn have something like this? another
way to think of it is as a dataframe aware version of `plt.scatter` or
`plt.plot`.
Answer: I think the problem really is that you have too few observations. 2
observations for 2 conditions means there will be only 1 observation for each
condition. For a time-series plot, this is probably not going to work. `x_diff
= x[1] - x[0]` calculates the time interval length and can't work with just
one observation per group.
If you have more than 1 observation per group, i.e.:
df = pandas.DataFrame({"t": [0, 1, 2, 3],
"y": [1, 1, 2, 4],
"c": ["A", "B", "A", "B"]})
it should plot just fine.
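For instance, a sketch of re-running the call from the question on that frame
(hypothetical; it simply mirrors the question's code with the larger df):

    sns.tsplot(time="t", value="y", condition="c", unit="c",
               data=df, err_style="unit_points", interpolate=False)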
|
print json object nested in an array using python
Question: I am working with a JSON file and am using Python. I am trying to print an
object that is nested in an array. I would like to print select objects (e.g.
"name", "thomas_id") from the following array (is it considered a 'list' of
'objects' in an array? would the array be called _the_ "cosponsors" array?):
"cosponsors": [
{
"district": null,
"name": "Akaka, Daniel K.",
"sponsored_at": "2011-01-25",
"state": "HI",
"thomas_id": "00007",
"title": "Sen",
"withdrawn_at": null
},
.
.
.
{
"district": null,
"name": "Lautenberg, Frank R.",
"sponsored_at": "2011-01-25",
"state": "NJ",
"thomas_id": "01381",
"title": "Sen",
"withdrawn_at": null
}
]
The problem is that I do not know the syntax to print objects (listed?) in an
array. I have tried a number of variations extrapolated from what I have found
on stack overflow; namely, variations of the following:
    print(data['cosponsors']['0']['thomas_id'])
I recieve the error "list indices must be integers or slices, not str"
Background:
I have over 3000 json files that are contained within a so-called master file.
I only need the same specific aspects of each file that I will need to later
export into a MYSQL DB, but that is another topic (or is it, i.e. am I going
about this the wrong way?). Accordingly, I am writing a code that I can
impliment on all of the files in order to get the data that I need. I've been
doing quite well, considering that I do not have any experience with
programming. I have been using the following code in Python:
import json
data = json.load(open('s2_data.json', 'r'))
print (data["official_title"], data["number"], data["introduced_at"],
data["bill_id"], data['subjects_top_term'], data['subjects'],
data['summary']['text'], data['sponsor']['thomas_id'],
data['sponsor']['state'], data['sponsor']['name'], data['sponsor']
['type'])
It has been returning results that are separated with a space. So far I am
happy with that.
Answer: How about
data['cosponsors'][0]['thomas_id']
Since a list has numeric indices.
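And to print those two fields for every cosponsor in the array, loop over the
list:

    for cosponsor in data['cosponsors']:
        print(cosponsor['name'], cosponsor['thomas_id'])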
|
Temporal statistics from command line in paraview linux
Question: I'm running paraview on a linux server with poor graphics, so I'm limited to
a command-line approach.
I'd very much like to be able to read a CFD results file into paraview,
average the values at each point (`Filter` > `Temporal Statistics`), then
plot a line (`Plot over line`) and save the plot as a .csv file.
I'm not sure if a python script would be the way forward, or perhaps running
paraview from the command line. What do you recommend?
Answer: This definitely looks like a job for Python scripting. First, I would use the
ParaView GUI's tracing capability to create the script to automate what you
want to do. Then on your Linux server use the `pvpython` program (which comes
with ParaView) to run the script. (Note that if you are on a cluster that uses
MPI, you should use `pvbatch` instead. But it sounds like your server is a
single workstation.) You might want to edit the script ParaView generates to
remove all the rendering items, and you will probably need to change the
filename that the script loads and saves.
Here is a quick script I made that does exactly what you are asking on one of
ParaView's test data sets. I used the GUI tracing to create it and then
deleted all the rendering/display commands as well as other extraneous
commands.
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
# create a new 'ExodusIIReader'
canex2 = ExodusIIReader(FileName=['/Users/kmorel/data/ParaViewDataNew/can.ex2'])
canex2.ElementVariables = ['EQPS']
canex2.PointVariables = ['DISPL', 'VEL', 'ACCL']
canex2.GlobalVariables = ['KE', 'XMOM', 'YMOM', 'ZMOM', 'NSTEPS', 'TMSTEP']
canex2.NodeSetArrayStatus = []
canex2.SideSetArrayStatus = []
canex2.ElementBlocks = ['Unnamed block ID: 1 Type: HEX', 'Unnamed block ID: 2 Type: HEX']
canex2.ApplyDisplacements = 0
# create a new 'Temporal Statistics'
temporalStatistics1 = TemporalStatistics(Input=canex2)
# create a new 'Plot Over Line'
plotOverLine1 = PlotOverLine(Input=temporalStatistics1,
Source='High Resolution Line Source')
# init the 'High Resolution Line Source' selected for 'Source'
plotOverLine1.Source.Point1 = [-7.878461837768555, 0.0, -14.999999046325684]
plotOverLine1.Source.Point2 = [8.312582015991211, 8.0, 4.778104782104492]
# Properties modified on plotOverLine1
plotOverLine1.Tolerance = 2.22044604925031e-16
# Properties modified on plotOverLine1.Source
plotOverLine1.Source.Point1 = [0.0, 5.0, -15.0]
plotOverLine1.Source.Point2 = [0.0, 5.0, 0.0]
# save data
SaveData('plot_over_line.csv', proxy=plotOverLine1)
|
Fast random weighted selection across all rows of a stochastic matrix
Question: `numpy.random.choice` allows for weighted selection from a vector, i.e.
arr = numpy.array([1, 2, 3])
weights = numpy.array([0.2, 0.5, 0.3])
choice = numpy.random.choice(arr, p=weights)
selects 1 with probability 0.2, 2 with probability 0.5, and 3 with probability
0.3.
What if we wanted to do this quickly in a vectorized fashion for a 2D array
(matrix) for which each of the rows is a vector of probabilities? That is, we
want a vector of choices from a stochastic matrix. This is the super slow way:
import numpy as np
m = 10
n = 100 # Or some very large number
items = np.arange(m)
prob_weights = np.random.rand(m, n)
prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True)
choices = np.zeros((n,))
# This is slow, because of the loop in Python
for i in range(n):
choices[i] = np.random.choice(items, p=prob_matrix[:,i])
`print(choices)`:
array([ 4., 7., 8., 1., 0., 4., 3., 7., 1., 5., 7., 5., 3.,
1., 9., 1., 1., 5., 9., 8., 2., 3., 2., 6., 4., 3.,
8., 4., 1., 1., 4., 0., 1., 8., 5., 3., 9., 9., 6.,
5., 4., 8., 4., 2., 4., 0., 3., 1., 2., 5., 9., 3.,
9., 9., 7., 9., 3., 9., 4., 8., 8., 7., 6., 4., 6.,
7., 9., 5., 0., 6., 1., 3., 3., 2., 4., 7., 0., 6.,
3., 5., 8., 0., 8., 3., 4., 5., 2., 2., 1., 1., 9.,
9., 4., 3., 3., 2., 8., 0., 6., 1.])
[This post](http://stackoverflow.com/q/24140114/586086) suggests that `cumsum`
and `bisect` could be a potential approach, and is fast. But while
`numpy.cumsum(arr, axis=1)` can do this along one axis of a numpy array, the
[`bisect.bisect`](https://docs.python.org/2/library/bisect.html) function only
works on a single array at a time. Similarly,
[`numpy.searchsorted`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html)
only works on 1D arrays as well.
Is there a quick way to do this using only vectorized operations?
Answer: Here's a fully vectorized version that's pretty fast:
def vectorized(prob_matrix, items):
s = prob_matrix.cumsum(axis=0)
r = np.random.rand(prob_matrix.shape[1])
k = (s < r).sum(axis=0)
return items[k]
_In theory_ , `searchsorted` is the right function to use for looking up the
random value in the cumulatively summed probabilities, but with `m` being
relatively small, `k = (s < r).sum(axis=0)` ends up being much faster. Its
time complexity is O(m), while the `searchsorted` method is O(log(m)), but
that will only matter for much larger `m`. _Also_ , `cumsum` is O(m), so both
`vectorized` and @perimosocordiae's `improved` are O(m). (If your `m` is, in
fact, much larger, you'll have to run some tests to see how large `m` can be
before this method is slower.)
Here's the timing I get with `m = 10` and `n = 10000` (using the functions
`original` and `improved` from @perimosocordiae's answer):
In [115]: %timeit original(prob_matrix, items)
1 loops, best of 3: 270 ms per loop
In [116]: %timeit improved(prob_matrix, items)
10 loops, best of 3: 24.9 ms per loop
In [117]: %timeit vectorized(prob_matrix, items)
1000 loops, best of 3: 1 ms per loop
The full script where the functions are defined is:
import numpy as np
def improved(prob_matrix, items):
# transpose here for better data locality later
cdf = np.cumsum(prob_matrix.T, axis=1)
# random numbers are expensive, so we'll get all of them at once
ridx = np.random.random(size=n)
# the one loop we can't avoid, made as simple as possible
idx = np.zeros(n, dtype=int)
for i, r in enumerate(ridx):
idx[i] = np.searchsorted(cdf[i], r)
# fancy indexing all at once is faster than indexing in a loop
return items[idx]
def original(prob_matrix, items):
choices = np.zeros((n,))
# This is slow, because of the loop in Python
for i in range(n):
choices[i] = np.random.choice(items, p=prob_matrix[:,i])
return choices
def vectorized(prob_matrix, items):
s = prob_matrix.cumsum(axis=0)
r = np.random.rand(prob_matrix.shape[1])
k = (s < r).sum(axis=0)
return items[k]
m = 10
n = 10000 # Or some very large number
items = np.arange(m)
prob_weights = np.random.rand(m, n)
prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True)
|
Python/Django: creating parent model that all models will inherit from
Question: I am attempting to create a model class that all models will inherit from when
they are created (in django). I want any model class, with any attributes to
be able to inherit from this class and read from the appropriate database
table. I know I am going to need to use `**kwargs` and `setattr()` at some point
but am unclear as to where I even start. I am also going to try to recreate
`.all()`, `.filter()` and `.get()` within that class so that all models
that inherit this class can access them.
This is what I have so far...
import sqlite3
class Model:
def __init__(self):
pass
@classmethod
def all(self, **kwargs):
pass
@classmethod
def get(self):
pass
@classmethod
def filter(self):
pass
###don't touch the code for these
class Users(Model):
pass
class Stocks(Model):
pass
Could somebody help me out with the initialization of this class?
Answer: It looks like you're trying to insert an [abstract base
class](https://docs.djangoproject.com/en/1.9/topics/db/models/#abstract-base-
classes) for your models.
Basically, what you've got there is correct, except you're missing
from django.db.models import Model
class MyModelBase(Model):
class Meta:
abstract = True
# ... The fields and methods I want all my models to inherit.
Then rather than making your models inherit from `django.db.models.Model`,
they should inherit from `MyModelBase`.
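Your existing models then need only a one-word change (a sketch, assuming
`MyModelBase` is importable from the same app):

    class Users(MyModelBase):
        pass

    class Stocks(MyModelBase):
        pass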
|
Efficiently testing if strings are present in 2D numpy arrays
Question: I have a list of sentences that I want to convert into a Pandas MultiIndex
(but don't worry, this question can probably be answered purely using numpy).
For example, lets say the sentences are:
sentences = ['she went', 'I went', 'she and I']
To make an index I first need to get a unique list of words in all the
sentences. Each word will become an index. The result of this should be:
words = ['she', 'went', 'I', 'and']
Then to work out what values of the index each row has, I need a 2d array of
booleans. Making this array is the main problem as I want it to be as
efficient as possible, and hopefully without relying on python data
manipulation at all. This 2D array can be in _either one of_ two different
formats:
* An array of tuples. Each tuple contains booleans to indicate the presence of the given word in the row. This will be passed to `pandas.MultiIndex.from_tuples()` For example:
tuples = [
#"She went" contains "she" and "went", but not "I" or "and"
(True, True, False, False),
#"I went" contains "I" and "went", but not "she" or "and"
(False, True, True, False),
#"She and I" contains "she", "I" and "and", but not "went"
(True, False, True, True),
]
* An array of arrays, one inner array for each word. This will be passed to `pandas.MultiIndex.from_arrays()`. For example:
arrays = [
# 'she' is in the first and third sentences
[True, False, True],
# 'went' is in the first and second sentences
[True, True, False],
        # 'I' is in the second and third sentences
        [False, True, True],
        # 'and' is in the third sentence only
        [False, False, True],
]
* * *
Ideally the solution will convert sentences to an np array and work with that
from then on. My naive implementation so far is this. Unfortunately I'm not
sure how to do this with numpy without list comprehensions:
import pandas as pd
sentences = ['she went', 'I went', 'she and I']
# Can this be done using numpy?
split_sentences = [sentence.split(" ") for sentence in sentences]
words = list(set(sum(split_sentences, [])))
# Is there a built in way of doing this with numpy, for example np.intersect?
tuples = [
[True if word in sentence_words else False for word in words]
for sentence_words in split_sentences
]
index = pd.MultiIndex.from_tuples(tuples, names=words)
Answer: Here's one approach vectorizing the crux of the problem of finding the
intersections -
# Split setences
split_sentences = [sentence.split(" ") for sentence in sentences]
# Get group lengths
grplens = np.array([len(item) for item in split_sentences])
# ID each word
unq_words,ID = np.unique(np.concatenate((split_sentences)),return_inverse=True)
# Get lengths
N = len(ID)
Nunq = len(unq_words)
Nsent = len(sentences)
# Get ID shift positions and thus row shifts for final 2D array output
row_shifts = np.zeros(N,dtype=int)
row_shifts[grplens.cumsum()[:-1]] = 1
# Finally get output boolean array using ID and row_shift IDs
out = np.zeros((Nsent,Nunq),dtype=bool)
out[row_shifts.cumsum(),ID] = 1
Sample run -
In [494]: sentences
Out[494]: ['she went', 'I went', 'she and I', 'with dog', 'dog and she']
In [495]: unq_words
Out[495]:
array(['I', 'and', 'dog', 'she', 'went', 'with'],
dtype='|S4')
In [496]: out
Out[496]:
array([[False, False, False, True, True, False],
[ True, False, False, False, True, False],
[ True, True, False, True, False, False],
[False, False, True, False, False, True],
[False, True, True, True, False, False]], dtype=bool)
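To feed the result back into pandas, the boolean rows can be converted to
tuples (a sketch reusing the names from above):

    import pandas as pd

    # Each row of `out` becomes one tuple; unq_words label the index levels
    index = pd.MultiIndex.from_tuples([tuple(row) for row in out],
                                      names=list(unq_words))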
|
Why won't start_transaction set the transaction isolation level?
Question: The [start_transaction](https://dev.mysql.com/doc/connector-
python/en/connector-python-api-mysqlconnection-start-transaction.html) method
of the MySQLConnection class of `mysql-connector-python` doesn't seem to work
as expected.
>>> from mysql.connector import connect
>>> conn = connect(user = 'john', unix_socket = '/tmp/mysql.sock', database='db7')
>>> cur = conn.cursor()
>>> cur.execute("select @@session.tx_isolation")
>>> cur.fetchone()
('REPEATABLE-READ',)
>>> cur.close()
True
>>> conn.start_transaction(isolation_level = 'SERIALIZABLE')
>>> cur = conn.cursor()
>>> cur.execute("select @@session.tx_isolation")
>>> cur.fetchone()
('REPEATABLE-READ',)
The isolation level remains REPEATABLE-READ despite setting the isolation
level to 'SERIALIZABLE' in the `start_transaction` call. Am I missing
something?
Answer:
SELECT TRX_ISOLATION_LEVEL
FROM information_schema.innodb_trx
WHERE TRX_MYSQL_THREAD_ID = CONNECTION_ID();
This should give you your current transaction's isolation level. However,
unless you `START TRANSACTION WITH CONSISTENT SNAPSHOT;` the transaction won't
likely appear here until after InnoDB has seen you run at least one query.
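To check this from Python, something along these lines should work (a sketch
reusing the question's `mysql-connector-python` connection; `some_innodb_table`
is a hypothetical table name):

    conn.start_transaction(isolation_level='SERIALIZABLE')
    cur = conn.cursor()
    # Touch an InnoDB table first so InnoDB registers the transaction
    cur.execute("SELECT COUNT(*) FROM some_innodb_table")
    cur.fetchone()
    cur.execute(
        "SELECT TRX_ISOLATION_LEVEL FROM information_schema.innodb_trx "
        "WHERE TRX_MYSQL_THREAD_ID = CONNECTION_ID()")
    print(cur.fetchone())  # expect ('SERIALIZABLE',)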
|
psycopg2 installation error: Unable to find vcvarsall.bat
Question: I'm trying connect my django 1.9 project to postgresql database. First of all
i need to install psycopg2. I got my psycopg file from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#psycopg> . However I could not
install, I have an "Unable to find vcvarsall.bat" error.
My python version is 3.5.1.
Here is my error;
copying tests\test_quote.py -> build\lib.win-amd64-3.5\psycopg2\tests
copying tests\test_transaction.py -> build\lib.win-amd64-3.5\psycopg2\tests
copying tests\test_types_basic.py -> build\lib.win-amd64-3.5\psycopg2\tests
copying tests\test_types_extras.py -> build\lib.win-amd64-3.5\psycopg2\tests
copying tests\test_with.py -> build\lib.win-amd64-3.5\psycopg2\tests
copying tests\__init__.py -> build\lib.win-amd64-3.5\psycopg2\tests
Skipping optional fixer: buffer
Skipping optional fixer: idioms
Skipping optional fixer: set_literal
Skipping optional fixer: ws_comma
running build_ext
building 'psycopg2._psycopg' extension
error: Unable to find vcvarsall.bat
----------------------------------------
Command "c:\python\python35\python.exe -c "import setuptools, tokenize;__file__=
'C:\\Users\\User\\AppData\\Local\\Temp\\pip-build-4q_3mvan\\psycopg2\\setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\
n'), __file__, 'exec'))" install --record C:\Users\User\AppData\Local\Temp\pip-e
kz8kaam-record\install-record.txt --single-version-externally-managed --compile"
failed with error code 1 in C:\Users\User\AppData\Local\Temp\pip-build-4q_3mvan
\psycopg2
Does anyone have an idea ? Thank you..
Answer: psycopg2 needs to be compiled against various development libraries. On Windows this
is usually automated by using some version of Visual Studio -- which is what
the vcvarsall.bat file is all about -- and is usually a huge pain.
Fortunately, Jason Erickson maintains a Windows port of psycopg2
[here.](http://www.stickpeople.com/projects/python/win-psycopg/)
I see now that you're using Python 3.5, and there doesn't seem to be a release
for that version available on that page. Fortunately, there's an answer for
that [already on Stack
Overflow.](https://stackoverflow.com/questions/28611808/how-to-intall-
psycopg2-for-python-3-5) (One of the comments says there's not a version for
Python 3.5 on the linked page, but that's not true anymore)
|
Sentiment analysis for sentences- positive, negative and neutral
Question: I'm designing a text classifier in Python using NLTK. One of the features
considered in every sentence is its sentiment. I want to weight sentences
with either positive or negative sentiment more than those without any
sentiment (neutral sentences). Using the movie review corpus along with the
naive bayes classifier results in only positive and negative labels. I tried
using demo_liu_hu_lexicon in nltk.sentiment.utils but the function does not
return any values but instead prints it to the output and is very slow. Does
anyone know of a library which gives some sort of weight to sentences based on
sentiment?
Thanks!
Answer: Try the [textblob module](https://pypi.python.org/pypi/textblob):
from textblob import TextBlob
text = '''
These laptops are horrible but I've seen worse. How about lunch today? The food was okay.
'''
blob = TextBlob(text)
for sentence in blob.sentences:
print(sentence.sentiment.polarity)
# -0.7
# 0.0
# 0.5
It uses the nltk library to determine the polarity - which is a float measure
ranging from -1 to 1 for the sentiment. Neutral sentences have zero polarity.
You should be able to get the same measure directly from nltk.
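For example, NLTK ships a VADER-based analyzer that returns comparable
per-sentence scores (a sketch; it assumes you have downloaded the lexicon via
`nltk.download('vader_lexicon')`):

    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()
    scores = sia.polarity_scores("These laptops are horrible but I've seen worse.")
    # A dict with 'neg', 'neu', 'pos' and a 'compound' score in [-1, 1];
    # a compound score near 0 indicates a neutral sentence.
    print(scores)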
|
Update variable in python after each loop
Question: Been struggling with this for a little while, I want the variable "result" to
be updated after each loop of the function so that next time it steps through
the function it will return something new. I have tried assigning result
outside of my function and then attempting to update it like that, but haven't had
any luck.
def move2(msg):
global result
if result is None:
print "Error result variable does not have a value"
elif result is "rtn":
rtn()
result = "fwd"
return result
elif result is "fwd":
fwd()
result = "step"
return result
elif result is "step":
step()
result = "rtn"
return result
else:
print "ERROR"
To note I am fairly new to python and am not opposed to other approaches if
any of you have suggestions. Thanks
Answer: You are trying to make your function stateful which is generally a bad idea.
Make it a class instead, like
from itertools import cycle
class StatefulCaller:
def __init__(self, state_fns, repeat=True):
if repeat:
state_fns = cycle(state_fns)
self.fn_iter = iter(state_fns)
def __call__(self):
return next(self.fn_iter)()
which you can then use like:
rtn = lambda: "rtn"
fwd = lambda: "fwd"
step = lambda: "step"
myfn = StatefulCaller([rtn, fwd, step])
for i in range(10):
print(i, myfn())
which results in
0 rtn
1 fwd
2 step
3 rtn
4 fwd
5 step
6 rtn
7 fwd
8 step
9 rtn
|
How do I distribute fonts with my python package?
Question: I have created a package called
[clearplot](http://clearplot.readthedocs.org/en/latest/index.html) that wraps
around matplotlib. I have also created a nice font that I want to distribute
with my package. I consulted [this section](https://python-packaging-user-
guide.readthedocs.org/en/latest/distributing/#data-files) of the Python
Packaging User guide, and determined that I should use the `data_files`
keyword. I chose `data_files` instead of `package_data` since I need to
install the font in a matplotlib directory that is _outside_ of my package.
Here is my first, flawed, attempt at a `setup.py` file:
from distutils.core import setup
import os, sys
import matplotlib as mpl
#Find where matplotlib stores its True Type fonts
mpl_data_dir = os.path.dirname(mpl.matplotlib_fname())
mpl_ttf_dir = os.path.join(mpl_data_dir, 'fonts', 'ttf')
setup(
...(edited for brevity)...
install_requires = ['matplotlib >= 1.4.0, !=1.4.3', 'numpy >= 1.6'],
data_files = [
(mpl_ttf_dir, ['./font_files/TeXGyreHeros-txfonts/TeXGyreHerosTXfonts-Regular.ttf']),
(mpl_ttf_dir, ['./font_files/TeXGyreHeros-txfonts/TeXGyreHerosTXfonts-Italic.ttf'])]
)
#Try to delete matplotlib's fontList cache
mpl_cache_dir = mpl.get_cachedir()
mpl_cache_dir_ls = os.listdir(mpl_cache_dir)
if 'fontList.cache' in mpl_cache_dir_ls:
fontList_path = os.path.join(mpl_cache_dir, 'fontList.cache')
os.remove(fontList_path)
There are two issues with this `setup.py`:
1. I attempt to import matplotlib before `setup()` has a chance to install it. This is an obvious booboo, but I needed to know where `mpl_ttf_dir` was before I ran `setup()`.
2. As mentioned [here](https://python-packaging-user-guide.readthedocs.org/en/latest/distributing/#data-files), wheel distributions do not support absolute paths for `data_files`. I didn't think this would be a problem because I thought I would just use a sdist distribution. (sdists do allow absolute paths.) Then I came to find out that pip 7.0 (and later) converts all packages to wheel distributions, even if the distribution was originally created as a sdist.
I was quite annoyed by issue #2, but, since then, I found out that absolute
paths are bad because they do not work with virtualenv. Thus, I am now willing
to change my approach, but what do I do?
The only idea I have is to distribute the font as `package_data` first and
then move the font to the proper location afterwards using the `os` module. Is
that a kosher method?
Answer: > The only idea I have is to distribute the font as package_data first and
> then move the font to the proper location afterwards using the os module. Is
> that a kosher method?
I would consider doing exactly this. I know your package may not be an obvious
candidate for virtualenvs, but consider that Python packages may be installed
only to a user-writable location. Copying the font the first time your program
runs, after detecting the correct location at run time, also lets you handle
things in a better manner than setup.py ever could: elevate privileges through
a password prompt in case it's needed, ask for a different location in case
detection fails, prompt before over-writing existing system files, etc.
I once tried arguing that Python packages should be able to place stuff in
`/etc`, but I realized the benefits were small compared to just creating a
proper native package for the target OS, i.e. a debian package for Debian or a
.exe installer for Windows.
The bottom line is that wheel and setuptools are not package managers for your
entire OS, but just for what's in some local `site-packages/`.
I hope this answer gives you enough background to avoid `data_files`. One last
good reason: Making it work across distutils, setuptools, and wheel is a no-
go.
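If you do go the `package_data` route, a first-run installer could look roughly
like this (a sketch; it assumes the fonts ship in a flat `font_files` directory
inside your package):

    import os
    import shutil
    import matplotlib as mpl

    def install_fonts():
        # Locate matplotlib's ttf directory at run time, not at install time
        mpl_data_dir = os.path.dirname(mpl.matplotlib_fname())
        mpl_ttf_dir = os.path.join(mpl_data_dir, 'fonts', 'ttf')
        # Fonts bundled as package_data next to this module (assumed layout)
        pkg_font_dir = os.path.join(os.path.dirname(__file__), 'font_files')
        for fname in os.listdir(pkg_font_dir):
            if fname.endswith('.ttf'):
                dest = os.path.join(mpl_ttf_dir, fname)
                if not os.path.exists(dest):
                    shutil.copy(os.path.join(pkg_font_dir, fname), dest)
        # Invalidate matplotlib's font cache so the new fonts are picked up
        for cache in ('fontList.cache', 'fontList.py3k.cache'):
            path = os.path.join(mpl.get_cachedir(), cache)
            if os.path.exists(path):
                os.remove(path)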
|
random generated math quiz
Question: I am trying to make a randomly generated math quiz in Python and I keep
encountering a syntax error, but I can't find out what's wrong with my code.
print ("what is your username")
name = input () .title()
print (name, "welcome")
import random
score = 0
random.randint
ops = ["+", "-", "*"]
num1 = random.randint (0,10)
num2 = random.randint (0,10)
oparator = random.choice=(ops)
Q=(str(num1)+(ops)+(str(num2))
print(Q)
I keep encountering this error but I have no idea what's wrong with my code.
Answer: First, replace `input` with `raw_input`. Your code also has several other
errors: the misplaced `=` in `oparator = random.choice=(ops)` assigns to
`random.choice` instead of calling it, and the line building `Q` uses the list
`ops` instead of the chosen operator and is missing a closing parenthesis.
This code runs well on my computer:
print ("what is your username")
name = raw_input()
print (name, "welcome")
import random
score = 0
random.randint
ops = ["+", "-", "*"]
num1 = random.randint (0,10)
num2 = random.randint (0,10)
oparator = random.choice(ops)
Q=(str(num1)+(oparator)+(str(num2)))
print Q
|
Python:Errno 10053 when both the server and the client are in the same system
Question: I'm just beginning to code in Python, and Python networking is a whole new
topic for me! I couldn't find out what the problem was, and I have tried
all the solutions suggested by other users!
Please help me with this code!
Server Code:
from socket import *
import time
HostName = ''
PortNum = 2015
Buff = 5000
ServAdd = (HostName, PortNum)
ServSoc = socket(AF_INET, SOCK_STREAM)
ServSoc.bind(ServAdd)
ServSoc.listen(5)
while 1:
CliAdd = ServSoc.accept()
TempSoc = ServSoc.accept()
while 1:
CliData = TempSoc.recv(Buff)
print CliData
time.sleep(20)
TempSoc.send('Hai')
TempSoc.close()
ServSoc.close()
Client Code:
from socket import *
import time
HostName = 'localhost'
PortNum = 2015
Buff = 5000
ServAdd = (HostName, PortNum)
CliSoc = socket(AF_INET, SOCK_STREAM)
CliSoc.connect(ServAdd)
while 1:
Data = raw_input('Enter Data to sent to the Server: ')
CliSoc.send(Data)
ServData = CliSoc.recv(Buff)
print ServData
time.sleep(10)
CliSoc.close()
The error message is:
Enter Data to sent to the Server: hello
Traceback (most recent call last):
File "C:/Python26/Progs/TCPClient.py", line 15, in <module>
ServData = CliSoc.recv(Buff)
error: [Errno 10053] An established connection was aborted by the software in your host machine
Answer: Try the following server code; your client side seems OK. You have to run
server.py first so that it is listening on the port, then run client.py. On
the server side you create a listening socket (`ServSoc` in your case) and
accept connections. Note that `accept()` returns a `(socket, address)` tuple
rather than a bare socket object, so calling `recv()`, `send()` or `close()`
on its return value directly is not meaningful. The following server code
unpacks that tuple, accepts connections in a loop and checks the client
connection, so it won't crash when a client disconnects.
**Server code:**
from socket import *
import time
HostName = ''
PortNum = 2015
Buff = 5000
ServAdd = (HostName, PortNum)
ServSoc = socket(AF_INET, SOCK_STREAM)
ServSoc.bind(ServAdd)
ServSoc.listen(5)
while 1:
conn, addr = ServSoc.accept()
if conn:
print "connection established with client:",addr
CliData = conn.recv(Buff)
print CliData
time.sleep(1)
conn.send('Hai')
ServSoc.close()
|
Python open file and delete a line
Question: I need a program that imports a file. My file is:
1 abc
2 def
3 ghi
4 jkl
5 mno
6 pqr
7 stu.
I want to delete lines 1, 6 and 7.
I've tried the following to import the file:
f = open("myfile.txt","r")
lines = f.readlines()
f.close()
f = open("myfile.txt","w")
if line = 1:
f.write(line)
f.close
Answer: You could remove those lines as follows:
lines = []
with open('myfile.txt') as file:
for line_number, line in enumerate(file, start=1):
if line_number not in [1, 6, 7]:
lines.append(line)
with open('myfile.txt', 'w') as file:
file.writelines(lines)
By using Python's `with` statement, you ensure that the file is correctly closed
afterwards.
|
Check word frequency of imported file against vocabulary python
Question: I want to create bag of words representation of text file in form of vector
(.toarray()) . I am using code :
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(input="file")
f = open('D:\\test\\45.txt')
bag_of_words = vectorizer.fit_transform([f])
print(bag_of_words)
I want to use vocabulary of countvectorizer for comparison. I have text file
which I tokenized and want to use it as vocabulary. How to do it?
Answer: Given that the tokenization was done by inserting whitespace between the single
tokens, creating a vocabulary from the text is as simple as:
f = open('foo.txt')
text = f.read() # text is a string
tokens = text.split() # breaks the string in single tokens
    vocab = list(set(tokens))  # set() removes the duplicates from the token list
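You can then restrict counting to exactly those tokens by handing the list to
`CountVectorizer` through its `vocabulary` parameter (a sketch reusing the
question's file):

    from sklearn.feature_extraction.text import CountVectorizer

    vectorizer = CountVectorizer(input="file", vocabulary=vocab)
    with open('D:\\test\\45.txt') as f:
        bag_of_words = vectorizer.fit_transform([f])
    print(bag_of_words.toarray())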
|
html form error in django project
Question: I'm rather new to programming. I followed a tutorial about building a
**_django project_**. I've gotten an error that looks like this while running
the server and launching the site (error):
TemplateSyntaxError at /
Variables and attributes may not begin with underscores: 'form.as._p'
Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 1.9
Exception Type: TemplateSyntaxError
Exception Value:
Variables and attributes may not begin with underscores: 'form.as._p'
Exception Location: /home/vagrant/Desktop/lwc2/local/lib/python2.7/site-packages/django/template/base.py in parse, line 514
Python Executable: /home/vagrant/Desktop/lwc2/bin/python
Python Version: 2.7.3
Python Path:
['/home/vagrant/Desktop/lwc2/website',
'/home/vagrant/Desktop/lwc2/lib/python2.7',
'/home/vagrant/Desktop/lwc2/lib/python2.7/plat-linux2',
'/home/vagrant/Desktop/lwc2/lib/python2.7/lib-tk',
'/home/vagrant/Desktop/lwc2/lib/python2.7/lib-old',
'/home/vagrant/Desktop/lwc2/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/home/vagrant/Desktop/lwc2/local/lib/python2.7/site-packages',
'/home/vagrant/Desktop/lwc2/lib/python2.7/site-packages']
Server time: Thu, 10 Dec 2015 15:04:07 +0000
Error during template rendering
In template /home/vagrant/Desktop/lwc2/website/website/templates/home.html, error at line 9
Variables and attributes may not begin with underscores: 'form.as._p'
1 {% extends "base.html" %}
2
3
4 {% block content %}
5
6 <h1>Hello, welcome to the home page</h1>
7
8 <form>
9 {{form.as._p }}
10 </form>
11
12
13
14 {% endblock %}
This is the code that defines my form (inside `form.py`):
from django import forms
class EmailForm(forms.Form):
email = forms.EmailField()
This is the code defining my view and passing the form to the template (inside `views.py`):
from django.shortcuts import render
from .forms import EmailForm
def home(request):
form = EmailForm
context = {"form":form}
template = "home.html"
return render(request, template, context)
And this is the `home.html` thats extending the `base.html`:
{% extends "base.html" %}
{% block content %}
<h1>Hello, welcome to the home page</h1>
<form>
{{form.as._p }}
</form>
{% endblock %}
Answer: You have a typo. It should be `{{ form.as_p }}`.
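Unrelated to the error, it is also conventional to instantiate the form in the
view rather than passing the class itself (a sketch of the adjusted view):

    from django.shortcuts import render
    from .forms import EmailForm

    def home(request):
        form = EmailForm()  # instantiate the form
        context = {"form": form}
        return render(request, "home.html", context)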
|
How do I override a decorated method in Python?
Question: Let's say I want to extend the following Python class, which includes a
decorator that I don't know much about:
from somewhere import some_decorator
class One(object):
@some_decorator
def some_method(self):
do_something()
Should I decorate the overridden method or not? In other words, can I safely
do:
class Two(One):
def some_method(self):
super(Two, self).some_method()
Or do I need to do:
class Two(One):
@some_decorator
def some_method(self):
super(Two, self).some_method()
Answer: Remember what the `@decorator` syntax does:
@decorator
def foo():
print "foo"
is just syntactic sugar for
def foo():
print "foo"
foo = decorator(foo)
Thus, the undecorated function is no longer callable by its name after it has
been decorated because its name has been assigned to something else.
This means that when you call `super(Two, self).some_method()` in the child
class, then the decorated function `some_method` in the parent will be called.
Knowing whether or not you also need to decorate the child overridden method
entirely depends on what you want to do and what the decorator does. But know
that if you call `super(Two, self).some_method()`, then you will call the
decorated function.
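A quick demonstration of that last point, using a hypothetical printing
decorator:

    def some_decorator(fn):
        def wrapper(self):
            print "decorated!"
            return fn(self)
        return wrapper

    class One(object):
        @some_decorator
        def some_method(self):
            print "One.some_method"

    class Two(One):
        def some_method(self):
            super(Two, self).some_method()

    Two().some_method()
    # prints:
    # decorated!
    # One.some_method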
|
Python - merge two lists of tuples into one list of tuples
Question: What's the pythonic way of achieving the following?
from:
a = [('apple', 10), ('of', 10)]
b = [('orange', 10), ('of', 7)]
to get
c = [('orange', 10), ('of', 17), ('apple', 10)]
Answer: You essentially have word-counter pairs. Using `collections.Counter()` lets
you handle those in a natural, Pythonic way:
from collections import Counter
c = (Counter(dict(a)) + Counter(dict(b))).items()
Also see [Is there any pythonic way to combine two dicts (adding values for
keys that appear in both)?](https://stackoverflow.com/questions/11011756/is-
there-any-pythonic-way-to-combine-two-dicts-adding-values-for-keys-that-
appe/11011846#11011846)
Demo:
>>> from collections import Counter
>>> a = [('apple', 10), ('of', 10)]
>>> b = [('orange', 10), ('of', 7)]
>>> Counter(dict(a)) + Counter(dict(b))
Counter({'of': 17, 'orange': 10, 'apple': 10})
>>> (Counter(dict(a)) + Counter(dict(b))).items()
[('orange', 10), ('of', 17), ('apple', 10)]
You could just drop the `.items()` call and keep using a `Counter()` here.
You may want to avoid building (word, count) tuples to begin with and work
with `Counter()` objects from the start.
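For example, if the pairs come from counting tokens yourself, you can skip the
tuples entirely (hypothetical token lists for illustration):

    from collections import Counter

    a = Counter(['apple'] * 10 + ['of'] * 10)
    b = Counter(['orange'] * 10 + ['of'] * 7)
    print(a + b)  # Counter({'of': 17, 'orange': 10, 'apple': 10})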
|
What am I getting wrong in my pandas lambda map?
Question: I'm trying to find the percentile of a dataframe which observations in a
second dataframe would belong to, and I thought a lambda function would do the
trick here like so:
df1.var1.map(lambda x: np.percentile(df2.var1, x))
which I read as for each `x` in the series `df1.var1`, apply the function
`np.percentile(df2.var1, x)`, which finds the percentile of `x` in the series
`df2.var1`. For some reason, I'm getting the error
kth(=-9223372036854775599) out of bounds (209)
where 209 is the length of `df2`, but I have no idea what the `kth` part
refers to. Any ideas what I'm doing wrong here?
FULL ERROR:
ValueError Traceback (most recent call last)
<ipython-input-82-02d5cacfecd4> in <module>()
----> 1 df1.var1.map(lambda x: np.percentile(df2.var1, x))
C:\Users\ngudat\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\core\series.pyc in map(self, arg, na_action)
2043 index=self.index).__finalize__ (self)
2044 else:
-> 2045 mapped = map_f(values, arg)
2046 return self._constructor(mapped,
2047 index=self.index).__finalize__(self)
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:62187)()
<ipython-input-82-02d5cacfecd4> in <lambda>(x)
----> 1 df.qof.map(lambda x: np.percentile(prac_prof.qof, x))
C:\Users\ngudat\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\lib\function_base.pyc in percentile(a, q, axis, out, overwrite_input, interpolation, keepdims)
3266 r, k = _ureduce(a, func=_percentile, q=q, axis=axis, out=out,
3267 overwrite_input=overwrite_input,
-> 3268 interpolation=interpolation)
3269 if keepdims:
3270 if q.ndim == 0:
C:\Users\ngudat\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\lib\function_base.pyc in _ureduce(a, func, **kwargs)
2995 keepdim = [1] * a.ndim
2996
-> 2997 r = func(a, **kwargs)
2998 return r, keepdim
2999
C:\Users\ngudat\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\lib\function_base.pyc in _percentile(a, q, axis, out, overwrite_input, interpolation, keepdims)
3370 weights_above.shape = weights_shape
3371
-> 3372 ap.partition(concatenate((indices_below, indices_above)),axis=axis)
3373
3374 # ensure axis with qth is first
ValueError: kth(=-9223372036854775599) out of bounds (209)
Answer: Percentile will not give you what you need here, it takes a percentile and
gives you the value. You need the opposite. You should rank the entries in the
column and calculate the percentiles from that:
import pandas as pd
aa = [1,3,2,4,11,8,9]
dd = pd.DataFrame(data=aa,columns=['xx'])
dd['rank']=dd['xx'].rank()
dd['percentile'] = dd['rank']/len(dd)
This gives you the percentile corresponding to each entry:
xx rank percentile
0 1 1 0.142857
1 3 3 0.428571
2 2 2 0.285714
3 4 4 0.571429
4 11 7 1.000000
5 8 5 0.714286
6 9 6 0.857143
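For the cross-DataFrame lookup in the question, `scipy.stats.percentileofscore`
does this directly (a sketch reusing the question's column names):

    from scipy import stats

    # Percentile rank of each df1.var1 value within df2.var1
    df1['pctile'] = df1.var1.map(lambda x: stats.percentileofscore(df2.var1, x))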
|
Generation of multidimensional Grid in Python
Question: I have an N-dimensional array of points that represents the sampling of a
function. I am then using numpy's histogramdd to create a multi-dimensional
histogram:
histoComp,edges = np.histogramdd(pointsFunction,bins = [np.unique(pointsFunction[:,i]) for i in range(dim)])
Next I am trying to generate a "grid" with the coordinate of the different
points of each bin. To do so, I am using :
Grid = np.vstack(np.meshgrid([edges[i] for i in range(len(edges))])).reshape(len(edges),-1).T
However, this doesn't work the way I expected it to because the input to
np.meshgrid is a list of arrays instead of separate arrays... But I have to
build the arguments dynamically, given that the number of edges is not known.
Any tips?
UPDATE: Here is an example of what I mean by "not working".
>>>a = [4, 8, 7, 5, 9]
>>>b = [7, 8, 9, 4, 5]
So this is the kind of result I want :
>>>np.vstack(np.meshgrid(a,b)).reshape(2,-1).T
array([[4, 7],
[8, 7],
[7, 7],
[5, 7],
[9, 7],
[4, 8],
[8, 8],
[7, 8],
[5, 8],
[9, 8],
[4, 9],
[8, 9],
[7, 9],
[5, 9],
[9, 9],
[4, 4],
[8, 4],
[7, 4],
[5, 4],
[9, 4],
[4, 5],
[8, 5],
[7, 5],
[5, 5],
[9, 5]])
But this is the result I get :
>>> np.vstack(np.meshgrid([a,b])).reshape(2,-1).T
array([[4, 7],
[8, 8],
[7, 9],
[5, 4],
[9, 5]])
Thank you,
Answer: Use the [`*` argument unpacking
operator](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-
in-python/):
np.meshgrid(*[A, B, C])
is equivalent to
np.meshgrid(A, B, C)
Since `edges` is a list, `np.meshgrid(*edges)` unpacks the items in `edges`
and passes them as arguments to `np.meshgrid`.
For example,
import numpy as np
x = np.array([0, 0, 1])
y = np.array([0, 0, 1])
z = np.array([0, 0, 3])
xedges = np.linspace(0, 4, 3)
yedges = np.linspace(0, 4, 3)
zedges = np.linspace(0, 4, 3)
xyz = np.vstack((x, y, z)).T
hist, edges = np.histogramdd(xyz, (xedges, yedges, zedges))
grid = np.vstack(np.meshgrid(*edges)).reshape(len(edges), -1).T
yields
In [153]: grid
Out[153]:
array([[ 0., 0., 0.],
[ 0., 0., 2.],
[ 0., 0., 4.],
...
[ 2., 4., 4.],
[ 4., 4., 0.],
[ 4., 4., 2.],
[ 4., 4., 4.]])
|
I'm having an invalid syntax error with the socket module
Question:
#!/bin/python
import socket
HOST = "localhost"
PORT = 30002
list = []
passwd = "UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ"
for i in range(1000, 9999):
list.append(i)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
iter = 0
data = s.recv(1024)
# Brute forcing loop
while 1:
s.send(passwd + " " + list[iter]
data = s.recv(1024)
if "Fail!" not in data:
print s.recv(1024)
s.close()
else:
print "Not: " + list[iter]
iter += 1
s.close()
I get an invalid syntax on the s.recv call, but I believe that the socket
isn't initiating a valid handshake. I can connect to the daemon through
netcat.
Answer: You are missing the closing parenthesis on the `s.send()` call, which is why
Python reports the syntax error on the following `s.recv()` line.
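A sketch of the corrected line (note that `list` holds integers, so the value
must be converted before string concatenation; also, `list` and `iter` shadow
built-ins and are worth renaming):

    s.send(passwd + " " + str(list[iter]))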
|
Not able to Import in NLTK - Python
Question: When I run this command in a file or in a shell
import nltk
I get the following error :
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/nltk/__init__.py", line 83, in <module>
from collocations import *
File "/usr/lib/python2.7/dist-packages/nltk/collocations.py", line 37, in <module>
from nltk.probability import FreqDist
File "/usr/lib/python2.7/dist-packages/nltk/probability.py", line 45, in <module>
import random
File "random.py", line 2, in <module>
T = int(raw_input())
ValueError: invalid literal for int() with base 10: ''
Not able to comprehend what is going wrong.
Answer: You have a local `random` module, which masks the `random` module from
standard library.
If you try to `import nltk` from a different working directory, it should
succeed. But in general it's not a good idea name your modules after standard
modules, so please rename your `random.py` file to something else.
For completeness, let me say that the error was obvious from the last
lines of your traceback:
File "random.py", line 2, in <module>
T = int(raw_input())
ValueError: invalid literal for int() with base 10: ''
From the path, `random.py`, you can tell that the error is in a local file
named _random.py_. And from the exception, you know that _something_ passed an
empty string, `''`, from `raw_input` to `int` function, which failed to be
converted to `int`.
_Rule of thumb number 2_ : Always guard you executable code, in a module, in a
`if __name__ == '__main__':` block.
|
Python code to pick out all possible permutations of a n-digit number using x-y numbers only?
Question:
__author__ = 'Mark'
import itertools
def power(num, x=1):
result = 1;
for x in range(x):
result = result * num
return result
print power(4,7)
count = 0
for subset in itertools.product('0123456', repeat = 4):
print(subset)
count +=1
print count
I need to enumerate all possible permutations of a 4-digit number using the
digits 0-6 only, with repetition.
Answer: Take a look at the `product`, `permutations`, `combinations`, and
`combinations_with_replacement` functions from
[itertools](https://docs.python.org/2/library/itertools.html).
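To illustrate the differences on a small input (a quick sketch):

    from itertools import (product, permutations, combinations,
                           combinations_with_replacement)

    digits = '012'
    print(list(product(digits, repeat=2)))  # 9 results: ordered, repeats allowed
    print(list(permutations(digits, 2)))    # 6 results: ordered, no repeats
    print(list(combinations(digits, 2)))    # 3 results: unordered, no repeats
    print(list(combinations_with_replacement(digits, 2)))  # 6: unordered, repeats

For 4-digit strings over the digits 0-6 with repetition, `product('0123456',
repeat=4)`, as in your own code, is the right tool and yields 7**4 = 2401
tuples.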
|
Python Apscheduler - Schedule Jobs dynamically (nested)
Question: We have a requirement to schedule multiple jobs dynamically while current job
is executing.
Approximate Scenario is:
* There should be a scheduler to go through a table of application daily basis (assume at 6 AM UTC).
* Find the users who has today's datetime as `resume_dttime`
* Dynamically schedule a job for that user and start his service at that today's `resume_dttime`
So the my code is:
from apscheduler.schedulers.blocking import BlockingScheduler
sched = BlockingScheduler()
@sched.scheduled_job('cron', day_of_week='mon-fri', hour=6)
def scheduled_job():
"""
"""
liveusers = todays_userslist() #Get users from table with todays resume_dttime
for u in liveusers:
user_job = get_userjob(u.id)
runtime = u.resume_dttime #eg u.resume_dttime is datetime(2015, 12, 13, 16, 30, 5)
sched.add_job(user_job, 'date', run_date=runtime, args=[u.name])
if __name__ == "__main__":
sched.start()
sched.shutdown(wait=True)
The queries are:
* Is this the good way to add jobs dynamically?
* The issue is, there could be 100 or more users. so, adding 100 jobs dynamically is a good idea?
* Is there any other way to achieve this?
Answer: APScheduler 3.0 was specifically designed to efficiently handle a high volume
of scheduled jobs, so I believe your intended way of using it is valid.
|
python split statement for two string expressions
Question: I have to parse a file in which there are some expressions such as:
(COMM_MINFULLCOMTIMEOFCHANNEL == "STD_ON" || COMM_NMLIGHTDURATIONOFCHANNEL == "STD_ON" || COMM_NMLIGHTSILENTDURATIONOFCHANNEL == "STD_ON")
I have parsed the expressions separately, but when I have an expression like:
(COMM_MINFULLCOMTIMEOFCHANNEL == "STD_ON" || COMM_NMLIGHTDURATIONOFCHANNEL == "STD_ON" || COMM_NMLIGHTSILENTDURATIONOFCHANNEL == "STD_ON") => COMM_KEEP_AWAKE_CHANNELS_SUPPORT == "STD_ON"
it shows me an error because I haven't handled the "=>" implication sign. To
handle this I think I have to split these expressions, but I don't know how to
do that. Please help! Thanks in advance.
Answer: You can try using regexps. Here's an example that matches all of your blocks:
import re
"""
whatever code
"""
tokens=re.findall(r"([\w\s\=\"]{3,})",line)
"""
whatever code
"""
|
Eclipse Plugin + Jython - Unhandled event loop exception
Question: Just in case anybody else tries to use Jython inside a self-build eclipse-
plug-in. I suffered 2 days on the following error, which occured as soon as I
try to import my python scripts via `interpreter.exec("from myScript import
*\n");`:
!ENTRY org.eclipse.ui 4 0 2015-12-11 11:22:53.549
!MESSAGE Unhandled event loop exception
!STACK 0
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/eclipse/luna/../../../common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/src/scripts/iecommon.py", line 6, in <module>
from xml.dom import minidom
File "/common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/xml/dom/__init__.py", line 226, in <module>
File "/common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/xml/dom/MessageSource.py", line 19, in <module>
File "/common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/xml/FtCore.py", line 38, in <module>
File "/common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/xml/FtCore.py", line 38, in <module>
File "/common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/gettext.py", line 58, in <module>
File "/opt/eclipse/luna/../../../common/home/bel/schwinn/lnx/workspace-silecs/silecs-configuration/target/lib/jython-standalone-2.7.0.jar/Lib/posixpath.py", line 77, in join
AttributeError: 'NoneType' object has no attribute 'endswith'
Answer: You need to set the property `python.home` to some value. It apparently does
not even matter which value; it just must not be left empty. The property can
be set, for example, in the Java code:
String jythonJarPath = "target/lib/jython-standalone-2.7.0.jar";
String pythonLibPath = SilecsUtils.findInBundle(jythonJarPath);
Properties sysProps = System.getProperties();
sysProps.setProperty("python.path", pythonLibPath + "/Lib");
sysProps.setProperty("python.home", ".");
|
merging loop produces and giving a name
Question:
import numpy as np
A=([ 3.,1.], [1.,4.], [1.,0.], [2., 1.])
for i in A:
y=i*1
print y
This python loop produces four lists as shown:
[3.0, 1.0]
[1.0, 4.0]
[1.0, 0.0]
[2.0, 1.0]
But it should be as shown below; in other words, it should be a matrix. How can
I achieve that? My second question is: how can I give a name to this matrix?
For example A, B or x, something like this:
([[3.0, 1.0]
[1.0, 4.0]
[1.0, 0.0]
[2.0, 1.0]])
and
A=([[3.0, 1.0]
[1.0, 4.0]
[1.0, 0.0]
[2.0, 1.0]])
Answer: When you write this line, you simply have a tuple, which has 4 `list`
elements.
A = ([ 3.,1.], [1.,4.], [1.,0.], [2., 1.])
If you want to make a `numpy.matrix`, then you can use that to initialize `A`
import numpy as np
A = np.matrix([[ 3.,1.], [1.,4.], [1.,0.], [2., 1.]])
So `A` is now
>>> A
matrix([[ 3., 1.],
[ 1., 4.],
[ 1., 0.],
[ 2., 1.]])
|
How to start with linkedIn API?
Question: Being new to APIs, I am currently trying to use LinkedIn information for my
application. For that I'm using the [python-
linkedin](https://pypi.python.org/pypi/python-linkedin/4.0) interface for
accessing the LinkedIn API. I installed this module using the following command:
`pip install python-linkedin`.
I have written one sample python file to use this interface as:
from linkedin import linkedin
API_KEY = "api-key"
API_SECRET = "api-secret"
RETURN_URL = "http://localhost:8000"
authentication = linkedin.LinkedInAuthentication(API_KEY, API_SECRET, RETURN_URL, linkedin.PERMISSIONS.enums.values())
print authentication.authorization_url
application = linkedin.LinkedInApplication(authentication)
I got the following output-
>
> [https://www.linkedin.com/uas/oauth2/authorization?scope=r_basicprofile%20rw_nus%20r_network%20r_contactinfo%20w_messages%20rw_groups%20r_emailaddress%20r_fullprofile&state=8a9678902396c6e7f7e0027a486898d2&redirect_uri=http%3A//localhost%3A8000&response_type=code&client_id=api-
> key](https://www.linkedin.com/uas/oauth2/authorization?scope=r_basicprofile%20rw_nus%20r_network%20r_contactinfo%20w_messages%20rw_groups%20r_emailaddress%20r_fullprofile&state=8a9678902396c6e7f7e0027a486898d2&redirect_uri=http%3A//localhost%3A8000&response_type=code&client_id=api-
> key)
I want to integrate this with my application, and I need to fetch data from
LinkedIn and store it in a database. Does anyone know how to do that?
Answer: Your callback URL should be externally accessible; `localhost` is not a valid
URL here. Try using your external IP address instead.
|
python check output find command not working
Question: I am trying to find the path to a file named 'config.txt' from a usb flash
drive plugged into a raspberry pi. The physical drive that is used may not
always be the same, so the path may not always be the same. So I use
'find /media/pi/*/config.txt'
to locate the path in the terminal, and it works just fine. Now I go to use
check_output and get a giant string of paths.
from subprocess import check_output
cmd = ['find', '/media/pi/*/config.txt']
out = check_output(cmd,shell=True)
I set the shell as True to allow wild cards, according to
<https://docs.python.org/2/library/subprocess.html>
Results for out are:
'.\n./.Xauthority\n./.xsession-errors\n./Public\n./.dmrc\n./Downloads\n./test.sh\n./.idlerc\n./.idlerc/recent-files.lst\n./.idlerc/breakpoints.lst\n./.asoundrc\n./.bash_logout\n./.profile\n./Templates\n./Music\n./.bash_history\n./Videos\n./.local\n./.local/share\n./.local/share/gvfs-metadata\n./.local/share/gvfs-metadata/home\n./.local/share/gvfs-metadata/home-d6050e94.log\n./.local/share/applications\n./.local/share/recently-used.xbel\n./.local/share/Trash\n.....
And it continues to go on for awhile. I tried looking at several other similar
questions including the link below, but no luck.
[Store output of subprocess.Popen call in a
string](http://stackoverflow.com/questions/2502833/store-output-of-subprocess-
popen-call-in-a-string)
Answer: You would need to _pass a single string_ exactly as you would from your shell
if you want to use a wildcard:
from subprocess import check_output
cmd = 'find /media/pi/*/config.txt'
out = check_output(cmd,shell=True)
You don't actually need subprocess at all,
[glob](https://docs.python.org/2/library/glob.html) will do what you want:
from glob import glob
files = glob('/media/pi/*/config.txt')
|
Find a word in a string python using regex or other methods
Question: I am trying to go through an array of words and check if they exist in a
string. I understand there are many options for doing this such as using
re.search, but I need to distinguish between some words (i.e. Java vs Javascript).
An example:
import re
s = 'Some types (python, c++, java, javascript) are examples of programming.'
words = ['python', 'java', 'c++', 'javascript', 'programming']
for w in words:
p = re.search(w, s)
print(p)
>><_sre.SRE_Match object; span=(12, 18), match='python'>
>><_sre.SRE_Match object; span=(20, 24), match='java'>
>><_sre.SRE_Match object; span=(20, 30), match='javascript'>
>><_sre.SRE_Match object; span=(48, 59), match='programming'>
The above works to an extent but matches Java with Javascript.
EDIT: Here was my solution
for w in words:
regexPart1 = r"\s"
regexPart2 = r"(?:!+|,|\.|\·|;|:|\(|\)|\"|\?+)?\s"
p = re.compile(regexPart1 + re.escape(w) + regexPart2 , re.IGNORECASE)
result = p.search(s)
Answer: You want to add word boundary marks to your regular expressions, say
`r'\bjavascript\b'` in place of merely `'javascript'`. (Note also that `+`
should be escaped in `c++`.)
Also, iterating over the words to match lacks the potential efficiency of a
compiled regexp. It may be better to combine the regexps into one:
    w = r'\b(?:python|java|c\+\+|javascript|programming)\b'
    re.search(w, s)
One caveat: `\b` is defined in terms of word characters, so a trailing `\b`
will never match right after `c++` when punctuation or whitespace follows; a
lookahead such as `(?=\W|$)` is more robust for that token.
|
Saleor Django Apache mod_wsgi will not serve /static/admin/
Question: I am running the saleor django app under a virtualenv with apache as a non-
privileged user.
Actually getting everything going was fairly straightforward, but one bit is
confusing me.
The /static/admin/ portion of the site is not being served.
I have looked at the [deployment
docs](https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/#serving-
files) and [other advice](http://stackoverflow.com/questions/3271731/djangos-
admin-pages-are-missing-their-typical-formatting-style-have-i-set-it-u) but
have found nothing that helps, so far.
My /static/ directory is being served just fine. I'm sure it's something very
obvious to the non-django-noob, but that's what I am. I'm not certain that
it's copacetic to alias a sub-directory in another aliased directory. I'd
rather not resort to symlinks.
# dev-site.conf
WSGIPythonPath/home/admin/project/saleor:/home/admin/project/venv/lib/python2.7/site-packages
<VirtualHost *:80>
ServerName example.com
ServerAdmin [email protected]
DocumentRoot "/home/admin/project"
WSGIDaemonProcess example.com python-path=/home/admin/project/saleor:/home/admin/project/venv/lib/python2.7/site-packages
WSGIProcessGroup example.com
WSGIScriptAlias / /home/admin/project/saleor/wsgi.py
Alias /media/ /home/admin/project/media/
<Directory /home/admin/project/media/>
Require all granted
</Directory>
Alias /favicon.ico /home/admin/project/saleor/static/images/favicon.ico
Alias /robots.txt /home/admin/project/saleor/static/robots.txt
Alias /static/ /home/admin/project/saleor/static/
<Directory /home/admin/project/saleor/static/>
Require all granted
</Directory>
Alias /static/admin/ /home/admin/project/venv/lib/python2.7/site-packages/django/contrib/admin/static/admin/
<Directory /home/admin/project/venv/lib/python2.7/site-packages/django/contrib/admin/static/admin/>
Require all granted
</Directory>
<Directory /home/admin/project>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
I include my saleor/wsgi.py for reference.
# wsgi.py
import os
import sys
import site
site.addsitedir("/home/admin/project/venv/lib/python2.7/site-packages")
os.environ.setdefault("SECRET_KEY", "br549")
os.environ.setdefault("MYSQL_DB_URL", "mysql://dbuser:[email protected]:3306/saleor")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "saleor.settings")
project = "/home/admin/project/saleor"
workspace = os.path.dirname(project)
sys.path.append(workspace)
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Answer: You shouldn't attempt to get Apache to serve directly from the static
directory of either your own apps or the admin app. Instead, follow the
documentation and use the `collectstatic` command to collect the static files
for all apps in one place and serve from there.
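A minimal sketch of that approach (the collection path is an assumption to
adapt):

    # settings.py
    STATIC_URL = '/static/'
    STATIC_ROOT = '/home/admin/project/static_collected/'

After running `python manage.py collectstatic`, all the static aliases in the
Apache config can be replaced with a single
`Alias /static/ /home/admin/project/static_collected/` plus the matching
`<Directory>` block.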
|
How to trigger something on the close of a loop?
Question: Is this possible? I want to print lines in my file 5 at a time (to send to an
API in a batch). But the last few lines never print because there are fewer
than 5, so my if statement never triggers. So I figured one way to tackle this
is to print the remaining lines when the loop closes.
The current code is messy and redundant but this is the idea:
urls = []
urls_csv = ""
counter = 0
with open(urls_file) as f:
for line in f:
# Keep track of the lines we've went through
counter = counter + 1
# If we have 5 urls in our list it's time to send them to the API call
if counter > 5:
counter = 0
urls_csv = ",".join(urls) # turn the python list into a string csv list
do_api(urls_csv) # put them to work
urls = [] # reset the value so we don't send the same urls next time
urls_csv = "" # reset the value so we don't send the same urls next time
# Else append to the url list
else:
            urls.append(line.strip())
Also - Generally speaking, is there a better way to tackle this?
Answer: You can group them into sets of 5 lines at a time with the [`itertools`
grouper recipe](https://docs.python.org/3/library/itertools.html#itertools-
recipes).
import itertools
def grouper(iterable, n, fillvalue=None):
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
with open(...) as f:
for group in grouper(f, 5, fillvalue=""):
do_api(",".join([g.strip() for g in group if g]))
|
Plotting a function in Python 2.7
Question: I am trying to plot f in this program but I am screwing something up. Can
someone have a look and inform me as to where I am messing up. Thanks.
import math
#x is the horizontal distance that the ball has traveled
g=9.81
v=raw_input('Enter an initial velocity:')
theta=raw_input('Enter the angle that the object was thrown at:')
y=raw_input('Enter the initial position of the object on the y-axis:')
t=(2*v*math.sin(theta))/g
x=(0.5)*((v*math.sin(theta))+v)*t
float(v)
float(theta)
float(y)
float(t)
f=x*math.tan(theta)-(1/(2*(v**2)))*((g(x**2))/(math.cos(theta)**2))+y
figure(1)
clf()
plot(f)
xlabel('x')
ylabel('y')
show()
Answer: So first of all, I would import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
Then you have to convert your string input into floats; for that you can use
eval (though plain `float()` is the safer choice):
initial_velo = eval(raw_input("Whatever you like: "))
...
Then for plotting with matplotlib you actually have to create a list of values
(just as when you collect real data and then type it into the computer and
then plot the single data points). For that I like to use linspace from the
numpy import:
time_steps = np.linspace(0, t, steps)
# steps gives the numbers of intervals your time from 0 to t is splitted into
Now you create your functions x and f as functions of t. They will also have
to be of type list. And in the end you can plot what you want via:
plt.figure(1)
plt.plot(time_steps, f)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
But maybe you should also watch how to plot stuff in the matplotlib doc. Also
numpy has a great doc.
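Putting those pieces together, a minimal sketch (assuming the standard
projectile formula y(x) = y0 + x*tan(theta) - g*x**2 / (2*v**2*cos(theta)**2),
with theta entered in radians):

    import numpy as np
    import matplotlib.pyplot as plt

    g = 9.81
    v = float(raw_input('Enter an initial velocity: '))
    theta = float(raw_input('Enter the angle in radians: '))
    y0 = float(raw_input('Enter the initial height: '))

    # Sample the horizontal range at 200 points
    x = np.linspace(0, v**2 * np.sin(2 * theta) / g, 200)
    y = y0 + x * np.tan(theta) - g * x**2 / (2 * v**2 * np.cos(theta)**2)

    plt.figure(1)
    plt.plot(x, y)
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()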
|
"No module named 'osmium._osmium'" error when trying to use PyOsmium
Question: I am trying to use [PyOsmium](https://github.com/osmcode/pyosmium) but it will
not import. `python3 setup.py install` appears to complete just fine but when
I `import osmium` I get the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dylan/Downloads/osmium/osmium/__init__.py", line 1, in <module>
from ._osmium import *
ImportError: No module named 'osmium._osmium'
I have no idea what's causing this and it's my first time manually installing
a C++ wrapper. I have the top-level PyOsmium and libosmium directories in the
same directory. Is it trying to import the C++ files?
Any help would be much appreciated.
Answer: I had the [same problem](https://github.com/osmcode/pyosmium/issues/9). The
solution, [as provided by one of the
maintainers](https://github.com/osmcode/pyosmium/issues/9#issuecomment-169442027),
is very easy:
> Are you in the pyosmium root directory while trying the import? Change the
> directory to somewhere else and try again. In the root directory the local
> osmium source directory takes precedence over your freshly installed
> version.
Change to a different directory from the one you compiled in and it should
work; it did for me.
|
SymPy symbolic integration returns error
Question: I am trying to use Sympy's symbolic integration to find a closed form for a
definite integral. In particular, I run
from sympy import *
x, s, H = symbols('x s H', real=True, nonnegative=True)
integrate(1/((1-s)**(1/2-H)*(x-s)**(1/2-H)),(s,0,1))
Unfortunately, with Python 2.7.11 my Jupyter runs and runs and runs. Maybe it
helps to strengthen the assumptions by adding
0<H<1/2 and x>1
but I didn't find out how to do it.
_Remark_ I have also used Mathematica's symbolic integration capabilities to
do it and it has come up with a Gauss hypergeometric function. Unfortunately,
evaluating that function returns a complex number which doesn't really make
sense in evaluating a real integral. Hence my hope that SymPy might help.
Answer: First of all, 1/2 = 0 in Python 2.x; you should put `from __future__ import
division` at the beginning or use Python 3 instead.
The key unused assumption is that x>1. A quick way to implement it is to
introduce a temporary variable y = x-1, assume it to be positive, evaluate and
substitute back.
from __future__ import division
from sympy import *
s, x = symbols('s x')
    y, H = symbols('y H', positive=True)
    f = 1/((1-s)**(1/2-H)*(x-s)**(1/2-H))  # function to be integrated
    integrate(f.subs(x,y+1), (s,0,1)).subs(y,x-1)  # sub, integrate, sub back
If I didn't import division, this would return within several seconds with a
reasonable-looking result:
(-1)**H*(x - 1)**H*exp(I*pi*H)*gamma(H + 1)*hyper((-H, H + 1), (H + 2,), -exp_polar(2*I*pi)/(x - 1))/gamma(H + 2)
which is kind of an improvement on what you had -- though of course incorrect
due to 1/2=0. Unfortunately, with the correct fraction in place integration
fails to finish in reasonable time.
I doubt that you will get a better result from sympy than from Mathematica.
The fact that the Mathematica result has complex numbers is not unusual for
difficult integrals. It means one has to carefully simplify the output, using
the correct branches of the complex functions. It's possible Mathematica can
do some of this simplification on its own: I suggest proceeding in this
direction, perhaps with the help of
[Mathematica.SE](http://mathematica.stackexchange.com/) site.
|
python - How to get high and low envelope of a signal?
Question: I have quite noisy data, and I am trying to work out a high and low envelope
to the signal. It is kind of like this example in MATLAB:
<http://uk.mathworks.com/help/signal/examples/signal-smoothing.html>
in "Extracting Peak Envelope". Is there a similar function in Python that can
do that? My entire project has been written in Python; worst case scenario I
can extract my numpy array and throw it into MATLAB and use that example. But
I prefer the look of matplotlib... and really can't be bothered doing all of
that I/O between MATLAB and Python...
Thanks,
Answer: > Is there a similar function in Python that can do that?
As far as I am aware there is no such function in Numpy / Scipy / Python.
However, it is not that difficult to create one. The general idea is as
follows:
Given a vector of values (s):
1. Find the location of peaks of (s). Let's call them (u)
2. Find the location of troughs of s. Let's call them (l).
3. Fit a model to the (u) value pairs. Let's call it (u_p)
4. Fit a model to the (l) value pairs. Let's call it (l_p)
5. Evaluate (u_p) over the domain of (s) to get the interpolated values of the upper envelope. (Let's call them (q_u))
6. Evaluate (l_p) over the domain of (s) to get the interpolated values of the lower envelope. (Let's call them (q_l)).
As you can see, it is the sequence of three steps (Find location, fit model,
evaluate model) but applied twice, once for the upper part of the envelope and
one for the lower.
To collect the "peaks" of (s) you need to locate points where the slope of (s)
changes from positive to negative and to collect the "troughs" of (s) you need
to locate the points where the slope of (s) changes from negative to positive.
A peak example: s = [4,5,4] (5-4 is positive, 4-5 is negative).
A trough example: s = [5,4,5] (4-5 is negative, 5-4 is positive).
Here is an example script to get you started with plenty of inline comments:
from numpy import array, sign, zeros
from scipy.interpolate import interp1d
from matplotlib.pyplot import plot,show,hold,grid
s = array([1,4,3,5,3,2,4,3,4,5,4,3,2,5,6,7,8,7,8]) #This is your noisy vector of values.
q_u = zeros(s.shape)
q_l = zeros(s.shape)
#Prepend the first value of (s) to the interpolating values. This forces the model to use the same starting point for both the upper and lower envelope models.
u_x = [0,]
u_y = [s[0],]
l_x = [0,]
l_y = [s[0],]
#Detect peaks and troughs and mark their location in u_x,u_y,l_x,l_y respectively.
for k in xrange(1,len(s)-1):
if (sign(s[k]-s[k-1])==1) and (sign(s[k]-s[k+1])==1):
u_x.append(k)
u_y.append(s[k])
if (sign(s[k]-s[k-1])==-1) and ((sign(s[k]-s[k+1]))==-1):
l_x.append(k)
l_y.append(s[k])
#Append the last value of (s) to the interpolating values. This forces the model to use the same ending point for both the upper and lower envelope models.
u_x.append(len(s)-1)
u_y.append(s[-1])
l_x.append(len(s)-1)
l_y.append(s[-1])
#Fit suitable models to the data. Here I am using cubic splines, similarly to the MATLAB example given in the question.
u_p = interp1d(u_x,u_y, kind = 'cubic',bounds_error = False, fill_value=0.0)
l_p = interp1d(l_x,l_y,kind = 'cubic',bounds_error = False, fill_value=0.0)
#Evaluate each model over the domain of (s)
for k in xrange(0,len(s)):
q_u[k] = u_p(k)
q_l[k] = l_p(k)
#Plot everything
plot(s);hold(True);plot(q_u,'r');plot(q_l,'g');grid(True);show()
This produces this output:
[](http://i.stack.imgur.com/DYsVk.png)
Points for further improvement:
1. The above code does not _filter_ peaks or troughs that may be occurring closer than some threshold "distance" (Tl) (e.g. time). This is similar to the second parameter of `envelope`. It is easy to add though, by examining the differences between consecutive values of `u_x,u_y`.
2. However, a quick improvement over the point mentioned previously is to lowpass filter your data with a moving average filter **BEFORE** interpolating the upper and lower envelope functions. You can do this easily by convolving your (s) with a suitable moving average filter (see the sketch after this list). Without going into great detail here (can do if required), to produce a moving average filter that operates over N consecutive samples, you would do something like this: `s_filtered = numpy.convolve(s, numpy.ones(N)/float(N))`. The higher the (N) the smoother your data will appear. Please note however that this will shift your (s) values (N/2) samples to the right (in `s_filtered`) due to something that is called [group delay](https://en.wikipedia.org/wiki/Group_delay_and_phase_delay) of the smoothing filter. For more information about the moving average, please see [this link](https://en.wikipedia.org/wiki/Moving_average).
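As a quick illustration of point 2, here is a minimal smoothing sketch, assuming a
window length (N) of your choosing:
    import numpy as np
    N = 5  # assumed window length; tune to your data
    # Moving average via convolution. mode='same' keeps the output the same
    # length as (s) and roughly aligned, at the cost of partially-averaged edges.
    s_filtered = np.convolve(s, np.ones(N) / float(N), mode='same')
You would then run the peak/trough detection and interpolation on `s_filtered`
instead of (s).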
Hope this helps.
(Happy to amend the response if more information about the original
application is provided. Perhaps the data can be pre-processed in a more
suitable way?)
|
XMPPPY unable to connect to the server
Question: I am using the xmpppy Python library to connect to an XMPP server (ejabberd2),
but I am unable to connect, and I don't have clarity on how to connect,
authenticate and send a message to the server.
Please help me to make it work.
If possible please provide some code snippet using XMPPPY.
[](http://i.stack.imgur.com/0zTZM.png)
Answer: I figured out the solution with the help of a friend.
It requires a change in the XMPP ejabberd server config: replace the line
**{hosts, ["localhost"]}** with **{hosts, ["localhost", "server-domain",
"server-ip-address"]}** in the ejabberd.cfg file.
Restart the server and create another user under the new hosts with the server
domain or server IP.
**Code snippet:**
import xmpp
ipaddress='<server-ip>'
user='<new-user>' #without @<server-ip>
passwd='<password>'
c = xmpp.Client(ipaddress)
c.connect((ipaddress,5222), secure=0)
c.auth(user,passwd,sasl=1)
c.sendInitPresence()
c.isConnected()
c.send(xmpp.protocol.Message('<jid of receiver user with @<domain> >',"hello world"))
|
JSON response in Google App Engine with Python
Question: I want to build a REST service with Python and Google App Engine and have the
following code:
Edited Code:
import webapp2
from google.appengine.ext import db
from google.appengine.api import users
import json
class Item(db.Model):
author = db.UserProperty(required=False)
summary = db.StringProperty(required=True)
description = db.StringProperty(multiline=True)
url = db.StringProperty()
created = db.DateTimeProperty(auto_now_add=True)
updated = db.DateTimeProperty(auto_now=True)
dueDate = db.StringProperty(required=True)
finished = db.BooleanProperty()
class GetAllItems(webapp2.RequestHandler):
def get(self):
item = Item(summary="Summary", dueDate="Date")
item.put()
allItems = Item.all()
data = []
for entry in allItems:
data.append(db.to_dict(entry))
self.response.out.write(json.dumps(entry))
app = webapp2.WSGIApplication(
[
('/api/items', GetAllItems)
],
debug=True)
How can I convert all items of this model into JSON and send it back as JSON?
I always get this error:
TypeError: <main.Item object at 0x0538B590> is not JSON serializable
Answer: I use now NDB instead of DB and the following code solved all my problems:
import decimal
import webapp2
from google.appengine.ext import ndb
import json
from google.appengine.api import users
class Item(ndb.Model):
author = ndb.UserProperty(required=False)
summary = ndb.StringProperty(required=True)
description = ndb.StringProperty()
url = ndb.StringProperty()
created = ndb.DateTimeProperty(auto_now_add=True)
updated = ndb.DateTimeProperty(auto_now=True)
dueDate = ndb.StringProperty(required=True)
finished = ndb.BooleanProperty()
class DateTimeEncoder(json.JSONEncoder):
def default(self, obj):
if hasattr(obj, 'isoformat'):
return obj.isoformat()
elif isinstance(obj, decimal.Decimal):
return float(obj)
else:
return json.JSONEncoder.default(self, obj)
class GetAllItems(webapp2.RequestHandler):
def get(self):
item = Item(summary="Summary", dueDate="Date")
item.put()
text = json.dumps([i.to_dict() for i in Item.query().fetch()], cls=DateTimeEncoder)
self.response.out.write(text)
app = webapp2.WSGIApplication(
[
('/api/items', GetAllItems)
],
debug=True)
|
Optimize python for Connected Component Labeling Area of Subsets
Question: I have a binary map on which I do Connected Component Labeling and get
something like this for a 64x64 grid - <http://pastebin.com/bauas0NJ>
Now I want to group them by label, so that I can find their area and their
center of mass. This is what I do:
#ccl_np is the computed array from the previous step (see pastebin)
#I discard the label '1' as its the background
unique, count = np.unique(ccl_np, return_counts = True)
xcm_array = []
ycm_array = []
for i in range(1,len(unique)):
subarray = np.where(ccl_np == unique[i])
xcm_array.append("{0:.5f}".format((sum(subarray[0]))/(count[i]*1.)))
ycm_array.append("{0:.5f}".format((sum(subarray[1]))/(count[i]*1.)))
final_array = zip(xcm_array,ycm_array,count[1:])
I want a fast code (as I will be doing this for grids of size 4096x4096) and
was told to check out numba. Here's my naive attempt :
unique, inverse, count = np.unique(ccl_np, return_counts = True, return_inverse = True)
xcm_array = np.zeros(len(count),dtype=np.float32)
ycm_array = np.zeros(len(count),dtype=np.float32)
inverse = inverse.reshape(64,64)
@numba.autojit
def mysolver(xcm_array, ycm_array, inverse, count):
for i in range(64):
for j in range(64):
pos = inverse[i][j]
local_count = count[pos]
xcm_array[pos] += i/(local_count*1.)
ycm_array[pos] += j/(local_count*1.)
mysolver(xcm_array, ycm_array, inverse, count)
final_array = zip(xcm_array,ycm_array,count)
To my surprise, using numba was slower or at best equal to the speed of the
previous way. What am I doing wrong ? Also, can this be done in Cython and
will that be faster ?
I am using the included packages in the latest Anaconda python 2.7
distribution.
Answer: I believe the issue might be that you are timing jit'd code incorrectly. The
first time you run the code, your timing includes the time it takes numba to
compile the code. This is called warming up the jit. If you call it again,
that cost is gone.
import numpy as np
import numba as nb
unique, inverse, count = np.unique(ccl_np, return_counts = True, return_inverse = True)
xcm_array = np.zeros(len(count),dtype=np.float32)
ycm_array = np.zeros(len(count),dtype=np.float32)
inverse = inverse.reshape(64,64)
def mysolver(xcm_array, ycm_array, inverse, count):
for i in range(64):
for j in range(64):
pos = inverse[i][j]
local_count = count[pos]
xcm_array[pos] += i/(local_count*1.)
ycm_array[pos] += j/(local_count*1.)
@nb.jit(nopython=True)
def mysolver_nb(xcm_array, ycm_array, inverse, count):
for i in range(64):
for j in range(64):
pos = inverse[i,j]
local_count = count[pos]
xcm_array[pos] += i/(local_count*1.)
ycm_array[pos] += j/(local_count*1.)
Then the timings with `timeit` which runs the code multiple times. First the
plain python version:
In [4]:%timeit mysolver(xcm_array, ycm_array, inverse, count)
10 loops, best of 3: 25.8 ms per loop
and then with numba:
In [5]: %timeit mysolver_nb(xcm_array, ycm_array, inverse, count)
The slowest run took 3630.44 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 33.1 µs per loop
The numba code is ~1000 times faster.
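If you time manually rather than with `%timeit`, call the function once before
starting the clock so that compilation happens outside the measured region. A
minimal sketch:
    import time
    # warm-up call: triggers JIT compilation, excluded from the measurement
    mysolver_nb(xcm_array, ycm_array, inverse, count)
    start = time.time()
    mysolver_nb(xcm_array, ycm_array, inverse, count)
    print("elapsed: %.6f s" % (time.time() - start))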
|
How can I turn my python script off from the terminal?
Question: I have just made a script that I want to turn off from the terminal, but
instead of just ending it I want it to pickle a file. Is there a correct way
to do this?
Answer: Have a look at how [signals](https://docs.python.org/2/library/signal.html)
works.
You can basically make your script wait for a signal on QUIT:
#!/usr/bin/python
from signal import *
from time import sleep
def process_quit(signum, frame):
# Do your pickle thing here
print('Exiting...')
signal(SIGQUIT, process_quit)
while 1:
print('Working hard')
sleep(0.5)
You can also register an handler with `atexit`:
import atexit
def exit_handler():
print 'My application is ending!'
atexit.register(exit_handler)
|
Using NLTK's universal tagset with non-English corpora
Question: I'm using NLTK (3.0.4-1) in Python 3.4.3+ and I'd like to process some of the
tagged corpora using the universal tagset (which I had to install), [as
explained in NLTK book, chapter 5](http://www.nltk.org/book/ch05.html).
I can access any of these corpora with their original PoS tagset, e.g.:
from nltk.corpus import brown, cess_esp, floresta
print(brown.tagged_sents()[0])
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ('Grand', 'JJ-TL'), ('Jury', 'NN-TL'), ('said', 'VBD'), ('Friday', 'NR'), ('an', 'AT'), ('investigation', 'NN'), ('of', 'IN'), ("Atlanta's", 'NP$'), ('recent', 'JJ'), ('primary', 'NN'), ('election', 'NN'), ('produced', 'VBD'), ('``', '``'), ('no', 'AT'), ('evidence', 'NN'), ("''", "''"), ('that', 'CS'), ('any', 'DTI'), ('irregularities', 'NNS'), ('took', 'VBD'), ('place', 'NN'), ('.', '.')]
print(cess_esp.tagged_sents()[0])
[('El', 'da0ms0'), ('grupo', 'ncms000'), ('estatal', 'aq0cs0'), ('Electricité_de_France', 'np00000'), ('-Fpa-', 'Fpa'), ('EDF', 'np00000'), ('-Fpt-', 'Fpt'), ('anunció', 'vmis3s0'), ('hoy', 'rg'), (',', 'Fc'), ('jueves', 'W'), (',', 'Fc'), ('la', 'da0fs0'), ('compra', 'ncfs000'), ('del', 'spcms'), ('51_por_ciento', 'Zp'), ('de', 'sps00'), ('la', 'da0fs0'), ('empresa', 'ncfs000'), ('mexicana', 'aq0fs0'), ('Electricidad_Águila_de_Altamira', 'np00000'), ('-Fpa-', 'Fpa'), ('EAA', 'np00000'), ('-Fpt-', 'Fpt'), (',', 'Fc'), ('creada', 'aq0fsp'), ('por', 'sps00'), ('el', 'da0ms0'), ('japonés', 'aq0ms0'), ('Mitsubishi_Corporation', 'np00000'), ('para', 'sps00'), ('poner_en_marcha', 'vmn0000'), ('una', 'di0fs0'), ('central', 'ncfs000'), ('de', 'sps00'), ('gas', 'ncms000'), ('de', 'sps00'), ('495', 'Z'), ('megavatios', 'ncmp000'), ('.', 'Fp')]
print(floresta.tagged_sents()[0])
[('Um', '>N+art'), ('revivalismo', 'H+n'), ('refrescante', 'N<+adj')]
So far so good, but when I use the option `tagset='universal'` to access the
simplified version of the PoS tags, it works only for the Brown corpus.
print(brown.tagged_sents(tagset='universal')[0])
[('The', 'DET'), ('Fulton', 'NOUN'), ('County', 'NOUN'), ('Grand', 'ADJ'), ('Jury', 'NOUN'), ('said', 'VERB'), ('Friday', 'NOUN'), ('an', 'DET'), ('investigation', 'NOUN'), ('of', 'ADP'), ("Atlanta's", 'NOUN'), ('recent', 'ADJ'), ('primary', 'NOUN'), ('election', 'NOUN'), ('produced', 'VERB'), ('``', '.'), ('no', 'DET'), ('evidence', 'NOUN'), ("''", '.'), ('that', 'ADP'), ('any', 'DET'), ('irregularities', 'NOUN'), ('took', 'VERB'), ('place', 'NOUN'), ('.', '.')]
When accessing the corpora in Spanish and Portuguese I get a long chain of
errors and a `LookupError` exception.
print(cess_esp.tagged_sents(tagset='universal')[0])
---------------------------------------------------------------------------
LookupError Traceback (most recent call last)
<ipython-input-6-4e2e43e54e2d> in <module>()
----> 1 print(cess_esp.tagged_sents(tagset='universal')[0])
[...]
LookupError:
**********************************************************************
Resource 'taggers/universal_tagset/unknown.map' not found.
Please use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/home/victor/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
**********************************************************************
Among the mappings located in my `taggers/universal_tagset` directory, I can
find the mappings for Spanish (`es-cast3lb.map`) and Portuguese (`pt-
bosque.map`), but I don't have any `unknown.map` file. Any ideas how to solve
it?
Thanks in advance :-)
Answer: That's an interesting question. The NLTK implements mapping to the Universal
tagset only for a fixed collection of corpora, with the help of the fixed maps
you found in `nltk_data/taggers/universal_tagset/`. Except for a few special
cases (which include treating the brown corpus as if it was named `en-brown`),
the rule is to look for a mapping file that has the same name as the tagset
used for your corpus. In your case, the tagset is set to "unknown", which is
why you see that message.
Now, are you sure "the mapping for Spanish", i.e. the map `es-cast3lb.map`,
actually matches the tagset for your corpus? I certainly wouldn't just assume
it does, since any project can create their own tagset and rules for use. **If
this is the same tagset your corpus uses,** your problem has an easy solution:
* When you initialize your corpus reader, e.g. `cess_esp`, add the option `tagset="es-cast3lb"` to the constructor. If necessary, e.g. for corpora already loaded by the NLTK with `tagset="unknown"`, you can override the tagset after initialization like this:
cess_esp._tagset = "es-cast3lb"
This tells the corpus reader what tagset is used in the corpus. After that,
specifying `tagset="universal"` should cause the selected mapping to be
applied.
If **this tagset is not actually suited** to your corpus, your first job is to
study the documentation of the tagset for your corpus, and create an
appropriate mapping to the Universal tagset; as you've probably seen, the
format is pretty trivial. You can then put your mapping in operation by
dropping it in `nltk_data/taggers/universal_tagset`. Adding your own resources
to the `nltk_data` area is decidedly a hack, but if you get this far, I
recommend you contribute your tagset map to the nltk... which will resolve the
hack after the fact.
**Edit:** So (per the comments) it's the right tagset, but only the 1-2 letter
POS tags are in the mapping dictionary (the rest of the tag presumably
describes the features of inflected words). Here's a quick way to extend the
mapping dictionary on the fly, so that you can see the universal tags:
import nltk
from nltk.corpus import cess_esp
cess_esp._tagset = "es-cast3lb"
nltk.tag.mapping._load_universal_map("es-cast3lb") # initialize; normally loaded on demand
mapdict = nltk.tag.mapping._MAPPINGS["es-cast3lb"]["universal"] # shortcut to the map
alltags = set(t for w, t in cess_esp.tagged_words())
for tag in alltags:
if len(tag) <= 2: # These are complete
continue
mapdict[tag] = mapdict[tag[:2]]
This discards the agreement information. If you'd rather decorate the
"universal" tags with it, just set `mapdict[tag]` to
`mapdict[tag[:2]]+"-"+tag[2:]`.
I'd save this dictionary to a file as described above, so that you don't have
to recompute the mapping every time you load your corpus.
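A hedged sketch of that save step, assuming the tab-separated
`finetag<TAB>universaltag` layout of the existing `.map` files (note that this
overwrites the shipped map with the extended one):
    import os
    # hypothetical path inside the user's nltk_data area
    out_path = os.path.expanduser(
        "~/nltk_data/taggers/universal_tagset/es-cast3lb.map")
    with open(out_path, "w") as f:
        for fine_tag, universal_tag in sorted(mapdict.items()):
            f.write("%s\t%s\n" % (fine_tag, universal_tag))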
|
Python: remove strings with same words
Question: I have a list of strings like this:
string_list = ['this is a string', 'and this', 'also this', 'is this a string']
What I want to do is remove strings that have the same words in them, which in
this case would be 'this is a string' and 'is this a string', so that there
only remains one (not important which one of the two). In the end the string
should look like this:
string_list = ['this is a string', 'and this', 'also this']
So basically, remove strings that contain the same words as another string.
Answer: Sort each string's words after splitting, and keep a set of all those sorted
word tuples; if we have seen the sorted tuple before, remove the current
element from the list:
string_list = ['this is a string', 'and this', 'also this',
'this and this is a string', 'is this a string']
seen = set()
for ele in reversed(string_list):
tp = tuple(sorted(ele.split()))
if tp in seen:
string_list.remove(ele)
seen.add(tp)
print(string_list)
['this is a string', 'and this', 'also this', 'this and this is a string']
|
Raspberry Pi background program runs but doesn't work
Question: SOLVED: For some reason making CRON run a bash script that runs the python
script solved the problem.
I have a python script "temperature.py" which is an infinite loop that checks
values from a GPIO pin, adds the values to a file which it uploads to google
drive with "gdrive" and sometimes sends a mail using smtp. The script works
perfectly if i run it from the SSH terminal ($ sudo python temperature.py) but
it doesn't work at startup like i would want it to. I'm using raspbian wheezy.
What I've done:
in /etc/rc.local:
#...
#...
sleep 10
python /home/pi/temperature.py&
exit 0
The Pi boots normally, and after I log in using SSH and write:
...$ps aux
i get:
...
root 2357 1.4 1.9 10556 8836 ? S 21:11 0:12 python /home/pi/temperature.py
...
so I'm guessing it is running. It uses 1.4% CPU, which is very little, but
almost all other processes use 0.0%. The program doesn't seem to do anything,
however... my Google Drive is empty...
**So it works if i run it from terminal as background but not if i run it from
rc.local...**
What I'm guessing:
1. it lacks some permission?
2. it must be something with rc.local... since it works perfectly from terminal.
The result of `ls -l`:
    ...$ ls -l temperature.py
    -rwxr-xr-x 1 root root 1927 Dec 12 21:10 temperature.py
    ...$ ls -l /etc/rc.local
    -rwxr-xr-x 1 root root 373 Dec 12 20:54 /etc/rc.local
I have tried starting it using cron (`$ sudo crontab -e`) but it didn't work
either.
Any ideas? I feel like I'm missing something obvious, but since I'm very new to
Raspberry Pi and Linux stuff I can't find it on Google.
The script temperature.py
#Made by Matthew Kirk
# Licensed under MIT License, see
# http://www.cl.cam.ac.uk/freshers/raspberrypi/tutorials/temperature/LICENSE
#Adapted by me
import RPi.GPIO as GPIO
import time
import subprocess
import os
import commands
import sys
import smtplib
from email.mime.text import MIMEText
print 'TEMPERATURE LOGGER - M'
print ' '
#MAILER SETUP
to = '****@gmail.com'
gmail_user = '****@gmail.com'
gmail_password = '*****'
smtpserver = smtplib.SMTP('smtp.gmail.com',587)
#TEMP LOGGER GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(7,GPIO.IN)
while True:
print 'fail'
if GPIO.input(7):
break
while GPIO.input(7):
pass
waitTime = 60
tempTreshold = 50
logFile = "/home/pi/tDat.csv"
while True:
dataFile = open(logFile,"a")
time_1 = time.time()
tFile = open("/sys/bus/w1/devices/28-011582ac5dff/w1_slave")
text = tFile.read();
tFile.close();
tData = text.split("\n")[1].split(" ")[9]
temp = float(tData[2:])
temp = temp/1000
timeStamp = time.strftime("%d/%m/%Y %H:%M:%S")
dataFile.write(str(temp)+","+ timeStamp+ "\n")
dataFile.close()
file_ID = commands.getoutput('drive list | tail -n +2 | head -1 | awk \'{print $1;}\' ')
cmd = 'drive delete --id '+file_ID
os.system( cmd )
cmd = 'drive upload --file '+logFile
os.system( cmd )
# MAIL IF TEMP TOO LOW
if temp < tempTreshold:
smtpserver.ehlo()
smtpserver.starttls()
smtpserver.ehlo()
smtpserver.login(gmail_user,gmail_password)
msg = MIMEText('The temperature in Branten, {}C, is below {} degrees C!!!'.format(temp,tempTreshold)+'\n'+'Recorded$
msg['Subject'] = 'Branten Temperature Warning'
msg['From'] = gmail_user
msg['To'] = to
smtpserver.sendmail(gmail_user,[to],msg.as_string())
smtpserver.quit()
sys.exit()
and the CRON:
* * * * * python /home/pi/temperature.py
Answer: Consider revising your code to not use an infinite loop.
Read about Linux CRON jobs. CRON is a service that will execute your program
or script on a schedule (properly). EDIT: it is installed by default on most
Linux distros, including Raspbian.
[Some good examples](http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/)
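As a hedged sketch of that suggestion, the hypothetical outline below takes one
measurement per invocation and lets cron provide the loop (the helper name
`read_temperature` is a placeholder, not from the original script):
    # single_pass_temperature.py -- cron runs this on a schedule, e.g.
    # */2 * * * * /usr/bin/python /home/pi/single_pass_temperature.py
    import time
    def read_temperature():
        # same w1_slave read as the original script
        with open("/sys/bus/w1/devices/28-011582ac5dff/w1_slave") as tFile:
            text = tFile.read()
        tData = text.split("\n")[1].split(" ")[9]
        return float(tData[2:]) / 1000
    def main():
        temp = read_temperature()
        with open("/home/pi/tDat.csv", "a") as dataFile:
            dataFile.write(str(temp) + "," + time.strftime("%d/%m/%Y %H:%M:%S") + "\n")
        # the drive upload and mail steps from the original script would go
        # here; the process then simply exits instead of looping forever
    if __name__ == "__main__":
        main()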
|
Using while loop in Pycharm and Kivy
Question: How can I use a while loop in this code to read serial every 2 seconds and show
it in a Label? The application hangs when run, and I'm too new to Python to
solve this myself.
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from time import sleep
import serial
class LoginScreen(GridLayout):
def __init__(self, **kwargs):
super(LoginScreen, self).__init__(**kwargs)
self.cols = 2
self.rows = 2
ser = serial.Serial('COM3', 9600, timeout=0)
while 1:
sleep(2)
ser.read()
data = ser.read()
self.add_widget(Label(text=str(data)))
class MyApp(App):
def build(self):
return LoginScreen()
if __name__ == '__main__':
MyApp().run()
Answer: You can't run a 'while True' loop like that - that's what Kivy itself is doing
internally, every iteration it checks input, updates the gui etc. By doing it
yourself you stop Kivy's loop from ever advancing. This isn't just a kivy
thing either, it's also how other gui frameworks work, though not all run the
gui stuff in the main thread.
The sleep also does the same thing - any time you sleep, it does exactly that,
and the gui will freeze until it finishes.
The solution is to hook into Kivy's event system and use its internal while
loop. The simplest way is probably to add a new method to your LoginScreen, as
below.
in `__init__`:
self.ser = serial.Serial('COM3', 9600, timeout=0)
and a new method:
def update(self, dt):
self.ser.read() # Not sure if you're deliberately or accidentally reading twice
data = self.ser.read()
self.add_widget(Label(text=str(data)))
...and then
from kivy.clock import Clock
Clock.schedule_interval(self.update, 2)
The update method will then be called every 2 seconds.
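Putting the pieces together, a minimal sketch of the revised code might look
like this (keeping the original COM3/9600 settings as assumptions):
    from kivy.app import App
    from kivy.uix.gridlayout import GridLayout
    from kivy.uix.label import Label
    from kivy.clock import Clock
    import serial
    class LoginScreen(GridLayout):
        def __init__(self, **kwargs):
            super(LoginScreen, self).__init__(**kwargs)
            self.cols = 2
            self.rows = 2
            self.ser = serial.Serial('COM3', 9600, timeout=0)
            # let Kivy's event loop call update() every 2 seconds
            Clock.schedule_interval(self.update, 2)
        def update(self, dt):
            data = self.ser.read()
            self.add_widget(Label(text=str(data)))
    class MyApp(App):
        def build(self):
            return LoginScreen()
    if __name__ == '__main__':
        MyApp().run()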
|
How to parse strings into time ranges in Python?
Question:
time_ranges = ['Sunday-Thursday: 5:00 pm - 8:00 pm', 'Friday - Saturday: 1:00 pm - 2:00 pm']
Is there a simple way to turn the likes of a list of strings shown above into
datetime ranges? Is there a package or library that would make this task easy?
Answer: Here's one way of doing it using a list of business hours:
import datetime
Now = datetime.datetime.now()
business_hours = [(8,18),(8,18),(12,18),(8,18),(8,20),(8,20),(-1,-1)]
for i in range(6):
Now = datetime.datetime.now() + datetime.timedelta(days=i)
if Now.hour >= business_hours[Now.weekday()][0]\
and Now.hour <= business_hours[Now.weekday()][1]:
print Now.strftime("%a %d-%m-%Y"), "The store is open"
else:
print Now.strftime("%a %d-%m-%Y"), "The Store is closed"
Results:
Sun 13-12-2015 The Store is closed
Mon 14-12-2015 The store is open
Tue 15-12-2015 The store is open
Wed 16-12-2015 The Store is closed
Thu 17-12-2015 The store is open
Fri 18-12-2015 The store is open
Obviously this example attempts to prove the point using multiple dates; for
your purposes, you would dump the `for range` loop and just test `Now`.
Beware of the difference between `weekday` and `isoweekday`, which treat the
first day of the week differently: `weekday` numbers Monday as 0 and Sunday as
6, while `isoweekday` numbers Monday as 1 and Sunday as 7.
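The code above hard-codes the hours; to actually parse strings in the
question's format, something like this sketch could work (assuming the
'Days: start - end' layout holds throughout):
    import datetime
    DAYS = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
            'Friday', 'Saturday', 'Sunday']
    def parse_range(s):
        # e.g. 'Sunday-Thursday: 5:00 pm - 8:00 pm'
        days_part, hours_part = s.split(':', 1)
        start_s, end_s = [t.strip() for t in hours_part.split(' - ')]
        fmt = '%I:%M %p'
        start = datetime.datetime.strptime(start_s, fmt).time()
        end = datetime.datetime.strptime(end_s, fmt).time()
        first, last = [d.strip() for d in days_part.split('-')]
        i, j = DAYS.index(first), DAYS.index(last)
        # handle week wrap-around, e.g. Sunday-Thursday
        days = DAYS[i:j + 1] if i <= j else DAYS[i:] + DAYS[:j + 1]
        return days, start, end
    print(parse_range('Sunday-Thursday: 5:00 pm - 8:00 pm'))
    # (['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday'],
    #  datetime.time(17, 0), datetime.time(20, 0))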
|
Extract number from a website using beautifulsoup in Python
Question: I am trying to use urllib to grab an HTML page, then use BeautifulSoup to
extract data from it. I want to get all the numbers from comments_42.html,
print out their sum, and then display the count of data items. Here is my
code; I am trying to use regex, but it doesn't work for me.
import urllib
from bs4 import BeautifulSoup
url = 'http://python-data.dr-chuck.net/comments_42.html'
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html,"html.parser")
tags = soup('span')
for tag in tags:
print tag
Answer: Use the findAll() method of BeautifulSoup to extract all span tags with class
'comments', since they contain the information you need. You can then perform
any operation on them depending on your requirements.
soup = BeautifulSoup(html,"html.parser")
data = soup.findAll("span", { "class":"comments" })
numbers = [d.text for d in data]
Here is the output:
[u'100', u'97', u'87', u'86', u'86', u'78', u'75', u'74', u'72', u'72', u'72', u'70', u'70', u'66', u'66', u'65', u'65', u'63', u'61', u'60', u'60', u'59', u'59', u'57', u'56', u'54', u'52', u'52', u'51', u'47', u'47', u'41', u'41', u'41', u'38', u'35', u'32', u'31', u'24', u'19', u'19', u'18', u'17', u'16', u'13', u'8', u'7', u'1', u'1', u'1']
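To get the sum and count the question asks for, the extracted strings just
need converting to integers, for example:
    numbers = [int(d.text) for d in data]
    print 'Count:', len(numbers)
    print 'Sum:', sum(numbers)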
|