Python + Apache web pages not updating
Question: I have an Apache server with the WSGI module serving Python pages.
I have an index.wsgi file that uses str(datetime.datetime.now()) to print the
current timestamp.
The issue is that when I refresh the page I cannot see it update; it jumps
back and forth between a few timestamps, as if 3-4 old results were cached
and being served.
I checked whether the web browser was caching, but could not find
anything.
index.wsgi:
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import datetime

html = """
<HTML>
<HEAD><TITLE>Manual Runner</TITLE>
<BODY>
timestamp: {0}<BR><BR>
</BODY></HTML>
""".format(str(datetime.datetime.now()))

def application(env, r):
    body = html
    status = '200 OK'
    response_headers = [('Content-Type', 'text/html'),
                        ('Content-Length', str(len(body)))]
    r(status, response_headers)
    return [body]
httpd.conf:
WSGIScriptAlias / /web_manager/manual_run/index.wsgi
<Directory /web_manager/manual_run>
    Order allow,deny
    Allow from all
    Options +ExecCGI
    AddHandler cgi-script .py
    DirectoryIndex index.wsgi
</Directory>
Any idea?!
Thanks.
Answer: Your `html` variable (from which `body` is assigned) is global, which
means it is evaluated once when the process starts and is never recalculated.
The reason you flip between a few different values is that Apache has started
a few separate processes: each has its own value, which persists until the
process is restarted, and different requests are being routed to different
processes.
Instead of building the page at global level, return it from a
function which is called from your `application` function.
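For example, a minimal sketch of the fix (the same page as above, but with the
timestamp built per request; `render_page` is a name I made up):

import datetime

def render_page():
    # evaluated on every call, so each request gets a fresh timestamp
    return """
<HTML>
<HEAD><TITLE>Manual Runner</TITLE>
<BODY>
timestamp: {0}<BR><BR>
</BODY></HTML>
""".format(datetime.datetime.now())

def application(env, start_response):
    body = render_page()
    response_headers = [('Content-Type', 'text/html'),
                        ('Content-Length', str(len(body)))]
    start_response('200 OK', response_headers)
    return [body]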
|
How to get data out of websocket autobahnpython
Question: Hi, I'm using autobahnpython, which connects to a websocket and retrieves
data, but I'm struggling to get the data out. Basically I want to use Queues to
send the data out to another consumer thread that works on the data itself.
The examples I found only print the received data to stdout or generate some
random numbers which they send to the socket. Here is the error I'm getting:
2014-12-28 18:07:04+0100 [-] factory.protocol.set_ask_order_queue(Cryptsy_ask_Queue)
2014-12-28 18:07:04+0100 [-] TypeError: unbound method set_ask_order_queue() must be called
with Cryptsy_socket instance as first argument (got LifoQueue instance instead)
Here is my WebSocketClientProtocol derived class:
import json
from decimal import *
from autobahn.twisted.websocket import WebSocketClientProtocol
import time

class Cryptsy_socket(WebSocketClientProtocol):
    _ask_order_queue = []
    _bid_order_queue = []
    _old_bid_order = {}
    _old_ask_order = {}
    _bid_order = {}
    _ask_order = {}

    def set_ask_order_queue(self, queue):
        self._ask_order_queue = queue

    def set_bid_order_queue(self, queue):
        self._bid_order_queue = queue

    def onOpen(self):
        # subscribe for ltc/btc
        self.sendMessage(u"{\"event\": \"pusher:subscribe\",\"data\": {\"channel\": \"ticker.3\"}}".encode("utf8"))

    def onConnect(self, response):
        print("Server connected: {0}".format(response.peer))

    def onMessage(self, payload, isBinary):
        print("Text message received: {0}".format(payload.decode('utf8')))
        json_receive = json.loads(payload.decode('utf8'))
        if "event" in json_receive:
            if "message" in json_receive["event"]:
                json_data = json.loads(json_receive["data"])
                buy_order = json_data["trade"]["topbuy"]
                sell_order = json_data["trade"]["topsell"]
                sell_order_price_as_decimal = Decimal(sell_order["price"])
                sell_order_amount_as_decimal = Decimal(sell_order["quantity"])
                sell_order_quote_currency_amount = sell_order_price_as_decimal * sell_order_amount_as_decimal
                ask_as_decimal = {"price": sell_order_price_as_decimal,
                                  "amount": sell_order_amount_as_decimal,
                                  "amount_of_second_currency": sell_order_quote_currency_amount,
                                  "name_of_exchange": "Cryptsy",
                                  "fee_in_percent": Decimal("0.25")}
                buy_order_price_as_decimal = Decimal(buy_order["price"])
                buy_order_amount_as_decimal = Decimal(buy_order["quantity"])
                buy_order_quote_currency_amount = buy_order_price_as_decimal * buy_order_amount_as_decimal
                bid_as_decimal = {"price": buy_order_price_as_decimal,
                                  "amount": buy_order_amount_as_decimal,
                                  "amount_of_second_currency": buy_order_quote_currency_amount,
                                  "name_of_exchange": "Cryptsy",
                                  "fee_in_percent": Decimal("0.25")}
                self._bid_order["order"] = bid_as_decimal
                self._ask_order["order"] = ask_as_decimal
                if self._old_ask_order == {} or \
                        self._ask_order["order"]["price"] != self._old_ask_order["order"]["price"] or \
                        self._ask_order["order"]["amount"] != self._old_ask_order["order"]["amount"]:
                    ask_order_for_consumer = self._old_ask_order
                    ask_order_for_consumer["time"] = time.time()
                    self._ask_order_queue.put(ask_order_for_consumer)
                    self._old_ask_order = self._ask_order
                if self._old_bid_order == {} or \
                        self._bid_order["order"]["price"] != self._old_bid_order["order"]["price"] or \
                        self._bid_order["order"]["amount"] != self._old_bid_order["order"]["amount"]:
                    bid_order_for_consumer = self._old_bid_order
                    bid_order_for_consumer["time"] = time.time()
                    self._bid_order_queue.put(bid_order_for_consumer)
                    self._old_bid_order = self._bid_order
And here is my main, which creates the queues and connects. Here you can see my
problem: the Cryptsy_socket constructor is never called, which is why I get an
error when I try to call set_ask_order_queue and set_bid_order_queue.
import sys
from thread_handling.websocket_process_orders import Cryptsy_socket
from twisted.python import log
from twisted.internet import reactor, ssl
from autobahn.twisted.websocket import WebSocketClientFactory, \
    connectWS
import Queue

if __name__ == '__main__':
    Cryptsy_ask_Queue = Queue.LifoQueue()
    Cryptsy_bid_Queue = Queue.LifoQueue()
    log.startLogging(sys.stdout)
    factory = WebSocketClientFactory("wss://ws.pusherapp.com:443/app/cb65d0a7a72cd94adf1f?client=PythonPusherClient&version=0.2.0&protocol=6")
    factory.protocol = Cryptsy_socket
    factory.protocol.set_ask_order_queue(Cryptsy_ask_Queue)
    factory.protocol.set_bid_order_queue(Cryptsy_bid_Queue)
    ## SSL client context: default
    ##
    if factory.isSecure:
        contextFactory = ssl.ClientContextFactory()
    else:
        contextFactory = None
    connectWS(factory, contextFactory)
    reactor.run()
So how to accomplish getting data out with a queue?
Answer: OK, I found an answer in an old post where a queue is added to communicate
with other threads, which is exactly what I was looking for: [override
websocketclientprotocol](http://stackoverflow.com/questions/20740987/override-autobahn-twisted-websocketclientprotocol-class)
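Following that approach, here is a rough sketch of the pattern (the class and
attribute names are my own, not from the linked post): store the queues on the
factory, and let each protocol instance reach them through `self.factory`,
instead of calling methods on the protocol class object:

from autobahn.twisted.websocket import WebSocketClientFactory, \
    WebSocketClientProtocol
import Queue

class QueueClientProtocol(WebSocketClientProtocol):
    def onMessage(self, payload, isBinary):
        # every protocol instance built by the factory can reach
        # the shared queues through self.factory
        self.factory.ask_queue.put(payload)

class QueueClientFactory(WebSocketClientFactory):
    def __init__(self, url, ask_queue, bid_queue):
        WebSocketClientFactory.__init__(self, url)
        self.ask_queue = ask_queue
        self.bid_queue = bid_queue

ask_queue = Queue.LifoQueue()
bid_queue = Queue.LifoQueue()
# placeholder URL; substitute the real pusher endpoint
factory = QueueClientFactory("wss://example.invalid/ws", ask_queue, bid_queue)
factory.protocol = QueueClientProtocol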
|
python code to generate password list
Question: I am researching wireless security and trying to write a python script to
generate passwords, not random ones, but a dictionary of hex numbers. The letters
need to be capital, and it has to go from 12 characters to 20 characters. I
went from 11 f's to 20 f's, which seems like it would meet the requirements. I
then tried to place them in a text file. After I made the file, I chmod'ed it
to 777 and then clicked run. It has been a few minutes, but I cannot tell if
it is working or not. I am running it in Kali right now, on a 64-bit Core i3
with 8 GB of RAM. I'm not sure how long it would be expected to take, but this
is my code; let me know if it looks right, please:
# generate 10 to 32 character password list using hex numbers, 0-9 A-F
def gen_pwd(x):
    x = range(17592186044415 - 295147905179352830000)

    def toHex(dec):
        x = (dec % 16)
        digits = "0123456789ABCDEF"
        rest = dec / 16
        if (rest == 0):
            return digits[x]
        return toHex(rest) + digits[x]

    for x in range(x):
        print toHex(x)
        f = open(/root/Home/sdnlnk_pwd.txt)
        print f
        value = x
        string = str(value)
        f.write(string)

gen_pwd
Answer: How about just

password = hex(random.randint(1000000, 100000000))[2:]

or

pw_len = 12
my_alphabet = "1234567890ABCDEF"
password = "".join(random.choice(my_alphabet) for _ in range(pw_len))

or, maybe closer to what you are trying to do,

struct.pack("Q", 12365468987654).encode("hex").upper()

Basically, you are overcomplicating a very simple task. To do exactly what you
are asking, you can simplify it:

import itertools, struct

def int_to_chars(d):
    '''
    step 1: break into bytes
    '''
    while d > 0:                          # while we have not consumed the number
        yield struct.pack("B", d & 0xFF)  # decode char
        d >>= 8                           # shift right one byte
    yield ""                              # a terminator just in case it's empty

def to_password(d):
    # this will convert an arbitrarily large number to a password
    return "".join(int_to_chars(d)).encode("hex").upper()
    # you could probably just get away with `return hex(d)[2:]`

def all_the_passwords(minimum, maximum):
    #: since our numbers are so big we need to resort to some trickery
    all_pw = itertools.takewhile(lambda x: x < maximum,
                                 itertools.count(minimum))
    for pw in all_pw:
        yield to_password(pw)

all_passwords = all_the_passwords(0xfffffffffff, 0xffffffffffffffffffff)

# this next bit is gonna take a while ... go get some coffee or something
for pw in all_passwords:
    print pw
# you will be waiting for it to finish for a very long time ... but it will get there
|
Python/Cassandra: insert vs. CSV import
Question: I am generating load test data in a Python script for Cassandra.
Is it better to insert directly into Cassandra from the script, or to write a
CSV file and then load that via Cassandra?
This is for a couple million rows.
Answer: For a few million, I'd say just use CSV (assuming rows aren't huge), and see
if it works. If not, inserts it is :)
For more heavy duty stuff, you might want to create sstables and use sstable
loader.
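As a hedged illustration of the direct-insert route, here is a minimal sketch
using the DataStax Python driver; the keyspace, table, and contact point are
made-up placeholders:

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])        # hypothetical contact point
session = cluster.connect('load_test')  # hypothetical keyspace

# preparing the statement once makes bulk inserts noticeably faster
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")
for i in range(1000000):
    session.execute(insert, (i, 'row-%d' % i))

cluster.shutdown()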
|
Python, How to use part of function outside function
Question: How can I use `today` and `returntime` in `return_fee` function?
import datetime

class Movie(object):
    def __init__(self, title):
        self.title = title

    def time_of_return(self):
        self.today = today
        self.returntime = returntime
        today = datetime.datetime.now()
        returntime = today + datetime.timedelta(days=30)

    def return_fee(Movie):
        fee = -2
        delta = today - returntime
Answer: If you want `today` and `returntime` to be instance attributes, call
`time_of_return` from `__init__` to set them and then prefix them with `self`:

class Movie(object):
    def __init__(self, title):
        self.title = title
        self.time_of_return()

    def time_of_return(self):
        self.today = datetime.datetime.now()
        self.returntime = self.today + datetime.timedelta(days=30)

    def return_fee(self):
        fee = None
        delta = self.today - self.returntime
        # ... presumably do something else
Alternatively (since, in particular, `today` may change over time), call
`time_of_return` from within `return_fee` and make sure it returns
something:

class Movie(object):
    def __init__(self, title):
        self.title = title

    def time_of_return(self):
        today = datetime.datetime.now()
        returntime = today + datetime.timedelta(days=30)
        return today, returntime

    def return_fee(self):
        fee = None
        today, returntime = self.time_of_return()
        delta = today - returntime
        # ... presumably do something else
It's a good idea to indent your code by 4 spaces, by the way. And `None` (or
0) would be a better default value for `fee`.
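If it helps, here is a hedged sketch of how `return_fee` might then use the
delta; the per-day charge below is made up, since the question's `fee = -2`
doesn't say what the real rule is:

import datetime

class Movie(object):
    def __init__(self, title):
        self.title = title
        self.time_of_return()

    def time_of_return(self):
        self.today = datetime.datetime.now()
        self.returntime = self.today + datetime.timedelta(days=30)

    def return_fee(self, returned_on):
        # days past the due date; negative means returned early
        days_late = (returned_on - self.returntime).days
        return max(days_late, 0) * 2  # hypothetical charge of 2 per late day

movie = Movie("Example")
returned = movie.returntime + datetime.timedelta(days=5)  # five days late
print(movie.return_fee(returned))  # -> 10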
|
How to handle ssl connections in raw Python socket?
Question: I'm writing a program to download a given webpage. I need to use only raw
python sockets for all the connections, due to some restrictions. So I make a
socket connection to a given domain (the Host field in the response header of
an object) and then send the GET request over it. Now when the url is an https
url, I think I need to do the SSL handshake first (because otherwise I'm
getting non-200 OK responses from the server and other error responses
mentioning P3P policies). I inspected curl's behaviour to check how it's able
to download successfully while I'm not; it turns out curl first does the SSL
handshake, and that is the only difference.
So I'm wondering how to do the SSL handshake with raw python sockets. Basically
I want as simple a solution as possible that uses little besides raw sockets.
Answer: Here is an example of a TCP client with SSL.
Not sure if it's the best way to download a web page, but it should answer your
question about the "SSL handshake in raw python socket".
You will probably have to adapt the struct.pack/unpack, but you get the general
idea:
import socket
import ssl
import struct
import binascii
import sys

class NotConnectedException(Exception):
    def __init__(self, message=None, node=None):
        self.message = message
        self.node = node

class DisconnectedException(Exception):
    def __init__(self, message=None, node=None):
        self.message = message
        self.node = node

class Connector:
    def __init__(self):
        pass

    def is_connected(self):
        return (self.sock and self.ssl_sock)

    def open(self, hostname, port, cacert):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ssl_sock = ssl.wrap_socket(self.sock, ca_certs=cacert, cert_reqs=ssl.CERT_REQUIRED)
        if hostname == socket.gethostname():
            ipaddress = socket.gethostbyname_ex(hostname)[2][0]
            self.ssl_sock.connect((ipaddress, port))
        else:
            self.ssl_sock.connect((hostname, port))
        self.sock.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)

    def close(self):
        if self.sock:
            self.sock.close()
        self.sock = None
        self.ssl_sock = None

    def send(self, buffer):
        if not self.ssl_sock:
            raise NotConnectedException("Not connected (SSL Socket is null)")
        # '!L' is a fixed 4-byte, network-byte-order length prefix;
        # plain 'L' is platform-sized and may not match the recv(4) below
        self.ssl_sock.sendall(struct.pack('!L', len(buffer)))
        self.ssl_sock.sendall(buffer)

    def receive(self):
        if not self.ssl_sock:
            raise NotConnectedException("Not connected (SSL Socket is null)")
        data_size_buffer = self.ssl_sock.recv(4)
        if len(data_size_buffer) <= 0:
            raise DisconnectedException()
        data_size = struct.unpack('!L', data_size_buffer)[0]
        received_size = 0
        data_buffer = ""
        while received_size < data_size:
            chunk = self.ssl_sock.recv(1024)
            data_buffer += chunk
            received_size += len(chunk)
        return data_buffer
Then you use the class like this:

connector = Connector.Connector()
connector.open(server_ip, server_port, path_to_the_CA_cert.pem)
connector.send(your_data)
response = connector.receive()
connector.close()
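Since the question is specifically about fetching a web page, here is an even
more minimal sketch of just the handshake plus a GET request (Python 2; the
host is an example, and note that `ssl.wrap_socket` with no CA arguments does
not verify the server certificate):

import socket
import ssl

host = 'www.example.com'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_sock = ssl.wrap_socket(sock)  # the TLS handshake happens on connect
ssl_sock.connect((host, 443))

ssl_sock.sendall("GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host)

response = ""
while True:
    chunk = ssl_sock.recv(4096)
    if not chunk:
        break
    response += chunk
ssl_sock.close()
print response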
|
how to make a recursive loop in python
Question: I'm trying to make a web scraper that can cycle through pages using
BeautifulSoup.
To do this I am trying to write a function that makes a call to the page I'm
looking for, finds the href of the next button, prints the result, then assigns
it to the request and repeats the function recursively, printing each new value
of the next button.
This is what I have, and I can't really figure out why it is not working. I
get no error, so I think my structure may be off.
Thank you in advance.
import urllib.request
from bs4 import BeautifulSoup
import re

url = "http://www.calaiswine.co.uk/products/type/all-wines/1.aspx"
root_url = "http://www.calaiswine.co.uk"
first_index_url = '/products/type/all-wines/1.aspx'

htmlFile = urllib.request.urlopen(url);
htmlText = htmlFile.read();
soup = BeautifulSoup(htmlText);

def cycle_to_next_page(foo):
    response = urllib.request.urlopen(root_url + foo)
    soup = BeautifulSoup(response)
    items = [a.attrs.get('href') for a in soup.findAll('a', title='Next')]
    print(cycle_to_next_page(items[0]))

cycle_to_next_page(first_index_url)
Answer: **Your recursive function returns nothing**, it just prints.
In Python, a function that doesn't return is considered to return `None`. So
Python understands your `cycle_to_next_page(first_index_url)` instruction as
if you did:

print(print(None))

I would personally not use recursion for this example. Just use a basic loop
iterating through the pages, as sketched below.
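For example, a rough iterative sketch of the same crawl (untested against the
actual site markup):

import urllib.request
from bs4 import BeautifulSoup

root_url = "http://www.calaiswine.co.uk"
next_page = '/products/type/all-wines/1.aspx'

while next_page is not None:
    print(next_page)
    soup = BeautifulSoup(urllib.request.urlopen(root_url + next_page))
    links = [a.attrs.get('href') for a in soup.findAll('a', title='Next')]
    # stop when the page has no "Next" link
    next_page = links[0] if links else None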
|
installing mayavi with pip - Building TVTK classes...Assertion failed
Question: I've been trying to install mayavi on Yosemite. I've already installed
Numpy, VTK, wxPython, and configobj. When I run `sudo pip install mayavi`, it
shows the following error message:
Running setup.py install for mayavi
----------------------------------------------------------------------
Building TVTK classes...Assertion failed: ("pre: not_empty" && !IsEmpty()), function
GetAttributesToInterpolate, file /tmp/vtk-MvPwfE/VTK-6.1.0/Common/DataModel
/vtkGenericAttributeCollection.cxx, line 453.
Complete output from command /usr/local/opt/python/bin/python2.7 -c "import
setuptools,tokenize;__file__='/private/tmp/pip_build_root/mayavi/setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /tmp/pip-haj8cd-record/install-record.txt
--single-version-externally-managed --compile:
running install
running build
I've no idea how to deal with this.
The following might be helpful.
{20:06:44}~/test ➭ which pip
/usr/local/bin/pip
{20:07:13}~/test ➭ which python
/usr/local/bin/python
{20:07:25}~/test ➭ python
Python 2.7.9 (default, Dec 19 2014, 06:00:59)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import vtk
>>> vtk
<module 'vtk' from '/usr/local/lib/python2.7/site-packages/vtk/__init__.pyc'>
>>>
Answer: I git-cloned mayavi and ran `python setup.py install`. That worked.
|
Extracting only words from html pages
Question: I am using python 2.7 and I have a folder with a list of html pages from which
I would like to extract only the words. Currently, the process I am
using is: open the html file, run it through the Beautiful Soup library, get
the text and write it to a new file. But the problem is I still get
javascript, css (body, colour, #000000, etc.), symbols (|, `, ~, [], etc.) and
random numbers in the output.
How do I get rid of the unwanted output and get text only?
path = *folder path*
raw = open(path + "/raw.txt", "w")
files = os.listdir(path)
for name in files:
    fname = os.path.join(path, name)
    try:
        with open(fname) as f:
            b = f.read()
            soup = BeautifulSoup(b)
            txt = soup.body.getText().encode("UTF-8")
            raw.write(txt)
Answer: You could strip out the script and style tags:

import requests
from bs4 import BeautifulSoup

session = requests.session()
soup = BeautifulSoup(session.get('http://stackoverflow.com/questions/27684020/extracting-only-words-from-html-pages').text)

# This part here will strip out the script and style tags.
for script in soup(["script", "style"]):
    script.extract()

print soup.get_text()
|
Issue while importing LoginManager from flask.ext.login
Question: I am trying to get hands-on with the flask-login extension. I am using
virtualenv for flask.
I am able to import LoginManager from flask.ext.login in the interactive python
interpreter, but not in a script. Below is the import call in the
script.
from flask.ext.login import LoginManager
I am getting below trace.
Traceback (most recent call last):
File "practice/flask_login.py", line 1, in <module>
from flask.ext.login import LoginManager
File "/Users/sunil/co_operative/flask/lib/python2.7/site-packages/flask/exthook.py", line 81, in load_module
reraise(exc_type, exc_value, tb.tb_next)
File "/Users/sunil/co_operative/practice/flask_login.py", line 1, in <module>
from flask.ext.login import LoginManager
ImportError: cannot import name LoginManager
Answer: **Cause of the problem:**
Your file name is causing the problem.
**Explanation:**
As mentioned in the official flask documentation
(<http://flask.pocoo.org/docs/0.10/extensiondev/>), when you import a flask
extension with the expression import **_flask.ext.something_**, flask will
look for a module named **_flask_something_**.
So `import flask.ext.login` looks for the module `flask_login`, which conflicts
with your own file name `flask_login.py`, and flask is trying to find the
definition of LoginManager in your file.
**Solution**
Renaming the file will solve the problem.
|
Python: Have an iterator iterate over log messages on a different thread
Question: I have 2 threads running:
# Thread 1: In model
logging.getLogger('ui').info("Sit still, I'm computing...")
more_stuff = compute_stuff(stuff)
logging.info("Ok, I'm done.")
...
# Thread 2: In view, streaming messages to client
for message in log_iterator('ui'):
send_to_client(message)
I want log_iterator to hang until there's a log message, and then when one
comes up, do something with it. The question is, how do you make log_iterator?
Or is there a better way to achieve this goal?
Answer: You can have one thread write the logs to a file, and have a second thread
read the file and show the logs.
This is the logging configuration:
import logging
logger = logging.getLogger('simple_example') # set logger
logger.setLevel(logging.INFO) # set logger level
fh = logging.FileHandler('path-to-log-file') #set Handler
fh.setLevel(logging.INFO) #set Handler level
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') # set log format
fh.setFormatter(formatter) # add format to Handler
logger.addHandler(fh) # add Handler to logger
In my example, a function _rewrites_ the logs to another file:
def print_message():
    for i in range(0, 1000):
        f = open('path-to-log-file', 'r')
        s = f.read()
        f.close()
        f2 = open('path-to-log-file2', 'r+')
        f2.write(s)
        f2.close()
        time.sleep(.2)

import threading
t = threading.Thread(target=print_message)
t.start()
logger.error('error message1')
logger.error('error message2')
logger.error('error message3')
t.join()
So the logger writes logs to the file from the first thread, and the second
thread reads the logs from the file. I think this is what you want.
Also, if you want to avoid writing logs to a file, you can make your own
Handler which sends each log record directly to the second thread. It could be
a bit more involved, but I don't know threading well.
Here is the logging documentation: <https://docs.python.org/2/howto/logging.html>
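If you want to skip the file entirely, here is a hedged sketch of that custom
Handler idea (the class and queue names are my own): a `logging.Handler`
subclass can push each formatted record onto a `Queue`, which gives you a
blocking `log_iterator` very close to what the question asks for:

import logging
import Queue

class QueueLogHandler(logging.Handler):
    """Push every log record onto a queue for another thread to consume."""
    def __init__(self, queue):
        logging.Handler.__init__(self)
        self.queue = queue

    def emit(self, record):
        self.queue.put(self.format(record))

def log_iterator(queue):
    while True:
        yield queue.get()  # blocks until a message arrives

ui_queue = Queue.Queue()
logging.getLogger('ui').addHandler(QueueLogHandler(ui_queue))

# in the consumer thread:
# for message in log_iterator(ui_queue):
#     send_to_client(message)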
|
mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement
Question: Following is my python script:
#!/usr/bin/env python
import mysql.connector

cnx = mysql.connector.connect(user='qdb1', password='qdb1', host='170.19.17.9', database='qdb1')
cursor = cnx.cursor()
insert_sql = ("insert into qdb1.amis"
              " (CUSTOMER_NAME, AWS_ACCOUNT, AMI_START, CRT_REGION_PRIMARY, CRT_REGION_DR1, CRT_REGION_DR2, DBAPP_INST_ID, DBAPP_AMI_ID, DBAPP_AMI_NAME, VISTA_INST_ID, VISTA_AMI_ID, VISTA_AMI_NAME, WS_INST_ID, WS_AMI_ID, WS_AMI_NAME, DEL_REGION_PRIMARY, DEL_REGION_DR1, DEL_REGION_DR2, DELETED_AMI_PRIMARY, DELETED_SNAP_PRIMARY, DELETED_AMI_DR1, DELETED_SNAP_DR1, DELETED_AMI_DR2, DELETED_SNAP_DR2, SUCCESSFUL) "
              "values ( %(CUSTOMER_NAME)s , %(AWS_ACCOUNT)s , %(AMI_START)s , %(CRT_REGION_PRIMARY)s , %(CRT_REGION_DR1)s , %(CRT_REGION_DR2)s , %(DBAPP_INST_ID)s , %(DBAPP_AMI_ID)s , %(DBAPP_AMI_NAME)s , %(VISTA_INST_ID)s , %(VISTA_AMI_ID)s , %(VISTA_AMI_NAME)s , %(WS_INST_ID)s , %(WS_AMI_ID)s , %(WS_AMI_NAME)s , %(DEL_REGION_PRIMARY)s , %(DEL_REGION_DR1)s , %(DEL_REGION_DR2)s , %(DELETED_AMI_PRIMARY)s , %(DELETED_SNAP_PRIMARY)s , %(DELETED_AMI_DR1)s , %(DELETED_SNAP_DR1)s , %(DELETED_AMI_DR2)s , %(DELETED_SNAP_DR2)s , %(SUCCESSFUL)s)")
print insert_sql
insert_data = ('SERVER1', '68687687876', '2014-12-29 13:27:46', 'us-west-9', 'None', 'None', 'i-gtsuid43', 'ami-9jsh222f', 'DBAPP-SERVER', 'i-4wj333e3', 'ami-73eee351', 'VISTA-SERVER', 'i-5464ssse', 'ami-4ddd2853', 'WS-QSERVER', 'none', 'none', 'none', 'none', 'none', 'none', 'none', 'none', 'none', 1)
cursor.execute(insert_sql, insert_data)
cnx.commit()
cursor.close()
I am getting the following error and need help getting rid of it:
/usr/bin/python ./test.py
insert into qdb1.amis (CUSTOMER_NAME, AWS_ACCOUNT, AMI_START, CRT_REGION_PRIMARY, CRT_REGION_DR1,
CRT_REGION_DR2, DBAPP_INST_ID, DBAPP_AMI_ID, DBAPP_AMI_NAME, EVISTA_INST_ID, EVISTA_AMI_ID,
EVISTA_AMI_NAME, WS_INST_ID, WS_AMI_ID, WS_AMI_NAME, DEL_REGION_PRIMARY, DEL_REGION_DR1,
DEL_REGION_DR2, DELETED_AMI_PRIMARY, DELETED_SNAP_PRIMARY, DELETED_AMI_DR1, DELETED_SNAP_DR1,
DELETED_AMI_DR2, DELETED_SNAP_DR2, SUCCESSFUL) values ( %(CUSTOMER_NAME)s , %(AWS_ACCOUNT)s , %
(AMI_START)s , %(CRT_REGION_PRIMARY)s , %(CRT_REGION_DR1)s , %(CRT_REGION_DR2)s , %(DBAPP_INST_ID)s ,
%(DBAPP_AMI_ID)s , %(DBAPP_AMI_NAME)s , %(EVISTA_INST_ID)s , %(EVISTA_AMI_ID)s , %(EVISTA_AMI_NAME)s
, %(WS_INST_ID)s , %(WS_AMI_ID)s , %(WS_AMI_NAME)s , %(DEL_REGION_PRIMARY)s , %(DEL_REGION_DR1)s , %
(DEL_REGION_DR2)s , %(DELETED_AMI_PRIMARY)s , %(DELETED_SNAP_PRIMARY)s , %(DELETED_AMI_DR1)s , %(DELETED_SNAP_DR1)s , %(DELETED_AMI_DR2)s , %(DELETED_SNAP_DR2)s , %(SUCCESSFUL)s)
Traceback (most recent call last):
File "./test.py", line 18, in <module>
cursor.execute(insert_sql, insert_data)
File "/usr/lib/python2.6/site-packages/mysql/connector/cursor.py", line 498, in execute
"Not all parameters were used in the SQL statement")
mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement
Answer: To use "named placeholders" in mysql connector, as you're doing, the second
argument to execute should be a dictionary having those names as its keys, not
a tuple (as you're using) or list. Alternatively, use the "unnamed
placeholder" `'%s'`.
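For example, a minimal sketch of the dictionary form, trimmed to three of the
columns for brevity (the full statement works the same way):

insert_sql = ("insert into qdb1.amis (CUSTOMER_NAME, AWS_ACCOUNT, AMI_START) "
              "values (%(CUSTOMER_NAME)s, %(AWS_ACCOUNT)s, %(AMI_START)s)")
insert_data = {
    'CUSTOMER_NAME': 'SERVER1',
    'AWS_ACCOUNT': '68687687876',
    'AMI_START': '2014-12-29 13:27:46',
}
cursor.execute(insert_sql, insert_data)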
|
Cartopy: Can't add features to geo axes (no errors or warnings)
Question: I can't add features to cartopy geo axes. Here is an example from the
[gallery](http://scitools.org.uk/cartopy/docs/latest/examples/features.html):
import cartopy
import matplotlib.pyplot as plt

def main():
    ax = plt.axes(projection=cartopy.crs.PlateCarree())
    ax.add_feature(cartopy.feature.LAND)
    ax.add_feature(cartopy.feature.OCEAN)
    ax.add_feature(cartopy.feature.COASTLINE)
    ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
    ax.add_feature(cartopy.feature.LAKES, alpha=0.5)
    ax.add_feature(cartopy.feature.RIVERS)
    ax.set_extent([-20, 60, -40, 40])
    plt.show()

if __name__ == '__main__':
    main()
When I run this on my laptop at home I get nothing but blank geo axes in the
figure (I can tell because the extents are correct and I can see the
coordinates in the bottom left hand corner of the figure window). BUT, if I
run this same example at work, everything plots as expected in the link.
My feeling is that there is some kind of dependency issue, but I'm not sure
where to start with this one. There are absolutely no warnings or errors of
any kind.
Both at work and on my laptop I am running Windows 7 x64 with Python 2.7 and
installed cartopy via the Windows binaries from
[here](http://www.lfd.uci.edu/~gohlke/pythonlibs/).
If I plot something myself, like a contour plot, then it does show up, but I
think something is going wrong between getting the data from
naturalearthdata.com, processing the shape files, and adding them to the
axes.
Does anyone have any ideas on where to start with this one?
Answer: I wasn't getting this error previously, but when I ran the code from an
IPython notebook, for some reason I got this error: `HTTPError: HTTP Error 404:
Not Found`
What I ended up having to do was change line 264 in `shapereader.py` from:

_NE_URL_TEMPLATE = ('http://www.nacis.org/naturalearth/{resolution}'
                    '/{category}/ne_{resolution}_{name}.zip')

To:

_NE_URL_TEMPLATE = ('http://www.naturalearthdata.com/'
                    'http//www.naturalearthdata.com/download/{resolution}'
                    '/{category}/ne_{resolution}_{name}.zip')

Which was recommended in multiple places in the past, including
[here](https://groups.google.com/forum/#!msg/scitools-iris/aawdQFPumpU/PjREeNlRZJAJ)
It has been mentioned that cartopy has since fixed this broken link to
naturalearth in the latest release, but I think the link is not updated in the
version of cartopy that I have installed via the mentioned windows binary. I
think this issue is fairly well documented, but for some reason I could not
see what the actual error was up until now.
|
using split() to split values in an entire column in a python dataframe
Question: I am trying to clean a list of url's that has garbage, as shown:
1. /gradoffice/index.aspx(
2. /gradoffice/index.aspx-
3. /gradoffice/index.aspxjavascript$
4. /gradoffice/index.aspx~
I have a csv file with over 190k records of different url's. I loaded
the csv into a pandas dataframe and took the entire column of url's into a
list by using the statement

str = df['csuristem']

It clearly gave me all the values in the column. But when I use the following
code, it only prints 40k records, and it starts somewhere in the middle. I
don't know where I am going wrong; the program runs perfectly but shows me
only a partial number of results. Any help would be much appreciated.
import pandas

table = pandas.read_csv("SS3.csv", dtype=object)
df = pandas.DataFrame(table)
str = df['csuristem']
for s in str:
    s = s.split(".")[0]
    print s
I am looking to get an output like this
1. /gradoffice/index.
2. /gradoffice/index.
3. /gradoffice/index.
4. /gradoffice/index.
Thank you, Santhosh.
Answer: You need to do the following, so call `.str.split` on the column and then
`.str[0]` to access the first portion of the split string of interest:
In [6]:
df['csuristem'].str.split('.').str[0]
Out[6]:
0 /gradoffice/index
1 /gradoffice/index
2 /gradoffice/index
3 /gradoffice/index
Name: csuristem, dtype: object
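If you then want to keep the cleaned values, assign the result back to the
column:

df['csuristem'] = df['csuristem'].str.split('.').str[0]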
|
How do I update PyQt5?
Question: I have an apparently older version of PyQt5 installed on my Xubuntu (Voyager).
When I print the `PYQT_VERSION_STR`, it displays `'5.2.1'`. I downloaded the
latest PyQt5 release from here:
<http://sourceforge.net/projects/pyqt/files/PyQt5/PyQt-5.4/> I configured it,
ran make and make install, and everything went according to plan. However, if I
print the `PYQT_VERSION_STR` again, it still outputs `'5.2.1'`.
How do I tell my python3.4 to use the updated version?
(Shouldn't the reinstall of the new version overwrite the other one? I don't
understand why it is still showing 5.2.1 as the version string.)
**EDIT #1:**
`sys.path`: ['', '/home/user/.pythonbrew/lib', '/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload',
'/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages']
`PyQt5.__file__`: '/usr/lib/python3/dist-packages/PyQt5/__init__.py'
So it seems my python is using the version from the repositories, unless
that's where it got installed by make install.
**EDIT #2:**
It seems that `PYQT_VERSION_STR` returns the version of Qt (!) that the
PyQt5 configuration found before make and make install. So the actual
issue seems to be with my Qt5 version, which is `5.2.1` according to the
output of PyQt5's `python configure.py`:
Querying qmake about your Qt installation...
Determining the details of your Qt installation...
This is the GPL version of PyQt 5.4 (licensed under the GNU General Public License) for Python 3.4.0 on linux.
Type 'L' to view the license.
Type 'yes' to accept the terms of the license.
Type 'no' to decline the terms of the license.
Do you accept the terms of the license? yes
Found the license file pyqt-gpl.sip.
Checking [...]
DBus v1 does not seem to be installed.
Qt v5.2.1 (Open Source) is being used.
The qmake executable is /usr/bin/qmake.
Qt is built as a shared library.
SIP 4.16.5 is being used.
The sip executable is /usr/bin/sip.
These PyQt5 modules will be built: QtCore, QtGui, QtNetwork, QtOpenGL,
QtPrintSupport, QtQml, QtQuick, QtSql, QtTest, QtWidgets, QtXml, QtDBus, _QOpenGLFunctions_2_0.
The PyQt5 Python package will be installed in /usr/lib/python3.4/site-packages.
PyQt5 is being built with generated docstrings.
PyQt5 is being built with 'protected' redefined as 'public'.
The Designer plugin will be installed in
/usr/lib/x86_64-linux-gnu/qt5/plugins/designer.
The qmlscene plugin will be installed in
/usr/lib/x86_64-linux-gnu/qt5/plugins/PyQt5.
The PyQt5 .sip files will be installed in /usr/share/sip/PyQt5.
pyuic5, pyrcc5 and pylupdate5 will be installed in /usr/bin.
The interpreter used by pyuic5 is /usr/bin/python3.
Generating the C++ source for the QtCore module...
Embedding sip flags...
Generating [...]
Re-writing
/home/xiaolong/Downloads/PyQt-gpl-5.4/examples/quick/tutorials/extending/chapter6-plugins/Charts/qmldir...
Generating the top-level .pro file...
Making the pyuic5 wrapper executable...
Generating the Makefiles...
So PyQt5 is going into the correct directory, but the actual version of
Qt5 is older than 5.4. Now the question seems to turn into "How do I update my
Qt5 version?", unless I misunderstood something here.
**EDIT #3:**
Output of `sys.executable`:
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/usr/bin/python3'
>>>
**EDIT #4:**
Contents of my `.bash_aliases` file:

alias python=python3
alias pip=pip3
Answer: **EDIT** :
I have been assuming that the suggestions made in the initial comments to your
question somehow didn't work for you, and so I guessed that there must be a
second python installation on your system. But after reading through
everything again, it seems that in fact you didn't follow through on all the
hints that were given.
It looks like this is simply a case of `dist-packages` vs `site-packages`. On
Debian-based systems, the python packages supplied by the distro are installed
in `dist-packages`, whereas user-installed packages would normally be
installed in `site-packages`. This is done to minimise the chances of breaking
other system applications which rely on specific versions of python packages.
The output from the PyQt5 configuration script shows that you installed it to
`site-packages`:
The PyQt5 Python package will be installed in /usr/lib/python3.4/site-packages.
But that directory does not appear in your `sys.path`, and so python cannot
import packages from that location.
There are several ways to fix this, but some are "safer" than others. For
instance, you could manipulate the python path in various ways, but that has
the significant downside of affecting _all_ python applications (including
ones using python2), so it's probably best avoided.
Since there are probably very few system applications that rely on PyQt-5.2.1,
it should be okay to simply override it. To do that, re-compile and install
PyQt-5.4 by configuring it like this:
python3 configure.py --destdir=/usr/local/lib/python3.4/dist-packages
Alternatively, you could uninstall your system PyQt5 package (using `apt-get`,
or whatever), and replace it altogether by configuring like this:
python3 configure.py --destdir=/usr/lib/python3/dist-packages
And as a final step, you should probably tidy things up by removing the
redundant `PyQt5` directory from `/usr/lib/python3.4/site-packages`.
|
Scrapy blank AssertionError?
Question: The below code is throwing the following error for each request sent to the
parse method (Scrapy v0.24.4):
2014-12-30 01:20:06+0000 [yelp_spider] DEBUG: Crawled (200) <GET http://www.yelp.com/biz/lookout-tavern-oak-bluffs> (referer: http://www.yelp.com/search?find_desc=Restaurants&find_loc=02557&ns=1) ['partial']
2014-12-30 01:20:06+0000 [yelp_spider] ERROR: Spider error processing <GET http://www.yelp.com/biz/lookout-tavern-oak-bluffs>
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/scrapy/core/scraper.py", line 111, in _scrape_next
self._scrape(response, request, spider).chainDeferred(deferred)
File "/usr/lib/python2.7/site-packages/scrapy/core/scraper.py", line 118, in _scrape
dfd = self._scrape2(response, request, spider) # returns spiders processed output
File "/usr/lib/python2.7/site-packages/scrapy/core/scraper.py", line 128, in _scrape2
request_result, request, spider)
File "/usr/lib/python2.7/site-packages/scrapy/core/spidermw.py", line 69, in scrape_response
dfd = mustbe_deferred(process_spider_input, response)
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/scrapy/utils/defer.py", line 39, in mustbe_deferred
result = f(*args, **kw)
File "/usr/lib/python2.7/site-packages/scrapy/core/spidermw.py", line 48, in process_spider_input
return scrape_func(response, request, spider)
File "/usr/lib/python2.7/site-packages/scrapy/core/scraper.py", line 138, in call_spider
dfd.addCallbacks(request.callback or spider.parse, request.errback)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 288, in addCallbacks
assert callable(callback)
exceptions.AssertionError:
Code:
import scrapy
from scrapy import Request
import re

ROOT_URL = "http://www.yelp.com"

class YelpReview(scrapy.Item):
    zip_code = scrapy.Field()
    review_date = scrapy.Field()

class yelp_spider(scrapy.Spider):
    name = 'yelp_spider'
    allowed_domains = ['yelp.com']
    start_urls = ["http://www.yelp.com/search?find_desc=Restaurants&find_loc=02557&ns=1"]

    def parse(self, response):
        business_urls = [business_url.extract() for
                         business_url in response.xpath('//a[@class="biz-name"]/@href')[1:]
                         ]
        for business_url in business_urls:
            yield Request(url=ROOT_URL + business_url, callback="scrape_reviews")
        if response.url.find('?start=') == -1:
            self.createRestaurantPageLinks(response)

    def scrape_reviews(self, response):
        reviews = response.xpath('//meta[@itemprop="datePublished"]/@content')
        item = YelpReview()
        for review in reviews:
            item['zip_code'] = "02557"
            item['review_date'] = review.extract()
            yield item
        if response.url.find('?start=') == -1:
            self.createReviewPageLinks(response)

    def createRestaurantPageLinks(self, response):
        raw_num_results = response.xpath('//span[@class="pagination-results-window"]/text()').extract()[0]
        num_business_results = int(re.findall(" of (\d+)", raw_num_results)[0])
        BUSINESSES_PER_PAGE = 10
        restaurant_page_links = [Request(url=response.url + '?start=' + str(BUSINESSES_PER_PAGE*(n+1)),
                                         callback="parse") for n in range(num_business_results/BUSINESSES_PER_PAGE)]
        return restaurant_page_links

    def createReviewsPageLinks(self, response):
        REVIEWS_PER_PAGE = 40
        num_review_results = int(response.xpath('//span[@itemprop="reviewCount"]/text()').extract()[0])
        review_page_links = [Request(url=response.url + '?start=' + str(REVIEWS_PER_PAGE*(n+1)),
                                     callback="scrape_reviews") for n in range(num_review_results/REVIEWS_PER_PAGE)]
        return review_page_links
I've tried making a bunch of changes but still can't figure out what's
triggering this error.
Answer: You need to return from the `parse()` method:

if response.url.find('?start=') == -1:
    return self.createRestaurantPageLinks(response)
|
Marking off checkboxes in a table that match a specific text
Question: I have a table (screenshot below) where I want to mark off the checkboxes that
have the text "Xatu Auto Test" in the same row, using Selenium with Python.
<http://i.imgur.com/31eDDkl.png>
I've tried following these two posts:
* [Iterating Through a Table in Selenium Very Slow](http://stackoverflow.com/questions/27234879/iterating-through-a-table-in-selenium-very-slow)
* [Get row & column values in web table using python web driver](http://stackoverflow.com/questions/14907134/get-row-column-values-in-web-table-using-python-web-driver)
But I couldn't get those solutions to work on my code.
My code:
form = self.browser.find_element_by_id("quotes-form")
try:
    rows = form.find_elements_by_tag_name("tr")
    for row in rows:
        columns = row.find_elements_by_tag_name("td")
        for column in columns:
            if column.text == self.group_name:
                column.find_element_by_name("quote_id").click()
except NoSuchElementException:
    pass
The checkboxes are never clicked and I am wondering what I am doing wrong.
This is the HTML when I inspect with FirePath:
<form id="quotes-form" action="/admin/quote/delete_multiple" method="post" name="quotesForm">
<table class="table table-striped table-shadow">
<thead>
<tbody id="quote-rows">
<tr>
<tr>
<td class="document-column">
<td>47</td>
<td class="nobr">
<td class="nobr">
<td class="nobr">
<td class="nobr">
<a title="Xatu Auto Test Data: No" href="http://192.168.56.10:5001/admin/quote/47/">Xatu Auto Test</a>
</td>
<td>$100,000</td>
<td style="text-align: right;">1,000</td>
<td class="nobr">Processing...</td>
<td class="nobr">192.168....</td>
<td/>
<td>
<input type="checkbox" value="47" name="quote_id"/>
</td>
</tr>
<tr>
</tbody>
<tbody id="quote-rows-footer">
</table>
<div class="btn-toolbar" style="text-align:center; width:100%;">
Answer: With a quick look, I reckon this line needs changing, as you're trying to
access `column`'s `quote_id` when it should be `row`'s.
From:

column.find_element_by_name("quote_id").click()

To:

row.find_element_by_name("quote_id").click()

P.S. Provided that, as @Saifur commented, you have your comparison done
correctly.
### Updated:
I have run a simulation and indeed the checkbox is ticked if changing `column`
to `row`, simplified version:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('your-form-sample.html')

form = driver.find_element_by_id("quotes-form")
rows = form.find_elements_by_tag_name("tr")
for row in rows:
    columns = row.find_elements_by_tag_name("td")
    for column in columns:
        # I changed this to the actual string provided your comparison is correct
        if column.text == 'Xatu Auto Test':
            # you need to change from column to row, and it will work
            row.find_element_by_name("quote_id").click()
Here's the output:

|
wb = xlwings.Workbook() fails on mac
Question: I'm just tinkering with xlwings on a mac to write values to cells. However,
when I initialize a new workbook, I get this:
import xlwings as xl
wb = xl.Workbook()
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Developer/anaconda/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/Developer/anaconda/lib/python2.7/site-packages/xlwings/_xlmac.py", line 30, in clean_up
app('Microsoft Excel').run_VB_macro('CleanUp')
File "/Developer/anaconda/lib/python2.7/site-packages/aeosa/appscript/reference.py", line 579, in __getattr__
raise AttributeError("Unknown property, element or command: %r" % name)
AttributeError: Unknown property, element or command: 'run_VB_macro'
Error in sys.exitfunc:
Traceback (most recent call last):
File "/Developer/anaconda/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/Developer/anaconda/lib/python2.7/site-packages/xlwings/_xlmac.py", line 30, in clean_up
app('Microsoft Excel').run_VB_macro('CleanUp')
File "/Developer/anaconda/lib/python2.7/site-packages/aeosa/appscript/reference.py", line 579, in __getattr__
raise AttributeError("Unknown property, element or command: %r" % name)
AttributeError: Unknown property, element or command: 'run_VB_macro'
Excel does open and creates a new file, but then the exception happens. I
don't understand why it would be trying to run macros when running on a mac. I
know the author is here. Hopefully, he can weigh in.
Answer: I don't get this error on OS X Yosemite 10.10.2, Office 2011 v14.4.6, Python
3.4.2 and xlwings 0.3.0.
Maybe you installed xlwings for Python 2.7? Try the following command as su:
`pip3 install xlwings`
|
Loading regular expression patterns from external source?
Question: I have a series of regular expression patterns defined for automated
processing of text. Due to the design of the program, it's better to have
these patterns in a separate text file, namely a JSON file. The pattern in
Python is of the raw `r''` type, but all I can provide from the file is a plain
string. I'd like to retain functionality such as grouping, and I'd like to
keep features such as character classes (`[A-z]`), so I'm not talking about
escaping everything.
I'm using Python 3.4. How do I properly load these patterns into the `re`
module? And what kind of escaping problems should I watch out for?
Answer: I am not sure exactly what you want, but have a look at this.
If you have a file called input.txt containing `\d+`,
then you can use it this way:

import re

f = open("input.txt", "r")
x = "asasd3243sdfdsf23234sdsdf"
print re.findall(r"" + f.readline(), x)

Output: `['3243', '23234']`
When the pattern is read from a file, you don't need to escape backslashes the
way you would in a Python string literal.
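Since the question mentions storing the patterns in JSON, here is a minimal
hedged sketch of that route (the file name and keys are made up). The one
thing to watch is JSON's own string escaping: a backslash in the pattern must
be doubled inside the JSON file itself:

import json
import re

# patterns.json might look like: {"number": "(\\d+)", "word": "([A-Za-z]+)"}
with open("patterns.json") as f:
    raw_patterns = json.load(f)

# compile once; grouping and character classes survive the round trip
patterns = {name: re.compile(p) for name, p in raw_patterns.items()}
print(patterns["number"].findall("asasd3243sdfdsf23234sdsdf"))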
|
Interactive pixel information of an image in Python?
Question: **Short version:** is there a Python method for displaying an image which
shows, in real time, the pixel indices and intensities? So that as I move the
cursor over the image, I have a continually updated display such as
`pixel[103,214] = 198` (for grayscale) or `pixel[103,214] = (138,24,211)` for
rgb?
**Long version:**
Suppose I open a grayscale image saved as an ndarray `im` and display it with
`imshow` from matplotlib:
im = plt.imread('image.png')
plt.imshow(im,cm.gray)
What I get is the image, and in the bottom right of the window frame, an
interactive display of the pixel indices. Except that they're not quite, as
the values are not integers: `x=134.64 y=129.169` for example.
If I set the display with correct resolution:
plt.axis('equal')
the x and y values are still not integers.
The `imshow` method from the `spectral` package does a better job:
import spectral as spc
spc.imshow(im)
Then in the bottom right I now have `pixel=[103,152]` for example.
However, none of these methods also shows the pixel values. So I have two
questions:
1. Can the `imshow` from `matplotlib` (and the `imshow` from `scikit-image`) be coerced into showing the correct (integer) pixel indices?
2. Can any of these methods be extended to show the pixel values as well?
Answer: There are a couple of different ways to go about this.
You can monkey-patch `ax.format_coord`, similar to [this official
example](http://matplotlib.org/examples/api/image_zcoord.html). I'm going to
use a slightly more "pythonic" approach here that doesn't rely on global
variables. (Note that I'm assuming no `extent` kwarg was specified, similar to
the matplotlib example. To be fully general, you need to do [a touch more
work](https://github.com/joferkington/mpldatacursor/blob/master/mpldatacursor/pick_info.py#L27).)
import numpy as np
import matplotlib.pyplot as plt

class Formatter(object):
    def __init__(self, im):
        self.im = im

    def __call__(self, x, y):
        z = self.im.get_array()[int(y), int(x)]
        return 'x={:.01f}, y={:.01f}, z={:.01f}'.format(x, y, z)

data = np.random.random((10,10))
fig, ax = plt.subplots()
im = ax.imshow(data, interpolation='none')
ax.format_coord = Formatter(im)
plt.show()

Alternatively, just to plug one of my own projects, you can use
[`mpldatacursor`](https://github.com/joferkington/mpldatacursor) for this. If
you specify `hover=True`, the box will pop up whenever you hover over an
enabled artist. (By default it only pops up when clicked.) Note that
`mpldatacursor` does handle the `extent` and `origin` kwargs to `imshow`
correctly.
import numpy as np
import matplotlib.pyplot as plt
import mpldatacursor
data = np.random.random((10,10))
fig, ax = plt.subplots()
ax.imshow(data, interpolation='none')
mpldatacursor.datacursor(hover=True, bbox=dict(alpha=1, fc='w'))
plt.show()

Also, I forgot to mention how to show the pixel indices. In the first example,
it's just assuming that `i, j = int(y), int(x)`. You can add those in place of
`x` and `y`, if you'd prefer.
With `mpldatacursor`, you can specify them with a custom formatter. The `i`
and `j` arguments are the correct pixel indices, regardless of the `extent`
and `origin` of the image plotted.
For example (note the `extent` of the image vs. the `i,j` coordinates
displayed):
import numpy as np
import matplotlib.pyplot as plt
import mpldatacursor
data = np.random.random((10,10))
fig, ax = plt.subplots()
ax.imshow(data, interpolation='none', extent=[0, 1.5*np.pi, 0, np.pi])
mpldatacursor.datacursor(hover=True, bbox=dict(alpha=1, fc='w'),
                         formatter='i, j = {i}, {j}\nz = {z:.02g}'.format)
plt.show()

|
how to control the import paths in Python?
Question: I am new to Python. I am at a company where they built a large system in
Python. They use a proprietary system to manage the paths when the system is
running, but now I have been asked to build a standalone script that interacts
with some of the code in their system. Sadly, my standalone script won't be
running under the path-manager they use, so I need to figure out the paths on
my own.
So, for instance, I have this line:
from hark.tasks import REPLY_LINE
This is actually copied from some of their older code. In this case, the
script can find hark, but hark has an `__init__.py` file, and that is where
the problems start. So I get this:
meg/src/python2/hark/hark/__init__.py in <module>()
5 from flask import jsonify, render_template, request
6 import jinja2
----> 7 import logbook, logbook.compat
8
9 from healthhark.context import Ghost, g
The project that they built actually includes logbook 3 times. If I do:
find . -name "*logbook*"
I see:
meg/zurge/opt/python2.7/lib/python2.7/site-packages/logbook
meg/zurge/opt/python2.7-hark/lib/python2.7/site-packages/logbook
meg/zurge/opt/python3.4/lib/python3.4/site-packages/logbook
Like I said, they have a proprietary path manager that usually tells each
piece of code where it can find the packages that it should include, but I am
building a standalone app.
I don't know much about Python, but I am wondering if there is an idiomatic
and Pythonic way of including packages that live in such distant directories.
And, before anyone suggests `pip install`: we don't rely on global installs at
all.
Answer: The best solution would probably be
[virtualenv](https://pypi.python.org/pypi/virtualenv) or
[virtualenvwrapper](https://pypi.python.org/pypi/virtualenvwrapper). This
would allow you to define an environment which contains all of the libraries
that your script requires. This would not be global.
This can be done as follows:
* Create a [requirements.txt file](http://pip.readthedocs.org/en/latest/user_guide.html#requirements-files) defining the required libraries
* Install pip and virtualenvwrapper
* `source /usr/local/bin/virtualenvwrapper.sh`
* `workon hark_task_script_env || ( mkvirtualenv hark_task_script_env && pip install -r requirements.txt )`
* `python your-script.py`
The virtualenvwrapper installs the environment into a folder in your home
directory. The plain virtualenv library installs the environment into a folder
in the project. Other than that they are equivalent.
I would really recommend using them over a proprietary package manager. If you
must use the proprietary package manager, then it is reasonable to ask for
access to its package loader!
If this really isn't satisfactory then you can hack a package loader as
follows (assuming you use a *nix system). This is a shell script written in
zsh. Hopefully it is clear enough for you to rewrite it in a supported shell
if your system does not have that available:
#!/bin/zsh
setopt extended_glob

function find_package_init_files () {
    locate __init__.py
}

# Get the containing folder of a file or a folder
function file_or_folder_to_parent_folder () {
    while read file_or_folder
    do
        echo ${file_or_folder:h}
    done
}

# Exclude folders where the parent folder also has an __init__.py file in it
function exclude_inner_packages () {
    while read init_file_folder
    do
        init_file_parent_folder=${init_file_folder:h}
        if [ ! -e ${init_file_parent_folder}/__init__.py ]
        then
            echo ${init_file_folder}
        fi
    done
}

# This produces an array of all folders
# that contain at least one python package
function get_distinct_python_package_folders () {
    find_package_init_files |
        file_or_folder_to_parent_folder |
        exclude_inner_packages |
        file_or_folder_to_parent_folder |
        sort |
        uniq
}

PYTHONPATH=${(j/:/)$(get_distinct_python_package_folders)} YOUR_SCRIPT_HERE
You may well need to update this script to put the default python path first,
and remember that this is an incredibly clumsy approach. If there are multiple
versions of libraries installed on a system, the one you end up
using will be ill-defined.
Using a proper package manager, even if that is the proprietary one, is the
best way.
|
Python parse empty string
Question: I'm using the [`parse`](https://pypi.python.org/pypi/parse) library and ran
into surprising (to me) functionality: it does not match empty strings:
>>> from parse import parse
>>> parse('hi "{}"', 'hi "everybody"')
<Result ('everybody',) {}>
>>> parse('hi "{}"', 'hi ""')
>>>
Is there a way, using `parse`, to get it to match any string between `""` in
the same way that `re` does:
>>> from re import match
>>> match('hi "(.*)"', 'hi "everybody"').groups()
('everybody',)
>>> match('hi "(.*)"', 'hi ""').groups()
('',)
Answer: Use a custom type conversion:
from parse import parse

def zero_or_more_string(text):
    return text
zero_or_more_string.pattern = r".*"

parse('hi "{:z}"', 'hi ""', {"z": zero_or_more_string})
and you'll get this:
<Result ('',) {}>
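If your version of `parse` provides it, the `parse.with_pattern` helper
expresses the same thing a little more compactly (a sketch with the same
behaviour as above):

from parse import parse, with_pattern

@with_pattern(r".*")
def zero_or_more_string(text):
    return text

print(parse('hi "{:z}"', 'hi ""', {"z": zero_or_more_string}))
# <Result ('',) {}>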
|
Pandas DataFrame Assignment Issue - Possible Bug?
Question: I am trying to write Python Pandas code to merge the data in two DataFrames,
with the new DataFrame's data replacing the old DataFrame's data if the index
and columns are identical. There seems to be a bug in Pandas that sometimes
causes the column names to be mixed up.
Here is an example. First, create the two DataFrames:
In [1]: df1 = DataFrame([[1, 2, 3, 4]]*3, columns=["A1", "B2", "C3", "D4"], index=[0, 1, 2])
In [2]: df2 = DataFrame([[30, 10, 40, 20]]*3, columns=["C3", "A1", "D4", "B2"], index=[1, 2, 3])
In [3]: df1
Out[3]:
A1 B2 C3 D4
0 1 2 3 4
1 1 2 3 4
2 1 2 3 4
[3 rows x 4 columns]
In [4]: df2
Out[4]:
C3 A1 D4 B2
1 30 10 40 20
2 30 10 40 20
3 30 10 40 20
[3 rows x 4 columns]
Observe that df2 has the same columns but in a different order. The data is
the same as 10*df1.
Now merge them:
In [5]: merge_df = DataFrame(index=df1.index.union(df2.index), columns=df1.columns.union(df2.columns))
In [6]: merge_df.loc[df1.index, df1.columns] = df1
In [7]: merge_df.loc[df2.index, df2.columns] = df2
In [8]: merge_df
Out[8]:
A1 B2 C3 D4
0 1 2 3 4
1 10 20 30 40
2 10 20 30 40
3 10 20 30 40
[4 rows x 4 columns]
This works as expected.
Now redefine df2 so that it has a similar index as df1.
In [9]: df2 = DataFrame([[30, 10, 40, 20]]*3, columns=["C3", "A1", "D4", "B2"], index=[0, 1, 2])
In [10]: df2
Out[10]:
C3 A1 D4 B2
0 30 10 40 20
1 30 10 40 20
2 30 10 40 20
[3 rows x 4 columns]
Then merge using the same code as before:
In [11]: merge_df = DataFrame(index=df1.index.union(df2.index), columns=df1.columns.union(df2.columns))
In [12]: merge_df.loc[df1.index, df1.columns] = df1
In [13]: merge_df.loc[df2.index, df2.columns] = df2
In [14]: merge_df
Out[14]:
A1 B2 C3 D4
0 30 10 40 20
1 30 10 40 20
2 30 10 40 20
[3 rows x 4 columns]
Why are the column names and data mixed up? Am I using .loc wrong? Changing
that last line to .ix does not fix the problem. It only works if I do this:
In [15]: merge_df = DataFrame(index=df1.index.union(df2.index), columns=df1.columns.union(df2.columns))
In [16]: merge_df.loc[df1.index, df1.columns] = df1
In [17]: merge_df[df2.columns] = df2
In [18]: merge_df
Out[18]:
A1 B2 C3 D4
0 10 20 30 40
1 10 20 30 40
2 10 20 30 40
[3 rows x 4 columns]
That is the desired result.
I may be doing something wrong here, but if I am, there is something important
I do not understand about DataFrames and I could be making similar mistakes
elsewhere in my code. If that is the case, please explain.
I can't check the Pandas GitHub bug tracker as that website is blocked from
work. Any help would be appreciated.
In [19]: pd.__version__
Out[19]: '0.13.1'
Answer: I need to upgrade to Pandas 0.14.0, according to jreback on github:
<https://github.com/pydata/pandas/issues/9200>
|
Recording multiple videos to differing file names with Raspberry Pi Camera triggered by IR Sensor
Question: I am currently working on setting up my Raspberry Pi to (when triggered by an
IR sensor) record twenty second videos, each to a new file.
Currently it is returning this error when movement is detected:
"_Traceback (most recent call last): File "pir_2filenametest2.py", line 57, in
for filename in camera.start_recording('pivid{counter:03}.h264'): TypeError:
'NoneType' object is not iterable_ "
I am pretty much brand new to python, so any help would be wonderful.
Here is the code:
# Author : Matt Hawkins
# Date   : 21/01/2013

# Import required Python libraries
import RPi.GPIO as GPIO
import time
import picamera

# Use BCM GPIO references
# instead of physical pin numbers
GPIO.setmode(GPIO.BCM)

# Define GPIO to use on Pi
GPIO_PIR = 7

print "Wilbur Cam! (CTRL-C to exit)"

# Set pin as input
GPIO.setup(GPIO_PIR, GPIO.IN)  # Echo

Current_State = 0
Previous_State = 0

camera = picamera.PiCamera()

try:
    print "Waiting for PIR to settle ..."
    # Loop until PIR output is 0
    while GPIO.input(GPIO_PIR) == 1:
        Current_State = 0
    print " Ready"
    # Loop until user quits with CTRL-C
    while True:
        # Read PIR state
        Current_State = GPIO.input(GPIO_PIR)
        if Current_State == 1 and Previous_State == 0:
            # PIR is triggered
            print " Motion detected!"
            # Record previous state
            Previous_State = 1
            # Camera begins to record
            camera.resolution = (1360, 768)
            for filename in camera.start_recording('pivid{counter:03}.h264'):
                print('Captured %s' % filename)
                time.sleep(20)
                camera.stop_recording()
        elif Current_State == 0 and Previous_State == 1:
            # PIR has returned to ready state
            print " Ready"
            Previous_State = 0
        # Wait for 10 milliseconds
        time.sleep(0.01)
except KeyboardInterrupt:
    print " Quit"
    # Reset GPIO settings
    GPIO.cleanup()
Answer: [camera.start_recording](http://picamera.readthedocs.org/en/latest/api.html#picamera.PiCamera.start_recording)
does not return a value, it starts the recording, the first arg `output` is
what the video is written to which you could pass as a string or a file object
but there is nothing to iterate over:
**start_recording(output, format=None, resize=None, splitter_port=1,
**options)**
`Start recording video from the camera, storing it in output. If output is a
string, it will be treated as a filename for a new file which the video will
be written to. Otherwise, output is assumed to be a file-like object and the
video data is appended to it (the implementation only assumes the object has a
write() method - no other methods will be called).`
So in your case the filename is `'pivid{counter:03}.h264'`.
If you want different filenames you could use something like the following:
i = 0  # set i to 0 outside the while
while True:
    # Read PIR state
    Current_State = GPIO.input(GPIO_PIR)
    if Current_State == 1 and Previous_State == 0:
        # PIR is triggered
        print " Motion detected!"
        # Record previous state
        Previous_State = 1
        # Camera begins to record
        camera.resolution = (1360, 768)
        camera.start_recording('pivid{}.h264'.format(i))  # pass i to str.format
        print('Captured pivid{}.h264'.format(i))
        i += 1  # increase i
        time.sleep(20)
        camera.stop_recording()
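As a side note (an alternative the picamera docs suggest, not something used above): `camera.wait_recording(20)` behaves like `time.sleep(20)` but re-raises any encoder error that occurs while recording, so failures surface immediately:

camera.start_recording('pivid{}.h264'.format(i))
camera.wait_recording(20)  # sleeps like time.sleep, but propagates recording errors
camera.stop_recording()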
|
Python : "print" doesn't work
Question: I've recently hit a bug in my Pokemon battling bot program, which I've been writing for some time, and which I cannot explain at all.
The code is too long to fit well in a stack overflow question, so here's the
entire code in a pastebin.
<http://pastebin.com/h4V2DnXh>
In short, the code connects to a websocket, receives and sends data through it,
and, so I can follow what's happening, prints everything it receives. Plus
several debug tools... well, in any case, it prints stuff. Or rather, it used
to. For some hours, nothing has been printed. A bug, I would think: the
program doesn't reach any line where a print statement is issued. But it does!
Everything works, data is correctly sent and received, the data is not empty,
the bot does everything it's supposed to do. So, I added a
print "test."
at the beginning... and nothing happens... it isn't even a matter of printing
empty strings; it doesn't print. At all.
By researching, I've found that print bugs can be linked to the use of IDLE,
but I'm using Enthought Canopy (Python 2.7), or that they can be linked to
imports, but the print "test." doesn't work anyway.
Plus, it used to work, and I haven't modified the list of imported modules for
a while. And anyway, the modules do not have errors.
I really don't understand. Why won't print work?
Answer: Are you using Python 3.4? If you are, try using parentheses, like this: `print("test.")`
|
How to deal with globals in modules?
Question: I try to make a non blocking api calls for
[OpenWeatherMap](http://www.openweathermap.com/api), but my problem is:
When I was doing tests by running the file directly, the `global api` took
effect, but when importing the function, `global` doesn't work any more, and
`api` didn't change: `api = ""`?
Just after declaring the function I put `global api`, and then when I use
`print 'The API link is: ' + api` I get the exact `api`, but `global` didn't
take effect!
Here is the code:
<https://github.com/abdelouahabb/tornadowm/blob/master/tornadowm.py#L62>
What am I doing wrong?
When I import the file:
from tornadowm import *
forecast('daily', q='london', lang='fr')
The API link is: http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london
api
Out[5]: ''
When executing the file instead of importing it:
runfile('C:/Python27/Lib/site-packages/tornadowm.py', wdir='C:/Python27/Lib/site-packages')
forecast('daily', q='london', lang='fr')
The API link is: http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london
api
Out[8]: 'http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london'
Edit: here is the code, if the Git got updated:
from tornado.httpclient import AsyncHTTPClient
import json
import xml.etree.ElementTree as ET

http_client = AsyncHTTPClient()
url = ''
response = ''
args = []
link = 'http://api.openweathermap.org/data/2.5/'
api = ''
result = {}
way = ''

def forecast(way, **kwargs):
    global api
    if way in ('weather', 'forecast', 'daily', 'find'):
        if way == 'daily':
            way = 'forecast/daily?'
        else:
            way += '?'
        for i, j in kwargs.iteritems():
            args.append('&{0}={1}'.format(i, j))
        a = ''.join(set(args))
        api = (link + way + a.replace(' ', '+')).replace('?&', '?')
        print 'The API link is: ' + api

        def handle_request(resp):
            global response
            if resp.error:
                print "Error:", resp.error
            else:
                response = resp.body

        http_client.fetch(api, handle_request)
    else:
        print "please put a way: 'weather', 'forecast', 'daily', 'find' "

def get_result():
    global result
    if response.startswith('{'):
        print 'the result is JSON, stored in the variable result'
        result = json.loads(response)
    elif response.startswith('<'):
        print 'the result is XML, parse the result variable to work on the nodes,'
        print 'or, use response to see the raw result'
        result = ET.fromstring(response)
    else:
        print '''Sorry, no valid response, or you used a parameter that is not compatible with the way!\n please check http://www.openweathermap.com/api for more informations'''
Answer: It's a side effect of how `from ... import *` binds names.
When you do `from tornadowm import *`, your own namespace gets a name `api`
bound to whatever `tornadowm.api` was at import time (the empty string).
Why doesn't it update? Because `global api` inside `forecast()` rebinds
`tornadowm.api`, the name inside the module, but it never touches the copy of
the name you imported, so your `api = ""` binding stays in place.
Also, as a side note, it's not considered good practice to use `from
something import *`. You should do `from tornadowm import forecast` or, even
better, `import tornadowm` and then use `tornadowm.forecast()`; reading
`tornadowm.api` afterwards always shows the module's current value, as the
sketch below demonstrates.
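A minimal sketch of the name-binding issue (hypothetical module `mod`, not part of the original code):

# mod.py
api = ''

def set_api():
    global api
    api = 'new value'

# client code
from mod import *    # copies the current binding of mod.api into this namespace
set_api()            # rebinds mod.api, not our imported copy
print(api)           # still '' -- our name was never rebound

import mod
print(mod.api)       # 'new value' -- going through the module sees the rebinding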
* **OR**
Even better, I just noticed your `forecast()` function doesn't return
anything. Which technically makes it not a `function` anymore, but a
`procedure` (a procedure is like a function but it returns nothing, it just
"does" stuff).
Instead of using a `global`, you should define `api` in this function and then
`return api` from it. Like this:
def forecast(blablabla):
    api = "something"
    blablabla
    return api
And then
import tornadowm
api = tornadowm.forecast(something)
And you're done.
|
Python dictionary: set value as the key string
Question: I have a python dictionary and want to set one of the values as the key itself
whilst initializing the dict. That is:
dummy = dict(
    Key1 = ["SomeValue1", "Key1"],
    Key2 = ["SomeValue2", "Key2"],
)
Can this be done programmatically? That is, to skip writing the key again and set
something like `dummy.keys()[currentkeyindex]`.
Answer: If you want to keep track of items as well then use
[defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict)
>>> from collections import defaultdict
>>> output = defaultdict(list)
>>> values = [
["SomeValue1", "Key1"],
["SomeValue2a", "Key2"],
["SomeValue2b", "Key2"]
]
>>> for x in values:
... output[x[1]].append(x)
...
>>> output
defaultdict(<type 'list'>, {
'Key2': [['SomeValue2a', 'Key2'], ['SomeValue2b', 'Key2']],
'Key1': [['SomeValue1', 'Key1']]
})
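If each key appears only once, as in the question, a plain dict comprehension keyed on the second element of each value is enough (a minimal sketch):

values = [["SomeValue1", "Key1"], ["SomeValue2", "Key2"]]
dummy = {v[1]: v for v in values}
# {'Key1': ['SomeValue1', 'Key1'], 'Key2': ['SomeValue2', 'Key2']}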
|
Class inheritance from different modules python
Question: I'm new to Python and I am having a hard time figuring out how I can
inherit from a class in another module.
module: ~/foo.py
import bar

class foo:
    def test(self):
        print("this is a test")
module: ~/bar.py
class bar(foo):
    def __init__(self):
        super().test()
As soon as bar is imported, I get this error message:
NameError: name 'foo' is not defined
Answer: [If you want to refer to a name in another module then you must import
it.](https://docs.python.org/3/tutorial/modules.html)
import foo
class bar(foo.foo):
...
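Note also that `foo.py` does `import bar` while `bar.py` imports `foo`, so the two modules would import each other; dropping the unused `import bar` from `foo.py` avoids a circular import. A corrected `bar.py` could then look like this (a sketch, assuming Python 3 for the bare `super()` call):

from foo import foo

class bar(foo):
    def __init__(self):
        super().test()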
|
Cannot communicate with websocket. Autobahn: received HELLO message, and session is not yet established
Question: I am trying to build a WebSocket session using Python 3.4, Django, Autobahn
and JS. I have successfully run the websocket server on the Python side, but I
cannot subscribe to or receive any data published by the server.
My code is fairly simple:
class TestAppWS(ApplicationSession):
    """
    An application component that publishes an event every second.
    """
    def onConnect(self):
        self.join(u"realm1")

    @asyncio.coroutine
    def onJoin(self, details):
        counter = 0
        while True:
            self.publish('com.myapp.topic1', counter)
            counter += 1
            yield from asyncio.sleep(1)

def start_ws():
    print("Running")
    session_factory = ApplicationSessionFactory()
    session_factory.session = TestAppWS
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    # factory = WebSocketServerFactory("ws://localhost:8090", debug=False)
    # factory.protocol = MyServerProtocol
    server = None
    try:
        transport_factory = WampWebSocketServerFactory(session_factory, debug_wamp=True)
        loop = asyncio.get_event_loop()
        coro = loop.create_server(transport_factory, 'localhost', 8090)
        server = loop.run_until_complete(coro)
        loop.run_forever()
    except OSError:
        print("WS server already running")
    except KeyboardInterrupt:
        pass
    finally:
        if server:
            server.close()
        loop.close()
start_ws() is run inside a separate Thread object. If I access localhost:8090
on my browser I can see the Autobahn welcome message.
On the frontend I have
var connection = new autobahn.Connection({
url: 'ws://localhost:8090/',
realm: 'realm1'}
);
connection.onopen = function (session) {
var received = 0;
function onevent1(args) {
console.log("Got event:", args[0]);
received += 1;
if (received > 5) {
console.log("Closing ..");
connection.close();
}
}
session.subscribe('com.myapp.topic1', onevent1);
};
connection.open();
It does not seem to work; when I try to connect the frontend I get the
following error on the backend side:
Failing WAMP-over-WebSocket transport: code = 1002, reason = 'WAMP Protocol Error (Received <class 'autobahn.wamp.message.Hello'> message, and session is not yet established)'
WAMP-over-WebSocket transport lost: wasClean = False, code = 1006, reason = 'connection was closed uncleanly (I failed the WebSocket connection by dropping the TCP connection)'
TX WAMP HELLO Message (realm = realm1, roles = [<autobahn.wamp.role.RolePublisherFeatures object at 0x04710270>, <autobahn.wamp.role.RoleSubscriberFeatures object at 0x047102B0>, <autobahn.wamp.role.RoleCallerFeatures object at 0x047102D0>, <autobahn.wamp.role.RoleCalleeFeatures object at 0x047102F0>], authmethods = None, authid = None)
RX WAMP HELLO Message (realm = realm1, roles = [<autobahn.wamp.role.RoleSubscriberFeatures object at 0x04710350>, <autobahn.wamp.role.RoleCallerFeatures object at 0x04710330>, <autobahn.wamp.role.RoleCalleeFeatures object at 0x04710390>, <autobahn.wamp.role.RolePublisherFeatures object at 0x04710370>], authmethods = None, authid = None)
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\autobahn\wamp\websocket.py", line 91, in onMessage
self._session.onMessage(msg)
File "C:\Python34\lib\site-packages\autobahn\wamp\protocol.py", line 429, in onMessage
raise ProtocolError("Received {0} message, and session is not yet established".format(msg.__class__))
autobahn.wamp.exception.ProtocolError: Received <class 'autobahn.wamp.message.Hello'> message, and session is not yet established
on the javascript console I see:
Uncaught InvalidAccessError: Failed to execute 'close' on 'WebSocket': The code must be either 1000, or between 3000 and 4999. 1002 is neither.
Any idea? It looks like the session is not started; honestly, it is not clear
how this session works. Shouldn't the session be initialized once a connection
from the client is made?
Answer: Your `TestAppWS` and your browser code are both _WAMP application components_.
Both of these need to connect to a WAMP **router**. Then they can talk freely
to each other (as if there were no router in between .. transparently).
Here is how to run.
**Run a WAMP Router.**
Using [Crossbar.io](http://crossbar.io/) (but you can use [other WAMP
routers](http://wamp.ws/implementations/#routers) as well), that's trivial.
First install Crossbar.io:
pip install crossbar
> Crossbar.io (currently) runs on Python 2, but that's irrelevant as your app
> components can run on Python 3 or any other WAMP supported language/run-
> time. Think of Crossbar.io like a black-box, an external infrastructure,
> like a database system.
Then create and start a Crossbar.io default router:
cd $HOME
mkdir mynode
cd mynode
crossbar init
crossbar start
**Run your Python 3 / asyncio component**
import asyncio
from autobahn.asyncio.wamp import ApplicationSession

class MyComponent(ApplicationSession):

    @asyncio.coroutine
    def onJoin(self, details):
        print("session ready")
        counter = 0
        while True:
            self.publish('com.myapp.topic1', counter)
            counter += 1
            yield from asyncio.sleep(1)

if __name__ == '__main__':
    from autobahn.asyncio.wamp import ApplicationRunner
    runner = ApplicationRunner(url = "ws://localhost:8080/ws", realm = "realm1")
    runner.run(MyComponent)
**Run your browser component**
var connection = new autobahn.Connection({
url: 'ws://localhost:8080/ws',
realm: 'realm1'}
);
connection.onopen = function (session) {
var received = 0;
function onevent1(args) {
console.log("Got event:", args[0]);
received += 1;
if (received > 5) {
console.log("Closing ..");
connection.close();
}
}
session.subscribe('com.myapp.topic1', onevent1);
};
connection.open();
|
Read data with NAs into python and calculate mean row-wise
Question: I am reading in data from a CSV file and attempting to calculate the mean
column-wise. While the number of columns is fixed, the number of rows isn't.
Therefore I first read in the rows I need, make them a list, and then form a
numpy array from the list. But it doesn't work.
import csv
import numpy
Reading in (this loops through every file and finds matches, which will then be
appended):
with open(input_file, mode='r') as f:
    reader = csv.reader(f, delimiter=';')
    for row in reader:
        pass
        # matching algorithm omitted
        found_line = row
        del found_line[0]  # remove the first entry (the name)
`input_file` looks like
Weihnachtsmann;16;30.3125;0.00677830307346;0.000491988890358;0.2796728754;0.00371057513915;0.000667111407605;0.00177896375361
Tannenbaum;6;33.5;0.032918005099;0.00312809941211;0.308224811515;0.0124857679873;0.00644874360685;0.000667111407605
Heilier Klaus;1;NA;NA;NA;NA;NA;NA;NA
Then, I make a list out of the entries that match:
author_list.append(','.join(found_line))
author_array = numpy.array(author_list)
I am not creating the numpy array in the first place because I heard it's
unpythonic and slow to append to numpy arrays.
print author_array
yields
['1,NA,NA,NA,NA,NA,NA' '6;33.5;0.032918005099;0.00312809941211;0.308224811515;0.0124857679873;0.00644874360685;0.000667111407605' '16;30.3125;0.00677830307346;0.000491988890358;0.2796728754;0.00371057513915;0.000667111407605;0.00177896375361']
but I am not even sure if that's an array with the dimensions I want (should
be exactly eight columns) or just one column and three rows.
Afterwards, I have to convert the `NA`s that come from `R` into numpy's `NaN`
(if I am correct) and I don't know how to do that. I tried
[author_entry.replace('NA','nan') for author_entry in author_list]
but I get an error.
Answer: There are a number of different ways you could read in the data from the file
using NumPy. Here's one way using
[`np.genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html).
The names in the first column become NumPy `nan` values, as do any other non-
float strings in your file:
>>> arr = np.genfromtxt(input_file, delimiter=';', dtype=np.float64)
>>> arr
array([[ nan, 1.60000000e+01, 3.03125000e+01,
6.77830307e-03, 4.91988890e-04, 2.79672875e-01,
3.71057514e-03, 6.67111408e-04, 1.77896375e-03],
[ nan, 6.00000000e+00, 3.35000000e+01,
3.29180051e-02, 3.12809941e-03, 3.08224812e-01,
1.24857680e-02, 6.44874361e-03, 6.67111408e-04],
[ nan, 1.00000000e+00, nan,
nan, nan, nan,
nan, nan, nan]])
This is an array with 3 rows and 9 columns. To remove the first entry on each
line, you could just slice and reassign with `arr = arr[:, 1:]`.
You can calculate the row-wise mean using `np.nanmean` (to ignore the `nan`
values when calculating the mean):
>>> np.nanmean(arr, axis=1)
array([ 5.82569998, 4.98298407, 1. ])
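If you'd rather map the literal `'NA'` tokens explicitly instead of relying on the generic conversion failure, `np.genfromtxt` also accepts `missing_values` and `filling_values` arguments (a sketch):

arr = np.genfromtxt(input_file, delimiter=';', dtype=np.float64,
                    missing_values='NA', filling_values=np.nan)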
|
Same Python code returns different results for same input string
Question: Below code is supposed to return the most common letter in the TEXT string in
the format:
* always lowercase
* ignoring punctuation and spaces
* in the case of words such as "One" - where there is no 2 letters the same - return the first letter in the alphabet
Each time I run the code using the same string, e.g. "One" the result cycles
through the letters...weirdly though, only from the third try (in this "One"
example).
text = input('Insert String: ')

def mwl(text):
    from string import punctuation
    from collections import Counter
    for l in punctuation:
        if l in text:
            text = text.replace(l, '')
    text = text.lower()
    text = ''.join(text.split())
    text = sorted(text)
    collist = Counter(text).most_common(1)
    print(collist[0][0])

mwl(text)
Answer: `Counter` uses a dictionary:
>>> Counter('one')
Counter({'e': 1, 'o': 1, 'n': 1})
Dictionaries are not ordered, hence the behavior.
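To make the tie-break deterministic (the alphabetically first letter among those with the maximal count), pick the key explicitly instead of relying on `most_common` (a minimal sketch):

from collections import Counter

counts = Counter('one')
# sort key: highest count first, then alphabetical order
best = min(counts, key=lambda c: (-counts[c], c))
print(best)  # 'e'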
|
How to fix an encoding migrating Python subprocess to unicode_literals?
Question: We're preparing to move to Python 3.4 and added unicode_literals. Our code
relies extensively on piping to/from external utilities using subprocess
module. The following code snippet works fine on Python 2.7 to pipe UTF-8
strings to a sub-process:
kw = {}
kw[u'stdin'] = subprocess.PIPE
kw[u'stdout'] = subprocess.PIPE
kw[u'stderr'] = subprocess.PIPE
kw[u'executable'] = u'/path/to/binary/utility'
args = [u'', u'-l', u'nl']
line = u'¡Basta Ya!'
popen = subprocess.Popen(args,**kw)
popen.stdin.write('%s\n' % line.encode(u'utf-8'))
...blah blah...
The following changes throw this error:
from __future__ import unicode_literals
kw = {}
kw[u'stdin'] = subprocess.PIPE
kw[u'stdout'] = subprocess.PIPE
kw[u'stderr'] = subprocess.PIPE
kw[u'executable'] = u'/path/to/binary/utility'
args = [u'', u'-l', u'nl']
line = u'¡Basta Ya!'
popen = subprocess.Popen(args,**kw)
popen.stdin.write('%s\n' % line.encode(u'utf-8'))
Traceback (most recent call last):
File "test.py", line 138, in <module>
exitcode = main()
File "test.py", line 57, in main
popen.stdin.write('%s\n' % line.encode('utf-8'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
Any suggestions to pass UTF-8 through the pipe?
Answer: `'%s\n'` is a unicode string when you use `unicode_literals`:
>>> line = u'¡Basta Ya!'
>>> '%s\n' % line.encode(u'utf-8')
'\xc2\xa1Basta Ya!\n'
>>> u'%s\n' % line.encode(u'utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
What happens is that your encoded `line` value is being _decoded_ to
interpolate into the unicode `'%s\n'` string.
You'll have to use a _byte string_ instead; prefix the string with `b`:
>>> from __future__ import unicode_literals
>>> line = u'¡Basta Ya!'
>>> b'%s\n' % line.encode(u'utf-8')
'\xc2\xa1Basta Ya!\n'
or encode _after_ interpolation:
>>> line = u'¡Basta Ya!'
>>> ('%s\n' % line).encode(u'utf-8')
'\xc2\xa1Basta Ya!\n'
In Python 3, you'll have to write bytestrings to pipes anyway.
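For a single payload, `Popen.communicate` handles the write and read for you and avoids pipe deadlocks; encode after interpolation as above (a sketch):

out, err = popen.communicate(input=('%s\n' % line).encode(u'utf-8'))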
|
Python socket works over LAN but not over Wifi
Question: I have a simple UDP server implemented in python:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("",10005))
while True:
    data = sock.recv(1024)
I run this code on computer A. I send UDP commands from computer B in these
two situations:
1. Both A and B are connected to a router in a local network via LAN cable.
2. Both A and B are connected to router over Wifi.
The UDP packets are received in situation 1 (LAN cable) but not in situation
2 (over Wifi). **In both cases Wireshark shows the received packet on
computer A.** Any thoughts?
OS: Windows
**The client program:**
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(char,("192.168.1.107",10005))
sock.close()
I have come close to finding the solution. Windows is dropping the UDP
packets. I checked with `netstat -s -p UDP` command. Whenever the sending
computer sends the UDP packets, the Receive Errors increase. Now I just have
to figure out why the packets are being received erroneously.
**Edit** I have tested it on other computers. It works. I have switched off the
firewall on the computer where it doesn't work, but still cannot figure out
what is filtering out the UDP packets.
Answer: Check the trust setting on the Wifi network for the server machine. According
to [this article](http://technet.microsoft.com/en-us/library/cc731634.aspx)
from Microsoft:
> For example, a program that accepts inbound connections from the Internet
> (like a file sharing program) may not work in the Public profile because the
> Windows Firewall default setting will block all inbound connections to
> programs that are not on the list of allowed programs.
I believe by default Wifi networks are put in the Public profile, so it sounds
like what's happening here. Since you know the packet is getting there OK
(form wireshark), the most likely explanation is that the firewall refuses to
deliver it for you.
The alternative would be to add python to the [allowed programs
list](http://windows.microsoft.com/en-gb/windows/communicate-through-windows-
firewall#1TC=windows-7) if you are perhaps not wholly trusting of the network.
|
splinter - strange ElementNotVisible exception for link that is indeed visible in dumped html/screenshot
Question: I am running some browser tests with splinter and, at one point, come across a
page with a link I want to follow. This call succeeds and returns the link:
my_browser.find_link_by_partial_href('/mystuff/' + str(important_number))
But I cannot click it:
my_browser.find_link_by_partial_href('/mystuff/' + str(important_number)).click()
...
...
...
ElementNotVisibleException: Message: u'{"errorMessage":"Element is not currently visible and may not be manipulated","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:38495","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"7812e810-9100-11e4-881c-37067349397d\\", \\"id\\": \\":wdc:1420039695427\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/7812e810-9100-11e4-881c-37067349397d/element/%3Awdc%3A1420039695427/click"}}' ; Screenshot: available via screen
What's odd here is that the link is indeed present when I follow
`my_browser.url`, as well as if I look at `my_browser.html` or
try `browser.show_screenshot(my_browser)`.
And it doesn't seem to be an issue of waiting for visibility. Adding a quick
`import time; time.sleep(5)` before the click still doesn't work (nor do
longer waits, though that's probably more than sufficient).
What could I be missing here?
Answer: Ah. Splinter is defaulting to the first link it finds, which isn't visible:
(Pdb) [link.visible for link in my_browser.find_link_by_partial_href('/mystuff/' + str(important_number))]
[False, True]
This extra hidden link isn't supposed to be there in the first place, which
goes to show you what can happen if you make assumptions about your code -
even the seemingly irrelevant parts!
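Since splinter returns an element list here, one workaround is to filter for the visible match before clicking (a sketch; it assumes at least one visible link exists):

links = my_browser.find_link_by_partial_href('/mystuff/' + str(important_number))
visible_links = [link for link in links if link.visible]
if visible_links:
    visible_links[0].click()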
|
Python subprocess: capture output of ffmpeg and run regular expression against it
Question: I have the following code
import subprocess
import re
from itertools import *
command = ['ffprobe', '-i', '/media/some_file.mp4']
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
text = p.stderr.read()
retcode = p.wait()
text = text.decode('utf-8')
p = re.compile("Duration(.*)")
num = 0 #for debugging
for line in iter(text.splitlines()):
    print(str(num) + line)  # for debugging
    m = p.match(str(line))
    if m != None:
        print(m.group(1))
When I look at the output there is a line that says "Duration" on it, however
it is not captured, print(m.group(1)) is never reached. If I change the text
variable to a hardcoded string of "Duration blahblah" I get " blahblah", which
is what I expect. It seems like the regex doesn't recognize the text coming
back from stderr. How can I get the text into a format that the regex will
recognize and match on?
* * *
I have come up with the following solution, should it help anyone else
attempting to capture duration from ffmpeg using python
import subprocess
import re
command = ['ffprobe', '-i', '/media/some_file.mp4']
p = subprocess.Popen(command, stderr=subprocess.PIPE)
text = p.stderr.read()
retcode = p.wait()
text = text.decode('utf-8')
p = re.compile(".*Duration:\s([0-9:\.]*),", re.MULTILINE|re.DOTALL)
m = p.match(text)
print(m.group(1))
Answer:
p = re.compile(r".*?Duration(.*)")
Try this. `match` anchors at the beginning of the string, and there may be
something before `Duration`, so the pattern needs to consume that prefix first.
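Alternatively, `re.search` scans the whole string instead of anchoring at the start, so the original pattern works unchanged (a sketch):

m = re.search(r"Duration(.*)", line)
if m is not None:
    print(m.group(1))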
|
Search and Find through a list Python
Question: I have a main text file that looks like this:
STATUS| CRN| SUBJECT| SECT| COURSE| CREDIT| INSTR.| BLDG/RM| DAY/TIME| FROM / TO|
OPEN| 43565| ACA6202| 10| Acting II| 3.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43566| ACA6206| 10| Topics:Classical Drama/Cult II| 2.00| Jacobson, L| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43567| ACA6210| 10| Text II| 2.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43568| ACA6212| 10| Voice and Speech II| 3.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43569| ACA6216| 10| Movement II| 2.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43570| ACA6220| 10| Alexander Technique II| 2.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43571| ACA6224| 10| Stage Combat II| 2.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 43572| ACA6228| 10| Practicum IV| 3.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
OPEN| 44500| ACA6595| 10| Selected Topics| 1.00| Logan, G| SEE DEPT| | 01/12/15 - 04/27/15|
My code below gathers only the "SUBJECT" column and strips the numbers from
the string. So for example, the output from the top of the file would print
several "ACA"s.
with open ("/Users/it/Desktop/Classbook/classAbrevs.txt", "r") as myfile:
subsAndAbrevsMap = tuple(open("/Users/it/Desktop/Classbook/classAbrevs.txt", 'r'))
with open ("/Users/it/Desktop/Classbook/masterClassList.txt", "r") as myfile:
masterSchedule = tuple(open("/Users/it/Desktop/Classbook/masterClassList.txt", 'r'))
for masterline in masterSchedule:
masterline.strip()
masterSplitLine = masterline.split("|")
if masterSplitLine[0] != "STATUS":
subjectAbrev = ''.join([i for i in masterSplitLine[2] if not i.isdigit()])
I have another .txt file that looks like this:
Academy for Classical Acting,ACA
Accountancy,ACCY
Africana Studies,AFST
American Studies,AMST
Anatomy & Regenerative Biology,ANAT
Anthropology,ANTH
Applied Science,APSC
Arabic,ARAB
Art/Art History,AH
Art/Fine Arts,FA
Astronomy,ASTR
Biochemistry,BIOC
Biological Sciences,BISC
In my code below, I check to see if the abbreviations (column 2) in my second
.txt file equal the abbreviations generated from my first .txt document. If it
is a match, I would like to append the full class name:
#open 2nd .txt, strip and split
for subsline in subsAndAbrevsMap:
    subsline.strip()
    subLineSplit = subsline.split(",")
    print "subLineSplit is: " + subsline[0]
    if subLineSplit[1] == subjectAbrev:
        realSubjectName = subLineSplit[0]
        print "The subject name for abrev " + subjectAbrev + " is " + realSubjectName
I want the output to print:
"The subject name for abrev ACA is Academy for Classical Acting"
What am I doing wrong?
Answer: First of all, these are csv files, so use your `csv` module!
# path to first file is ~/classes.csv
# path to second file is ~/abbr.csv
import csv
import os

# note: open() does not expand "~", so expand the paths explicitly
with open(os.path.expanduser("~/classes.csv"), 'rU') as classes_csv,\
     open(os.path.expanduser("~/abbr.csv"), 'rU') as abbr_csv:
    classes = csv.reader(classes_csv, delimiter='|')
    abbr = csv.reader(abbr_csv, delimiter=',')
    header = next(classes)
    abbr_dict = {line[1].strip(): line[0].strip() for line in abbr}
    # create a lookup dictionary for your tags -> names
    class_tags = (line[2].strip("0123456789 ") for line in classes)
    # create a genexp for all the extant tags in ~/classes.csv
    result = {tag: abbr_dict[tag] for tag in class_tags if tag in abbr_dict}
Then it should be easy to print the requested output:

for abbr, cls in result.items():
    print("The subject name for abrev {} is {}".format(abbr, cls))
|
Add buttons to ActionBar app on KIVY. Python
Question: This is code I copied from the Kivy examples directory that comes with the
software. I am trying to modify it and add other widgets.
.KV File
#:kivy 1.0
<ActionBar>:
height: '48dp'
size_hint_y: None
spacing: '4dp'
canvas:
Color:
rgba: self.background_color
BorderImage:
border: root.border
pos: self.pos
size: self.size
source: self.background_image
<ActionView>:
orientation: 'horizontal'
canvas:
Color:
rgba: self.background_color
BorderImage:
pos: self.pos
size: self.size
source: self.background_image
<ActionSeparator>:
size_hint_x: None
minimum_width: '2sp'
width: self.minimum_width
canvas:
Rectangle:
pos: self.x, self.y + sp(4)
size: self.width, self.height - sp(8)
source: self.background_image
<ActionButton,ActionToggleButton>:
background_normal: 'atlas://data/images/defaulttheme/' + ('action_bar' if self.inside_group else 'action_item')
background_down: 'atlas://data/images/defaulttheme/action_item_down'
size_hint_x: None if not root.inside_group else 1
width: [dp(48) if (root.icon and not root.inside_group) else max(dp(48), (self.texture_size[0] + dp(32))), self.size_hint_x][0]
color: self.color[:3] + [0 if (root.icon and not root.inside_group) else 1]
Image:
opacity: 1 if (root.icon and not root.inside_group) else 0
source: root.icon
mipmap: root.mipmap
pos: root.x + dp(4), root.y + dp(4)
size: root.width - dp(8), root.height - sp(8)
<ActionGroup>:
size_hint_x: None
width: self.texture_size[0] + dp(32)
<ActionCheck>:
background_normal: 'atlas://data/images/defaulttheme/action_bar' if self.inside_group else 'atlas://data/images/defaulttheme/action_item'
<ActionPrevious>:
size_hint_x: 1
minimum_width: '100sp'
important: True
BoxLayout:
orientation: 'horizontal'
pos: root.pos
size: root.size
Image:
source: root.previous_image
opacity: 1 if root.with_previous else 0
allow_stretch: True
size_hint_x: None
width: self.texture_size[0] if root.with_previous else dp(8)
mipmap: root.mipmap
Image:
source: root.app_icon
allow_stretch: True
size_hint_x: None
width: min(self.height, self.texture_size[0]) if self.texture else self.height
mipmap: root.mipmap
Widget:
size_hint_x: None
width: '5sp'
Label:
text: root.title
text_size: self.size
color: root.color
shorten: True
halign: 'left'
valign: 'middle'
<ActionGroup>:
background_normal: 'atlas://data/images/defaulttheme/action_group'
background_down: 'atlas://data/images/defaulttheme/action_group_down'
background_disabled_normal: 'atlas://data/images/defaulttheme/action_group_disabled'
border: 30, 30, 3, 3
ActionSeparator:
pos: root.pos
size: root.separator_width, root.height
opacity: 1 if root.use_separator else 0
background_image: root.separator_image if root.use_separator else 'action_view'
<ActionOverflow>:
border: 3, 3, 3, 3
background_normal: 'atlas://data/images/defaulttheme/action_item'
background_down: 'atlas://data/images/defaulttheme/action_item_down'
background_disabled_normal: 'atlas://data/images/defaulttheme/button_disabled'
size_hint_x: None
minimum_width: '48sp'
width: self.texture_size[0] if self.texture else self.minimum_width
canvas.after:
Color:
rgb: 1, 1, 1
Rectangle:
pos: root.center_x - sp(16), root.center_y - sp(16)
size: sp(32), sp(32)
source: root.overflow_image
<ActionDropDown>:
auto_width: False
<ContextualActionView>:
.PY File
from kivy.base import runTouchApp
from kivy.lang import Builder
runTouchApp(Builder.load_string('''
ActionBar:
pos_hint: {'top':1}
ActionView:
use_separator: True
ActionPrevious:
title: 'Action Bar'
with_previous: False
ActionOverflow:
ActionButton:
text: 'Btn0'
icon: 'atlas://data/images/defaulttheme/audio-volume-high'
ActionButton:
text: 'Btn1'
ActionButton:
text: 'Btn2'
ActionButton:
text: 'Btn3'
ActionButton:
text: 'Btn4'
ActionGroup:
text: 'Group1'
ActionButton:
text: 'Btn5'
ActionButton:
text: 'Btn6'
ActionButton:
text: 'Btn7'
'''))
I was trying to add a scroll view feature to this app, but I keep getting
error messages. Can someone help me add a button as an example to help me
complete this code?
Answer: Your problem is that you don't understand how it works.
The .kv file is only loaded automatically if you create a subclass of
kivy.app.App whose name ends with "App". Kivy then loads the .kv file named
after that class, lowercased, without the "App" suffix (so `ActionBarTestApp`
loads `actionbartest.kv`). You can simply avoid your confusion by moving
everything in Builder.load_string to that .kv file and creating a subclass of App.
Now you can put your ActionBar and your new Button in a horizontal BoxLayout
like this:
actionbartest.kv:
BoxLayout:
    orientation: "horizontal"
    ActionBar:
        pos_hint: {'top':1}
        ActionView:
            use_separator: True
            ...
    Button:
        # new Button
        text: "Hello World"
main.py
import kivy
from kivy.app import App

class ActionBarTestApp(App):
    def build(self):
        # self.root is already defined, because
        # you set a root object in the .kv file
        return self.root

app = ActionBarTestApp()
app.run()
|
openerp change fields in a record by wizard
Question: I have installed the OpenEducat module. I was trying to update a timetable
record's status to 'postponed' (a state I created), along with its start and
end dates, via a wizard view. Here is my Python code in the wizard
(postponed_op_timetable.py):
from osv import osv, fields

class op_timetable_postponed(osv.osv_memory):
    _name = 'op.timetable.postponed'
    _inherit = 'op.timetable'
    _columns = {
    }

    def action_postponed_timetable(self, cr, uid, vals, context=None):
        res = {}
        timetable_id = super(op_timetable, self).create(cr, uid, vals, context=context)
        for this_obj in self.browse(cr, uid, timetable_id[0], context=context):
            self.write(cr, uid, timetable_id, {
                'start_datetime': this_obj.start_datetime,
                'end_datetime': this_obj.end_datetime,
                'state': 'postponed'
            }, context=context)
        return res
And here is my xml (postponed_op_timetable_view.xml)
<?xml version="1.0" encoding="UTF-8"?>
<openerp>
<data>
<record id="view_op_timetable_postponed" model="ir.ui.view">
<field name="name">op.timetable.postponed.form</field>
<field name="model">op.timetable.postponed</field>
<field name="arch" type="xml">
<form string="Postponed Timetable" col="4" version="7.0">
<group colspan="2">
<field name="start_datetime" colspan="2"/>
<field name="end_datetime" colspan="2"/>
</group>
<footer>
<button type="special"
special="cancel"
string="Cancel"
icon="gtk-cancel"/>
<button type="object"
name="action_postponed_timetable"
string="Postponed"
icon="gtk-ok"/>
</footer>
</form>
</field>
</record>
<record model="ir.actions.act_window" id="action_op_timetable_postponed">
<field name="name">Postponed Timetable</field>
<field name="type">ir.actions.act_window</field>
<field name="src_model">op.timetable</field>
<field name="res_model">op.timetable.postponed</field>
<field name="view_type">form</field>
<field name="view_mode">form</field>
<field name="view_id" ref="view_op_timetable_postponed"/>
<field name="context">{'default_timetable_id': active_id}</field>
<field name="target">new</field>
</record>
</data>
</openerp>
and this is the normal timetable form view with my status bar.
<record id="view_op_timetable_form" model="ir.ui.view">
<field name="name">op.timetable.form</field>
<field name="model">op.timetable</field>
<field name="priority" eval="8" />
<field name="arch" type="xml">
<form string="Time Table" version="7.0">
<header>
<button name="action_complete" string="Complete" type="workflow" icon="gtk-apply" states="planned,postponed"/>
<button name="%(action_op_timetable_postponed)d" string="Postponed" type="action"
icon="gtk-jump-to" states="planned" context="{'timetable_id': active_id}"/>
<button name="action_cancel" string="Cancel" type="workflow" icon="gtk-cancel" states="planned,postponed"/>
<field name="state" widget="statusbar" readonly="True" statusbar_colors='{"r":"red"}'
statusbar_visible="planned,postponed,completed,cancelled"/>
</header>
<sheet>
<separator colspan="4" string="Time Table" />
<group colspan="4" col="4">
<field name="faculty_id" />
<field name="standard_id" />
<field name="division_id" />
<field name="period_id" />
<field name="subject_id" />
<field name="classroom_id" />
<field name="start_datetime" />
<field name="end_datetime" />
<field name="type"/>
</group>
</sheet>
</form>
</field>
</record>
and this is the error I have got.
Client Traceback (most recent call last):
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\http.py", line 204, in dispatch
response["result"] = method(self, **self.params)
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\controllers\main.py", line 1132, in call_button
action = self._call_kw(req, model, method, args, {})
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\controllers\main.py", line 1120, in _call_kw
return getattr(req.session.model(model), method)(*args, **kwargs)
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\session.py", line 42, in proxy
result = self.proxy.execute_kw(self.session._db, self.session._uid, self.session._password, self.model, method, args, kw)
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\session.py", line 30, in proxy_method
result = self.session.send(self.service_name, method, *args)
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\session.py", line 103, in send
raise xmlrpclib.Fault(openerp.tools.ustr(e), formatted_info)
Server Traceback (most recent call last):
File "E:\Development\MySchool_latest\Source\trunk\openerp.myschool\server\openerp\addons\web\session.py", line 89, in send
return openerp.netsvc.dispatch_rpc(service_name, method, args)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\netsvc.py", line 292, in dispatch_rpc
result = ExportService.getService(service_name).dispatch(method, params)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\service\web_services.py", line 626, in dispatch
res = fn(db, uid, *params)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\osv\osv.py", line 190, in execute_kw
return self.execute(db, uid, obj, method, *args, **kw or {})
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\osv\osv.py", line 132, in wrapper
return f(self, dbname, *args, **kwargs)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\osv\osv.py", line 199, in execute
res = self.execute_cr(cr, uid, obj, method, *args, **kw)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\server\openerp\osv\osv.py", line 187, in execute_cr
return getattr(object, method)(cr, uid, *args, **kw)
File "E:\Development\MySchool_New\Source\trunk\openerp.myschool\src\myschool\wizard\postponed_op_timetable .py", line 13, in action_postponed_timetable
timetable_id = super(op_timetable, self).create(cr, uid, vals, context=context)
NameError: global name 'op_timetable' is not defined
Answer: As the stack trace says, your error is caused by this line:
timetable_id = super(op_timetable, self).create(cr, uid, vals, context=context)
The problem is that your class is called 'op_timetable_postponed', not
'op_timetable'. If you change that line to the below then that should sort you
out :)
timetable_id = super(op_timetable_postponed, self).create(cr, uid, vals, context=context)
NOTE: If you want to directly create records of a specific type, the best way
might be to do something like the below:
tt_obj = self.pool.get('op.timetable')
timetable_id = tt_obj.create(cr, uid, vals, context)
|
binarize a sparse matrix in python in a different way
Question: Assume I have a matrix like:
4 0 3 5
0 2 6 0
7 0 1 0
I want it binarized as:
0 0 0 0
0 1 0 0
0 0 1 0
That is, with the threshold set to 2, any element greater than the threshold is
set to 0, and any element less than or equal to the threshold (except 0) is set to 1.
Can we do this on python's csr_matrix or any other sparse matrix?
I know scikit-learn offer Binarizer to replace values below or equal to the
threshold by 0, above it by 1.
Answer: When dealing with a sparse matrix, `s`, avoid inequalities that include zero
since a sparse matrix (if you're using it appropriately) should have a great
many zeros and forming an array of all the locations which are zero would be
huge. So avoid `s <= 2` for example. Use inequalities that select away from
zero instead.
import numpy as np
from scipy import sparse
s = sparse.csr_matrix(np.array([[4, 0, 3, 5],
[0, 2, 6, 0],
[7, 0, 1, 0]]))
print(s)
# <3x4 sparse matrix of type '<type 'numpy.int64'>'
# with 7 stored elements in Compressed Sparse Row format>
s[s > 2] = 0
s[s != 0] = 1
print(s.todense())
yields
matrix([[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0]])
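One caveat: assigning zeros into a `csr_matrix` leaves those entries explicitly stored (the result above still holds 7 stored elements), so call `eliminate_zeros()` afterwards to restore real sparsity:

s.eliminate_zeros()  # drop the explicitly-stored zeros left behind by s[s > 2] = 0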
|
Odd Behaviour of Loop in Python
Question: I am writing a script to report statistics from a text file in Markdown. The
file contains book titles and dates. Each date belongs to the titles that
follow, until a new date appears. Here is a sample:
#### 8/23/05
Defining the World (Hitchings)
#### 8/26/05
Lost Japan
#### 9/5/05
The Kite Runner
*The Dark Valley (Brendon)*
#### 9/9/05
Active Liberty
I iterate over lines in the file with a `for` loop and examine each line to
see if it's a date. If it's a date, I set a variable `this_date`. If it's a
title, I make it into a dict with the current value of `this_date`.
There are two exceptions: the file starts with titles, not a date, so I set an
initial value for `this_date` before the for loop. And halfway through the
file there is a region where dates were lost, and I set a specific date for
those titles.
But in the resulting list of dicts, all the titles are given that date until
the lost-data region starts. After that point, the rest of the titles are
given the date that appears last in the file. What is most confusing: when I
print the contents of `this_date` right before appending the new dict, it
contains the correct value on every loop.
I expect `this_date` to be visible at all levels of the loop. I know I need to
break this up into functions, and passing results explicitly between functions
will probably fix the issue, but I'd like to know why this approach didn't
work. Thank you very much.
result = []

# regex patterns
ddp = re.compile('\d+')      # extract digits
mp = re.compile('^#+\s*\d+') # captures hashes and spaces
dp = re.compile('/\d+/')     # captures slashes
yp = re.compile('\d+$')
sp = re.compile('^\*')

# initialize
this_date = {
    'month': 4,
    'day': 30,
    'year': 2005
}
# print('this_date initialized')

for line in text:
    if line == '':
        pass
    else:
        if '#' in line:  # markdown header format - line is a new date
            if 'Reconstructing lost data' in line:  # handle exception
                # titles after this line are given 12/31/14 (the last date in the file) instead of 8/31/10
                # all prior dates are overwritten with 8/31/10
                # but the intent is that titles after this line appears have date 8/31/10, until the next date
                this_date = {
                    'month': 8,
                    'day': 31,
                    'year': 2010
                }
                # print('set this_date to handle exception')
            else:  # get the date from the header
                month = ddp.search( mp.search(line).group() )  # digits only
                day = ddp.search( dp.search(line).group() )    # digits only
                year = yp.search(line)
                if month and day and year:
                    # print('setting this_date within header parse')
                    this_date['month'] = int(month.group())
                    this_date['day'] = int(day.group())
                    this_date['year'] = ( int(year.group()) + 2000 )
                else:
                    pass
        else:  # line is a title
            x = {
                'date': this_date,
                'read': False
            }
            if sp.match(line):  # starts with asterisk - has been read
                x['read'] = True
                x['title'] = line[1:-3]  # trim trailing asterisk and spaces
            else:
                x['title'] = line
            # this_date is correct when printed here
            # print('this_date is ' + str(this_date['month']) + '/' + str(this_date['day']) + '/' + str(this_date['year']) )
            result.append(x)
            # x has correct date when printed here
            # print(x)

# print("Done; found %d titles.") % len(result)
# elements of result have wrong dates (either 8/31/10 or 12/31/14, no other values) when printed here
# print( result[0::20])
Answer: You create the `this_date` dictionary _just once_. You then _reuse_ that
dictionary each loop iteration. You are only adding _references_ to that
dictionary to your `result` list; it is just the _one_ dictionary referenced
over and over again.
Store a _new_ copy of the dictionary each loop iteration:
x = {
    'date': this_date.copy(),
    'read': False
}
Your code could do with some simplification; I'd use [`datetime.date()`
objects](https://docs.python.org/2/library/datetime.html#datetime.date) here
instead as they model dates properly. No regular expressions are required:
from datetime import datetime

current_date = None
results = []
for line in text:
    line = line.strip()
    if not line:
        continue
    if line.startswith('#'):
        current_date = datetime.strptime(line.strip('# '), '%m/%d/%y').date()
        continue
    entry = {'date': current_date, 'read': False}
    if line.startswith('*') and line.endswith('*'):
        # previously read
        line = line.strip('*')
        entry['read'] = True
    entry['title'] = line
    results.append(entry)
Because `datetime.date()` objects are immutable and we create a new `date`
object each time we encounter a header line, you can safely re-use the last-
read date.
Demo:
>>> from datetime import datetime
>>> from pprint import pprint
>>> text = '''\
... #### 8/23/05
... Defining the World (Hitchings)
... #### 8/26/05
... Lost Japan
... #### 9/5/05
... The Kite Runner
... *The Dark Valley (Brendon)*
... #### 9/9/05
... Active Liberty
... '''.splitlines(True)
>>> current_date = None
>>> results = []
>>> for line in text:
... line = line.strip()
... if not line:
... continue
... if line.startswith('#'):
... current_date = datetime.strptime(line.strip('# '), '%m/%d/%y').date()
... continue
... entry = {'date': current_date, 'read': False}
... if line.startswith('*') and line.endswith('*'):
... # previously read
... line = line.strip('*')
... entry['read'] = True
... entry['title'] = line
... results.append(entry)
...
>>> pprint(results)
[{'date': datetime.date(2005, 8, 23),
'read': False,
'title': 'Defining the World (Hitchings)'},
{'date': datetime.date(2005, 8, 26), 'read': False, 'title': 'Lost Japan'},
{'date': datetime.date(2005, 9, 5),
'read': False,
'title': 'The Kite Runner'},
{'date': datetime.date(2005, 9, 5),
'read': True,
'title': 'The Dark Valley (Brendon)'},
{'date': datetime.date(2005, 9, 9), 'read': False, 'title': 'Active Liberty'}]
|
Combine two lists in a pythonic way
Question: I have no clue how to search for this, however, I cannot find an obvious
solution for my pythonic problem. I would like to combine two lists (one is a
manipulated one of the other) and permute them by keeping the length of the
lists constant.
An example:
a = ['A','B','C','D']
b = ['a','b','c','d']
combined = [['a','B','C','D'], ['A','b','C','D'], ..., ['a','b','c','d']]
And then I can permute them using itertools. However, the first step is for me
not easy to manage. I don't want nested for-loops and Co.
Answer: Using [`zip`](https://docs.python.org/3/library/functions.html#zip),
[`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product)
and [list
comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-
comprehensions):
>>> import itertools
>>> a = ['A','B','C','D']
>>> b = ['a','b','c','d'] # [x.lower() for x in a]
>>> [list(x) for x in itertools.product(*zip(a, b))]
[['A', 'B', 'C', 'D'], ['A', 'B', 'C', 'd'], ['A', 'B', 'c', 'D'],
['A', 'B', 'c', 'd'], ['A', 'b', 'C', 'D'], ['A', 'b', 'C', 'd'],
['A', 'b', 'c', 'D'], ['A', 'b', 'c', 'd'], ['a', 'B', 'C', 'D'],
['a', 'B', 'C', 'd'], ['a', 'B', 'c', 'D'], ['a', 'B', 'c', 'd'],
['a', 'b', 'C', 'D'], ['a', 'b', 'C', 'd'], ['a', 'b', 'c', 'D'],
['a', 'b', 'c', 'd']]
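The intermediate `zip(a, b)` pairs up each position's two variants, and `product` then chooses one entry per position; inspecting it makes the mechanics clear:

>>> list(zip(a, b))
[('A', 'a'), ('B', 'b'), ('C', 'c'), ('D', 'd')]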
|
Scrapy ImportError: No module named project.settings when using subprocess.Popen
Question: I have scrapy crawler scraping thru sites. On some occasions scrapy kills
itself due to RAM issues. I rewrote the spider such that it can be split and
run for a site.
After the initial run, I use subprocess.Popen to submit the scrapy crawler
again with new start item.
But I am getting error
ImportError: No module named shop.settings

Traceback (most recent call last):
  File "/home/kumar/envs/ishop/bin/scrapy", line 4, in <module>
    execute()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/cmdline.py", line 109, in execute
    settings = get_project_settings()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/utils/project.py", line 60, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 109, in setmodule
    module = import_module(module)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named shop.settings
The subprocess cmd is
`newp = Popen(comm, stderr=filename, stdout=filename, cwd=fp, shell=True)`
* comm - `source /home/kumar/envs/ishop/bin/activate && cd /home/kumar/projects/usg/shop/spiders/../.. && /home/kumar/envs/ishop/bin/scrapy crawl -a category=laptop -a site=newsite -a start=2 -a numpages=10 -a split=1 'allsitespider'`
* cwd - **/home/kumar/projects/usg**
I checked sys.path and it is correct `['/home/kumar/envs/ishop/bin',
'/home/kumar/envs/ishop/lib64/python27.zip',
'/home/kumar/envs/ishop/lib64/python2.7',
'/home/kumar/envs/ishop/lib64/python2.7/plat-linux2',
'/home/kumar/envs/ishop/lib64/python2.7/lib-tk',
'/home/kumar/envs/ishop/lib64/python2.7/lib-old',
'/home/kumar/envs/ishop/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7',
'/usr/lib/python2.7', '/home/kumar/envs/ishop/lib/python2.7/site-packages']`
But looks like the import statement is using
`"/usr/lib64/python2.7/importlib/__init__.py"` instead of my virtual env.
Where am I wrong? Help please?
Answer: Looks like the settings module is not being loaded properly. One solution would be to
build an egg and deploy it in the env before starting the crawler.
Official docs, [Eggify scrapy
project](http://doc.scrapy.org/en/0.7/topics/scrapyd.html#deploying-your-
project)
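Another thing worth checking (an assumption about the project layout, not something the traceback proves): Scrapy resolves the settings module via scrapy.cfg in the working directory, so passing an explicit environment to Popen often fixes this kind of import failure:

import os

env = os.environ.copy()
env['SCRAPY_SETTINGS_MODULE'] = 'shop.settings'
env['PYTHONPATH'] = '/home/kumar/projects/usg'
newp = Popen(comm, stderr=filename, stdout=filename, cwd=fp, shell=True, env=env)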
|
How to test Django Form ChoiceField
Question: I've been stuck trying to test a Form `ChoiceField` in Django.
I have a `Form` with a single `ChoiceField`:
class PickPlanForm(forms.Form):
    "Set the `plan` session cookie for choice here."

    plan_choices = Plan.objects.get_choices()

    # Field
    plan = forms.ChoiceField(required=True, choices=plan_choices)
This is the tuple list of my `plan_choices`:
[('Bronze', 'Bronze ($10.00 per month)'),
('Silver', 'Silver ($20.00 per month)')]
I am trying to test it in the following way:
response = self.client.post(reverse('payment:register_step3'),
{'plan': 'Bronze'}, follow=True)
self.assertRedirects(response, reverse('payment:register_step4'))
However, when running my tests, I keep getting the error traceback:
Traceback (most recent call last):
File "/Users/aaron/Documents/djcode/textress_concierge/textress/main/tests/test_views.py", line 170, in test_register_step3
self.assertRedirects(response, reverse('payment:register_step4'))
File "/Users/aaron/Documents/virtualenvs/textress/lib/python3.4/site-packages/django/test/testcases.py", line 263, in assertRedirects
(response.status_code, status_code))
AssertionError: False is not True : Response didn't redirect as expected: Response code was 200 (expected 302)
I am using:
Django 1.6.8
Python 3.4
I'm thinking this is something easy that I am missing?
thank you
**Edit: add View**
from django.views.generic import FormView
from braces.views import LoginRequiredMixin

class PickPlanView(LoginRequiredMixin, FormView):
    """
    Step #3 of Registration

    Pick a Plan, and save the Plan as a `session cookie` before creating
    the Stripe Customer/Subscription using the Plan Choice.
    """
    template_name = 'main/hotel_form.html'
    form_class = PickPlanForm
    success_url = reverse_lazy('payment:register_step4')
    authenticated_redirect_url = settings.VERIFY_LOGOUT_URL

    def get_form_kwargs(self):
        """Add the `request` Obj to Form, so I can set the PlanPick as a
        session cookie for the time being."""
        kwargs = super().get_form_kwargs()
        kwargs['request'] = self.request
        return kwargs
**Notes on View:**
It just accepts the Form Choice.
It works when I test it manually.
I am just having trouble with the `Django ChoiceField Test` part because I
haven't tested a `ChoiceField` before.
Thank you
Answer: I could make the test fail in exactly the same way by posting an invalid
choice. Posting _Bronze_ passes the test, but I have the choices hardcoded in
the `form` class rather than fetched dynamically from the database.
That makes me wonder whether the plans exist in the unit test's database (or in
a mocked `queryset`), so that they are fetched correctly by
`Plan.objects.get_choices()`.
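A quick way to see why the redirect doesn't happen is to inspect the form errors on the re-rendered response (a sketch; it assumes the view re-renders the template with the bound form on validation failure):

response = self.client.post(reverse('payment:register_step3'),
                            {'plan': 'Bronze'})
if response.status_code == 200:  # no redirect: the form did not validate
    print(response.context['form'].errors)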
|
What is this issue of variable affectation in python?
Question: I have this problem in Python. I have a function which takes the following
inputs:
import numpy
from numpy import *

def GetInf(G, X, m, n):
    g = G[m - 1, :].T
    Y = X
    Y[m - 1, :] = 0
    Y[:, n - 1] = 0
    # Here I modify Y. The problem is that X is modified too. Why?
    # In fact, I added Y after I saw that X was changing, but X keeps changing.
    result = sum(Y * G)
    return result

G = array([[1., 2., 3.], [4., 5., 6.]])
X = array([[1., 0., 0.], [0., 0., 1.]])
I = GetInf(G, X, 1, 1)
My problem is that when I debug the program I see that after modifying `Y`,
`X` is also modified. I cannot understand why.
Answer: It is because you assign `X` to `Y`: that makes `Y` another reference to the
same array `X` refers to. If you don't want that, make a copy of `X`:

Y = X.copy()
|
Move a flask server to production with mod_wsgi
Question: I followed this
[tutorial](http://flask.pocoo.org/docs/0.10/deploying/mod_wsgi/). Here is my
server:
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
When running
python server.py
curl 127.0.0.1:5000
I get the expected result.
Now I add a test.wsgi:
import sys
sys.path.insert(0, '/var/www/test')
from server import app as application
and in /etc/apache2/sites-enabled/test.config:
<VirtualHost *:5000>
ServerName example.com
WSGIDaemonProcess test user=user1 group=user1 threads=5
WSGIScriptAlias / /var/www/test/test.wsgi
<Directory /var/www/test>
WSGIProcessGroup test
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
Now calling
sudo service apache2 restart
curl 127.0.0.1:5000
will return an error. What did I do wrong?
Answer: As Daniel Roseman pointed out, I needed to
1. Remove the ServerName directive
2. Add `Listen 5000` to my `/etc/apache2/apache.conf` to make the server listen on port 5000
|
Import function from other file in python
Question: In `otsu.py` I have:
def Hello(n):
print "Hello",n
print "abc"
exit()
In `another.py`
from otsu import Hello
Hello(5)
When I run `python another.py`, the output is `abc`, not `Hello, 5`.
What am I doing wrong?
Answer: Firstly, make sure you don't have any stale `.pyc` or `.pyo` files in the
directory. Or if you're using Python 3 then remove the `__pycache__` directory
just to be sure. This is likely the problem.
In `another.py`, running `from otsu import Hello` should print `abc`. Then
running `Hello(5)` will produce `Hello 5`. So your output will look like:
abc
Hello 5
I just ran this to confirm and it worked as expected.
|
improving speed on strings with math operators in python
Question: I am working with a set of strings that could contain math operators, either
'+' or '-' for now. Initially I used eval, which was pretty straightforward. I
am trying to move away from eval due to its known issues, but mostly I am trying
to improve the speed. Here is a typical set of strings that I would work on:
DAC = ('12','0x1E','34','12+20','2+0x1F')
Taking each value and applying eval gives the results I need. To move away from
this I tried a few methods; they work fine until I hit a math operator. I did
read about the AST module and am going to work with that, but I would like some
feedback on whether it's the right track in terms of improving speed, or any
other suggestions the community can give me.
Thanks
Answer: What is wrong with `literal_eval`? That's probably your best bet short of
writing or finding a lex parser, which is certainly overkill for this
situation.
>>> DAC = ('12','0x1E','34','12+20','2+0x1F')
>>> from ast import literal_eval
You could use a list comprehension
>>> [literal_eval(i) for i in DAC]
[12, 30, 34, 32, 33]
Or `map`
>>> list(map(literal_eval, DAC))
[12, 30, 34, 32, 33]
|
how to get a Python timing object to retain appropriate information within function and decorator scopes
Question: I'm writing a module for quick and easy timing in a Python program. The idea
is that little instances of clocks can be created throughout the code. These
clocks are available as objects that can be started, stopped, started again
and queried. Any clocks instantiated are added to a module list of all clocks.
At the conclusion of the program, a printout of all clocks (either a listing
of all clocks or the means of all similar clocks) can be requested of this
list.
I've got a lot of it working, but the timing of functions is still causing me
difficulty. Specifically, the times for the functions are measured as 0 whether
I use explicit clocks or a decorator, when the measured times for functions 1
and 2 should be ~3 seconds and ~4 seconds respectively.
I suspect that I am not retaining the clock attribute `_startTimeTmp` in an
appropriate way (it can be reset for the purposes of internal calculations).
I would really appreciate some guidance on getting the timers working
correctly. I've got myself a bit confused on how to solve it!
I'm aware that the code may look slightly long, but I've minimized it as much
as I know how to without obscuring the vision of what I'm trying to do overall
(so that any suggestions proposed don't remove critical functionality). I do
think it's reasonably clear how it works, at least.
# module (shijian.py):
from __future__ import division
import os
import time
import uuid as uuid
import datetime
import inspect
import functools
def _main():
global clocks
clocks = Clocks()
def time_UTC(
style = None
):
return(
style_datetime_object(
datetimeObject = datetime.datetime.utcnow(),
style = style
)
)
def style_datetime_object(
datetimeObject = None,
style = "YYYY-MM-DDTHHMMSS"
):
# filename safe
if style == "YYYY-MM-DDTHHMMSSZ":
return(datetimeObject.strftime('%Y-%m-%dT%H%M%SZ'))
# microseconds
elif style == "YYYY-MM-DDTHHMMSSMMMMMMZ":
return(datetimeObject.strftime('%Y-%m-%dT%H%M%S%fZ'))
# elegant
elif style == "YYYY-MM-DD HH:MM:SS UTC":
return(datetimeObject.strftime('%Y-%m-%d %H:%M:%SZ'))
# UNIX time in seconds with second fraction
elif style == "UNIX time S.SSSSSS":
return(
(datetimeObject -\
datetime.datetime.utcfromtimestamp(0)).total_seconds()
)
# UNIX time in seconds rounded
elif style == "UNIX time S":
return(
int((datetimeObject -\
datetime.datetime.utcfromtimestamp(0)).total_seconds())
)
# filename safe
else:
return(datetimeObject.strftime('%Y-%m-%dT%H%M%SZ'))
def UID():
return(str(uuid.uuid4()))
class Clock(object):
def __init__(
self,
name = None,
start = True
):
self._name = name
self._start = start # Boolean start clock on instantiation
self._startTime = None # internal (value to return)
self._startTimeTmp = None # internal (value for calculations)
self._stopTime = None # internal (value to return)
self._updateTime = None # internal
# If no name is specified, generate a unique one.
if self._name is None:
self._name = UID()
# If a global clock list is detected, add a clock instance to it.
if "clocks" in globals():
clocks.add(self)
self.reset()
if self._start:
self.start()
def start(self):
self._startTimeTmp = datetime.datetime.utcnow()
self._startTime = datetime.datetime.utcnow()
def stop(self):
self._updateTime = None
self._startTimeTmp = None
self._stopTime = datetime.datetime.utcnow()
# Update the clock accumulator.
def update(self):
if self._updateTime:
self.accumulator += (
datetime.datetime.utcnow() - self._updateTime
)
else:
self.accumulator += (
datetime.datetime.utcnow() - self._startTimeTmp
)
self._updateTime = datetime.datetime.utcnow()
def reset(self):
self.accumulator = datetime.timedelta(0)
self._startTimeTmp = None
# If the clock has a start time, add the difference between now and the
# start time to the accumulator and return the accumulation. If the clock
# does not have a start time, return the accumulation.
def elapsed(self):
if self._startTimeTmp:
self.update()
return(self.accumulator)
def name(self):
return(self._name)
def time(self):
return(self.elapsed().total_seconds())
def startTime(self):
if self._startTime:
return(style_datetime_object(datetimeObject = self._startTime))
else:
return("none")
def stopTime(self):
if self._stopTime:
return(style_datetime_object(datetimeObject = self._stopTime))
else:
return("none")
def report(
self
):
string = "clock attribute".ljust(39) + "value"
string += "\nname".ljust(40) + self.name()
string += "\ntime start (s)".ljust(40) + self.startTime()
string += "\ntime stop (s)".ljust(40) + self.stopTime()
string += "\ntime elapsed (s)".ljust(40) + str(self.time())
string += "\n"
return(string)
def printout(self):
print(self.report())
def timer(function):
#@functools.wraps(function)
def decoration(
*args,
**kwargs
):
arguments = inspect.getcallargs(function, *args, **kwargs)
clock = Clock(name = function.__name__)
result = function(*args, **kwargs)
clock.stop()
return(decoration)
class Clocks(object):
def __init__(
self
):
self._listOfClocks = []
self._defaultReportStyle = "statistics"
def add(
self,
clock
):
self._listOfClocks.append(clock)
def report(
self,
style = None
):
if style is None:
style = self._defaultReportStyle
if self._listOfClocks != []:
if style == "statistics":
# Create a dictionary of clock types with corresponding lists of
# times for all instances.
dictionaryOfClockTypes = {}
# Get the names of all clocks and add them to the dictionary.
for clock in self._listOfClocks:
dictionaryOfClockTypes[clock.name()] = []
# Record the values of all clocks for their respective names in
# the dictionary.
for clock in self._listOfClocks:
dictionaryOfClockTypes[clock.name()].append(clock.time())
# Create a report, calculating the average value for each clock
# type.
string = "clock type".ljust(39) + "mean time (s)"
for name, values in dictionaryOfClockTypes.iteritems():
string += "\n" +\
str(name).ljust(39) + str(sum(values)/len(values))
string += "\n"
elif style == "full":
# Create a report, listing the values of all clocks.
string = "clock".ljust(39) + "time (s)"
for clock in self._listOfClocks:
string += "\n" +\
str(clock.name()).ljust(39) + str(clock.time())
string += "\n"
else:
string = "no clocks"
return(string)
def printout(
self,
style = None
):
if style is None:
style = self._defaultReportStyle
print(self.report(style = style))
_main()
# main code example (examples.py):
import shijian
import time
import inspect
def main():
print("create clock alpha")
alpha = shijian.Clock(name = "alpha")
print("clock alpha start time: {time}".format(time = alpha.startTime()))
print("sleep 2 seconds")
time.sleep(2)
print("clock alpha current time (s): {time}".format(time = alpha.time()))
print("\ncreate clock beta")
beta = shijian.Clock(name = "beta")
print("clock beta start time: {time}".format(time = beta.startTime()))
print("clock beta stop time: {time}".format(time = beta.stopTime()))
print("sleep 2 seconds")
time.sleep(2)
print("clock beta current time (s): {time}".format(time = beta.time()))
print("stop clock beta")
beta.stop()
print("clock beta start time: {time}".format(time = beta.startTime()))
print("clock beta stop time: {time}".format(time = beta.stopTime()))
print("sleep 2 seconds")
time.sleep(2)
print("clock beta start time: {time}".format(time = beta.startTime()))
print("clock beta stop time: {time}".format(time = beta.stopTime()))
print("clock beta current time (s): {time}".format(time = beta.time()))
print("\nclock beta printout:\n")
beta.printout()
print("create two gamma clocks")
gamma = shijian.Clock(name = "gamma")
gamma = shijian.Clock(name = "gamma")
print("sleep 2 seconds")
time.sleep(2)
print("\ncreate two unnamed clocks")
delta = shijian.Clock()
epsilon = shijian.Clock()
print("sleep 2 seconds")
time.sleep(2)
print("\nrun function 1 (which is timed using internal clocks)")
function1()
print("\nrun function 2 (which is timed using a decorator)")
function2()
print("\nclocks full printout:\n")
shijian.clocks.printout(style = "full")
print("clocks statistics printout:\n")
shijian.clocks.printout()
def function1():
functionName = inspect.stack()[0][3]
clock = shijian.Clock(name = functionName)
print("initiate {functionName}".format(functionName = functionName))
time.sleep(3)
print("terminate {functionName}".format(functionName = functionName))
clock.stop()
@shijian.timer
def function2():
functionName = inspect.stack()[0][3]
print("initiate {functionName}".format(functionName = functionName))
time.sleep(4)
print("terminate {functionName}".format(functionName = functionName))
if __name__ == '__main__':
main()
# example terminal output:
create clock alpha
clock alpha start time: 2015-01-03T090124Z
sleep 2 seconds
clock alpha current time (s): 2.000887
create clock beta
clock beta start time: 2015-01-03T090126Z
clock beta stop time: none
sleep 2 seconds
clock beta current time (s): 2.002123
stop clock beta
clock beta start time: 2015-01-03T090126Z
clock beta stop time: 2015-01-03T090128Z
sleep 2 seconds
clock beta start time: 2015-01-03T090126Z
clock beta stop time: 2015-01-03T090128Z
clock beta current time (s): 2.002123
clock beta printout:
clock attribute value
name beta
time start (s) 2015-01-03T090126Z
time stop (s) 2015-01-03T090128Z
time elapsed (s) 2.002123
create two gamma clocks
sleep 2 seconds
create two unnamed clocks
sleep 2 seconds
run function 1 (which is timed using internal clocks)
initiate function1
terminate function1
run function 2 (which is timed using a decorator)
initiate function2
terminate function2
clocks full printout:
clock time (s)
alpha 17.023659
beta 2.002123
gamma 11.018138
gamma 11.018138
1919f9de-85ce-48c9-b1c8-5164f3a2633e 9.017148
d24c818c-f4e6-48d0-ad72-f050a5cf86d3 9.017027
function1 0.0
function2 0.0
clocks statistics printout:
clock type mean time (s)
function1 0.0
function2 0.0
1919f9de-85ce-48c9-b1c8-5164f3a2633e 9.017283
beta 2.002123
alpha 17.023834
d24c818c-f4e6-48d0-ad72-f050a5cf86d3 9.017163
gamma 11.0182835
Answer: The `Clock` does not get `update`d when it is `stop`ped. The minimal fix is:
def stop(self):
self.update()
self._updateTime = None
self._startTimeTmp = None
self._stopTime = datetime.datetime.utcnow()
You have three other errors:
* You should test for `None` by identity (`if foo is not None`) not truthiness (`if foo`), to avoid issues with `False`-y values that aren't `None`;
* `shijian.timer`'s inner `decoration` doesn't `return result`, so although the timing will work you'll break any code that expects a return value from the decorated function (see the sketch after this list); and
* If you want the code to work in Python 2 and 3, you can't use `dict.iteritems`, which doesn't exist in the latter. If you only want it to work in Python 2, either `from __future__ import print_function` or use `print whatever` rather than `print(whatever)`.
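For the decorator point, a corrected sketch (the same structure as the original
`shijian.timer`, with the missing `return result` added, the unused
`inspect.getcallargs` line dropped, and `functools.wraps` re-enabled so the
wrapped function keeps its name):

    def timer(function):
        @functools.wraps(function)
        def decoration(
            *args,
            **kwargs
        ):
            clock = Clock(name = function.__name__)
            result = function(*args, **kwargs)
            clock.stop()
            return result  # propagate the wrapped function's return value
        return decoration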
Additionally, your code is not at all compliant with [the style
guide](https://www.python.org/dev/peps/pep-0008/) (or, worse, even internally
consistent - compare the definition of `Clock.start` with that of
`Clock.report`, for example).
There is also room for improvement in the design and functionality (e.g.
`Clock.name` could be a `@property`, and I would separate the table printing
from the generation of results). You should consider submitting your code for
[Code Review](http://codereview.stackexchange.com/), once you have:
* completed;
* tested; and
* style-guide-complianced it
(you might find using [`pylint`](http://www.pylint.org/) helpful for the
latter).
Finally, I assume you're doing this for learning purposes rather than because
you need the functionality, as Python [has its own
profilers](https://docs.python.org/2/library/profile.html).
|
How to find 500 most frequent words in 500 text files in python?
Question: I have 500 text files in one directory.I have to find 500 most frequent words
in all of the text files combined.How can I achieve that?
PS: I have searched a lot but could not find a solution.
Answer: Use
[`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter):
import os
from collections import Counter
c, directory = Counter(), 'path_to_your_directory'
for x in os.listdir(directory):
fname = os.path.join(directory, x)
if os.path.isfile(fname):
with open(fname) as f:
c += Counter(f.read().split())
for word, _ in c.most_common(500):
print(word)
Of course, it will read every file found in that directory. If that's not the
intended behavior, use [`glob.glob` or
`glob.iglob`](https://docs.python.org/3/library/glob.html) with the required
pattern instead of `os.listdir` (see Reut's comment to my answer).
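For instance, a variant restricted to `.txt` files would look like this (a
sketch; adjust the pattern as needed):

    import glob
    from collections import Counter

    c = Counter()
    for fname in glob.iglob('path_to_your_directory/*.txt'):
        with open(fname) as f:
            c.update(f.read().split())

    for word, _ in c.most_common(500):
        print(word)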
|
Stanford pos tagger not displaying the output elements in Python (MAC)
Question:
#-*- coding:Utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding('utf8')
import os
java_path = "/usr/libexec/java_home" # replace this
os.environ['JAVAHOME'] = java_path
from nltk.tag.stanford import POSTagger
french_postagger = POSTagger("stanford-postagger-full-2014-10-26/models/french.tagger", "stanford-postagger-full-2014-10-26/stanford-postagger.jar", encoding="utf-8")
english_postagger = POSTagger("stanford-postagger-full-2014-10-26/models/english-bidirectional-distsim.tagger", "stanford-postagger-full-2014-10-26/stanford-postagger.jar", encoding="utf-8")
print french_postagger.tag("siddhartha is a good boy".split())
the result is as follows:
[('', u'/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home')]
instead I need to see the words and their tags.
Answer: The problem is this part of your code:
java_path = "/usr/libexec/java_home" # replace this
os.environ['JAVAHOME'] = java_path
Where did that code come from? It looks like you should replace it. If your setup
is like mine, changing that first line to `java_path = "/usr/bin/java"` fixes
the problem. In fact, just **deleting those two lines completely** fixes the
problem (while including them reproduces it):
from nltk.tag.stanford import POSTagger
french_postagger = POSTagger("models/french.tagger", "stanford-postagger.jar", encoding="utf-8")
english_postagger = POSTagger("models/english-bidirectional-distsim.tagger", "stanford-postagger.jar", encoding="utf-8")
print french_postagger.tag("siddhartha is a good boy".split())
> [[(u'siddhartha', u'ADV'), (u'is', u'VPP'), (u'a', u'V'), (u'good', u'ET'), (u'boy', u'ET')]]
|
Recursively move files from subdirectory to folders in parent directory
Question: In the following directory,
/Drive/Company/images/full_res/
there exists over 900 .jpg files, like so:
Skywalker.jpg
Pineapple.jpg
Purple.jpg
White.jpg
One level up from 'full_res' ('images'), there exists nearly the same amount
of folders as there are images in 'full_res', and for the most part are named
accordingly, like so:
..
.
Skywalker/
Pineapple/
Purple/
White/
full_res/
I need to move or copy all of the files in full_res to their correspondingly
named folder in 'images' while simultaneously renaming the file to
'export.jpg'. The result should be as such:
/Drive/Company/images/
----------------------
..
.
Skywalker/export.jpg
Pineapple/export.jpg
Purple/export.jpg
White/export.jpg
[This is the closest thing](http://stackoverflow.com/questions/18295056/batch-
file-to-recursively-move-files-to-first-level-directory-from-deeper-subdir) I
could find relevant to my query (I think?), but I'm looking for a way to do
this with Python. Here's the nothing I was able to produce:
import os, shutil
path = os.path.expanduser('~/Drive/Company/images/')
src = os.listdir(os.path.join(path, 'full_res/'))
for filename in src:
images = [filename.endswith('.jpg') for filename in src]
for x in images:
x = x.split('.')
print x[0] #log to console so I can see it's at least doing something (it's not)
dest = os.path.join(path, x[0])
if not os.path.exists(dest):
os.makedirs(dest) #create the folder if it doesn't exist
shutil.copyfile(filename, os.path.join(dest, '/export.jpg'))
* * *
There's probably a lot wrong with this, but I suspect one of my biggest
failings has something to do with my misunderstanding of the concept of list
comprehension. In any case, I've been struggling with this for so long that I
probably could have manually moved and renamed all of those image files myself
by now. Any and all help is appreciated.
Answer: You're not far from the correct answer:
import os, shutil
path = os.path.expanduser('~/Drive/Company/images/')
src = os.listdir(os.path.join(path, 'full_res'))
for filename in src:
if filename.endswith('.jpg'):
basename = os.path.splitext(filename)[0]
print basename #log to console so I can see it's at least doing something (it's not)
dest = os.path.join(path, basename)
if not os.path.exists(dest):
os.makedirs(dest) #create the folder if it doesn't exist
shutil.copyfile(os.path.join(path, 'full_res', filename), os.path.join(dest, 'export.jpg'))
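If you would rather move the files than copy them (the question allows either),
`shutil.move` is a drop-in replacement for the last line; it also removes the
original:

    shutil.move(os.path.join(path, 'full_res', filename),
                os.path.join(dest, 'export.jpg'))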
|
Python: Using oswalk in first function to find files, pass name, then trying to open in second function
Question: I am trying to use two functions to find files and process them, the output of
which, ultimately, will be sent to a SQlite3 database. I have the second part
working when NOT a function, but for this, need to enter the name of each file
to be processed. I want it automated, hence the first function with oswalk.
However, when I add the first function, which finds the files and returns them
to the second function, something is going wrong. In the code posted, I just
want to test that the files are being passed and can be opened and read one at
a time.
Notes about what is happening and pointers to the output (which will be pasted
below the code) are hashed out in the code below.
import os
import fnmatch
def findFiles (path, filter):
Files = []
for root, dirs, files in os.walk(path):
for file in fnmatch.filter(files, filter):
Files.append(os.path.join(root, file))
return Files
def fastq2SQlite(Files):
for file in Files:
print file ## At this point, I have a list of files. See "A" below.
with open(file, 'r'): ##If this block is then added, it's evident that the files shown
##in "A" are not being recognized as files. Output is "A"
##transposed, each line a letter of a file name.
for line in file:
print line
Output "A"
C:/Users/Documents/JKC10/test.txt
C:/Users/Documents/JKC10/test2.txt
C:/Users/Documents/JKC10/test3.txt
None
I have even unsuccessfully tried to get the files to be read by modifying them
so that their names become (example): 'C:/Users/Documents/JKC10/test_out.txt'
by adding a block that looks like:
def fastq2SQlite(Files):
for file in Files:
f = "'" + file + "'"
Answer:
import os
import fnmatch
def findFiles (path, filter):
Files = []
for root, dirs, files in os.walk(path):
for file in fnmatch.filter(files, filter):
Files.append(os.path.join(root, file))
return Files
def fastq2SQlite(Files):
    for file_ in Files:
        print file_
        # bind the opened file object with "as"; without it, the loop below
        # iterates over the filename string one character at a time
        with open(file_, 'r') as dummy_file:
            for line in dummy_file:
                print line
You should also avoid using built-in names like `file` as variable names; doing
so shadows the original object and can cause some serious issues. Avoiding them
is just good programming practice.
|
Python : Inserting multiple values into a table from excel
Question: I have to read data from Excel and insert it into Table...
For this I am using Python 2.7, pymssql and xlrd modules...
My sql connection is working fine and I am also able to read data from Excel
properly.
My table structure :
CREATE TABLE MONTHLY_BUDGET
(
SEQUENCE INT IDENTITY,
TRANSACTION_DATE VARCHAR(100),
TRANSACTION_REMARKS VARCHAR(1000),
WITHDRAWL_AMOUNT VARCHAR(100),
DEPOSIT_AMOUNT VARCHAR(100),
BALANCE_AMOUNT VARCHAR(100)
)
My excel values are like this :
02/01/2015 To RD Ac no 147825000874 7,000.00 - 36,575.74
I am having a problem while inserting multiple values into the table... I am not
sure how to do this...
import xlrd
import pymssql
file_location = 'C:/Users/praveen/Downloads/OpTransactionHistory03-01-2015.xls'
#Connecting SQL Server
conn = pymssql.connect (host='host',user='user',password='pwd',database='Practice')
cur = conn.cursor()
# Open Workbook
workbook = xlrd.open_workbook(file_location)
# Open Worksheet
sheet = workbook.sheet_by_index(0)
for rows in range(13,sheet.nrows):
for cols in range(sheet.ncols):
cur.execute(
" INSERT INTO MONTHLY_BUDGET VALUES (%s, %s, %s, %s, %s)", <--- Not sure!!!
[(sheet.cell_value(rows,cols))])
conn.commit()
Error : ValueError: 'params' arg () can be only a tuple or a dictionary.
The docs are here : <http://pymssql.org/en/stable/pymssql_examples.html>
Answer: The exception you are getting says that the "'params' arg() can be only a
tuple or a dictionary", but you're passing in a list. Also, your parameter list
appears to contain only a single value rather than the five values your `INSERT`
expects. Try changing
cur.execute(
" INSERT INTO MONTHLY_BUDGET VALUES (?, ?, ?, ?, ?)", <--- Not sure!!!
[(sheet.cell_value(rows,cols))])
to
cur.execute(
" INSERT INTO MONTHLY_BUDGET VALUES (?, ?, ?, ?, ?)", <--- Not sure!!!
(sheet.cell_value(rows,cols)))
... or maybe
cur.execute(
" INSERT INTO MONTHLY_BUDGET VALUES (?, ?, ?, ?, ?)", <--- Not sure!!!
((sheet.cell_value(rows,cols))))
NB: untested. Note that I've also changed how the bind variables in your SQL are
specified.
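For what it's worth, the loop itself probably also needs restructuring so that a
single `execute` receives one whole row rather than one cell at a time. A sketch
(assuming the sheet has exactly the five columns matching the table's
non-identity columns, and using the `%s` placeholders that the linked pymssql
examples use):

    for row in range(13, sheet.nrows):
        # collect all cells of one spreadsheet row into a tuple of parameters
        values = tuple(sheet.cell_value(row, col) for col in range(sheet.ncols))
        cur.execute(
            "INSERT INTO MONTHLY_BUDGET (TRANSACTION_DATE, TRANSACTION_REMARKS, "
            "WITHDRAWL_AMOUNT, DEPOSIT_AMOUNT, BALANCE_AMOUNT) "
            "VALUES (%s, %s, %s, %s, %s)",
            values)
    conn.commit()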
|
python database query using two scripts
Question: I have two scripts main.py and testscript.py. On main.py is the function to
connect to my database and on testscript.py I will use as my query request and
call this query to my database by using main.py. Adding to main.py
if __name__ == '__main__':
q = "select domain_name from domains_domain limit 10"
query(q)
and run script
> python main.py
working as is expected, but I don't know how to do this on testscript.py
Here is main.py
import MySQLdb
from localdb import *
import socket
host = socket.gethostname()
error_report = "Check if MySQL service is running and user name/password are correct"
class WeekQuery:
def __init__(self, name):
self.name = name
def query(self):
cursor = None
results = None
week_number = None
#Connecting to production database
if host == 'db01':
try:
db = MySQLdb.connect("db01", "myuser", "mypass", "mydb")
cursor = db.cursor()
cursor.callproc(self)
results = cursor.fetchall()
week_number = [i[0] for i in cursor.description]
except MySQLdb.Error, e:
try:
print "%s \n MySQL Error [%d]: %s" % (error_report, e.args[0], e.args[1])
except IndexError:
print "%s \n MySQL Error: %s " % (error_report, str(e))
else:
#local database rename localdb.py_example to localdb.py
try:
db = MySQLdb.connect(host=MYSQL_HOST, user=MYSQL_USER, passwd=MYSQL_PASSWD, db=MYSQL_DATABASE)
cursor = db.cursor()
cursor.execute(self)
results = cursor.fetchall()
week_number = [i[0] for i in cursor.description]
except MySQLdb.Error, e:
try:
print "%s \n MySQL Error [%d]: %s" % (error_report, e.args[0], e.args[1])
except IndexError:
print "%s \n MySQL Error: %s " % (error_report, str(e))
#finally:
# cursor.close()
# db.close()
print(results, week_number) #For debuging only
return (results, week_number)
Here my testscript.py
from main import WeekQuery
def sample():
q = "select domain_name from domains_domain limit 10"
x = WeekQuery(q)
for y in x.query():
print(y)
Answer: You need to import `host` and `error_report` from `main` too. Is that
your problem?
|
put 3D world coordinates in 2D array
Question: I'm working with 3D image processing using OpenCV and Python.
I can make from two points in two images (left and right) a world coordinate
(X, Y, Z).
For my remainder of the program: It’s important to have a 2D array that
corresponds to the left (or right) image pixels.
It needs to recognize objects in a normal 2D image. When that is done, it should
give the x, y, z world coordinate of the found object. It does this by taking the
(x, y) of the image and, with this (x, y), getting the x, y, z world coordinate.
My question is: Is there a special function for it or how can I do this?
Answer: If you want to use glm you can try this:
glm::vec3 array[image_resolution_y][image_resolution_x];
array[0][0][0] = x;
array[0][0][1] = y;
array[0][0][2] = z;
In Linux you can install glm with apt-get install libglm-dev
But of course you can just create a point struct / class and use this.
struct Point3D
{
float x;
float y;
float z;
};
Point3D array [image_resolution_y][image_resolution_x];
array[0][0].x = x;
array[0][0].y = y;
array[0][0].z = z;
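Since the question is about Python and OpenCV, a numpy equivalent may be more
convenient (a sketch; `image_height` and `image_width` are placeholders for your
image's resolution):

    import numpy as np

    # one (X, Y, Z) world coordinate per left-image pixel
    world = np.zeros((image_height, image_width, 3), dtype=np.float32)

    world[y, x] = (X, Y, Z)  # store the world point computed for pixel (x, y)
    X, Y, Z = world[y, x]    # look it up again later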
|
Python Requests Code 141 Error
Question: I'm trying to use requests in python to post a json dictionary to a url. I
need to get a string back from the url, but I keep getting a code 141 error:
{"code":141,"error":"Missing a github repository link"}. I'm using this
website (<http://docs.python-requests.org/en/latest/user/quickstart/>) to do
the requests.
Any ideas on why I keep getting that error? Code is below.
import requests
import json
payload = { "email" : "[email protected]", "github" : "https://github.com/"}
headers = {'content-type': 'application/json', "Accept": 'application/json'}
r = requests.post("http://challenge.code2040.org/api/register", params = payload, headers = headers)
print(r.url)
print r.text
Update: The suggestion worked, but now I'm getting an
{"code":141,"error":"success/error was not called"} error when I try to save
the response I receive from the url into a variable and then post it back to a
different url.
#Store the token into a variable
token = r.text
payload = { "token" : token}
headers = {'content-type': 'application/json', "Accept": 'application/json'}
r = requests.post("http://challenge.code2040.org/api/getstring", json = payload, headers = headers)
print r.text
Answer: Since you are making a `POST` request and [you need to provide a JSON in the
request body](http://challenge.code2040.org/), use [`json`
argument](http://docs.python-
requests.org/en/latest/api/?highlight=post#requests.post), not `params`:
r = requests.post("http://challenge.code2040.org/api/register",
json=payload,
headers=headers)
(tested - got back a token)
Note that `json` argument was introduced in [requests
2.4.2](https://github.com/kennethreitz/requests/blob/master/HISTORY.rst#242-2014-10-05).
|
Efficient way to find the average of the rank of added scores to a list. Possibly a more efficient way of sorting or a math way?
Question: I am trying to solve a problem where I need to find the average of the ranks
of a bunch of scores added to a list.
For example if the input is: 5 100 200 150 170 50
Then the program should output 2.2
There are 5 scores to be added
When 100 is entered it is ranked 1
When 200 is entered it is ranked 1
When 150 is entered it is ranked 2
When 170 is entered it is ranked 2
When 50 is entered it is ranked 5
Then (1 + 1 + 2 + 2 + 5) / 5 = 2.2
Right now I have a solution that works perfectly, but it is not fast enough for
large test cases.
games = input()
lst = []
acc = 0.0
counter = 0.0
for i in range(0, games):
number = input()
lst.append(number)
lstt = sorted(lst)
lsttt = lstt[::-1]
acc += (lsttt.index(number) + 1)
print acc / games
Right now I am using the default Python sort function, and I'm thinking that
using a different kind of sort could make it faster. Is that the issue, or is
there a better mathematical way to do it?
Answer: You could use the [`bisect`](https://docs.python.org/2/library/bisect.html)
module to find the insertion point in `O(log(n))` time:
import bisect
games = input()
lst = []
acc = 0.0
counter = 0.0
for i in range(games):
number = input()
pos = bisect.bisect(lst, number)
lst.insert(pos, number) # O(log(n)) for the search, but O(n) for the insertion
acc += len(lst) - pos
print acc / games
It's an improvement over your algorithm in that it's `O(n^2)` rather than
`O((n^2)*log(n))`. If that's still too slow, you might want to consider using
a tree.
|
Python: UUID in a list
Question: I have a list with a uuid4 in it. I also have a string. For example:
list = [UUID('79d8f4b7-06a0-41d1-99d6-dd8c5308875f'), 'example1', 'example2']
string = 79d8f4b7-06a0-41d1-99d6-dd8c5308875f
But when I try:
if string in list:
print("It's in!")
else:
print("It's not!")
The output is always "it's not".
I know there is probably a data type error going on, but I can't seem to find it
myself. Any help is appreciated; I'm sure something this simple will be fixed
within seconds. Thanks in advance.
_When I type print list[0] this is what is output:
79d8f4b7-06a0-41d1-99d6-dd8c5308875f. But even when I try, say, "..in list[0]",
it still doesn't work._
Answer: You need to convert the
[`uuid`](https://docs.python.org/2/library/uuid.html#uuid.UUID) to a string with
the `str()` function:
>>> import uuid
>>> x=uuid.uuid4()
>>> str(x)
'924db46b-5c51-4330-861c-363570ea9ef6'
To check the other way around, you can convert the string to a `UUID` with
`uuid.UUID`, which accepts the hyphenated hex string directly:

    >>> my_list = [x, 'example1', 'example2']
    >>> uuid.UUID('924db46b-5c51-4330-861c-363570ea9ef6') in my_list
    True
|
BTC-E ticker time format
Question: I'm using Python to pull the Bitcoin ticker into a pandas DF and put it
into my SQL database; however, I have no idea what format the damn servertime and
updated times are in. I would like to convert the time in my pandas DF
first, then put it into my SQL DB.
import pandas as pd
import urllib
import json
from pandas.io.json import json_normalize
import sqlalchemy
engine = sqlalchemy.create_engine("mssql+pyodbc://@localhost")
bitcoin = 'https://btc-e.com/api/2/btc_usd/ticker'
data = urllib.urlopen(bitcoin)
data = json.load(data)
data = json_normalize(data)
data = pd.DataFrame(data)
data.to_sql('TESTTABLE',engine, if_exists='append', index = False)
print data
For now the servertime and updated information is going into columns in my SQL
DB that are set to bigint datatype.
Any help would be awesome :)
Answer: Right now pandas is bringing both times in as int64. Changing them is
relatively easy with pd.to_datetime() once you figure out the format (sorry,
can't help you there). to_datetime() will change the data type and parse the
date based on the format parameter. For example
data['ticker.server_time'] = pd.to_datetime(data['ticker.server_time'],
format='whatever is needed')
format : string, default None strftime to parse time, eg "%d/%m/%Y", note that
"%f" will parse all the way up to nanoseconds
I tried using the infer_datetime_format parameter with that data, but it
didn't make a difference.
data['ticker.server_time'] = pd.to_datetime(data['ticker.server_time'],
                                            infer_datetime_format=True)
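If the values turn out to be Unix timestamps in seconds (an assumption: that's
common for exchange APIs, but not confirmed here), `to_datetime` can convert
them with the `unit` parameter instead of `format`:

    # assumption: server_time and updated are Unix timestamps in whole seconds
    data['ticker.server_time'] = pd.to_datetime(data['ticker.server_time'], unit='s')
    data['ticker.updated'] = pd.to_datetime(data['ticker.updated'], unit='s')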
|
Aquamacs error code 127; LaTeX: problems after [0] pages error
Question: I have a new `macbook pro (OSX 10.10)` and I installed `Aquamacs 3.2 GNU Emacs
24.4.51.2`. I have used Aquamacs on an old machine for over a year with few
problems.
The installation seemed simple and straightforward, basically double click the
dmg file and move the icon to Applications.
MESSAGES WHILE STARTING UP AQUAMACS:
> Loading prestart plugin files ... ... done. Wrote
> /Users/greggold/Library/Preferences/Aquamacs Emacs/Packages/.nosearch Shell:
> /bin/bash Loading /Users/greggold/Library/Preferences/Aquamacs Emacs/Recent
> Files.el (source)...done Cleaning up the recentf list...done (0 removed) 16
> environment variables imported from login shell (/bin/bash). Loading
> /Volumes/Aquamacs Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex.el (source)...done Loading plugins ... Loading
> /Volumes/Aquamacs Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/site-
> start.el (source)...done ... done. Loading `custom-file' failed. Loading
> /Users/greggold/Library/Preferences/Aquamacs Emacs/Preferences.el
> (source)...done Mark set one-buffer-one-frame-mode disabled. Mark set [26
> times] Loading /Users/greggold/Library/Preferences/Aquamacs Emacs/frame-
> positions.el (source)...done file-error: (Opening directory no such file or
> directory /Users/greggold/Library/Logs/CrashReporter) Mark set [5 times]
>
> MY USER FILE (clump_present4.tex) WAS THEN LOADED Mark set [3 times] Auto-
> saving...done Auto-saving...done Mark set No connection file
> "/var/folders/q_/v6d8z7t96x911lskqblhg_680000gn/T/emacs501/server" Automatic
> display of crossref information was turned on Applying style hooks...
> Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/article.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/graphicx.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/amssymb.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/amsmath.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/amstext.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/amsbsy.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/amsopn.elc...done Loading /Volumes/Aquamacs
> Emacs/Aquamacs.app/Contents/Resources/lisp/aquamacs/edit-
> modes/auctex/style/fancyhdr.elc...done Applying style hooks... done is
> undefined [3 times]
>
> CLICK ON LATEX (compile command) Type `^C ^L' to display results of
> compilation. LaTeX: problems after [0] pages
>
> TeX Output exited abnormally with code 127 at Mon Jan 5 17:09:56 Running
> `LaTeX' on`clump_present4' with ``pdflatex --synctex=1
> -interaction=nonstopmode "\input" clump_present4.tex'' /bin/sh: pdflatex:
> command not found TeX Output exited abnormally with code 127 at Mon Jan 5
> 17:09:56 I tried to run pdflatex from a terminal but it was not found; and
> locate pdflatex only found a pdf file bash-3.2$ locate pdflatex
> /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/matplotlib/tests/baseline_images/test_backend_pgf/pgf_pdflatex.pdf
> bash-3.2$
Any advice would be much appreciated.
Answer: I am using this on Yosemite and am not having any problems. I would
suggest that you go to the aquamacs.org site and into the developers area. There
was a Yosemite problem unrelated to what you are showing, and the developer there
was very helpful in fixing it. I would email him; he can probably tell from your
info what is going on.
Before doing that, try the development version that is on that page, just in
case it fixes things for you. That is the one I ended up using, and I don't know
if they have folded it back into the main download yet.
They also have all the old mailing-list postings of problems that you may want
to trawl through. I use Aquamacs for LaTeX and find it excellent, particularly
when teamed with Skim.
One more observation: the `/bin/sh: pdflatex: command not found` line in your
log suggests that TeX itself may not be installed or on the PATH on the new
machine (e.g. via MacTeX), which is worth checking first.
Good luck.
|
How to read file names and word count in respective files in Hadoop?
Question: I am trying to fetch file names from a sequence file in Hadoop with the
help of the dumbo package for Python. But it provides me some kind of identifier.
How can I map this to the file name?
Below are my steps on the Hadoop system for getting the filenames:
**_Steps 1)_ Generating Sequence file**
Command :
hadoop jar /mnt/Clustering/Checking/AllJars/binarypig-1.0-SNAPSHOT-jar-with-dependencies.jar com.endgame.binarypig.util.BuildSequenceFileFromDir /mnt/Clustering/Checking/text_files text_files_seq
**_Step 2)_ Running python script on sequence file through hadoop**
Command:
dumbo start dumbo_map_red.py -input text_files_seq -output out_res -hadoop /usr/local/hadoop
**_Step 3)_ Getting output in local directory**
Command:
dumbo cat out_res/part-* -hadoop /usr/local/hadoop > out_res.txt
where dumbo_map_red.py is
#!/usr/bin/env python
def mapper(key, value):
yield key, 1
def reducer(key, values):
yield key, sum(values)
if __name__ == "__main__":
import dumbo
dumbo.run(mapper, reducer)
Please help me figure out how to fetch the filenames. If there is another package
in Python which will allow me to work that way, please let me know.
Answer: Finally got the hint for mapping the identifier in the sequence file to
the actual file. **The identifier is the MD5 of the file in the directory.**
|
how to get dynamically created method's name
Question: I have a method which returns its own name. The method looks like this:
import inspect
class example(object):
def getvalue(self):
return inspect.stack()[0][3] #returns current method's name
Now I am creating a method dyamically in that example class like this.
import inspect
class example(object):
def getvalue(self):
return inspect.stack()[0][3] #returns current method's name
if __name__=="__main__":
e = example()
e.method=e.getvalue
print e.method()
This gives the output `getvalue`. Now I want my getvalue() method to return the
name of the dynamically created method, i.e. `method`. Is there a way to do this
in Python? If so, what changes do I have to make in my getvalue() method so that
it returns the dynamically created method's name?
import inspect
class example(object):
def getvalue(self):
return inspect.stack()[0][3] #what to write here...
Answer: This is not possible.
In Python the very same object can have multiple "names" and thus it doesn't
make sense to talk about "the" name of an object.
Methods are objects, and when a function or method is compiled, the name found
at compile time is stored in its `__name__` field. If you later move the object
around and make it reachable under other names, its `__name__` field won't
change.
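A quick demonstration with the class from the question:

    >>> e = example()
    >>> e.method = e.getvalue
    >>> e.method.__name__  # still the compile-time name
    'getvalue'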
|
Loop interrupt with an imported module "interruptingcow" import error
Question: I have a problem with a downloaded module named interruptingcow. It is
supposed to allow me to interrupt a loop after a set amount of time. I have the
latest version that I found, installed with pip.
import time
from random import *
from interruptingcow import timeout
points = 0
error = 0
print('Today we will learn calculus! We will multiply random numbers from 2-10.\nPress Enter when ready!')
input()
try:
with timeout(20, exception=RuntimeException):
while True:
XX, YY = randint(2, 10), randint(2, 10)
print(XX, 'X', YY)
if XX*YY == int(input('odgovor? > ')):
print('GG! +1 point')
points += 1
else:
print('Error! The real result is:', XX*YY)
error = 1
break
except RuntimeException:
print('Your time is UP!')
print('Your score is:',points,'Thanks for playing! Press enter when you want to finish!')
input()
I think the problem is with the version of Python, but I don't know how to fix
it. Any ideas? `RuntimeException` is underlined in the program; that is probably
one of the errors. The other one is this: ImportError: cannot import name
'GeneratorContextManager'
I looked at interruptingcow code and one of the imports is:
from contextlib import GeneratorContextManager
Thank you guyz in advance!
Answer: You don't need this `interruptingcow` module at all, it's easy to set up a
single timer to interrupt your loop with Python's built-in `signal` module:
import signal
def handler(signo, frame):
raise RuntimeError
signal.signal(signal.SIGALRM, handler)
signal.alarm(1) # seconds
while True:
print 'zzz'
Just substitute your own `while True` loop and I think this will do what you
need, without the extra dependency. (Note that `signal.SIGALRM` and
`signal.alarm` are only available on Unix, not on Windows.)
|
Converting a memory address to a function object
Question: A light in Maya has an attribute `Use Color Temperature`, which is a
checkbox. When I toggle it, a function is called inside Maya to actually do the
work behind the scenes. Unfortunately, the function address `<function callback
at 0x0000000051CD7DD8>` is printed out instead of its name. I don't know which
function is executed when clicking the checkbox. Is there any way of converting
this address to a Python function object, or can I print the actual name of the
function using this memory address?
Answer: This means a function named `callback` is called.
I'd look for `def callback(...):` in the corresponding code file and see what
it does.
To re-produce your output all that's needed is a function with the name
`callback`:
>>> def callback():
... print "My name is Reut Sharabani"
...
>>> cb = callback
>>> cb
<function callback at 0x7f182f3ca6e0>
If you want to programmatically get a function's name:
>>> cb.__name__
'callback'
If you want to know more about it, you can even disassemble it using
[`dis`](https://docs.python.org/2/library/dis.html) and its [code
object](https://docs.python.org/2/c-api/code.html):
>>> import dis
>>> dis.dis(cb.__code__)
2 0 LOAD_CONST 1 ('My name is Reut Sharabani')
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
|
How to find the combinations (probability) for a,b,c,d,e
Question: How can I find the combinations (probability) for a,b,c,d,e using
Python or an algorithm?
The string length ranges from a minimum of 1 to a maximum of 5, and each
character can be used at most once.
Example :
a
b
c
d
e
ab
ac
abcde
acde
etc..
Answer:
import itertools
mystring = 'abcde'
for i in range(1,len(mystring)+1):
for combo in itertools.combinations(mystring, i):
print(''.join(combo))
Output:
a
b
c
d
e
ab
ac
ad
ae
bc
bd
be
cd
ce
de
abc
abd
abe
acd
ace
ade
bcd
bce
bde
cde
abcd
abce
abde
acde
bcde
abcde
|
In python how can I use/manipulate an object defined in a script from a function defined in an imported module?
Question: This might be a terribly simple one, but I don't know what the "right"
answer is. Assume that I have a script
import utils
bar = 1
utils.foo()
print bar
Furthermore, the module `utils` is:
def foo():
bar = bar+1
As given above, I obviously get:
UnboundLocalError: local variable 'bar' referenced before assignment
**How can I use `bar` inside `foo()`?** In my specific case, I don't really
want to alter `foo`, but I _do_ need to be able to use it and its state inside
`foo()`.
One workaround would be to pass `bar` to `foo()`:
def foo(bar):
return bar+1
And replace the third line in the script: `bar = utils.foo(bar)`.
However, this feels like a cumbersome solution; in particular if bar is a
complex object.
I am interested in a _best-practice_ approach for the case described above.
Answer: Why don't you want to alter `foo`? If you import a module, you want to
use its functionality. If the `foo` function takes no parameters, then `bar` and
any other variables it uses must live in the module `utils` itself. If you want
to use a function with values that are not inside the module, then:
def foo(bar):
return bar+1
is totally acceptable.
EDIT: When you create a class `foo1`, just set `bar` in the constructor:

    class foo1:
        def __init__(self, bar):
            self.bar = bar
Imagine this situation:
import someModule
# now you want to use a function of this module
foo()
You would then get an error like `bar is not defined`, because relying on the
caller's variables means the modules are no longer loosely coupled. Either give
the function `foo` parameters as you proposed (totally acceptable) or set the
`bar` value via a constructor or a `setBar` method.
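A sketch of that class-based option (hypothetical names; the point is that the
state lives on the object instead of in a module-level variable):

    # utils.py
    class Foo(object):
        def __init__(self, bar):
            self.bar = bar

        def foo(self):
            self.bar += 1  # mutate the instance's own state

    # script.py
    from utils import Foo

    f = Foo(bar=1)
    f.foo()
    print f.bar  # 2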
|
Can't import the cx_Oracle module unless I'm using an interactive shell
Question: When using Python on an interactive shell I'm able to import the cx_Oracle
file with no problem. Ex:
me@server~/ $ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>>
As you can see, importing works without a hitch. However, when I try to run a
Python script doing the same thing, I get an error:
me@server~/ $ sudo script.py
Traceback (most recent call last):
File "/usr/local/bin/script.py", line 19, in <module>
import cx_Oracle
ImportError: No module named "cx_Oracle'
Here is the important section from script.py:
# 16 other lines above here
# Imports
import sys
import cx_Oracle
import psycopg2
...
I'm befuddled here. Other pertinent information: the server I'm running is
Ubuntu 14.04.1 LTS (upgraded from 12.04), 64-bit. `which python` and `sudo which
python` both point to the same location. Also, doing this as root via `sudo su
-` gets the same results; import OK from interactive but error from script.
Nothing other than the OS upgrade happened between when this worked and when
it stopped working.
Answer: Sorry, all. This was a silly mistake on my part. It turns out the script in question was
using Python3, and when the server upgraded, Python3 went from being 3.2
version to being 3.4 version.
Once the cx_Oracle module was set up in the 3.4 version, everything worked as
expected.
Phil, your final note about the shebang was what led me to discover this, so
kudos to you! The reason I didn't mark your response as the answer is that
technically it wasn't the answer, but it led me down the right path.
Cheers!
|
Unittest implementation Python property
Question: I have a class with the following property `clusters`:
import numpy as np
class ClustererKmeans(object):
def __init__(self):
self.clustering = np.array([0, 0, 1, 1, 3, 3, 3, 4, 5, 5])
@property
def clusters(self):
assert self.clustering is not None, 'A clustering shall be set before obtaining clusters'
return np.unique(self.clustering)
I now want to write a unittest for this simple property. I start off with:
from unittest import TestCase, main
from unittest.mock import Mock
class Test_clusters(TestCase):
def test_gw_01(self):
sut = Mock()
sut.clustering = np.array([0, 0, 1, 1, 3, 3, 3, 4, 5, 5])
r = ClustererKmeans.clusters(sut)
e = np.array([0, 1, 3, 4, 5])
# The following line checks to see if the two numpy arrays r and e are equal,
# and gives a detailed error message if they are not.
TestUtils.equal_np_matrix(self, r, e, 'clusters')
if __name__ == "__main__":
main()
However, this does not run.
TypeError: 'property' object is not callable
I next change the line `r = ClustererKmeans.clusters(sut)` to the following:
r = sut.clusters
But again, I get an unexpected error.
AssertionError: False is not true : r shall be a <class 'numpy.ndarray'> (is now a <class 'unittest.mock.Mock'>)
Is there an easy way to test the implementation of a property in Python using
the unittest framework?
Answer: To [invoke the
property](https://docs.python.org/2/howto/descriptor.html#invoking-descriptors)
directly, you can replace `ClustererKmeans.clusters(sut)` in your original code
with `ClustererKmeans.clusters.__get__(sut)`.
Even though I'm a mocking enthusiast, IMHO this case is not a good example for
applying it. Mocks are useful for removing dependencies on classes and
resources. In your case `ClustererKmeans` has an empty constructor and there
isn't any dependency to break. You can do it by:
class Test_clusters(TestCase):
def test_gw_01(self):
sut = ClustererKmeans()
sut.clustering = np.array([0, 0, 1, 1, 3, 3, 3, 4, 5, 5])
        np.testing.assert_array_equal(np.array([0, 1, 3, 4, 5]), sut.clusters)
If you do want to use mocking, you can patch the `ClustererKmeans()` object by
using `unittest.mock.patch.object`:
def test_gw_01(self):
sut = ClustererKmeans()
with patch.object(sut,"clustering",new=np.array([0, 0, 1, 1, 3, 3, 3, 4, 5, 5])):
e = np.array([0, 1, 3, 4, 5])
            np.testing.assert_array_equal(e, sut.clusters)
...but why use patch when Python gives you a simple and direct way to do it?
Another way to use the mock framework would be to trust `numpy.unique` and check
that the property does the right thing:
@patch("numpy.unique")
def test_gw_01(self, mock_unique):
sut = ClustererKmeans()
sut.clustering = Mock()
v = sut.clusters
#Check is called ....
mock_unique.assert_called_with(sut.clustering)
#.... and return
self.assertIs(v, mock_unique.return_value)
#Moreover we can test the exception
sut.clustering = None
self.assertRaises(Exception, lambda s:s.clusters, sut)
I apologize for any errors, as I haven't tested the code. If you let me know, I
will fix them as soon as possible.
|
Cryptography hex,binaries and ascii python
Question: I am new to cryptography, and when it comes to XOR I get really
confused. Given a text in ASCII and a cipher in hex, how can I get them both
into the same format? My current code is:
import binascii
string = b'09e1c5f70a65ac519458e7e53f36'
binary = binascii.unhexlify(string)
    #This makes the hex string into raw bytes
My question is: how can I get an ASCII string into raw bytes as well so I can
XOR? Or, if that is not possible, what should I do to XOR them?
Answer: An ASCII string is simply a byte string.
XOR_WITH = 0x12
def xor_encode(byte):
if isinstance(byte,basestring):
byte = ord(byte) #convert to ascii integer value
byte = byte ^ XOR_WITH #encode
return chr(byte) # convert it back to a string and return
encoded_string = "".join(map(xor_encode,"Test String"))
This may be what you are looking for.
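To apply the same idea to the question's data, you can XOR the ASCII text against
the unhexlified cipher by pairing the two byte strings with `zip` (a Python 2
sketch; the plaintext below is a made-up placeholder and must have the same
length as the 14-byte cipher):

    import binascii

    cipher = binascii.unhexlify(b'09e1c5f70a65ac519458e7e53f36')  # 14 raw bytes
    text = 'attack at dawn'  # hypothetical 14-character ASCII plaintext

    # XOR corresponding bytes; ord() turns each character into its integer value
    xored = ''.join(chr(ord(a) ^ ord(b)) for a, b in zip(text, cipher))
    print binascii.hexlify(xored)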
|
Keeping a string without overwriting it throughout the program in python
Question: I'm trying to make a program that stores a user's recipe, using a tkinter gui
to do so. I need to make a way to keep track of what is inputted, and store it
in a text file. I have tried using lists to no avail, and think that using a
string is the way forward, but have run into a problem: each time I try to
add to the string, it overwrites and doesn't keep the data from before. I
have tried to use
mystring.join(a + b + etc)
but that didn't work, and my new code is as follows:
from tkinter import *
number_people = 1
itemslist = ''
itemslist1 = ''
def script (): # Puts main body of program into a function so that it can be re-run #
global number_people
number_people = 1
global itemslist, itemslist1
itemslist = ''
itemslist1 = ''
#### MAIN ####
fake_window = Tk() # #
new_recipe_window = fake_window # Opens window, allows it be closed #
start_window = fake_window # #
start_window.title("Recipe Book Task") # #
#### MAIN ####
### Functions ###
def close (x):
global start_window
global new_recipe_window
(x).withdraw()
def moreitems ():
a = item_box.get()
b = quantity_units_box.get()
c = len(a)
if a == '':
pass
elif b == '':
pass
else:
item_box.delete(0,c)
quantity_units_box.delete(0,c)
global itemslist
global itemslist1
itemslist1 = itemslist + a + ', ' + b + ', '
print ("Items list =", itemslist1)
def new_recipe ():
new_recipe_window = Tk()
new_recipe_window.title("New Recipe")
close(start_window)
recipe_name_label = Label(new_recipe_window, text="Recipe Name: ")
recipe_name_label.grid(row=0, column=0)
recipe_name_box = Entry(new_recipe_window)
recipe_name_box.grid(row=0, column=1)
def continue_1 ():
global check_box
check_box = recipe_name_box.get()
if check_box == '':
pass
else:
global itemslist
global itemslist1
itemslist1 = itemslist + check_box + ', '
print (itemslist1)
continue_button_1.destroy()
item_label = Label(new_recipe_window, text="Ingredient: ")
item_label.grid(row=1, column=0)
global item_box
item_box = Entry(new_recipe_window)
item_box.grid(row=1, column=1)
quantity_units_label = Label(new_recipe_window, text="Quantity and Units: ")
quantity_units_label.grid(row=2, column=0)
global quantity_units_box
quantity_units_box = Entry(new_recipe_window)
quantity_units_box.grid(row=2, column=1)
def continue_2 ():
check_box_1 = item_box.get()
check_box_2 = quantity_units_box.get()
if check_box_1 == '':
pass
elif check_box_2 == '':
pass
else:
global itemslist
itemslist.join(check_box_1)
itemslist.join(check_box_2)
continue_button_2.destroy()
more_items.destroy()
add_people_label = Label(new_recipe_window, text="Choose amount of people")
add_people_label.grid(row=3, column=0, columnspan=2)
def add ():
global number_people
number_people += 1
num_people_label.config(text="Number of people: " + str(number_people))
def minus ():
global number_people
if number_people > 1:
number_people -= 1
num_people_label.config(text="Number of people: " + str(number_people))
def finish ():
itemslist.join(str(number_people))
print("ItemsList = " + itemslist)
saveFile = open("Recipe_Book.txt", "a")
saveFile.write(itemslist + '\n')
saveFile.close
close(new_recipe_window)
script()
num_people_label = Label(new_recipe_window, text="Number of people: " + str(number_people))
num_people_label.grid(row=4, column=0, columnspan=2)
add_people_button = Button(new_recipe_window, text="+")
add_people_button.grid(row=5, column=1)
add_people_button.config(command=add)
minus_people_button = Button(new_recipe_window, text="-")
minus_people_button.grid(row=5, column=0)
minus_people_button.config(command=minus)
finish_button = Button(new_recipe_window, text="Finish")
finish_button.grid(row=6, column=0, columnspan=2)
finish_button.config(command=finish)
continue_button_2 = Button(new_recipe_window, text="Continue...")
continue_button_2.grid(row=3, column=0)
continue_button_2.config(command=continue_2)
more_items = Button(new_recipe_window, text="Add another item", command=moreitems)
more_items.grid(row=3, column=1)
continue_button_1 = Button(new_recipe_window, text="Continue...")
continue_button_1.grid(row=1, column=0)
continue_button_1.config(command=continue_1)
new_recipe = Button(start_window, text="New Recipe", command=new_recipe)
new_recipe.grid(row=0, column=0)
script()
So to recap, my question is how do I keep the string itemslist and itemslist1
from being overwritten, or is there another way I can do this?
**EDIT FOR AAAANTOINE**
I was about to clarify for you what I wanted, but I just figured out what I was
doing wrong. Thanks for your help; you taught me what .join does.
Answer: Your code never actually assigns to `itemslist` other than at the beginning of
`script()`. The only time it ever appears on the left side of the assignment
operator is when it's being initialized.
You can probably change all instances of `itemslist1` to `itemslist` and have
a working program.
## Edit
On further review, I suspect that you think str.join(v) appends string `v` to
the str. That's not how join works.
>>> s = 'something'
>>> s.join('a')
'a'
`join` takes a list as an argument and joins its contents together, with the
`str` instance as a separator. Typically, the source string would actually be
an empty string or a comma.
>>> s.join(['a', 'b', 'c'])
'asomethingbsomethingc'
>>> ','.join(['a', 'b', 'c']) # comma separation
'a,b,c'
>>> '-'.join(s) # spell it out!
's-o-m-e-t-h-i-n-g'
### How do I do it, then?
You append to strings using this syntax:
>>> s = s + 'a'
>>> s
'somethinga'
(Or the shorthand version:)
>>> s += 'a'
>>> s
'somethinga'
|
python on windows 10
Question: I have Python 2.7.2 on Windows 10. When I import win32api and wmi, they
fail to load. The Python install on Windows 10 is the same as on another Windows
7 PC.
I don't have this issue on win 7. Below are the errors I get when I try to
import the above modules on windows 10.
>>> import win32api
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.
>>> import wmi
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\Python27\lib\site-packages\wmi.py", line 88, in <module>
from win32com.client import GetObject, Dispatch
File "c:\Python27\lib\site-packages\win32com\__init__.py", line 5, in <module>
import win32api, sys, os
ImportError: DLL load failed: The specified module could not be found.
What could be the cause for my issue? Is there a minimum python version that
is supposed to be used with windows 10?
Answer: I was also facing the same problem. I installed the latest updates for
Windows 10, and the issue is resolved now.
|
kartograph unknown shape type(25)
Question: I am trying to generate an SVG map using kartograph.py as below:
from kartograph import Kartograph
K = Kartograph()
config ={"layers": {"mylayer": {"src": "42MEE250GC_SIR.shp"}} }
K.generate(config, outfile='mymap.svg')
And I'm getting this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\k
artograph.py", line 46, in generate
_map = Map(opts, self.layerCache, format=format)
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\m
ap.py", line 48, in __init__
me.proj = me._init_projection()
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\m
ap.py", line 88, in _init_projection
map_center = self.__get_map_center()
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\m
ap.py", line 140, in __get_map_center
features = self._get_bounding_geometry()
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\m
ap.py", line 257, in _get_bounding_geometry
charset=layer.options['charset']
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\l
ayersource\shplayer.py", line 121, in get_features
geom = shape2geometry(shp, ignore_holes=ignore_holes, min_area=min_area, bbo
x=bbox, proj=self.proj)
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\l
ayersource\shplayer.py", line 159, in shape2geometry
raise KartographError('unknown shape type (%d)' % shp.shapeType)
kartograph.errors.KartographError: ←[0;31;40mKartograph-Error:←[0m unknown shape
type (25)
Looking at the source code of kartograph, we have this:
if shp.shapeType in (5, 15): # multi-polygon
geom = shape2polygon(shp, ignore_holes=ignore_holes, min_area=min_area)
elif shp.shapeType == 3: # line
geom = points2line(shp)
else:
raise KartographError('unknown shape type (%d) in shapefile %s' % (shp.shapeType, self.shpSrc))
return geom
Someone can help me with this?
I tried this below and I'm getting another error.
if shp.shapeType == 25:
return None
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\kartograph.py", line 46, in generate
_map = Map(opts, self.layerCache, format=format)
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\map.py", line 50, in __init__
me.bounds_poly = me._init_bounds()
File "c:\python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\map.py", line 205, in _init_bounds
raise KartographError('no features found for calculating the map bounds')
kartograph.errors.KartographError: Kartograph-Error: no features found for calculating the map bounds
Answer: The solution I found was to change the source code, adding this:
    if shp.shapeType == 25:
        geom = points2line(shp)
I'm sure this is not the best solution, but it solves my problem.
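For reference, shape type 25 in the shapefile spec is PolygonM (a polygon with measure values), so a patch that stays closer to the actual geometry, as an untested sketch against the same dispatch in shplayer.py, would add 25 to the polygon branch instead:
    if shp.shapeType in (5, 15, 25):  # polygon, PolygonZ, PolygonM
        geom = shape2polygon(shp, ignore_holes=ignore_holes, min_area=min_area)
    elif shp.shapeType == 3:  # line
        geom = points2line(shp)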
|
Selenium Python_Using JetBrain_FileNotFoundError: [Errno 2] No such file or directory: 'C://Python34//Scripts//pythondata.xlsx'
Question: I am trying to perform data-driven testing with `selenium python`, but each time I execute I'm facing **Error 2: No such file found**.
Can someone tell me what's wrong with the following code? Also, what file formats are supported here, **`.xlsx` or `.csv`**? Do we need to save the Excel file in the older Excel 97-2003 format, or will the normal format work?
Sample Code
    class fblogin(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()
            self.driver.get("URL")
            self.driver.maximize_window()

        def test_fblogin(self):
            driver = self.driver
            wb = xlrd.open_workbook("C://Python34//Scripts//pythondata.xlsx")
            sheetname = wb.sheet_names()
            sh1 = wb.sheet_by_index(0)
            i = 0
            while (i < 2):
                rownum = (i)
                rows = sh1.row_values(rownum)
                UserName = driver.find_element_by_id('UserName')
                driver.find_element_by_id('UserName').send_keys(rows[0, 1])
                driver.implicitly_wait(10)
                print("The Gbouser name [" + rows[0, 1] + "] is entered")
                Password = driver.find_element_by_id('Password')
                driver.find_element_by_id('Password').send_keys(rows[1, 1])
                print("The Password [" + rows[1, 1] + "] is entered")
                driver.implicitly_wait(10)
                driver.back()
                i = i + 1
            Login = driver.find_element_by_name('Login').click()
            driver.implicitly_wait(10)

        def tearDown(self):
            self.driver.quit()

    if __name__ == "__main__":
        unittest.main()
Answer: It means there is no `pythondata.xlsx` in `C://Python34//Scripts`.
You can use the `os.path.exists` function to check whether it exists:
import os.path
os.path.exists(file_path)
# update file_path with your own path
This returns True for both files and directories.
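A minimal sketch combining the check with the open call (using the path from the question; forward slashes work on Windows and avoid backslash-escaping issues):
    import os.path
    import xlrd

    path = "C:/Python34/Scripts/pythondata.xlsx"
    if os.path.exists(path):
        wb = xlrd.open_workbook(path)  # xlrd reads both .xls and .xlsx
    else:
        print("File not found: " + path)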
|
Python compare dict with csr_matrices as values
Question: I have two variables `r` and `e` which both are dictionaries, with strings as
keys and csr_matrices as values. Now I want to assert that they are equal. How
do I do this?
**Try 1:**
from scipy.sparse.csr import csr_matrix
import numpy as np
    def test_dict_equals(self):
        r = {'a': csr_matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])}
        e = {'a': csr_matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])}
        self.assertDictEqual(r, e)
This does not work:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().
**Try 2:**
    def test_dict_equals(self):
        r = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        e = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        self.assertListEqual(r.keys(), e.keys())
        for k in r.keys():
            np.testing.assert_allclose(r[k], e[k])
This also does not work:
AssertionError: First sequence is not a list: dict_keys(['a'])
**Try 3:**
    def test_dict_equals(self):
        r = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        e = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        self.assertListEqual(list(r.keys()), list(e.keys()))
        for k in r.keys():
            np.testing.assert_allclose(r[k], e[k])
This also does not work:
TypeError: ufunc 'isinf' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Answer: The `assertDictEqual` function invokes the `__eq__` method of the values. `csr_matrix` has no `__eq__` that returns a plain boolean (comparison is element-wise), which is why the truth value of the result is ambiguous.
You can write a subclass with a sane `__eq__` and then do the assertion. Here is an example for `numpy.ndarray`; the code for `csr_matrix` would be similar.
import copy
import numpy
import unittest
    class SaneEqualityArray(numpy.ndarray):
        def __eq__(self, other):
            return (isinstance(other, SaneEqualityArray) and
                    self.shape == other.shape and
                    numpy.ndarray.__eq__(self, other).all())

    class TestAsserts(unittest.TestCase):
        def testAssert(self):
            tests = [
                [1, 2],
                {'foo': 2},
                [2, 'foo', {'d': 4}],
                SaneEqualityArray([1, 2]),
                {'foo': {'hey': SaneEqualityArray([2, 3])}},
                [{'foo': SaneEqualityArray([3, 4]), 'd': {'doo': 3}},
                 SaneEqualityArray([5, 6]), 34]
            ]
            for t in tests:
                self.assertEqual(t, copy.deepcopy(t))

    if __name__ == '__main__':
        unittest.main()
Hope it helps.:)
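Alternatively, if subclassing feels heavy for a unit test, a sketch that sidesteps `__eq__` entirely (reusing the imports from the question) is to densify the sparse values before comparing, which is fine for small test fixtures:
    def test_dict_equals(self):
        r = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        e = {'a': csr_matrix([[0, 0, 1.01], [0, 1, 0], [1, 0, 0]])}
        self.assertEqual(sorted(r.keys()), sorted(e.keys()))
        for k in r:
            # .toarray() yields dense ndarrays, which assert_allclose handles
            np.testing.assert_allclose(r[k].toarray(), e[k].toarray())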
|
Flask extensions IN PyCharm
Question: I'm having trouble using `flask.ext` imports with the PyCharm IDE. I installed flask-script and flask-bootstrap, but PyCharm is unable to recognize them. I'm getting the following error:
Traceback (most recent call last):
File "/home/brucewilson/PycharmProjects/demo_proj/demo.prj.py", line 5, in <module>
from flask.ext.bootstrap import Bootstrap
File "/home/brucewilson/flasky/venv/local/lib/python2.7/site-packages/flask/exthook.py", line 87, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named flask.ext.bootstrap
Answer: The correct way to import Flask-Bootstrap is with
from flask_bootstrap import Bootstrap
not with
from flask.ext.bootstrap import Bootstrap
As per the [documentation](http://pythonhosted.org/Flask-Bootstrap/basic-
usage.html).
|
How to separate data acquisition, processing, and visualization properly in Python?
Question: I am working on a project, where I want to perform data acquisition, data
processing and GUI visualization (using pyqt with pyqtgraph) all in Python.
Each of the parts is in principle implemented, but the different parts are not
well separated, which makes it difficult to benchmark and improve performance.
So the question is:
**Is there a good way to handle large amounts of data between different parts
of a software?**
I think of something like the following scenario:
* **Acquisition:** get data from some device(s) and store them in some _data container_ that can be accessed from somewhere else. (This part should be able to run without the processing and visualization part. This part is time critical, as I don't want to loose data points!)
* **Processing:** take data from the _data container_ , process it, and store the results in another data container. (Also this part should be able to run without the GUI and with a delay after the acquisition (e.g. process data that I recorded last week).)
* **GUI/visualization:** Take acquired and processed data from _container_ and visualize it.
* **save data:** I want to be able to store/stream certain parts of the data to disk.
When I say "large amounts of data", I mean that I get arrays with
approximately 2 million data points (16bit) per second that need to be
processed and possibly also stored.
Is there any framework for Python that I can use to handle this large amount
of data properly? Maybe in form of a data-server that I can connect to.
Answer: # How much data?
In other words, are you acquiring so much data that you cannot keep all of it
in memory while you need it?
For example, there are some measurements that generate so much data, the only
way to process them is after-the-fact:
1. Acquire the data to storage (usually [RAID0](http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0))
2. Post-process the data
3. Analyze the results
4. Select and archive subsets
### Small Data
If your computer system is able to keep pace with the generation of data, you
can use a separate Python
[**queue**](https://docs.python.org/2/library/queue.html) between each stage.
### Big Data
If your measurements are creating more data than your system can consume, then
you should start by defining a few tiers (maybe just two) of how important
your data is:
* _lossless_ \-- if a point is missing, then you might as well start over
* _lossy_ \-- if points or a set of data is missing, no big deal, just wait for the next update
> One analogy might be a video stream...
>
> * _lossless_ \-- gold-masters for archival
> * _lossy_ \-- YouTube, Netflix, Hulu might drop a few frames, but your
> experience doesn't significantly suffer
>
From your description, the **Acquisition** and **Processing** must be
_lossless_ , while the **GUI/visualization** can be _lossy_.
For _lossless_ data, you should use
[**queues**](https://docs.python.org/2/library/queue.html). For _lossy_ data,
you can use
[**deques**](https://docs.python.org/2/library/collections.html#collections.deque).
# Design
Regardless of your data container, here are three different ways to connect
your stages:
1. [Producer-Consumer](http://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem): P-C mimics a FIFO -- one actor generates data and another consumes it. You can build a chain of producers/consumers to accomplish your goal.
2. [Observer](http://en.wikipedia.org/wiki/Observer_pattern): While P-C is typically one-to-one, the observer pattern can also be one-to-many. If you need multiple actors to react when one source changes, the observer pattern can give you that capability.
3. [Mediator](http://en.wikipedia.org/wiki/Mediator_pattern): Mediators are usually many-to-many. If each actor can cause the others to react, then all of them can coordinate through the mediator.
It seems like you just need a 1-1 relationship between each stage, so a
producer-consumer design looks like it will suit your application.
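A minimal sketch of that wiring (`read_device` and `transform` are hypothetical stand-ins for your acquisition and processing code):
    import collections
    import threading
    try:
        import queue            # Python 3
    except ImportError:
        import Queue as queue   # Python 2

    data_q = queue.Queue()                        # lossless: acquisition -> processing
    display_buf = collections.deque(maxlen=1000)  # lossy: processing -> GUI

    def acquire():
        while True:
            sample = read_device()   # hypothetical: blocks until the device has data
            data_q.put(sample)       # never drops a point

    def process():
        while True:
            sample = data_q.get()
            display_buf.append(transform(sample))  # old frames fall off silently

    for target in (acquire, process):
        t = threading.Thread(target=target)
        t.daemon = True
        t.start()
    # the GUI timer simply renders whatever is currently in display_buf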
|
Logging Nginx and uWSGI web server errors to Sentry
Question: I am currently using Sentry to log application level errors from Django web
application. Could it be possible to expand the scope of the Sentry to include
logging of web server errors (HTTP 408 timeouts and such)?
These requests never hit the application, so the Django + Python logging configuration never sees them. But from a devops perspective these might be equally important error conditions to deal with.
* Does Nginx or uWSGI support logging directly to Sentry with some addons? (Raven logging adapter?)
* Does Sentry support error capture from Apache like log-files, syslog or such?
Answer: You could try [SentryLogs](https://pypi.python.org/pypi/SentryLogs), or you could use a custom Nginx `error_page` that fires a request off to Sentry.
|
Is there a way to call python with xlwings without reopening the Excel file?
Question: I am calling python from Excel using xlwings. I find that when running my
macro, Excel closes and reopens in order to run the code. It functions
correctly but it slows things down. In addition, if the Excel file is unsaved
a dialog will mention that the file is already open and that I will lose
unsaved changes.
Is there a way to call python without reopening the Excel file?
This is my python code (in loaddf.py):
from xlwings import Workbook, Range, Sheet
    def my_macro():
        wb = Workbook.caller()
        Range('A1').value = Range('A1').value + 1
And the VBA code in my Excel file:
Sub loaddfsub()
RunPython ("import loaddf; loaddf.my_macro()")
End Sub
Thanks for the help.
Answer: It seems that under certain circumstances, Excel doesn't register a Workbook properly in the `RunningObjectTable`, a precondition for it to be found via COM. So far I've only noticed this behaviour for Workbooks downloaded from the internet, given that Excel opens them in `Protected View` mode first (depending on settings). However, based on the feedback here, it seems it can also happen under other circumstances, possibly caused by some add-ins or security settings.
I've implemented a fix for this which will be present in `v0.3.1`, but you can
get it right now directly from
[GitHub](https://github.com/ZoomerAnalytics/xlwings). Let me know if you need
help there.
**Update** (16-Jan-2015): xlwings v0.3.1 including this fix has just been
released.
**Update2** (13-Sept-2015): xlwings v0.4.0 should finally fix this bug in a
reliable way.
|
Ipython console in Spyder stuck on "connecting to kernel"
Question: I am new to Python, coming from Matlab, and I have installed the latest version of Python(x,y) (2.7.9.0) on my Windows 8 64-bit PC.
The problem that I have is that, each time I start Spyder, the default IPython
console gets stuck on "connecting to kernel". I can see that a new kernel is
launched each time because a new .json file appears in the directory
".ipython\profile_default\security". I can access this kernel by opening a new
IPython console by clicking on "connect to an existing kernel" and then
browsing to find it, then it works fine (except that the variables I create do
not appear in the variable explorer). I can also quit the kernel from this new
IPython console but this does not solve my problem because when I launch a new
IPython console by clicking on "open an IPython console" or restarting Spyder,
it still hangs on "connecting to kernel" and creates a new .json file.
The closest issue that I could find on a forum is this
[one](http://code.google.com/p/spyderlib/issues/detail?id=1991), the only
difference being that I do not have the "import sitecustomize" error in the
internal console. I have tried uninstalling Python(x,y) and python but to no
avail. Any hint would be really appreciated.
Answer: I ran "Reset Spyder Settings" from the Windows Menu (in the Anaconda section), and that solved it for me.
|
Python Urllib2 SSL error
Question: Python 2.7.9 is now much more strict about SSL certificate verification.
Awesome!
I'm not surprised that programs that were working before are now getting
CERTIFICATE_VERIFY_FAILED errors. But I can't seem to get them working
(without disabling certificate verification entirely).
One program was using urllib2 to connect to Amazon S3 over https.
I download the root CA certificate into a file called "verisign.pem" and try
this:
import urllib2, ssl
context = ssl.create_default_context()
context.load_verify_locations(cafile = "./verisign.pem")
print context.get_ca_certs()
urllib2.urlopen("https://bucket.s3.amazonaws.com/", context=context)
and I still get CERTIFICATE_VERIFY_FAILED errors, even though the root CA is
printed out correctly in line 4.
openssl can connect to this server fine. In fact, here is the command I used
to get the CA cert:
openssl s_client -showcerts -connect bucket.s3.amazonaws.com:443 < /dev/null
I took the last cert in the chain and put it in a PEM file, which openssl
reads fine. It's a Verisign certificate with:
Serial number: 35:97:31:87:f3:87:3a:07:32:7e:ce:58:0c:9b:7e:da
Subject key identifier: 7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33
SHA1 fingerprint: F4:A8:0A:0C:D1:E6:CF:19:0B:8C:BC:6F:BC:99:17:11:D4:82:C9:D0
Any ideas how to get this working with validation enabled?
Answer: To summarize the comments about the cause of the problem and explain the real
problem in more detail:
If you check the trust chain for the OpenSSL client you get the following:
[0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com
[1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3
[2] F4:A8:0A:0C:D1:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5
[OT] A1:DB:63:93:91:... /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
The first certificate [0] is the leaf certificate sent by the server. The following certificates [1] and [2] are chain certificates sent by the server. The last certificate [OT] is the trusted root certificate, which is not sent by the server but sits in the local store of trusted CAs. Each certificate in the chain is signed by the next one, and the last certificate [OT] is trusted, so the trust chain is complete.
If you instead check the trust chain with a browser (e.g. Google Chrome using the NSS library) you get the following chain:
[0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com
[1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3
[NT] 4E:B6:D5:78:49:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5
Here [0] and [1] are again sent by the server, but [NT] is the trusted root
certificate. While from the subject this looks exactly like the chain certificate [2], the fingerprint says that the certificates are different. If you take a closer look at certificates [2] and [NT], you will see that the public key inside is the same, and thus both [2] and [NT] can be used to verify the signature on [1] and hence to build the trust chain.
This means that, while the server sends the same certificate chain in all cases, there are multiple ways to verify the chain up to a trusted root certificate. How this is done depends on the SSL library and on the known trusted root certificates:
[0] (*.s3.amazonaws.com)
|
[1] (Verisign G3) --------------------------\
| |
/------------------ [2] (Verisign G5 F4:A8:0A:0C:D1...) |
| |
| certificates sent by server |
.....|...............................................................|................
| locally trusted root certificates |
| |
[OT] Public Primary Certification Authority [NT] Verisign G5 4E:B6:D5:78:49
OpenSSL library Google Chrome (NSS library)
But the question remains why your verification was unsuccessful. What you did was to take the trusted root certificate used by the browser (Verisign G5 4E:B6:D5:78:49) and use it together with OpenSSL. But verification in the browser (NSS) and in OpenSSL works slightly differently:
* NSS: build the trust chain from the certificates sent by the server. Stop building the chain once we reach a certificate signed by any of the locally trusted root certificates.
* OpenSSL: build the trust chain from the certificates sent by the server. After this is done, check if we have a trusted root certificate signing the last certificate in the chain.
Because of this subtle difference OpenSSL is not able to verify the chain [0],[1],[2] against the root certificate [NT], because that certificate does not sign the last element of the chain ([2]) but [1] instead. If the server sent only a chain of [0],[1], the verification would succeed.
This is a [long known bug](http://rt.openssl.org/Ticket/Display.html?id=2732&user=guest&pass=guest), there exist [patches](https://www.google.de/?q=patch%20x509_v_flag_trusted_first), and the issue is finally addressed in OpenSSL 1.0.2 with the introduction of the `X509_V_FLAG_TRUSTED_FIRST` option.
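Until then, a workaround consistent with the analysis above is to hand OpenSSL a root that signs the last certificate of the served chain, i.e. put [OT] ("Class 3 Public Primary Certification Authority") into the PEM file instead of the G5 certificate. A sketch (the filename is hypothetical):
    import urllib2, ssl

    context = ssl.create_default_context()
    # [OT] signs [2], so OpenSSL can now complete the chain [0],[1],[2],[OT]
    context.load_verify_locations(cafile="./verisign-class3-g1-root.pem")
    urllib2.urlopen("https://bucket.s3.amazonaws.com/", context=context)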
|
Performance of inline Python function definitions
Question: A general question for someone that knows function definition internals better
than I do.
In general, is there a performance trade off to doing something like this:
    def my_function():
        def other_function():
            pass
        # do some stuff
        other_function()
Versus:
    def other_function():
        pass

    def my_function():
        # do some stuff
        other_function()
I've seen developers inline functions before to keep a small, single-use function close to the code that actually uses it, but I always wondered if there was a memory (or compute) performance penalty for doing something like this.
Thoughts?
Thoughts?
Answer: Splitting larger functions into more readable, smaller functions is part of
writing Pythonic code -- it should be obvious what you're trying to accomplish
and smaller functions are easier to read, check for errors, maintain, and
reuse.
As always, "which has better performance" questions should always be solved by
[profiling the code](https://docs.python.org/3/library/profile.html), which is
to say that it's often dependent on the signatures of the methods and what
your code is doing.
e.g. if you're passing a large dictionary to a separate function instead of
referencing a frame local, you'll end up with different performance
characteristics than calling a `void` function from another.
For example, here's some trivial behavior:
import profile
import dis
    def callee():
        for x in range(10000):
            x += x
        print("let's have some tea now")

    def caller():
        callee()

    profile.run('caller()')
* * *
let's have some tea now
26 function calls in 0.002 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
2 0.000 0.000 0.000 0.000 :0(decode)
2 0.000 0.000 0.000 0.000 :0(getpid)
2 0.000 0.000 0.000 0.000 :0(isinstance)
1 0.000 0.000 0.000 0.000 :0(range)
1 0.000 0.000 0.000 0.000 :0(setprofile)
2 0.000 0.000 0.000 0.000 :0(time)
2 0.000 0.000 0.000 0.000 :0(utf_8_decode)
2 0.000 0.000 0.000 0.000 :0(write)
1 0.002 0.002 0.002 0.002 <ipython-input-3-98c87a49b247>:4(callee)
1 0.000 0.000 0.002 0.002 <ipython-input-3-98c87a49b247>:9(caller)
1 0.000 0.000 0.002 0.002 <string>:1(<module>)
2 0.000 0.000 0.000 0.000 iostream.py:196(write)
2 0.000 0.000 0.000 0.000 iostream.py:86(_is_master_process)
2 0.000 0.000 0.000 0.000 iostream.py:95(_check_mp_mode)
1 0.000 0.000 0.002 0.002 profile:0(caller())
0 0.000 0.000 profile:0(profiler)
2 0.000 0.000 0.000 0.000 utf_8.py:15(decode)
vs.
import profile
import dis
    def all_in_one():
        def passer():
            pass
        passer()
        for x in range(10000):
            x += x
        print("let's have some tea now")

    profile.run('all_in_one()')
* * *
let's have some tea now
26 function calls in 0.002 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
2 0.000 0.000 0.000 0.000 :0(decode)
2 0.000 0.000 0.000 0.000 :0(getpid)
2 0.000 0.000 0.000 0.000 :0(isinstance)
1 0.000 0.000 0.000 0.000 :0(range)
1 0.000 0.000 0.000 0.000 :0(setprofile)
2 0.000 0.000 0.000 0.000 :0(time)
2 0.000 0.000 0.000 0.000 :0(utf_8_decode)
2 0.000 0.000 0.000 0.000 :0(write)
1 0.002 0.002 0.002 0.002 <ipython-input-3-98c87a49b247>:4(callee)
1 0.000 0.000 0.002 0.002 <ipython-input-3-98c87a49b247>:9(caller)
1 0.000 0.000 0.002 0.002 <string>:1(<module>)
2 0.000 0.000 0.000 0.000 iostream.py:196(write)
2 0.000 0.000 0.000 0.000 iostream.py:86(_is_master_process)
2 0.000 0.000 0.000 0.000 iostream.py:95(_check_mp_mode)
1 0.000 0.000 0.002 0.002 profile:0(caller())
0 0.000 0.000 profile:0(profiler)
2 0.000 0.000 0.000 0.000 utf_8.py:15(decode)
The two use the same number of function calls and there's no performance
difference, which backs up my claim that it really matters to test in specific
circumstances.
You can see that I have an unused import for the
[disassembly](https://docs.python.org/3/library/dis.html) module. This is
another helpful module that will allow you to see what your code is doing (try
`dis.dis(my_function)`). I'd post a profile of the testcode I generated, but
it would only show you more details that are not relevant to solving the
problem or learning about what's actually happening in your code.
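One concrete illustration of the only real (and tiny) runtime cost of the inline style: the nested `def` statement is executed on every call of the outer function, which `dis` makes visible:
    import dis

    def my_function():
        def other_function():
            pass
        other_function()

    dis.dis(my_function)
    # the listing begins with LOAD_CONST / MAKE_FUNCTION / STORE_FAST: a fresh
    # function object for other_function is created on every call to
    # my_function, whereas a module-level other_function is created once at
    # import time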
|
Understanding argument parsing with argparse in Python
Question: I am now starting to explore Python and was testing how arguments can be passed to a script with argparse. I wrote a sample script as follows, where the arguments passed through flags -i and -o are compulsory and flag -u is optional:
#!/usr/bin/python
import sys
import argparse
## set usage options and define arguments
usage = "usage: %prog [options]"
parser = argparse.ArgumentParser(usage)
parser.add_argument("-i", action="store", dest="input", help="input file")
parser.add_argument("-o", action="store", dest="output", help="output file")
parser.add_argument("-u", action="store_true", dest="isunfolded", default=False, help="optional flag")
args = parser.parse_args()
print len(sys.argv)
    if len(sys.argv) < 2:
        # parser.print_help()
        print 'Incorrect number of params'
        exit()
    else:
        print "Correct number of params: ", len(sys.argv)
Running this script:
> ./test-args.py -i a -o b
prints:
5
Correct number of params: 5
I understand the printing statement in the if conditional (5 is higher than
2), however, after reading the argparse documentation
(<https://docs.python.org/3/library/argparse.html>) I still don't quite
understand why -i and -o flags are counted as arguments. This behaviour seems
to be quite different from e.g. perl Getopt::Std, which I'm more used to.
So, the question is what is the best way of parsing arguments in Python and to
evaluate the presence of mandatory arguments (without using required=True)
Answer: It gives you 5 because `sys.argv` contains the raw input passed to python as
arguments (the script name and 4 arguments).
You can see `argparse` as an abstraction for this, so once you use it, you can
forget about `sys.argv`. In most cases it is better not to mix these two
methods.
`argparse` is a nice way to handle arguments; I don't quite get why you don't want to use the `required` option when that's exactly the way to go. The alternative is to parse `sys.argv` yourself (regex, maybe?) and drop `argparse` altogether.
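For reference, here is a sketch of the question's parser with `required=True`:
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="input", required=True, help="input file")
    parser.add_argument("-o", dest="output", required=True, help="output file")
    parser.add_argument("-u", action="store_true", dest="isunfolded",
                        help="optional flag")
    args = parser.parse_args()
    # a missing -i or -o now makes parse_args() exit with the usage message
    # and an error line -- no manual len(sys.argv) check is needed
If you really want to avoid `required=True`, checking `args.input is None` and calling `parser.error("...")` gives the same exit-with-usage behaviour.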
|
Filter on/Order by Postgres range type in SQLAlchemy
Question: SQLAlchemy supports Postgres range types, as described
[here](http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#range-
types). It uses the `postgresql+psycopg2` dialect for Postgres communication.
[These
testcases](https://bitbucket.org/zzzeek/sqlalchemy/src/55eacc8dbea3c3f98197bde9034fd6558fb2bc09/test/dialect/postgresql/test_types.py?at=master#cl-1378)
give usage examples for the range types in SQLALchemy.
How can I filter by, or order by, one component (lower or upper) of such a
range field in SQLAlchemy?
Using the example from the first link
from psycopg2.extras import DateTimeRange
from sqlalchemy.dialects.postgresql import TSRANGE
    class RoomBooking(Base):
        __tablename__ = 'room_booking'

        room = Column(Integer(), primary_key=True)
        during = Column(TSRANGE())

    booking = RoomBooking(
        room=101,
        during=DateTimeRange(datetime(2013, 3, 23), None)
    )
I would, e.g., like to filter on bookings with a during that begins on a given
datetime or order the bookings by the start of the datetime.
As such I'm looking to generate roughly this SQL:
SELECT room, during
FROM room_booking
WHERE lower(during) = foo
ORDER BY upper(during)
I have tried constructs like
RoomBooking.query.filter(RoomBooking.during.lower == foo).order_by(RoomBooking.during.upper)
but recognize that this is likely not working because lower is an attribute on
the python object and not associated with the underlying table column.
One possible solution to this might be finding a way to use the
[`upper()/lower()`](http://www.postgresql.org/docs/9.2/static/functions-
range.html#RANGE-FUNCTIONS-TABLE) range functions from SQLAlchemy.
Answer: One way to do this is to use the already existing `func.lower()/func.upper()`
methods in sqlalchemy:
from sqlalchemy import func
RoomBooking.query.filter(func.lower(RoomBooking.during) == foo).order_by(func.upper(RoomBooking.during))
`func` is a generic SQL function generator, so `func.lower()` and `func.upper()` simply render as the corresponding Postgres function calls; any other Postgres function that SQLAlchemy does not expose directly can be produced in the same manner.
|
Uninstalling Python - Do I lose my installed packages and code?
Question: I have been having problems importing Tkinter. I have done some research here and found that it's because I have had both 64-bit and 32-bit Python on my machine. I currently use the 32-bit version, but Tkinter is pointing to the 64-bit one. I think the easiest fix is to uninstall Python and reinstall it. Will I lose all my downloaded libraries and the code I've written if I do this?
It's python 2.7 on Windows 7.
Answer: All the downloaded libraries are in `C:\Python27\Lib\site-packages`; you can check this folder before you uninstall a version of Python. Your own code lives wherever you saved it, so uninstalling Python will not touch it.
I agree with Rinzler: each Python version will have its own `Tkinter`, so maybe it's just a problem with your IDE's interpreter choice.
|
how to use python to do a series of commands in terminal
Question: I put the following in my `python_go.py`
import os
os.system("cd some_dir") # This is the directory storing an existing virtual environment
os.system(". activate") #because I want to activate the virtual environment
os.system("cd another_dir") #this the directory I can start my work
I hoped that running `python_go.py` would do the work mentioned above, but when I run it, only the first step seems to happen; the rest of it, e.g. `. activate`, seems not to work.
Can someone tell me how to do it? Thank you!!
Answer: If you're relying on os.system(". activate") working because a previous call changed into some_dir, that won't work: the current directory does not persist across calls to os.system(), since each call spawns its own shell.
You're going to be better off writing a shell script that aggregates all three of the commands you want to run and executing that once from the Python script. Otherwise, set up the environment for the parent Python process using os.chdir() before calling os.system for the activate call. Also, the os.system(". activate") call won't do what you want because the "dot space" (source) notation loads information into a shell that goes away when the os.system call finishes.
Edited (to your followup comment):
Your shell script should look like this (do_activate.sh):
cd some_dir
. activate
cd another_dir
and the python code like this:
os.system("db_activate.sh").
Keep in mind that whatever environment variables were saved by ". activate"
won't persist after the os.system call.
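An alternative sketch that skips the helper script by chaining everything into a single shell invocation (`your_command` is a hypothetical placeholder for whatever you want to run in another_dir):
    import subprocess

    # one shell process, so the cd and the sourced environment persist
    # across all of the chained commands
    subprocess.call("cd some_dir && . ./activate && cd another_dir && your_command",
                    shell=True)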
|
SQLAlchemy inheritance not working
Question: I'm using Flask and SQLAlchemy. I have used my own abstract base class and
inheritance. When I try to use my models in the python shell I get the
following error:
>>> from schedule.models import Task
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/teelf/projects/schedule/server/schedule/models.py", line 14, in <module>
class User(Base):
File "/home/teelf/projects/schedule/server/venv/lib/python3.4/site-packages/flask_sqlalchemy/__init__.py", line 536, in __init__
DeclarativeMeta.__init__(self, name, bases, d)
File "/home/teelf/projects/schedule/server/venv/lib/python3.4/site-packages/sqlalchemy/ext/declarative/api.py", line 55, in __init__
_as_declarative(cls, classname, cls.__dict__)
File "/home/teelf/projects/schedule/server/venv/lib/python3.4/site-packages/sqlalchemy/ext/declarative/base.py", line 254, in _as_declarative
**table_kw)
File "/home/teelf/projects/schedule/server/venv/lib/python3.4/site-packages/sqlalchemy/sql/schema.py", line 393, in __new__
"existing Table object." % key)
sqlalchemy.exc.InvalidRequestError: Table 'user' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns
on an existing Table object.
How do I fix this?
**Code:**
**manage.py** :
#!/usr/bin/env python
import os, sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
from server import create_app
from database import db
app = create_app("config")
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command("db", MigrateCommand)
if __name__ == "__main__":
manager.run()
**__init__.py** :
from flask import Flask
from flask.ext.login import LoginManager
from database import db
from api import api
from server.schedule.controllers import mod_schedule
def create_app(config):
# initialize Flask
app = Flask(__name__)
# load configuration file
app.config.from_object(config)
# initialize database
db.init_app(app)
api.init_app(app)
# initialize flask-login
login_manager = LoginManager(app)
# register blueprints
app.register_blueprint(mod_schedule)
return app
**database.py** :
from flask.ext.sqlalchemy import SQLAlchemy
db = SQLAlchemy()
**models.py** :
from sqlalchemy.dialects.postgresql import UUID
from database import db
class Base(db.Model):
__abstract__ = True
id = db.Column(UUID, primary_key=True)
class User(Base):
__tablename__ = "user"
username = db.Column(db.String)
password = db.Column(db.String)
first_name = db.Column(db.String)
last_name = db.Column(db.String)
authenticated = db.Column(db.Boolean, default=False)
def __init__(self, first_name, last_name, username):
self.first_name = first_name
self.last_name = last_name
self.username = username
def is_active(self):
""" All users are active """
return True
def get_id(self):
return self.username
def is_authenticated(self):
return self.authenticated
def is_anonymous(self):
""" Anonymous users are not supported"""
return False
**controllers.py** :
from flask import Blueprint
from flask.ext.restful import reqparse, Resource
from api import api
from server.schedule.models import User
mod_schedule = Blueprint("schedule", __name__, url_prefix="/schedule")
class Task(Resource):
def put(self):
pass
def get(self):
pass
def delete(self):
pass
api.add_resource(Task, "/tasks/<int:id>", endpoint="task")
Answer: Try adding
    __table_args__ = {'extend_existing': True}
to your `User` class, right under `__tablename__`. Note that the likely underlying cause is that models.py gets imported under two different module paths (`schedule.models` in the shell versus `server.schedule.models` in controllers.py), so the `user` table is registered twice on the same metadata.
Cheers
|
parsing and saving csv file in python and csv module
Question: This script is meant to parse and sort a list from a CSV file and save it to a newly created CSV file, including headers.
I am trying to include a write function to save the output of this parser to a new CSV file. The code below creates a CSV, but it records the headers only, and in one column.
Here's the input:
Timestamp,Session Index,Event,Description,Version,Platform,Device,User ID,Params,
"Dec 27, 2014 05:26 AM",1,NoRegister,,1.4.0,iPhone,Apple iPhone 5c (GSM),,{},
"Dec 27, 2014 05:24 AM",1,NoRegister,,1.4.0,iPhone,Apple iPhone 5c (GSM),,{},
"Dec 27, 2014 05:23 AM",1,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),,{ UserID : 54807; tabName : Home},
"Dec 27, 2014 05:23 AM",2,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),,{ UserID : 54807; tabName : Home},
"Dec 27, 2014 05:23 AM",3,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),,{ UserID : 54807; tabName : QuickAndEasy},
Here's the output I'd like to get saved to csv:
Timestamp,Session Index,Event,Description,Version,Platform,Device,User ID,TabName,RecipeID,Type,SearchWord,IsFromLabel,
"Dec 27, 2014 05:26 AM",1,NoRegister,,1.4.0,iPhone,Apple iPhone 5c (GSM),,,,,,,
"Dec 27, 2014 05:24 AM",1,NoRegister,,1.4.0,iPhone,Apple iPhone 5c (GSM),,,,,,,
"Dec 27, 2014 05:23 AM",1,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),54807,Home,,,,,
"Dec 27, 2014 05:23 AM",2,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),54807,Home,,,,,
"Dec 27, 2014 05:23 AM",3,HomeTab,Which tab the user viewed ,1.4.0,iPhone,Apple iPhone 5s (GSM),54807,QuickAndEasy,,,,,
The code:
import csv
    def printfields(keys, linesets):
        output_line = ""
        for key in keys:
            if key in linesets:
                output_line += linesets[key] + ","
            else:
                output_line += ","
        print output_line

    def csvwriter(reader, path):
        """
        write reader to a csv file path
        """
        with open(path, "w") as csv_file:
            writer = csv.writer(csv_file, delimiter=",")
            for line1 in line:
                if line1 in path:
                    writer.writerow(line1)

    if __name__ == "__main__":
        fields = [
            "UserID", "tabName", "RecipeID", "type", "searchWord", "isFromLabel", "targetUID"
        ]
        mappedLines = {}
        with open('test.csv', 'r') as f:
            reader = csv.DictReader(f)
            for line in reader:
                fieldPairs = [
                    p for p in
                    line['Params'].strip().strip('}').strip('{').strip().split(';')
                    if p
                ]
                lineDict = {
                    pair.split()[0].strip(): pair.split(':')[1].strip()
                    for pair in fieldPairs
                }
                mappedLines[reader.line_num] = lineDict
        path = "output.csv"
        csvwriter(reader, path)
        for key in sorted(mappedLines.keys()):
            linesets = mappedLines[key]
            printfields(fields, linesets)
Answer: `csvwriter` references the name `line`, which is not an argument to the function; it only exists as a variable leaked from the loop in the main block. How are you expecting it to be provided?
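A sketch of the function with that bug removed. It takes the rows to write as an explicit argument instead of relying on a leaked loop variable; assembling each output row from `fields` and `mappedLines` is left to the caller:
    def csvwriter(rows, path):
        """Write an iterable of row lists to a CSV file at path."""
        with open(path, "wb") as csv_file:  # binary mode for the csv module on Python 2
            writer = csv.writer(csv_file, delimiter=",")
            for row in rows:
                writer.writerow(row)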
|
Use of scipy.optimize.minimize in Neural Network
Question: I am trying to use a backpropagation neural network for multiclass classification. I have found [this code](http://danielfrg.com/blog/2013/07/03/basic-neural-network-python/#disqus_thread) and tried to adapt it. It is based on the lectures of [Machine Learning on Coursera by Andrew Ng](http://sun.stanford.edu/~couvidat/MachineLearning/ex4.pdf).
I don't understand exactly the implementation of `scipy.optimize.minimize`
function here. It is used just once in the code. Is it iteratively updating the weights of the network? How can I visualize (plot) its performance to see when it converges?
Using this function, what parameters can I adjust to achieve better performance? I found [here](http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Neural_Network_Basics) a list of common parameters:
* Number of neurons in the hidden layer: this is `hidden_layer_size=25` in my code
* **Learning rate: can I still adjust that using built-in minimization function?**
* **Momentum:** is that `reg_lambda=0` in my case? Regularization parameter to avoid overfitting, right?
* Epoch: `maxiter=500`
Here is my training data (target class is in the last column):
* * *
65535, 3670, 65535, 3885, -0.73, 1
65535, 3962, 65535, 3556, -0.72, 1
65535, 3573, 65535, 3529, -0.61, 1
3758, 3123, 4117, 3173, -0.21, 0
3906, 3119, 4288, 3135, -0.28, 0
3750, 3073, 4080, 3212, -0.26, 0
65535, 3458, 65535, 3330, -0.85, 2
65535, 3315, 65535, 3306, -0.87, 2
65535, 3950, 65535, 3613, -0.84, 2
65535, 32576, 65535, 19613, -0.35, 3
65535, 16657, 65535, 16618, -0.37, 3
65535, 16657, 65535, 16618, -0.32, 3
The dependencies are so obvious, I think it should be easy to classify...
But the results are terrible. I get an accuracy of 0.6 to 0.8, which is absolutely inappropriate for my application. I know I would normally need more data, but I would already be happy if I could at least fit the training data (without taking potential overfitting into account).
Here is the code:
import numpy as np
from scipy import optimize
from sklearn import cross_validation
from sklearn.metrics import accuracy_score
import math
class NN_1HL(object):
def __init__(self, reg_lambda=0, epsilon_init=0.12, hidden_layer_size=25, opti_method='TNC', maxiter=500):
self.reg_lambda = reg_lambda
self.epsilon_init = epsilon_init
self.hidden_layer_size = hidden_layer_size
self.activation_func = self.sigmoid
self.activation_func_prime = self.sigmoid_prime
self.method = opti_method
self.maxiter = maxiter
def sigmoid(self, z):
return 1 / (1 + np.exp(-z))
def sigmoid_prime(self, z):
sig = self.sigmoid(z)
return sig * (1 - sig)
def sumsqr(self, a):
return np.sum(a ** 2)
def rand_init(self, l_in, l_out):
self.epsilon_init = (math.sqrt(6))/(math.sqrt(l_in + l_out))
return np.random.rand(l_out, l_in + 1) * 2 * self.epsilon_init - self.epsilon_init
def pack_thetas(self, t1, t2):
return np.concatenate((t1.reshape(-1), t2.reshape(-1)))
def unpack_thetas(self, thetas, input_layer_size, hidden_layer_size, num_labels):
t1_start = 0
t1_end = hidden_layer_size * (input_layer_size + 1)
t1 = thetas[t1_start:t1_end].reshape((hidden_layer_size, input_layer_size + 1))
t2 = thetas[t1_end:].reshape((num_labels, hidden_layer_size + 1))
return t1, t2
def _forward(self, X, t1, t2):
m = X.shape[0]
ones = None
if len(X.shape) == 1:
ones = np.array(1).reshape(1,)
else:
ones = np.ones(m).reshape(m,1)
# Input layer
a1 = np.hstack((ones, X))
# Hidden Layer
z2 = np.dot(t1, a1.T)
a2 = self.activation_func(z2)
a2 = np.hstack((ones, a2.T))
# Output layer
z3 = np.dot(t2, a2.T)
a3 = self.activation_func(z3)
return a1, z2, a2, z3, a3
def function(self, thetas, input_layer_size, hidden_layer_size, num_labels, X, y, reg_lambda):
t1, t2 = self.unpack_thetas(thetas, input_layer_size, hidden_layer_size, num_labels)
m = X.shape[0]
Y = np.eye(num_labels)[y]
_, _, _, _, h = self._forward(X, t1, t2)
costPositive = -Y * np.log(h).T
costNegative = (1 - Y) * np.log(1 - h).T
cost = costPositive - costNegative
J = np.sum(cost) / m
if reg_lambda != 0:
t1f = t1[:, 1:]
t2f = t2[:, 1:]
reg = (self.reg_lambda / (2 * m)) * (self.sumsqr(t1f) + self.sumsqr(t2f))
J = J + reg
return J
def function_prime(self, thetas, input_layer_size, hidden_layer_size, num_labels, X, y, reg_lambda):
t1, t2 = self.unpack_thetas(thetas, input_layer_size, hidden_layer_size, num_labels)
m = X.shape[0]
t1f = t1[:, 1:]
t2f = t2[:, 1:]
Y = np.eye(num_labels)[y]
Delta1, Delta2 = 0, 0
for i, row in enumerate(X):
a1, z2, a2, z3, a3 = self._forward(row, t1, t2)
# Backprop
d3 = a3 - Y[i, :].T
d2 = np.dot(t2f.T, d3) * self.activation_func_prime(z2)
Delta2 += np.dot(d3[np.newaxis].T, a2[np.newaxis])
Delta1 += np.dot(d2[np.newaxis].T, a1[np.newaxis])
Theta1_grad = (1 / m) * Delta1
Theta2_grad = (1 / m) * Delta2
if reg_lambda != 0:
Theta1_grad[:, 1:] = Theta1_grad[:, 1:] + (reg_lambda / m) * t1f
Theta2_grad[:, 1:] = Theta2_grad[:, 1:] + (reg_lambda / m) * t2f
return self.pack_thetas(Theta1_grad, Theta2_grad)
def fit(self, X, y):
num_features = X.shape[0]
input_layer_size = X.shape[1]
num_labels = len(set(y))
theta1_0 = self.rand_init(input_layer_size, self.hidden_layer_size)
theta2_0 = self.rand_init(self.hidden_layer_size, num_labels)
thetas0 = self.pack_thetas(theta1_0, theta2_0)
options = {'maxiter': self.maxiter}
_res = optimize.minimize(self.function, thetas0, jac=self.function_prime, method=self.method,
args=(input_layer_size, self.hidden_layer_size, num_labels, X, y, 0), options=options)
self.t1, self.t2 = self.unpack_thetas(_res.x, input_layer_size, self.hidden_layer_size, num_labels)
np.savetxt("weights_t1.txt", self.t1, newline="\n")
np.savetxt("weights_t2.txt", self.t2, newline="\n")
def predict(self, X):
return self.predict_proba(X).argmax(0)
def predict_proba(self, X):
_, _, _, _, h = self._forward(X, self.t1, self.t2)
return h
##################
# IR data #
##################
values = np.loadtxt('infrared_data.txt', delimiter=', ', usecols=[0,1,2,3,4])
targets = np.loadtxt('infrared_data.txt', delimiter=', ', dtype=(int), usecols=[5])
X_train, X_test, y_train, y_test = cross_validation.train_test_split(values, targets, test_size=0.4)
nn = NN_1HL()
nn.fit(values, targets)
print("Accuracy of classification: "+str(accuracy_score(y_test, nn.predict(X_test))))
Answer: In the given code, [`scipy.optimize.minimize`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize) iteratively minimizes the cost function given its derivative (the Jacobian). According to the documentation, you can specify a `callback` argument: a function that will be called after each iteration. This lets you measure performance over iterations, though I'm not sure whether it will let you halt the optimization process.
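A sketch of that, spliced into `fit` from the question (every name besides `history` and `log_cost` already exists there; whether the callback actually fires depends on the chosen method supporting it):
    history = []

    def log_cost(thetas):
        # minimize() passes the current parameter vector after each iteration;
        # re-evaluate the cost here so it can be plotted later
        J = self.function(thetas, input_layer_size, self.hidden_layer_size,
                          num_labels, X, y, 0)
        history.append(J)

    _res = optimize.minimize(self.function, thetas0, jac=self.function_prime,
                             method=self.method, callback=log_cost,
                             args=(input_layer_size, self.hidden_layer_size,
                                   num_labels, X, y, 0),
                             options={'maxiter': self.maxiter})

    # afterwards, e.g. with matplotlib: plt.plot(history) shows the convergence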
All parameters you listed are hyperparameters, it's hard to optimize them
directly:
_Number of neurons in the hidden layer_ is a discrete valued parameters, and,
thus, is not optimizable via gradient techniques. Moreover, it affects
NeuralNet architecture, so you can't optimize it while training the net. What
you can do, though, is to use some higher-level routine to search for possible
options, like exhaustive grid search with cross-validation (for example look
at [GridSearchCV](http://scikit-
learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html#sklearn.grid_search.GridSearchCV))
or other tools for hyperparameter search
([hyperopt](https://github.com/hyperopt/hyperopt),
[spearmint](https://github.com/JasperSnoek/spearmint),
[MOE](https://github.com/Yelp/MOE), etc).
_Learning rate_ does not seem to be customizable for most of the optimization methods available. But, actually, the learning rate in gradient descent is just Newton's method with the Hessian "approximated" by `(1/eta) * I`, a diagonal matrix with inverted learning rates on the main diagonal. So you can try Hessian-based methods with this heuristic in mind.
_Momentum_ is completely unrelated to regularization (that would be `reg_lambda`). Momentum is an optimization technique, and since you delegate the optimization to scipy, it is unavailable to you.
|
What does cmdclass do in Pythons setuptools?
Question: I've recently got a pull request which added
    class build_ext(_build_ext):
        'to install numpy'
        def finalize_options(self):
            _build_ext.finalize_options(self)
            # Prevent numpy from thinking it is still in its setup process:
            __builtins__.__NUMPY_SETUP__ = False
            import numpy
            self.include_dirs.append(numpy.get_include())
to my `setup.py` resulting in:
    from setuptools.command.build_ext import build_ext as _build_ext
    try:
        from setuptools import setup
    except ImportError:
        from distutils.core import setup

    class build_ext(_build_ext):
        'to install numpy'
        def finalize_options(self):
            _build_ext.finalize_options(self)
            # Prevent numpy from thinking it is still in its setup process:
            __builtins__.__NUMPY_SETUP__ = False
            import numpy
            self.include_dirs.append(numpy.get_include())
    config = {
        'cmdclass': {'build_ext': build_ext},  # numpy hack
        'setup_requires': ['numpy'],           # numpy hack
        'name': 'nntoolkit',
        'version': '0.1.25',
        'author': 'Martin Thoma',
        'author_email': '[email protected]',
        'packages': ['nntoolkit'],
        'scripts': ['bin/nntoolkit'],
        'url': 'https://github.com/MartinThoma/nntoolkit',
        'license': 'MIT',
        'description': 'Neural Network Toolkit',
        'long_description': """...""",
        'install_requires': [
            "argparse",
            "theano",
            "nose",
            "natsort",
            "PyYAML",
            "matplotlib",
            "h5py",
            "numpy",
            "Cython"
        ],
        'keywords': ['Neural Networks', 'Feed-Forward', 'NN', 'MLP'],
        'download_url': 'https://github.com/MartinThoma/nntoolkit',
        'classifiers': ['Development Status :: 3 - Alpha'],
        'zip_safe': False,
        'test_suite': 'nose.collector'
    }

    setup(**config)
What does it do?
The [documentation](https://docs.python.org/2/distutils/apiref.html) only
states:
> cmdclass: A mapping of command names to
> [**`Command`**](https://docs.python.org/2/distutils/apiref.html?highlight=cmdclass#distutils.core.Command)
> subclasses (a dictionary)
Answer: `cmdclass` maps distutils/setuptools command names to custom `Command` subclasses, so that when the `build_ext` command runs (the step that compiles C/C++ extension modules), your subclass is used instead of the stock one. NumPy's core is written in C, so building extensions against it needs its headers; the overridden `finalize_options` delays `import numpy` until `setup_requires` has made it available and then appends `numpy.get_include()` to the include path before anything is compiled.
Details in this blog post: <http://www.sadafnoor.com/blog/how-to-automate-numpy-in-setuptool/>
|
Error in printing scraped webpage through bs4
Question: **Code:**
import requests
import urllib
from bs4 import BeautifulSoup
page1 = urllib.request.urlopen("http://en.wikipedia.org/wiki/List_of_human_stampedes")
soup = BeautifulSoup(page1)
print(soup.get_text())
print(soup.prettify())
**Error:**
Traceback (most recent call last):
File "C:\Users\sony\Desktop\Trash\Crawler Try\try2.py", line 9, in <module>
print(soup.get_text())
File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u014d' in position 10487: character maps to <undefined>
I think the problem lies mainly with the urllib package. Here I am using the Python 3 urllib package; the urlopen syntax changed from version 2 to 3, which may be the cause of the error, but that said, I have included only the latest syntax.
Python version: 3.4
Answer: since you are importing `requests` you can use it instead of urllib like this:
import requests
from bs4 import BeautifulSoup
page1 = requests.get("http://en.wikipedia.org/wiki/List_of_human_stampedes")
soup = BeautifulSoup(page1.text)
print(soup.get_text())
print(soup.prettify())
Your problem is that Python cannot encode some characters from the page you are scraping to your console's charset when printing. For some more information see here:
<http://stackoverflow.com/a/16347188/2638310>
Since the wikipedia page is in UTF-8, it seems that BeautifulSoup is guessing
the encoding incorrectly. Try passing the `from_encoding` argument in your
code like this:
soup = BeautifulSoup(page1.text, from_encoding="UTF-8")
For more on encodings in BeautifulSoup have a look here:
<http://www.crummy.com/software/BeautifulSoup/bs4/doc/#encodings>
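If you'd rather attack the reported error directly (the traceback shows `print` failing to *encode* to the Windows console's cp1252 charset, not a parsing problem), a sketch:
    text = soup.get_text()
    # replace characters that the console's cp1252 codec cannot represent
    print(text.encode("cp1252", errors="replace").decode("cp1252"))
    # alternatively, set PYTHONIOENCODING=cp1252:replace before starting Python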
|
Can I defragment the heap in a Python script?
Question: I'm running a malware analysis experiment in Python and I need to create a big object (512 MB, I think). While testing locally (on a 64-bit system) there is no problem, but when I try to run it on a remote 32-bit system (so the process has at most 4 GB of address space), I get a MemoryError (the stack trace doesn't give much information). The big allocation is:
from sklearn.grid_search import GridSearchCV
...
model = GridSearchCV(svm.LinearSVC(), {'C':numpy.logspace(-3,3,7)})
model.fit(train_vectors, labels)
I asked the sysadmin of the system, and he tells me it's probably the previous allocations that have fragmented the heap, so that the big allocation is no longer possible.
I've tried to run gc.collect() right before the call that causes the big
allocation, but the problem persist.
I don't think there's a way to make the big allocation smaller.
Any suggestions on how I could defragment the heap?
Edit: I managed to make the training vectors a lot smaller. Now I need to see if the malware detection technique still works. If it does, my problem should be solved. The reason was that the vectors were numpy arrays, and just using the tolist() function made them a lot smaller.
Edit 2: Just using lists (of floats) wasn't enough. Because the values were integers anyway, I cast the floats to ints, making the vectors a little smaller. This had a big impact on the memory usage. I was saving the vectors using cPickle and retrieving them using the same module. I'm guessing there's a bug in that module somewhere that causes a memory leak when loading floats, one that is not present for ints.
TL;DR: I didn't find a way to defragment the heap, but was able to locate the
problem using memory-profiler (which I liked best) and heapy. I solved the
problem by making the vectors smaller and changing the data type to ints
(instead of floats). I suspect the cPickle module I used to store/load the
vectors has a memory leak when using floats, which is why I ran out of memory.
Answer: You should use a memory profiler.
From [Which Python memory profiler is
recommended?](http://stackoverflow.com/questions/110259/which-python-memory-
profiler-is-recommended) :
[Heapy](http://guppy-pe.sourceforge.net/)
[memory_profiler](http://pypi.python.org/pypi/memory_profiler)
[PySizer](http://pysizer.8325.org/)
[Dowser](http://www.aminus.net/wiki/Dowser)
or even the [objgraph library](http://mg.pov.lt/objgraph/)
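For instance, a minimal memory_profiler sketch around the allocation from the question (after `pip install memory_profiler`, run the script normally):
    import numpy
    from sklearn import svm
    from sklearn.grid_search import GridSearchCV
    from memory_profiler import profile

    @profile
    def train(train_vectors, labels):
        model = GridSearchCV(svm.LinearSVC(), {'C': numpy.logspace(-3, 3, 7)})
        model.fit(train_vectors, labels)
        return model
    # running the script prints line-by-line memory increments for train(),
    # showing where the ~512 MB actually gets allocated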
|
Use a timeout to prevent deadlock when opening a file in Python?
Question: I need to open a file which is NFS mounted to my server. Sometimes, the NFS
mount fails in a manner that causes all file operations to deadlock. In order
to prevent this, I need a way to let the `open` function in python time out
after a set period. E.g. something like `open('/nfsdrive/foo', timeout=5)`. Of
course, the default `open` procedure has no `timeout` or similar keyword.
Does anyone here know of a way to effectively stop trying to open a (local)
file if the opening takes too long?
Note: I've already tried the urllib2 module, but it's timeout options only
work for web requests, not local ones.
Answer: You can try using [stopit](https://pypi.python.org/pypi/stopit#timeout-
context-manager)
    from stopit import SignalingTimeout as Timeout

    with Timeout(5.0) as timeout_ctx:
        with open('/nfsdrive/foo', 'r') as f:
            # do something with f
            pass
There may be some issues with `SignalingTimeout` in multithreaded environments (like Django). `ThreadingTimeout`, on the other hand, may cause problems with resources on some virtual hosts when you run too many "time-limited" functions.
P.S. My example also limits the processing time of the opened file. To limit only the file opening, you should use a different approach, with manual file opening/closing and manual exception handling.
|
Rotating Large Arrays, Fastest Possible
Question: I am relatively new to Python and looking for the best-optimized way of rotating large multi-dimensional arrays. In the following code I have a 16x600000 32-bit floating point multi-dimensional array, and according to the timer it takes about 30 ms to rotate its contents on my quad-core Acer Windows 8 tablet. I was considering some Cython routines or something similar if that could reduce the time required to rotate the array.
Eventually the code will be used to store y-axis values for a high-speed data plotting graph based around the VisPy package, and the 32-bit float array will be passed to an OpenGL routine. I would like to achieve less than 1 ms if possible.
Any comments, recommendations or sample code would be much appreciated.
import sys, timeit
from threading import Thread
from PyQt4 import QtGui
import numpy as np
m = 16 # Number of signals.
n = 600000 # Number of samples per signal.
y = 0.0 * np.random.randn(m, n).astype(np.float32)
Running = False
class Window(QtGui.QWidget):
def __init__(self):
QtGui.QWidget.__init__(self)
self.button = QtGui.QPushButton('Start', self)
self.button.clicked.connect(self.handleButton)
layout = QtGui.QVBoxLayout(self)
layout.addWidget(self.button)
def handleButton(self):
global Running, thread, thrTest
if Running == True:
Running = False
self.button.setText('Start')
thrTest.isRunning = False
print ('stop')
else:
Running = True
self.button.setText('Stop')
thrTest = testThread()
thread = Thread(target=thrTest.run, daemon=True )
thread.start()
print ("Start")
class testThread(Thread):
def __init__(self):
self.isRunning = True
def run(self):
print('Test: Thread Started')
while self.isRunning == True:
start_time = timeit.default_timer()
y[:, :-1] = y[:, 1:]
elapsed = timeit.default_timer() - start_time
print ('Time (s)= ' + str(elapsed))
print('Test: Closed Thread')
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec_())
## Update
I guess there has been some confusion about exactly what I am trying to do so
I will try to explain a little better.
The ultimate goal is to have a fast real-time data logging device which draws
line on a graph representing the signal value. There will be multiple channels
and a sampling rate of at least 1ms and as much recording time as possible. I
have started with [this VisPy
example](http://vispy.readthedocs.org/en/latest/examples/demo/gloo/realtime_signals.html).
The code in the example which writes the new data into the arrays and sends it
to OpenGL is in the `On_Timer` function near the bottom. I have modified this
code slightly to integrate the OpenGL canvas into a Qt gui and added some code
to get data from an Arduino Mega through an ethernet socket.
Currently I can produce a real time graph of 16 lines with a sampling rate
right about 1ms and a frame rate of around 30Hz with a recording time of about
14 seconds. If I try to increase the channel count or the recording length any
more the program stops working as it cannot keep up with the flow of data
coming in through the Ethernet port at 1ms.
The biggest culprit I can find for this is the time it takes to complete the
data buffer shift using the `y[:, :-1] = y[:, 1:]` routine. Originally I
submitted benchmark code where this function was being timed in the hope that
someone knew of a way to do the same thing in a more efficient manner. The
purpose of this line is to shift the entire array one index to the left, and
then in my very next line of code I write new data to the first slot on the
right.
Below you can see my modified graph update routine. First it takes the new
data from the queue and unpacks into a temporary array, then it shifts the
contents of the main buffer array, and finally it copies the new data into the
last slot of the main array. Once the queue is empty it calls the update
function so that OpenGL updates the display.
    def on_timer(self, event):
        """Add some data at the end of each signal (real-time signals)."""
        k = 1
        s = struct.Struct('>16H')
        AdrArray = 0.0 * np.random.randn(16, 1).astype(np.float32)
        if not q.qsize() == 0:
            while q.qsize() > 0:
                print (q.qsize())
                print ('iin ' + str(datetime.datetime.now()))
                AdrArray[:, 0] = s.unpack_from(q.get(), offset=4)
                y[:, :-1] = y[:, 1:]
                y[:, -1:] = .002 * AdrArray
                print ('out ' + str(datetime.datetime.now()))
        self.program['a_position'].set_data(y.ravel().astype(np.float32))
        self.update()
Answer: Do you really want this 'roll'? It shifts the values left and leaves the last column duplicated:
In [179]: y = np.arange(15).reshape(3,5)
In [180]: y[:,:-1]=y[:,1:]
In [181]: y
Out[181]:
array([[ 1, 2, 3, 4, 4],
[ 6, 7, 8, 9, 9],
[11, 12, 13, 14, 14]])
For nonstandard 'roll' like this it is unlikely that there's anything faster.
`np.roll` has a different fill on the left
In [190]: np.roll(y,-1,1)
Out[190]:
array([[ 1, 2, 3, 4, 0],
[ 6, 7, 8, 9, 5],
[11, 12, 13, 14, 10]])
For what it's worth, the core of `roll` is:
indexes = concatenate((arange(n - shift, n), arange(n - shift)))
res = a.take(indexes, axis)
Your particular 'roll' could be reproduced with a similar 'take'
In [204]: indexes=np.concatenate([np.arange(1,y.shape[1]),[y.shape[1]-1]])
In [205]: y.take(indexes,1)
Your `y[:,:-1]...` is faster than `roll` because it does not create a new
array; instead it just overwrites part of the existing one.
`take` accepts an `out` parameter, so this is possible:
y.take(indexes,1,y)
though speed wise this only helps with small arrays. For large ones your
overwriting assignment is faster.
I'd also suggest looking at using the transpose, and rolling on the `axis=0`.
For an `order='C'` array, the values of a row form a contiguous block.
The big time consumer is that you have to copy (nearly) all of the array from
one location to another, either in a new array, or onto itself. If the data
were in some sort of ring buffer, you could just change a pointer, without
having to copy any data.
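Concretely, here is a minimal sketch of that ring-buffer idea (the shapes and
names are assumptions based on the code above): keep a write index into a fixed
array, overwrite one column per sample, and stitch the two halves together only
once per displayed frame.

    import numpy as np

    nchan, nsamp = 16, 1000
    buf = np.zeros((nchan, nsamp), dtype=np.float32)  # fixed storage, never shifted
    head = 0                                          # column overwritten next (the oldest)

    def push(sample):
        # O(nchan): overwrite the oldest column instead of moving the whole array
        global head
        buf[:, head] = sample
        head = (head + 1) % nsamp

    def ordered():
        # One copy per *frame* (~30 Hz), not one per 1 ms sample
        return np.concatenate((buf[:, head:], buf[:, :head]), axis=1)

The full copy then happens only when the display refreshes, so the per-sample
cost no longer grows with the recording length.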
|
uWSGI module found when invoked on the command line but not when invoked using emperor
Question: I have a python wsgi application which I am running under uWSGI.
When I try to run the same application using emperor I get an error -
ImportError: No module named app_thing
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
If I run uWSGI directly specifying the same config file that emperor is using
it finds the module and the application works.
uWSGI config file below.
[uwsgi]
socket = 127.0.0.1:8000
workers = 1
force-cwd = /home/user/py
module = app_thing:application
protocol = http
Any help appreciated...
Answer: Ok, turns out the fix involved setting the file rather than the module, and
also adding the directory to the Python path in the uWSGI config. The final config
ended up as:
[uwsgi]
socket = 127.0.0.1:8000
workers = 1
force-cwd = /home/py
file = /home/py/app_thing.py
pythonpath = /home/py
|
Monte Carlo simulation Python
Question: Question: You have a bucket with 3 red balls and 3 green balls. Assume that
once you draw a ball out of the bucket, you don't replace it. What is the
probability of drawing 3 balls of the same color?
Problem: My code always gives 0.0
import random
def noReplacementSimulation(numTrials):
'''
Runs numTrials trials of a Monte Carlo simulation
of drawing 3 balls out of a bucket containing
3 red and 3 green balls. Balls are not replaced once
drawn. Returns a decimal - the fraction of times 3
balls of the same color were drawn.
'''
for i in range(numTrials):
hola=0
b=['r','r','r','g','g','g']
for a in range(3):
ball=random.choice(b)
b.remove(ball)
if b[0]==b[1] and b[1]==b[2] and b[2]==b[0]:
hola=hola+1
return float(hola)/numTrials
Answer: Your code _doesn't_ always return `0.0`, only most of the time. You reset
hola=0
_inside the loop_ , so you can only end with either `hola == 0` or `hola ==
1`. You want
hola=0
for i in range(numTrials):
# stuff here
instead.
PS: the question asks "What is the probability of drawing 3 balls of the same
color?" but you're checking whether the three balls which are _left_ are of
the same colour. Those won't be the same in general.
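For reference, a minimal corrected sketch: `hola` moves outside the loop and the
test is on the three balls actually _drawn_. The result should converge to the
exact answer, 2 * (3/6) * (2/5) * (1/4) = 0.1.

    import random

    def noReplacementSimulation(numTrials):
        hola = 0
        for i in range(numTrials):
            b = ['r', 'r', 'r', 'g', 'g', 'g']
            drawn = []
            for a in range(3):
                ball = random.choice(b)
                b.remove(ball)
                drawn.append(ball)
            if drawn[0] == drawn[1] == drawn[2]:
                hola += 1
        return float(hola) / numTrials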
|
mongodb import xml into mongodb
Question: I have a problem importing a big xml file (1.3 GB) into mongodb in order to
search for the most frequent words in a map & reduce manner.
<http://dumps.wikimedia.org/plwiki/20141228/plwiki-20141228-pages-articles-multistream.xml.bz2>
Here I enclose xml cut (first 10 000 lines) out from this big file:
http://www.filedropper.com/text2
I know that I can't import xml directly into mongodb, so I used some tools to do so.
I tried some Python scripts and all have failed.
Which tool or script should I use? What should the key & value be? I think the
best structure for finding the most frequent word would be this:
(_id : id, value: word )
then I would sum all the elements like in the docs example:
<http://docs.mongodb.org/manual/core/map-reduce/>
Any clues would be greatly appreciated - but how do I import this file into
mongodb to get documents like that?
(_id : id, value: word )
If you have any idea please share.
Edited: After research, I would use Python or JS to complete this task.
I would extract only the words in the `<text></text>` section, which is under
`/<page><revision>`, exclude `<`, `>` etc., and then separate the words and
upload them to mongodb with pymongo or JS.
So there are several pages, each with a revision and text.
Answer: To store all this data, save it in `GridFS`.
The easiest way to convert the `xml` is to use this tool to convert it to
`json` and save that:
<http://stackoverflow.com/a/10201405/861487>
import xmltodict
doc = xmltodict.parse("""
... <mydocument has="an attribute">
... <and>
... <many>elements</many>
... <many>more elements</many>
... </and>
... <plus a="complex">
... element as well
... </plus>
... </mydocument>
... """)
doc['mydocument']['@has']
Out[3]: u'an attribute'
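If you then want the word-per-document layout from the question, a minimal
pymongo sketch might look like this (the database/collection names and the
tokenising regex are assumptions, not from the original data):

    import re
    from pymongo import MongoClient

    client = MongoClient()            # assumes a local mongod on the default port
    words = client['wiki']['words']   # hypothetical database/collection names

    def store_text(text):
        # One document per word, matching the (_id: id, value: word) idea;
        # _id is left for mongodb to generate.
        docs = [{'value': w} for w in re.findall(r'\w+', text, re.UNICODE)]
        if docs:
            words.insert(docs)        # insert_many() in pymongo 3.x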
|
Converting C into Python
Question: I am trying to convert a C module that parses the output from the Linux
library rtl_fm. It is used for capturing energy usage from an Efergy meter
through a DVB-T dongle.
The C module works fine, but I want it written in Python so it can interact with
other Python modules I have.
I have put the constants in constant.py
I am totally stuck converting the line: **cursamp = (int16_t) (fgetc(stdin) | fgetc(stdin) <<8);** I have tried to convert it in a lot of different ways, and every attempt ends with an error!
There seem to be two problems: 1. type conversion of the input result,
and 2. how to convert fgetc() into Python.
I also have trouble converting **while(!feof(stdin))** into Python.
Anyone that could help?
C code below:
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <math.h>
#include <stdlib.h> // For exit function
#define VOLTAGE 240 /* Reference Voltage */
#define CENTERSAMP 100 /* Number of samples needed to compute for the wave center */
#define PREAMBLE_COUNT 40 /* Number of high(1) samples for a valid preamble */
#define MINLOWBIT 3 /* Number of high(1) samples for a logic 0 */
#define MINHIGHBIT 8 /* Number of high(1) samples for a logic 1 */
#define E2BYTECOUNT 8 /* Efergy E2 Message Byte Count */
#define FRAMEBITCOUNT 64 /* Number of bits for the entire frame (not including preamble) */
#define LOGTYPE 1 // Allows changing line-endings - 0 is for Unix /n, 1 for Windows /r/n
#define SAMPLES_TO_FLUSH 10 // Number of samples taken before writing to file.
// Setting this too low will cause excessive wear to flash due to updates to
// filesystem! You have been warned! Set to 10 samples for 6 seconds = every min.
int loggingok; // Global var indicating logging on or off
int samplecount; // Global var counter for samples taken since last flush
FILE *fp; // Global var file handle
int calculate_watts(char bytes[])
{
char tbyte;
double current_adc;
double result;
int i;
time_t ltime;
struct tm *curtime;
char buffer[80];
/* add all captured bytes and mask lower 8 bits */
tbyte = 0;
for(i=0;i<7;i++)
tbyte += bytes[i];
tbyte &= 0xff;
/* if checksum matches get watt data */
if (tbyte == bytes[7])
{
time( &ltime );
curtime = localtime( &ltime );
strftime(buffer,80,"%x,%X", curtime);
current_adc = (bytes[4] * 256) + bytes[5];
result = (VOLTAGE * current_adc) / ((double) 32768 / (double) pow(2,bytes[6]));
printf("%s,%f\n",buffer,result);
if(loggingok) {
if(LOGTYPE) {
fprintf(fp,"%s,%f\r\n",buffer,result);
} else {
fprintf(fp,"%s,%f\n",buffer,result);
}
samplecount++;
if(samplecount==SAMPLES_TO_FLUSH) {
samplecount=0;
fflush(fp);
}
}
fflush(stdout);
return 1;
}
//printf("Checksum Error \n");
return 0;
}
void main (int argc, char**argv)
{
char bytearray[9];
char bytedata;
int prvsamp;
int hctr;
int cursamp;
int bitpos;
int bytecount;
int i;
int preamble;
int frame;
int dcenter;
int dbit;
long center;
if(argc==2) {
fp = fopen(argv[1], "a"); // Log file opened in append mode to avoid destroying data
samplecount=0; // Reset sample counter
loggingok=1;
if (fp == NULL) {
perror("Failed to open log file!"); // Exit if file open fails
exit(EXIT_FAILURE);
}
} else {
loggingok=0;
}
printf("Efergy E2 Classic decode \n\n");
/* initialize variables */
cursamp = 0;
prvsamp = 0;
bytedata = 0;
bytecount = 0;
hctr = 0;
bitpos = 0;
dbit = 0;
preamble = 0;
frame = 0;
dcenter = CENTERSAMP;
center = 0;
while( !feof(stdin) )
{
cursamp = (int16_t) (fgetc(stdin) | fgetc(stdin)<<8);
/* initially capture CENTERSAMP samples for wave center computation */
if (dcenter > 0)
{
dcenter--;
center = center + cursamp; /* Accumulate FSK wave data */
if (dcenter == 0)
{
/* compute for wave center and re-initialize frame variables */
center = (long) (center/CENTERSAMP);
hctr = 0;
bytedata = 0;
bytecount = 0;
bitpos = 0;
dbit = 0;
preamble = 0;
frame = 0;
}
}
else
{
if ((cursamp > center) && (prvsamp < center)) /* Detect for positive edge of frame data */
hctr = 0;
else
if ((cursamp > center) && (prvsamp > center)) /* count samples at high logic */
{
hctr++;
if (hctr > PREAMBLE_COUNT)
preamble = 1;
}
else
if (( cursamp < center) && (prvsamp > center))
{
/* at negative edge */
if ((hctr > MINLOWBIT) && (frame == 1))
{
dbit++;
bitpos++;
bytedata = bytedata << 1;
if (hctr > MINHIGHBIT)
bytedata = bytedata | 0x1;
if (bitpos > 7)
{
bytearray[bytecount] = bytedata;
bytedata = 0;
bitpos = 0;
bytecount++;
if (bytecount == E2BYTECOUNT)
{
/* at this point check for checksum and calculate watt data */
/* if there is a checksum mismatch compute for a new wave center */
if (calculate_watts(bytearray) == 0)
dcenter = CENTERSAMP; /* make dcenter non-zero to trigger center resampling */
}
}
if (dbit > FRAMEBITCOUNT)
{
/* reset frame variables */
bitpos = 0;
bytecount = 0;
dbit = 0;
frame = 0;
preamble = 0;
bytedata = 0;
}
}
hctr = 0;
}
else
hctr = 0;
if ((hctr == 0) && (preamble == 1))
{
/* end of preamble, start of frame data */
preamble = 0;
frame = 1;
}
} /* dcenter */
prvsamp = cursamp;
} /* while */
if(loggingok) {
fclose(fp); // If rtl-fm gives EOF and program terminates, close file gracefully.
}
}
And the Python conversion (a little simplified without file logging):
from datetime import date
from datetime import time
from datetime import datetime
import cmath
import constant
import sys
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
class _Getch:
"""Gets a single character from standard input. Does not echo to the
screen."""
def __init__(self):
try:
self.impl = _GetchWindows()
except ImportError:
self.impl = _GetchUnix()
def __call__(self): return self.impl()
class _GetchUnix:
def __init__(self):
import tty, sys
def __call__(self):
import sys, tty, termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
class _GetchWindows:
def __init__(self):
import msvcrt
def __call__(self):
import msvcrt
return msvcrt.getch()
getch = _Getch()
def calculate_watts(*args):
logger.info('Start Calculation')
now = datetime.now()
tbyte = 0
for i in range(0,7):
tbyte += bytes[i]
tbyte = tbyte & 0xff
if (tbyte == bytes[7]):
current_adc = (bytes[4] * 256) + bytes[5]
result = (constant.VOLTAGE * current_adc) / (32768 / pow(2,bytes[6]))
print "%s,%f\n" % (now,result)
exit(0)
else:
print "Checksum Error \n"
exit(1)
def main(*argv):
logger.info('Starting Main')
print "Efergy E2 Python decode \n\n"
cursamp = 0
prvsamp = 0
bytedata = 0
bytecount = 0
hctr = 0
bitpos = 0
dbit = 0
preamble = 0
frame = 0
dcenter = constant.CENTERSAMP
center = 0
while (1):
cursamp = (int)((int)(_Getch()) | (int)(_Getch())<<8)
logger.debug('cursamp: %f',cursamp)
if (dcenter > 0):
dcenter -= 1
center = center + cursamp #/* Accumulate FSK wave data */
if (dcenter == 0):
center = (center/constant.CENTERSAMP)
hctr = 0
bytedata = 0
bytecount = 0
bitpos = 0
dbit = 0
preamble = 0
frame = 0
else:
if ((cursamp > center) and (prvsamp < center)): #/* Detect for positive edge of frame data */
hctr = 0
else:
if ((cursamp > center) and (prvsamp > center)): #/* count samples at high logic */
hctr += 1
if (hctr > constant.PREAMBLE_COUNT):
preamble = 1
else:
if (( cursamp < center) and (prvsamp > center)):
#/* at negative edge */
if ((hctr > constant.MINLOWBIT) and (frame == 1)):
dbit += 1
bitpos += 1
bytedata = bytedata << 1
if (hctr > constant.MINHIGHBIT):
bytedata = bytedata | 0x1
if (bitpos > 7):
bytearray[bytecount] = bytedata
bytedata = 0
bitpos = 0
bytecount += 1
if (bytecount == constant.E2BYTECOUNT):
# /* at this point check for checksum and calculate watt data */
# /* if there is a checksum mismatch compute for a new wave center */
if (calculate_watts(bytearray) == 0):
dcenter = constant.CENTERSAMP #/* make dcenter non-zero to trigger center resampling */
if (dbit > constant.FRAMEBITCOUNT):
#/* reset frame variables */
bitpos = 0
bytecount = 0
dbit = 0
frame = 0
preamble = 0
bytedata = 0
hctr = 0
else:
hctr = 0
if ((hctr == 0) and (preamble == 1)):
#/* end of preamble, start of frame data */
preamble = 0
frame = 1
#/* dcenter */
prvsamp = cursamp
if __name__ == "__main__":
main()
Answer: C's `fgetc(stdin)` translates to Python 2's `ord(sys.stdin.read(1)[0])` \--
returning a numeric value from the next byte of `stdin`. (In Python 3, read from
`sys.stdin.buffer` instead, otherwise `.read(1)` will get a Unicode character,
not a byte).
The `|` and `<<` operators work the same in Python as in C, so, no problem
there.
At EOF, `sys.stdin.read(1)` returns an empty string (so the `[0]` would fail,
but you can check for that by "decomposing" the above expression). For
example:
ateof = False
def getabyte():
data = sys.stdin.read(1)
if data: return False, ord(data)
else: return True, 0
def getanint():
global ateof
ateof, byte1 = getabyte()
if not ateof:
ateof, byte2 = getabyte()
if ateof: return True, 0
else: return False, byte1 | (byte2<<8)
net of finicky issues wrt endianness (byte order) and character's signedness
(issues common to C and Python).
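On those last two points: the C cast to `int16_t` makes the samples little-endian
_signed_ 16-bit values, which plain `ord()` arithmetic won't give you, and the
`struct` module handles both byte order and sign in one call. A minimal sketch
(Python 2, assuming the samples are piped in from rtl_fm):

    import struct
    import sys

    def read_sample():
        """Return (eof, sample) for one little-endian signed 16-bit sample."""
        data = sys.stdin.read(2)
        if len(data) < 2:            # stand-in for feof(stdin)
            return True, 0
        return False, struct.unpack('<h', data)[0]

    while True:
        eof, cursamp = read_sample()
        if eof:                      # translates while(!feof(stdin))
            break
        # ... process cursamp as in the C loop ...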
|
NameError: name 'BillPayer' is not defined
Question: I am getting the above error when trying to run my code. I believe it suffers
from being a broad error, as all my googling has not brought me to anything
that relates to my actual situation. I have organized the directory like this:
Project (BillPay)
bill.py (contains Class Bill())
billPayer.py (contains Class BillPayer())
main.py
__init__.py
I have left my `__init__.py` empty, which I understand to be ok.
In main.py, I have:
#!/usr/bin/python
import sys
sys.path.insert(0, "/home/matt/Documents/Code/BillPay")
import bill
import billPayer
bill_machine = BillPayer()
bill_machine.addBill(Google, 12345, 100.00, www.google.com)
bill_machine.printBills()
If I have imported the module billPayer (which contains the BillPayer class),
then I should be able to create an instance of that class, right? What am I
missing?
Answer: Two ways to go about it: either
`billPayer.BillPayer()`
or
`from billPayer import BillPayer`
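In other words, `import billPayer` binds only the module name, not the names
inside it, so the class has to be reached as an attribute of the module. A
minimal sketch of the fixed main.py using the first option (note the `addBill`
arguments are assumptions; they presumably need to be quoted strings rather
than bare names):

    #!/usr/bin/python
    import sys
    sys.path.insert(0, "/home/matt/Documents/Code/BillPay")

    import billPayer

    bill_machine = billPayer.BillPayer()
    bill_machine.addBill('Google', 12345, 100.00, 'www.google.com')
    bill_machine.printBills()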
|
Should I call connect() and close() for every Sqlite3 transaction?
Question: I want to write a Python module to abstract away database transactions for my
application. My question is: do I need to call `connect()` and `close()`
for every transaction? In code:
import sqlite3
# Can I put connect() here?
conn = sqlite3.connect('db.py')
def insert(args):
# Or should I put it here?
conn = sqlite3.connect('db.py')
# Perform the transaction.
c = conn.cursor()
c.execute(''' insert args ''')
conn.commit()
# Do I close the connection here?
conn.close()
# Or can I close the connection whenever the application restarts (ideally, very rarely)
conn.close()
I don't have much experience with databases, so I'd appreciate an explanation
for _why_ one method is preferred over the other.
Answer: You can use the same connection repeatedly. You can also use the connection
as a context manager, though note that in the `sqlite3` module `with conn:`
commits the transaction on success and rolls it back on an exception; it does
not close the connection, and the stdlib `sqlite3` cursor is not a context
manager at all. Close the connection once, when the application shuts down.

    def insert(conn, args):
        with conn:  # commits on success, rolls back on error
            c = conn.cursor()
            c.execute(''' insert args ''')

    conn = sqlite3.connect('db.py')
    insert(conn, ...)
    insert(conn, ...)
    insert(conn, ...)
    conn.close()
There's no reason to close the connection after every transaction, and re-opening
it each time can be expensive. (For SQLite that means re-opening the database
file; for a client/server database you may need to establish a new TCP session.)
|
How to pass Django mock instance to class method?
Question: The Mock testing library is the one Django topic I just can't seem to wrap my
head around. For example, in the following code, why don't the mock User
instances that I create in my unit test appear in the User object that I query
in the 'get_user_ids' method? If I halt the test in the 'get_user_ids' method
via the debug call and do "User.objects.all()", there's nothing in the User
queryset and the test fails. Am I not creating three mock User instances that
will be queried the the UserProxy's static method?
I'm using Django 1.6 and Postgres 9.3 and running the test with the command
"python manage.py test -s apps.profile.tests.model_tests:TestUserProxy".
Thanks!
# apps/profile/models.py
from django.contrib.auth.models import User
class UserProxy(User):
class Meta:
proxy = True
@staticmethod
def get_user_ids(usernames):
debug()
user_ids = []
for name in usernames:
try:
u = User.objects.get(username__exact=name)
user_ids.append(u.id)
except ObjectDoesNotExist:
logger.error("We were unable to find '%s' in a list of usernames." % name)
return user_ids
# apps/profile/tests/model_tests.py
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock
from apps.profile.models import UserProxy
class TestUserProxy(TestCase):
def test_get_user_ids(self):
u1 = Mock(spec=User)
u1.id = 1
u1.username = 'user1'
u2 = Mock(spec=User)
u2.id = 2
u2.username = 'user2'
u3 = Mock(spec=User)
u3.id = 3
u3.username = 'user3'
usernames = [u1.username, u2.username, u3.username]
expected = [u1.id, u2.id, u3.id]
actual = UserProxy.get_user_ids(usernames)
self.assertEqual(expected, actual)
Answer: Mocking is awesome for testing, and can lead to very clean tests, however it
suffers a little from (a) being a bit fiddly to get ones head around when
starting out, and (b) does require some effort often to set up mock objects
and have then injected/used in the correct places.
The mock objects you are creating for the users are objects that look like a
Django `User` model object, but they are not actual model objects, and
therefore do not get put into the database.
To get your test working, you have two options, depending on what kind of test
you want to write.
**Unit Test - Mock the data returned from the database**
The first option is to get this working as a unit test, i.e. testing the
`get_user_ids` method in isolation from the database layer. To do this, you
would need to mock the call to `User.objects.get(username__exact=name)` so
that it returns the three mock objects you created in your test. This would be
the more correct approach (as it is better to test units of code in
isolation), however it would involve more work to set up than the alternative
below.
One way to achieve this would be to first separate out the user lookup into
its own function in _apps/profile/models.py_ :
    def get_user_by_name(name):
        return User.objects.get(username__exact=name)
This would need to be called in your function, by replacing the call to
`User.objects.get(username__exact=name)` with `get_user_by_name(name)`. You
can then modify your test to patch the function like so:
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock, patch
from apps.profile.models import UserProxy
class TestUserProxy(TestCase):
@patch('apps.profile.models.get_user_by_name')
def test_get_user_ids(self, mock_get_user_by_name):
u1 = Mock(spec=User)
u1.id = 1
u1.username = 'user1'
u2 = Mock(spec=User)
u2.id = 2
u2.username = 'user2'
u3 = Mock(spec=User)
u3.id = 3
u3.username = 'user3'
        # Here is where we wire up the mocking - we patch the lookup function
        # and use side_effect so that each call returns the next mock user in
        # turn (return_value would hand back the whole list on every call).
        mock_get_user_by_name.side_effect = [u1, u2, u3]
usernames = [u1.username, u2.username, u3.username]
expected = [u1.id, u2.id, u3.id]
actual = UserProxy.get_user_ids(usernames)
self.assertEqual(expected, actual)
**Integration Test - Create real user objects**
The second approach is to modify this to be an integration test, i.e. one that
tests both this unit of code and also the interaction with the database. This
is a little less clean, in that you are now exposing your tests on the method
to the chance of failing because of problems in a different unit of code (i.e.
the Django code that interacts with the database). However, this does make the
setup of the test a lot simpler, and pragmatically may be the right approach
for you.
To do this, simply remove the mocks you have created and create actual users
in the database as part of your test.
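A minimal sketch of that approach, reusing the imports from the test above
(Django's `create_user` is the real helper for making user rows):

    class TestUserProxy(TestCase):

        def test_get_user_ids(self):
            # Real rows in the test database, not mocks.
            u1 = User.objects.create_user(username='user1')
            u2 = User.objects.create_user(username='user2')
            u3 = User.objects.create_user(username='user3')

            expected = [u1.id, u2.id, u3.id]
            actual = UserProxy.get_user_ids(['user1', 'user2', 'user3'])

            self.assertEqual(expected, actual)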
|