JSON transfer not working between processes
Question: I am creating a board game, and long story short: I want a file that will
start up and set the main variables of the game (player name, board (array),
prompt, status - playing/gameover) and export these as a .json file. The setup
file is a large GUI file, so I would like to run the main game (logic and
display) in a separate file.
I am practising doing this with a simple tic-tac-toe game, but for some reason
I cannot get the json file to export/import correctly (can't really tell
which) OR can't get an input function to work in the separate file.
(The code is very basic and still incomplete, but I am simply trying to get the
first step to work - start up, ask for a name, then run the file with a function
that will display the board and ask for the user's next move.)
method 1) using subprocess.Popen
file1:
import json, subprocess, os
from distlib.compat import raw_input
print('Welcome to TIC-TAC-TOE!')
print()
#Set up of the start information that is going to be passed as JSON payload and sent to each process back and forth
#------------------------------------------------
name = raw_input('Please enter your name: ')
prompt = 'select a space: '
board = [0,1,2,
3,4,5,
6,7,8]
move = None
status = 'playing'
winner = None
iter = 0
#------------------------------------------------
#JSON payload {dictionary} to be sent
info = {'name':name, 'prompt':prompt, 'board':board, 'move':move, 'status': status, 'winner': winner, 'iter': iter, }
print('START, info dictionary as it goes out: ', info)
#file out which dumps info the info.json file
fout = open('info.json', 'w')
json.dump(info, fout)
fout.close
subprocess.Popen(["python3", "engineANDvisuals.py"])
file 2) 'engineANDvisuals.py'
#imported libraries
import random, funcs as f, json
from distlib.compat import raw_input
#front-end: interface function
def interface(jsonFile):
    fin = open(jsonFile, 'r') #open up the json file to read
    info = json.load(fin) #load json file as info
    fin.close #close the json file
    print('FRONT, info dictionary as it comes IN: ', info)
    print()
    name = info['name'] #set json-name to name
    prompt = info['prompt'] #set json-prompt to prompt
    board = info['board'] #set json-board to board
    winner = info['winner'] #set json-winner to winner
    status = info['status'] #set json-status to status
    f.printBoard(board) #prints out the board so that the user can see it
    print(info['move'])
    info['move'] = input(f.returnName(name)+', please '+ f.returnPrompt(prompt))
    print('FRONT, info dictionary as it goes OUT: ', info)
    fout = open(jsonFile, 'w') #open up the json file to write
    json.dump(info, fout) #dumps new info into the file
    fout.close #closes the json file
The f.functions are simple print functions located in another file. With this
method I am getting all the way to the input line in the second file, then
the program just stops. It doesn't terminate, and my cores aren't running hard
at all (so I don't think it's stuck in a loop); it just stops after the program
asks for the space.
method 2) now if I use os.system instead to open the process, it gives me a
huge error which I take to mean that it is importing/exporting the json file
incorrectly.
Traceback (most recent call last):
File "engineANDvisuals.py", line 105, in <module>
interface('info.json')
File "engineANDvisuals.py", line 19, in interface
info = json.load(fin) #load json file as info
File "/usr/lib/python3.4/json/__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.4/json/decoder.py", line 361, in raw_decode
raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)
I am so insanely confused because when I run the second file alone after the
first it works perfectly.
I would really appreciate some help; I know it's probably some silly
oversight. A noob could really use some help, thank you so much.
Answer: Short answer: `fout.close` is missing the parens - you want `fout.close()` -
or better, use a `with` statement:
with open('info.json', 'w') as fout:
json.dump(info, fout)
Longer answer:
Without the parens, `fout.close` evaluates to the `close` method of `fout`, but
the method is not called:
    >>> f = open("foo.txt", "w")
    >>> print(f.close)
    <built-in method close of _io.TextIOWrapper object at 0xf0d660>
Since the file is not closed, the buffer is not flushed to disk, so the
subprocess cannot read its content.
Once the main process ends, the file object is closed at garbage collection
time and the buffer is flushed to disk, so if you execute the second script on
its own at that point, it does read the file content.
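A minimal sketch of the corrected write side, with a placeholder name instead
of the raw_input call - the `with` block guarantees the buffer is flushed
before the subprocess starts:
    import json, subprocess

    info = {'name': 'player', 'prompt': 'select a space: ',
            'board': list(range(9)), 'move': None,
            'status': 'playing', 'winner': None, 'iter': 0}

    # the with statement closes (and therefore flushes) the file on exit,
    # so info.json is complete on disk before the child process reads it
    with open('info.json', 'w') as fout:
        json.dump(info, fout)

    subprocess.Popen(["python3", "engineANDvisuals.py"])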
|
Cookies seem disabled (even though they're not!) with HttpURLConnection POST request to website on Android application
Question: **My problem:**
I have an Android application. I'm using HttpURLConnection to send a POST
request to a login form on an external website. The website uses cookies for
logins and I want to check if my login was successful by catching the 'Set-
Cookie' variable from the response header of my POST request. I'm doing this
with a class that extends android's AsyncTask class.
The problem is that the HTML response I get from my request clearly states
that I have cookies disabled and, naturally, I don't get any Set-Cookie headers
because of that.
**Desired result:**
The printed response header fields should include the 'Set-Cookie' header. The
HTML should either include a table with:
Error: 'Login failed. Both login name and password are case sensitive; check
that caps lock is not enabled.'
or:
Info: 'Welcome! You are now logged in'
depending on what username and password were given.
**Actual result:**
POST request is successful, so is getting the header. This is the result of my
System.out.println:
Printing Response Header...
Key : null ,Value : [HTTP/1.1 200 OK]
Key : Connection ,Value : [close]
Key : Content-Language ,Value : [en]
Key : Content-Length ,Value : [14233]
Key : Content-Type ,Value : [text/html;charset=utf-8]
Key : Date ,Value : [Fri, 10 Jul 2015 15:32:17 GMT]
Key : Expires ,Value : [Sat, 1 Jan 2000 00:00:00 GMT]
Key : Server ,Value : [Zope/(2.13.20, python 2.7.6, linux2) ZServer/1.1]
Key : X-Android-Received-Millis ,Value : [1436542369028]
Key : X-Android-Response-Source ,Value : [NETWORK 200]
Key : X-Android-Sent-Millis ,Value : [1436542368849]
Key : X-Ua-Compatible ,Value : [IE=edge,chrome=1]
I see no Set-Cookie variable. This is unsurprising because in the HTML
response, I find this in my 'Error' table:
**"Cookies are not enabled. You must enable cookies to sign in."**
This is the page I get if I manually disable cookies in my laptop's web browser
and try logging in there. This is what I can't seem to fix.
**Code:**
The AsyncTask class is here:
import android.os.AsyncTask;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Map;
public class TutorWebConnectionTask extends AsyncTask<String, String, Void> {
    private final String USER_AGENT = "Mozilla/5.0 (Linux; Android 5.0.1; SAMSUNG GT-I9505 Build/LRX22C) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.1 Chrome/34.0.1847.76 Mobile Safari/537.36";
    HttpURLConnection con;

    @Override
    protected Void doInBackground(String... strings) {
        try {
            String url = "http://tutor-web.net/login_form";
            URL obj = new URL(url);
            HttpURLConnection con = (HttpURLConnection) obj.openConnection();
            CookieHandler.setDefault(new CookieManager());

            //add request header
            con.setRequestMethod("POST");
            con.setRequestProperty("User-Agent", USER_AGENT);
            con.setRequestProperty("Accept-Language", "en-US,en;q=0.5");
            String urlParameters = "__ac_username=user&__ac_password=12345";

            // Send post request
            con.setDoOutput(true);
            DataOutputStream wr = new DataOutputStream(con.getOutputStream());
            wr.writeBytes(urlParameters);
            wr.flush();
            wr.close();

            Map<String, List<String>> map = con.getHeaderFields();
            System.out.println("Printing Response Header...\n");
            for (Map.Entry<String, List<String>> entry : map.entrySet()) {
                System.out.println("Key : " + entry.getKey()
                        + " ,Value : " + entry.getValue());
            }

            int responseCode = con.getResponseCode();
            System.out.println("\nSending 'POST' request to URL : " + url);
            System.out.println("Post parameters : " + urlParameters);
            System.out.println("Response Code : " + responseCode);

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream()));
            String inputLine;
            StringBuffer response = new StringBuffer();
            while ((inputLine = in.readLine()) != null) {
                if(inputLine.contains("enable cookies")) {System.out.println("Cookies are disabled -> " + inputLine);}
                response.append(inputLine);
            }
            in.close();

            //print result
            System.out.println(response.toString());
        }catch(Exception e) {e.printStackTrace();}
        return null;
    }
}
At the press of a button on my android application, in my mainActivity, I
simply do:
TutorWebConnectionTask task = new TutorWebConnectionTask();
task.execute();
**Various things I've tried doing to fix the problem:**
* Messing with USER_AGENT, keeping it as Mozilla/5.0 or making it as it is now, which is what whatsmyuseragent.com gives me when I go to the site on my phone. This site also tells me that I have cookies enabled!
* Using CookiePolicy as ACCEPT_ALL.
* Making sure that both 'Accept all cookies' and 'Accept third party cookies' are checked on my phone's browsers (chrome and default android browser).
* Using httpClient apache class to do the work. That didn't work, not even when I used httpContext variable that I saw suggested in similar stackoverflow threads. I stopped using httpClient because of this thread telling me to use HttpURLConnection instead: [Android HttpClient persistent cookies](http://stackoverflow.com/questions/4146861/android-httpclient-persistent-cookies)
* Using CookieStore to try and grab the cookies through that.
* Investigating what the tutor-web site is doing when it actually gives me the "You need to enable cookies" message. It seems to try to create a cookie and if it fails, it gives you this message.
  * Reading various Stack Overflow threads and trying their solutions, with no results.
None of these work. If anyone has any idea how to fix this, please help.
Answer: The cookie needs to be set in the application prior to the POST: request the
login page with a GET first, so that the server's test cookie is stored by
your `CookieManager`; otherwise it will not be seen until the second request
to the server.
|
Unzip all zipped files in a folder to that same folder using Python 2.7.5
Question: I would like to write a simple script to iterate through all the files in a
folder and unzip those that are zipped (.zip) to that same folder. For this
project, I have a folder with nearly 100 zipped .las files and I'm hoping for
an easy way to batch unzip them. I tried with following script
import os, zipfile

folder = 'D:/GISData/LiDAR/SomeFolder'
extension = ".zip"

for item in os.listdir(folder):
    if item.endswith(extension):
        zipfile.ZipFile.extract(item)
However, when I run the script, I get the following error:
Traceback (most recent call last):
File "D:/GISData/Tools/MO_Tools/BatchUnzip.py", line 10, in <module>
extract = zipfile.ZipFile.extract(item)
TypeError: unbound method extract() must be called with ZipFile instance as first argument (got str instance instead)
I am using the python 2.7.5 interpreter. I looked at the documentation for the
zipfile module (<https://docs.python.org/2/library/zipfile.html#module-
zipfile>) and I would like to understand what I'm doing incorrectly.
I guess in my mind, the process would go something like this:
1. Get folder name
2. Loop through folder and find zip files
3. Extract zip files to folder
Thanks Marcus, however, when implementing the suggestion, I get another error:
Traceback (most recent call last):
File "D:/GISData/Tools/MO_Tools/BatchUnzip.py", line 12, in <module>
zipfile.ZipFile(item).extract()
File "C:\Python27\ArcGIS10.2\lib\zipfile.py", line 752, in __init__
self.fp = open(file, modeDict[mode])
IOError: [Errno 2] No such file or directory: 'JeffCity_0752.las.zip'
When I use print statements, I can see that the files are in there. For
example:
for item in os.listdir(folder):
    if item.endswith(extension):
        print os.path.abspath(item)
        filename = os.path.basename(item)
        print filename
yields:
D:\GISData\Tools\MO_Tools\JeffCity_0752.las.zip
JeffCity_0752.las.zip
D:\GISData\Tools\MO_Tools\JeffCity_0753.las.zip
JeffCity_0753.las.zip
As I understand the documentation,
zipfile.ZipFile(file[, mode[, compression[, allowZip64]]])
> Open a ZIP file, where file can be either a path to a file (a string) or a
> file-like object
It appears to me like everything is present and accounted for. I just don't
understand what I'm doing wrong.
Any suggestions?
Thank You
Answer: You need to construct a `ZipFile` object with the filename, and _then_ extract
it:
zipfile.ZipFile.extract(item)
is wrong.
zipfile.ZipFile(item).extractall()
will extract all files from the zip file with the name contained in `item`.
I think you should read the documentation for `zipfile` a bit more closely :)
but you're on the right track!
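Note that the follow-up `IOError` in the question comes from handing `ZipFile`
a bare file name while the working directory is not `folder` (`os.listdir`
returns names, not paths). A minimal sketch that joins the path and extracts
each archive into the same folder:
    import os
    import zipfile

    folder = 'D:/GISData/LiDAR/SomeFolder'

    for item in os.listdir(folder):
        if item.endswith('.zip'):
            path = os.path.join(folder, item)       # full path, independent of cwd
            with zipfile.ZipFile(path) as archive:  # ZipFile is a context manager
                archive.extractall(folder)          # unzip next to the archive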
|
Use a flask session inside a python thread
Question: How can I update a flask session inside a python thread? The below code is
throwing this error:
*** RuntimeError: working outside of request context
from flask import session

def test(ses):
    ses['test'] = "test"

@app.route('/test', methods=['POST', 'GET'])
def mytest():
    t = threading.Thread(target=test, args=(session, ))
    t.start()
Answer: When you execute `t.start()`, you are creating an independent thread of
execution which is _not synchronized with the execution of the main thread in
any way._
The Flask [`session` object](//flask.pocoo.org/docs/0.10/api/#flask.session)
is **only defined in the[context of a particular HTTP
request](//flask.pocoo.org/docs/0.10/reqcontext/)**.
What does the variable `session` **mean** in the second thread (`t`)?
When `t` executes, there is no guarantee that the user request from the main
thread still exists or is in a modifiable state. Perhaps the HTTP request has
already been fully handled in the main thread.
Flask detects that you are trying to manipulate an object that is dependent on
a particular context, and that your code is not running in that context. So it
raises an exception.
There are a variety of approaches to synchronizing output from multiple
threads into a single request context but... **what are you actually trying to
do here?**
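If the background work only needs data from the session rather than the
session itself, one common workaround is to copy plain values out before
starting the thread and keep all session mutation in the request thread. A
minimal sketch (the `background_work` function and the secret key are
illustrative, not from the question):
    import threading
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = 'dev'  # placeholder key; sessions require one

    def background_work(value):
        # operates on a plain copy; never touches the request-bound session proxy
        print('working with', value)

    @app.route('/test', methods=['POST', 'GET'])
    def mytest():
        session['test'] = 'test'  # mutate the session here, in the request context
        t = threading.Thread(target=background_work, args=(session['test'],))
        t.start()
        return 'started'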
|
Python time.sleep closes terminal
Question: I used this setup.py script:
from distutils.core import setup
import py2exe
setup(console=['tcphost.py'])
to compile a code that imports this:
import os
import pygame.camera
import numpy as np
import time
import cv2
import socket
import autopy
import glob
def TCPclient ():
    CreatePath()
    ViHost = str(socket.gethostbyname(socket.gethostname()))
    ViPort = 6869
    AtHost = "192.168.56.1"
    AtPort = ViPort
    AtSock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    TryCon = True
    while TryCon == True:
        try:
            print "Trying to connect..."
            AtSock.connect((AtHost, AtPort))
            TryCon = False
        except:
            print "Could not connect"
            TryCon = True
            time.sleep(30)
    print ("Connected.")
    AtSock.send("<||.IP..||>" + ViHost)
    time.sleep(1)
    AtSock.send("<||.PRT.||>" + str(ViPort))
    time.sleep(1)
    AtSock.send("<||.NAM.||>" + str(socket.gethostname()))
    time.sleep(1)
    AtSock.send("<||.EXT.||>")
    time.sleep(1)
    AtSock.close()
    print ("Messages sent and socket closed.")
    TCPserver (ViHost, ViPort)

if __name__ == "__main__":
    TCPclient()
(I can't post all of the code because it's too big for Stack Overflow.)
And it compiles fine; however, when I try to run the executable, a terminal
window pops up and prints
    Trying to connect...
    Could not connect
(as it should) but then closes really fast. However, if I try to run it **from**
the terminal it works fine. Why, and how can I make it stay open?
**EDIT**
Just to make it clear: if I double click the executable, a window pops up and
closes. If I run the exe from the command line everything is fine.
Answer: `time.sleep` is not what is causing the script to stop.
The Windows console is doing what it's supposed to do: it runs the script, and
when everything has finished executing it closes on its own.
A common way to stop this from happening is to include `input()` at the end of
your script:
**Python 2.7**
raw_input("Press Enter to exit")
or **Python 3.4**
input("Press Enter to exit")
|
In python pandas DataFrames, what are the rules for automatic type conversion when setting values?
Question: If I have a dataframe which looks like
import pandas
d = pandas.DataFrame( data = {'col1':[100,101,102,103] } )
# col1
#0 100
#1 101
#2 102
#3 103
and I do
d.set_value( 0,'col1', '200')
it casts '200' to an integer:
type( d.col1[0] )
#numpy.int64
however if I do
d.set_value( 0,'col2', '200')
I get
type( d.col2[0] )
#str
as expected.
## More mysteries:
Further, say I do the following
[ type(x) for x in d.col1 ]
#[numpy.int64, numpy.int64, numpy.int64, numpy.int64]
d.set_value( [0,1,2,3], 'col1', ['101', '102', '103', 200] )
[ type(x) for x in d.col1 ]
#[str, str, str, str]
So even though `d.col1` was originally an integer column, it has now become a
string column. What are the rules for such type casting of entire columns ?
I am just curious what the rules are for automatic type-casting when
manipulating pandas dataframes.
Answer: pandas is column-major and every element in the same column must have the same
data type.
When you create a dataframe using
import pandas as pd
df = pd.DataFrame({'col':[100,101,102,103]})
df.col.dtype
Out[11]:
dtype('int64')
pandas automatically infers that all these inputs are numeric values of
integer type. So when you set values for this column `col`, your inputs
will be automatically cast into the current column `dtype`, which is `int64`,
and the following will all give you exactly the same output
df.set_value(0, 'col', '200') # cast string into int
df.set_value(0, 'col', 200) # int input
df.set_value(0, 'col', 200.1) # cast float64 into int64
But when you try to do `df.set_value(0, 'col1', '200')`, the current `df` has
no column `col1`, so pandas first creates a new column named `col1`, and it
will try to infer the dtype for this new column based on your input.
df.set_value(0, 'col1', '200')
df.col1.dtype # dtype('O'), means object/string
df.set_value(0, 'col2', 200.1)
df.col2.dtype # dtype('float64')
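The "More mysteries" case follows the same rule seen from the other side: when
the new values cannot all be represented in the existing dtype, pandas
replaces the column with an `object` column instead of casting element-wise. A
minimal sketch using plain column assignment:
    import pandas as pd

    df = pd.DataFrame({'col1': [100, 101, 102, 103]})
    df['col1'] = ['101', '102', '103', 200]  # mixed str/int cannot fit int64
    print(df['col1'].dtype)                  # object
    print([type(x) for x in df['col1']])     # [str, str, str, int]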
|
How to send request to url that is set to 'login:admin' in google app engine?
Question: In my app.yaml, a url is defined to be:
- url: /api/.*
script: main.app
login: admin
secure: always
I tried to the following code to talk to the api
import requests
def main():
r = requests.get("https://test.appspots.com/api/get_data", auth=('[email protected]', 'password'))
print r.status_code, r.text
if __name__ == '__main__':
main()
But authentication fails and, judging from the output, I am redirected to a
login page.
How can I use Python to authenticate and access the URL?
Answer: `login: admin` instructs Google App Engine to restrict URLs matching the given
pattern to users who are authenticated with Google _AND_ are Administrators of
your Google App Engine project. There is no way to use standard HTTP Basic
Authentication with this restriction. If you have a valid oAuth Bearer token
you can pass it in the header in `requests.get` to handle the required
authentication.
See this article on appidentity for some possible options:
<https://cloud.google.com/appengine/docs/python/appidentity/>
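A sketch of the bearer-token suggestion above; the token value is a
placeholder, and obtaining a real OAuth2 token for an administrator of the
project is left to you:
    import requests

    token = "ya29.placeholder"  # hypothetical token value
    r = requests.get("https://test.appspots.com/api/get_data",
                     headers={"Authorization": "Bearer " + token})
    print r.status_code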
|
Series imported but unused error Python
Question:
import numpy as np
from pandas import Series, DataFrame
import pandas as pd
import matplotlib.pyplot as plt
iris_df = DataFrame()
iris_data_path = 'Z:\WORK\Programming\Python\irisdata.csv'
iris_df = pd.read_csv(iris_data_path,index_col=False,header=None,encoding='utf-8')
iris_df.columns = ['sepal length','sepal width','petal length','petal width','class']
print iris_df.columns.values
print iris_df.head()
print iris_df.tail()
irisX = irisdata[['sepal length','sepal width','petal length','petal width']]
print irisX.tail()
irisy = irisdata['class']
print irisy.head()
print irisy.tail()
colors = ['red','green','blue']
markers = ['o','>','x']
irisyn = np.where(irisy=='Iris-setosa',0,np.where(irisy=='Iris-virginica',2,1))
Col0 = irisdata['sepal length']
Col1 = irisdata['sepal width']
Col2 = irisdata['petal length']
Col3 = irisdata['petal width']
plt.figure(num=1,figsize=(16,10))

plt.subplot(2,3,1)
for i in range(len(colors)):
    xs = Col0[irisyn==i]
    xy = Col1[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.legend( ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica') )
plt.xlabel(irisdata.columns[0])
plt.ylabel(irisdata.columns[1])

plt.subplot(2,3,2)
for i in range(len(colors)):
    xs = Col0[irisyn==i]
    xy = Col2[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.xlabel(irisdata.columns[0])
plt.ylabel(irisdata.columns[2])

plt.subplot(2,3,3)
for i in range(len(colors)):
    xs = Col0[irisyn==i]
    xy = Col3[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.xlabel(irisdata.columns[0])
plt.ylabel(irisdata.columns[3])

plt.subplot(2,3,4)
for i in range(len(colors)):
    xs = Col1[irisyn==i]
    xy = Col2[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.xlabel(irisdata.columns[1])
plt.ylabel(irisdata.columns[2])

plt.subplot(2,3,5)
for i in range(len(colors)):
    xs = Col1[irisyn==i]
    xy = Col3[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.xlabel(irisdata.columns[1])
plt.ylabel(irisdata.columns[3])

plt.subplot(2,3,6)
for i in range(len(colors)):
    xs = Col2[irisyn==i]
    xy = Col3[irisyn==i]
    plt.scatter(xs,xy,color=colors[i],marker=markers[i])
plt.xlabel(irisdata.columns[2])
plt.ylabel(irisdata.columns[3])

plt.show()
This is code from Howard Bandy's book Quantitative Technical Analysis. The
problem is that it is giving me errors even though I typed it out exactly as
it is in the book.
I still get the 'Series imported but unused' warning and the 'undefined name
irisdata' error.
This is in the console:
Code:
runfile('Z:/WORK/Programming/Python/Scripts/irisplotpairsdata2.py', wdir='//AMN/annex/WORK/Programming/Python/Scripts')
['sepal length' 'sepal width' 'petal length' 'petal width' 'class']
sepal length sepal width petal length petal width class
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
sepal length sepal width petal length petal width class
145 6.7 3.0 5.2 2.3 Iris-virginica
146 6.3 2.5 5.0 1.9 Iris-virginica
147 6.5 3.0 5.2 2.0 Iris-virginica
148 6.2 3.4 5.4 2.3 Iris-virginica
149 5.9 3.0 5.1 1.8 Iris-virginica
Traceback (most recent call last):
File "<ipython-input-100-f0b2002668bd>", line 1, in <module>
runfile('Z:/WORK/Programming/Python/Scripts/irisplotpairsdata2.py', wdir='//AMN/annex/WORK/Programming/Python/Scripts')
File "C:\MyPrograms\Spyder(Python)\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 685, in runfile
execfile(filename, namespace)
File "C:\MyPrograms\Spyder(Python)\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "Z:/WORK/Programming/Python/Scripts/irisplotpairsdata2.py", line 24, in <module>
irisX = irisdata[['sepal length','sepal width','petal length','petal width']]
TypeError: list indices must be integers, not list
Obviously, the program does not run.
I'm using spyder with python 2.7. Which is the platform he was using in the
book.
Thanks for any insight.
Answer: Well, Python is not wrong. You imported Series but never used it, which is a
warning that does not cause a crash. The crash happens because you are
dereferencing a variable, `irisdata`, which was never defined. (Ctrl+F
irisdata in your code and take a look.) Judging by your code,
`irisdata` probably needs to contain the parsed data of
`Z:\WORK\Programming\Python\irisdata.csv`, doesn't it? So you need to parse
that out and assign it to `irisdata`. See [this
post](http://stackoverflow.com/questions/24662571/python-import-csv-to-list),
e.g.
    import csv
    ...
    irisdata = list(csv.reader(open(iris_data_path, 'rb')))
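Since the script already parses the CSV into `iris_df` with `pd.read_csv`, and
the later code indexes `irisdata` with lists of column names (a DataFrame
operation that a plain list of lists does not support), arguably the more
direct fix here is a simple alias:
    irisdata = iris_df  # the DataFrame the rest of the script expects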
|
Is this a valid way to subclass a python thread to accept a variable update?
Question: I'm looking at the threading module in Python (version 3.4.3), and am having
difficulty finding a way to update a variable in the target function that is
called. I think I could create a global variable to share between the main
program and the thread that I am starting, but found myself creating the
following subclass instead. This seems to work for my purposes, but I'm
curious if this is just a hack or if it's valid.
The goal here is to create a separate thread that regularly (and quietly)
pings a server, and then updates a specified widget with the status of the
update:
from tkinter import *
import os, time, threading

class ipthread(threading.Thread):
    def __init__(self, ip=None, labelobj=None):
        self.ip = ip
        threading.Thread.__init__(self, target=self.checkconnection, args=(labelobj,))

    def newip(self, ip):
        self.ip = ip

    def checkconnection(self, widget):
        while True:
            self.response = os.system("ping -c 1 -W 10 " + self.ip + " > /dev/null 2> /dev/null")
            if self.response==0:
                widget.config(text="connected", bg='green')
            else:
                widget.config(text="no connection", bg='red')
            time.sleep(1)

if __name__=="__main__":
    win = Tk()
    status = Label(win, text='')
    status.pack()
    ipchecker = ipthread(ip='192.168.1.1', labelobj=status)
    ipchecker.start()
    time.sleep(10)
    ipchecker.newip('192.168.1.2')
While I've put a simple routine here that calls the update after a 10-second
delay, in my program the thread is initialized when I create a tkinter Frame
control panel. The panel has a button that then calls the newip method to
update the thread. It works, so I'm feeling like I've accomplished something,
but is it overkill or unnecessary? I couldn't find another way to initialize
the "checkconnection" routine as a separate thread, and then be able to update
the ip address its using.
Answer: You ask several questions, I'll hopefully answer them all.
**Is this a valid way to pass arguments to a thread?**
There are two ways to pass arguments to a thread:
  * Subclass Thread, redefine the function run and access the new variables from there, as in

        class ipthread(threading.Thread):
            def __init__(self, ip=None, labelobj=None):
                threading.Thread.__init__(self)  # initialise the Thread base class
                self.ip = ip
                self.labelobj = labelobj

            def newip(self, ip):
                self.ip = ip

            def run(self):
                while True:
                    self.response = os.system("ping -c 1 -W 10 " + self.ip + " > /dev/null 2> /dev/null")
                    if self.response==0:
                        self.labelobj.config(text="connected", bg='green')
                    else:
                        self.labelobj.config(text="no connection", bg='red')
                    time.sleep(1)

  * Or you could specify the target to call

        def checkconnection(widget, ip):
            while True:
                response = os.system("ping -c 1 -W 10 " + ip + " > /dev/null 2> /dev/null")
                if response==0:
                    widget.config(text="connected", bg='green')
                else:
                    widget.config(text="no connection", bg='red')
                time.sleep(1)

        th = threading.Thread(target=checkconnection, args=(labelobj, ip))
        th.start()
**Is this the proper solution for this problem?**
No, it is not!
Widget libraries normally do not tolerate a second thread touching their
widgets. You could use a Tkinter
[timer](http://stackoverflow.com/a/2401181/1767653) instead of a thread, or
use a [custom event](http://stackoverflow.com/a/276069/1767653) to let the
thread tell the widget that the text has changed.
When using a timer, the GUI becomes unresponsive while the tick is being
handled. In your case, the ping has an unbounded delay so I would keep the
thread and let it send a custom event.
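A minimal sketch of the custom-event approach, assuming Python 3 tkinter and a
Label named `status` as in the question; the virtual event name
`<<PingResult>>` and the `ping_ok` attribute are illustrative:
    import threading, os, time
    from tkinter import Tk, Label

    win = Tk()
    status = Label(win, text='')
    status.pack()

    def on_ping_result(event):
        # runs in the GUI thread, so touching the widget is safe here
        status.config(text="connected" if win.ping_ok else "no connection")

    win.bind("<<PingResult>>", on_ping_result)

    def worker(ip):
        while True:
            win.ping_ok = os.system("ping -c 1 -W 10 " + ip + " > /dev/null 2>&1") == 0
            win.event_generate("<<PingResult>>", when="tail")  # hand off to the GUI thread
            time.sleep(1)

    threading.Thread(target=worker, args=("192.168.1.1",), daemon=True).start()
    win.mainloop()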
|
Detect open circle with Python / OpenCV
Question: I have a picture with random circles shown, of which one circle is always open.
The size, position and color of the circles are different each time, but the
background is always white.
I want to find the coordinates of the open circle programmatically.
Here is a sample picture:
(sample image omitted)
This picture roughly has coordinates of x:285 y:70. Here is my attempt:
import numpy as np
import argparse
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

image = cv2.imread(args["image"])
black = cv2.imread('black.png')
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(black,contours,-1,(250,250,250),2)
newblack = cv2.cvtColor(black, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(newblack, cv2.cv.CV_HOUGH_GRADIENT, 1, 1,
              param1=42,
              param2=35,
              minRadius=15,
              maxRadius=50)

if circles is not None:
    circles = np.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    cv2.imshow("output", np.hstack([image, output]))
    print circles
    cv2.waitKey(0)
Almost finished!
The param2 value determines which circle is found. I tweaked my code so it
iterates over param2 values, starting with a rather high value of 120.
When it finds a circle, it stops.
# import the necessary packages
import numpy as np
import argparse
import cv2
import sys

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
str = 120

def findcircle( str ):
    circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1, 1,
                  param1=42,
                  param2=str,
                  minRadius=10,
                  maxRadius=100)
    if circles is not None:
        circles = np.round(circles[0, :]).astype("int")
        print circles
        sys.exit()
    while circles is None:
        str = str-1
        findcircle(str)

findcircle(str)
The success rate is around 80-100%.
How do I change the output of the variable `circles` to show only one circle
in case more than one is found, and how do I remove unwanted spaces and
brackets?
Answer:
# import the necessary packages
import numpy as np
import argparse
import cv2
import sys

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
str = 120

def findcircle( str ):
    circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1, 1,
                  param1=42,
                  param2=str,
                  minRadius=10,
                  maxRadius=100)
    if circles is not None:
        circles = np.round(circles[0, :]).astype("int")
        index = [2]
        new_circles = np.delete(circles[0], index)
        print new_circles
        sys.exit()
    while circles is None:
        str = str-1
        findcircle(str)

findcircle(str)
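Each detected circle is an `[x, y, r]` triple, so `circles[0]` picks the first
detection and `np.delete(circles[0], [2])` drops the radius, leaving a flat
`[x y]` array whose print-out has no nested brackets.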
|
How to draw a polygon of n sides, m times
Question: I'm new to Python programming and I have a task to develop a simple
program that draws a polygon of n sides, m times, making the leftmost edge of
the next polygon touch the rightmost edge of the previous polygon. The code
below draws a polygon depending on the number of sides the user inputs and the
number of times the polygon should appear.
import turtle

def myTurtle():
    num_side = raw_input("Enter the number of sides: " )
    num_shap = raw_input("Enter the number of shapes: " )
    num_sides = int(num_side)
    num_shape = int(num_shap)
    window = turtle.Screen()
    window.bgcolor("red")
    polygon = turtle.Turtle()
    polygon.penup()
    polygon.goto(-200, 200)
    polygon.pendown()
    side_length = 60
    angle = 360.0 // num_sides
    n = 0
    for j in range(0, num_shape):
        polygon.forward(side_length)
        for i in range(num_sides):
            polygon.pencolor("black")
            polygon.forward(side_length)
            polygon.right(angle)
        n += side_length
    window.exitonclick()

myTurtle()
The problem I'm having now is making the next polygon go right next to the
previous polygon.
* * *
I have since come up with a better, but still not perfect, solution: some
polygons that are touching each other. How can I achieve this?
import turtle, math

def find_lenth(radius, sides):
    angle = float(360 / sides)
    otherangle = float((180 - angle) / 2)
    radangle = float(angle * (math.pi / 180))
    radangle2 = float(otherangle * (math.pi/180))
    angles = math.sin(radangle) / math.sin(radangle2)
    lenth = radius * angles
    return lenth

def myTurtle():
    num_side = raw_input("Enter the number of sides: " )
    num_shap = raw_input("Enter the number of shapes: " )
    num_sides = int(num_side)
    num_shape = int(num_shap)
    window = turtle.Screen()
    window.bgcolor("red")
    polygon = turtle.Turtle()
    radius = 60
    side_length = find_lenth(radius, num_sides)
    angle = 360.0 // num_sides
    delta = radius*2 #this value you must count
    colors = ['blue','white','black','green']
    for i in range(num_shape):
        polygon.penup()
        polygon.goto(-400+delta*i, 200)
        polygon.pendown()
        polygon.pencolor(colors[i%4])
        n = 0
        for j in range(num_sides):
            polygon.forward(side_length)
            polygon.right(angle)
    window.exitonclick()

if __name__ == '__main__':
    myTurtle()
Answer: This code will draw you some polygons. To move the pointer by the right
number of pixels, you need to compute that offset (`delta`) yourself.
import turtle

def myTurtle():
    num_side = raw_input("Enter the number of sides: " )
    num_shap = raw_input("Enter the number of shapes: " )
    num_sides = int(num_side)
    num_shape = int(num_shap)
    window = turtle.Screen()
    window.bgcolor("red")
    polygon = turtle.Turtle()
    side_length = 60
    angle = 360.0 // num_sides
    delta = side_length*3 #this value you must count
    colors = ['blue','white','black','green']
    for i in range(num_shape):
        polygon.penup()
        polygon.goto(-400+delta*i, 200)
        polygon.pendown()
        polygon.pencolor(colors[i%4])
        n = 0
        for j in range(num_sides):
            polygon.forward(side_length)
            polygon.right(angle)
    window.exitonclick()

if __name__ == '__main__':
    myTurtle()
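One way to make the spacing exact instead of hand-tuned: trace the polygon
once with the pen up, record the x-coordinates, and use the horizontal extent
as `delta`. A sketch (the helper name `polygon_width` is illustrative; the
turtle ends where it started because the polygon closes on itself):
    def polygon_width(t, sides, side_length):
        # walk the polygon invisibly and measure its horizontal extent
        angle = 360.0 / sides
        t.penup()
        xs = [t.xcor()]
        for _ in range(sides):
            t.forward(side_length)
            t.right(angle)
            xs.append(t.xcor())
        t.pendown()
        return max(xs) - min(xs)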
|
python file overwriting with os module
Question: I am writing a program that will move files from one folder to another.
Sometimes I overwrite files in the process. Whenever I run the file, however, I
am asked "Overwrite C:... (Yes/No/All)". I want my program to always select
"All" automatically. Thank you in advance.
import os
from tkinter import *

screen = Tk()

sourceplayers = 'C:\\Program Files (x86)\...\players'
destinationplayers = 'C:\\memory\Will\players'
sourceuserdata = 'C:\\Program Files (x86)\...\remote'
destinationuserdata = 'C:\\memory\\Will\\remote'

def copyout():
    os.system ("""xcopy "%s" "%s" """ % (sourceplayers, destinationplayers))
    os.system ("""xcopy "%s" "%s" """ % (sourceuserdata, destinationuserdata)) #save

def movein():
    os.system ("""xcopy "%s" "%s" """ % (destinationplayers, sourceplayers))
    os.system ("""xcopy "%s" "%s" """ % (destinationuserdata, sourceuserdata))

button = Button(screen, text="save", command=copyout)
button1 = Button(screen, text="overwrite", command=movein)
button.pack()
button1.pack()
screen.mainloop()
Answer: That is because you are using system calls to do the copying, and `xcopy`
prompts before overwriting. You could use a combination of
[os.listdir](https://docs.python.org/2/library/os.html#os.listdir) to list all
the files and directories inside the source one, and
[shutil.move](https://docs.python.org/2/library/shutil.html#shutil.move) to do
the moving.
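A sketch of that suggestion, using `shutil.copy2` since the question's `xcopy`
calls copy rather than move (`shutil.move` works the same way if moving is
intended); `copy2` overwrites existing destination files without prompting.
Alternatively, if you keep `os.system`, passing `xcopy` the `/Y` switch
suppresses the overwrite prompt.
    import os, shutil

    def copyout(source, destination):
        for name in os.listdir(source):
            src = os.path.join(source, name)
            if os.path.isfile(src):
                # copy2 preserves timestamps and silently replaces the target
                shutil.copy2(src, os.path.join(destination, name))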
|
Memory efficient sort of massive numpy array in Python
Question: I need to sort a VERY large genomic dataset using numpy. I have an array of
2.6 billion floats, dimensions = `(868940742, 3)` which takes up about 20GB of
memory on my machine once loaded and just sitting there. I have an early 2015
13' MacBook Pro with 16GB of RAM, 500GB solid state HD and an 3.1 GHz intel i7
processor. Just loading the array overflows to virtual memory but not to the
point where my machine suffers or I have to stop everything else I am doing.
I build this VERY large array step by step from 22 smaller `(N, 2)` subarrays.
Function `FUN_1` generates 2 new `(N, 1)` arrays using each of the 22
subarrays which I call `sub_arr`.
The first output of `FUN_1` is generated by interpolating values from
`sub_arr[:,0]` on array `b = array([X, F(X)])` and the second output is
generated by placing `sub_arr[:, 0]` into bins using array `r = array([X,
BIN(X)])`. I call these outputs `b_arr` and `rate_arr`, respectively. The
function returns a 3-tuple of `(N, 1)` arrays:
import numpy as np

def FUN_1(sub_arr):
    """interpolate b values and rates based on position in sub_arr"""
    b = np.load(bfile)
    r = np.load(rfile)
    b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1])
    rate_arr = np.searchsorted(r[:,0], sub_arr[:,0]) # HUGE efficiency gain over np.digitize...
    return r[rate_arr, 1], b_arr, sub_arr[:,1]
I call the function 22 times in a for-loop and fill a pre-allocated array of
zeros `full_arr = numpy.zeros([868940742, 3])` with the values:
full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1(sub_arr)
In terms of saving memory at this step, I think this is the best I can do, but
I'm open to suggestions. Either way, I don't run into problems up through this
point and it only takes about 2 minutes.
Here is the sorting routine (there are two consecutive sorts)
for idx in range(2):
    sort_idx = numpy.argsort(full_arr[:,idx])
    full_arr = full_arr[sort_idx]
    # ...
    # <additional processing, return small (1000, 3) array of stats>
Now this sort had been working, albeit slowly (takes about 10 minutes).
However, I recently started using a larger, finer-resolution table of `[X,
F(X)]` values for the interpolation step above in `FUN_1` that returns `b_arr`,
and now the SORT really slows down, although everything else remains the same.
Interestingly, I am not even sorting on the interpolated values at the step
where the sort is now lagging. Here are some snippets of the different
interpolation files - the smaller one is about 30% smaller in each case and
far more uniform in terms of values in the second column; the slower one has a
higher resolution and many more unique values, so the results of interpolation
are likely more unique, but I'm not sure if this should have any kind of
effect...?
**bigger, slower file:**
17399307 99.4
17493652 98.8
17570460 98.2
17575180 97.6
17577127 97
17578255 96.4
17580576 95.8
17583028 95.2
17583699 94.6
17584172 94
**smaller, more uniform regular file:**
1 24
1001 24
2001 24
3001 24
4001 24
5001 24
6001 24
7001 24
I'm not sure what could be causing this issue and I would be interested in any
suggestions or just general input about sorting in this type of memory
limiting case!
Answer: At the moment each call to `np.argsort` is generating a `(868940742, 1)` array
of int64 indices, which will take up ~7 GB just by itself. Additionally, when
you use these indices to sort the columns of `full_arr` you are generating
another `(868940742, 1)` array of floats, since [fancy indexing always returns
a copy rather than a
view](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays).
One fairly obvious improvement would be to sort `full_arr` in place using its
[`.sort()`
method](http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html).
Unfortunately, `.sort()` does not allow you to directly specify a row or
column to sort by. However, you _can_ specify a field to sort by for a
structured array. You can therefore force an inplace sort over one of the
three columns by getting a
[`view`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html)
onto your array as a structured array with three float fields, then sorting by
one of these fields:
full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
In this case I'm sorting `full_arr` in place by the 0th field, which
corresponds to the first column. Note that I've assumed that there are three
float64 columns (`'f8'`) - you should change this accordingly if your dtype is
different. This also requires that your array is contiguous and in row-major
format, i.e. `full_arr.flags.C_CONTIGUOUS == True`.
Credit for this method should go to Joe Kington for his answer
[here](http://stackoverflow.com/a/2828371/1461210).
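A minimal self-contained check of the in-place field sort described above, on
a small stand-in array:
    import numpy as np

    arr = np.random.randn(1000, 3)               # stand-in for full_arr
    assert arr.flags['C_CONTIGUOUS']              # required for the view trick
    arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
    assert (np.diff(arr[:, 0]) >= 0).all()        # column 0 is now ascending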
* * *
Although it requires less memory, sorting a structured array by field is
unfortunately much slower compared with using `np.argsort` to generate an
index array, as you mentioned in the comments below (see [this previous
question](http://stackoverflow.com/q/19682521/1461210)). If you use
`np.argsort` to obtain a set of indices to sort by, you might see a modest
performance gain by using `np.take` rather than direct indexing to get the
sorted array:
%%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
x[idx]
# 1 loops, best of 100: 148 µs per loop
%%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
np.take(x, idx, axis=0)
# 1 loops, best of 100: 42.9 µs per loop
However I wouldn't expect to see any difference in terms of memory usage,
since both methods will generate a copy.
* * *
Regarding your question about why sorting the second array is faster - yes,
you should expect any reasonable sorting algorithm to be faster when there are
fewer unique values in the array because on average there's less work for it
to do. Suppose I have a random sequence of digits between 1 and 10:
5 1 4 8 10 2 6 9 7 3
There are 10! = 3628800 possible ways to arrange these digits, but only one in
which they are in ascending order. Now suppose there are just 5 unique digits:
4 4 3 2 3 1 2 5 1 5
Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I
could swap any pair of identical digits in the sorted vector without breaking
the ordering.
By default, `np.ndarray.sort()` uses
[Quicksort](https://en.wikipedia.org/wiki/Quicksort). The
[`qsort`](https://en.wikipedia.org/wiki/Quicksort#Repeated_elements) variant
of this algorithm works by recursively selecting a 'pivot' element in the
array, then reordering the array such that all the elements less than the
pivot value are placed before it, and all of the elements greater than the
pivot value are placed after it. Values that are equal to the pivot are
already sorted. Having fewer unique values means that, on average, more values
will be equal to the pivot value on any given sweep, and therefore fewer
sweeps are needed to fully sort the array.
For example:
%%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000)
x.sort()
# 1 loops, best of 100: 2.3 ms per loop
%%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000)
x.sort()
# 1 loops, best of 100: 4.62 ms per loop
In this example the dtypes of the two arrays are the same. If your smaller
array has a smaller item size compared with the larger array then the cost of
copying it due to the fancy indexing will also be smaller.
|
Plotting datetime output using matplotlib
Question: So I have this code based on a simple data array that looks like this:
5020 : 2015 7 11 11 42 54 782705
5020 : 2015 7 11 11 44 55 575776
5020 : 2015 7 11 11 46 56 560755
5020 : 2015 7 11 11 48 57 104872
and the plot looks like the following:
import scipy as sp
import matplotlib.pyplot as plt
data = sp.genfromtxt("E:/Python/data.txt", delimiter=" : ")
x = data[:,0]
y = data[:,1]
plt.scatter(x,y)
plt.title("Instagram")
plt.xlabel("Time")
plt.ylabel("Followers")
plt.xticks([w*2*60 for w in range(10)],
['2-minute interval %i'%w for w in range(10)])
plt.autoscale(tight=True)
plt.grid()
plt.show()
I'm looking for a simple way to use the datetime output as x intervals on the
graph. I can't figure out a way to make it understand the format, and there's
this:
In [15]: sp.sum(sp.isnan(y))
Out[15]: 77
which I guess is because of the spaces? I'm new to machine learning in Python;
forgive my ignorance.
Thank you very much.
Answer: I would solve this by directly passing datetime.datetime objects to pyplot.
Here is a short example:
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib
# Note: please figure out yourself the data input
x = [dt.datetime(2015,7,11,11,42,54),
dt.datetime(2015,7,11,11,44,56),
dt.datetime(2015,7,11,11,46,56),
dt.datetime(2015,7,11,11,48,57)]
#define the x limit:
xstart= dt.datetime(2015,7,11,11,40,54)
xstop = dt.datetime(2015,7,11,11,50,54)
y = [782705, 575776, 560755, 104872]
fig,ax= plt.subplots()
ax.scatter(x,y)
xfmt = matplotlib.dates.DateFormatter('%D %H:%M:%S')
ax.xaxis.set_major_formatter(xfmt)
ax.set_title("Instagram")
ax.set_xlabel("Time")
ax.set_ylabel("Followers")
ax.set_xlim(xstart,xstop)
plt.xticks(rotation='vertical')
plt.show()
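A sketch of the data-input step left as an exercise in the comment above,
parsing the question's rows and assuming (as the hard-coded `y` list does)
that the last column is the follower count:
    import datetime as dt

    x, y = [], []
    with open("E:/Python/data.txt") as fh:
        for line in fh:
            left, right = line.split(" : ")
            fields = right.split()
            x.append(dt.datetime(*[int(v) for v in fields[:6]]))  # Y m d H M S
            y.append(int(fields[6]))  # follower count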
Result: 
|
Setting attribute values of XML using Python's Etree
Question: I am trying to modify XML using Python's etree module. I am trying to look
through a fairly complicated XML document for a 'ScanItem'. The children of
ScanItem have three keys (['name', 'type', 'value']). I am looking to modify
the contents of 'value'.
from lxml import etree
model_dir = 'D:\MPhil\Model_Building\Models\Retinoic_acid\[07]\Model_LIne_10'
model_name = 'M28.cps'
model_file=os.path.join(model_dir,model_name) #model_file contains the XML
IA=Identifiability_Analysis() # custom class
copasiML_str= IA.read_copasiML_as_string(model_file)
copasiML=etree.fromstring(copasiML_str) # parse XML with etree
parameters_dict=IA.extract_parameters_to_dict(model_file) #extract some parameters from the XML
reaction_name_i='v1'
parameter_name_i='Kcat'
maxi = float(parameters_dict[reaction_name_i][parameter_name_i])*2 #calculating changes
mini = float(parameters_dict[reaction_name_i][parameter_name_i])/2 # calculating changes
query = "//*[@name='ScanItem']" #looks for the element called ScanItem
for j in copasiML_i.xpath(query):
children = list(j) #gets the children of ScanItem
for k in children:
if k.attrib['name']=='Maximum': #finds the 'bit' that I want to change
copasiML_i.set(k.attrib['value'],maxi) #an attempt at changing the value to maxi. This does not work
This last line gives the following error:
ValueError: Invalid attribute name u'0.0102086' #which is the value of 'maxi'
Does anybody know what I'm doing wrong?
Thanks
Answer: So I figured it out in the end. It's a simple assignment with the same
syntax as a normal Python dictionary.
WAL-E: Cannot restart postgresql after backup-fetch
Question: This is on Ubuntu 14.04 LTS
**EDIT** :
WAL-E installed using pip, with secret keys managed by `envdir`, according to
the instructions <https://gist.github.com/elithrar/8682235> and
<https://github.com/wal-e/wal-e#dependencies>.
I am trying to restore a database using WAL-E, and everything appears to go
well initially, as I have Postgres installed and running, and can easily
create or restore a database and access it locally and remotely via pgadmin.
Where it goes bad is when I try to perform a restore from an S3 backup using
wal-e fetch-backup. It appears to go well up until the point of starting the
postgres.
There are a number of errors that come up, appearing to be either missing
packages or permissions issues, like the following:
* Starting PostgreSQL 9.3 database server
* The PostgreSQL server failed to start. Please check the log output:
2015-07-11 00:41:11 EDT LOG: database system was interrupted; last known up at 2015-06-30 05:00:02 EDT
2015-07-11 00:41:11 EDT LOG: starting archive recovery
Traceback (most recent call last):
File "/usr/local/bin/wal-e", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, in <module>
@_call_aside
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 651, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 952, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 847, in resolve
new_requirements = dist.requires(req.extras)[::-1]
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2602, in requires
dm = self._dep_map
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2803, in _dep_map
self.__dep_map = self._compute_dependencies()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2825, in _compute_dependencies
for req in self._parsed_pkg_info.get_all('Requires-Dist') or []:
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2794, in _parsed_pkg_info
metadata = self.get_metadata(self.PKG_INFO)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1617, in get_metadata
return self._get(self._fn(self.egg_info, name))
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1728, in _get
with open(path, 'rb') as stream:
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/six-1.9.0.dist-info/METADATA'
2015-07-11 00:41:11 EDT LOG: invalid checkpoint record
2015-07-11 00:41:11 EDT FATAL: could not locate required checkpoint record
2015-07-11 00:41:11 EDT HINT: If you are not restoring from a backup, try removing the file "/var/lib/postgresql/9.3/main/backup_label".
2015-07-11 00:41:11 EDT LOG: startup process (PID 1693) exited with exit code 1
2015-07-11 00:41:11 EDT LOG: aborting startup due to startup process failure
I had several of these and was able to resolve them by changing the group and
modifying the permissions on the noted files to match others in the directory,
but I suspect the problem has more to do with how and/or where these packages
were installed. After resolving the above issue, postgres still fails to start
up, returning the following:
* Starting PostgreSQL 9.3 database server
* The PostgreSQL server failed to start. Please check the log output:
2015-07-11 00:30:04 EDT LOG: database system was interrupted; last known up at 2015-06-30 05:00:02 EDT
2015-07-11 00:30:04 EDT LOG: starting archive recovery
Traceback (most recent call last):
File "/usr/local/bin/wal-e", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, in <module>
@_call_aside
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 651, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 952, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 839, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'wal-e==0.8.1' distribution was not found and is required by the application
2015-07-11 00:30:04 EDT LOG: invalid checkpoint record
2015-07-11 00:30:04 EDT FATAL: could not locate required checkpoint record
2015-07-11 00:30:04 EDT HINT: If you are not restoring from a backup, try removing the file "/var/lib/postgresql/9.3/main/backup_label".
2015-07-11 00:30:04 EDT LOG: startup process (PID 1495) exited with exit code 1
2015-07-11 00:30:04 EDT LOG: aborting startup due to startup process failure
It is complaining about `the 'wal-e==0.8.1' distribution was not found...`,
but it is clearly installed and executable:
ls -l /usr/local/lib/python2.7/dist-packages
total 608
drwxr-sr-x 2 root staff 4096 Jul 10 20:14 argparse-1.3.0.dist-info
-rw-r--r-- 1 root staff 88400 Jul 10 20:14 argparse.py
-rw-r--r-- 1 root staff 65659 Jul 10 20:14 argparse.pyc
drwxr-sr-x 6 root staff 4096 Jul 9 22:36 azure
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 azure-0.11.1.egg-info
drwxr-sr-x 5 root staff 4096 Jul 9 22:36 babel
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 Babel-1.3.egg-info
drwxr-sr-x 57 root staff 4096 Jul 9 22:36 boto
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 boto-2.38.0.dist-info
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 concurrent
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 dateutil
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 debtcollector
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 debtcollector-0.5.0.dist-info
-rw-r--r-- 1 root staff 207 Jul 10 20:10 easy-install.pth
-rw-r--r-- 1 root staff 126 Jul 10 20:33 easy_install.py
-rw-r--r-- 1 root staff 315 Jul 10 20:33 easy_install.pyc
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 futures-3.0.3.dist-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 gevent
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 gevent-1.0.2.egg-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 greenlet-0.4.7.egg-info
-rwxr-xr-x 1 root staff 82869 Jul 9 22:36 greenlet.so
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 iso8601
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 iso8601-0.1.10.egg-info
drwxr-sr-x 15 root staff 4096 Jul 9 22:36 keystoneclient
drwxr-sr-x 2 root staff 4096 Jul 10 20:33 _markerlib
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 msgpack
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 msgpack_python-0.4.6.egg-info
drwxr-sr-x 5 root staff 4096 Jul 9 22:36 netaddr
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 netaddr-0.7.15.dist-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 netifaces-0.10.4.egg-info
-rwxr-xr-x 1 root staff 58386 Jul 9 22:36 netifaces.so
drwxr-sr-x 4 root staff 4096 Jul 9 22:36 oslo
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 oslo_config
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 oslo.config-1.14.0.dist-info
-rw-r--r-- 1 root root 299 Jul 9 22:35 oslo.config-1.14.0-py2.7-nspkg.pth
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 oslo_i18n
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 oslo.i18n-2.1.0.dist-info
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 oslo_serialization
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 oslo.serialization-1.7.0.dist-info
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 oslo_utils
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 oslo.utils-1.8.0.dist-info
-rw-r--r-- 1 root root 299 Jul 9 22:35 oslo.utils-1.8.0-py2.7-nspkg.pth
drwxr-sr-x 5 root staff 4096 Jul 10 20:14 pbr
drwxr-sr-x 2 root staff 4096 Jul 10 20:14 pbr-1.3.0.dist-info
drwxr-sr-x 4 root staff 4096 Jul 10 20:10 pip-7.1.0-py2.7.egg
drwxr-sr-x 3 root staff 4096 Jul 10 20:33 pkg_resources
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 python_dateutil-2.4.2.dist-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 python_keystoneclient-1.6.0.dist-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 python_swiftclient-2.4.0.dist-info
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 pytz
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 pytz-2015.4.dist-info
drwxr-sr-- 3 root staff 4096 Jul 9 23:25 requests
drwxr-sr-- 2 root staff 4096 Jul 9 23:25 requests-2.7.0.dist-info
drwxr-sr-x 3 root staff 4096 Jul 10 20:33 setuptools
drwxr-sr-x 2 root staff 4096 Jul 10 20:33 setuptools-18.0.1.dist-info
drwxr-sr-x 3 root staff 4096 Jul 9 22:36 simplejson
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 simplejson-3.7.3.egg-info
drwxr-sr-- 2 root staff 4096 Jul 9 23:26 six-1.9.0.dist-info
-rw-r--r-- 1 root root 29664 Jul 9 23:26 six.py
-rw-r--r-- 1 root root 29006 Jul 9 23:26 six.pyc
drwxr-sr-x 4 root staff 4096 Jul 9 22:36 stevedore
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 stevedore-1.6.0.dist-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 swiftclient
drwxr-sr-x 7 root staff 4096 Jul 9 22:35 wal_e
drwxr-sr-x 2 root staff 4096 Jul 9 22:35 wal_e-0.8.1.egg-info
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 wrapt
drwxr-sr-x 2 root staff 4096 Jul 9 22:36 wrapt-1.10.5.egg-info
I have done quite a bit of searching, but haven't found anything that helps.
Any suggestions or points in the right direction are appreciated.
Additionally, it would be of great interest to solve this to ensure that my
primary installation will behave in the event that a restoration is necessary.
Backing up is pointless if I can't restore it.
Answer: I am not entirely sure what resolved the `The 'wal-e==0.8.1' distribution was
not found` error, but after rebooting I never saw it again.
Aside from that, fixing this was rather straightforward.
The world read/execute bits on a number of Python package directories were not
set (note the `drwxr-sr--` entries in the listing above).
Setting them with `chmod` fixed the error:
chmod o+x /usr/local/lib/python2.7/dist-packages/requests-2.7.0.dist-info/
chmod o+x /usr/local/lib/python2.7/dist-packages/requests
...
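For a whole tree of packages, a recursive variant of the same fix (sketched
under the assumption that everything in `dist-packages` should be
world-readable, with directories world-searchable) is
`chmod -R o+rX /usr/local/lib/python2.7/dist-packages`.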
|
How do I get the name of a key in PyWin32 giving its keycode?
Question: I'm reading the [PyWin32
docs](http://timgolden.me.uk/pywin32-docs/contents.html), and for some reason,
the [GetKeyNameText function](https://msdn.microsoft.com/en-
us/library/windows/desktop/ms646300\(v=vs.85\).aspx) is not there. It's not
possible to return the name using
[GetKeyState](http://timgolden.me.uk/pywin32-docs/win32api__GetKeyState_meth.html)
or
[GetKeyboardState](http://timgolden.me.uk/pywin32-docs/win32api__GetKeyboardState_meth.html)
because, obviously, they return **only** the state. So, why is GetKeyNameText
not there, and how can I get the name of a key given its keycode (from 0 to
255)?
Example:
import win32api
if __name__ == "__main__":
while True:
for key in range(256):
if int(win32api.GetKeyState(key)):
print(win32api.GetKeyNameText(key)) # Not available in PyWin32.
Output:
Traceback (most recent call last):
File "main.py", line 32, in <module>
print(win32api.GetKeyNameText(key)) # Not available in Python.
AttributeError: 'module' object has no attribute 'GetKeyNameText'
Press any key to continue . . .
Answer: For the keys, you would most probably need to create a dictionary mapping
each VK code to a key name. The virtual key codes are listed
[here](https://msdn.microsoft.com/en-
us/library/windows/desktop/dd375731\(v=vs.85\).aspx). Example:
VK_CODE = {8: 'backspace',
9: 'tab',
12: 'clear',
13: 'enter',
16: 'shift',
17: 'ctrl',
18: 'alt',
19: 'pause',
20: 'caps_lock',
27: 'esc',
32: 'spacebar',
33: 'page_up',
34: 'page_down',
35: 'end',
36: 'home',
37: 'left_arrow',
38: 'up_arrow',
39: 'right_arrow',
40: 'down_arrow',
41: 'select',
42: 'print',
43: 'execute',
44: 'print_screen',
45: 'ins',
46: 'del',
47: 'help',
48: '0',
49: '1',
50: '2',
51: '3',
52: '4',
53: '5',
54: '6',
55: '7',
56: '8',
57: '9',
65: 'a',
66: 'b',
67: 'c',
68: 'd',
69: 'e',
70: 'f',
71: 'g',
72: 'h',
73: 'i',
74: 'j',
75: 'k',
76: 'l',
77: 'm',
78: 'n',
79: 'o',
80: 'p',
81: 'q',
82: 'r',
83: 's',
84: 't',
85: 'u',
86: 'v',
87: 'w',
88: 'x',
89: 'y',
90: 'z',
96: 'numpad_0',
97: 'numpad_1',
98: 'numpad_2',
99: 'numpad_3',
100: 'numpad_4',
101: 'numpad_5',
102: 'numpad_6',
103: 'numpad_7',
104: 'numpad_8',
105: 'numpad_9',
106: 'multiply_key',
107: 'add_key',
108: 'separator_key',
109: 'subtract_key',
110: 'decimal_key',
111: 'divide_key',
112: 'F1',
113: 'F2',
114: 'F3',
115: 'F4',
116: 'F5',
117: 'F6',
118: 'F7',
119: 'F8',
120: 'F9',
121: 'F10',
122: 'F11',
123: 'F12',
124: 'F13',
125: 'F14',
126: 'F15',
127: 'F16',
128: 'F17',
129: 'F18',
130: 'F19',
131: 'F20',
132: 'F21',
133: 'F22',
134: 'F23',
135: 'F24',
144: 'num_lock',
145: 'scroll_lock',
160: 'left_shift',
161: 'right_shift ',
162: 'left_control',
163: 'right_control',
164: 'left_menu',
165: 'right_menu',
166: 'browser_back',
167: 'browser_forward',
168: 'browser_refresh',
169: 'browser_stop',
170: 'browser_search',
171: 'browser_favorites',
172: 'browser_start_and_home',
173: 'volume_mute',
174: 'volume_Down',
175: 'volume_up',
176: 'next_track',
177: 'previous_track',
178: 'stop_media',
179: 'play/pause_media',
180: 'start_mail',
181: 'select_media',
182: 'start_application_1',
183: 'start_application_2',
186: ';',
187: '+',
188: ',',
189: '-',
190: '.',
191: '/',
192: '`',
219: '[',
220: '\\',
221: ']',
222: "'",
246: 'attn_key',
247: 'crsel_key',
248: 'exsel_key',
250: 'play_key',
251: 'zoom_key',
254: 'clear_key'}
Also, you should check whether `win32api.GetKeyState(key)` is negative (e.g.
`-127`) rather than merely non-zero, since it can be `1` when the key is only
toggled, not held down.
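For completeness, here is a minimal polling sketch using the dictionary above
(it assumes `VK_CODE` is in scope); `GetKeyState` returns a negative value
while the key is held down:
    import time
    import win32api
    while True:
        for code, name in VK_CODE.items():
            if win32api.GetKeyState(code) < 0:  # high bit set: key is down
                print(name)
        time.sleep(0.05)  # avoid a 100% CPU busy loop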
|
DB error: django.core.exceptions.ImproperlyConfigured
Question: Getting this error when trying to create a fixture and running the following
command in the terminal:
django-admin.py dumpdata data.json
Full traceback:
(project)litwisha@litwisha:~/PycharmProjects/test_project$ django-admin.py dumpdata data.json
Traceback (most recent call last):
File "/home/litwisha/.virtualenvs/project/bin/django-admin.py", line 5, in <module>
management.execute_from_command_line()
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/core/management/base.py", line 402, in run_from_argv
connections.close_all()
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/db/utils.py", line 258, in close_all
for alias in self:
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/db/utils.py", line 252, in __iter__
return iter(self.databases)
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/utils/functional.py", line 60, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/db/utils.py", line 151, in databases
self._databases = settings.DATABASES
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/home/litwisha/.virtualenvs/project/local/lib/python2.7/site-packages/django/conf/__init__.py", line 42, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DATABASES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Tried this [solution](http://stackoverflow.com/a/15556596/3525271), but it
doesn't work:
When writing in shell:
from django.conf import settings
settings.configure()
get error:
RuntimeError: Settings already configured.
Maybe it's because I use `virtualenv`? But I also tried it without
`virtualenv` and it doesn't work. I use the `PyCharm` IDE.
settings.py:
"""
Django settings for test_project project.
Generated by 'django-admin startproject' using Django 1.8.3.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'r1n@)rahbrh6t1hb-mn_83c^7ai@ij&7d8m8mo7tdm&5q#o5t&'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'app',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'test_project.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'),],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'test_project.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'test1',
'USER': 'postgres',
'PASSWORD': '1111',
'HOST': '127.0.0.1',
'PORT': '5432'
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
Answer: I've reproduced your issue, and I've discovered that using:
python manage.py dumpdata > data.json
just works.
Note: You must use the greater-than sign (shell output redirection) in order
to save the data to the data.json file.
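If you do want the standalone `django-admin.py`, it only works once it knows
where your settings live; a sketch, assuming the project layout from the
question and running from the directory containing `manage.py`:
    export DJANGO_SETTINGS_MODULE=test_project.settings
    django-admin.py dumpdata > data.json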
|
python 3 csv file - attribute error
Question: I am trying to read in the following csv file:
<https://github.com/eljefe6a/nfldata/blob/master/stadiums.csv>
I copied and pasted the contents into Excel and saved it as a csv file
because it is in a Unix format.
and I get the following attribute error message
Any help appreciated. Thank you.
import sys
import csv
with open('stadium.csv', newline='') as csvfile:
readCSV = csv.reader(csvfile,delimiter=',')
for line in readCSV:
line = line.strip()
unpacked = line.split(",")
stadium, capacity, expanded, location, surface, turf, team, opened, weather, roof, elevation = line.split(",")
results = [turf, "1"]
print("\t".join(results))
Error:
Traceback (most recent call last):
File "C:/Python34/mapper.py", line 31, in <module>
line = line.strip()
AttributeError: 'list' object has no attribute 'strip'
Answer: The CSV reader already separates all the fields for you. That is, your `line`
variable is already a list, not a string, so there's nothing to strip and
nothing to split. Use `line` the way you intended to use `unpacked`.
That's why you're using the `csv` package in the first place, remember.
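A corrected version of the original script, keeping the same names (it still
assumes every row has exactly eleven fields):
    import csv
    with open('stadium.csv', newline='') as csvfile:
        readCSV = csv.reader(csvfile, delimiter=',')
        for line in readCSV:
            # line is already a list of fields: no strip() or split() needed
            stadium, capacity, expanded, location, surface, turf, team, opened, weather, roof, elevation = line
            results = [turf, "1"]
            print("\t".join(results))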
|
Python Sorting Regular Expression
Question: Hi, I'm making a script that sorts through a txt file and selects each name
and the first 3 stats after it, storing them in a dict, then does the same for
the next name + 3 stats; if a dict isn't the smart choice, storing them in a
list would also work, I think.
## the txt file looks like this:
player a 34 45 56 player b 38 93 75 playerc 39 29 18 playerd 38 98
I tried `player = re.findall(r"[-+]?\d*\.\d+|\d+", player_string)` But it only
gives me the player ratings and I think I need to use some kind of dict to
store all the different players.
If this is complicated you don't have to do the entire thing for me just point
me in the right direction. Thank you. I'm using py2.6
Answer: I think what you need is:
import re
player_string = "player a 34 45 56 player b 38 93 75 playerc 39 29 18 playerd 38 98"
pattern = re.compile(r"player\s+(\w+)\s+(\d+)\s+(\d+)\s+(\d+)")
matches = pattern.findall(player_string)
d = {}
for m in matches :
print m
d[m[0]] = m[1:]
print d
Note that you wrote "playerc" and "playerd" without spaces, so these two will
not be found (and "playerd" has only two stats, so it would not match the
pattern anyway).
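For reference, running the snippet on the sample string prints the two matches
and the resulting dict (dict ordering may vary):
    ('a', '34', '45', '56')
    ('b', '38', '93', '75')
    {'a': ('34', '45', '56'), 'b': ('38', '93', '75')}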
|
Scrapy 1.0 - Getting return value after running from python script
Question: I am using the below code to run my crawler from a python script:
import scrapy
from scrapy.crawler import CrawlerProcess
process = CrawlerProcess()
process.crawl(MySpider)
process.start() # the script will block here until the crawling is finished
where my `MySpider` returns an object.
How can I get the return value from this? Please keep in mind this is Scrapy
1.0. All the examples I found are for older versions of Scrapy.
The above code is from :
<http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-
script>
Answer: Well, the solution would be to implement custom item pipeline to store your
results in a file -- and read the results later.
Alternatively you can do this with configuring the export in Scrapy too. For
example export the items in a file called `results.json`:
settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'results.json'
To avoid the temporary file you should use a custom pipeline class and handle
your results on the `spider_closed` signal.
You can refer to this answer for more details:
<http://stackoverflow.com/a/23574703/3941341>
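As a rough sketch of that signal-based idea (the handler name here is mine,
and it assumes Scrapy 1.0's `Crawler` API), collecting items in memory instead
of a file via the `item_scraped` signal:
    from scrapy import signals
    from scrapy.crawler import Crawler, CrawlerProcess
    items = []
    def collect_item(item, response, spider):
        # called once for every item the spider yields
        items.append(item)
    process = CrawlerProcess()
    crawler = Crawler(MySpider, process.settings)
    crawler.signals.connect(collect_item, signal=signals.item_scraped)
    process.crawl(crawler)
    process.start()  # blocks until the crawl is finished
    print(items)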
|
python design custom iterator
Question: I am designing a custom iterator in python:
class Iterator():
def __init__(self):
return
def fit(self, n):
self.n = n
return self
def __iter__(self):
for i in range(self.n):
yield i
return
it = Iterator().fit(10)
for i in it:
print(i)
it.fit(20)
for i in it:
print(i)
It is working fine, but I am wondering if it is possible that a new `fit` is
called before the previous one is finished, leading to strange behaviour of
the class.
If so, how should I design it to make it more robust? It is important that
some parameters can be passed via the `fit` method.
EDIT: I will introduce an example that is similar to my original problem
The `iterator` class is designed to be used by a `User` class. It is important
that when the `evaluate` method is called all the numbers until `n/k` are
printed. Without any exception.
Maybe the use of an `iterator.fit(n)` method solves the problem?
class Iterator():
def __init__(self, k):
self.k = k
return
def fit(self, n):
for i in range(int(n/self.k)):
yield i
return
class User():
def __init__(self, iterator):
self.iterator = iterator
return
def evaluate(self, n):
for i in self.iterator.fit(n):
print(i)
return
it = Iterator(2)
u = User(it)
u.evaluate(10) # I want to be sure that all the numbers until 9 are printed
u.evaluate(20) # I want to be sure that all the numbers until 20 are printed
Answer: Because each call to `range` creates a new iterator, there will be no
conflicts if you make multiple calls to `fit`.
Your class is a bit weird. You could either remove the `__init__`, as it does
nothing, or put the `fit` method in there.
it = Iterator()
it1 = iter(it.fit(10))
it2= iter(it.fit(5))
print it1.next()
print it1.next()
print it2.next()
print it1.next()
>>0
1
0
2
|
pygame/python sprite not moving? What have i done wrong? (no errors)
Question: Alright, so here's my code, I get no errors but my sprite (player) is not
moving
import pygame,sys
import random
import time
#Colors
white = (255,255,255)
black = (0,0,0)
red = (255,0,0)
green = (0,255,0)
blue = (0,0,255)
class Block(pygame.sprite.Sprite):
def __init__(self, color = blue, width = 50, height = 50):
super(Block, self).__init__()
self.image = pygame.Surface((width, height))
self.image.fill(color)
self.rect = self.image.get_rect()
def set_position(self, x , y):
self.rect.x = x
self.rect.y = y
def set_image(self, filename = None):
if(filename != None):
self.image = pygame.image.load(filename)
self.rect = self.image.get_rect()
if (__name__ == "__main__"):
pygame.init()
window_size = window_width, window_height = 640,480
window = pygame.display.set_mode(window_size)
pygame.display.set_caption('Test')
window.fill(white)
clock = pygame.time.Clock()
#important variables
pos_x = 300
pos_y = 200
pos_x_change = 0
pos_y_change = 0
block_group = pygame.sprite.Group()
player = Block()
player.set_image('player.png')
player.set_position(pos_x,pos_y) #Player variable for pos
another_block = Block(red)
another_block.set_position(100,100)
block_group.add(player, another_block)
block_group.draw(window)
pygame.display.update()
running = True
while(running):
for event in pygame.event.get():
if (event.type == pygame.QUIT):
running = False
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
pos_x_change = -10
if event.key == pygame.K_RIGHT:
pos_x_change = 10
if event.key == pygame.K_UP:
pos_y_change -10
if event.key == pygame.K_DOWN:
pos_y_change = 10
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT:
pos_x_change = 0
if event.key == pygame.K_RIGHT:
pos_x_change = 0
if event.key == pygame.K_UP:
pos_y_change == 0
if event.key == pygame.K_DOWN:
pos_y_change == 0
pos_x += pos_x_change
pos_y += pos_y_change
clock.tick(60)
pygame.quit
quit()
As you can see, I clearly added the pos_x and pos_y and set them with pos_x +=
pos_x_change and pos_y += pos_y_change, but the sprite is still not moving. I'm
guessing it's a misplacement of code because Python heavily relies on
indentation? Please explain to me what I have done wrong; it would be greatly
appreciated.
Answer: It looks like you're updating the variables that contain the player's x and y
coordinates, but you're never updating the display. That's why it looks like the
position isn't changing. I believe calling `player.set_position(pos_x, pos_y)`
and redrawing the sprite group just above the `clock.tick(60)` statement will
fix the problem; see the sketch below.
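A minimal sketch of the end of the main loop with that change plus the redraw
it needs to become visible (note also that `pos_y_change -10` and the two
`pos_y_change == 0` lines in the event handling are typos that do nothing;
they should be assignments):
    pos_x += pos_x_change
    pos_y += pos_y_change
    player.set_position(pos_x, pos_y)  # move the sprite's rect
    window.fill(white)        # clear the previous frame
    block_group.draw(window)  # draw the sprites at their new positions
    pygame.display.update()   # push the new frame to the screen
    clock.tick(60)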
|
How to open a .npz-File
Question: First I want to say that I'm completely new to programming and Python.
Someone sent me a .npz file. Is there anyone that could explain to me how to
open that file, or which code I have to write? I have already googled for
quite a long time, but I just don't understand how to open it.
Thanks in advance.
Answer: Use the
[`load`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html)
function:
import numpy as np
data = np.load('your_file.npz')
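An `.npz` file is an archive of several arrays; a quick sketch for inspecting
one (`'arr_0'` is just the default name `np.savez` uses, your file's keys may
differ):
    print(data.files)     # names of the arrays stored in the archive
    print(data['arr_0'])  # access one array by name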
|
PYTHON HELP, word not defined?
Question: So I keep getting `name 'word' is not defined` on:
#for every character in secret_word
for char in word:
Context:
import time
name = input("What is your name? ")
print ("Hello, " + name, "Time to play hangman!")
print ("\n")
time.sleep(1)
print ("Start guessing...")
time.sleep(0.5)
list = ["secret"]
guesses = ' '
turns = 10
while turns > 0:
failed = 0
#for every character in secret_word
for char in words:
if char in guesses:
print ("char,")
else:
print ("_",)
if failed == 0:
print ("\nYou won")
# exit the script
break
print
guess = input("guess a character")
guesses += guess
if guess not in word:
turns -= 1
print ("Wrong\n")
print ("you have") , + turns, 'more guesses'
if turns == 0:
# print ("you lose\n")
print("exit")
Answer: You're referring to the variable "word" without defining it (note that the
posted loop actually iterates over `words`, which is equally undefined; pick
one name). Somewhere before your for loop you have to define it like so:
word = "myString"
You should post your actual code here instead of a screenshot next time.
Highlight it and hit control + k to mark it as code.
|
Errors in python script to remove NULL chars in a file
Question: Newbie question: I want to write a python script which removes all the NULL
characters in a file and count how many non-NULL characters left.
## Equivalent Linux command line:
# Linux-machine:$ tr -dc [:print:] < raw-test.txt | wc -c
# Linux-machine:$ more -f raw-test.txt
abcdefghij^@^@^@^@^@^@.....
raw-test.txt file has 10 non-NULL characters in it (abcdefghij).
Python code:
* * *
import subprocess
p = subprocess.Popen(["tr", "-d", "-c", "[:print:]", "<", "raw-test.txt"], stdout=subprocess.PIPE)
output, err = p.communicate()
print output
## Errors:
# Linux-machine:$ python raw-test-trim.py
usage: tr [-Ccsu] string1 string2
tr [-Ccu] -d string1
tr [-Ccu] -s string1
tr [-Ccu] -ds string1 string2
# Linux-machine:$
Please help me see my errors.
Thanks much
Answer: IO redirections like `< infile` and `> outfile` are shell syntax; the OS
doesn't recognize them as ordinary process arguments.
With subprocess, you should either set `shell=True` or use Python's open() to
redirect.
import subprocess
p = subprocess.Popen('tr -d -c [:print:] < raw-test.txt', shell=True, stdout=subprocess.PIPE)
output, err = p.communicate()
print output
with open('raw-test.txt', 'rb') as f:
p = subprocess.Popen(["tr", "-d", "-c", "[:print:]"], stdin=f, stdout=subprocess.PIPE)
output, err = p.communicate()
print output
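The `wc -c` half of the original pipeline doesn't need a subprocess at all;
once you have `output`, its length is the count of remaining printable
characters:
    print len(output)  # e.g. 10 for the abcdefghij example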
|
Python reports invalid syntax when assigning user defined Kivy widget property
Question:
from kivy.uix.label import Label
from kivy.uix.scrollview import ScrollView
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder
from kivy.base import runTouchApp
from kivy.properties import StringProperty
from kivy.properties import ListProperty
from kivy.graphics.vertex_instructions import Rectangle
from kivy.graphics.context_instructions import Color
Builder.load_string('''
<bbx>:
orientation: 'vertical'
my2App:
color: 1,0,0,1
<my2App>:
text: root.text
Label:
text: root.text
font_size: 16
size_hint_y: None
text_size: self.width, None
height: self.texture_size[1]
canvas:
Color:
rgba: root.color
Rectangle:
pos: self.pos
size: self.size
''')
class my2App(ScrollView):
text = StringProperty('default string'*200)
color = ListProperty([1,0,0,0.25])
class bbx(BoxLayout):
pass
runTouchApp(bbx())
* * *
This is my practice kivy code. my2App is a user defined widget mostly copied
from this tutorial (<https://www.youtube.com/watch?v=WdcUg_rX2fM>). The only
addition is the color property defined by ListProperty. Somehow this user
defined color property didn't work out. I tried to run my2App alone and it
didn't work either.
> Traceback (most recent call last): File "test_anotherviky.py", line
> 38, in <module>
> ''') File "/usr/lib/python2.7/dist-packages/kivy/lang.py", line 1796, in load_string
> parser = Parser(content=string, filename=fn) File "/usr/lib/python2.7/dist-packages/kivy/lang.py", line 1185, in
> __init__
> self.parse(content) File "/usr/lib/python2.7/dist-packages/kivy/lang.py", line 1291, in parse
> rule.precompile() File "/usr/lib/python2.7/dist-packages/kivy/lang.py", line 1049, in
> precompile
> x.precompile() File "/usr/lib/python2.7/dist-packages/kivy/lang.py", line 976, in
> precompile
> self.co_value = compile(value, self.ctx.filename or '<string>', mode) File "<string>", line 5
> color: 1,0,0,1
> ^ SyntaxError: invalid syntax
Answer: Widget names must start with an upper case letter to work in kv, as the
parser uses this to distinguish widget rules from properties. Here, it treats
`my2App:` as a property setting because the name starts with a lower case
letter; renaming the class to e.g. `My2App` fixes it.
|
Getting variable value from function of python main.py to .kv file
Question: Hello guys, I am not able to get the value of the variable `w` from this
function. If it is outside the class I can get the value, but if it is inside
the function I cannot.
my main.py
class ExampleApp(App):
def build(self,App):
self.load_kv("exapmleapp.kv")
def my_any():
w="THIS IS STRING"
if __name__ == "__main__":
ExampleApp().run()
This is my kv file
Label:
text:app.w
All I want is a label whose text is the string stored in the `w` variable.
Thanks in advance.
This is error what i got
Traceback (most recent call last):
File "test.py", line 67, in <module>
ExampleApp().run()
File "/usr/local/lib/python2.7/dist-packages/kivy/app.py", line 797, in run
self.load_kv(filename=self.kv_file)
File "/usr/local/lib/python2.7/dist-packages/kivy/app.py", line 594, in load_kv
root = Builder.load_file(rfilename)
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 1749, in load_file
return self.load_string(data, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 1828, in load_string
self._apply_rule(widget, parser.root, parser.root)
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 2018, in _apply_rule
e), cause=tb)
kivy.lang.BuilderException: Parser: File "./exampleapp.kv", line 3:
...
1:
2:Label:
>> 3: text:app.w
...
BuilderException: Parser: File "./exampleapp.kv", line 3:
...
1:
2:Label:
>> 3: text:app.w
...
AttributeError: 'ExampleApp' object has no attribute 'w'
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 1649, in create_handler
return eval(value, idmap)
File "./exampleapp.kv", line 3, in <module>
text:app.w
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 858, in __getattribute__
return getattr(object.__getattribute__(self, '_obj'), name)
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 2011, in _apply_rule
value, rule, rctx['ids'])
File "/usr/local/lib/python2.7/dist-packages/kivy/lang.py", line 1654, in create_handler
cause=tb)
Answer: I think your question can be solved in several different ways.
What way to choose depends on where you want your function to exist.
Here is an answer that runs on my computer, with py3.
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout
mykv = Builder.load_string("""
<MyLabels>:
Label:
text: root.my_any()
Label:
text: '2'
""")
class MyLabels(BoxLayout):
def my_any(self):
print('in my_any')
w = 'this is a string'
return w
class ExampleApp(App):
def build(self):
return MyLabels()
if __name__ == '__main__':
ExampleApp().run()
I added an extra "root window" holding the widgets; that should make it a bit
easier to understand in a way that also scales (the kv-file root rule now has
< brackets > and the Python side gets an additional class to take care of the
root window).
It is good to remember that an App is an App, i.e. it is not part of the widget
set; widgets may be easier to attach functions and properties to. Another way
to do it is with StringProperties (typically linked to a widget, like a
label/button); the documentation has some examples of that, and a sketch
follows below.
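Since the original kv used `app.w`, here is that StringProperty variant as a
minimal sketch (assuming the same app and variable names as the question);
declaring `w` as a property on the App class is what makes `app.w` resolve
in kv:
    from kivy.app import App
    from kivy.lang import Builder
    from kivy.properties import StringProperty
    KV = """
    Label:
        text: app.w
    """
    class ExampleApp(App):
        w = StringProperty('THIS IS STRING')
        def build(self):
            # load the kv here, once the app exists, so `app` resolves
            return Builder.load_string(KV)
    if __name__ == '__main__':
        ExampleApp().run()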
|
Regex check if backslash before every symbols using python
Question: I met some problems when trying to check whether an input string is correct
or not.
I'd like to check whether there is one backslash before every symbol, but I
don't know how to implement this in Python.
For example:
* `number: 123456789`. (return `False`)
* `phone\:111111` (return `True`)
I try to use `(?!)` and `(?=)` in Python, but it doesn't work.
Update:
I'd like to match the following string:
`\~`, `\!`, `\@`, `\$`, `\%`, `\^`, `\&`, `\*`, `\(`, `\)`, `\{`, `\}`, `\[`,
`\]`, `\:`, `\;`, `\"`, `\'`, `\>`, `\<`, `\?`
Thank you very much.
Answer: Check that the entire string is composed of single characters or pairs of
backslash+symbol:
import re
def has_backslash_before_every_symbol(s):
    # triple-quoted so the " and ' inside the character classes don't close the string literal
    return re.match(r"""^(\\[~!@$%^&*(){}\[\]:;"'><?]|[^~!@$%^&*(){}\[\]:;"'><?])*$""", s) is not None
Python regex reference: <https://docs.python.org/3/library/re.html>
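Checking it against the two examples from the question (note the doubled
backslash needed in the Python string literal):
    print(has_backslash_before_every_symbol('number: 123456789'))  # False
    print(has_backslash_before_every_symbol('phone\\:111111'))     # True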
|
wxPython: Ensure only one instance of a panel is open
Question: I'm writing a multi-window, multi-frame application. For every new
window/frame that is opened, there should be one and only one instance of that
window. I want the user to be able to quickly switch between these windows, so
`ShowModal()` does not work. I've tried using `SingleInstanceChecker`, but I
can't get it to work as it's more for Apps than for frames. How should I
accomplish this?
Answer: I did a bit of Google-Fu and found this old thread:
* <https://groups.google.com/forum/#!topic/wxpython-users/VTPpXYZYHmM>
Using that as my template, I put together this little script and it appears to
work on my Linux box:
import wx
########################################################################
class MyPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent)
########################################################################
class SingleInstanceFrame(wx.Frame):
""""""
instance = None
init = 0
#----------------------------------------------------------------------
def __new__(self, *args, **kwargs):
""""""
if self.instance is None:
self.instance = wx.Frame.__new__(self)
elif isinstance(self.instance, wx._core._wxPyDeadObject):
self.instance = wx.Frame.__new__(self)
return self.instance
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
print id(self)
if self.init:
return
self.init = 1
wx.Frame.__init__(self, None, title="Single Instance Frame")
panel = MyPanel(self)
self.Show()
########################################################################
class MainFrame(wx.Frame):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
wx.Frame.__init__(self, None, title="Main Frame")
panel = MyPanel(self)
btn = wx.Button(panel, label="Open Frame")
btn.Bind(wx.EVT_BUTTON, self.open_frame)
self.Show()
#----------------------------------------------------------------------
def open_frame(self, event):
frame = SingleInstanceFrame()
if __name__ == '__main__':
app = wx.App(False)
frame = MainFrame()
app.MainLoop()
|
Pandas Module: Works in PyCharm but not Terminal
Question: I've written a python script (called python_script.py) in PyCharm that relies
on the pandas module. The thing is, when I run the script in PyCharm, it works
perfectly. But when I call it in terminal, I get the following error:
Traceback (most recent call last):
File "./sub_directory/python_script.py", line 9, in <module>
import pandas
ImportError: No module named pandas
Answer: You could install pandas into your anaconda installation by doing `conda
install pandas` at the terminal. That's probably the easiest solution, but you
could also use the other Python installation like this:
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python python_script.py
You could also create an alias within your terminal so that `python` would
point to the system python installation. To do that you can put the line
alias python=/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
in your `~/.bash_profile`
(For reference: <http://www.moncefbelyamani.com/create-aliases-in-bash-
profile-to-assign-shortcuts-for-common-terminal-commands/>)
|
Conditional concatenation of python pandas dataframe (sql join on self)
Question: **[Aim]**
We have an existing dataframe and wish to extract a series of records and
concat (sql join on self) given a condition in one command OR in another
DataFrame.
**[Situation]**
Python version: 3.3.3
Pandas version: 0.15.1
We have a sizeable DataFrame with 10,000+ rows. This is just an example to
understand the logic.
DataFrame1 -> df1:
import pandas as pd
df1 = pd.DataFrame({'A': [1,2,3,1],
'B': [1,4,1,2],
'C': ['test1','test2','test3','test4']
})
Resulting in:
A B C
1 1 test1
2 4 test2
3 1 test3
1 2 test4
**[Expected output]**
We are looking to output:
* All columns A, B, C where: B = 1 -> output = df1[df1['B'] == 1]
* Add to output all of those where
output['A'] == df1['A']
AND
df1['B'] == 2
Thus:
A B C
1 1 test1
3 1 test3
1 2 test4
It would be awesome to show the most pythonic/ pandanic (sounds weird) way of
doing this :)
Answer: Not sure whether there is a better way, but the following works. The idea is
to use the `.isin` operator for your 2nd condition. The final boolean selector
is an `or` combination of the 1st and 2nd conditions.
import pandas as pd
import numpy as np
# your data
# =============================
df1 = pd.DataFrame({'A': [1,2,3,1],
'B': [1,4,1,2],
'C': ['test1','test2','test3','test4']
})
print(df1)
A B C
0 1 1 test1
1 2 4 test2
2 3 1 test3
3 1 2 test4
# processing
# =====================================
mask = df1.B == 1
df1[mask | ((df1.A.isin(df1[mask].A)) & (df1.B==2))]
A B C
0 1 1 test1
2 3 1 test3
3 1 2 test4
|
Can python-C++ extension get a C++ object and call its member function?
Question: I am writing a Python/C++ application that will call methods in a C++
extension from Python.
Say my C++ code has a class:
class A
{
private:
int _i;
public:
A(int i){_i=i;}
int get_i(){return _i;}
};
A a = A(0);
Is there any way that Python can get the `a` object from C++ and call its
member function, i.e.:
import cpp_extension
a=cpp_extension.A()
print a.get_i()
Any reference to general reading is also appreciated.
Answer: Yes. You can create a Python C++ extension where your C++ objects will be
visible from within Python as if they were built-in types.
There are two main ways to go about it.
1. Create the extension yourself following the documentation provided in the
[CPython API
Documentation](https://docs.python.org/2/extending/extending.html).
2. Create the extension using a tool such as
[boost::python](http://www.boost.org/doc/libs/1_58_0/libs/python/doc/) or
[SWIG](http://www.swig.org/).
In my experience boost::python is the best way to go about it (it saves you an
enormous amount of time, and the price you pay is that now you depend on
boost).
For your example, the boost::python bindings could look as follows:
// foo.cpp
#include <boost/python.hpp>
class A {
public:
A(int i)
: m_i{i} { }
int get_i() const {
return m_i;
}
private:
// don't use names such as `_i`; those are reserved for the
// implementation
int m_i;
};
BOOST_PYTHON_MODULE(foo) {
using namespace boost::python;
class_<A>("A", init<int>())
.def("get_i", &A::get_i, "This is the docstring for A::get_i")
;
}
Compile:
g++ -o foo.so foo.cpp -std=c++11 -fPIC -shared \
-Wall -Wextra `python2.7-config --includes --libs` \
-lboost_python
and run in Python:
>>> import foo
>>> a = foo.A(2)
>>> a.get_i()
2
>>> print a.get_i.__doc__
get_i( (A)arg1) -> int :
This is the docstring for A::get_i
C++ signature :
int get_i(A {lvalue})
|
wxPython: Dragging a file into window to get file path
Question: I want to drag a file into a window and get the file path. I've tried doing
this:
class CSVDropper(wx.FileDropTarget):
def __init__(self, data):
wx.FileDropTarget.__init__(self)
self.data = data
def OnDropFiles(self, x, y, filenames):
self.data = filenames
print self.data
then in the main window:
# Drag & Drop
self.csv_path = None
self.drop_table = CSVDropper(self.csv_path)
self.SetDropTarget(self.drop_table)
But this does nothing. I've tried running
[this](http://wiki.wxpython.org/DragAndDrop) tutorial code, but it doesn't do
anything either. How do I accomplish this?
Answer: When you print `self.data`, you should see a list of paths printed out.
Anyway, I wrote up a
[tutorial](http://www.blog.pythonlibrary.org/2012/06/20/wxpython-introduction-
to-drag-and-drop/) on drag-n-drop a while ago which shows how to do this.
Here's a slightly modified version of my code that both prints out the file
paths to stdout and to a text control too:
import wx
########################################################################
class MyFileDropTarget(wx.FileDropTarget):
""""""
#----------------------------------------------------------------------
def __init__(self, window):
"""Constructor"""
wx.FileDropTarget.__init__(self)
self.window = window
#----------------------------------------------------------------------
def OnDropFiles(self, x, y, filenames):
"""
When files are dropped, write where they were dropped and then
the file paths themselves
"""
self.window.SetInsertionPointEnd()
self.window.updateText("\n%d file(s) dropped at %d,%d:\n" %
(len(filenames), x, y))
print filenames
for filepath in filenames:
self.window.updateText(filepath + '\n')
########################################################################
class DnDPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent=parent)
file_drop_target = MyFileDropTarget(self)
lbl = wx.StaticText(self, label="Drag some files here:")
self.fileTextCtrl = wx.TextCtrl(self,
style=wx.TE_MULTILINE|wx.HSCROLL|wx.TE_READONLY)
self.fileTextCtrl.SetDropTarget(file_drop_target)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(lbl, 0, wx.ALL, 5)
sizer.Add(self.fileTextCtrl, 1, wx.EXPAND|wx.ALL, 5)
self.SetSizer(sizer)
#----------------------------------------------------------------------
def SetInsertionPointEnd(self):
"""
Put insertion point at end of text control to prevent overwriting
"""
self.fileTextCtrl.SetInsertionPointEnd()
#----------------------------------------------------------------------
def updateText(self, text):
"""
Write text to the text control
"""
self.fileTextCtrl.WriteText(text)
########################################################################
class DnDFrame(wx.Frame):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
wx.Frame.__init__(self, parent=None, title="DnD Tutorial")
panel = DnDPanel(self)
self.Show()
#----------------------------------------------------------------------
if __name__ == "__main__":
app = wx.App(False)
frame = DnDFrame()
app.MainLoop()
|
Why does Python refuse to execute this code in a new subprocess?
Question: I am trying to make a very simple application that allows for people to define
their own little python scripts within the application. I want to execute the
code in a new process to make it easy to kill later. Unfortunately, Python
keeps giving me the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/home/skylion/Documents/python_exec test.py", line 19, in <module>
code_process = Process(target=exec_, args=(user_input_code))
File "/usr/lib/python2.7/multiprocessing/process.py", line 104, in __init__
self._args = tuple(args)
TypeError: 'code' object is not iterable
>>>
My code is posted below
user_input_string = '''
import os
world_name='world'
robot_name='default_body + os.path.sep'
joint_names=['hingejoint0', 'hingejoint1', 'hingejoint2', 'hingejoint3', 'hingejoint4', 'hingejoint5', 'hingejoint6', 'hingejoint7', 'hingejoint8']
print(joint_names)
'''
def exec_(arg):
exec(arg)
user_input_code = compile(user_input_string, 'user_defined', 'exec')
from multiprocessing import Process
code_process = Process(target=exec_, args=(user_input_code))
code_process.start()
What am I missing? Is there something wrong with my user_input_string? With my
compile options? Any help would be appreciated.
Answer: I believe `args` must be a tuple. To create a single-element tuple, add a
comma like so: `args=(user_input_code,)`
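Applied to the original script:
    code_process = Process(target=exec_, args=(user_input_code,))
    code_process.start()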
|
Python storing user input
Question: I don't have working code at all for this, but if I wanted to take user
input and store it, and then take a different input and store that in the same
list (like a site storing the login information of its members and associating
it when they want to log back in), how would I do this in Python? I have this
short little code:
from Tkinter import *
import tkSimpleDialog
import tkMessageBox
root = Tk()
w = Label(root, text="")
w.pack()
User_info = tkSimpleDialog.askstring("User Information", "What is your name?")
def List(List_name):
List_name = []
List_name.append(User_info)
return List_name
print List
yet this produces this result: function List at 0x7fdf1fa0f668
instead of (for instance) Johnny
Answer: You are not calling the function so you are seeing a reference to the
function, you also need to pass parameter:
print List(param)
What you really want is to remove the parameter and just call the function:
User_info = tkSimpleDialog.askstring("User Information", "What is your name?")
def List():
List_name = []
List_name.append(User_info)
return List_name
print List()
Or simply:
User_info = tkSimpleDialog.askstring("User Information", "What is your name?")
def List():
    return [User_info]
print List()
A simple example of taking and saving the input to a file, obviously real
usernames and passwords would need to be stored a lot more securely:
master = Tk()
l = Label(master, text="Username")
l.pack()
# create Entry for user
e = Entry(master)
e.pack()
l2 = Label(master, text="Password")
l2.pack()
# create Entry for pass and show * when user types their password
e2 = Entry(master,show="*")
e2.pack()
e.focus_set()
# callback function to save the username and password
def callback():
with open("data.txt","a")as f:
f.write("{},{}\n".format(e.get(),e2.get()))
# command set to callback when button is pressed
b = Button(master, text="Save", width=10, command=callback)
b.pack()
mainloop()
Obviously you should verify that the user actually entered something for both
fields, and in the real world you would have to check whether the username was
already taken, etc. A small sketch of reading the file back follows below.
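A minimal sketch of that read-back, assuming the same comma-separated
`data.txt` format written by `callback` above (no hashing, so still not how
real passwords should be stored):
    def check_login(username, password):
        # scan data.txt for a matching "username,password" line
        try:
            with open("data.txt") as f:
                for line in f:
                    u, p = line.rstrip("\n").split(",", 1)
                    if u == username and p == password:
                        return True
        except IOError:  # no users saved yet
            pass
        return False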
|
questions on multiple plots and multiple legend on python
Question: I have the following code, could anyone help me fix it? I would like to have
multiple legends on a Python numpy/matplotlib graph.
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(1)
x0 = np.array([[1,2,2,3]])
x1 = np.array([[2,2,4,3]])
y0 = np.array([[1,6,2,7]])
y0 = np.array([[4,2,2,5]])
p1= plt.scatter(x0,x1,color='blue',s=3)
p2= plt.scatter(y0,y0,color='red',s=3)
leg = plt.legend((p1,p2),('class0','class1'),fontsize=8)
plt.show()
I want the following plot to be on the same figure and have its own legend:
plt.hold(True)
z0 = np.array([[11,16,13,17]])
z1 = np.array([[13,16,12,17]])
p3 = plt.scatter(z0,z1,color='k')
plt.show()
How should I add a legend for `p3`?
Answer: Here I have all three scatterplots on the same figure with one legend in top-
right and the other on bottom-right.
We can get both legends to show up by adding the first legend to the axes as
described in the [matplotlib legend
guide](http://matplotlib.org/users/legend_guide.html#multiple-legends-on-the-
same-axes):
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(1)
x0 = np.array([[1,2,2,3]])
x1 = np.array([[2,2,4,3]])
y0 = np.array([[1,6,2,7]])
y0 = np.array([[4,2,2,5]])
p1= plt.scatter(x0,x1,color='blue',s=50, label='class0')
p2= plt.scatter(y0,y0,color='red',s=50, label='class1')
z0 = np.array([[11,16,13,17]])
z1 = np.array([[13,16,12,17]])
p3 = plt.scatter(z0,z1,color='k', s=75, label='class3')
leg = plt.legend(handles=[p1, p2], fontsize=8, loc=1)
ax = plt.gca().add_artist(leg)
plt.legend(handles=[p3], fontsize=8, loc=4)
plt.show()

|
How to uri encode a large text file with python 3.4?
Question: I have a large text file that I want to be URI-argument encoded. I researched
a bit and came up with this:
import urllib
f=open('text.txt','r').read()
n= open('encodeTest.txt','w')
new=urllib.quote_plus(f)
n.write(new)
I get this error whenever I run it:
AttributeError: 'module' object has no attribute 'quote_plus'
Answer: You want
[urllib.parse.quote_plus](https://docs.python.org/3.1/library/urllib.parse.html#urllib.parse.quote_plus).
But even this wouldn't work if you're encoding a large file since the [max URL
length](https://stackoverflow.com/questions/417142/what-is-the-maximum-length-
of-a-url-in-different-browsers) is ~2000 chars
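For the encoding itself, a Python 3 version of the original script with the
module path fixed (same file names as the question):
    from urllib.parse import quote_plus
    with open('text.txt', 'r') as f:
        encoded = quote_plus(f.read())
    with open('encodeTest.txt', 'w') as n:
        n.write(encoded)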
|
Python not recognizing cookies in request header
Question: At work we've been developing a python application (django specifically) that
intermittently seem to behave as if it is not recognizing some of the cookies
being sent in the request.
The issue does not always occur, but once it does it seems to persist
indefinitely. Sometimes the issue can be resolved by clearing the cookies and
reloading the page.
The cookies are all valid (though there are quite a few 3rd-party ones in the
mix) and within the maximum size supported by both the servers and the
browsers.
Answer: # Solution
If your application needs to interpret the "Cookie" header in Python using
"SimpleCookie" (widely used by Python libraries and frameworks), and your
website's domain has cookies set that are outside of your control, avoid
versions of Python where Issue #22931 (<https://bugs.python.org/issue22931>)
was in play.
The bug existed in several versions of 3.3.x, 3.4.x and 3.5.x as well as
2.7.9.
# Details
The issue's diagnosis ended up being fairly simple, but I thought I'd share it
here using more general language since searching for the issue didn't yield
any useful results until it had been narrowed down to the existence of
specific **valid** characters in a few cookies.
In Python 2.7.9 (and several versions of 3.x) there is a bug where cookies
with "[" or "]" in their values causes the parsing of the "Cookie" header to
fail silently. Since the square brackets are valid characters for a cookie
value (<http://www.rfc-editor.org/rfc/rfc6265.txt>), and commonly used in 3rd
party libraries the issue can be detrimental to cookie driven functionality in
a web application.
It is particularly elusive because termination of the cookie parsing only
appears to occur once it attempts to parse the first cookie with a square
bracket in the value. This means that if the cookies happen to be sent in a
different order the issue may not occur.
**For example**
If the request header is formatted as `Cookie: important_cookie=foobar;
bad_character=[` than "important_cookie"'s value would be available in the
application -- however it would not have been if the request header had been
`Cookie: bad_character=[; important_cookie=foobar`.
* * *
Once you know that the square brackets are causing the issue, it is fairly
easy to find the underlying bug that was reported in Python, but honing in on
the underlying issue can be a chore.
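A quick, hedged way to check whether your interpreter is affected (Python 2
shown, since 2.7.9 was one of the buggy releases; use `http.cookies` on
Python 3):
    from Cookie import SimpleCookie
    c = SimpleCookie()
    c.load('bad_character=[; important_cookie=foobar')
    print c.keys()  # [] on an affected version; both names otherwise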
|
optimizing matrix operations in python, numpy
Question: This is an optimization problem. Given matrices E, H, Q, F and the logic in
method my_func_basic (see code block), populate matrix V. Any potential ways,
such as through vectorization, to speed up the computation? Thanks.
import timeit
import numpy as np
n = 20
m = 90
# E: m x n
E = np.random.randn(m,n)
# H, Q: m x m
H = np.random.randn(m,m)
Q = np.random.randn(m,m)
# F: n x n
F = np.random.randn(n,n)
# V: m x m
V = np.zeros(shape=(m,m))
def my_func_basic():
for x in range(n):
for y in range(n):
if x == y:
V[x][y] = np.nan
continue
h = H[x][y]
e = np.array([E[x,:]+h*E[y,:]])
v1 = np.dot(np.dot(e,F),np.transpose(e))[0][0]
v2 = Q[x][x]+h**2*Q[y][y]
V[x][y] = v1/np.sqrt(v2)
print(timeit.timeit(my_func_basic,number=1000),'(sec), too slow...')
Answer: This would be one way to solve it with `vectorized` methods -
import numpy as np
def vectorized_approach(V,H,E,F,Q,n):
# Create a copy of V to store output values into it
V_vectorized = V.copy()
# Calculate v1 in a vectorized fashion
E1 = (E[None,:n,:]*H[:n,:n,None] + E[:n,None,:]).reshape(-1,n)
E2 = np.dot(E1,F)
v1_vectorized = np.einsum('ij,ji->i',E2,E1.T).reshape(n,n)
np.fill_diagonal(v1_vectorized, np.nan)
# Calculate v2 in a vectorized fashion
Q_diag = np.diag(Q[:n,:n])
v2_vectorized = Q_diag[:,None] + H[:n,:n]**2*Q_diag[None,:]
# Finally, get vectorized version of output V
V_vectorized[:n,:n] = v1_vectorized/np.sqrt(v2_vectorized)
return V_vectorized
Tests:
1) Setup inputs -
In [314]: n = 20
...: m = 90
...: # E: m x n
...: E = np.random.randn(m,n)
...: # H, Q: m x m
...: H = np.random.randn(m,m)
...: Q = np.random.randn(m,m)
...: # F: n x n
...: F = np.random.randn(n,n)
...: # V: m x m
...: V = np.zeros(shape=(m,m))
...:
2) Verify results -
In [327]: out_basic_approach = my_func_basic(V,H,E,F,Q,n)
...: out_vectorized_approach = vectorized_approach(V,H,E,F,Q,n)
...:
...: mask1 = ~np.isnan(out_basic_approach)
...: mask2 = ~np.isnan(out_vectorized_approach)
...:
In [328]: np.allclose(mask1,mask2)
Out[328]: True
In [329]: np.allclose(out_basic_approach[mask1],out_vectorized_approach[mask1])
Out[329]: True
3) Runtime tests -
In [330]: %timeit my_func_basic(V,H,E,F,Q,n)
100 loops, best of 3: 12.2 ms per loop
In [331]: %timeit vectorized_approach(V,H,E,F,Q,n)
1000 loops, best of 3: 222 µs per loop
|
Python - How to use a for loop to write data to a cell, then move to the next row
Question: I have what feels like a problem with a relatively simple solution, but to
this point it escapes my research. I'm attempting to write items from a tuple
to four consecutive rows using a for loop, but I can't seem to figure it out.
I suspect that it can be done with the iter_rows module in the openpyxl
package, but I haven't been able to properly apply it within the loop. The
following piece of code results in the generation of an .xlsx file with the
last item from the tuple assigned to cell 'A2':
from openpyxl import Workbook
nfc_east = ('DAL', 'WAS', 'PHI', 'NYG')
wb = Workbook()
ws = wb.active
row_cell = 2
for i in nfc_east:
column_cell = 'A'
ws.cell(row = row_cell, column = column_cell).value = str(i)
row_cell = row_cell + 1
wb.save("row_creation_loop.xlsx")
All suggestions and (constructive) criticism welcome. Thank you!
Answer: Your code doesn't run for me (`Invalid column index A`). Which version of pyxl
are you using? AFAIK pyxl uses integer indexes. The following code produces
the output you're after (I think).
from openpyxl import Workbook
nfc_east = ('DAL', 'WAS', 'PHI', 'NYG')
wb = Workbook()
ws = wb.active
start_row = 2
start_column = 1
for team in nfc_east:
ws.cell(row=start_row, column=start_column).value = team
start_row += 1
wb.save("row_creation_loop.xlsx")
# Prints...
#
# | A |
# 1 | |
# 2 | DAL |
# 3 | WAS |
# 4 | PHI |
# 5 | NYG |
|
I am reading Learn Python the Hard Way, I am doing the first argv exercise and getting an error
Question: In learning python the hard way exercise 13 we import `argv` for the first
time. Here is the code:
from sys import argv
script, first, second, third = argv
print 'The script is called:' script
print 'Your first variable is:' first
print 'Your second variable is:' second
print 'Your third variable is:' third
And here is the output i am getting:
donny@donny:~/Documents/pygame-scripts$ python ex13.py first second third
Traceback (most recent call last):
File "ex13.py", line 1, in <module>
d
NameError: name 'd' is not defined
I can't figure this out. I googled it and checked that i copied the code
exactly. Any help would be greatly appreciated.
Answer: The problem is that you're missing commas between the arguments to `print`.
Try this instead:
from sys import argv
script, first, second, third = argv
print 'The script is called:', script
print 'Your first variable is:', first
print 'Your second variable is:', second
print 'Your third variable is:', third
**EDIT:**
To make your code compatible with Python 3, use `print(arg1, arg2)` instead of
`print arg1, arg2`.
|
d3 pie chart not loading
Question: All I want is to load a basic pie/donut chart (actually a few bar plots in
addition to that, too), but it looks like there is some error in my `<script>`
block. If I comment it out, I am able to serve the bare-bones Python-rendered
page (but not the pie chart, though).
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>{{title}}</title>
<link rel="stylesheet" href="/static/css/style.css">
<style>
.legend{
font-size: : 12px;
}
rect{
stroke-width: 2;
}
#chart_Imp,#chart_Bid{
width: 50%;
}
.axis{
font: 10px sans-serif;
}
.axis path,
.axis line{
fill: none;
stroke: #000;
shape-rendering: crispEdges;
}
</style>
</head>
<body>
<div style="max-width: 800px; border:2px black solid">
<h1>{{haider_Imp}}</h1>
<div id="chart_Imp"></div>
<h1>{{haider_Bid}}</h1>
<div id="chart_Bid"></div>
</div>
<div id="Bar-plots">
<div id="Bar-plots 1st set">
<h1>{{haider_cpa}}</h1>
<div id="cpa"></div>
<h1>{{haider_cpc}}</h1>
<div id="cpc"></div>
<h1>{{haider_cpm}}</h1>
<div id="cpm"></div>
</div>
<div id="Bar-plots 2nd set">
<h1>{{haider_avgbid}}</h1>
<div id="avg_bid"></div>
<h1>{{haider_winrate}}</h1>
<div id="winrate"></div>
</div>
</div>
<script src="/static/script/d3.min.js"></script>
<script>
(function(d3){
'use strict';
var width = 360;
var height = 360;
var radius = Math.min(width,height)/2;
var donutWidth = 75;
var legendRectSize = 18;
var legendSpacing = 4;
var color = d3.scale.category20b();
var svg = d3.select('#chart_Imp')
.append('svg')
.attr('width',width)
.attr('height',height)
.append('g')
.attr('transform','translate('+(width/2)+','+(height/2)+')');
var arc = d3.svg.arc()
.innerRadius(radius-donutWidth)
.outerRadius(radius);
var pie = d3.layout.pie()
.value(function(d) { return d.impsplit; })
.sort(null);
d3.csv('./static/summary.csv',function(error,dataset){
dataset.forEach(function(d) {
d.impsplit = +d.impsplit;
});
var path = svg.selectAll('path')
.data(pie(dataset))
.enter()
.append('path')
.attr('d',arc)
.attr('fill',function(d,i)
{
return color(d.data.label);
});
var legend = svg.selectAll('.legend1')
.data(color.domain())
.enter()
.append('g')
.attr('class', 'legend')
.attr('transform', function(d, i) {
var height = legendRectSize + legendSpacing;
var offset = height * color.domain().length / 2;
var horz = -2 * legendRectSize;
var vert = i * height - offset;
return 'translate(' + horz + ',' + vert + ')';
});
legend.append('rect')
.attr('width', legendRectSize)
.attr('height', legendRectSize)
.style('fill', color)
.style('stroke', color);
legend.append('text')
.attr('x', legendRectSize + legendSpacing)
.attr('y', legendRectSize - legendSpacing)
.text(function(d) { return d; });
});
(window.d3);
var margin = {top: 20,right:20, bottom: 70, left: 40},
width = 600-margin.left-margin.right,
height = 300 - margin.top - margin.bottom;
</script>
</body>
</html>
Answer: It might be because you use your piechart dataset outside of the .csv callback
function. I am under the impression that, when using the d3.csv() or d3.tsv()
functions, you need to use the retrieved data in the callback function. You
however, use your data outside the callback function.
check out [this answer](http://stackoverflow.com/questions/15754035/importing-
data-from-csv-using-d3-js), it might help out.
|
Python class can't be updated after being compiled
Question: I just started with python a couple of days ago, coming from a C++ background.
When I write a class, call it from a script, and afterwards update the interface
of the class, I get some behaviour I find very unintuitive.
Once successfully compiled, the class seems not to be changeable anymore. Here
is an example:
testModule.py:
class testClass:
def __init__(self,_A):
self.First=_A
def Method(self, X, Y):
print X
testScript.py:
import testModule
tm=testModule.testClass(10)
tm.Method(3, 4)
Execution gives me
3
Now I change the argument list of `Method`:
def Method(self, X):
, I delete the testModule.pyc and in my script I call
tm.Method(3)
As result, I get
TypeError: Method() takes exactly 3 arguments (2 given)
What am I doing wrong? Why does the script not use the updated version of the
class? I use the Canopy editor but I saw this behaviour also with the
python.exe interpreter.
And apologies, if something similar was asked before. I did not find a
question related to this one.
Answer: `testModule` is already loaded in your interpreter. Deleting the `pyc` file
won't change anything. You will need to do `reload(testModule)`, or even
better restart the interpreter.
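For example, in the same interpreter session (note that existing instances
keep the old class object, so re-create them after reloading):
    import testModule
    tm = testModule.testClass(10)
    tm.Method(3, 4)
    # ... edit testModule.py, changing Method's signature ...
    reload(testModule)             # importlib.reload(testModule) on Python 3
    tm = testModule.testClass(10)  # re-create the instance from the new class
    tm.Method(3)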
|
Why is Octave's function call overhead so much larger than both Matlab's and Python's?
Question: I have two pieces of code in Python and Octave that are structurally
identical. However, the Python version, implemented with numpy and scipy, is
~5x faster. I did a profile of the code, and I found that the main culprit in
the Octave code is 6 functions repeatedly called thousands of times in a loop.
These functions only compute numerical expressions, e.g. cos, cosh, so I was
surprised by how much time they were consuming (For reference, the two codes
both run under 2 seconds.)
I researched this strange phenomenon online and read a
[paper](http://www.researchgate.net/publication/221663825_Function_call_overhead_benchmarks_with_MATLAB_Octave_Python_Cythonand_C)
that showed that the function overhead in Octave, i.e. the setup needed for
the function to start executing the actual function code in the function's
body and cleaning it up afterwards, is approximately 30 times larger than that
of Matlab and approximately 100 times larger than that of Python.
This occurrence greatly baffles me--**How is it possible that calling a
function from Octave can be _this_ much slower than calling a function in the
two other similar languages? Furthermore, is there any way to remedy this
reduction in speed besides copying and pasting the function itself into the
body of the loop?**
EDIT: I've posted the main for loop from my code. It's an iterative
implementation of Newton's method for multiple equations, so I'm not sure how
it could be vectorized.
for k = 1:10
for l = 1:50
% matrix of derivatives of equations with respect to variables
a = [dEq1_dq1(p1, p2, q1, q2, i, j), dEq1_dq2(p1, p2, q1, q2, i, j); dEq2_dq1(p1, p2, q1, q2, i, j), dEq2_dq2(p1, p2, q1, q2, i, j)];
% vector of equations
b = [Eq1(p1, p2, q1, q2, i, j); Eq2(p1, p2, q1, q2, i, j)];
% solution to ax=b
x = a \ b;
% iteratively update q
q1 -= beta*x(1);
q2 -= beta*x(2);
endfor
for l = 1:50
a = [dEp1_dp1(p1, p2, q1, q2, i, j), dEp1_dp2(p1, p2, q1, q2, i, j); dEp2_dp1(p1, p2, q1, q2, i, j), dEp2_dp2(p1, p2, q1, q2, i, j)];
b = [Ep1(p1, p2, q1, q2, i, j); Ep2(p1, p2, q1, q2, i, j)];
x = a \ b;
p1 -= beta*x(1);
p2 -= beta*x(2);
endfor
endfor
...
% derivatives of implicit equations with respect to variables
function val = dEp1_dp1(p1, p2, q1, q2, i, j)
% symmetric
if mod(i, 2) == 1
val = p1/(2*cos(p1/2)**2)+tan(p1/2);
% anti-symmetric
else
val = tan(p1/2)/(p1**2)-1/(2*p1*cos(p1/2)**2);
endif
end
...
function val = Ep1(p1, p2, q1, q2, i, j)
if mod(i, 2) == 1
val = p2*tanh(p2/2)+p1*tan(p1/2);
else
val = (1/p2)*tanh(p2/2)-(1/p1)*tan(p1/2);
endif
end
...
Answer: Comparing performance between languages is tricky business. Octave will tell
you right away that you should vectorize your code. That's what the language
was designed for. Python compiles its code into byte-code, which allows
for optimizations. Matlab has a JIT compiler which does the same. But not Octave.
Octave will do exactly what you wrote, interpreting your program one line at a
time. This means that your performance will suffer if you don't write good code.
And while there might be a large overhead for making a function call (I didn't
check your numbers), that's not so important if you only make a few function
calls. You will often be dealing with large arrays, so it's the actual
"sciency" computations that should be causing your performance issues (unless,
of course, you don't write proper Octave programs and use unnecessary loops).
The functions you mentioned, `cos` and `cosh`, will accept a vector, so there
is no need to use a for loop for them.
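To make the vectorization point concrete on the Python side (a minimal NumPy
sketch, not taken from the question's code): applying `cos` to a whole array
in one call is dramatically faster than calling it once per element in a loop.
import numpy as np

x = np.linspace(0, np.pi, 1000000)

# vectorized: a single call evaluates cos for every element at once
y_vec = np.cos(x)

# looped: a million separate interpreted calls -- far slower
y_loop = [np.cos(v) for v in x]
The same contrast holds in Octave: `cos(x)` on a vector stays in compiled
code, while calling a user-defined function per element pays the call
overhead on every iteration.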
|
Python recursive file renaming
Question: I am pretty new to python and I am attempting to create a python script that
is able to recursively rename every file in a directory including
subdirectories. But every time I run the script I'm getting the error
OSError: [Errno 2] No such file or directory
The directory contains some text files and a folder with some other files.
Does anyone know why this keeps happening?
Code:
import os
path = "example path here"
new_filename= ""
i = 0
filenames = os.listdir(path) # is this line needed?
for dir,subdir,listfilename in os.walk(path):
for filename in listfilename:
i += 1
new_filename = 'filename' + str(i)
src = os.path.join(path, filename)
dst = os.path.join(path, new_filename)
os.rename(src, dst)
Answer: I'm fairly certain the issue stems from you joining new paths to the "path"
variable rather than the current directory returned from `walk`.
import os
path = "example path here"
new_filename= "" # This isn't C, we don't need to pre-declare a variable.
i = 0
filenames = os.listdir(path) # is this line needed? # not that I can see, no
for dir,subdir,listfilename in os.walk(path):
for filename in listfilename:
i += 1
new_filename = 'filename' + str(i)
src = os.path.join(dir, filename) # NOTE CHANGE HERE
dst = os.path.join(dir, new_filename) # AND HERE
os.rename(src, dst)
You should go read the docs for
[os.walk](https://docs.python.org/2/library/os.html#os.walk); it may not do
quite what you expect.
Also, don't call a variable `dir`, it'll mask the builtin by the same name.
Also also, you don't need to pre-declare `new_filename` outside of your loops.
|
Three python modules, calling one another
Question: I am working on a project where I have three python modules (`a.py`, `b.py`
and `c.py`).
Module `a` is calling module `b`, module `b` is calling module `c`, and module
`c` is calling module `a`. But the behaviour is very bizzare when it runs.
Here are my three modules:
**_a.py_**
print('module a')
def a() :
print('inside a')
return True
import b
b.b()
**_b.py_**
print('module b')
def b() :
print('inside b')
return True
import c
c.c()
**_c.py_**
print('module c')
def c() :
print('inside c')
return True
import a
a.a()
When I run `a.py`, the output observed is :
module a
module b
module c
module a
inside b
inside a
inside c
inside b
Whereas the expected behavior is:
module a
module b
module c
module a
inside b
Why does this happen? Is there an alternative way for such an implementation?
Answer: I think the key misunderstanding is that you don't expect all the modules to
run after their imports, but they do. They get interrupted mid-script to do
another import, but they return and finish running the remaining commands.
So what ends up happening is: (I'm removing the function declarations, just
for clarity)
print('module a')
import b
>>> importing b
print('module b')
import c
>>> importing c
print('module c')
import a
>>> importing a
print('module a')
import b
>>> Doesn't reimport b
b.b()
a.a()
c.c()
b.b()
So to just show the order of commands without the imports and nesting:
print('module a')
print('module b')
print('module c')
print('module a')
b.b()
a.a()
c.c()
b.b()
And this does match your actual output.
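As for the alternative you asked about: if the intent is that each module's
top-level call should only run when that module is the entry script (one
reading of your expected output), an `if __name__ == '__main__'` guard gives
exactly the output you expected. A sketch for `a.py`; `b.py` and `c.py` would
get the same guard around their `c.c()` and `a.a()` calls:
# a.py
print('module a')

def a():
    print('inside a')
    return True

import b

if __name__ == '__main__':
    b.b()  # runs only when a.py is executed directly, not when imported
Running `a.py` then prints exactly `module a`, `module b`, `module c`,
`module a`, `inside b`, because the imported (non-main) copies of the modules
skip their guarded calls.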
|
Limiting Python Tkinter canvas scrollbar to only respond to mousewheel when full
Question: I've written the tkinter code below such that the canvas can be scrolled by
either the scrollbar or the mousewheel. The scrollbar works by default so that
it is only enabled if the content is larger than the canvas, I would like to
apply the same limitation to the mousewheel scrolling.
My code below produces a window where I can scroll through the buttons on the
canvas. This is fine if there are 3 or more buttons but with 1 or 2, it can of
course be scrolled anyway.
Any ideas? Thanks in advance.
from tkinter import *
root = Tk()
root.resizable(0, 0)
class Scroller():
def __init__(self):
self.scroll_canvas = Canvas(root, borderwidth=0, highlightthickness=0, height=200)
self.scroll_frame = Frame(self.scroll_canvas, background="#ffffff")
self.scroll_canvas.create_window(4, 4, window=self.scroll_frame)
self.vsb = Scrollbar(root, orient="vertical", command=self.scroll_canvas.yview)
self.scroll_canvas.configure(yscrollcommand=self.vsb.set)
self.vsb.pack(side="right", fill="y")
self.scroll_canvas.pack(side="left", fill='both', expand=True)
self.scroll_frame.bind("<Configure>", self.scroll_configure)
self.scroll_canvas.bind_all("<MouseWheel>", self.on_mousewheel)
def on_mousewheel(self, event):
if event.num == 5 or event.delta == -120:
direction = 1
if event.num == 4 or event.delta == 120:
direction = -1
self.scroll_canvas.yview_scroll(direction, "units")
def scroll_configure(self, event):
self.scroll_canvas.configure(scrollregion=self.scroll_canvas.bbox("all"))
buttons_frame = Frame(Scroller().scroll_frame, padx=5, pady=5)
buttons_frame.grid(row=1, column=1, sticky=N)
add_button = Button(buttons_frame, text='This\nis\na\nbutton.\nAdd Another Button',
command=lambda: add_another_button())
add_button.pack()
def add_another_button():
Button(buttons_frame, text='This\nis\na\nbutton.\nAdd Another Button', command=lambda: add_another_button()).pack()
root.mainloop()
Answer: As far as I can see you're not able to check how many buttons have already
been added, is that correct? I think in your current configuration it's not
possible to decide whether to scroll or not.
My approach would be to register how many buttons (or how much vertical space)
you've already used (update that when a new button is added), and then compare
that in `on_mousewheel` to the window size and decide if `yview_scroll()` is
triggered or not.
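A variant of that idea that avoids the bookkeeping entirely (a sketch against
your existing widgets, not tested with your full app): compare the height of
the canvas's content bounding box to the visible canvas height inside
`on_mousewheel`, and bail out when everything already fits:
def on_mousewheel(self, event):
    bbox = self.scroll_canvas.bbox("all")
    # ignore the wheel when the content fits inside the visible canvas
    if bbox is None or bbox[3] - bbox[1] <= self.scroll_canvas.winfo_height():
        return
    if event.num == 5 or event.delta == -120:
        direction = 1
    if event.num == 4 or event.delta == 120:
        direction = -1
    self.scroll_canvas.yview_scroll(direction, "units")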
|
Another "slice indices must be integers or None or have an __index__ method" thread
Question: Trying to debug a Python script, and keep being this error message:
Traceback (most recent call last):
File "<input>", line 3, in <module>
TypeError: slice indices must be integers or None or have an __index__ method
Since the file names have no integers... this doesn't seem to make any sense.
Any suggestions?
import os
inputFolder = "d:\\full"
outputFolder = "d:\\clean"
for path, dir, file in os.walk(inputFolder):
for filename in file:
if filename.endswith(".jpeg", ".jpg"):
inputPath = inputFolder + os.sep + filename
print inputPath
Answer: You are misusing
[`str.endswith`](https://docs.python.org/2/library/stdtypes.html#str.endswith).
The second and third* parameters are `start` and `end`, which are used to
index into the string, **not** additional strings to check for. By default
these are both `None`, hence checking the whole string:
>>> 'foo'[None:None]
'foo'
This explains the seemingly-confusing error message; Python is trying to check
`filename['.jpg':None].endswith('.jpeg')`, which clearly doesn't make any
sense. Instead, to check for multiple strings, pass _a single tuple_ as the
first parameter:
if filename.endswith((".jpeg", ".jpg")):
# ^ note extra parentheses
Demo:
>>> 'test.jpg'.endswith('.jpg', '.jpeg')
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
'test.jpg'.endswith('.jpg', '.jpeg')
TypeError: slice indices must be integers or None or have an __index__ method
>>> 'test.jpg'.endswith(('.jpg', '.jpeg'))
True
* _(or third and fourth, as`instance.method(arg)` can be written `Class.method(instance, arg)`)_
|
Python Tkinter Grid Checkbox
Question: I was wondering if there is an easy way to create a grid of checkboxes using
Tkinter. I am trying to make a grid of 10 rows and columns (so 100 checkboxes)
so that only two checkboxes can be selected per row.
Edit: I'm using python 2.7 with spyder
What I have so far:
from Tkinter import*
master = Tk()
master.title("Select Groups")
rows=10
columns=10
for x in range(rows):
for y in range(columns):
Label(master, text= "Group %s"%(y+1)).grid(row=0,column=y+1)
Label(master, text= "Test %s"%(x+1)).grid(row=x+1,column=0)
Checkbutton(master).grid(row=x+1, column=y+1)
mainloop()
I'm trying to use state='Disabled' to grey out a row once two checkboxes have
been selected.
Answer: Here's an example using your provided 10x10 grid. It should give you the basic
idea of how to implement this.
Just make sure you keep a reference to every `Checkbutton` (`boxes` in the
example) as well as every `IntVar` (`boxVars` in the example).
Here's why:
  * `Checkbuttons` are needed to call `config(state = DISABLED/NORMAL)`.
  * `IntVars` are needed to determine the value of each `Checkbutton`.
Aside from those crucial elements it's basically just some 2D array processing.
Here's my example code (**now based off of your provided code**).
from Tkinter import *
master = Tk()
master.title("Select Groups")
rows=10
columns=10
boxes = []
boxVars = []
# Create all IntVars, set to 0
for i in range(rows):
boxVars.append([])
for j in range(columns):
boxVars[i].append(IntVar())
boxVars[i][j].set(0)
def checkRow(i):
global boxVars, boxes
row = boxVars[i]
deselected = []
# Loop through row that was changed, check which items were not selected
# (so that we know which indeces to disable in the event that 2 have been selected)
for j in range(len(row)):
if row[j].get() == 0:
deselected.append(j)
# Check if enough buttons have been selected. If so, disable the deselected indeces,
# Otherwise set all of them to active (in case we have previously disabled them).
if len(deselected) == (len(row) - 2):
for j in deselected:
boxes[i][j].config(state = DISABLED)
else:
for item in boxes[i]:
item.config(state = NORMAL)
def getSelected():
selected = {}
for i in range(len(boxVars)):
temp = []
for j in range(len(boxVars[i])):
if boxVars[i][j].get() == 1:
temp.append(j + 1)
if len(temp) > 1:
selected[i + 1] = temp
print selected
for x in range(rows):
boxes.append([])
for y in range(columns):
Label(master, text= "Group %s"%(y+1)).grid(row=0,column=y+1)
Label(master, text= "Test %s"%(x+1)).grid(row=x+1,column=0)
boxes[x].append(Checkbutton(master, variable = boxVars[x][y], command = lambda x = x: checkRow(x)))
boxes[x][y].grid(row=x+1, column=y+1)
b = Button(master, text = "Get", command = getSelected, width = 10)
b.grid(row = 12, column = 11)
mainloop()
|
Having issues when install mitmproxy through pip
Question: I am having the following issue when installing mitmproxy through pip. I have
tried other fixes related to the egg error here on Stack Overflow: [Can't install
via pip because of egg_info
error](http://stackoverflow.com/questions/17886647/cant-install-via-pip-
because-of-egg-info-error) [pip install matplotlib fails: 'cannot build
package freetype; "python setup.py egg_info" failed with error code
1'](http://stackoverflow.com/questions/28914202/pip-install-matplotlib-fails-
cannot-build-package-freetype-python-setup-py-e)
104:bin user129856$ sudo pip install mitmproxy
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting mitmproxy
/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading mitmproxy-0.12.1.tar.gz (6.5MB)
100% |████████████████████████████████| 6.5MB 18kB/s
Collecting pyperclip>=1.5.8 (from mitmproxy)
Downloading pyperclip-1.5.11.zip
Collecting pyasn1>0.1.2 (from mitmproxy)
Downloading pyasn1-0.1.8.tar.gz (75kB)
100% |████████████████████████████████| 77kB 827kB/s
Collecting tornado>=4.0.2 (from mitmproxy)
Downloading tornado-4.2.tar.gz (433kB)
100% |████████████████████████████████| 434kB 260kB/s
Collecting lxml>=3.3.6 (from mitmproxy)
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |████████████████████████████████| 3.5MB 32kB/s
Collecting netlib<0.13,>=0.12 (from mitmproxy)
Downloading netlib-0.12.1.tar.gz (64kB)
100% |████████████████████████████████| 65kB 729kB/s
Complete output from command python setup.py egg_info:
warning: no files found matching 'OpenSSL/RATIONALE'
warning: no previously-included files found matching 'leakcheck'
warning: no previously-included files matching '*.py' found under directory 'leakcheck'
warning: no previously-included files matching '*.pem' found under directory 'leakcheck'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
no previously-included directories found matching 'doc/_build'
zip_safe flag not set; analyzing archive contents...
Installed /private/tmp/pip-build-wOHXdq/netlib/.eggs/pyOpenSSL-0.15.1-py2.7.egg
Searching for cffi
Reading https://pypi.python.org/simple/cffi/
Best match: cffi 1.1.2
Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.1.2.tar.gz#md5=ca6e6c45b45caa87aee9adc7c796eaea
Processing cffi-1.1.2.tar.gz
Writing /tmp/easy_install-_e2qwn/cffi-1.1.2/setup.cfg
Running cffi-1.1.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_e2qwn/cffi-1.1.2/egg-dist-tmp-382ExN
c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
#include <ffi.h>
^
1 error generated.
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/private/tmp/pip-build-wOHXdq/netlib/setup.py", line 87, in <module>
"install": CFFIInstall,
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 268, in __init__
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 313, in fetch_build_eggs
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 836, in resolve
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 1081, in best_match
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 1093, in obtain
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 380, in fetch_build_egg
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 629, in easy_install
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 659, in install_item
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 842, in install_eggs
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 1070, in build_and_install
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 1058, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-wOHXdq/netlib
**Updated after first response for libffi:**
After installing libffi, it started breaking on libxml. I found lxml on
pip, and it breaks again looking for libxml :(
104:~ user2368563$ brew install libxml
Error: No available formula for libxml
Searching formulae...
libxml++ libxml2 libxmlsec1
Searching taps...
homebrew/versions/libxml278
104:~ user2368563$ brew install libxml2
Warning: libxml2-2.9.2 already installed
104:bin user2368563$ sudo pip install lxml
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting lxml
/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |████████████████████████████████| 3.5MB 116kB/s
Installing collected packages: lxml
Running setup.py install for lxml
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-bDtXaT/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-gmvCN9-record/install-record.txt --single-version-externally-managed --compile:
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
Building lxml version 3.4.4.
Building without Cython.
Using build configuration of libxslt 1.1.28
running install
running build
running build_py
creating build
creating build/lib.macosx-10.10-intel-2.7
creating build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/_elementpath.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/builder.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/cssselect.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/doctestcompare.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/ElementInclude.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/pyclasslookup.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/sax.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/usedoctest.py -> build/lib.macosx-10.10-intel-2.7/lxml
creating build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/includes
creating build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/builder.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/clean.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/defs.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/diff.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/formfill.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron
copying src/lxml/isoschematron/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron
copying src/lxml/lxml.etree.h -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/lxml.etree_api.h -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/includes/c14n.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/config.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/dtdvalid.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/etreepublic.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/htmlparser.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/relaxng.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/schematron.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/tree.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/uri.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xinclude.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlerror.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlparser.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlschema.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xpath.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xslt.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/etree_defs.h -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/lxml-version.h -> build/lib.macosx-10.10-intel-2.7/lxml/includes
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/rng
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build/temp.macosx-10.10-intel-2.7
creating build/temp.macosx-10.10-intel-2.7/src
creating build/temp.macosx-10.10-intel-2.7/src/lxml
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/usr/include/libxml2 -I/private/tmp/pip-build-bDtXaT/lxml/src/lxml/includes -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.macosx-10.10-intel-2.7/src/lxml/lxml.etree.o -w -flat_namespace
In file included from src/lxml/lxml.etree.c:239:
/private/tmp/pip-build-bDtXaT/lxml/src/lxml/includes/etree_defs.h:14:10: fatal error: 'libxml/xmlversion.h' file not found
#include "libxml/xmlversion.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-bDtXaT/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-gmvCN9-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-bDtXaT/lxml
Answer: If you read through your log carefully you might spot this line:
c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
The "fatal error" part is especially important. :)
This means that the [ffi](https://sourceware.org/libffi/) headers couldn't be
located by your compiler. I'm not sure how to do it since I'm not a Mac user
but maybe homebrew could help you, or Google. To me it seems like you should
install [homebrew](http://brew.sh) and then just run:
brew install libffi
Then try pip again.
**Edit**
The full list of dependencies is:
* python
* libffi
* libssl
* libxml2
* libxslt1
So you'll need all those, and their headers, if you want to continue down this
path.
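With Homebrew that would be something along the lines of (the package names
are my guess and may differ on your system):
brew install libffi openssl libxml2 libxslt
followed by another `pip install mitmproxy` attempt.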
An easier solution is to download pre-built binaries for your Mac, from
[mitmproxy.org](http://mitmproxy.org/download/osx-mitmproxy-0.12.1.tar.gz)
(OSX Mountain Lion and later). I found this info in the [installation
docs](https://mitmproxy.org/doc/install.html#docOSX).
|
Solving overdetermined system in numpy when the value of one variable is already known
Question: I'm trying to solve an overdetermined system in Python, using the
`numpy.solve` function. I know the value of one of the variables and I know
that in theory I can find a unique solution for the system if I can somehow
plug in that known value.
My system is of the form `AxC=B`. The variables are split into two groups, one
group of N variables and one of T variables (although this does not matter for
the math). A is a `(T*N x T+N)` matrix, C is the variables vector, of length
`(T+N)`, and B is a vector of length `(T*N)`.
How do I tell `numpy.solve` (or another function in Python, but please don't
recommend least squares, I need the unique, exact solution, which I know
exists) to use the known value of one of the variables?
A simple example of my system would be:
|1 0 0 1 0| |n1| |B1|
|1 0 0 0 1| |n2| |B2|
|0 1 0 1 0| X |n3| = |B3|
|0 1 0 0 1| |t1| |B4|
|0 0 1 1 0| |t2| |B5|
|0 0 1 0 1| |B6|
The values of the elements of B would of course be known, as well as the value
of one of the variables, let's say I know that `t1=1`. The dots don't mean
anything I just put them there so the characters wouldn't bunch up.
Answer: As @Foon pointed out, the canonical way to do this is to subtract a column.
However, on a side note, as your problem is overdetermined, you _have_ to use
a method such as least squares. By definition, if it's an overdetermined
problem, there is no "unique, exact solution". (Otherwise it would be even-
determined - A square matrix.)
That aside, here's how you'd go about it:
Let's take your example equation:
|1 0 0 1 0| |n1| |B1|
|1 0 0 0 1| |n2| |B2|
|0 1 0 1 0| X |n3| = |B3|
|0 1 0 0 1| |t1| |B4|
|0 0 1 1 0| |t2| |B5|
|0 0 1 0 1| |B6|
As you noted, this is overdetermined. If we know one of our "model" variables
(let's say `n1` in this case), it will be even more overdetermined. It's not a
problem, but it means we'll need to use least squares, and there isn't a
completely unique solution.
So, let's say we know what `n1` should be.
In that case, we'd re-state the problem by subtracting `n1` multiplied by the
first column in the solution matrix from our vector of observations (This is
what @Foon suggested):
|0 0 1 0| |n2| |B1 - n1|
|0 0 0 1| |n3| |B2 - n1|
|1 0 1 0| X |t1| = |B3 - 0 |
|1 0 0 1| |t2| |B4 - 0 |
|0 1 1 0| |B5 - 0 |
|0 1 0 1| |B6 - 0 |
Let's use a more concrete example in numpy terms. Let's solve the equation `y
= Ax^2 + Bx + C`. To start with, let's generate our data and "true" model
parameters:
import numpy as np
# Randomly generate two of our model variables
a, c = np.random.rand(2)
b = 1
x = np.linspace(0, 2, 6)
y = a * x**2 + b * x + c
noise = np.random.normal(0, 0.1, y.size)
y += noise
First, we'll solve it _without_ the knowledge that `B = 1`. We could use
`np.polyfit` for this, but to lead into the next bit, we'll use a lower-level
approach:
# I'm a geophysist, so I tend to use Gm=d instead of Ax=b
G = np.column_stack([x**2, x, np.ones_like(x)])
d = y
m, _, _, _ = np.linalg.lstsq(G, d)
print "Ideally, this would be 1: ", m[1]
As you can see, we'll get something close to, but not quite 1. In this case (I
didn't set the seed, so this will vary), the model parameters returned are
[ 0.13392633, 0.97217035, 0.33645734]
While the true parameters are:
[ 0.14592752, 1. , 0.31349185]
Now let's take the fact that we know `b` exactly into account. We'll make a
new `G` with one less column and subtract that column times `b` from our
observations (`d`/`y`):
G = np.column_stack([x**2, np.ones_like(x)])
d = y - b * x
m, _, _, _ = np.linalg.lstsq(G, d)
Now `m` is `[a, c]` and we've solved for those two variables using our
knowledge of `b`.
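As a quick sanity check (not part of the original fitting code), you can
rebuild the curve from the two recovered parameters plus the known `b`:
a_est, c_est = m
y_fit = a_est * x**2 + b * x + c_est
print "Recovered a, c:", a_est, c_est
print "True a, c:     ", a, c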
|
How do I take a number associated with a word and have the word printed that number of times?
Question: I have a text file with a list of words, each with a number, and I want to
alter the list so that instead of having the number next to the word, the
word is printed that number of times.
So for example, with this list:
word, 2
for, 3
cat, 1
dog, 2
tiger, 1
I want it took look like this:
word
word
for
for
for
cat
dog
dog
tiger
For my python program I have this so far:
f = raw_input("Please enter a filename: ")
def openfile(f):
with open(f,'r') as a:
a = a.readlines()
b = [x.lower() for x in a]
return b
def fix(b):
newlist = []
for line in b:
split_line = line.split(',')
print openfile(f)
What I want to do is take the number and tell the program to print the word
that number of times and then delete the number but I am not sure how to do
that.
If you have any suggestions, answers, or need clarification please let me
know!
Thanks
Answer: If you want to change the file itself you can use
[fileinput.input](https://docs.python.org/2/library/fileinput.html#fileinput.input)
with `inplace=True` to change the file content:
import fileinput
import sys

for line in fileinput.input("in.txt", inplace=True):
    if line.strip():
        w, i = line.split(",")
        # one line per repetition; int() ignores the surrounding whitespace
        sys.stdout.write("{}\n".format(w) * int(i))
Output:
word
word
for
for
for
cat
dog
dog
tiger
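If you'd rather leave the file untouched and just print the expanded list, the
same split-and-multiply idea works (a minimal sketch, assuming the same
comma-separated file format):
with open("in.txt") as f:
    for line in f:
        if line.strip():
            word, count = line.split(",")
            print "\n".join([word] * int(count))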
|
python assignment of complex objects
Question: I have the following code, which works fine, but the minute I uncomment the
`print list(B)` statement, `X` comes back as an empty list. I'm thinking
perhaps `X` is getting the address location of the `list(B)` executed as part
of the print statement.
import itertools
A = [1,2,3]
B = itertools.product(A,repeat=2)
print str(B)
#print list(B)
X = list(B)
print X
<itertools.product object at 0x7f5ac40a9a50>
[(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)]
Command took 0.03s
Answer: `B` is an _iterator_. If you ask for `list(B)` then you will _exhaust the
iterator_ , causing it to be empty the next time you do `list(B)`.
As a rule-of-thumb: when dealing with iterators, you rarely need to assign
them to names. Usually, you either just iterate over the iterator with `for-
in`, or you use `list` to convert the iterator into a list.
|
Python's nested classes
Question: I'm having trouble with Python's nested classes. Here's how I have the code
set up:
class Player:
class Doll2:
def __init__(self, stats):
self.role = stats[0]
self.level = float(stats[1])
self.hp = float(stats[2])
self.strength = float(stats[3])
self.skill = float(stats[4])
self.agility = float(stats[5])
self.constitution = float(stats[6])
self.charisma = float(stats[7])
self.intelligence = float(stats[8])
self.armor = float(stats[9])
self.damage_min = float(stats[10])
self.damage_max = float(stats[11])
self.resilience = float(stats[12])
self.critical = float(stats[13])
self.block = float(stats[14])
self.healing = float(stats[15])
self.threat = float(stats[16])
def __init__(self, name, server, province):
stats2 = get_info_doll(province, server, name, "2")
self.Doll2(stats2)
player1 = Player("Username", "us", "1")
print(player1.Doll2.hp)
And here is the error that I'm getting:
AttributeError: class Doll2 has no attribute 'hp'
What am I doing wrong?
Answer: `hp` is an attribute of an instance (not of the class); try this:
class Player:
class Doll2:
def __init__(self, stats):
# ... more assignments
self.hp = float(stats[2])
# ... more assignments
def __init__(self, name, server, province):
stats2 = get_info_doll(province, server, name, "2")
self.doll2 = self.Doll2(stats2) # create instance of Doll2
player1 = Player("Username", "us", "1")
print(player1.doll2.hp) # using instance instead of class
The important lines are:
`self.doll2 = self.Doll2(stats2)`
and
`print(player1.doll2.hp)`
|
LRANGE behaves in a very strange manner
Question: It seems as if I do not get the idea of this method (`LRANGE`). The situation
is following. I've added in `Python` some data to two lists, one called
`tables` and another one called `object_1_`. I pushed several items to list
`tables` and only one item to list `object_1_`. I did it like so:
db.lpush('object_1_','some_data')
db.lpush('tables','some data')
...
...
db.lpush('tables','some data')
When I go to `redis-cli`, I see something which seems strange to me. I do the
following:
redis-cli> lrange object_1_ -1 0
some data
redis-cli> lrange tables -1 0
(empty list or set)
So, my question is: why is that? I'm trying to understand why the construct
`lrange key_name -1 0` returns something for a one-element list but returns
nothing for a larger list.
Answer: The order of [`LRANGE`](http://redis.io/commands/lrange)'s arguments is
important - you've switched the `start` and `end` in your sample, try doing
`lrange key 0 -1` ;) It only appears to work for the one-element list because
there index `-1` (the last element) and index `0` are the same position, so
the range `-1 0` is non-empty; on any longer list `start` resolves past `end`
and Redis returns an empty range.
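The same fix applies from the Python client:
print db.lrange('object_1_', 0, -1)  # the whole one-element list
print db.lrange('tables', 0, -1)     # the whole list, newest item first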
|
wxpython - How to refresh listbox from outside of class?
Question:
class ListCtrl(wx.ListCtrl):
def __init__(self, parent):
super(ListCtrl, self).__init__(parent,size=(1200,700))
def delete_items(self):
self.DeleteAllItems()
class One(wx.Panel):
b =wx.Button()
b.bind(**Listbox.delete_items**)
class Two(wx.Panel):
self.lb = Listbox(self)
1. In my application, I have two panels: class One represents the sidebar panel which contains buttons, and class Two represents the main panel which contains the listbox.
2. How do I call a function via a button (in this case to delete items from a listbox) whose parent belongs to another class (Two)?
Answer: one way you could do this is with pub/sub:
from wx.lib.pubsub import Publisher

pub = Publisher()

all_options = "One Two Three".split()

class One(wx.Panel):
    def on_delete_button(self, evt):
        all_options.pop(0)
        # broadcast the updated options to any subscribed listeners
        pub.sendMessage("update.options", all_options)

class Two(wx.Panel):
    def __init__(self, *args, **kwargs):
        self.lb = Listbox(self)
        self.lb.SetItems(all_options)
        # note the argument order in this pubsub API: listener first, then topic
        pub.subscribe(self.on_update_options, "update.options")

    def on_update_options(self, msg):
        self.lb.SetItems(msg.data)
that said there are many many ways to accomplish this
|
Adding new cookies with Mechanize Python
Question: I am trying to add cookies to a browser in mechanize so I am not redirected to
a "click OK to agree" page.
I have looked but can't figure out how to do this.
I can do it using urllib2 already but wish to do it with mechanize
import urllib2
opener = urllib2.build_opener()
opener.addheaders.append(('Cookie', 'ASPSESSIONIDAEBDRQRT=HBODDIACJNHNMHNHBBIHOEGO; ASPSESSIONIDCEAATTSQ=ECNDDBKCJBMAHBIJOCJAEPEO'))
u = opener.open("https://www.transactionservices.dla.mil/daasinq/dodaac.asp")
How do I add that cookie string in mechanize? Thanks in advance
Answer: Use the built-in `cookielib` library to build cookies and attach them to your
mechanize session. Note that `cookielib.Cookie` takes the full set of cookie
attributes rather than a raw `Cookie` header string (the domain and path below
are assumptions -- adjust them to the site you are hitting):
import cookielib
import mechanize

cookiejar = cookielib.LWPCookieJar()
br = mechanize.Browser()
br.set_cookiejar(cookiejar)
# one Cookie object per name/value pair; repeat for the second session id
cookie = cookielib.Cookie(version=0, name='ASPSESSIONIDAEBDRQRT',
    value='HBODDIACJNHNMHNHBBIHOEGO', port=None, port_specified=False,
    domain='www.transactionservices.dla.mil', domain_specified=True,
    domain_initial_dot=False, path='/', path_specified=True, secure=True,
    expires=None, discard=True, comment=None, comment_url=None,
    rest={}, rfc2109=False)
cookiejar.set_cookie(cookie)
Alternatively, you can still add headers to your mechanize session directly:
br.addheaders = [('Cookie', 'cookiename=cookie value')]
|
Python sh, set user.email to commit (Git)
Question: I'm working with [Waliki](https://github.com/mgaitan/waliki/), specifically
with waliki.git.
waliki.git uses sh to manage Git, so when a commit is executed, the committer
is always set to `settings.WALIKI_COMMITTER_EMAIL`; looking at the source
(below), `user` in `.git/config` is never changed.
class Git(object):
__shared_state = {} # it's a Borg
def __init__(self):
self.__dict__ = self.__shared_state
from waliki.settings import WALIKI_DATA_DIR
self.content_dir = WALIKI_DATA_DIR
os.chdir(self.content_dir)
if not os.path.isdir(os.path.join(self.content_dir, '.git')):
git.init()
git.config("user.email", settings.WALIKI_COMMITTER_EMAIL)
git.config("user.name", settings.WALIKI_COMMITTER_NAME)
self.git = git
def commit(self, page, message='', author=None, parent=None, extra_path=None):
path = page.path
paths_to_commit = [path]
if extra_path:
paths_to_commit.append(extra_path)
kwargs = {}
if isinstance(author, User) and author.is_authenticated():
kwargs['author'] = u"%s <%s>" % (author.get_full_name() or author.username, author.email)
elif isinstance(author, six.string_types):
kwargs['author'] = author
try:
there_were_changes = parent and parent != self.last_version(page)
status = git.status('--porcelain', path).stdout.decode('utf8')[:2]
if parent and status != "UU":
git.stash()
git.checkout('--detach', parent)
try:
git.stash('pop')
except:
git.checkout('--theirs', path)
if status == 'UU':
# See http://stackoverflow.com/a/8062976/811740
kwargs['i'] = True
git.add(path)
git_commit_cmd = git.commit.bake(allow_empty=True, allow_empty_message=True, m=message, **kwargs)
git_commit_cmd('--', *paths_to_commit)
last = self.last_version(page)
if parent and status != "UU":
git.checkout('master')
git.merge(last)
except ErrorReturnCode as e:
# TODO: make this more robust!
error = e.stdout.decode('utf8')
if 'CONFLICT' in error:
# For '-i' attribute see http://stackoverflow.com/q/5827944/811740
git_commit_cmd = git.commit.bake(allow_empty=True, allow_empty_message=True, m=_('Merged with conflict'), i=True, **kwargs)
git_commit_cmd('--', *paths_to_commit)
raise Page.EditionConflict(_('Automatic merge failed. Please, fix the conflict and save the page.'))
else:
raise
return there_were_changes
I added some code, so:
class Git(object):
    __shared_state = {}  # it's a Borg
def __init__(self, author):
self.__dict__ = self.__shared_state
from waliki.settings import WALIKI_DATA_DIR
self.content_dir = WALIKI_DATA_DIR
os.chdir(self.content_dir)
if not os.path.isdir(os.path.join(self.content_dir, '.git')):
git.init()
git.config("user.email", settings.WALIKI_COMMITTER_EMAIL)
git.config("user.name", settings.WALIKI_COMMITTER_NAME)
else:
git.config("user.email", author)
git.config("user.name", author)
self.git = git
but got:
RAN: '/usr/bin/git --no-pager commit -m Page created --allow-empty --allow-empty-message [email protected] -- content.md'
STDOUT:
STDERR:
fatal: No existing author found with '[email protected]'
How can I get Git to correctly set the current user for the commit?
Answer: This is the part of interest:
if isinstance(author, User) and author.is_authenticated():
kwargs['author'] = u"%s <%s>" % (author.get_full_name() or author.username, author.email)
elif isinstance(author, six.string_types):
kwargs['author'] = author
You need to specify the author with the format `Full Name
<[email protected]>`.
[From the docs](https://www.kernel.org/pub/software/scm/git/docs/git-
commit.html):
--author=<author>
Override the commit author. Specify an explicit author using the standard
"A U Thor <[email protected]>" format. Otherwise <author> is assumed to be a
pattern and is used to search for an existing commit by that author (i.e.
rev-list --all -i --author=<author>); the commit author is then copied from the
first such commit found.
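So when calling `commit`, build the author string in the full format first
(the name and email here are just an example) and pass that through; the
`isinstance(author, six.string_types)` branch will then hand it to git as-is:
# example author string in the "A U Thor <author@example.com>" format
author = u"%s <%s>" % ("Alok Choudhary", "alok@gmail.com")
# pass this as the author= argument to commit(); git will no longer
# treat it as a search pattern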
|
"django.db.utils.ProgrammingError: relation "app_user" does not exist" during manage.py test
Question: My setup:
* Django 1.8.3
* Python 2.7.10
* Ubuntu 14.04
* django-two-factor-auth==1.2.0
I get the following error when I run `python manage.py test`:
Traceback (most recent call last):
File "/src/venv/bin/django-admin.py", line 5, in <module>
management.execute_from_command_line()
File "/src/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/src/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/src/venv/lib/python2.7/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/src/venv/lib/python2.7/site-packages/django/core/management/base.py", line 393, in run_from_argv
self.execute(*args, **cmd_options)
File "/src/venv/lib/python2.7/site-packages/django/core/management/commands/test.py", line 74, in execute
super(Command, self).execute(*args, **options)
File "/src/venv/lib/python2.7/site-packages/django/core/management/base.py", line 444, in execute
output = self.handle(*args, **options)
File "/src/venv/lib/python2.7/site-packages/django/core/management/commands/test.py", line 90, in handle
failures = test_runner.run_tests(test_labels)
File "/src/venv/lib/python2.7/site-packages/django/test/runner.py", line 210, in run_tests
old_config = self.setup_databases()
File "/src/venv/lib/python2.7/site-packages/django/test/runner.py", line 166, in setup_databases
**kwargs
File "/src/venv/lib/python2.7/site-packages/django/test/runner.py", line 370, in setup_databases
serialize=connection.settings_dict.get("TEST", {}).get("SERIALIZE", True),
File "/src/venv/lib/python2.7/site-packages/django/db/backends/base/creation.py", line 368, in create_test_db
test_flush=not keepdb,
File "/src/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 120, in call_command
return command.execute(*args, **defaults)
File "/src/venv/lib/python2.7/site-packages/django/core/management/base.py", line 444, in execute
output = self.handle(*args, **options)
File "/src/venv/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 179, in handle
created_models = self.sync_apps(connection, executor.loader.unmigrated_apps)
File "/src/venv/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 317, in sync_apps
cursor.execute(statement)
File "/src/venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/src/venv/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/src/venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 63, in execute
return self.cursor.execute(sql)
django.db.utils.ProgrammingError: relation "app_user" does not exist
When I drop a `print(sql)` statement on line 62 in
`django/db/backends/utils.py`, I get following output:
CREATE DATABASE "test_dev"
SELECT c.relname, c.relkind
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'v')
AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid)
CREATE TABLE "django_migrations" ("id" serial NOT NULL PRIMARY KEY, "app" varchar(255) NOT NULL, "name" varchar(255) NOT NULL, "applied" timestamp with time zone NOT NULL)
SELECT c.relname, c.relkind
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'v')
AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid)
SAVEPOINT "s140275211773760_x1"
CREATE TABLE "distributedlock_lock" ("id" serial NOT NULL PRIMARY KEY, "key" varchar(255) NOT NULL, "value" varchar(255) NOT NULL, "timestamp" timestamp with time zone NULL)
RELEASE SAVEPOINT "s140275211773760_x1"
SAVEPOINT "s140275211773760_x2"
CREATE TABLE "djkombu_queue" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(200) NOT NULL UNIQUE)
RELEASE SAVEPOINT "s140275211773760_x2"
SAVEPOINT "s140275211773760_x3"
CREATE TABLE "djkombu_message" ("id" serial NOT NULL PRIMARY KEY, "visible" boolean NOT NULL, "sent_at" timestamp with time zone NULL, "payload" text NOT NULL, "queue_id" integer NOT NULL)
RELEASE SAVEPOINT "s140275211773760_x3"
SAVEPOINT "s140275211773760_x4"
CREATE TABLE "otp_static_staticdevice" ("id" serial NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "name" varchar(64) NOT NULL, "confirmed" boolean NOT NULL)
RELEASE SAVEPOINT "s140275211773760_x4"
SAVEPOINT "s140275211773760_x5"
CREATE TABLE "otp_static_statictoken" ("id" serial NOT NULL PRIMARY KEY, "device_id" integer NOT NULL, "token" varchar(16) NOT NULL)
RELEASE SAVEPOINT "s140275211773760_x5"
SAVEPOINT "s140275211773760_x6"
CREATE TABLE "otp_totp_totpdevice" ("id" serial NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "name" varchar(64) NOT NULL, "confirmed" boolean NOT NULL, "key" varchar(80) NOT NULL, "step" smallint NOT NULL CHECK ("step" >= 0), "t0" bigint NOT NULL, "digits" smallint NOT NULL CHECK ("digits" >= 0), "tolerance" smallint NOT NULL CHECK ("tolerance" >= 0), "drift" smallint NOT NULL, "last_t" bigint NOT NULL)
RELEASE SAVEPOINT "s140275211773760_x6"
CREATE INDEX "djkombu_queue_name_1c24e49fd475ad53_like" ON "djkombu_queue" ("name" varchar_pattern_ops)
ALTER TABLE "djkombu_message" ADD CONSTRAINT "djkombu_message_queue_id_12778caea7843dd_fk_djkombu_queue_id" FOREIGN KEY ("queue_id") REFERENCES "djkombu_queue" ("id") DEFERRABLE INITIALLY DEFERRED
CREATE INDEX "djkombu_message_46cf0e59" ON "djkombu_message" ("visible")
CREATE INDEX "djkombu_message_df2f2974" ON "djkombu_message" ("sent_at")
CREATE INDEX "djkombu_message_75249aa1" ON "djkombu_message" ("queue_id")
ALTER TABLE "otp_static_staticdevice" ADD CONSTRAINT "otp_static_staticdevice_user_id_39a61f1bd3ec970d_fk_app_user_id" FOREIGN KEY ("user_id") REFERENCES "ff_user" ("id") DEFERRABLE INITIALLY DEFERRED
So it is clear to me that my tests blow up while the test database is being
set up. Specifically, the attempt to create a foreign key constraint between
the `otp_static_staticdevice` table and my app's `app_user` table fails.
My immediate question is, why does django create the OTP table before my app's
table? My assumption is that the OTP app is listed first in my
`INSTALLED_APPS`. But this is not the case:
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.humanize',
'app',
...
'django_otp',
'django_otp.plugins.otp_static',
'django_otp.plugins.otp_totp',
'two_factor',
...
]
Next, I look at `django/core/management/commands/migrate.py`, trying to find
out how django determines its order for migrating apps.
Plopping a `pdb.set_trace()` statement on line 264
(<https://github.com/django/django/blob/1.8.3/django/core/management/commands/migrate.py#L264>)
and looking to see what `app_labels` contains, I get:
set(['djangosaml2', 'django_ace', 'recurly', 'staticfiles', 'distributedlock', 'app_overrides', 'messages', 'django_otp', 'kombu_transport_django', 'otp_totp', 'compressor', 'otp_static', 'humanize', 'ajax_select', 'django_extensions', 'import_export', 'raven_compat', 'crispy_forms', 'emoji'])
This is as far as I have gotten before I decided to ask for help. Does anyone
know how Django might end up _not_ creating the project's apps' tables in the
correct order, leading to dependency conflicts like this?
Answer: Got the same issue, and since it happens on `./manage.py test`, your
migrations may be broken.
Since Django 1.7, there is a new setting called `MIGRATION_MODULES`, in which
you configure your app's migration modules.
Adding the following workaround in settings.py (found
[here](https://gist.github.com/NotSqrt/5f3c76cd15e40ef62d09)) skips migrations
on tests, and solved it for me:
class DisableMigrations(object):
def __contains__(self, item):
return True
def __getitem__(self, item):
return "notmigrations"
MIGRATION_MODULES = DisableMigrations()
|
Quickly find first entry with a value below or equal
Question: Let's say in Python I have a list of files with their respective sizes,
represented as a dict (I don't care about the structure, you can propose
another one):
from random import randint
def gen_rand_fileslist(nbfiles=100, maxvalue=100):
fileslist = {}
for i in xrange(nbfiles):
fileslist["file_"+str(i)] = randint(1, maxvalue)
return fileslist
fileslist = gen_rand_fileslist(10)
Example `fileslist`:
{'file_0': 2,
'file_1': 21,
'file_2': 20,
'file_3': 16,
'file_4': 12,
'file_5': 67,
'file_6': 95,
'file_7': 16,
'file_8': 2,
'file_9': 5}
Now I want to quickly find the highest value below the specified threshold.
For example:
get_value_below(fileslist, threshold=25) # result should be 'file_1' with value 21
The function get_value_below() is to be called in a tight loop, so it should
be as fast as possible, and any threshold can be specified (so sorting doesn't
help directly).
Is there a way to be faster than just walking through the whole list (linear
time)?
Answer: It all depends on how often you are going to search for a threshold in the
`fileslist`. If you are going to do more than `Θ(log n)` queries, then it's
better to sort first and then perform a binary search for each query.
Otherwise, if you want to perform one query only, then yes it's better to
linear search since the element you want can be virtually anywhere and you'll
definitely need to visit each element of the list.
If you are planning to sort first and binary search afterwards, use
[bisect_right](https://docs.python.org/2/library/bisect.html#bisect.bisect_right),
which for an input `x` returns the insertion point just past the last element
lower than or equal to `x` -- so the element at `position - 1` is the one you
want.
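A minimal sketch of that approach, reusing your `fileslist` and keeping your
`get_value_below` name:
import bisect

# sort once (O(n log n)); every query afterwards is O(log n)
sorted_items = sorted(fileslist.items(), key=lambda kv: kv[1])
sizes = [size for _, size in sorted_items]

def get_value_below(threshold):
    pos = bisect.bisect_right(sizes, threshold)
    if pos == 0:
        return None  # every file is larger than the threshold
    return sorted_items[pos - 1]  # the (filename, size) pair you want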
|
QTreeView only edits in first column?
Question: I am trying to make a simple property editor, where the property list is a
nested dict and the data is displayed and edited in a QTreeView. (Before I get
to my question -- if anyone already has a working implementation of this in
Python 3 I'd love to be pointed at it).
Anyway, after much work I have my QAbstractItemModel and I can open a
QTreeView with this model and it shows the data. If I click on a label in the
first column (the key) then it opens up an editor, either a text editor or a
spinbox etc depending on the datatype. When I finish editing it calls my
"model.setData" where I reject it because I don't want to allow editable keys.
I can disable the editing of this by using flags and that works fine. I just
wanted to check that everything works the way that I'd expect it to.
Here is what doesn't happen: if I click on a cell in the second column (the value that I actually want to edit) then it bypasses the loading of an editor and simply calls model.setData with the current value. I am baffled. I've tried changing the tree selectionBehavior and selectionMode but no dice. I'm returning Qt.ItemIsEnabled | Qt.ItemIsSelectable | Qt.ItemIsEditable in flags. It seems to display fine. It just won't open up an editor.
Any thoughts about what stupid mistake I must be making? I'll include the code
below, with some print statements that I'm using to try to debug the thing.
Thanks
PS One thing that hung me up for a long time was that my QModelIndex members
would just disappear, so the indices that I got back were garbage. I found
that by keeping a reference to them (throwing them in a list) that they
worked. This seems to be a problem that springs up a lot in Qt work (I had the
same problem with menus disappearing -- I guess that means that I should think
about it sooner). Is there a "best practices" way of dealing with this?
# -*- coding: utf-8 -*-
from collections import OrderedDict
from PyQt4.QtCore import QAbstractItemModel, QModelIndex, Qt
from PyQt4.QtGui import QAbstractItemView
class PropertyList(OrderedDict):
def __init__(self, *args, **kwargs):
OrderedDict.__init__(self, *args, **kwargs)
self.myModel = PropertyListModel(self)
def __getitem__(self,index):
if issubclass(type(index), list):
item = self
for key in index:
item = item[key]
return item
else:
return OrderedDict.__getitem__(self, index)
class PropertyListModel(QAbstractItemModel):
def __init__(self, propList, *args, **kwargs):
QAbstractItemModel.__init__(self, *args, **kwargs)
self.propertyList = propList
self.myIndexes = [] # Needed to stop garbage collection
def index(self, row, column, parent):
"""Returns QModelIndex to row, column in parent (QModelIndex)"""
if not self.hasIndex(row, column, parent):
return QModelIndex()
if parent.isValid():
indexPtr = parent.internalPointer()
parentDict = self.propertyList[indexPtr]
else:
parentDict = self.propertyList
indexPtr = []
rowKey = list(parentDict.keys())[row]
childPtr = indexPtr+[rowKey]
newIndex = self.createIndex(row, column, childPtr)
self.myIndexes.append(childPtr)
return newIndex
def get_row(self, key):
"""Returns the row of the given key (list of keys) in its parent"""
if key:
parent = key[:-1]
return list(self.propertyList[parent].keys()).index(key[-1])
else:
return 0
def parent(self, index):
"""
Returns the parent (QModelIndex) of the given item (QModelIndex)
Top level returns QModelIndex()
"""
if not index.isValid():
return QModelIndex()
childKeylist = index.internalPointer()
if childKeylist:
parentKeylist = childKeylist[:-1]
self.myIndexes.append(parentKeylist)
return self.createIndex(self.get_row(parentKeylist), 0,
parentKeylist)
else:
return QModelIndex()
def rowCount(self, parent):
"""Returns number of rows in parent (QModelIndex)"""
if parent.column() > 0:
return 0 # only keys have children, not values
if parent.isValid():
indexPtr = parent.internalPointer()
try:
parentValue = self.propertyList[indexPtr]
except:
return 0
if issubclass(type(parentValue), dict):
return len(self.propertyList[indexPtr])
else:
return 0
else:
return len(self.propertyList)
def columnCount(self, parent):
return 2 # Key & value
def data(self, index, role):
"""Returns data for given role for given index (QModelIndex)"""
# print('Looking for data in role {}'.format(role))
if not index.isValid():
return None
if role in (Qt.DisplayRole, Qt.EditRole):
indexPtr = index.internalPointer()
if index.column() == 1: # Column 1, send the value
return self.propertyList[indexPtr]
else: # Column 0, send the key
if indexPtr:
return indexPtr[-1]
else:
return ""
else: # Not display or Edit
return None
def setData(self, index, value, role):
"""Sets the value of index in a given role"""
print('In SetData')
if not index.isValid():
return False
print('Trying to set {} to {}'.format(index,value))
print('That is column {}'.format(index.column()))
if not index.column(): # Only change column 1
return False
try:
ptr = index.internalPointer()
self.propertyList[ptr[:-1]][ptr[-1]] = value
self.emit(self.dataChanged(index, index))
return True
except:
return False
def flags(self, index):
"""Indicates what can be done with the data"""
if not index.isValid():
return Qt.NoItemFlags
if index.column(): # only enable editing of values, not keys
return Qt.ItemIsEnabled | Qt.ItemIsSelectable | Qt.ItemIsEditable
else:
return Qt.ItemIsEnabled | Qt.ItemIsSelectable | Qt.ItemIsEditable #Qt.NoItemFlags
if __name__ == '__main__':
p = PropertyList({'k1':'v1','k2':{'k3':'v3','k4':4}})
import sys
from PyQt4 import QtGui
qApp = QtGui.QApplication(sys.argv)
treeView = QtGui.QTreeView()
# I've played with all the settings on these to no avail
treeView.setHeaderHidden(False)
treeView.setAllColumnsShowFocus(True)
treeView.setUniformRowHeights(True)
treeView.setSelectionBehavior(QAbstractItemView.SelectRows)
treeView.setSelectionMode(QAbstractItemView.SingleSelection)
treeView.setAlternatingRowColors(True)
treeView.setEditTriggers(QAbstractItemView.DoubleClicked |
QAbstractItemView.SelectedClicked |
QAbstractItemView.EditKeyPressed |
QAbstractItemView.AnyKeyPressed)
treeView.setTabKeyNavigation(True)
treeView.setModel(p.myModel)
treeView.show()
sys.exit(qApp.exec_())
Answer: @strubbly was real close but forgot to unpack the tuple in his `index` method.
Here's the working code for Qt5. There are probably a couple of imports and
stuff that would need to be fixed. Only cost me a couple weeks of my life :)
import sys
from collections import OrderedDict
from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import Qt
class TupleKeyedOrderedDict(OrderedDict):
def __init__(self, *args, **kwargs):
super().__init__(sorted(kwargs.items()))
def __getitem__(self, key):
if isinstance(key, tuple):
item = self
for k in key:
if item != ():
item = item[k]
return item
else:
return super().__getitem__(key)
def __setitem__(self, key, value):
if isinstance(key, tuple):
item = self
previous_item = None
for k in key:
if item != ():
previous_item = item
item = item[k]
previous_item[key[-1]] = value
else:
return super().__setitem__(key, value)
class SettingsModel(QtCore.QAbstractItemModel):
def __init__(self, data, parent=None):
super().__init__(parent)
self.root = data
self.my_index = {} # Needed to stop garbage collection
def index(self, row, column, parent):
if not self.hasIndex(row, column, parent):
return QtCore.QModelIndex()
if parent.isValid():
index_pointer = parent.internalPointer()
parent_dict = self.root[index_pointer]
else:
parent_dict = self.root
index_pointer = ()
row_key = list(parent_dict.keys())[row]
child_pointer = (*index_pointer, row_key)
try:
child_pointer = self.my_index[child_pointer]
except KeyError:
self.my_index[child_pointer] = child_pointer
index = self.createIndex(row, column, child_pointer)
return index
def get_row(self, key):
if key:
parent = key[:-1]
if not parent:
return 0
return list(self.root[parent].keys()).index(key[-1])
else:
return 0
def parent(self, index):
if not index.isValid():
return QtCore.QModelIndex()
child_key_list = index.internalPointer()
if child_key_list:
parent_key_list = child_key_list[:-1]
try:
parent_key_list = self.my_index[parent_key_list]
except KeyError:
self.my_index[parent_key_list] = parent_key_list
return self.createIndex(self.get_row(parent_key_list), 0,
parent_key_list)
else:
return QtCore.QModelIndex()
def rowCount(self, parent):
if parent.column() > 0:
return 0 # only keys have children, not values
if parent.isValid():
indexPtr = parent.internalPointer()
parentValue = self.root[indexPtr]
if isinstance(parentValue, OrderedDict):
return len(self.root[indexPtr])
else:
return 0
else:
return len(self.root)
def columnCount(self, parent):
return 2 # Key & value
def data(self, index, role):
if not index.isValid():
return None
if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole):
indexPtr = index.internalPointer()
if index.column() == 1: # Column 1, send the value
return self.root[indexPtr]
else: # Column 0, send the key
if indexPtr:
return indexPtr[-1]
else:
return None
else: # Not display or Edit
return None
def setData(self, index, value, role):
pointer = self.my_index[index.internalPointer()]
self.root[pointer] = value
self.dataChanged.emit(index, index)
return True
def flags(self, index):
if not index.isValid():
return 0
return Qt.ItemIsEnabled | Qt.ItemIsSelectable | Qt.ItemIsEditable
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
data = TupleKeyedOrderedDict(**{'1': OrderedDict({'sub': 'b'}), '2': OrderedDict({'subsub': '3'})})
model = SettingsModel(data)
tree_view = QtWidgets.QTreeView()
tree_view.setModel(model)
tree_view.show()
sys.exit(app.exec_())
|
How to get PySerial to accept 921600 Baud rate
Question: We have a motor controller that implements a USB->Virtual COM port that has a
fixed baud rate of 921600 (the manual even states that the baud rate cannot be
changed). I found that if I use a terminal program like Terminal, I can pass
the custom baud rate of 921600 and communicate with the instrument with no
issues. We are using Windows 7 pro, 64-bit version.
However, when I tried to do this in PySerial (v.2.7) using Python 2.7.10 (32
bit) like this:
import serial
ser = serial.Serial("COM3",921600)
I always encounter an error saying that the parameter is incorrect.
> File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 202, in
> _reconfigurePort raise ValueError("Cannot configure port, some setting was
> wrong. Original message: %r" % ctypes.WinError()) ValueError: Cannot
> configure port, some setting was wrong. Original message: WindowsError(87,
> 'The parameter is incorrect.')
The valid baud rates seem to be the ones listed in serialwin32.py:
BAUDRATES = (50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800,
9600, 19200, 38400, 57600, 115200)
When I use any of the baud rates from that list I can open the serial port
(but am not necessarily able to communicate with the instrument).
Just adding 921600 to this list in serialwin32.py doesn't do anything.
I have searched several forums and websites and so far nobody seems to have an
answer on how to set this higher baud rate in Windows. Baud rates above
115200 used to be unreliable in older versions of Windows, but I assume that
Windows 7 should be able to handle a much higher transfer rate now, especially
since many USB ICs like the FTDI and CH430 can handle a much higher baud rate than
115200.
Does anyone know a way to get pySerial to accept a higher baudrate than 115200
in Windows?
Answer: I tried 921600 and had no problem.
Your adapter probably does not support high-speed RS-232.
You may need to buy a CP21xx or equivalent converter;
Moxa and Lantronix are good brands (tested and in use here).
Tried on: Win7 x64, Python 2.7 x32.
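If you want to rule out PySerial itself before replacing hardware, one diagnostic worth trying (my suggestion, not part of the answer above) is to open the port at a rate the driver definitely accepts and then reassign `baudrate`, which reconfigures the already-open port; if the reassignment also fails with error 87, the driver itself is rejecting the rate:
    import serial
    ser = serial.Serial("COM3", 115200)  # open at a rate the driver accepts
    ser.baudrate = 921600                # reassigning the property reconfigures the open port
    print(ser.baudrate)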
|
numpy.ndarray objects not garbage collected
Question: While trying to fine-tune some memory leaks in the Python bindings for some
C/C++ functions, I came across some strange behavior pertaining to the garbage
collection of NumPy arrays.
I have created a couple of simplified cases in order to better explain the
behavior. The code was run using the `memory_profiler`, the output from which
follows immediately after. It appears that Python's garbage collection is not
working as expected when it comes to NumPy arrays:
# File deallocate_ndarray.py
@profile
def ndarray_deletion():
import numpy as np
from gc import collect
buf = 'abcdefghijklmnopqrstuvwxyz' * 10000
arr = np.frombuffer(buf)
del arr
del buf
collect()
y = [i**2 for i in xrange(10000)]
del y
collect()
if __name__=='__main__':
ndarray_deletion()
With the following command I invoked the `memory_profiler`:
`python -m memory_profiler deallocate_ndarray.py`
This is what I got:
Filename: deallocate_ndarray.py
Line # Mem usage Increment Line Contents
================================================
5 10.379 MiB 0.000 MiB @profile
6 def ndarray_deletion():
7 17.746 MiB 7.367 MiB import numpy as np
8 17.746 MiB 0.000 MiB from gc import collect
9 17.996 MiB 0.250 MiB buf = 'abcdefghijklmnopqrstuvwxyz' * 10000
10 18.004 MiB 0.008 MiB arr = np.frombuffer(buf)
11 18.004 MiB 0.000 MiB del arr
12 18.004 MiB 0.000 MiB del buf
13 18.004 MiB 0.000 MiB collect()
14 18.359 MiB 0.355 MiB y = [i**2 for i in xrange(10000)]
15 18.359 MiB 0.000 MiB del y
16 18.359 MiB 0.000 MiB collect()
I don't understand why even the forced calls to `collect` don't reduce the
memory usage of the program by freeing up some memory. Moreover, even if Numpy
arrays don't behave normally due to the underlying C constructs, why doesn't
the list (which is pure Python) get garbage collected?
I know that `del` does not directly call the underlying `__del__` method, but
you will note that all `del` statements in the code actually end up reducing
the reference count of the corresponding objects to zero (thereby making them
eligible for garbage collection AFAIK). Typically, I would expect to see a
negative entry in the increment column when an object undergoes garbage
collection. Can anyone shed some light on what is going on here?
NOTE: This test was run on OS X 10.10.4, Python 2.7.10 (conda), Numpy 1.9.2
(conda), Memory Profiler 0.33 (conda-binstar), psutil 2.2.1 (conda).
Answer: In order to see the memory garbage collected, I had to increase the size of
`buf` by several orders of magnitude. Maybe the size is too small for
`memory_profiler` to detect the change (it queries the OS, so measurements are
not very precise) or maybe it's too small for the Python garbage collector to
care, I don't know.
For example, replacing 10000 with 100000000 in the multiplier for `buf` yields:
Line # Mem usage Increment Line Contents
================================================
21 10.289 MiB 0.000 MiB @profile
22 def ndarray_deletion():
23 17.309 MiB 7.020 MiB import numpy as np
24 17.309 MiB 0.000 MiB from gc import collect
25 2496.863 MiB 2479.555 MiB buf = 'abcdefghijklmnopqrstuvwxyz' * 100000000
26 2496.867 MiB 0.004 MiB arr = np.frombuffer(buf)
27 2496.867 MiB 0.000 MiB del arr
28 17.312 MiB -2479.555 MiB del buf
29 17.312 MiB 0.000 MiB collect()
30 17.719 MiB 0.406 MiB y = [i**2 for i in xrange(10000)]
31 17.719 MiB 0.000 MiB del y
32 17.719 MiB 0.000 MiB collect()
|
Renaming multiple files at once with Python
Question: I am new to programming. I usually learn for a while, then take a long break
and forget most of what I learned. Never mind that background info.
I tried to create a function which would help me rename files inside a folder
and then add an increment at the end of the new name (e.g. blueberry1,
blueberry2, ...).
import os
def rename_files(loc,new_name):
file_list= os.listdir(loc)
for file_name in file_list:
count=1
if count <= len(file_list):
composite_name = new_name+str(count)
os.rename(file_name, composite_name)
count+= 1
Well apparently this code doesn't work. Any idea how to fix it?
Answer: You need to join the file to the path:
os.rename(os.path.join(loc, file_name), composite_name)
You can also use enumerate for the count:
import os
def rename_files(loc,new_name):
file_list= os.listdir(loc)
for ind, file_name in enumerate(file_list,1):
composite_name = new_name+str(ind)
        os.rename(os.path.join(loc, file_name), os.path.join(loc, composite_name))
`listdir` just returns the file names, not the paths, so Python would have no
way of knowing where the original files actually came from unless your cwd was
the same directory.
|
Return the value of a 2D PDF given x and y in Python?
Question: I have some data that I plotted the PDF using matplotlib's hist2D function.
The result looks like this:

The hist2d function returns a triple of arrays: H, xedges, yedges, with H being the
2D histogram values. Now I'd like to take this discrete H matrix and turn it
into a function that returns the value of H for any given (x, y) input. In
other words, I'd like to turn my 2D histogram into a 2D step function. Is there
a specific, computationally cheap function that I could use for
that purpose?
This looks like a pretty simple operation (usually done for image processing
but with pixel indices instead of real numbers) but I'm unable to find
anything about it, can you please help me?
Answer: You can construct an interpolator from the counts like this:
from numpy import random, histogram2d, diff
import matplotlib.pyplot as plt
from scipy.interpolate import interp2d
# Generate sample data
n = 10000
x = random.randn(n)
y = -x + random.randn(n)
# bin
nbins = 100
H, xedges, yedges = histogram2d(x, y, bins=nbins)
# Figure out centers of bins
def centers(edges):
return edges[:-1] + diff(edges[:2])/2
xcenters = centers(xedges)
ycenters = centers(yedges)
# Construct interpolator
pdf = interp2d(xcenters, ycenters, H)
# test
plt.pcolor(xedges, yedges, pdf(xedges, yedges))
Result:

Note that this will be linearly interpolated rather than step-wise. For a
quicker version which assumes a regular grid, this will also work:
from numpy import meshgrid, vectorize
def position(edges, value):
return int((value - edges[0])/diff(edges[:2]))
@vectorize
def pdf2(x, y):
return H[position(yedges, y), position(xedges, x)]
# test - note we need the meshgrid here to get the right shapes
xx, yy = meshgrid(xcenters, ycenters)
plt.pcolor(xedges, yedges, pdf2(xx, yy))
|
python csv TypeError: unhashable type: 'list'
Question: Hi, I'm trying to compare two CSV files and get the difference. However, I get
the above-mentioned error. Could someone kindly lend a helping hand? Thanks.
import csv
f = open('ted.csv','r')
psv_f = csv.reader(f)
attendees1 = []
for row in psv_f:
attendees1.append(row)
f.close
f = open('ted2.csv','r')
psv_f = csv.reader(f)
attendees2 = []
for row in psv_f:
attendees2.append(row)
f.close
attendees11 = set(attendees1)
attendees12 = set(attendees2)
print (attendees12.difference(attendees11))
Answer: Happened on the line
attendees11 = set(attendees1)
didn't it? You are trying to make a set from a list of lists, but that is
impossible because a set may only contain hashable types, and lists are not
hashable. You can convert the lists to tuples:
attendees1.append(tuple(row))
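Putting it together, a minimal sketch of the whole comparison (assuming the same file names as in the question):
    import csv
    def read_rows(path):
        # Build a set of hashable tuples, one per CSV row
        with open(path, 'r') as f:
            return {tuple(row) for row in csv.reader(f)}
    attendees1 = read_rows('ted.csv')
    attendees2 = read_rows('ted2.csv')
    print(attendees2.difference(attendees1))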
|
Recommended way to redirect file-like streams in Python?
Question: I am writing a backup script for a sqlite database that changes very
intermittently. Here's how it is now:
from bz2 import BZ2File
from datetime import datetime
from os.path import dirname, abspath, join
from hashlib import sha512
def backup_target_database(target):
backup_dir = dirname(abspath(target))
hash_file = join(backup_dir, 'last_hash')
new_hash = sha512(open(target, 'rb').read()).digest()
if new_hash != open(hash_file, 'rb').read():
fmt = '%Y%m%d-%H%M.sqlite3.bz2'
snapshot_file = join(backup_dir, datetime.now().strftime(fmt))
BZ2File(snapshot_file, 'wb').write(open(target, 'rb').read())
open(hash_file, 'wb').write(new_hash)
Currently the database weighs just shy of 20MB, so it's not that taxing when
this runs and reads the whole file into memory (and does so twice when changes
are detected), but I don't want to wait until this becomes a problem.
What is the proper way to do this sort of (to use Bash terminology)
stream piping?
Answer: First, there's duplication in your code (the `target` file is read twice).
You can use
[shutil.copyfileobj](https://docs.python.org/3.4/library/shutil.html#shutil.copyfileobj)
and
[hashlib.update](https://docs.python.org/3.4/library/hashlib.html#hashlib.hash.update)
for a memory-efficient routine:
from bz2 import BZ2File
from datetime import datetime
from hashlib import sha512
from os.path import dirname, abspath, join
from shutil import copyfileobj
def backup_target_database(target_path):
backup_dir = dirname(abspath(target_path))
hash_path = join(backup_dir, 'last_hash')
old_hash = open(hash_path, 'rb').read()
hasher = sha512()
with open(target_path, 'rb') as target:
while True:
data = target.read(1024)
if not data:
break
hasher.update(data)
new_hash = hasher.digest()
if new_hash != old_hash:
fmt = '%Y%m%d-%H%M.sqlite3.bz2'
snapshot_path = join(backup_dir, datetime.now().strftime(fmt))
with open(target_path, 'rb') as target:
with BZ2File(snapshot_path, 'wb', compresslevel=9) as snapshot:
copyfileobj(target, snapshot)
(Note: I didn't test this code. If you run into a problem, please let me know.)
|
Read. Check. Write. A Broken Python script
Question: I am developing a word game, and for this game, I needed a list of words.
Sadly, this list was _so_ long that I just had to refine it (this list of
words can be found on any Mac at **/usr/share/dict/**).
To refine it, I decided to use my own Python scripts. I already wrote a script
before that removes all words that start with capital letters (thus removing
names of places, etc.), and it worked. This is it:
with open("/Users/me/Desktop/oldwords.txt", "r") as text:
with open("/Users/me/Desktop/newwords.txt", "w") as towriteto:
for word in text:
if word[0]==word[0].lower():
towriteto.write(word)
Then, I decided to refine it even further; I decided that I would delete all
words that are not in the pyenchant module's English dictionary. This
operation's code is very similar to the previous one's. This is my code:
import enchant
with open("/Users/me/Desktop/newwords.txt", "r") as text:
with open("/Users/me/Desktop/words.txt", "w") as towriteto:
d = enchant.Dict("en_US")
for word in text:
if d.check(word):
towriteto.write(word)
Sadly, this did not write anything to the "towriteto" file, and after some
debugging, I found that
d.check(word) -> False
It always returned false. However, when I checked words separately, real words
returned True, and fake words returned False as they should.
I have no idea what is wrong with my second script. The file locations are
correct and the pyenchant installation had no issues.
Thanks in advance!
Answer: I don't know the input file format, but if there is only one word per line, try
removing the end-of-line character from `word` before calling `d.check(word)`:
word = word.rstrip()
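A minimal sketch of the corrected loop, assuming one word per line (the empty-string guard is mine, since `d.check('')` raises an error):
    import enchant
    d = enchant.Dict("en_US")
    with open("/Users/me/Desktop/newwords.txt", "r") as text:
        with open("/Users/me/Desktop/words.txt", "w") as towriteto:
            for line in text:
                word = line.rstrip()  # drop the trailing newline before checking
                if word and d.check(word):
                    towriteto.write(word + "\n")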
|
How to simplify for loop in prime number generator in python
Question:
import math
def is_prime(num):
if num < 2:
return False
for i in range(2, int(math.sqrt(num))+ 1):
if num % i == 0:
return False
return True
Primes seem to be a popular topic, but in the book from which I am learning
Python, I am on chapter 6 of 21, in the iteration chapter, which teaches
while loops. I have not learned for loops yet, although I understand what they
do. So, let's say I have not learned for loops yet and am given only
if/elif/else statements and while loops as my tools. How can I change the
for line of code into something simpler using those tools? While
asking this question I quickly came up with this code:
def formula(num):
i = 2
while i >= 2:
return int(math.sqrt(num)+ 1)
def is_primetwo(num):
i = 2
if num < 2:
return False
formula(num)
if num % i == 0:
return False
return True
It works, but would this be a simple version of the for loop, or is there
something even simpler where I do not have to wrap a function within a
function?
Answer: How about something like:
from math import sqrt
def is_prime(num):
if (num < 2):
return False
i = 2
limit = int(sqrt(num) + 1)
while (i <= limit):
if num % i == 0:
return False
i = i + 1
return True
|
twisted critical unhandled error on scrapy tutorial
Question: I'm new to programming and I'm trying to learn Scrapy, using the Scrapy tutorial:
<http://doc.scrapy.org/en/latest/intro/tutorial.html>
So I ran the "scrapy crawl dmoz" command and got this error:
2015-07-14 16:11:02 [scrapy] INFO: Scrapy 1.0.1 started (bot: tutorial)
2015-07-14 16:11:02 [scrapy] INFO: Optional features available: ssl, http11
2015-07-14 16:11:02 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tu
torial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2015-07-14 16:11:05 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsol
e, LogStats, CoreStats, SpiderState
Unhandled error in Deferred:
2015-07-14 16:11:06 [twisted] CRITICAL: Unhandled error in Deferred:
2015-07-14 16:11:07 [twisted] CRITICAL:
I'm using Windows 7 and Python 2.7. Does anybody know what the problem is? How
can I fix it?
EDIT: My spider file code is:
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
import scrapy
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/computers/programming/languages/python/books/",
"http://www.dmoz.org/computer/programming/languages/python/resources/"
]
def parse(self, response):
filename = response.url.split("/")[-2] + '.html'
with open(filename,'wb') as f:
f.write(response.body)
items.py code:
import scrapy
class DmozItem(scrapy.Item):
title = scrapy.Field()
link = scrapy.Field()
desc = scrapy.Field()
pip list:
* bootstrap-admin (0.3.3)
* cffi (1.1.2)
* characteristic (14.3.0)
* cryptography (0.9.3)
* cssselect (0.9.1)
* Django (1.7.7)
* django-auth-ldap (1.2.4)
* django-debug-toolbar (1.3.0)
* django-mssql (1.6.2)
* django-pyodbc (0.2.6)
* django-pyodbc-azure (1.2.2)
* django-redator (0.2.3)
* django-reversion (1.8.5)
* django-summernote (0.6.0)
* django-windows-tools (0.1.1)
* django-wysiwyg-redactor (0.4.3.2)
* enum34 (1.0.4)
* ez-setup (0.9)
* flup (1.0.2)
* idna (2.0)
* ipaddress (1.0.13)
* iso8601 (0.1.4)
* logging (0.4.9.6)
* lxml (3.4.4)
* mechanize (0.2.5)
* MySQL-python (1.2.4)
* pbr (0.10.8)
* Pillow (2.7.0)
* pip (7.1.0)
* pyasn1 (0.1.8)
* pyasn1-modules (0.0.6)
* pycparser (2.14)
* pymongo (2.6)
* pyodbc (3.0.7)
* pyOpenSSL (0.15.1)
* pypm (1.4.3)
* python-ldap (2.4.18)
* pythonselect (1.3)
* pywin32 (218.3)
* queuelib (1.2.2)
* Scrapy (1.0.1)
* selenium (2.44.0)
* service-identity (14.0.0)
* setuptools (18.0.1)
* six (1.9.0)
* sqlparse (0.1.15)
* stevedore (1.3.0)
* Twisted (15.2.1)
* virtualenv (1.11.6)
* virtualenv-clone (0.2.5)
* virtualenvwrapper (4.3.2)
* virtualenvwrapper-powershell (12.7.8)
* w3lib (1.11.0)
* xlrd (0.9.2)
* zope.interface (4.1.2)
Thanks for your attention and sorry for my poor English; it isn't my native language.
Answer: I'm beginning to learn Scrapy as well and ran into the same problem as
you. After struggling with it for an afternoon, I finally found it's because
the pywin32 module was only downloaded, not installed. You can try running the
command below in cmd to finish the pywin32 install, then try crawling
again:
python python27\scripts\pywin32_postinstall.py -install
I hope it will help!
|
Using TinyMCE with Django, I see ImportError: Cannot import name simplejson
Question: I'm using Django and TinyMCE on Windows. When I run the following in the
command prompt:
>
> python manage.py runserver
>
I get
>
> ImportError: Cannot import name simplejson
>
Below is the entire console output including traceback that I got (from
[Screenshot
here](https://docs.google.com/presentation/d/1M9bBw5nAZKEgkoUzfKqK5ygvXmoJPb3T3Knpl9hpFWs/edit?usp=sharing))
Has anyone got any tips?
    C:\WINDOWS\system32>easy_install simplejson
    Searching for simplejson
    Best match: simplejson 3.7.3
    Processing simplejson-3.7.3-py2.7-win-amd64.egg
    simplejson 3.7.3 is already the active version in easy-install.pth
    Using C:\Python27\lib\site-packages\simplejson-3.7.3-py2.7-win-amd64.egg
    Processing dependencies for simplejson
    Finished processing dependencies for simplejson
    C:\WINDOWS\system32>cd C:\Home\Genesis_Book
    C:\Home\Genesis_Book>python manage.py runserver
    Traceback (most recent call last):
      File "manage.py", line 13, in <module>
        execute_from_command_line(sys.argv)
      File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 338, in execute_from_command_line
        utility.execute()
      File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 312, in execute
        django.setup()
      File "C:\Python27\lib\site-packages\django\__init__.py", line 18, in setup
        apps.populate(settings.INSTALLED_APPS)
      File "C:\Python27\lib\site-packages\django\apps\registry.py", line 138, in populate
        app_config.import_models(all_models)
      File "C:\Python27\lib\site-packages\django\apps\config.py", line 198, in import_models
        self.models_module = import_module(models_module_name)
      File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
        __import__(name)
      File "C:\Home\Genesis_Book\webapp\tinymce\models.py", line 6, in <module>
        from tinymce import widgets as tinymce_widgets
      File "C:\Home\Genesis_Book\webapp\tinymce\widgets.py", line 19, in <module>
        from django.utils import simplejson
    ImportError: cannot import name simplejson
### Update
People have suggested that I use `easy_install simplejson`, but I still get
the error - I've added the output following that stage to my screenshot and
the console output above.
Answer: As of Django 1.7:
> The module django.utils.simplejson will be removed. The standard library
> provides json which should be used instead.
To install simplejson separately:
easy_install simplejson
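If you can't upgrade the TinyMCE package itself, a common workaround (my suggestion, not part of the quoted docs) is to change the failing import in `widgets.py` to fall back to the standard library:
    # In tinymce/widgets.py, replace "from django.utils import simplejson" with:
    try:
        from django.utils import simplejson  # removed in Django 1.7+
    except ImportError:
        import json as simplejson  # standard-library fallback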
|
UnicodeDecodeError - encoding in Python
Question: I'm trying to connect to a database on a web hotel. However, I get this error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf8 in position 24: ordinal not in range(128)
The error is on the same line as the hostname and database name. I'd rather
not give away the password and hostname... all I can say is that the strings
do not contain any special characters. I have
reload(sys)
sys.getdefaultencoding('UTF-8')
at the top, but after I run the program I get the same error. Hope someone can
help me with this!
UPDATE:
! -*- coding: utf-8 -*-
import MySQLdb, cgi, cgitb
import mysql.connector
import sys
reload(sys)
sys.getdefaultencoding()
class mySQL_handling():
def __init__(self):
self.conn = mysql.connector.connect(user = <string>, passwd = <string>,
host = <string>, database = <string>, port = 22)
def test(self):
print "top kek"
def run(self):
while True:
self.test()
if __name__ == "__main__":
pgi = mySQL_handling()
pgi.run()
Really sorry for not being able to give away the strings...
Answer: Usually if the database has trouble with unicode strings it's for one of two reasons: you
haven't configured your file's encoding to UTF-8, and/or you haven't set up the
encoding declaration at the top of the Python file:
# -*- coding: utf-8 -*-
If you have the file set up like this, you shouldn't declare the unicode
strings with a u'' prefix, because all of them are supposed to be unicode already.
|
POST request with form data using Python's requests
Question: I am trying to query this API from within Python. It seems to respond with a 400
status code. Can someone tell me how this API should be queried?
This API is documented at <http://text-processing.com/docs/phrases.html>
import requests
r = requests.post('http://text-processing.com/api/phrases/',
data= {'text':'This is California.'})
I guess I am misunderstanding the way that data should be posted here.
Answer: You are doing it exactly right; sending a `POST` request with a `text`
parameter gives me a 200 response with JSON body:
>>> import requests
>>> url = 'http://text-processing.com/api/phrases/'
>>> params = {'text': 'This is California.'}
>>> r = requests.post(url, data=params)
>>> r
<Response [200]>
>>> r.json()
{u'NP': [u'California', u'This'], u'GPE': [u'California'], u'VP': [u'is'], u'LOCATION': [u'California']}
If you get a 400 response instead you are sending too much data or an empty
value:
> ## Errors
>
> A 400 Bad Request response will be returned under the following conditions:
>
> * no value for text is provided
> * text exceeds 1,000 characters
> * an incorrect language is specified
>
`language` can be `english`, `dutch`, `portuguese` or `spanish`, but defaults
to `english` if you don't include it. If you do include it in your request you
can also get a 400 error if you don't set it to one of those 4 supported
values.
|
Requests works and URLFetch doesn't
Question: I'm trying to make a request to the particle servers in python in a google app
engine app.
In my terminal, I can complete the request simply and successfully with
requests as:
res = requests.get('https://api.particle.io/v1/devices', params={"access_token": {ACCESS_TOKEN}})
But in my app, the same thing doesn't work with urlfetch, which keeps telling
me it can't find the access token:
url = 'https://api.particle.io/v1/devices'
payload = {"access_token": {ACCESS_TOKEN}}
form_data = urllib.urlencode(payload)
res = urlfetch.fetch(
url=url,
payload=form_data,
method=urlfetch.GET,
headers={
'Content-Type':
'application/x-www-form-urlencoded'
},
follow_redirects=False
)
I have no idea what the problem is, and no way to debug. Thanks!
Answer: In a nutshell, your problem is that in your `urlfetch` sample you're embedding
your access token into the request body, and since you're issuing a GET
request (which cannot carry a request body), this information gets
discarded.
**Why does your first snippet work?**
Because `requests.get()` takes that optional `params` argument that means:
"take this dictionary I give you, convert all its key/value pairs into a
[query string](https://en.wikipedia.org/wiki/Query_string) and append it to
the main URL"
So, behind the curtains, `requests.get()` is building a string like this:
`https://api.particle.io/v1/devices?access_token=ACCESS_TOKEN`
That's the correct endpoint you should point your GET requests to.
**Why doesn't your second snippet work?**
This time, `urlfetch.fetch()` uses a different syntax than `requests.get()`
(but equivalent nonetheless). The important bit to note here is that `payload`
argument **doesn't** mean the same as our `params` argument that you used
before in `requests.get()`.
`urlfetch.fetch()` expects our query string -if any- to be already urlencoded
into the URL (that's why `urllib.urlencode()` comes into play here). On the
other hand, `payload` is where you should put your request body in case you
were issuing a POST, PUT or PATCH request, but particle.io's endpoint is not
expecting your OAuth access token to be there.
Something like this should work (disclaimer: not tested):
auth = {"access_token": {ACCESS_TOKEN}}
url_params = urllib.urlencode(auth)
url = 'https://api.particle.io/v1/devices?%s' % url_params
res = urlfetch.fetch(
url=url,
method=urlfetch.GET,
follow_redirects=False
)
Notice how now we don't need your previous `Content-type` header anymore,
since we aren't carrying any content after all. Hence, `headers` parameter can
be removed from this example call.
For further reference, take a look at `urlfetch.fetch()`
[reference](https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch#google.appengine.api.urlfetch.fetch)
and [this SO thread](http://stackoverflow.com/questions/14551194/how-are-
parameters-sent-in-an-http-post-request) that will hopefully give you a better
insight into HTTP methods, parameters and request bodies than my poor
explanation here.
**PS:** If particle.io servers support it (they should), you should move away
from this authentication schema and carry your tokens in a `Authorization:
Bearer <access_token>` header instead. Carrying access tokens in URLs is not a
good idea because they are much more visible that way and tend to stay logged
in servers, hence posing a security risk. On the other hand, in a TLS session
all request headers are always encrypted so your auth tokens are well hidden
there.
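A sketch of that header-based variant (assuming particle.io accepts Bearer tokens; `ACCESS_TOKEN` is a placeholder, as in the snippets above):
    from google.appengine.api import urlfetch
    res = urlfetch.fetch(
        url='https://api.particle.io/v1/devices',
        method=urlfetch.GET,
        headers={'Authorization': 'Bearer %s' % ACCESS_TOKEN},
        follow_redirects=False
    )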
|
Input into a polynomial regression formula with Python
Question: I inherited a project in the middle of pandemonium and, to make matters worse,
I am just learning Python.
I managed to implement a polynomial function into my code and the results are
the same as the ones posted in the examples of this web page. [z =
numpy.polyfit(x, y, 5)]
However, I would like to know how to modify the program so I can supply
one of the known values of y as input to find x.
In other words, I have x and y arrays where:
- array x holds known kilogram weights (0.0, 0.5, 1.0, 1.5, 2.0)
- array y holds the corresponding readings (0.074581967,
0.088474754, 0.106797419, 0.124461935, 0.133726833)
I have a program that reads a load weight and the tension created by a
phidget and generates the array to be used by this program.
What I need to accomplish is to read the next value from the phidget and be
able to convert the reading into kilograms using the fitted calibration.
Is there a way to do this? I feel I am only missing a line of code, but I don't
know how to apply the coefficients returned by the formula (z).
Thanks in advance.
# CODE ADDED AS REQUESTED
from Phidgets.PhidgetException import PhidgetException
from Phidgets.Devices.Bridge import Bridge, BridgeGain
import datetime
import os
import re
import sys
import time
import numpy
wgh = list()
avg = list()
buf = list()
x = []
y = []
def nonlinear_regression(): # reads the calibration mapping and generates the conversion coefficients
fd = open("calibration.csv", "r")
for line in fd:
[v0, v1] = line.split(",")
x.append(float(v0))
y.append(float(v1[:len(v1) - 1]))
xdata = numpy.array(x)
ydata = numpy.array(y)
z = numpy.polyfit(x, y, 5)
return z
def create_data_directory(): # create the data directory
if not os.path.exists("data"):
os.makedirs("data")
def parse_config(): # get config-file value
v = int()
config = open("config.properties", "r")
for line in config:
toks = re.split(r"[\n= ]+", line)
if toks[0] == "record_interval":
v = int(toks[1])
return v
def read_bridge_data(event): # read the data
buf.append(event.value)
def record_data(f_name, date, rms):
if not os.path.isfile(f_name):
fd = open(f_name, "w")
fd.write("time,weight\n")
fd.write(datetime.datetime.strftime(date, "%H:%M"))
fd.write(",")
fd.write(str(rms) + "\n")
fd.close()
else:
fd = open(f_name, "a")
fd.write(datetime.datetime.strftime(date, "%H:%M"))
fd.write(",")
fd.write(str(rms) + "\n")
fd.close()
print("Data recorded.")
def release_bridge(event): # release the phidget device
try:
event.device.closePhidget()
except:
print("Phidget bridge could not be released properly.")
sys.exit(1)
def main():
create_data_directory()
RECORD_INTERVAL = parse_config() # get the config-file value
calibrate = nonlinear_regression() # get calibration function; use like: calibrate(some_input)
bridge = Bridge()
try:
bridge.setOnBridgeDataHandler(read_bridge_data)
bridge.setOnDetachHandler(release_bridge) # when the phidget gets physically detached
bridge.setOnErrorhandler(release_bridge) # asynchronous exception (i.e. keyboard interrupt)
except:
print("Phidget bridge event binding failed.")
sys.exit(1)
try:
bridge.openPhidget()
bridge.waitForAttach(3000)
except:
print("Phidget bridge opening failed.")
sys.exit(1)
last_record = int()
while (True):
date = datetime.datetime.now()
f_name = "data\\" + datetime.datetime.strftime(date, "%B_%d_%Y") + ".csv"
curr = time.time() * 1000
if (curr - last_record) > (RECORD_INTERVAL * 1000):
try:
bridge.setDataRate(10)
last = time.time() * 1000
bridge.setEnabled(0, True)
while (time.time() * 1000 - last) < 1000: # collects over 1 sec
pass
bridge.setEnabled(0, False)
except:
print("Phidget bridge data reading error.")
bridge.setEnabled(0, False)
bridge.closePhidget()
sys.exit(1)
vol = sum(buf) / len(buf)
del buf[:]
last_record = curr
record_data(f_name, date, vol) # replace curr with calibrated data
#THIS IS WHERE I WILL LIKE TO INCORPORATE THE CHANGES TO SAVE THE WEIGHT
#record_data(f_name, date, conversion[0] * vol + conversion[1]) # using the linear conversion function
else:
time.sleep(RECORD_INTERVAL - 1) # to reduce the CPU's busy-waiting
if __name__ == "__main__":
main()
Answer: The conversion function from the calibration is returned by
`numpy.polyfit` as an array of coefficients. Since you passed 5 for the degree
argument of `polyfit`, you will get an array of six coefficients:
f(x) = ax^5 + bx^4 + cx^3 + dx^2 + ex + f
Where a, b, c, d, e, and f are the elements of the `z` array returned by
`nonlinear_regression`.
To implement the conversion formula, simply use the power operator
`**`, the elements of `z`, and the value of `vol`:
`vol_calibrated = vol**5 * z[0] + vol**4 * z[1] + vol**3 * z[2] + vol**2 *
z[3] + vol * z[4] + z[5]`
Or more generally:
degree = len(z) - 1
vol_calibrated = sum(vol**(degree-i) * coeff for i, coeff in enumerate(z))
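For what it's worth, `numpy.polyval` performs exactly this evaluation, and it expects coefficients ordered from highest degree to lowest, which is what `polyfit` returns:
    import numpy
    # z is the coefficient array from numpy.polyfit; vol is the averaged reading
    vol_calibrated = numpy.polyval(z, vol)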
|
Using ProxyMesh (https://proxymesh.com ) IP in selenium chrome driver for web scrapping
Question: I have performed web scraping using the Python Scrapy framework with a ProxyMesh
IP. If the proxy requires authentication, I use the following code:
import base64
# Start your middleware class
class ProxyMiddleware(object):
# overwrite process request
def process_request(self, request, spider):
# Set the location of the proxy
request.meta['proxy'] = "http://....."
# Use the following lines if your proxy requires authentication
proxy_user_pass = "username:pwd"
# setup basic authentication for the proxy
encoded_user_pass = base64.encodestring(proxy_user_pass)
request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
When I want to do the same while scraping with the Selenium Chrome driver, what is
the appropriate technique? I find examples using Firefox, but
have had no luck with the Chrome driver. Please share your ideas.
Answer: Going through the documentation on [how to configure an HTTP
client with ProxyMesh](https://proxymesh.com/http-client-examples/), you can set up your profile
object as follows:
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", "http://username:[email protected]")
profile.set_preference("network.proxy.http_port", "portnumber")
profile.update_preferences()
driver = webdriver.Firefox(firefox_profile=profile)
Or you can use the `webdriver.Proxy` object to set SSL credentials. There are also
options available in the ProxyMesh configuration panel where you can add the IP
address/hostname of the server that is using the ProxyMesh service.
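For Chrome specifically (which the answer above doesn't cover), a minimal sketch using the `--proxy-server` switch; note that Chrome ignores credentials embedded in the proxy URL, so this assumes you authenticate by whitelisting your IP address in the ProxyMesh control panel, and the host and port below are placeholders:
    from selenium import webdriver
    options = webdriver.ChromeOptions()
    # Placeholder endpoint - use the ProxyMesh host/port assigned to your account
    options.add_argument('--proxy-server=http://hostname.proxymesh.com:portnumber')
    driver = webdriver.Chrome(chrome_options=options)
    driver.get('http://example.com')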
|
How to slice strings backwards python
Question: I am learning Python and I made a basic program where the user takes the link
of a photo and inputs it, then the program downloads that photo. In order to
make sure that the user doesn't enter a link that is a webpage instead of a
photo, I had the program check what the file extension was by using string
slicing, but I can't seem to find out how to slice the string backwards.
I know that this is a dumb question, but after an hour of searching I still
can't find the answer. Here is the code:
import random
import urllib.request
import urllib.parse
def download_web_image(url, file_format):
try:
name = random.randrange(1, 1000)
full_name = str(name) + file_format
urllib.request.urlretrieve(url, full_name)
print("Image download successful!")
print("Image named " + full_name)
except:
print('Error')
def get_user_url():
url = input("Now enter the url of the photo you want to download:")
try:
if url[0:3:-1] is '.png':
download_web_image(url, ".png")
elif url[0:4:-1] is 'gepj.':
download_web_image(url, '.jpeg')
elif url[0:3:-1] is '.gpj':
download_web_image(url, '.jpg')
else:
print('the file format is uncompatible: ' + url[1:4:-1])
except:
print('The url is not valid!')
print('look for an image on a website, make sure it is a JPG or PNG file or it will not work!')
get_user_url()
Thank you for the help. And no, I do not want the string to show up backwards.
Answer: I suggest you use the built-in method
[`endswith`](https://docs.python.org/2.7/library/stdtypes.html?highlight=endswith#str.endswith);
it saves you the trouble of handling variable-length extensions (`png`, `jpeg`, `jpg`, etc.), this
way:
>>>url = 'https://www.python.org/static/community_logos/python-logo-master-v3-TM.png'
>>>url.endswith('.png')
True
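And for completeness, since the title asks about slicing backwards: plain negative indexing is usually simpler than a reverse slice for looking at the end of a string:
    url = 'photo.jpeg'
    print(url[-5:])    # '.jpeg' - the last five characters
    print(url[-4:])    # 'jpeg'
    print(url[::-1])   # 'gepj.otohp' - the whole string reversed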
|
python: sigsegv when using ctypes on cygwin
Question: I am trying to compile and use a C library in Python using the ctypes module. The
library, strangely, works fine on a Linux machine but throws SIGSEGV on
Cygwin64.
import ctypes
import numpy as np
import pdb
xbry = np.array([0.9, 0.1, 0.2, 0.9])
ybry = np.array([0.9, 0.9, 0.1, 0.1])
beta = np.array([1.0, 1.0, 1.0, 1.0])
nx = 30
ny = 30
ul_idx = 0
nnodes=14
precision=1.0e-12
nppe=3
newton=True
thin=True
checksimplepoly=True
verbose=True
_libgridgen = np.ctypeslib.load_library('libgridgen.dll', '/home/Nikhil/python/octant/gridgen')
print _libgridgen
_libgridgen.gridgen_generategrid2.restype = ctypes.c_void_p
_libgridgen.gridnodes_getx.restype = ctypes.POINTER(ctypes.POINTER(ctypes.c_double))
_libgridgen.gridnodes_gety.restype = ctypes.POINTER(ctypes.POINTER(ctypes.c_double))
_libgridgen.gridnodes_getnce1.restype = ctypes.c_int
_libgridgen.gridnodes_getnce2.restype = ctypes.c_int
_libgridgen.gridnodes_getnx.restype = ctypes.c_int
_libgridgen.gridnodes_getny.restype = ctypes.c_int
_libgridgen.gridmap_build.restype = ctypes.c_void_p
nbry = len(xbry)
nsigmas = ctypes.c_int(0)
sigmas = ctypes.c_void_p(0)
nrect = ctypes.c_int(0)
xrect = ctypes.c_void_p(0)
yrect = ctypes.c_void_p(0)
ngrid = ctypes.c_int(0)
xgrid = ctypes.POINTER(ctypes.c_double)()
ygrid = ctypes.POINTER(ctypes.c_double)()
_gn = _libgridgen.gridgen_generategrid2(
ctypes.c_int(nbry),
(ctypes.c_double * nbry)(*xbry),
(ctypes.c_double * nbry)(*ybry),
(ctypes.c_double * nbry)(*beta),
ctypes.c_int(ul_idx),
ctypes.c_int(nx),
ctypes.c_int(ny),
ngrid,
xgrid,
ygrid,
ctypes.c_int(nnodes),
ctypes.c_int(newton),
ctypes.c_double(precision),
ctypes.c_int(checksimplepoly),
ctypes.c_int(thin),
ctypes.c_int(nppe),
ctypes.c_int(verbose),
ctypes.byref(nsigmas),
ctypes.byref(sigmas),
ctypes.byref(nrect),
ctypes.byref(xrect),
ctypes.byref(yrect) )
print 'run getx'
x = _libgridgen.gridnodes_getx(_gn)
print 'reshape result.'
x = np.asarray([x[0][i] for i in range(ny*nx)])
x.shape = (ny, nx)
print 'run gety'
y = _libgridgen.gridnodes_gety(_gn)
print 'reshape result.'
y = np.asarray([y[0][i] for i in range(ny*nx)])
y.shape = (ny, nx)
# backtrace
Program received signal SIGSEGV, Segmentation fault.
0x0000000542596595 in gridnodes_getx (gn=0x45ab50) at gridnodes.c:789
789 return gn->gx;
(gdb) backtrace
0 0x0000000542596595 in gridnodes_getx (gn=0x45ab50) at gridnodes.c:789
and this is the c code backtrace is referring to
int gridnodes_getnx(gridnodes* gn)
{
return gn->nx;
}
int gridnodes_getny(gridnodes* gn)
{
return gn->ny;
}
double** gridnodes_getx(gridnodes* gn)
{
return gn->gx;
}
double** gridnodes_gety(gridnodes* gn)
{
return gn->gy;
}
I would appreciate it if someone could help me with this.
Answer: I managed to solve the issue by using a class:
class gridnodes (ctypes.Structure):
_fields_ = [
('nx',ctypes.c_int), \
('ny',ctypes.c_int), \
('gx', ctypes.c_double), \
('gy', ctypes.c_double)]
and replacing
_libgridgen.gridgen_generategrid2.restype = ctypes.c_void_p
by
_libgridgen.gridgen_generategrid2.restype = ctypes.POINTER(gridnodes)
I also tried the suggestion by @eryksun and that works as well.
|
Memory leak in opencv 2.x
Question: I'm running a Python script and it is giving an insufficient-memory error. I
tried executing this script with both OpenCV 2.4.9 and 2.4.11 and got the error. Is
there any issue with these two versions of OpenCV?
* * *
import numpy as np
import cv2
MIN_MATCH_COUNT = 10
img1 = cv2.imread('./DSC_0022.jpg',0) # queryImage
img2 = cv2.imread('./template.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
if m.distance < 0.7*n.distance:
good.append(m)
if len(good)>MIN_MATCH_COUNT:
src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
matchesMask = mask.ravel().tolist()
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
img2 = cv2.polylines(img2,[np.int32(dst)],True,255,3, cv2.LINE_AA)
else:
print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
matchesMask = None
* * *
Insufficient Memory Error:
C:\>fe.py
OpenCV Error: Insufficient memory (Failed to allocate 139253572 bytes) in cv::OutOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52
Traceback (most recent call last):
File "C:\fe.py", line 15, in <module>
kp2, des2 = sift.detectAndCompute(img2,None)
cv2.error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 139253572 bytes in function cv::OutOfMemoryError
Answer: I was getting the same error when trying to run a Python script
that used OpenCV. In my case the error occurred because there was not enough free
memory left for the program, so the allocation of the arrays failed.
Check how memory is being utilized while the Python script runs,
and try optimizing the code so that RAM utilization
is reduced.
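One way to reduce the memory footprint (my suggestion, not from the answer above) is to downscale large images before feature detection, since SIFT's memory use grows with image size:
    import cv2
    img = cv2.imread('./DSC_0022.jpg', 0)
    # Halve each dimension before detection; this cuts the pixel count (and memory) to a quarter
    img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
    sift = cv2.SIFT()
    kp, des = sift.detectAndCompute(img, None)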
|
elasticsearch python regex query
Question: I want to execute queries like "my * is" which should yield results like "my
name is", "my car is" etc.
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
client = Elasticsearch([
{'host': 'localhost', 'port':9200}
])
s = Search(using=client, index="index_name") \
.query('regexp', title="my * is")
response = s.execute()
But I am getting an empty response.
Answer: Credits to Wikto Stribizew for the correct answer in the comments. This
worked for me:
my [^ ]* is
It is strange that `*`, `?`, etc. aren't accepted as such, but they actually
have different meanings in this syntax, as mentioned
[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-
dsl-regexp-query.html).
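Applied to the snippet from the question, the fixed query looks like this:
    s = Search(using=client, index="index_name") \
        .query('regexp', title="my [^ ]* is")
    response = s.execute()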
Thanks,
|
Writing a networkx graph with position, color, etc. to gexf
Question: I have constructed a graph using networkx and used the spring layout function
to generate a nice plot; unfortunately, this layout is not transferred to the gexf
file.
I think that the point of the networkx tool is to be able to write readable
graph files, so I hesitate to manually change the XML if there is a simple
solution in Python.
Here is a link to the format that I want it in, because the end goal is to use
the sigma.js tool to put it in a web page:
<https://github.com/jacomyal/sigmajs.org/blob/master/assets/data/les-
miserables.gexf>
or, more specifically, this format: <http://gexf.net/format/viz.html>
This is taken from the Gephi example using the Les Miserables characters. Is
there a way that simply using the `nx.write_gexf(G, "")` command, while
also writing the properties specifically, can output it in the same format?
Answer: This is more of a GEPHI "issue" rather than a Networkx one.
Very briefly,
[networkx.write_gexf](https://networkx.github.io/documentation/latest/reference/readwrite.gexf.html)
will attempt to export every possible node and edge attribute that a `gexf`
file can describe. It is then up to users of GEPHI to re-asign a particular
node or edge attribute to an internal, GEPHI attribute.
Less briefly, suppose that:
import networkx
#Create a Graph
G = networkx.Graph()
G.add_node("Alpha", X=10, Y=10)
G.add_node("Beta", X=-10, Y=-10)
G.add_path(["Alpha", "Beta"])
Given this graph, let's now try to save it in GEXF with:
#Attempt to save the graph in gexf
#PLEASE NOTE: This call will succeed and MyGraph will be created on the disk.
#You can now do a cat MyGraph.gexf and verify that attributes X and Y are indeed included in the file.
    networkx.write_gexf(G, "MyGraph.gexf")
#Add another node with an attribute of type tuple
G.add_node("Gamma", pos=(5,5))
#Attempt to save the graph in gexf again
#PLEASE NOTE: This call will fail because it is impossible to 'unpack' the tuple without further knowledge
networkx.write_gexf(G, "MyOtherGraph.gexf")
Now, a networkx.layout (e.g. `pos = networkx.layout.random_layout(G)`),
returns the positions of the nodes as an iterable array and these positions
can be saved back to the nodes but as the above example indicates, if you
attempt to save a graph with such node attributes, it will fail.
Therefore, I am afraid that you will have to unpack the coordinates returned
by a layout and assign them to single node attributes just as it is described
above (please see attributes `X` and `Y` which were used here).
Once this is done, the graph can be exported without any problems.
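A minimal sketch of that unpacking step: compute a layout, then copy each node's coordinates into scalar `X`/`Y` attributes before exporting (the attribute names are my choice, and networkx 1.x attribute access is assumed):
    import networkx
    G = networkx.Graph()
    G.add_path(["Alpha", "Beta", "Gamma"])
    pos = networkx.spring_layout(G)  # {node: array([x, y]), ...}
    for node, (x, y) in pos.items():
        G.node[node]['X'] = float(x)  # store plain floats so write_gexf can serialize them
        G.node[node]['Y'] = float(y)
    networkx.write_gexf(G, "MyGraph.gexf")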
Now, once in Gephi and to achieve this re-assignment of a node attribute to an
internal Gephi attribute, you are first going to need to install [this
plugin](https://marketplace.gephi.org/plugin/recast-column/). Once this is
done, load your graph into GEPHI as per normal and then switch to the "Data
Laboratory" view where you can see all your nodes and their attributes.
**Provided that you have installed the recast plugin** , click on "More
actions" and then "Set Standart Column" (sic). This starts a rather self
explanatory dialog box which allows you to "map" a graph specific attribute to
an internal GEPHI attribute such as the `X-coordinate`. Use this to assign
both coordinates and then switch to the "Overview" view to see the nodes
repositioned to their saved positions.
Hope this helps.
|
Perform function on multiple columns in python
Question: I have a data array of 30 trials (columns), each of 256 data points (rows), and
would like to run a wavelet transform (which requires a 1D array) on each
column, with the eventual aim of obtaining the mean coefficients of the 30
trials. Can someone point me in the right direction, please?
Answer: If you have a multidimensional numpy array then you can use a for loop:
import numpy as np
A = np.array([[1,2,3], [4,5,6]])
# A is the matrix: 1 2 3
# 4 5 6
for col in A.transpose():
print("Column:", col)
# Perform your wavelet transform here, you can save the
# results to another multidimensional array.
This gives you access to each column as a 1D array.
Output:
Column: [1 4]
Column: [2 5]
Column: [3 6]
If you want to access the rows rather than the columns then loop through `A`
rather than `A.transpose()`.
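For the stated goal of averaging coefficients across the trials, a sketch using the PyWavelets package (the `'db1'` wavelet is an arbitrary choice here; the question doesn't specify one):
    import numpy as np
    import pywt
    A = np.random.randn(256, 30)  # 256 data points x 30 trials
    coeffs = [pywt.dwt(col, 'db1') for col in A.transpose()]  # (approx, detail) pair per trial
    mean_cA = np.mean([cA for cA, cD in coeffs], axis=0)
    mean_cD = np.mean([cD for cA, cD in coeffs], axis=0)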
|
Python-Jira installation without Admin rights
Question: I am writing a Python tool that should get information from Jira. I wanted to
use Python-Jira but cannot install it properly. I am using (have to use)
Python 2.7, which doesn't come with pip, and I cannot install pip because I do
not have local admin rights (and won't get them without hassle).
Is there a way to install/use python-jira without the pip installation
process? I tried copying the jira package to the site-packages folder, but it
seems I run into dependency problems ('ImportError: No module named six.moves'
when I try to import Jira from jira), which it seems I have to follow
the pip installation process to resolve.
Thanks for your help.
Answer: Install [Virtualenv](https://virtualenv.pypa.io/en/latest/) and you will have
your own version of Python and Pip, so you should be able to install jira-
python properly.
There is a lot of guides how to do it.
[For Linux I recommend this one.](http://askubuntu.com/questions/244641/how-
to-set-up-and-use-a-virtual-python-environment-in-ubuntu)
[General Python Guide](http://docs.python-
guide.org/en/latest/dev/virtualenvs/)
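Another option worth mentioning (not covered above): pip itself can be bootstrapped without admin rights by installing into your per-user site-packages, assuming you can download get-pip.py:
    python get-pip.py --user
    python -m pip install --user jira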
|
Slow ANTLR4 generated Parser in Python, but fast in Java
Question: I am trying to convert an [ANTLR3
grammar](https://github.com/fabsx00/codesensor/blob/master/CPPGrammar.g) to an
[ANTLR4 grammar](https://gist.github.com/zevektor/630f1e9356fe200574d9), in
order to use it with the antlr4-python2-runtime. This grammar is a C/C++ fuzzy
parser.
After converting it (basically removing tree operators and semantic/syntactic
predicates), I generated the Python2 files using:
`java -jar antlr4.5-complete.jar -Dlanguage=Python2 CPPGrammar.g4`
And the code is generated without any error, so I import it in my python
project (I'm using PyCharm) to make some tests:
import sys, time
from antlr4 import *
from parser.CPPGrammarLexer import CPPGrammarLexer
from parser.CPPGrammarParser import CPPGrammarParser
currenttimemillis = lambda: int(round(time.time() * 1000))
def is_string(object):
return isinstance(object,str)
def parsecommandstringline(argv):
if(2!=len(argv)):
raise IndexError("Invalid args size.")
if(is_string(argv[1])):
return True
else:
raise TypeError("Argument must be str type.")
def doparsing(argv):
if parsecommandstringline(argv):
print("Arguments: OK - {0}".format(argv[1]))
input = FileStream(argv[1])
lexer = CPPGrammarLexer(input)
stream = CommonTokenStream(lexer)
parser = CPPGrammarParser(stream)
print("*** Parser: START ***")
start = currenttimemillis()
tree = parser.code()
print("*** Parser: END *** - {0} ms.".format(currenttimemillis()-start))
pass
def main(argv):
tree = doparsing(argv)
pass
if __name__ == '__main__':
main(sys.argv)
The problem is that the parsing is very slow. With a file containing ~200
lines it takes more than 5 minutes to complete, while the parsing of the same
file in antlrworks only takes 1-2 seconds. Analyzing the antlrworks tree, I
noticed that the `expr` rule and all of its descendants are called very often
and I think that I need to simplify/change these rules to make the parser
operate faster: 
Is my assumption correct or did I make some mistake while converting the
grammar? What can be done to make parsing as fast as on antlrworks?
**UPDATE:** I exported the same grammar to Java and it only took 795ms to
complete the parsing. The problem seems more related to python implementation
than to the grammar itself. Is there anything that can be done to speed up
Python parsing?
I've read [here](https://github.com/antlr/antlr4-python3/issues/44) that
python can be 20-30 times slower than java, but in my case python is ~400
times slower!
Answer: I confirm that the Python 2 and Python 3 runtimes have performance issues.
With a few patches, I got a 10x speedup on the python3 runtime (~5 seconds
down to ~400 ms). <https://github.com/antlr/antlr4/pull/1010>
|
How to diagnose extra SQLAlchemy connections in Pyramid
Question: When my app runs, I'm very frequently getting issues around the connection
pooling (one is "QueuePool limit of size 5 overflow 10 reached", another is
"FATAL: remaining connection slots are reserved for non-replication superuser
connections").
I have a feeling that it's due to some code not closing connections properly,
or other code greedily trying to open new ones when it shouldn't, but I'm
using the default SQL Alchemy settings so I assume the pool connection
defaults shouldn't be unreasonable. We are using the
scoped_session(sessionmaker()) way of creating the session so multiple threads
are supported.
So my main question is if there is a tool or way to find out where the
connections are going? Short of being able to see as soon as a new one is
created (that is not supposed to be created), are there any obvious anti-
patterns that might result in this effect?
Pyramid is very un-opinionated and with DB connections, there seem to be two
main approaches (equally supported by Pyramid it would seem). In our case, the
code base when I started the job used one approach (I'll call it the "globals"
approach) and we've agreed to switch to another approach that relies less on
globals and more on Pythonic idioms.
About our architecture: the application comprises one repo which houses the
Pyramid project and then sources a number of other git modules, each of which
had their own connection setup. The "globals" way connects to the database in
a very non-ORM fashion, eg.:
(in each repo's __init__ file)
    def load_database():
global tables
tables['table_name'] = Table(
'table_name', metadata,
Column('column_name', String),
)
There are related globals that are frequently peppered all over the code:
def function_needing_data(field_value):
global db, tables
select = sqlalchemy.sql.select(
[tables['table_name'].c.data], tables['table_name'].c.name == field_value)
return db.execute(select)
This _tables_ variable is latched onto within each git repo, which adds some
more table definitions, and somehow the global _tables_ manages to work,
providing access to all of the tables.
The approach that we've moved to (although at this time, there are parts of
both approaches still in the code) is via a centralised connection, binding
all of the metadata to it and then querying the db in an ORM approach:
(model)
class ModelName(MetaDataBase):
__tablename__ = "models_table_name"
... (field values)
(function requiring data)
from models.db import DBSession
from models.model_name import ModelName
def function_needing_data(field_value):
return DBSession.query(ModelName).filter(
ModelName.field_value == field_value).all()
We've largely moved the code over to the latter approach which feels right,
but perhaps I'm mistaken in my intentions. I don't know if there is anything
inherently good or bad in either approach but could this (one of the
approaches) be part of the problem so we keep running out of connections? Is
there a telltale sign that I should look out for?
Answer: It appears that Pyramid functions best (in terms of handling the connection
pool) when you use the [Pyramid transaction
manager](http://docs.pylonsproject.org/projects/pyramid-tm/en/latest/)
(pyramid_tm). This [excellent article by Jon
Rosebaugh](https://metaclassical.com/what-the-zope-transaction-manager-means-
to-me-and-you/) provides some helpful insight into both how Pyramid apps
typically set up their database connections and how they _should_ set them up.
In my case, it was necessary to include the pyramid_tm package and then remove
a few occurrences where we were manually committing session changes since
pyramid_tm will automatically commit changes if it doesn't see a reason not
to.
[Update]
I continued to have connection pooling issues, although far fewer of them.
After a lot of debugging, I found that the pyramid transaction manager (if
you're using it correctly) should not be the issue at all. The cause of the
other connection pooling issues I had was scripts that ran via cron
jobs. A script will release its connections when it's finished, but bad code
design may result in situations where the same script can be opened up and
start running while the previous one is still running (causing them both to run
slower, slow enough to have both running while a third instance of the script
starts, and so on).
This is a more language- and database-agnostic error since it stems from poor
job-scripting design but it's worth keeping in mind. In my case, the script
had an "&" at the end so that each instance started as a background process,
waited 10 seconds, then spawned another, rather than making sure the first job
started AND completed, then waited 10 seconds, then started another.
Hope this helps when debugging this very frustrating and thorny issue.
|
Checking that array doesn't contain negative numbers, and running function again if it does
Question: My task today is to create a way of checking whether a function's output contains
negative numbers, and if it does, I must re-run the function until it
contains no negative numbers.
I'll post the full code later in the post, but this is my attempt at a
solution:
def evecs(matrixTranspose):
    evectors = numpy.linalg.eig(matrixTranspose)[1][:,0]
    return evectors

if any(x<0 for x in evectors) == False:
    print(evectors)
evecs() is my function, and evectors is the output array, but I only want to
print evectors if there are no negative entries in it. I also want to later
add that if there _are_ negative entries in it, the code should run the evecs
function again until it finds an evectors that has no negative entries.
However, whenever I run it I get the error:
> global name evectors is not defined
Here's a link to my code, and the full output from the iPython console.
<http://pastebin.com/3Bk9h1gq>
Thanks!
Answer: You have not declared the variable `evectors` other than within the scope of
your function `evecs`.
evectors = evecs(matrixTranspose)
if not any(x < 0 for x in evectors):
    print(evectors)
**EDIT**
There are several issues:
1. Indentation is VERY important in Python. `MarkovChain` and `evecs` are two separate functions. You had your `evecs` function indented an extra level in, embedding it within `MarkovChain`.
2. `MarkovChain` should return `matrixTranspose` if you plan to use it in another function call.
3. As a result of the above issue, your function call to `MarkovChain` needs to be assigned to a variable, `matrixTranspose`, otherwise you will get an error stating that `matrixTranspose` is not defined when you make your function call to `evecs` with it.
4. Since the variable `matrixTranspose` isn't set until the call to `MarkovChain` completes, the remainder of your logic needs to be re-ordered.
I have applied all the above changes below and added comments to the changed
areas:
def MarkovChain(n, s):
    """
    """
    matrix = []
    for l in range(n):
        lineLst = []
        sum = 0
        crtPrec = precision
        for i in range(n-1):
            val = random.randrange(crtPrec)
            sum += val
            lineLst.append(float(val)/precision)
            crtPrec -= val
        lineLst.append(float(precision - sum)/precision)
        matrix.append(lineLst)  # append returns None, so don't assign its result
    print("The initial probability matrix.")
    print(tabulate(matrix))
    matrix_n = numpy.linalg.matrix_power(matrix, s)
    print("The final probability matrix.")
    print(tabulate(matrix_n))
    matrixTranspose = zip(*matrix_n)
    return matrixTranspose # issue 2
# issue 1
def evecs(matrixTranspose):
    evectors = numpy.linalg.eig(matrixTranspose)[1][:,0]
    return evectors

matrixTranspose = MarkovChain(4, 10000000000) # issue 3

# issue 4
evectors = evecs(matrixTranspose)
if not any(x < 0 for x in evectors):
    print(evectors)
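To also re-run everything until no negative entries show up (the second part
of your question), replace the final `if` with a loop; this sketch assumes
regenerating the matrix through `MarkovChain` is what you want on each retry:

evectors = evecs(matrixTranspose)
while any(x < 0 for x in evectors):  # keep regenerating until no entry is negative
    matrixTranspose = MarkovChain(4, 10000000000)
    evectors = evecs(matrixTranspose)
print(evectors)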
|
Nested list doesn't work properly
Question:
import re

def get_number(element):
    re_number = re.match("(\d+\.?\d*)", element)
    if re_number:
        return float(re_number.group(1))
    else:
        return 1.0

def getvalues(equation):
    elements = re.findall("([a-z0-9.]+)", equation)
    return [get_number(element) for element in elements]

eqn = []
eqn_no = int(raw_input("Enter the number of equations: "))
for i in range(eqn_no):
    eqn.append(getvalues(str(raw_input("Enter Equation %d: " % (i+1)))))

print "Main Matrix: "
for i in range(eqn_no):
    for j in range(eqn_no+1):
        print "\t%f" % (eqn[i][j]),
    print
print

equation = []
equation = eqn
for k in range(eqn_no-1):
    for i in range(k+1, eqn_no):
        for j in range(eqn_no+1):
            if eqn[i][j] != 0:
                eqn[i][j] = eqn[i][j] - (eqn[k][j]*(equation[i][k]/eqn[k][k]))
    print "Matrix After %d step: " % (k+1)
    for i in range(eqn_no):
        for j in range(eqn_no+1):
            print "\t%f" % (eqn[i][j]),
            equation[i][j] = eqn[i][j]
        print
    print
for input:
25x+5y+z=106.8
64x+8y+z=177.2
144x+12y+z=279.2
output is:
Main Matrix:
25.000000 5.000000 1.000000 106.800000
64.000000 8.000000 1.000000 177.200000
144.000000 12.000000 1.000000 279.200000
Matrix After 1 step:
25.000000 5.000000 1.000000 106.800000
0.000000 8.000000 1.000000 177.200000
0.000000 12.000000 1.000000 279.200000
Matrix After 2 step:
25.000000 5.000000 1.000000 106.800000
0.000000 8.000000 1.000000 177.200000
0.000000 0.000000 1.000000 279.200000
But it should be like
Main Matrix:
25.000000 5.000000 1.000000 106.800000
64.000000 8.000000 1.000000 177.200000
144.000000 12.000000 1.000000 279.200000
Matrix After 1 step:
25.000000 5.000000 1.000000 106.800000
0.000000 -4.80000 -1.56000 -96.208000
0.000000 -16.8000 -4.76000 -335.968000
Matrix After 2 step:
25.000000 5.000000 1.000000 106.800000
0.000000 -4.80000 -1.56000 -96.208000
0.000000 0.000000 0.699999 0.759981
First of all, this is partial code for finding the roots of n equations using
the naive Gauss elimination method. Does anyone have any idea why on earth this
is happening? Why are the zero parts changing while the others aren't? I wrote
this code in C++ and it works perfectly there, but here I'm facing many
problems. Maybe it's because I'm a newbie to Python. I'm using Python 2.7.....
Answer: I think the problem is the assignment `equation = eqn`. Since `eqn` is a list,
it is mutable, and assignment does not copy it; the variable just holds a
reference to the same object. This means that `equation` and `eqn` are the
same list.
You should
from copy import deepcopy
equation = deepcopy(eqn)
You need `deepcopy` instead of `copy` because you have a list of lists; the
inner lists need to be copied as well.
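A quick demonstration of the aliasing (a minimal sketch with made-up values):

from copy import deepcopy

a = [[1, 2], [3, 4]]
b = a            # alias: b and a are the same list
c = deepcopy(a)  # independent copy, inner lists included
a[0][0] = 99
print b[0][0]    # 99 -- b sees the change
print c[0][0]    # 1  -- c does not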
|
discontinous axis in subplot - python matplotlib
Question: I would like to have a plot with a y axis that is divided into two parts. The
lower part should have a normal scale, while the upper one should be scaled by
a factor of 10.
I already found some examples on how to make plots with broken x or y axes,
for example: <http://matplotlib.org/examples/pylab_examples/broken_axis.html>
But I do not understand how to achieve this, when I want to apply this to one
single subplot inside a 2x2 grid of plots. If it is important, I set up the
plots like this:
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.add_subplot(221)
[...]
fig.add_subplot(222)
[...]
Answer: Couldn't you set up a 4x4 grid of axes, and have 3 of the axes span 2x2 of
that space? Then the plot you want to have broken axes on can just cover the
remaining 2x2 space as parts `ax4_upper` and `ax4_lower`.
ax1 = plt.subplot2grid((4, 4), (0, 0), colspan=2, rowspan=2)
ax2 = plt.subplot2grid((4, 4), (0, 2), colspan=2, rowspan=2)
ax3 = plt.subplot2grid((4, 4), (2, 0), colspan=2, rowspan=2)
ax4_upper = plt.subplot2grid((4, 4), (2, 2), colspan=2, rowspan=1)
ax4_lower = plt.subplot2grid((4, 4), (3, 2), colspan=2, rowspan=1)
You can then set the `ylim` values for `ax4_upper` and `ax4_lower`.
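For instance, a minimal sketch (the limits here are made up so the upper part
covers ten times the range of the lower):

ax4_lower.set_ylim(0, 10)    # normal scale
ax4_upper.set_ylim(10, 110)  # ten times the span of the lower part

With the limits in place, continue as your example showed: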
# hide the spines between ax4 upper and lower
ax4_upper.spines['bottom'].set_visible(False)
ax4_lower.spines['top'].set_visible(False)
ax4_upper.xaxis.tick_top()
ax4_upper.tick_params(labeltop='off') # don't put tick labels at the top
ax4_lower.xaxis.tick_bottom()
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax4_upper.transAxes, color='k', clip_on=False)
ax4_upper.plot((-d,+d),(-d,+d), **kwargs) # top-left diagonal
ax4_upper.plot((1-d,1+d),(-d,+d), **kwargs) # top-right diagonal
kwargs.update(transform=ax4_lower.transAxes) # switch to the bottom axes
ax4_lower.plot((-d,+d),(1-d,1+d), **kwargs) # bottom-left diagonal
ax4_lower.plot((1-d,1+d),(1-d,1+d), **kwargs) # bottom-right diagonal
plt.show()

|
Spark Word2Vec example using text8 file
Question: I'm trying to run this example from apache.spark.org (code is below & entire
tutorial is here: <https://spark.apache.org/docs/latest/mllib-feature-
extraction.html>) using the text8 file that they reference on their site
(<http://mattmahoney.net/dc/text8.zip>):
import org.apache.spark._
import org.apache.spark.rdd._
import org.apache.spark.SparkContext._
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}
val input = sc.textFile("/Users/rkita/Documents/Learning/random/spark/MLlib/examples/text8",4).map(line => line.split(" ").toSeq)
val word2vec = new Word2Vec()
val model = word2vec.fit(input)
val synonyms = model.findSynonyms("china", 40)
for ((synonym, cosineSimilarity) <- synonyms) {
  println(s"$synonym $cosineSimilarity")
}
// Save and load model
model.save(sc, "myModelPath")
val sameModel = Word2VecModel.load(sc, "myModelPath")
I am working on Spark on my mac (2 cores, 8GB RAM), and I think I've set the
memory allocations correctly in my spark-env.sh file with the following:
export SPARK_EXECUTOR_MEMORY=4g
export SPARK_WORKER_MEMORY=4g
When I try to fit the model, I keep getting java heap errors. I got the same
result in python as well. I increased the java memory sizes using JAVA_OPTS as
well.
The file is only 100MB, so I think somehow my memory settings are not correct,
but I'm not sure if that's the root cause.
Has anyone else tried this example on a laptop?
I can't put the file on our company servers because we're not supposed to
import external data, so I'm reduced to working on my personal laptop. If you
have any suggestions, I'd appreciate hearing them. Thx!
Answer: First of all, I am a newcomer to Spark, so others may have quicker or better
solutions. I ran into the same difficulties running this sample code. I
managed to make it work, mainly by:
1. Running my own Spark cluster on my machine: use the start scripts in the /sbin/ directory of your Spark installation. To do so, you have to configure the conf/spark-env.sh file according to your needs. DO NOT use 127.0.0.1 IP for Spark.
2. Compile and package Scala code as a jar (sbt package), then provide it to the cluster (see addJar(...) In Scala code). It seems it is possible to provide compiled code to Spark using classpath / extra classpath, but I did not try it yet.
3. Set executor memory and driver memory (see Scala code)
spark-env.sh:
export SPARK_MASTER_IP=192.168.1.53
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_DAEMON_MEMORY=1G
# Worker : 1 by server
# Number of worker instances to run on each machine (default: 1).
# You can make this more than 1 if you have very large machines and would like multiple Spark worker processes.
# If you do set this, make sure to also set SPARK_WORKER_CORES explicitly to limit the cores per worker,
# or else each worker will try to use all the cores.
export SPARK_WORKER_INSTANCES=2
# Total number of cores to allow Spark applications to use on the machine (default: all available cores).
export SPARK_WORKER_CORES=7
#Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g
# (default: total memory minus 1 GB);
# note that each application's individual memory is configured using its spark.executor.memory property.
export SPARK_WORKER_MEMORY=8G
export SPARK_WORKER_DIR=/tmp
# Executor : 1 by application run on the server
# export SPARK_EXECUTOR_INSTANCES=4
# export SPARK_EXECUTOR_MEMORY=4G
export SPARK_SCALA_VERSION="2.10"
Scala file to run the example:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}

object SparkDemo {

  def log[A](key: String)(job: => A) = {
    val start = System.currentTimeMillis
    val output = job
    println("===> %s in %s seconds"
      .format(key, (System.currentTimeMillis - start) / 1000.0))
    output
  }

  def main(args: Array[String]): Unit = {
    val modelName = "w2vModel"

    val sc = new SparkContext(
      new SparkConf()
        .setAppName("SparkDemo")
        .set("spark.executor.memory", "8G")
        .set("spark.driver.maxResultSize", "16G")
        .setMaster("spark://192.168.1.53:7077") // ip of the spark master
        // .setMaster("local[2]") // does not work... workers lose contact with the master after 120s
    )

    // take a look into the target folder if you are unsure how the jar is named
    // oneliner to compile / run : sbt package && sbt run
    sc.addJar("./target/scala-2.10/sparkling_2.10-0.1.jar")

    val input = sc.textFile("./text8").map(line => line.split(" ").toSeq)
    val word2vec = new Word2Vec()

    val model = log("compute model") { word2vec.fit(input) }
    log("save model") { model.save(sc, modelName) }

    val synonyms = model.findSynonyms("china", 40)
    for ((synonym, cosineSimilarity) <- synonyms) {
      println(s"$synonym $cosineSimilarity")
    }

    val model2 = log("reload model") { Word2VecModel.load(sc, modelName) }
  }
}
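If you would rather stay in Python, a rough PySpark equivalent of the above (a
sketch, assuming Spark 1.x MLlib; the memory value is illustrative):

from pyspark import SparkConf, SparkContext
from pyspark.mllib.feature import Word2Vec

conf = (SparkConf()
        .setAppName("Word2VecDemo")
        .set("spark.executor.memory", "4g"))  # illustrative value
sc = SparkContext(conf=conf)

inp = sc.textFile("./text8").map(lambda line: line.split(" "))
model = Word2Vec().fit(inp)

for word, similarity in model.findSynonyms("china", 40):
    print(word, similarity)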
|
Get access token from Paypal in Python - Using urllib2 or requests library
Question: ### cURL
curl -v https://api.sandbox.paypal.com/v1/oauth2/token \
-H "Accept: application/json" \
-H "Accept-Language: en_US" \
-u "client_id:client_secret" \
-d "grant_type=client_credentials"
Parameters: `-u` takes `client_id`:`client_secret`
Here I pass my `client_id` and `client_secret`, and it works properly in cURL.
I am trying to implement the same thing in Python.
### Python
import urllib2
import base64
token_url = 'https://api.sandbox.paypal.com/v1/oauth2/token'
client_id = '.....'
client_secret = '....'
credentials = "%s:%s" % (client_id, client_secret)
encode_credential = base64.b64encode(credentials.encode('utf-8')).decode('utf-8').replace("\n", "")
header_params = {
    "Authorization": ("Basic %s" % encode_credential),
    "Content-Type": "application/x-www-form-urlencoded",
    "Accept": "application/json"
}

param = {
    'grant_type': 'client_credentials',
}
request = urllib2.Request(token_url, param, header_params)
response = urllib2.urlopen(request)
print "Response______", response
Traceback:
> result = urllib2.urlopen(request)
>
>
> HTTPError: HTTP Error 400: Bad Request
>
Can you tell me what's wrong with my Python code?
Answer: I would suggest using requests:
import requests
import base64
client_id = ""
client_secret = ""
credentials = "%s:%s" % (client_id, client_secret)
encode_credential = base64.b64encode(credentials.encode('utf-8')).decode('utf-8').replace("\n", "")
headers = {
    "Authorization": ("Basic %s" % encode_credential),
    'Accept': 'application/json',
    'Accept-Language': 'en_US',
}

param = {
    'grant_type': 'client_credentials',
}
url = 'https://api.sandbox.paypal.com/v1/oauth2/token'
r = requests.post(url, headers=headers, data=param)
print(r.text)
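As a side note, `requests` can build the Basic auth header for you, so the
manual base64 step isn't needed; a minimal sketch:

r = requests.post(
    'https://api.sandbox.paypal.com/v1/oauth2/token',
    auth=(client_id, client_secret),  # requests encodes this as HTTP Basic auth
    headers={'Accept': 'application/json', 'Accept-Language': 'en_US'},
    data={'grant_type': 'client_credentials'},
)
print(r.json().get('access_token'))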
|
Excluding data from a pandas dataframe based on percentiles
Question: I am having difficulties trying to use Python to remove some data outliers
prior to producing a scatterplot. I have an n by 43 dataframe imported using
pandas. I have figured out how to determine the thresholds for the outliers
and applied this to the dataframe such that I now have some boolean values
corresponding to whether the data should be included in the scatter plot or
not. I am however stuck on how to use this information to exclude the
appropriate data points.
My code so far:
def identify_outliers(self, parameters_file):
    data = pandas.read_csv(parameters_file) #import data
    header = data.keys() #get header
    quantiles = data.quantile([0.25,0.75],1) #determine thresholds for all data
    for i in range(len(header)):
        qnt_i = quantiles[header[i]].as_matrix() #get handle to quantiles for specific column of data
        boolean_data = data[header[i]].between(qnt_i[0],qnt_i[1]) #identify data points that fall outside this range
        for j in range(len(boolean_data)): #attempt to use boolean values to filter data in 'data' to only include 'True' (doesn't work)
            if boolean_data[j]:
                print data[header[i]]
Here is a snippet of data that is imported using pandas.read_csv
(v1).Kcat (v1).km (v11).k1
1.22E-02 1.20E-02 1.72E-06
0.0122441 1.42E-02 1.61E-06
1.04E-02 1.01E-02 1.00E-06
0.0136581 0.0185623 5.01158
0.0113221 0.0221445 0.0785929
0.506949 0.01 1.35E-06
1.16567 0.0141031 168.078
0.01 0.0100055 1.25E-06
0.0351003 153.682 163.082
0.0129821 0.0164996 0.0560866
0.01 1.61671 1166.5
0.0112294 0.0100472 1.17E-06
0.0104352 0.0124419 1.63E-06
0.0173491 0.01 0.000110292
0.01 0.0409099 1.00E-06
490.531 557.418 5.85845
199.639 0.79314 0.00155387
0.0104078 25.2456 0.0212165
0.920923 84.1231 1.04E-05
1.00E-02 3.07E+02 1.01E-06
0.01 0.0113395 1.23E-06
0.0799303 1.14812 1.00E-06
0.403507 0.76484 436.664
0.0118404 0.38389 1.06E-06
Does anybody have a suggestion as to how I can filter `data` to remove all the
values which do not fall within the specified range?
Thanks
Answer: **Edit:** Prior version used apply/lambda which is not really necessary. This
is a simpler version.
Here's a smaller dataframe based on just your first 10 rows.
df
v1 v2 v3
0 0.012200 0.012000 0.000002
1 0.012244 0.014200 0.000002
2 0.010400 0.010100 0.000001
3 0.013658 0.018562 5.011580
4 0.011322 0.022145 0.078593
5 0.506949 0.010000 0.000001
6 1.165670 0.014103 168.078000
7 0.010000 0.010006 0.000001
8 0.035100 153.682000 163.082000
9 0.012982 0.016500 0.056087
And here's a mask that selects only values between the 25th and 75th
percentiles. Note that the syntax for this is somewhat precise, so be careful
with the parentheses and such.
( df > df.quantile(.25) ) & ( df < df.quantile(.75) )
v1 v2 v3
0 True True True
1 True True True
2 False False False
3 True False False
4 False False True
5 False False False
6 False True False
7 False False False
8 False False False
9 True True True
This is column-based, btw. I just glanced quickly at your code and couldn't
easily tell if the percentile measures were intended per-column or for the
combination of the 3 columns. For the whole dataframe you can do:
( df > df.stack().quantile(.25) ) & ( df < df.stack().quantile(.75) )
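To actually drop the outliers rather than just flag them, apply the mask and
remove the rows that contain a `NaN`; a sketch using the per-column version:

mask = (df > df.quantile(.25)) & (df < df.quantile(.75))
filtered = df[mask].dropna()  # out-of-range cells become NaN; dropna removes those rows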
|
Difference between .py and .app script on Google App Engine
Question: According to [Google App Engine
doc](https://cloud.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Script_handlers),
the script handler can call three types of Python scripts to handle the
request match by the URL pattern.
> A script: directive can contain either a file path ending in .py (meaning
> that the script uses CGI), or a Python module path, with package names
> separated by dots (meaning that the script uses WSGI). The last component of
> a script: directive using a Python module path is the name of a global
> variable in the module: that variable must be a WSGI app, and is usually
> called app by convention.
What is the difference between these three types and their use cases?
Answer: > A `script:` directive can contain either a **file** path ending in `.py`
> (meaning that the script uses CGI), or a **Python module** path, with
> package names separated by dots (meaning that the script uses WSGI). The
> last component of a `script:` directive using a **Python module** path is
> the name of a global variable in the module: that variable must be a WSGI
> app, and is usually called `app` by convention.
>
> **Note** : just like for a Python `import` statement, each subdirectory that
> is a package must contain a file named `__init__.py`.
There are actually only two methods of referencing the Python script. First, a
**file** path, e.g., `/home/tsr/myscript.py`. Second, a **Python module**
path, e.g., `mypackage.mymodule`.
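In `app.yaml`, the two forms might look like this (the URL patterns and names
here are illustrative):

handlers:
- url: /cgi/.*
  script: myscript.py # file path ending in .py, handled as CGI
- url: /.*
  script: mypackage.mymodule.app # module path to a global WSGI app object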
See [Python's documentation on
packages](https://docs.python.org/3/tutorial/modules.html#packages) for more
information.
|
How to modify XLSX column formatting with Python
Question: I have hundreds of XLSX files which all have columns containing long numeric
account numbers. I need to automatically convert all of these files to CSV.
This is trivial with tools like `ssconvert`. However, due to a ~~bug~~
[feature](http://superuser.com/questions/413226/why-does-excel-treat-long-
numeric-strings-as-scientific-notation-even-after-chan) in Excel and
Libreoffice, long numeric fields will be displayed using scientific notation
and this formatted number (not the underlying data) will be preserved if
exported to CSV.
This means that any automated conversion to CSV will truncate account numbers,
since the value `1240800388917` will be written to the CSV as 1.2408E+12 or
1240800000000, causing data corruption.
This is easy to fix by manually opening the Excel file and setting these
columns to "text" format. However, it's a bit tedious to do this for hundreds
of files, especially because many of these files have strange macros and
formatting that make Libreoffice take several minutes to open each one
(another reason why I'd like to convert them all to CSV in the first place).
What's the easiest way to use Python to automatically open each file and
change an entire column's formatting to "text"? I see plenty of Python
examples with how to read XLS/XLSX files, and in some cases write them, but I
can find few guides on manipulating a column's default formatting.
Answer: Took me some trial and error and digging around in the code, but the solution
turned out to be trivial.
from openpyxl import load_workbook
wb = load_workbook('myfile.xlsx')
ws = wb.active
col_index = 0  # zero-based index of the account-number column; adjust as needed
for row in ws.rows:
    row[col_index].number_format = row[col_index].style.number_format = '@'  # '@' is the text format
wb.save('myfile-fixed.xlsx')
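If the end goal is CSV anyway, an alternative sketch is to skip the display
formatting entirely and write the raw cell values out yourself (Python 2
style, to match the rest of the thread):

import csv
from openpyxl import load_workbook

wb = load_workbook('myfile.xlsx', data_only=True)  # data_only=True reads stored values, not formulas
ws = wb.active
with open('myfile.csv', 'wb') as f:  # binary mode for Python 2's csv module
    writer = csv.writer(f)
    for row in ws.rows:
        writer.writerow([cell.value for cell in row])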
|
How to sum elements of a list corresponding to elements of another list in python
Question: I have two lists:
a=[25,23,18,28]
and
b=[1,2,2,3]
I want to sum the corresponding values in `a` to similar values in `b`, so it
would look like this:
return_a=(25,41,28)
return_b=(1,2,3)
Sorry for the confusion. Stealing JPeroutek's clarification: It looks to me
like he wants only unique values to exist in `return_b`. The values in `a`
correspond to those in `b`. Wherever you have a duplicate in `b`, you sum the
corresponding `a` values.
Nathan Bartley's answer worked for me.
Answer: A good way to do this would be to use a dictionary. The logic is very much
like JPeroutek describes. You go through list `b`, store the corresponding
number from list `a`, and whenever you encounter a value in `b` you've already
seen, you add the new number from `a` to it. You might try something like this
to generate it:
res = {}
for ix in xrange(len(b)):
    cur_b = b[ix] # grab the next number in b
    cur_a = a[ix] # grab the corresponding number in a
    try: # if we've seen cur_b before then we can add cur_a to it
        res[cur_b] += cur_a
    except KeyError: # otherwise we've never seen cur_b before so we set it to cur_a
        res[cur_b] = cur_a
In case the try & except doesn't make sense you can rewrite those four lines
to look like this
if cur_b in res: # this asks if cur_b is in the set of keys of res
    res[cur_b] += cur_a
else:
    res[cur_b] = cur_a
This will result in a dictionary that looks like the following:
{1: 25, 2: 41, 3: 28}
It's important to note that the dictionary may not preserve the order that you
want. For example:
b = [3, 3, 2, 1]
a = [12, 4, 6, 6]
would result in
{1: 6, 2: 6, 3: 16}
If ordering is important, this would pose a problem with the next step.
* * *
You can split the dictionary into ret_a and ret_b by messing with the result
of
res.items()
for instance:
ret_a = [t[1] for t in res.items()]
ret_b = [t[0] for t in res.items()]
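A tidier sketch of the same idea, using `collections.defaultdict` and sorting
the keys to keep a predictable order:

from collections import defaultdict

a = [25, 23, 18, 28]
b = [1, 2, 2, 3]

res = defaultdict(int)
for key, val in zip(b, a):  # pair each b value with its a value
    res[key] += val

return_b, return_a = zip(*sorted(res.items()))
print return_a  # (25, 41, 28)
print return_b  # (1, 2, 3)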
|
Datetime and Timestamp equality in Python and Pandas
Question: I've been playing around with datetimes and timestamps, and I've come across
something that I can't understand.
import pandas as pd
import datetime
year_month = pd.DataFrame({'year':[2001,2002,2003], 'month':[1,2,3]})
year_month['date'] = [datetime.datetime.strptime(str(y) + str(m) + '1', '%Y%m%d') for y,m in zip(year_month['year'], year_month['month'])]
>>> year_month
month year date
0 1 2001 2001-01-01
1 2 2002 2002-02-01
2 3 2003 2003-03-01
I think the unique function is doing something to the timestamps that is
changing them somehow:
first_date = year_month['date'].unique()[0]
>>> first_date == year_month['date'][0]
False
In fact:
>>> year_month['date'].unique()
array(['2000-12-31T16:00:00.000000000-0800',
'2002-01-31T16:00:00.000000000-0800',
'2003-02-28T16:00:00.000000000-0800'], dtype='datetime64[ns]')
My suspicions are that there is some sort of timezone difference underneath
the functions, but I can't figure it out.
**EDIT**
I just checked the python commands list(set()) as an alternative to the unique
function, and that works. This must be a quirk of the unique() function.
Answer: You have to convert to datetime64 to compare:
In [12]:
first_date == year_month['date'][0].to_datetime64()
Out[12]:
True
This is because `unique` has converted the dtype to `datetime64`:
In [6]:
first_date = year_month['date'].unique()[0]
first_date
Out[6]:
numpy.datetime64('2001-01-01T00:00:00.000000000+0000')
I think this is because `unique` returns a numpy array, and there is currently
no numpy dtype for pandas `Timestamp`: [Converting between datetime,
Timestamp and
datetime64](http://stackoverflow.com/questions/13703720/converting-between-
datetime-timestamp-and-datetime64)
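The comparison also works the other way around, by lifting the numpy value
back into a pandas `Timestamp`:

In [13]:
pd.Timestamp(first_date) == year_month['date'][0]
Out[13]:
True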
|
PyQt error “QProcess: Destroyed while process is still running”
Question: When I try to run the following PyQt code for running processes and tmux, I
encounter the error `QProcess: Destroyed while process is still running.` How
can I fix this?
#!/usr/bin/env python
#-*- coding:utf-8 -*-
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *

class embeddedTerminal(QWidget):

    def __init__(self):
        QWidget.__init__(self)
        self._processes = []
        self.resize(800, 600)
        self.terminal = QWidget(self)
        layout = QVBoxLayout(self)
        layout.addWidget(self.terminal)
        self._start_process(
            'xterm',
            ['-into', str(self.terminal.winId()),
             '-e', 'tmux', 'new', '-s', 'my_session']
        )
        button = QPushButton('list files')
        layout.addWidget(button)
        button.clicked.connect(self._list_files)

    def _start_process(self, prog, args):
        child = QProcess()
        self._processes.append(child)
        child.start(prog, args)

    def _list_files(self):
        self._start_process(
            'tmux', ['send-keys', '-t', 'my_session:0', 'ls', 'Enter']
        )

if __name__ == "__main__":
    app = QApplication(sys.argv)
    main = embeddedTerminal()
    main.show()
Answer: You usually get the error `QProcess: Destroyed while process is still running`
when the application closes and the process hadn't finished.
In your current code, your application ends as soon as it starts, because you
didn't call `app.exec_()`. You should do something like:
if __name__ == "__main__":
    app = QApplication(sys.argv)
    main = embeddedTerminal()
    main.show()
    sys.exit(app.exec_())
Now, it works fine, but when you close the application you will still get the
error message. You need to override the close event to end the process
properly. This works, given you replace `child` by `self.child`:
def closeEvent(self, event):
    self.child.terminate()
    self.child.waitForFinished()
    event.accept()
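Since the question's code keeps every child in `self._processes`, an
equivalent sketch terminates them all on close instead of tracking a single
`self.child`:

def closeEvent(self, event):
    for child in self._processes:  # end every spawned process
        child.terminate()
        child.waitForFinished()
    event.accept()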
|
Decoding error in paths using nltk.corpus.gutenberg.fileids()
Question: When I run `nltk.corpus.gutenberg.fileids()` with Python 2.7 (Anaconda,
Windows) I get the following error:
File "C:\Anaconda\lib\ntpath.py", line 85, in join
result_path = result_path + '\\'
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 9:
ordinal not in range(128)
I don't have this error when I use Python 3.4. Maybe I'm wrong, but I suspect
the path contains an accent (as there is an accent in my Windows username).
When I add some `print` calls in `ntpath.py`, nothing is printed (I don't know
why), so I'm unable to debug it by myself.
**EDIT:** The `import nltk` is enough to get the error.
Answer: I'm guessing Python 2 nltk has some issues with non-ASCII paths. Using Python
3 is probably the simplest fix here, at least assuming you don't have too much
code that doesn't work in it. It's hard to say for sure, since you didn't
include the full traceback, but likely nltk would have to be patched to fix
this for Python 2. Otherwise, you would need to avoid paths with non-ASCII
characters (meaning avoiding your user directory or changing your username).
|
pyclbr cannot import classes from another directory?
Question: This works fine if A.py and B.py are in the same directory
# module.py
class A(object):
    pass

class B(A):
    pass

# module2.py
import module

class C(module.B):
    pass
that works fine:
Python 2.7.8 |Continuum Analytics, Inc.| (default, Jul 2 2014, 15:13:35) [MSC v.1500 32 bit (Intel)] on win32
>>> import pyclbr
>>> m = pyclbr.readmodule( "module2")
>>> m.items()[0][1].super
[<pyclbr.Class instance at 0x01EE61E8>]
but if I put module.py in, say, the foo directory and instead have:
# module2.py
import foo.module

class C(foo.module.B):
    pass
pyclbr cannot parse foo.module.B:
Python 2.7.8 |Continuum Analytics, Inc.| (default, Jul 2 2014, 15:13:35) [MSC v.1500 32 bit (Intel)] on win32
>>> import pyclbr
>>> m = pyclbr.readmodule( "module2")
>>> m.items()[0][1].super
['foo.module.B']
Answer: That's because you must create an empty `__init__.py` file in each directory
where a Python module is placed, i.e.:
.
├── main.py
├── foo
| ├── __init__.py
| └── module2.py
So you import and read it like this:

# main.py
import pyclbr

m = pyclbr.readmodule("foo.module2")  # readmodule takes a module name, not a class path
Or pass second parameter to [pyclbr.readmodule(module,
path=None)](https://docs.python.org/2/library/pyclbr.html):
paths_lookup = ["foo", "other/path"]
pyclbr.readmodule("module2", paths_lookup)
When you pass a list of paths to `pyclbr.readmodule`, Python merges those with
`sys.path`.
Anyway, just remember to create a `__init__.py` for each module directory.
|
Correct way to create /etc/machine-id from python
Question: I want to create systemd's `/etc/machine-id` from a python script instead of
using `systemd-machine-id-setup`. I am preparing a rootfs and I don't want to
depend on systemd being installed on the host.
My current code looks like this:
from uuid import uuid4
f = open("machine-id", "w")
f.write(uuid4().hex + "\n")
f.close()
Is this the correct way of doing it? Is it the correct byteorder? Did I miss
anything?
If needed, please provide correct code and/or references.
Answer: Yes. See the
[Documentation](http://www.freedesktop.org/software/systemd/man/machine-
id.html). It's a text file, so byte order is not relevant here. Also see the code
that handles/generates it: The `make_v4_uuid` function and the place where it
is called from at line 221 in
<https://github.com/systemd/systemd/blob/master/src/libsystemd/sd-id128/sd-
id128.c>
Although personally I would make it:
from uuid import uuid4

with open("machine-id", "w") as f:
    f.write(uuid4().hex + "\n")
|