Convert any JSON to name/value pairs
Question: I have a source JSON that can be in any potential format. For storage &
processing purposes, I'd like to save this data in 2 column format.
For example, I'd like the following JSON:
"record" {
"name1": "value1",
"name2": "value2",
"parameters": {
"param": {},
"paramSet": {
"items": [{
"id": "id1"
}, {
"id": "id2"
}]
}
}
}
To be converted to the following CSV-like format:
record:name1 , "value1"
record:name2 , "value2"
record:parameters:param , ""
record:parameters:paramSet:items#0:id , "id1"
record:parameters:paramSet:items#1:id , "id2"
My questions are:
1. Is there a formal name for this transformation (so that I can search better).
2. Is there a standard or convention for representing JSON in a 2-column format like this?
3. Are there any libraries in Python that can do this for me?
4. Are there libraries on other major programming languages that will make it easier to implement this?
Thanks in advance.
Answer: First I made the JSON valid:
{
"record": {
"name1": "value1",
"name2": "value2",
"parameters": {
"param": {},
"paramSet": {
"items": [
{
"id": "id1"
},
{
"id": "id2"
}
]
}
}
}
}
Next, some code that recursively walks the JSON and prints each leaf:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
class Thing(object):
data = ''
output = []
def __init__(self, file_name):
with open(file_name) as data_file:
self.data = json.load(data_file)
def mock(self):
for (i, item) in enumerate(self.data):
if type(self.data[item]) == dict:
self._recursive(self.data[item], item)
for (i, data) in enumerate(self.output):
print(data)
def _recursive(self, request_data, path):
for (i, item) in enumerate(request_data):
if type(request_data[item]) == dict:
if len(request_data[item]) > 0:
path2 = "{}:{}".format(path, item)
self._recursive(request_data[item], path2)
else:
self.output.append("{}:{}, \"\"".format(path, item))
elif type(request_data[item]) == list:
for (j, list_item) in enumerate(request_data[item]):
path2 = "{}:{}#{}".format(path, item, j)
self._recursive(request_data[item][j], path2)
else:
self.output.append("{}:{}, {}".format(path, item, request_data[item]))
thing = Thing("input.json")
thing.mock()
The code above will output:
record:name1, value1
record:name2, value2
record:parameters:paramSet:items#0:id, id1
record:parameters:paramSet:items#1:id, id2
record:parameters:param, ""
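For the record, this transformation is commonly called *flattening* a JSON object. A more compact sketch using a recursive generator, quoting values the way the question's desired output does (the function and variable names here are my own, not from the original code):

```python
def flatten(obj, path=""):
    """Yield (path, value) pairs for every leaf of a decoded JSON object."""
    if isinstance(obj, dict):
        if not obj:
            yield path, ""  # empty object -> empty value, as in the question
        for key, value in obj.items():
            new_path = "{}:{}".format(path, key) if path else key
            for pair in flatten(value, new_path):
                yield pair
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            for pair in flatten(item, "{}#{}".format(path, i)):
                yield pair
    else:
        yield path, obj

data = {"record": {"name1": "value1",
                   "parameters": {"param": {},
                                  "paramSet": {"items": [{"id": "id1"},
                                                         {"id": "id2"}]}}}}
for path, value in flatten(data):
    print('{} , "{}"'.format(path, value))
```

The generator form avoids the shared class-level `output` list and works the same for nested lists and dicts at any depth.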
|
Python callback issue with GPIO on raspberry
Question: I'm an absolute Python newbie and this is my first Raspberry Pi project. I'm trying to
build a simple music player in which each input button loads a different
album (8 albums), plus 3 buttons to control playback (next, pause, last).
To load the music I use a USB drive which, as soon as it is connected,
automatically triggers the copy process.
The buttons are debounced with a callback function. Everything works great,
except that after new music is loaded from the USB drive the buttons don't
work anymore.
Most likely it is a simple programming issue which I, as a beginner, just
don't see.
This is the code to work with two buttons:
#!/usr/bin/env python
import RPi.GPIO as GPIO
import os
import pyudev
from time import sleep
from mpd import MPDClient
from socket import error as SocketError
# Configure MPD connection settings
HOST = 'localhost'
PORT = '6600'
CON_ID = {'host':HOST, 'port':PORT}
#Configure Buttons
Button1 = 25
Button2 = 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(Button1, GPIO.IN)
GPIO.setup(Button2, GPIO.IN)
client = MPDClient()
#Function to check if USB is connected
def checkForUSBDevice(name):
res = ""
context = pyudev.Context()
for device in context.list_devices(subsystem='block', DEVTYPE='partition'):
if device.get('ID_FS_LABEL') == name:
res = device.device_node
return res
#Function to load music from USB
def loadMusic(client, con_id, device):
os.system("mount "+device+" /music/usb")
os.system("/etc/init.d/mpd stop")
os.system("rm -r /music/mp3/*")
os.system("cp -r /music/usb/* /music/mp3/")
os.system("umount /music/usb")
os.system("rm /music/mpd/tag_cache")
os.system("/etc/init.d/mpd start")
os.system("mpc clear")
os.system("mpc ls | mpc add")
os.system("/etc/init.d/mpd restart")
#Function to connect to MPD
def mpdConnect(client, con_id):
try:
client.connect(**con_id)
except SocketError:
return False
return True
#Function to load an Album
def loadAlbum(number):
mpdConnect(client, CON_ID)
if client.status()["state"] == "play" or client.status()["state"] == "pause": client.stop()
os.system("mpc clear")
os.system("mpc ls "+str(number)+" | mpc add")
client.play()
client.disconnect()
#Callback Function
def buttonPressed(channel):
if channel == Button1:
print('Button 1 HIT')
loadAlbum(1)
elif channel == Button2:
print('Button 2 HIT')
loadAlbum(2)
def main():
GPIO.add_event_detect(Button1, GPIO.RISING, callback=buttonPressed, bouncetime=200)
GPIO.add_event_detect(Button2, GPIO.RISING, callback=buttonPressed, bouncetime=200)
# This function just creates an endless loop which does
# nothing, in order for the button detection to work
try:
flag = 0
while flag == 0:
device = checkForUSBDevice("MUSIC") # MUSIC is the name of my thumb drive
if flag == 1:
flag = 0
else:
flag = 0
if device != "":
# USB thumb drive has been inserted, new music will be copied
print('USB erkannt, Musik wird kopiert.', device)
loadMusic(client, CON_ID, device)
print('Musik wurde kopiert, USB kann entfernt werden!', device)
while checkForUSBDevice("MUSIC") == device:
sleep(1.0)
print('USB wurde entfernt.')
loadAlbum(1)
except KeyboardInterrupt:
GPIO.cleanup()
if __name__ == "__main__":
main()
I hope someone can help me with this.
Matthias
Answer: Here is what did the trick for me. It's probably not the best solution, but it
seems to work. Only the main function was changed; changes are
highlighted with a comment at the beginning of the line.
def main():
GPIO.add_event_detect(Button1, GPIO.RISING, callback=buttonPressed, bouncetime=200)
GPIO.add_event_detect(Button2, GPIO.RISING, callback=buttonPressed, bouncetime=200)
# This function just creates an endless loop which does
# nothing, in order for the button detection to work
try:
flag = 0
while flag == 0:
device = checkForUSBDevice("MUSIC") # MUSIC is the name of my thumb drive
if flag == 1:
flag = 0
else:
flag = 0
if device != "":
# USB thumb drive has been inserted, new music will be copied
print('USB erkannt, Musik wird kopiert.', device)
# Stop the callback before loading the files from the USB:
GPIO.remove_event_detect(Button1)
GPIO.remove_event_detect(Button2)
loadMusic(client, CON_ID, device)
print('Musik wurde kopiert, USB kann entfernt werden!', device)
while checkForUSBDevice("MUSIC") == device:
sleep(1.0)
print('USB wurde entfernt.')
loadAlbum(1)
# Recall the main function
main()
except KeyboardInterrupt:
GPIO.cleanup()
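A related approach, sketched here without any hardware dependencies, is to suppress the callback with a flag during the copy instead of removing and re-registering it (the flag and the handler below are illustrative stand-ins, not the original GPIO code):

```python
import threading

busy = threading.Event()  # set while the USB copy is running

def button_pressed(channel):
    """Stand-in for the GPIO callback; ignores presses while busy."""
    if busy.is_set():
        return None  # press ignored during the copy
    return "button %d handled" % channel

busy.set()                  # loadMusic() would run here
print(button_pressed(1))    # prints None: the press is ignored
busy.clear()                # copy finished
print(button_pressed(1))    # handled again
```

This avoids the recursive `main()` call, which deepens the call stack a little each time music is loaded.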
|
Progress dots with a Thread in Python
Question: I am trying to create a thread in Python that will poll a server until it
gets a proper answer (HTTP GET). To provide a convenient text UI
I want to print progress dots: another dot with every connection attempt until
it finishes (or just another dot for every second of waiting).
I have found something like this:
<http://code.activestate.com/recipes/535141-console-progress-dots-using-threads-and-a-context-/>
In this example we have context manager:
with Ticker("A test"):
time.sleep(10)
I am not sure if I understand that properly. I would like to do something
like:
with Ticker("A test: "):
result = -1
while result != 0:
result = poll_server()
print "Finished."
But this does not work. Any ideas?
Cheers
Answer: Python buffers your output, so many dots will appear at once. One way around
that is to `import sys` and, whenever you want to print a dot, write:
sys.stdout.write(".")
sys.stdout.flush()
The flush makes the dot appear immediately.
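For the polling loop itself, a minimal ticker thread along the lines of the ActiveState recipe could look like this (a sketch, not the recipe's actual code; the `poll_server()` retry loop is stood in for by a short `sleep`):

```python
import sys
import threading
import time

class Ticker(threading.Thread):
    """Background thread that prints a dot until stopped."""
    def __init__(self, message):
        threading.Thread.__init__(self)
        self.daemon = True
        self.message = message
        self._done = threading.Event()

    def run(self):
        sys.stdout.write(self.message)
        sys.stdout.flush()
        while not self._done.is_set():
            sys.stdout.write(".")
            sys.stdout.flush()     # flush so each dot appears immediately
            self._done.wait(1.0)   # one dot per second of waiting

    def stop(self):
        self._done.set()
        self.join()

ticker = Ticker("A test: ")
ticker.start()
time.sleep(0.3)   # stand-in for: while result != 0: result = poll_server()
ticker.stop()
print(" Finished.")
```

Calling `stop()` from the main thread replaces the context-manager exit; the polling loop runs in the foreground while the ticker prints in the background.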
|
Caffe install getting ImportError: DLL load failed: The specified module could not be found
Question: I am trying to compile and run the snippets posted
[here](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb),
which basically is going to let me visualize the network internals(feature
maps).
I have successfully compiled `caffe` and `pycaffe` using the `caffe-windows`
branch, and I have copied the caffe folder into the `T:\Anaconda\Lib\site-packages`
folder. Yet still, when I try to run this snippet of code in a Jupyter
notebook:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root = 'TC:/Caffe/' # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
import os
if not os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
print("Downloading pre-trained CaffeNet model...")
!../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
I get the following error :
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-e7a8ec94e861> in <module>()
8 sys.path.insert(0, caffe_root + 'python')
9
---> 10 import caffe
L:\Anaconda2\lib\site-packages\caffe\__init__.py in <module>()
----> 1 from .pycaffe import Net, SGDSolver
2 from ._caffe import set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver
3 from .proto.caffe_pb2 import TRAIN, TEST
4 from .classifier import Classifier
5 from .detector import Detector
L:\Anaconda2\lib\site-packages\caffe\pycaffe.py in <module>()
11 import numpy as np
12
---> 13 from ._caffe import Net, SGDSolver
14 import caffe.io
15
ImportError: DLL load failed: The specified module could not be found.
Whats wrong here?
Notes:
I'm using `Anaconda2-2.4.1-Windows-x86_64.exe`
Answer: There's most likely a more specific dependency issue you are not seeing
(Protobuf / OpenCV). First try using the [C++
API](http://caffe.berkeleyvision.org/gathered/examples/cpp_classification.html)
to load an example and make sure all the DLL's load. Then you can more
confidently narrow things down to the Python side. I recommend the more recent
windows caffe instructions based off the branch you're using:
<https://initialneil.wordpress.com/2015/01/11/build-caffe-in-windows-with-visual-studio-2013-cuda-6-5-opencv-2-4-9/>
I had to do a complete rebuild as detailed above (note that some dependencies
are easier to find with NuGet). Also be on the lookout for the right protobuf
binaries in various 3rdParty.zip files throughout the above blog.
If you are okay with a snapshot version of Caffe and you don't need to modify
the project itself, the following binaries are _much_ easier to install and
get working:
<https://initialneil.wordpress.com/2015/07/15/caffe-vs2013-opencv-in-windows-tutorial-i/>
|
Decorate class that has no self in method signature in Python
Question: I am trying to apply decorator dynamically to classes. It works if I have a
class method including self in method signature.
**Working example:**
from functools import wraps
def debug(func):
@wraps(func)
def wrapper(*args, **kwargs):
print('awesome')
f = func(*args, **kwargs)
return f
return wrapper
def debugclass(cls):
# cls is a class
for key, val in vars(cls).items():
if callable(val):
setattr(cls, key, debug(val))
return cls
class Dude:
def test(self):
#def test(): # this does not work
pass
debugclass(Dude)
dude = Dude()
dude.test()
**How could I change Dude class method signature so that it would work without
self being part of signature?**
class Dude:
def test(): # without self
pass
debugclass(Dude)
dude = Dude()
dude.test()
**Getting error:**
Traceback (most recent call last):
File "withoutself.py", line 33, in <module>
dude.test()
File "withoutself.py", line 7, in wrapper
f = func(*args, **kwargs)
TypeError: test() takes no arguments (1 given)
Answer: For your `test()` method to be callable without a `self` or `cls` parameter
you need to make it a `staticmethod`.
class Dude:
@staticmethod
def test():
pass
Then you also need to update `debugclass` to wrap `staticmethod` objects as
they aren't callables. Unfortunately you'll need a different way to wrap the
staticmethod objects:
def debugclass(cls):
# cls is a class
for key, val in vars(cls).items():
if callable(val):
setattr(cls, key, debug(val))
elif isinstance(val, staticmethod):
setattr(cls, key, staticmethod(debug(val.__func__)))
return cls
>>> class Dude:
def test(self):
pass
@staticmethod
def test1():
pass
>>> debugclass(Dude)
<class __main__.Dude at 0x7ff731842f58>
>>> Dude().test()
awesome
>>> Dude.test1()
awesome
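If the class also uses `classmethod`, the same trick applies; a self-contained sketch (checking the descriptor types before the generic `callable` test, since whether `staticmethod`/`classmethod` objects answer `callable()` varies across Python versions):

```python
from functools import wraps

def debug(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print('awesome')
        return func(*args, **kwargs)
    return wrapper

def debugclass(cls):
    # list() so setattr during iteration is safe on Python 3
    for key, val in list(vars(cls).items()):
        if isinstance(val, staticmethod):
            setattr(cls, key, staticmethod(debug(val.__func__)))
        elif isinstance(val, classmethod):
            setattr(cls, key, classmethod(debug(val.__func__)))
        elif callable(val):
            setattr(cls, key, debug(val))
    return cls

class Dude(object):
    @classmethod
    def make(cls):
        return cls()

debugclass(Dude)
print(Dude.make())  # prints 'awesome', then the new Dude instance
```

The `__func__` attribute unwraps the underlying function of both descriptor types before wrapping, then the descriptor is rebuilt around the wrapped function.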
|
Cannot import cv2 in python in OSX
Question: I have installed OpenCV 3.1 in my Mac, cv2 is also installed through `pip
install cv2`.
vinllen@ $ pip install cv2
You are using pip version 7.1.0, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already satisfied (use --upgrade to upgrade): cv2 in /usr/local/lib/python2.7/site-packages
But it looks like `cv2` and `cv` cannot be used:
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv2
>>> import cv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv
I have tried almost all the solutions list online, but cannot work.
Answer: **I do not know what `pip install cv2` actually installs... but it is surely _not_
OpenCV.** `pip install cv2` actually installs
[this](https://pypi.python.org/pypi/cv2/1.0), described as some _blog
distribution utilities_; not sure what it is, but it is **not** OpenCV.
* * *
To properly install OpenCV, check any of the links @udit043 added in the
comments, or refer to any of the tutorials below:
Find here a tutorial on how to install OpenCV on OS X:
<http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/>
You need to actually compile OpenCV from source and activate the Python bindings,
which takes a while.
Another option is to use `brew` to install OpenCV, but that doesn't necessarily get
you the latest version, nor a fully optimized one:
<http://www.mobileway.net/2015/02/14/install-opencv-for-python-on-mac-os-x/>
|
Why doesn't this code work? *Complete Noob*
Question: I recently started learning Python, and I wanted to create a program that will
show me all the new releases from AllMusic, but it doesn't work. I'm sorry,
but I'm a complete noob. At first I want to just see the artist:
import requests
from bs4 import BeautifulSoup
def new_releases():
url = "http://allmusic.com/newreleases"
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for div in soup.findAll('div', {'class': 'artist'}):
for a in div.findAll('a'):
artist = a.string
print(artist)
new_releases()
What am I doing wrong? I don't get any errors; it just doesn't work, for
whatever reason.
Answer: Actually, your code is fine. But the site you are trying to grab prevents you
from doing that. You'd find that out if you printed your soup this way:
`print(soup)`.
To avoid that you may specify a User Agent (see
[wiki](https://en.wikipedia.org/wiki/User_agent)) header in your
`requests.get`:
source_code = requests.get(url, headers=headers)
where `headers` is a dictionary like this:
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/31.0'
}
Now it'll be working.
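If the script grows to make several requests, a `requests.Session` keeps the header in one place (a sketch; no request is actually sent here):

```python
import requests

session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/31.0'
})
# Every request made through this session now carries the header, e.g.:
# source_code = session.get("http://allmusic.com/newreleases")
print(session.headers['User-Agent'])
```

The session also reuses the underlying TCP connection across requests, which helps when crawling many pages from the same site.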
|
Python unittest on PyCharm sometimes passes without changing any code in SUT
Question: The problem is the first of its kind I have come across. I have a class
and the corresponding unit test. The test passes/fails randomly without any
change to the class under test. I mean, I press **Shift+F10** and immediately
press it again: one run passes, the other fails.
This is the class (it looks kinda dirty, though. Forgive me)
class XmlSerializer:
def __init__(self, int_repr = 'int', str_repr = 'str'):
"""@int_repr: integer type representation (default: 'int')
@str_repr : string type representation (default: 'str')"""
self.xml_result = []
self.__int_repr = int_repr
self.__str_repr = str_repr
def serialize(self, element):
self.xml_result = []
self.__recurse2(element, indent='')
return ''.join(self.xml_result)
def __recurse2(self, element, indent):
if isinstance(element, int):
self.xml_result += indent + '\t<' + self.__int_repr + '>' + str(element) + '</' + self.__int_repr + '>\n'
if isinstance(element, str):
self.xml_result += indent + '\t<' + self.__str_repr + '>' + str(element) + '</' + self.__str_repr + '>\n'
elif type(element) in [type(list()), type(tuple()), type(dict())]:
for el in element:
self.__recurse2(el, indent + '\t')
else:
try: # Attribute names are printed only here
attrs = vars(element)
self.xml_result += indent + '<' + element.__class__.__name__ + '>\n'
for attr in attrs:
self.xml_result += indent + '\t' + '<'+ attr +'>\n'
self.__recurse2(attrs[attr], indent + '\t')
self.xml_result += indent + '\t' + '</'+ attr +'>\n'
self.xml_result += indent + '</' + element.__class__.__name__ + '>\n'
except Exception as ex:
pass
And here is the Test Class (the whole content of the file)
import unittest
from string_and_regex.xml_stuff import XmlSerializer
class Person:
def __init__(self):
self.name = "Sam"
self.age = 26
class Group:
def __init__(self):
self.name = 'sample object'
self.people = [Person(), Person()]
group_serialized = '<Group>\n' \
'\t<name>\n' \
'\t\t<str>sample object</str>\n' \
'\t</name>\n' \
'\t<people>\n' \
'\t\t<Person>\n' \
'\t\t\t<name>\n' \
'\t\t\t\t<str>Sam</str>\n' \
'\t\t\t</name>\n' \
'\t\t\t<age>\n' \
'\t\t\t\t<int>26</int>\n' \
'\t\t\t</age>\n' \
'\t\t</Person>\n' \
'\t\t<Person>\n' \
'\t\t\t<name>\n' \
'\t\t\t\t<str>Sam</str>\n' \
'\t\t\t</name>\n' \
'\t\t\t<age>\n' \
'\t\t\t\t<int>26</int>\n' \
'\t\t\t</age>\n' \
'\t\t</Person>\n' \
'\t</people>\n' \
"</Group>\n"
class TestXmlSerializer(unittest.TestCase):
def test_serialize(self):
serializer = XmlSerializer()
xml_result = serializer.serialize(Group())
self.assertEquals(group_serialized, xml_result)
if __name__ == '__main__':
unittest.main()
(Also forgive me for the test case I come up with, I know)
Answer: Dicts have no predictable iteration order. When you iterate over
`vars(element)`:
attrs = vars(element)
self.xml_result += indent + '<' + element.__class__.__name__ + '>\n'
for attr in attrs:
Each run of your test will iterate in its own arbitrary order. This might be
fairly consistent, or randomized, or it might behave in pretty much any way,
depending on specifics of the interpreter implementation and settings. You're
probably on Python 3, in which string hash codes are randomized.
Don't depend on dict iteration order.
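One minimal fix for the serializer is to iterate over the attribute names in sorted order, which makes the output deterministic (a sketch using a simplified `Person`):

```python
class Person(object):
    def __init__(self):
        self.name = "Sam"
        self.age = 26

attrs = vars(Person())
# sorted() pins the order regardless of how the dict happens to iterate
for attr in sorted(attrs):
    print("<{0}>{1}</{0}>".format(attr, attrs[attr]))
```

Note this changes the expected string in the test as well: `age` would now always come before `name`.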
|
sending ethernet data along with ARP protocol to automobile gateway in Python
Question: I have a Python program in which an array of bytes is sent to activate
a protocol feature in the automobile gateway. Basically I am building a
lower-level ISO-OSI structure which sends an Ethernet-layer packet along
with the ARP protocol structure. I have connected the Raspberry Pi 2 to a
gateway via an Ethernet LAN cable. I am using an API on the Raspberry Pi to run the
code. I am using Python 3.
#start of program
from socket import *
def sendeth(ethernet_packet,payload,interface = "eth0"):
"""Sending RAW Ethernet packets with ARP protocol."""
s= socket(AF_PACKET, SOCK_RAW)
s.bind((interface,0))
return s.send(ethernet_packet + payload)
def pack(byte_sequence):
"""convert list of bytes to byte string"""
return b"".join(map(chr, byte_sequence))
if __name__ == "__main__":
#desadd,srcadd,ethtype
ethernet_packet= [0x00, 0x36, 0xf8, 0x00, 0x5b, 0xed, 0xb8, 0x27, 0xcb, 0x8c, 0x1c, 0xf9, 0x08, 0x06]
#arpprotocol
arp_packet = [0x00, 0x01, 0x08, 0x00, 0x06, 0x04, 0x00, 0x01, 0xf0, 0x1f, 0xaf, 0x57, 0x33, 0xb1, 0xa9, 0xfe, 0x00, 0x14, 0x00, 0x36, 0xf8, 0x00, 0x5b, 0xed, 0xa9, 0xfe, 0x3f, 0x29]
payload = "".join(map(chr, arp_packet))
r = sendeth(pack(ethernet_packet),
pack(arp_packet))
    print("sent Ethernet with ARP packet; payload length is %d bytes" % r)
When I run the program in my raspberry PI
$sudo python3 APR.py
it throws an error,
Traceback (most recent call last):
  File "APR.py", line 24, in <module>
    r = sendeth(pack(ethernet_packet),
  File "APR.py", line 13, in pack
    return b"".join(map(chr, byte_sequence))
TypeError: sequence item 0: expected bytes, bytearray, or an object with the buffer interface, str found
I have searched Google and Wikipedia for this error,
but wasn't able to find anything. I am just a month old in Python, so any leads or
help on this issue would be helpful and handy.
_Probable cause_
Since I am using Python 3, it throws an error, whereas in Python 2 it works
perfectly fine. Maybe in Python 3 I am not able to send the hex values as
strings? If this is the problem, can someone help me overcome the
error?
Answer: Your pack() function does not work, as the `chr` function returns `str`, not
a byte string. See [chr() equivalent returning a bytes object, in py3k](http://stackoverflow.com/questions/4523505/chr-equivalent-returning-a-bytes-object-in-py3k) for possible solutions.
In your case, the easiest solution is to use `bytearray` directly:
r = sendeth(bytearray(ethernet_packet),
            bytearray(arp_packet))
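To see why this works: in Python 3, `bytes()` and `bytearray()` accept an iterable of ints directly, so the `chr`-join in `pack()` isn't needed at all (a quick sketch):

```python
packet = [0x08, 0x06, 0x00, 0x01]
# Python 3: bytes() accepts an iterable of ints in range(256)
payload = bytes(packet)
assert payload == b"\x08\x06\x00\x01"
```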
|
Problems crawling wordreference
Question: I am trying to crawl `wordreference`, but I am not succeeding.
The first problem I have encountered is that a big part of the page is loaded via
`JavaScript`, but that shouldn't be much of a problem because I can see what I need
in the source code.
So, for example, I want to extract for a given word, the first two meanings,
so in this url:
`http://www.wordreference.com/es/translation.asp?tranword=crane` I need to
extract `grulla` and `grúa`.
This is my code:
import lxml.html as lh
import urllib2
url = 'http://www.wordreference.com/es/translation.asp?tranword=crane'
doc = lh.parse((urllib2.urlopen(url)))
trans = doc.xpath('//td[@class="ToWrd"]/text()')
for i in trans:
print i
The result is that I get an empty list.
I have tried to crawl it with scrapy too, with no success. I am not sure what is
going on; the only way I have been able to crawl it is using `curl`, but that
is sloppy. I want to do it in an elegant way, with Python.
Thank you very much.
Answer: It looks like you need a `User-Agent` header to be sent, see [Changing user
agent on urllib2.urlopen](http://stackoverflow.com/questions/802134/changing-user-agent-on-urllib2-urlopen).
Also, just switching to [`requests`](http://docs.python-requests.org/en/latest/) would do the trick (it automatically sends a
`python-requests/version` User-Agent by default):
import lxml.html as lh
import requests
url = 'http://www.wordreference.com/es/translation.asp?tranword=crane'
response = requests.get(url)
doc = lh.fromstring(response.content)
trans = doc.xpath('//td[@class="ToWrd"]/text()')
for i in trans:
print(i)
Prints:
grulla
grúa
plataforma
...
grulla blanca
grulla trompetera
|
Django - RuntimeError: populate() isn't reentrant
Question: I'm trying to move my Django project to a freshly installed production server (a virtual
machine running Ubuntu Server 14.04 LTS x64).
All I have done so far is install the project's requirements.txt (no Apache server
installed, nor a MySQL server).
When I try to run `manage.py runserver` I get this error:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 354, in execute
django.setup()
File "/usr/local/lib/python3.4/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.4/site-packages/django/apps/registry.py", line 78, in populate
raise RuntimeError("populate() isn't reentrant")
RuntimeError: populate() isn't reentrant
I have no idea how to trace the source of this issue, and if you need any
information let me know.
requirements.txt:
amqp==1.4.9
anyjson==0.3.3
billiard==3.3.0.22
celery==3.1.19
dj-database-url==0.3.0
dj-static==0.0.6
Django==1.7.1
django-appconf==0.6
django-celery==3.1.17
django-compressor==1.4
django-discover-runner==1.0
django-role-permissions==0.6.2
djangorestframework==3.0.0
drf-nested-routers==0.11.1
gunicorn==19.1.1
kombu==3.0.33
mysql-connector-python==2.1.3
pytz==2015.7
six==1.8.0
static3==0.5.1
Answer: After trying to run `manage.py` alone, I got this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/django/db/utils.py", line 108, in load_backend
return import_module('%s.base' % backend_name)
File "/usr/local/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2224, in _find_and_load_unlocked
ImportError: No module named 'mysql'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 345, in execute
settings.INSTALLED_APPS
File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 46, in __getattr__
self._setup(name)
File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 42, in _setup
self._wrapped = Settings(settings_module)
File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 94, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/local/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/soufiaane/django_projects/CapValue/CapValue/__init__.py", line 5, in <module>
from .celery_settings import app as celery_app # noqa
File "/home/soufiaane/django_projects/CapValue/CapValue/celery_settings.py", line 9, in <module>
django.setup()
File "/usr/local/lib/python3.4/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.4/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/usr/local/lib/python3.4/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.4/site-packages/django/contrib/auth/models.py", line 40, in <module>
class Permission(models.Model):
File "/usr/local/lib/python3.4/site-packages/django/db/models/base.py", line 124, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "/usr/local/lib/python3.4/site-packages/django/db/models/base.py", line 299, in add_to_class
value.contribute_to_class(cls, name)
File "/usr/local/lib/python3.4/site-packages/django/db/models/options.py", line 166, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/usr/local/lib/python3.4/site-packages/django/db/__init__.py", line 40, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/lib/python3.4/site-packages/django/db/utils.py", line 242, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/lib/python3.4/site-packages/django/db/utils.py", line 126, in load_backend
raise ImproperlyConfigured(error_msg)
django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql_psycopg2', 'sqlite3'
Error was: No module named 'mysql'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 351, in execute
settings.configure()
File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 56, in configure
raise RuntimeError('Settings already configured.')
RuntimeError: Settings already configured.
This clearly indicates that I have an issue with MySQL database support.
After struggling with the mysql-connector-python package I was using in my
development version, I couldn't get it working on my Linux server due to some
compatibility issues,
so I ended up installing the mysqlclient package instead, and it worked like a charm!
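With `mysqlclient` installed, the stock Django backend can be used; a settings sketch (all values are placeholders, not from the actual project):

```python
# settings.py (placeholder values)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # served by mysqlclient
        'NAME': 'mydb',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```

The key point versus the failing setup is the `ENGINE` value: `django.db.backends.mysql` is one of the built-in backends the error message lists, whereas `mysql.connector.django` is a third-party backend that has to be importable to work.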
|
Python time.sleep() not sleeping
Question: I'm coding a game, and the fight function seems to be tripping me up. Here's
the combat snippet of my code:
def combat(player, enemy, dun):
print("\n"*100 + "A " + enemy.name + " has attacked you!")
while player.health > 0 and enemy.health > 0:
print(enemy.name[:1].upper() + enemy.name[1:], "health:", enemy.health)
print("Your health:", player.health)
cmd = input(">")
if cmd == "attack":
enemy.health -= player.atk(enemy)
if cmd == "run":
coin = random.choice(["heads", "tails"])
if coin == "heads":
break
else:
print("You couldn't escape.")
if cmd == "equip":
target = input("Which item?\n>")
print(player.equip(target))
player.health -= enemy.atk(player)
if enemy.health <= 0:
print("You defeated the", enemy.name + "!")
if enemy.drop != None:
return "Enemy defeated.\nThe " + enemy.name + " dropped a " + enemy.drop + "!"
dun.data[dun.pos][2].append(enemy.drop)
else:
return "Enemy defeated."
time.sleep(1.5)
out = 1
elif player.health <= 0:
print("You died fighting %s..." % enemy.name)
dun.pos == (0, 0)
player.inventory == []
return "You reawaken in the same room you started in, pack empty..."
time.sleep(3)
I have imported the random module and the time module; they're just not in the
snippet.
When the sequence ends, it does not sleep and goes right away into the main
game loop (which I can share if needed).
Apart from the code being messy, what am I doing wrong?
Answer: The `return` statement immediately exits your function. Any code after it
_will not run_. You can `sleep` before your `return`:
example:
time.sleep(1.5)
return "Enemy defeated."
But it probably makes a lot more sense to just sleep after you call your
function:
combat(...)
time.sleep(5)
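A minimal, self-contained sketch of why the original `time.sleep` calls never ran: anything placed after a `return` statement is unreachable, so the function comes back immediately.

```python
import time

def combat_result():
    return "Enemy defeated."
    time.sleep(3)  # unreachable: return has already exited the function

start = time.time()
result = combat_result()
elapsed = time.time() - start
print(result)        # Enemy defeated.
print(elapsed < 1)   # True -- no sleep happened
```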
|
How can I use raw_input to receive input from stdin in python 2.7?
Question: To receive input from stdin in python 2.7, I typically `import sys` and use
`sys.stdin`. However, I have seen examples where `raw_input` is used to
receive input from stdin, including multi-line input. How exactly can I use
raw_input in place of sys.stdin? Here is an example problem:
input.txt:
Print
me
out
And I am running this command:
cat input.txt | python script.py
What can I put in script.py such that it will print out all lines of input
using `raw_input`?
Answer: You can do something like this:
while True:
try:
print raw_input()
except EOFError:
break
`raw_input` will only return single lines from stdin, and throws `EOFError`
when EOF is read.
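For comparison, the `sys.stdin` route does the same job without catching `EOFError`, since iterating over a file-like object simply stops at EOF. A Python 3 sketch, using `io.StringIO` to stand in for the pipe (in the real `script.py` you would pass `sys.stdin`):

```python
import io

def echo_lines(stream):
    # Iterating a file-like object yields lines (newline included)
    # and stops at EOF -- no exception handling needed.
    return [line.rstrip("\n") for line in stream]

# Simulates `cat input.txt | python script.py`
fake_stdin = io.StringIO("Print\nme\nout\n")
print(echo_lines(fake_stdin))  # ['Print', 'me', 'out']
```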
|
Parallel Port access through python
Question: I am trying to run the "parallel" package on my 64-bit system.
import parallel
But I am getting this error. I think this is some DLL problem, but I don't know
which DLL I need or where to put it.
My_Work\Signal_Generator_GUI\with_parallel.py", line 1, in <module>
import parallel
File "C:\Python27\lib\site-packages\parallel\__init__.py", line 13, in <module>
from parallel.parallelwin32 import Parallel # noqa
File "C:\Python27\lib\site-packages\parallel\parallelwin32.py", line 60, in <module>
_pyparallel = ctypes.windll.simpleio
File "C:\Python27\lib\ctypes\__init__.py", line 435, in __getattr__
dll = self._dlltype(name)
File "C:\Python27\lib\ctypes\__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 193] %1 is not a valid Win32 application
Answer: You need to install **simpleio.dll**. Here is a [giveio
installer](https://sourceforge.net/projects/pyserial/files/pyparallel/giveio/).
Also, please take a look at this [thread](http://forums.devshed.com/python-
programming-11/pyparallel-161278.html) for same problem and examples as well.
Finally, here is a [github
page](https://github.com/pyserial/pyparallel/issues/7) discussing the same
issue.
|
Storing code inside a dictionary
Question: I'm getting into Python, and thought of writing my own script which allows me
to check arguments provided when a program is run. An example below of what
I'm trying to achieve:
`python file.py -v -h anotherfile.py`
or
`./file.py -v -h anotherfile.py`
In these two cases, the `-v` and `-h` arguments, print out the module version,
and a basic help file. I already have the code to differentiate between
arguments and files, except I want to create a generalised module on the
matter.
The following code written in Java-
// Somewhere.
public static HashMap<String, Runnable> args = new HashMap<String, Runnable>();
public void addArgument(String argument, Runnable command) {
if (argument.length() > 0) {
if (args.get(argument) == null) {
args.put(argument, command);
} else {
System.err.println("Cannot add argument: " + argument + " to HashMap as the mapping already exists.");
// Recover.
}
}
}
// Somewhere else.
foo.addArgument("-v", () -> {System.out.println("version 1.0");});
foo.args.get("-v").run();
-will run the Lambda Expressions _(at least that's what I read they were called when researching the topic)_ successfully. I have no idea how Lambda Expressions work, however, and have only basic knowledge of using them.
**The point of this question, is how can I implement something like the Java
example, in Python, storing any type of code inside of an array?**
The thing is with the Java example though, if I have `int i = 0;` defined in
the class which executes `addArgument` and uses `i` somehow or rather, the
class containing `addArgument` knows to use `i` from the one that invoked it.
I'm worried that this may not be the same case for Python...
I want to be able to store them in dictionaries, or some other sort of key-
based array, so I can store them in the following manner:
# Very rough example on the latter argument, showing what I'm after.
addoption("-v", print("version 1.0"))
**EDIT: Example of what I want: (not working as is) (please ignore the ;'s)**
args = {};
def add(argument, command):
args[argument] = lambda: command; # Same problem when removing 'lambda:'
def run():
for arg in args:
arg(); # Causing problems.
def prnt():
print("test");
add("-v", prnt);
run();
Answer: EDIT: to correct your most recent code, iterating over a dictionary yields its
keys not its values, so you need:
args = {};
def add(argument, command):
args[argument] = command;
def run():
for arg in args:
args[arg]() # <<<<<<<<<<
def prnt():
print("test");
add("-v", prnt);
run();
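For what it's worth, the Java `addArgument("-v", () -> ...)` maps directly onto a Python `lambda`. A sketch of the same dictionary-of-callables idea, including the duplicate-key guard from the Java version:

```python
args = {}

def add(argument, command):
    # Mirror the Java version: non-empty key, no overwriting
    if argument and argument not in args:
        args[argument] = command

# A zero-argument lambda plays the role of Java's () -> { ... }
add("-v", lambda: print("version 1.0"))
args["-v"]()  # prints: version 1.0
```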
For the example you give, a dictionary of functions, although possible, isn't
really the natural way to go about things in Python. The standard library has
a module, [`argparse`](https://docs.python.org/3/library/argparse.html)
specifically for dealing with all things command-line argument related, and
rather than using keys to a dictionary you can parse the command line
arguments and refer to their stored constants.
import argparse
def output_version():
print('Version 1.0!')
parser = argparse.ArgumentParser(description="A program to do something.")
parser.add_argument('-v', help='Output version number.', action='store_true')
args = parser.parse_args()
if args.v:
output_version()
(In fact outputting a version string is also handled natively by `argparse`:
see [here](https://docs.python.org/3/library/argparse.html#action)).
To print out the contents of a text file, `myfile.txt` when the `-p` switch is
given:
import argparse
def print_file(filename):
with open(filename) as fi:
print(fi.read())
parser = argparse.ArgumentParser(description="A program to do something.")
parser.add_argument('-p', help='Print a plain file.', action='store')
args = parser.parse_args()
if args.p:
print_file(args.p)
Use with e.g.
$ prog.py -p the_filename.txt
|
Python DateTime if statement behaves different in Azure.(Django WebApp)
Question: So I'm writing a little Django web app. It uses JSON data from an API to render
everything. On localhost, everything runs fine. But on Azure it does not.
The issue is somewhere in this code:
for appointment in appointmentsMaandag:
starttijd = (datetime.datetime.fromtimestamp(appointment['start'])).strftime('%H%M')
SuMa = 0
if 800 <= int(starttijd) < 850:
SuMa = 0
elif 850 <= int(starttijd) < 940:
SuMa = 1
elif 940 <= int(starttijd) < 1050:
SuMa = 2
elif 1050 <= int(starttijd) < 1140:
SuMa = 3
elif 1140 <= int(starttijd) < 1240:
SuMa = 4
elif 1240 <= int(starttijd) < 1350:
SuMa = 5
elif 1350 <= int(starttijd) < 1440:
SuMa = 6
elif 1440 <= int(starttijd) < 1530:
SuMa = 7
else:
SuMa = 8
break
In Azure, this always takes the `else` branch, so `SuMa = 8`. On
localhost it does work. Since I have no experience whatsoever with Azure, I
was wondering if any of you could help me.
I use VS 2015 with Python Tools.
Answer: It is most likely a timezone issue. When parsing a timestamp into a datetime
object, Python needs a timezone setting, and by default it uses the
system timezone. The timezone on Azure services is always
**America/Los_Angeles**.
So you need to set your local timezone in your code, e.g.:
import pytz
localtz = pytz.timezone('Asia/Hong_Kong')
starttijd = (datetime.datetime.fromtimestamp(appointment['start'],tz=localtz)).strftime('%H%M')
print starttijd
...
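A self-contained Python 3 illustration of the effect, using the standard library's `datetime.timezone` instead of `pytz` (the UTC+8 offset is illustrative): the same epoch second formats into a different `%H%M` slot depending on the zone, which is exactly how every appointment can land in the `else` bucket.

```python
import datetime

ts = 1451606400  # 2016-01-01 00:00:00 UTC
utc = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
# The same timestamp rendered in a UTC+8 zone lands in a different slot
hk = utc.astimezone(datetime.timezone(datetime.timedelta(hours=8)))
print(utc.strftime('%H%M'), hk.strftime('%H%M'))  # 0000 0800
```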
|
If x is between y and z, print, no output with python csv
Question: I'm trying to print the first column of a CSV row when an input address falls
between the values in the second and third columns.
Here is my current code:
from libnmap.parser import NmapParser
import csv
from netaddr import *
ip = raw_input("Enter an IP: ")
addr = (int(IPAddress(ip)))
with open('DHCPranges.csv', 'rb') as csvfile:
reader = csv.DictReader(csvfile, delimiter=',')
for row in reader:
if addr >= (int(IPAddress(row['start_address*'])) and addr <= (int(IPAddress(row['end_address*'])))):
print (row['Network'])
When I input the IP, I don't get a print out. I have verified that the addr
variable is working properly, so the if statement must be wrong somehow.
The csv is created like so, with commas:
Network,start_address*,end_address*
Matt's network,10.0.0.1,10.0.1.100
Chris's network,10.0.1.102,10.0.2.100
Answer: You specify `delimiter=','`, but are there really comma delimiters in your file?
What is the actual delimiter? A tab? More than one space in the same place?
Try printing `row.keys()` inside the loop, before the `if` line. If the delimiter
doesn't match the file, it will give a single string instead of three keys.
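One more thing worth checking, offered as a hypothesis independent of the delimiter question: the parentheses in the original `if` actually group the expression as `addr >= (start and addr <= end)`, which is not a range test at all. Python's chained comparisons express the intent directly:

```python
def in_range(addr, start, end):
    # Chained comparison: True only when start <= addr <= end
    return start <= addr <= end

print(in_range(5, 1, 10))   # True
print(in_range(11, 1, 10))  # False
```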
|
How to extract raw html from a Scrapy selector?
Question: I'm extracting JS data using `response.xpath('//*').re_first()` and later
converting it to Python native data. The problem is the extract/re methods don't
seem to provide a way to avoid unquoting the HTML, i.e.
original html:
{my_fields:['O&#39;Connor Park'], }
extract output:
{my_fields:['O'Connor Park'], }
turning this output into json won't work.
What's the easiest way around it?
Answer: **Short answer:**
* Scrapy/Parsel selectors' `.re()` and `.re_first()` methods replace HTML entities (except `&lt;`, `&amp;`)
* instead, use `.extract()` or `.extract_first()` to get raw HTML (or raw JavaScript instructions) and use Python's `re` module on extracted string
**Long answer:**
Let's look at an example input and various ways of extracting Javascript data
from HTML.
Sample HTML:
<html lang="en">
<body>
<div>
<script type="text/javascript">
var i = {a:['O&#39;Connor Park']}
</script>
</div>
</body>
</html>
Using scrapy Selector, which is using the
[parsel](https://github.com/scrapy/parsel) library underneath, you have
several ways of extracting the Javascript snippet:
>>> import scrapy
>>> t = """<html lang="en">
... <body>
... <div>
... <script type="text/javascript">
... var i = {a:['O&#39;Connor Park']}
... </script>
...
... </div>
... </body>
... </html>
... """
>>> selector = scrapy.Selector(text=t, type="html")
>>>
>>> # extracting the <script> element as raw HTML
>>> selector.xpath('//div/script').extract_first()
u'<script type="text/javascript">\n var i = {a:[\'O&#39;Connor Park\']}\n </script>'
>>>
>>> # only getting the text node inside the <script> element
>>> selector.xpath('//div/script/text()').extract_first()
u"\n var i = {a:['O&#39;Connor Park']}\n "
>>>
Now, using `.re` (or `.re_first`) you get a different result:
>>> # I'm using a very simple "catch-all" regex
>>> # you are probably using a regex to extract
>>> # that specific "O'Connor Park" string
>>> selector.xpath('//div/script/text()').re_first('.+')
u" var i = {a:['O'Connor Park']}"
>>>
>>> # .re() on the element itself, one needs to handle newlines
>>> selector.xpath('//div/script').re_first('.+')
u'<script type="text/javascript">' # only first line extracted
>>> import re
>>> selector.xpath('//div/script').re_first(re.compile('.+', re.DOTALL))
u'<script type="text/javascript">\n var i = {a:[\'O\'Connor Park\']}\n </script>'
>>>
The HTML entity `&#39;` has been replaced by an
[apostrophe](https://en.wikipedia.org/wiki/Apostrophe#Unicode). This is due to
a
[`w3lib.html.replace_entities()`](https://w3lib.readthedocs.org/en/latest/w3lib.html#w3lib.html.remove_entities)
call in `.re/re_first` implementation (see `parsel` source code, in
[`extract_regex`](https://github.com/scrapy/parsel/blob/master/parsel/utils.py#L59)
function), which is not used when simply calling `extract()` or
`extract_first()`
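Putting the short answer into practice: take the raw text as `.extract_first()` would return it, then run Python's own `re` module on it, so the `&#39;` entity survives (the string below stands in for the extracted text):

```python
import re

# Raw text as .extract_first() would return it, entity intact
raw = "\n        var i = {a:['O&#39;Connor Park']}\n    "
match = re.search(r"\{.+\}", raw)
print(match.group(0))  # {a:['O&#39;Connor Park']}
```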
|
Python UnicodeDecodeError exception
Question:
import random

txt = input("which text file do you want to use?")
fil = open(txt, "r")
spelare=[]
resultat=[]
bästnamn=None
bästkast=0
for line in fil:
kolumn=line.split()
kolumn1=len(kolumn[1])
kolumn2=len(kolumn[2])
if len(kolumn)<5:
mu=float(kolumn[1])
sigma=float(kolumn[2])
#print(mu,sigma)
#kast=random.normalvariate(mu,sigma)
#print(kast)
for r in range(0,6):
kast=random.normalvariate(mu,sigma)
resultat.append(kast)
if max(resultat)>bästkast:
bästkast=max(resultat)
bästnamn=kolumn[0]
print("Winner:", bästnamn, "who threw", bästkast, "meters")
When I running the program I get this error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 85: ordinal not in range(128)
What am I doing wrong?
Thank you so much, and yes I have Python 3. With the change you suggested to
import the codecs module, I now get another error instead:
`ValueError: max() arg is an empty sequence`
What should I do? Any suggestions?
Answer: As far as I know, you have to use English letters in order for Python to be able
to read it, so you need to change these variables:
bästnamn
bästkast
**Edit:** `max()` gives the maximum of a list; in the string case it will
break the string into letters and give you back the letter latest in the alphabet, like so:
myNumList = [1, 2, 4, 5, 8, 50, 10]
print max(myNumList)  # result 50
myString = 'tree'
print max(myString)  # result 't'
myMixList = [1, 2, 4, 'Apple', '20']
print max(myMixList)  # result 'Apple'
**Note:** it won't work on a single `integer` (you will get an error); you have to
pass it a list.
Your code is fine; the document you are feeding it might just have some
mistakes in it. I see your application expects at least 3 columns and ignores
anything past the 3rd. The file type doesn't really matter, but I believe
your document should be like so:
[String/number] [Number] [Number]
example of your document should be like so:
String1 158 212
String2 584 795
1234567 845 356
String4 356 489
String5 876 215
String6 985 853
String7 111 809
String8 234 058
**Note:** if you have a string in the second or third column, that could cause the
error: a string can't be converted to `float`, and `random.normalvariate` won't
take a string.
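For what it's worth, the original `UnicodeDecodeError` usually disappears when the file is opened with an explicit encoding rather than the platform default. A sketch, assuming the data is UTF-8 (the file name and contents here are made up):

```python
import io
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "kast.txt")  # hypothetical file
with io.open(path, "w", encoding="utf-8") as f:
    f.write(u"bästnamn 5.0 1.0\n")

# Opening with an explicit encoding avoids the ascii-codec decode error
with io.open(path, "r", encoding="utf-8") as f:
    kolumn = f.read().split()
print(kolumn[0])  # bästnamn
```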
|
Django redirect() truncating URLs to domain name on production server using HTTPS
Question: I'm running Django 1.8, Python 2.7. I have noticed a problem on my site with
redirects which seems only to affect my production server using HTTPS
protocol, but not staging or dev servers running plain HTTP.
As examples, I've identified two simple things which do not currently work for
me over HTTPS:
1. Visiting <https://www.eduduck.com/support/thanks> with no trailing slash redirects to <https://www.eduduck.com/> instead of appending a slash and redirecting to <https://www.eduduck.com/support/thanks/> as I would have expected with (default settings) APPEND_SLASH = True.
2. Submitting a valid support form 'wrongly' redirects to <https://www.eduduck.com/> site base url when using HTTPS but works correctly with HTTP, redirecting to /support/thanks/
It's all routine stuff, or should be. Very perplexed by this; any pointers
most gratefully received. **NB: Problem only appears under HTTPS**
**support/urls.py**
from django.conf.urls import url
from django.views.generic import TemplateView
from . import views
urlpatterns = [
url(r'^$', views.support, name='support'),
url(
r'^thanks/$',
TemplateView.as_view(template_name ='support/thanks.html')),
]
**support/forms.py**
from django import forms
class SupportForm(forms.Form):
"""General request for support or contact"""
subject = forms.CharField(max_length = 100)
email = forms.EmailField(label='Your email')
message = forms.CharField(widget = forms.Textarea)
def clean_message(self):
"""Sensible messages cannot be too short"""
message = self.cleaned_data['message']
wc = len(message.split())
if wc < 4:
raise forms.ValidationError(
"Please muster four or more words of wisdom or woe!"
)
return message
**support/views.py**
from django.core.mail import send_mail
from django.http import HttpResponseRedirect
from django.shortcuts import render
from django.template import RequestContext
from .forms import SupportForm
import logging
logger = logging.getLogger(__name__)
def support(request):
"""Provide a support/contact email form"""
logger.info('Support view')
if request.method=="POST":
form = SupportForm(request.POST)
if form.is_valid():
cdata = form.cleaned_data
send_mail(cdata['subject'],
cdata['message'],
cdata.get('email', '[email protected]'),
['[email protected]'],
)
return HttpResponseRedirect('/support/thanks/')
else:
form = SupportForm()
return render(
request,
'support/support.html',
{'support_form': form},
)
Answer: So in the end, yes it was my nginx config. I had an old sites-enabled/default
config file, which I deleted. This made some progress, but I also had to edit
the config file for my production site. The diff for the final (working)
server block looks like this:
server {
- listen [::]:80 default_server;
+ listen 80 default_server;
server_name www.example.com;
return 301 https://$host$request_uri
}
As I mentioned in my earlier comment, despite reading the docs, I still don't
exactly know what is wrong with [::]:80 which I thought was for IPv6
compatibility.
|
Writing dataReceived (from Twisted) to a tkinter textbox
Question: Okay, so I'm sure this should be simpler than it is, but... basically, I
have a Twisted reactor listening on a specific port. I also have a tkinter
form containing a textbox. What I want to do is simply write the data received
to that textbox.
Below is what I have so far:
from twisted.web import proxy, http
from twisted.internet import reactor
from twisted.python import log
import ScrolledText
import sys
#Gui stuff
from Tkinter import *
import ttk
import Tkinter as tk
from ScrolledText import *
import tkMessageBox
from twisted.internet import tksupport
root = Tk()
class MyProxy(proxy.Proxy):
def dataReceived(self, data):
print data
textfield.delete(0,END)
textfield.insert(0, data)
textfield.pack()
return proxy.Proxy.dataReceived(self, data)
class ProxyFactory(http.HTTPFactory):
protocol=MyProxy
def guiLoop():
print "[+] Drawing GUI"
menubar = Menu(root)
connectMenu = Menu(menubar, tearoff=0)
connectMenu.add_command(label="Help")
connectMenu.add_command(label="About")
menubar.add_cascade(label="Proxy", menu=connectMenu)
root.minsize(300,300)
root.geometry("500x500")
root.textfield = ScrolledText()
root.textfield.pack()
#top = Tk()
root.title("Proxy")
root.config(menu=menubar)
tksupport.install(root)
def main():
#runReactor()
factory = ProxyFactory()
reactor.listenTCP(8080, factory)
reactor.callLater(0, guiLoop)
print "[+] Starting Reactor"
reactor.run()
if __name__ == "__main__":
main()
I've looked around, but there doesn't seem to be a clear way showing how to
write to a textbox from a different function.
Answer: There's nothing special -- you just need a reference to the widget, or call a
function that has a reference to the widget.
Since `root` is global, and the text widget is an attribute of `root`, you
should be able to do something like:
root.textfield.delete("1.0", "end")
root.textfield.insert("1.0", data)
|
Python access to Parent Directory
Question: I have a file in the directory
app
a
Ulil.py
b
main.py
I want to import Ulil.py (at app\a) into main.py (at app\b).
How do I go about doing this? I need to move the files around as well, so I
don't want to hard-code the entire path. I just want to be in app\b and access
app\a. The folder names stay the same.
Answer: First, create an empty file called `__init__.py` in **a**
Second, in **main.py** in **b** import it using
import sys
sys.path.insert( 0, '../' )
from a.Ulil import *
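Since the asker wants to move the files around, it may also help to compute the parent directory from the script's own location instead of hard-coding `'../'` (a sketch; in `main.py` you would pass `__file__`):

```python
import os

def package_root(script_path):
    # app/b/main.py -> app, so packages under app/ (like a/) become importable
    return os.path.dirname(os.path.dirname(os.path.abspath(script_path)))

print(package_root("/app/b/main.py"))  # /app
```

The result can then be inserted into `sys.path` exactly as in the answer above.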
|
Why does this loop gradually run slower?
Question: The following code is a simple Python loop.
def getBestWeightsByRandomGradientAscent(featureDatasList, classTypes, maxCycles=1):
"""
:param featureDatasList:
:param classTypes:
:param maxCycles: the loop time
:return:
"""
import random
featureDatas = np.array(featureDatasList)
m, n = np.shape(featureDatas)
weights = np.ones(n)
# the loop goes here... #
for j in range(maxCycles):
featureIndexs = range(m)
featureLen = len(featureIndexs)
for i in range(m):
delta = 4 / (1.0 + i + j) + 0.01
randIndex = int(random.uniform(0, featureLen))
sigmodInput = sum(featureDatas[randIndex] * weights)
estimateClass = calculateSigmodEstimateClassType(sigmodInput)
error = classTypes[randIndex] - estimateClass
weights += (error * delta) * featureDatas[randIndex]
del (featureIndexs[randIndex])
return weights
I find that when I run this loop 1000 or more times, it runs quickly at the
beginning, but it gets slower and slower as it runs, and finally settles at a slow
speed... Curiously, I don't know why. Is it caused by the scope of the variables or
a hardware problem? How can I fix this problem? Thanks a lot!
Answer: Consider using `xrange` instead of `range`. It may be memory pressure or the GC.
You can profile your code with something like
<http://www.vrplumber.com/programming/runsnakerun/>
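One plausible contributor, offered as an assumption since the snippet isn't fully self-consistent: `del` on a Python list is O(n), and it is performed m times per cycle. Shuffling the index list once gives the same sampling-without-replacement effect without any deletions:

```python
import random

m = 1000
indices = list(range(m))
random.shuffle(indices)  # one O(m) shuffle replaces m costly deletions

total = 0
for randIndex in indices:
    total += randIndex  # stand-in for the per-sample weight update

print(total == m * (m - 1) // 2)  # True -- every index visited exactly once
```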
|
django foreign key quotes error
Question: I have two models in two files that import one another. One of them is
connected to the other by a foreign key. To avoid a circular import, I am trying to
define the foreign key in quotes:
from pubscout.models import Campaign
class RuleSuite(models.Model):
campaign = models.ForeignKey('Campaign', verbose_name="Кампания")
this has worked before on other models, but this time I get an error:
...
...
File "/Users/1111/.virtualenvs/django_cpa/lib/python2.7/site-packages/django/contrib/admin/checks.py", line 719, in _check_list_filter_item
get_fields_from_path(model, field)
File "/Users/1111/.virtualenvs/django_cpa/lib/python2.7/site-packages/django/contrib/admin/utils.py", line 479, in get_fields_from_path
parent = get_model_from_relation(fields[-1])
File "/Users/1111/.virtualenvs/django_cpa/lib/python2.7/site-packages/django/contrib/admin/utils.py", line 430, in get_model_from_relation
return field.get_path_info()[-1].to_opts.model
File "/Users/1111/.virtualenvs/django_cpa/lib/python2.7/site-packages/django/db/models/fields/related.py", line 661, in get_path_info
opts = self.remote_field.model._meta
AttributeError: 'unicode' object has no attribute '_meta'
where should I look to fix it?
Answer: You need to qualify model name with application label:
campaign = models.ForeignKey('appname.Campaign', verbose_name="Кампания")
|
Difficulty serializing Geography column type using sqlalchemy marshmallow
Question: I am trying to use Marshmallow to deserialize and serialize SQLAlchemy
objects, but I run into a problem when dealing with Geography fields in the ORM.
Firstly, the model:
class Address(db.Model, TableColumnsBase):
__tablename__ = 'address'
addressLine1 = db.Column(String(255), nullable=True, info="Street address, company name, c/o")
addressLine2 = db.Column(String(255), nullable=True, info="Apartment, suite, unit, building floor, etc")
countryCode = db.Column(String(255), nullable=True, info="Country code such as AU, US etc")
suburb = db.Column(String(255), nullable=True, info="Users suburb such as Elizabeth Bay")
postcode = db.Column(String(32), nullable=True, info="Users postcode such as 2011 for Elizabeth Bay")
state = db.Column(String(64), info="State for user such as NSW")
user_presence = one_to_many('UserPresence', backref = 'user', lazy='select', cascade='all, delete-orphan')
location = Column(Geography(geometry_type='POINT', srid=4326))
discriminator = Column('type', String(50))
__mapper_args__ = {'polymorphic_on': discriminator}
def as_physical(self):
pa = { # TODO - stub, make it proper
"latitude": 2.1,
"longitude": 1.1,
"unitNumber": '1',
"streetNumber": self.addressLine1,
"streetName": self.addressLine2,
"streetType": 'street',
"suburb": self.suburb,
"postcode": self.postcode
}
return pa
Secondly the marshmallow schema:
class AddressSchema(ModelSchema):
class Meta:
model = Address
sqla_session = db.session
Then the error when trying to run:
/Users/james/.virtualenvs/trustmile-api-p2710/bin/python /Users/james/Documents/workspace/trustmile-backend/trustmile/tests/postgis_scratch.py
Traceback (most recent call last):
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/tests/postgis_scratch.py", line 1, in <module>
from app import db
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/__init__.py", line 54, in <module>
from app.api.consumer_v1 import bp as blueprint
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/api/__init__.py", line 4, in <module>
import courier_v1
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/api/courier_v1/__init__.py", line 5, in <module>
from .routes import routes
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/api/courier_v1/routes.py", line 10, in <module>
from .api.nearestNeighbours_latitude_longitude import NearestneighboursLatitudeLongitude
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/api/courier_v1/api/__init__.py", line 5, in <module>
from app.ops import requesthandler
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/ops/requesthandler.py", line 1, in <module>
from app.ops.consumer_operations import cons_ops, ConsumerOperationsFactory
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/ops/consumer_operations.py", line 19, in <module>
from app.users.serialize import ApplicationInstallationSchema, UserAddressSchema, ConsumerUserSchema, CourierUserSchema
File "/Users/james/Documents/workspace/trustmile-backend/trustmile/app/users/serialize.py", line 40, in <module>
class AddressSchema(ModelSchema):
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow/schema.py", line 116, in __new__
dict_cls=dict_cls
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/schema.py", line 57, in get_declared_fields
declared_fields = mcs.get_fields(converter, opts)
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/schema.py", line 90, in get_fields
include_fk=opts.include_fk,
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/convert.py", line 77, in fields_for_model
field = self.property2field(prop)
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/convert.py", line 95, in property2field
field_class = self._get_field_class_for_property(prop)
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/convert.py", line 153, in _get_field_class_for_property
field_cls = self._get_field_class_for_column(column)
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/convert.py", line 123, in _get_field_class_for_column
return self._get_field_class_for_data_type(column.type)
File "/Users/james/.virtualenvs/trustmile-api-p2710/lib/python2.7/site-packages/marshmallow_sqlalchemy/convert.py", line 145, in _get_field_class_for_data_type
'Could not find field column of type {0}.'.format(types[0]))
marshmallow_sqlalchemy.exceptions.ModelConversionError: Could not find field column of type <class 'geoalchemy2.types.Geography'>.
I've tried a few overrides on the location field, however, to no avail. Any help
much appreciated.
**Updated Jan 24 2016**
This is my code as it stands at the moment, with a simpler model:
The SQLAlchemyModel:
class Location(db.Model, TableColumnsBase):
__tablename__ = "location"
loc = db.Column(Geography(geometry_type='POINT', srid=4326))
The marshmallow_sqlalchemy schema object
class LocationSchema(ModelSchema):
loc = GeographySerializationField(attribute='loc')
class Meta:
model = Location
sqla_session = db.session
model_converter = GeoConverter
Other plumbing as per suggestions:
class GeoConverter(ModelConverter):
SQLA_TYPE_MAPPING = ModelConverter.SQLA_TYPE_MAPPING.copy()
SQLA_TYPE_MAPPING.update({
Geography: fields.Str
})
class GeographySerializationField(fields.Field):
def _serialize(self, value, attr, obj):
if value is None:
return value
else:
if isinstance(value, Geography):
return json.dumps({'latitude': db.session.scalar(geo_funcs.ST_X(value)), 'longitude': db.session.scalar(geo_funcs.ST_Y(value))})
else:
return None
def _deserialize(self, value, attr, data):
"""Deserialize value. Concrete :class:`Field` classes should implement this method.
:param value: The value to be deserialized.
:param str attr: The attribute/key in `data` to be deserialized.
:param dict data: The raw input data passed to the `Schema.load`.
:raise ValidationError: In case of formatting or validation failure.
:return: The deserialized value.
.. versionchanged:: 2.0.0
Added ``attr`` and ``data`` parameters.
"""
if value is None:
return value
else:
if isinstance(value, Geography):
return {'latitude': db.session.scalar(geo_funcs.ST_X(value)), 'longitude': db.session.scalar(geo_funcs.ST_Y(value))}
else:
return None
In running the code:
from app.users.serialize import *
from app.model.meta.schema import Location
l = LocationSchema()
loc = Location(27.685994, 85.317815)
r = l.load(loc)
print r
The result I get is:
UnmarshalResult(data={}, errors={u'_schema': [u'Invalid input type.']})
Answer: You can override ModelConverter class and specify custom mapping for your
geography field. See Jair Perrut's answer here [How to use marshmallow to
serialize a custom sqlalchemy
field?](https://stackoverflow.com/questions/33160762/how-to-use-marshmallow-
to-serialize-a-custom-sqlalchemy-
field/34864669#34864669?newreg=150e656b5a134fb6856878052ca459b0)
from marshmallow_sqlalchemy import ModelConverter
from marshmallow import fields
class GeoConverter(ModelConverter):
SQLA_TYPE_MAPPING = ModelConverter.SQLA_TYPE_MAPPING.copy()
SQLA_TYPE_MAPPING.update({
Geography: fields.Str
})
class Meta:
model = Address
sqla_session = session
model_converter = GeoConverter
|
What's the simplest and safest method to generate an API key and secret in Python?
Question: I need to generate an API key and secret that will be stored in a Redis
server. What would be the best way to generate the key and secret?
I am developing a Django-tastypie-framework-based app.
Answer: EDIT: for a very secure way of generating random numbers, you should use
urandom:
import os
from binascii import hexlify
key = hexlify(os.urandom(length))
this will produce bytes; call `key.decode()` if you need a string
You can just generate keys of your desired length the python way:
import random
import string
def generate_key(length):
    return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(length))
And then you can just call it with your desired length `key =
generate_key(40)`.
You can specify what alphabet you want to use, for example using only
`string.ascii_lowercase` for key consisting of only lowercase letters etc.
There is also a model for API authentication in tastypie; it might be worth
checking out <https://django-tastypie.readthedocs.org/en/latest/authentication.html#apikeyauthentication>
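On Python 3.6+ (newer than the tastypie-era code above), the standard library's `secrets` module wraps the same OS CSPRNG behind a friendlier API; the key lengths here are illustrative:

```python
import secrets

api_key = secrets.token_hex(20)        # 40 hex characters
api_secret = secrets.token_urlsafe(32) # URL-safe base64 text
print(len(api_key))  # 40
```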
|
Why Python numpy.delete does not raise IndexError when the out-of-bounds index is in an np array
Question: When using np.delete, an IndexError is raised when an out-of-bounds index is
used. When an out-of-bounds index is inside a np.array that is used as the argument
to np.delete, why doesn't this raise an IndexError too?
np.delete(np.array([0, 2, 4, 5, 6, 7, 8, 9]), 9)
this gives an index-error, as it should (index 9 is out of bounds)
while
np.delete(np.arange(0,5), np.array([9]))
and
np.delete(np.arange(0,5), (9,))
give:
array([0, 1, 2, 3, 4])
Answer: This is a known "feature"; out-of-bounds indices are deprecated here and will raise an error in later versions.
[From the source of
numpy](https://github.com/numpy/numpy/blob/v1.10.1/numpy/lib/function_base.py#L3867):
# Test if there are out of bound indices, this is deprecated
inside_bounds = (obj < N) & (obj >= -N)
if not inside_bounds.all():
# 2013-09-24, 1.9
warnings.warn(
"in the future out of bounds indices will raise an error "
"instead of being ignored by `numpy.delete`.",
DeprecationWarning)
obj = obj[inside_bounds]
Enabling DeprecationWarning in python actually shows this warning.
[Ref](http://stackoverflow.com/questions/20960110/how-do-i-get-warnings-warn-
to-issue-a-warning-and-not-ignore-the-line)
In [1]: import warnings
In [2]: warnings.simplefilter('always', DeprecationWarning)
In [3]: warnings.warn('test', DeprecationWarning)
C:\Users\u31492\AppData\Local\Continuum\Anaconda\Scripts\ipython-script.py:1: De
precationWarning: test
if __name__ == '__main__':
In [4]: import numpy as np
In [5]: np.delete(np.arange(0,5), np.array([9]))
C:\Users\u31492\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\lib\fun
ction_base.py:3869: DeprecationWarning: in the future out of bounds indices will
raise an error instead of being ignored by `numpy.delete`.
DeprecationWarning)
Out[5]: array([0, 1, 2, 3, 4])
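If you want the future behavior today, you can promote the deprecation warning to an error yourself. A sketch (on recent numpy versions the deprecated path has already been removed, so an `IndexError` is raised directly):

```python
import warnings
import numpy as np

# Treat the deprecation as a hard error, as future numpy will.
warnings.simplefilter('error', DeprecationWarning)
try:
    np.delete(np.arange(0, 5), np.array([9]))
    raised = False
except (DeprecationWarning, IndexError):
    # Older numpy raises the promoted DeprecationWarning; newer numpy
    # raises IndexError outright, the deprecated path having been removed.
    raised = True
```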
|
Python matplotlib histogram: edit x-axis based on maximum frequency in bin
Question: I am trying to make a series of histograms by looping through a series of
arrays containing values. For each array my script is producing a separate
histogram. Using the default settings, this results in histograms in which the
bar with the highest frequency touches the top of the graph ([this is what it
looks like now)](http://i.stack.imgur.com/xVZMc.png). I would like there to be
some space: [this is what I want it to look
like.](http://i.stack.imgur.com/k9n3V.png)
My question is: how do I make the maximum value of the y-axis depend on the
maximum frequency occurring in my bins? I want the y-axis to be slightly
longer than my longest bar.
I cannot do this by setting the value like so:
plt.axis([100, 350, 0, 5]) #[xmin, xmax, ymin, ymax]
or
matplotlib.pyplot.ylim(0,5)
because I am plotting a series of histograms, and the max frequencies strongly
vary.
My code now looks something like this:
import matplotlib.pyplot as plt
for LIST in LISTS:
plt.figure()
plt.hist(LIST)
plt.title('Title')
plt.xlabel("x-axis [unit]")
plt.ylabel("Frequency")
    plt.savefig('figures/LIST.png')
How do I define the y-axis to run from 0 to 1.1 * (the max frequency in 1
bin)?
Answer: If I understand correctly, this is what you are hoping to achieve?
import matplotlib.pyplot as plt
import numpy.random as nprnd
import numpy as np
LISTS = []
#Generate data
for _ in range(3):
LISTS.append(nprnd.randint(100, size=100))
#Find the maximum y value of every data set
maxYs = [i[0].max() for i in map(plt.hist,LISTS)]
print "maxYs:", maxYs
#Find the largest y
maxY = np.max(maxYs)
print "maxY:",maxY
for LIST in LISTS:
plt.figure()
#Set that as the ylim
plt.ylim(0,maxY)
plt.hist(LIST)
plt.title('Title')
plt.xlabel("x-axis [unit]")
plt.ylabel("Frequency")
    #Got rid of the save call
plt.show()
Produces the graphs with the largest y limit the same as maxY. Also some debug
output:
maxYs: [16.0, 13.0, 13.0]
maxY: 16.0
The function `plt.hist()` returns a tuple with the `x, y` data set. So you can
call `y.max()` to get the maximum for each set.
[Source.](http://stackoverflow.com/a/15558796/5782727)
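If instead you want each figure to get its own headroom of 1.1 × its tallest bin (as the question literally asks), `plt.hist` returns the bin counts, so you can set the limit per plot. A sketch:

```python
import matplotlib.pyplot as plt

def hist_with_headroom(values, pad=1.1):
    """Plot a histogram and leave pad * tallest-bin of space above it."""
    counts, bins, patches = plt.hist(values)
    plt.ylim(0, pad * counts.max())
    return counts

counts = hist_with_headroom([1, 1, 2, 3, 3, 3])
```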
|
Unable to Save Arabic Decoded Unicode to CSV File Using Python
Question: I am working with a twitter streaming package for python. I am currently using
a keyword that is written in unicode to search for tweets containing that
word. I am then using python to create a database csv file of the tweets.
However, I want to convert the tweets back to Arabic symbols when I save them
in the csv.
The errors I am receiving are all similar to "error on_data: the ASCII characters
in position ___ are not within the range of 128".
Here is my code:
class listener(StreamListener):
def on_data(self, data):
try:
#print data
tweet = (str((data.split(',"text":"')[1].split('","source')[0]))).encode('utf-8')
now = datetime.now()
tweetsymbols = tweet.encode('utf-8')
print tweetsymbols
saveThis = str(now) + ':::' + tweetsymbols.decode('utf-8')
saveFile = open('rawtwitterdata.csv','a')
saveFile.write(saveThis)
saveFile.write('\n')
saveFile.close()
return True
Answer: Here is a snippet to write Arabic text:
# coding=utf-8
import codecs
from datetime import datetime
class listener(object):
def on_data(self, tweetsymbols):
# python2
# tweetsymbols is str
# tweet = (str((data.split(',"text":"')[1].split('","source')[0]))).encode('utf-8')
now = datetime.now()
# work with unicode
saveThis = unicode(now) + ':::' + tweetsymbols.decode('utf-8')
try:
saveFile = codecs.open('rawtwitterdata.csv', 'a', encoding="utf8")
saveFile.write(saveThis)
saveFile.write('\n')
finally:
saveFile.close()
return self
listener().on_data("إعلان يونيو وبالرغم تم. المتحدة")
Everything you need to know about encoding: <https://pythonhosted.org/kitchen/unicode-
frustrations.html>
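On Python 3 the same write is simpler, since `str` is already unicode and the built-in `open` accepts an encoding. A sketch (the file name follows the question):

```python
from datetime import datetime

def save_tweet(text, path='rawtwitterdata.csv'):
    # Python 3: str is already unicode; open() encodes to UTF-8 on write.
    line = '{}:::{}\n'.format(datetime.now(), text)
    with open(path, 'a', encoding='utf8') as f:
        f.write(line)

save_tweet('إعلان يونيو وبالرغم تم. المتحدة')
```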
|
Adding new headers and splitting column
Question: I have a large `csv` with data that I have imported to python with pandas. The
first 3 rows of the `csv` look like the following.
“PATIENT”,"MD",“REFMD”,“DIAGNOSIS_HISTORY”,“AVAILABLE_STUDIES”
“patient1\nPID1\npAge1”,“MDname1\nMDname3”,” RefDoctorName1”,“Prostate cancer”,”No Path\n CT ClinicName (CAP) - 11/30/2015\n Nuclear: ClinicName (Bone Scan) - 11/30/2015"
"patient2\nPID2\npAge2”,”MDname2\nSeen 10/12/2015”,“RefDoctorName2”,”Prostate cancer”,”Path: O/S - Prostate Bx 11/12/2014”
I want to
* split the first `column` in 3 parts from “PATIENT” to “PATIENT_Name”, "PID", "pAGE" and
  * in the second column, remove the second MD if there is one, and add a new column "MD2" to collect the times when a patient saw more than one MD at the same clinic.
* Also, I want to split out the incidences of \nSeen Date in the MD column and place that in a new column titled "Date_Seen".
I have all the columns split out, but I'm having a hard time with the next step.
import pandas as pd
f = pd.read_csv("/path/file.csv")
pat = f.iloc[0:,:1]
refmd = f.iloc[0:,2:3]
diag = f.iloc[0:,3:4]
Answer: You could start with the following:
df.columns = [re.sub(r'[^A-Za-z0-9\\]+', '', c).strip() for c in df.columns]
for i, col in df.items():
df.loc[:, i] = col.str.replace(r'[^A-Za-z0-9\\ ]+', '').str.strip()
to get:
PATIENT MD REFMD \
0 patient1\nPID1\npAge1 MDname1\nMDname3 RefDoctorName1
1 patient2\nPID2\npAge2 MDname2\nSeen 10122015 RefDoctorName2
DIAGNOSISHISTORY AVAILABLESTUDIES
0 Prostate cancer No Path\n CT ClinicName CAP 11302015\n Nucle...
1 Prostate cancer Path OS Prostate Bx 11122014
To `split` and `expand` into new `columns` on the `newline` characters:
pat = df.iloc[:, 0].str.split(r'\\n', expand=True)
pat.columns = ['PATIENT_name', 'PID', 'pAGE']
PATIENT_name PID pAGE
0 patient1 PID1 pAge1
1 patient2 PID2 pAge2
and:
md = df.iloc[:, 1].str.split(r'\\n', expand=True)
md.columns = ['MD', 'MD2']
MD MD2
0 MDname1 MDname3
1 MDname2 Seen 10122015
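For the third bullet (pulling the `Seen <date>` entries into their own column), a sketch using `str.extract`; the frame below mimics the `md` frame built above, and the regex assumes the date always follows the literal word `Seen`:

```python
import pandas as pd

md = pd.DataFrame({'MD': ['MDname1', 'MDname2'],
                   'MD2': ['MDname3', 'Seen 10122015']})

# Pull the digits after 'Seen' into Date_Seen; rows without it get NaN.
md['Date_Seen'] = md['MD2'].str.extract(r'Seen\s*(\d+)', expand=False)
# Clear MD2 where it only held a Seen-date rather than a second doctor.
md.loc[md['MD2'].str.contains('Seen', na=False), 'MD2'] = None
```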
|
Removing not used resources in Android
Question: Android has recently added `Unused resource remover` functionality into
Android Studio but for some reasons it doesn't work (I asked a question regard
that [here](http://stackoverflow.com/q/34866226/513413)).
I found [android-resource-remover](https://github.com/KeepSafe/android-
resource-remover) in my search to find a replacement tool. I had a little bit
problem in installation but I was successful to install it at the end ([thanks
to CA77y](https://github.com/KeepSafe/android-resource-remover/issues/40)).
I have two questions now:
  1. Installing the Python virtual environment added a directory to the root of my project. Should I add it to the repo (so other developers can access it), or should I add it to `.gitignore` (if it contains machine-dependent configuration)?
  2. The important thing is that nothing happens when I run the following command: no resources are removed, regardless of whether I run it under the virtualenv or not. This is a screenshot of my output.
[](http://i.stack.imgur.com/Iybha.png)
So, any idea would be appreciated. Thanks.
Answer: For those who are looking for a solution, I just found an answer to my old
question; you can find it [here](http://stackoverflow.com/q/34866226/513413).
Therefore, there is no need to use this library, since the Android Studio way works fine.
|
Redshift: Serializable isolation violation on table
Question: I have a very large Redshift database that contains billions of rows of HTTP
request data.
I have a table called `requests` which has a few important fields:
* `ip_address`
* `city`
* `state`
* `country`
I have a Python process running once per day, which grabs all distinct rows
which have not yet been geocoded (do not have any city / state / country
information), and then attempts to geocode each IP address via Google's
Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
country, state, city = geocode_ip_address(ip_address)
execute_transaction('''
UPDATE requests
SET ip_country = %s, ip_state = %s, ip_city = %s
WHERE ip_address = %s
''')
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP
requests into my table, so when I attempt to execute my UPDATE statement, it
is unable to select _all_ rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that
will stop failing regularly?
Answer: Your code is violating the serializable isolation level of Redshift. You need
to make sure that your code is not trying to open multiple transactions on the
same table before closing all open transactions.
You can achieve this by locking the table in each transaction so that no other
transaction can access the table for updates until the open transaction gets
closed. Not sure how your code is architected (synchronous or asynchronous),
but this will increase the run time, since each lock forces the others to wait
until the open transaction completes.
Refer: <http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html>
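One way to apply this in the Python loop from the question is to wrap the UPDATE and an explicit `LOCK` in the same transaction. A sketch only (the table name follows the pseudocode, and the exact transaction syntax depends on your client library):

```python
def build_locked_update():
    # LOCK must run inside an explicit transaction; it blocks every other
    # writer on `requests` until END/COMMIT, so the UPDATE cannot form a
    # serializable-isolation cycle with concurrent loads.
    return (
        "BEGIN;\n"
        "LOCK requests;\n"
        "UPDATE requests\n"
        "   SET ip_country = %s, ip_state = %s, ip_city = %s\n"
        " WHERE ip_address = %s;\n"
        "END;"
    )

sql = build_locked_update()
```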
|
Custom Django field does not return Enum instances from query
Question: I have a simple custom field implemented to utilize Python 3 Enum instances.
Assigning enum instances to my model attribute, and saving to the database
works correctly. However, fetching model instances using a QuerySet results in
the enum attribute being a string, instead of the respective Enum instance.
How do I get the below `EnumField` to return valid `Enum` instances, rather
than strings?
fields.py:
from enum import Enum
from django.core.exceptions import ValidationError
from django.db import models
class EnumField(models.CharField):
description = 'Enum with strictly typed choices'
def __init__(self, enum_class, *args, **kwargs):
self._enum_class = enum_class
choices = []
for enum in self._enum_class:
title_case = enum.name.replace('_', ' ').title()
entry = (enum, title_case)
choices.append(entry)
kwargs['choices'] = choices
kwargs['blank'] = False # blank doesn't make sense for enum's
super().__init__(*args, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
args.insert(0, self._enum_class)
del kwargs['choices']
return name, path, args, kwargs
def from_db_values(self, value, expression, connection, context):
return self.to_python(value)
def to_python(self, value):
if value is None or isinstance(value, self._enum_class):
return value
else:
return self._parse_enum(value)
def _parse_enum(self, value):
try:
enum = self._enum_class[value]
except KeyError:
raise ValidationError("Invalid type '{}' for {}".format(
value, self._enum_class))
else:
return enum
def get_prep_value(self, value):
if value is None:
return None
elif isinstance(value, Enum):
return value.name
else:
msg = "'{}' must have type {}".format(
value, self._enum_class.__name__)
if self.null:
msg += ', or `None`'
raise TypeError(msg)
def get_choices(self, **kwargs):
kwargs['include_blank'] = False # Blank is not a valid option
choices = super().get_choices(**kwargs)
return choices
Answer: After a lot of digging, I was able to answer my own question:
`SubfieldBase` has been deprecated and will be removed in Django 1.10, which
is why I left it out of the implementation above. However, what it does is
still important. Adding the following method replaces the functionality that
`SubfieldBase` would have added.
def contribute_to_class(self, cls, name, **kwargs):
super(EnumField, self).contribute_to_class(cls, name, **kwargs)
setattr(cls, self.name, Creator(self))
The `Creator` descriptor is what calls `to_python` on attribute assignment. If this
didn't happen, queries on models would leave the `EnumField` fields in the
model instances as plain strings, instead of Enum instances like I wanted.
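The answer leaves `Creator` undefined; it is essentially the descriptor that `SubfieldBase` used to install on the model class. A minimal sketch of such a descriptor (this mirrors what Django's removed machinery did; the exact class is an assumption here):

```python
class Creator(object):
    """Descriptor that runs the field's to_python() on every assignment."""

    def __init__(self, field):
        self.field = field

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.field.name]

    def __set__(self, obj, value):
        # Convert on assignment, so values loaded from the DB (or set by
        # user code) always come back as the field's Python type.
        obj.__dict__[self.field.name] = self.field.to_python(value)
```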
|
Upgrading to latest pip caused ValueError
Question: The latest upgrade to `pip` (using Python 3.5) causes the following error to
occur for any `pip` command:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/bin/pip3.5", line 7, in <module>
from pip import main
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/__init__.py", line 15, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/vcs/subversion.py", line 9, in <module>
from pip.index import Link
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/index.py", line 29, in <module>
from pip.wheel import Wheel, wheel_ext
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/wheel.py", line 32, in <module>
from pip import pep425tags
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/pep425tags.py", line 214, in <module>
supported_tags = get_supported()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/pep425tags.py", line 162, in get_supported
arch = get_platform()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/pep425tags.py", line 119, in get_platform
major, minor, micro = release.split('.')
ValueError: not enough values to unpack (expected 3, got 2)
I'm even unable to upgrade or uninstall. What caused this and how can it be
fixed?
Answer: The traceback shows the cause: `release.split('.')` yields only two values
when OS X reports a two-component version such as `10.11`, so the three-way
unpack fails. I went ahead and edited
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/pep425tags.py
at line 120, removed the `micro` value, and it now works again.
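For anyone patching the file by hand, a more defensive version of the failing unpack (a sketch; the names follow the traceback above):

```python
def split_release(release):
    # 'release' may be '10.11' or '10.11.3'; tolerate a missing micro part.
    parts = release.split('.')
    major, minor = parts[0], parts[1]
    micro = parts[2] if len(parts) > 2 else '0'
    return major, minor, micro
```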
|
Find the indices of the lowest closest neighbors between two lists in python
Question: Given 2 numpy arrays of unequal size: A (a presorted dataset) and B (a list of
query values). I want to find the closest "lower" neighbor in array A to each
element of array B. Example code below:
import numpy as np
A = np.array([0.456, 2.0, 2.948, 3.0, 7.0, 12.132]) #pre-sorted dataset
B = np.array([1.1, 1.9, 2.1, 5.0, 7.0]) #query values, not necessarily sorted
print A.searchsorted(B)
# RESULT: [1 1 2 4 4]
# DESIRED: [0 0 1 3 4]
In this example, B[0]'s closest neighbors are A[0] and A[1]. It is closest to
A[1], which is why searchsorted returns index 1 as a match, but what i want is
the lower neighbor at index 0. Same for B[1:4], and B[4] should be matched
with A[4] because both values are identical.
I could do something clunky like this:
desired = []
for b in B:
id = -1
for a in A:
if a > b:
if id == -1:
desired.append(0)
else:
desired.append(id)
break
id+=1
print desired
# RESULT: [0, 0, 1, 3, 4]
But there's gotta be a prettier, more concise way to write this with numpy. I'd
like to keep my solution in numpy because I'm dealing with large data sets,
but I'm open to other options.
Answer: You can introduce the optional argument `side` and set it to `'right'` as
mentioned in the
[`docs`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.searchsorted.html).
Then, subtract the final indices result by `1` for the desired output, like so
-
A.searchsorted(B,side='right')-1
Sample run -
In [63]: A
Out[63]: array([ 0.456, 2. , 2.948, 3. , 7. , 12.132])
In [64]: B
Out[64]: array([ 1.1, 1.9, 2.1, 5. , 7. ])
In [65]: A.searchsorted(B,side='right')-1
Out[65]: array([0, 0, 1, 3, 4])
In [66]: A.searchsorted(A,side='right')-1 # With itself
Out[66]: array([0, 1, 2, 3, 4, 5])
|
get all unique value from a sparse matrix[python/scipy]
Question: I am trying to make a machine learning library work with scipy sparse
matrices.
The code below detects whether there is more than one class in `y`, because
classification with only one class doesn't make sense.
import numpy as np
y = np.array([0,1,0,1,0,1])
uniques = set(y) # get {0, 1}
if len(uniques) == 1:
raise RuntimeError("Only one class detected, aborting...")
But `set(y)` does not work if `y` is a scipy sparse matrix.
How can I efficiently get all unique values if `y` is a scipy sparse matrix?
PS: I know `set(y.todense())` may work, but it costs too much memory.
**UPDATE:**
>>> y = sp.csr_matrix(np.array([0,1,0,1,0,1]))
>>> set(y.data)
{1}
>>> y.data
array([1, 1, 1])
Answer: Sparse matrices store their values in different ways, but usually there is a
`.data` attribute that contains the nonzero values.
set(y.data)
might be all that you need. This should work for `coo`, `csr`, `csc`. For
others you may need to convert the matrix format (e.g. `y.tocoo()`).
If that does not work, give us more details on the matrix format and problems.
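Note that `.data` only holds explicitly stored (nonzero) entries, so zero never shows up in `set(y.data)` — exactly what the UPDATE in the question demonstrates. A sketch that adds the implicit zero back whenever the matrix is not completely dense:

```python
import numpy as np
import scipy.sparse as sp

def sparse_unique(y):
    uniques = set(np.unique(y.data))
    # Fewer stored entries than cells means the matrix holds implicit zeros.
    if y.nnz < np.prod(y.shape):
        uniques.add(0)
    return uniques

y = sp.csr_matrix(np.array([0, 1, 0, 1, 0, 1]))
```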
|
ImportError: cannot import name 'PrintTable' in Python 3.5.1 by pyenv
Question: I installed pyenv to manage different versions of Python, and used `pip install
printtable` to download and install `printtable`.
But when I import this module in the interactive shell, it doesn't work and raises an
`ImportError`.
$ pyenv versions
system
2.7.11
* 3.5.1 (set by /Users/apple/.pyenv/version)
$ pip list
pip (8.0.0)
printtable (1.2)
setuptools (18.2)
$ python
Python 3.5.1 (default, Jan 21 2016, 12:50:43)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import printtable
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/apple/.pyenv/versions/3.5.1/lib/python3.5/site-packages/printtable/__init__.py", line 1, in <module>
from printtable import PrintTable
ImportError: cannot import name 'PrintTable'
How can I manage the modules in pyenv?
PS. I'm following the book `Automate the boring stuff` step by step. The
`printtable` part is in the end of Chapter 6.
Visit: <https://automatetheboringstuff.com/chapter6/>
Answer: I downloaded _printtable_ with
pip3 install --download /tmp printtable
and inspected the contents of _printtable-1.2.tar.gz_. In
_printtable/printtable.py_ there is code like
def printTable(self,line_num=0):
....
print self.StrTable
indicating that this package is not Python 3 compatible.
It may be possible to install this module by
tar xfv printtable-1.2.tar.gz
cd printtable-1.2/
2to3-3.5 --write printtable/*.py tests/*.py
python3.5 setup.py build
python3.5 setup.py install
|
Plotting 3D random walk in Python
Question: I want to plot a 3-D random walk in Python, something similar to the picture
given below. Can you suggest a tool for that? I am trying to use matplotlib
but am getting confused about how to do it.
For now I have a `lattice` array of zeros which basically is `X*Y*Z`
dimensional and holds the information where random walker has walked, turning
`0`to `1` on each `(x,y,z)` that random walker has stepped.
How can I create 3D visual of walk? [](http://i.stack.imgur.com/p2TdR.png)
Answer: The following I think does what you are trying to do:
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import random
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
xyz = []
cur = [0, 0, 0]
for _ in xrange(20):
axis = random.randrange(0, 3)
cur[axis] += random.choice([-1, 1])
xyz.append(cur[:])
x, y, z = zip(*xyz)
ax.plot(x, y, z, label='Random walk')
ax.scatter(x[-1], y[-1], z[-1], c='b', marker='o') # End point
ax.legend()
plt.show()
This would give you a random walk looking something like this:
[](http://i.stack.imgur.com/84SGJ.png)
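If you already have the 0/1 lattice described in the question, you can also recover the visited coordinates directly with `np.argwhere` and scatter them. A sketch (the lattice here is a hypothetical stand-in for the question's array):

```python
import numpy as np

# Hypothetical visited-lattice like the one described in the question.
lattice = np.zeros((5, 5, 5), dtype=int)
lattice[0, 0, 0] = lattice[1, 0, 0] = lattice[1, 1, 0] = 1

# Coordinates of every cell the walker stepped on.
xs, ys, zs = np.argwhere(lattice == 1).T
# ax.scatter(xs, ys, zs)  # plot on a 3D axis as in the answer above
```

Note this loses the order of the walk, so it suits a scatter of visited cells rather than the connected path shown in the answer.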
|
Python & BS4: Exclude blank and Grand Total Row
Question: Below is my code to extract data out of an HTML document and place it into
variables. I need to exclude the blank lines, as well as the "grand total"
line. I've added the HTML input of those segments beneath my code. I'm not
sure how to make it work. I can't use `len()` because the length is variable.
Any help?
from bs4 import BeautifulSoup
import urllib
import re
import HTMLParser
html = urllib.urlopen('RanpakAllocations.html').read()
parser = HTMLParser.HTMLParser()
#unescape doesn't seem to work
output = parser.unescape(html)
soup1 = BeautifulSoup(output, "html.parser")
Customer_No = []
Serial_No = []
data = []
#for hit in soup.findAll(attrs={'class' : 'MYCLASS'}):
rows = soup1.find_all("tr")
title = rows[0]
headers = rows[1]
datarows = rows[2:]
fields = []
try :
for row in datarows :
find_data = row.find_all(attrs={'face' : 'Arial,Helvetica,sans-serif'})
count = 0
for hit in find_data:
data = hit.text
count = count + 1
if count == 3 :
CSNO = data
if count == 9 :
ITNO = data
else :
continue
print CSNO, ITNO
print "new row"
except:
pass
Here is the input. The first `<tr>` is my last row of data; however, my loop
repeats for the blank rows and the grand total row below it.
<tr>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">12</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">F5684</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">20182</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">VELOCITY SOLUTIONS INC.</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">EQPRAN77717</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">RANPAK FILLPAK TT 2</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">W/UNIVERSAL STAND S/N 51345563</font></td>
<td nowrap="nowrap" align="right"><font size="3" face="Arial,Helvetica,sans-serif">1</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">51345563</font></td>
</tr>
<tr>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td align="left" colspan="5"><font size="1"> </font></td>
</tr>
<tr>
<td align="left"><font size="3" face="Arial,Helvetica,sans-serif"> </font></td>
<td align="left"><font size="3" face="Arial,Helvetica,sans-serif">Grand Total</font></td>
<td align="left" colspan="7"><font size="1"> </font></td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
Answer: I would do something like this:
from bs4 import BeautifulSoup
content = '''
<root>
<tr>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">12</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">F5684</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">20182</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">VELOCITY SOLUTIONS INC.</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">EQPRAN77717</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">RANPAK FILLPAK TT 2</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">W/UNIVERSAL STAND S/N 51345563</font></td>
<td nowrap="nowrap" align="right"><font size="3" face="Arial,Helvetica,sans-serif">1</font></td>
<td nowrap="nowrap" align="left"><font size="3" face="Arial,Helvetica,sans-serif">51345563</font></td>
</tr>
<tr>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td nowrap="nowrap" align="left"><font size="1"> </font></td>
<td align="left" colspan="5"><font size="1"> </font></td>
</tr>
<tr>
<td align="left"><font size="3" face="Arial,Helvetica,sans-serif"> </font></td>
<td align="left"><font size="3" face="Arial,Helvetica,sans-serif">Grand Total</font></td>
<td align="left" colspan="7"><font size="1"> </font></td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</root>'''
soup = BeautifulSoup(content, 'html')
answer = []
rows = soup.find_all('tr')
for row in rows:
if not row.text.strip():
continue
row_text = []
for cell in row.find_all('td'):
if cell.text.strip():
row_text.append(cell.text)
answer.append(row_text)
print(answer)
**Output**
[[u'12', u'F5684', u'20182', u'VELOCITY SOLUTIONS INC.', u'EQPRAN77717', u'RANPAK FILLPAK TT 2', u'W/UNIVERSAL STAND S/N 51345563', u'1', u'51345563'], [u'Grand Total']]
You can skip over entire empty rows using `if not row.text.strip(): continue`
(`row.text.strip()` returns an empty string, which evaluates to `False`).
For rows that you do iterate over, you can check each cell is not empty using
`if cell.text.strip()` before saving the relevant text.
|
Different datetime.strftime output with same locale settings
Question: I use Python 2.7, and it turns out that datetime.strftime produces different
output on different environments (both Unix-based) with the same locale
settings.
locale.setlocale(locale.LC_ALL, ('RU', 'utf-8'))
print locale.getlocale()
print datetime.date.today().strftime("%Y %d %B, %A")
On first env I got:
> ('ru_RU', 'UTF-8')
>
> 2016 21 января, четверг (month name is in genitive form)
On the second:
> ('ru_RU', 'UTF-8')
>
> 2016 21 Январь, Четверг (month name is in nominative form)
As you can see there are also some differences in upper/lowercase letters.
PYTHONIOENCODING is set to utf_8 in both cases.
What is the reason for this behavior, and more importantly, is there a way to
make the second env work the way the first does?
Answer: You are looking at the output of the [C `strftime()`
call](http://linux.die.net/man/3/strftime) here; Python delegates to it. That
function picks those strings from locale files stored outside the control of
Python.
See the [`locale` man page](http://man7.org/linux/man-
pages/man5/locale.5.html) for a description of the file format; you are
looking for the `LC_TIME` `mon` and `day` lists here.
Mac OS X stores things slightly differently; the files are stored in
`/usr/share/locale/`; for `ru_RU` time definitions there is a file called
`/usr/share/locale/ru_RU.UTF-8/LC_TIME`; it puts values one per line in a
certain order. The first 24 lines are months (abbreviated and full), for
example; with the full month names defined as:
января
февраля
марта
апреля
мая
июня
июля
августа
сентября
октября
ноября
декабря
Because this is OS and system specific, you'd have to use a different system
altogether to format your dates if you need these strings to be consistent
across different platforms.
If you are trying to _parse_ a date string, you won't get far with the
`datetime` or `time` modules. Try the [`dateparser`
project](http://dateparser.readthedocs.org/en/latest/) instead, which
[understands the different Russian
forms](https://github.com/scrapinghub/dateparser/blob/master/data/languages.yaml#L585-L706):
>>> import dateparser
>>> dateparser.parse(u'2016 21 января, четверг')
datetime.datetime(2016, 1, 21, 0, 0)
>>> dateparser.parse(u'2016 21 Январь, Четверг')
datetime.datetime(2016, 1, 21, 0, 0)
|
python virtualenv scipy import error undefined name
Question: I just started using virtualenv for my existing python project and ran into
some trouble...
When I try to import the following
from scipy.sparse.linalg import spsolve
it causes an import error if a virtualenv is activated
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python2.7/site-packages/scipy/sparse/linalg/__init__.py", line 110, in <module>
from .dsolve import *
File ".../lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/__init__.py", line 60, in <module>
from .linsolve import *
File ".../lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 10, in <module>
from . import _superlu
ImportError: .../lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.so: undefined symbol: dtrsm_
When I use the global site-packages I don't get the error.
Can someone help me?
Answer: It appears there's some sort of trouble between numpy-1.10.2 and scipy (see
[here](https://github.com/scikit-learn/scikit-
learn/issues/6115#issuecomment-172272208)). Try the following (it fixed it for
me):
(ve) $ pip install numpy==1.10.1
(ve) $ pip install --upgrade --force-reinstall scipy
(ve) $ python
>>> from scipy.sparse.linalg import spsolve
|
Django1.7 'RemovedInDjango19Warning' when using apache with mod_wsgi
Question: I am using Django version 1.7 for an application. Everything was running fine
on my development PC (I ran it using the `manage.py runserver` command).
Now I am trying to move it to a production server. On the production server,
everything is still fine when running via the `manage.py` command,
but accessing the application remotely (via apache2 & mod_wsgi) raises a
`RemovedInDjango19Warning` exception.
All I could find is how to ignore these warnings for `manage.py`, which didn't
work for me, and I don't know how to disable the warnings from wsgi.
**Traceback:**
Environment:
Request Method: GET
Request URL: http://192.168.0.17/
Django Version: 1.7.1
Python Version: 3.4.3
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django_extensions',
'rest_framework',
'rest_framework_nested',
'django_gravatar',
'authentication',
'djcelery',
'job',
'seed',
'proxies',
'emails')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/usr/local/lib/python3.4/dist-packages/django/core/handlers/base.py" in get_response
98. resolver_match = resolver.resolve(request.path_info)
File "/usr/local/lib/python3.4/dist-packages/django/core/urlresolvers.py" in resolve
343. for pattern in self.url_patterns:
File "/usr/local/lib/python3.4/dist-packages/django/core/urlresolvers.py" in url_patterns
372. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python3.4/dist-packages/django/core/urlresolvers.py" in urlconf_module
366. self._urlconf_module = import_module(self.urlconf_name)
File "/usr/lib/python3.4/importlib/__init__.py" in import_module
109. return _bootstrap._gcd_import(name[level:], package, level)
File "/var/www/cvc.ma/CapValue/urls.py" in <module>
6. url(r'^api/v1/auth/', include('authentication.urls')),
File "/usr/local/lib/python3.4/dist-packages/django/conf/urls/__init__.py" in include
28. urlconf_module = import_module(urlconf_module)
File "/usr/lib/python3.4/importlib/__init__.py" in import_module
109. return _bootstrap._gcd_import(name[level:], package, level)
File "/var/www/cvc.ma/authentication/urls.py" in <module>
2. from authentication.views import LoginView, LogoutView
File "/var/www/cvc.ma/authentication/views.py" in <module>
4. from rest_framework import permissions, viewsets, status, views
File "/usr/local/lib/python3.4/dist-packages/rest_framework/viewsets.py" in <module>
24. from rest_framework import views, generics, mixins
File "/usr/local/lib/python3.4/dist-packages/rest_framework/views.py" in <module>
11. from rest_framework.request import Request
File "/usr/local/lib/python3.4/dist-packages/rest_framework/request.py" in <module>
20. from rest_framework.settings import api_settings
File "/usr/local/lib/python3.4/dist-packages/rest_framework/settings.py" in <module>
22. from django.utils import importlib, six
File "/usr/local/lib/python3.4/dist-packages/django/utils/importlib.py" in <module>
10. RemovedInDjango19Warning, stacklevel=2)
Exception Type: RemovedInDjango19Warning at /
Exception Value: django.utils.importlib will be removed in Django 1.9.
Answer: It's happening because the `django.utils.importlib` module is removed in
Django 1.9, in favor of the `importlib` in the standard library. Django Rest
Framework still uses it.
You can disable the warning by following the instructions on the accepted
answer of this question -- [How to suppress the deprecation warnings in
Django?](http://stackoverflow.com/questions/29562070/how-to-suppress-the-
deprecation-warnings-in-django)
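For reference, the suppression approach from that answer boils down to a `warnings` filter placed early (e.g. near the top of `settings.py`). A minimal sketch — the warning class below is a hypothetical stand-in for Django's `RemovedInDjango19Warning`, which subclasses `DeprecationWarning`:

```python
import warnings

# Hypothetical stand-in for django.utils.deprecation.RemovedInDjango19Warning,
# which is a DeprecationWarning subclass in Django 1.8.
class RemovedInDjango19Warning(DeprecationWarning):
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record everything...
    # ...except the category we want to silence (this is the line for settings.py)
    warnings.filterwarnings("ignore", category=RemovedInDjango19Warning)
    warnings.warn("django.utils.importlib will be removed", RemovedInDjango19Warning)
    warnings.warn("unrelated warning", UserWarning)

print(len(caught))  # 1 -- only the UserWarning was recorded
```

Note this only hides the symptom; upgrading Django Rest Framework to a version that no longer imports `django.utils.importlib` is the real fix.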
|
Windows Python2.7 path parsing error
Question: I'm attempting to use the python 010 editor template
[parser](http://d0cs4vage.blogspot.com/2015/08/pfp-python-interpreter-
for-010-templates.html)
The doc specifically states (to get started):
import pfp
pfp.parse(data_file="C:\path2File\file.SWF",template_file="C:\path2File\SWFTemplate.bt")
However, it throws:
RuntimeError: Unable to invoke 'cpp'. Make sure its path was passed correctly
Original error: [Error 2] The system cannot find the file specified
I've tried everything, from using raw strings:
df = r"C:\path2File\file.swf"
tf = r"C:\path2File\SWFTemplate.bt"
To single and then double '\'s or '/'s in the string. However, it keeps
throwing the above error message.
I checked the files are in the path and ensured everything is properly
spelled, case sensitively.
To test my paths, I've used the windows "type" (equiv to *nix strings) and
passed the strings as args in a subprocess.Popen which worked.
Answer: The problem is that it's trying to invoke a C++ compiler: `cpp` and you don't
have one.
You'll need to install one, or make sure that your `PATH` has a `cpp.exe` on
it somewhere.
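You can check from Python whether the preprocessor is actually reachable before calling `pfp.parse`; `shutil.which` (Python 3.3+) performs the same PATH lookup the subprocess machinery will:

```python
import shutil

def has_executable(name):
    """Return the full path to `name` if the OS can find it on PATH, else None."""
    return shutil.which(name)

# pfp shells out to `cpp`; this reproduces the lookup that fails in the error above.
# On Windows the search also honours PATHEXT extensions such as .exe
print(has_executable("cpp"))
```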
|
Python class can't find attribute
Question: Basically, I want to run the connect function but I keep getting the CMD error
message 'class StraussBot has no attribute 'connectSock' but I can obviously
see it does. I've tried searching on here and I can't find any resolutions to
this issue. SO it will be greatly appreciated if you could help me find why
this isn't finding the 'connectSock' function.
Code:
import socket
from config import HOST, PORT, CHANNEL
# User Info
USER = "straussbot" # The bots username
PASS = "oauth:sj175lp884ji5c9las089sm9vvaklf" # The auth code
class StraussBot:
    def __init__(self):
        self.Ssock = socket.socket()

    def connectSock(self):
        self.Ssock.connect((HOST, PORT))
        self.Ssock.send(str("Pass " + PASS + "\r\n").encode('UTF-8'))
        self.Ssock.send(str("NICK " + USER + "\r\n").encode('UTF-8'))
        self.Ssock.send(str("JOIN " + CHANNEL + "\r\n").encode('UTF-8'))

if __name__ == "__main__":
    print "Starting the bot..."
    while True:
        straussbot = StraussBot
        try:
            straussbot.connectSock()
        except Exception as e:
            print e
Answer: You forgot to instantiate an object of your class `StraussBot`.
straussbot = StraussBot
just assigns the name `straussbot` to refer to the class `StraussBot`. Change
that line to
straussbot = StraussBot()
to actually create an instance of your class. You can then call the
`connectSock` method on that instance as expected.
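The difference is easy to see with a toy class (names here are illustrative) — calling an instance method through the class object itself fails because there is no instance to bind to `self`:

```python
class Greeter:
    def __init__(self):
        self.name = "bot"

    def hello(self):
        return "hi from " + self.name

broken = Greeter      # same mistake as the original: just a second name for the class
try:
    broken.hello()    # no instance, so there is nothing to bind to `self`
except TypeError as err:
    print(err)

working = Greeter()   # parentheses actually construct an instance
print(working.hello())  # hi from bot
```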
|
What is the proper technique for creating and managing complex data models in Django?
Question: I've got a relational database background, know a bit of Python, and am a
complete Django newbie / wannabe. I've been considering doing some projects in
Django. One thing I've noticed, however, is that, in Django, you create your
data model in code rather than with an ER modeling tool (like ERwin or
Embarcadero).
My question is:
Is it possible to import a complex data model (populated or not) into Django
and visualize it with an ER diagram?
Thanks!
Answer: One of the most valuable features of the Django is its ORM and some
`manage.py` commands to help untangle complex data models. If you haven't
explored [django_extensions](https://django-
extensions.readthedocs.org/en/latest/) yet, I think it has what you want:
`manage.py graph_models` gives you a customizable ER diagram of your models. I
use this often when dealing with large databases. I am often required to
import a large DB model (from an existing set of databases) and generate
almost all of the DB model using the django builtin `manage.py inspectdb >
models.py`. And with `graph_models` you can try a variety of layouts and
various amounts of detail to help you see the "big picture." Even if the
ultimate goal isn't to build a Django webapp, I find these tools helpful in
understanding a DB model quickly, especially when you don't have access to the
original DB creator(s) or their reasoning.
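As a sketch, the workflow the answer describes looks roughly like this — package names and flags are assumed from the django-extensions docs, and image output additionally needs Graphviz (or pygraphviz/pydotplus) installed:

```shell
pip install django-extensions pygraphviz
# add 'django_extensions' to INSTALLED_APPS in settings.py, then:
python manage.py inspectdb > myapp/models.py      # reverse-engineer an existing DB
python manage.py graph_models -a -o er_diagram.png  # ER diagram of all apps
python manage.py graph_models myapp -o myapp.png    # or a single app
```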
|
Execute a command with JSON string in python
Question: How do we execute a shell command with a JSON string in python?
The command is like:
tool --options '{"oldTool" : "yes"}'
Thanks!
Answer: I would import `call` from `subprocess` (`from subprocess import call`) and then pass the command and each argument as separate list items, which avoids shell quoting and escaping entirely: `call(["tool", "--options", '{"oldTool" : "yes"}'])`. Although, your question is rather vague.
|
Installing Python Fancy Impute Module for K-Nearest Neighbors Imputation of Null Values
Question: I am using a 64bit Windows 10 machine.
I am trying to install the [fancy impute
module](https://github.com/hammerlab/fancyimpute) to do K-Nearest Neighbors
Imputation of null values in a data set.
I have had to separately install `cvxopt` with
conda install -c https://conda.binstar.org/eswears cvxopt
and Keras using
pip install keras
When I write into python from the command prompt
[Anaconda2] C:\Users\path>python
>>> import fancyimpute
I receive the below errors (I have truncated the error messages but can show
full errors on request)
Using Theano backend.
WARNING (theano.configdefaults): g++ not detected...
'g++' is not recognized as an internal or external command...
**EDIT** To remedy the problem, I have downloaded MinGW and followed the
directions [here](http://stackoverflow.com/questions/9741568/g-is-not-
recognized-as-an-internal-or-external-command-mingw), but am still receiving
the same error.
My questions are:
(1) is there another way around the error messages I am receiving?
(2) is there a python module that also does K-Nearest Neighbors Imputation of
null values?
Answer: Is there an error beyond the g++ warning? I may be wrong, but I was under the impression that Theano will default to slower implementations in the absence of a compiler. Since fancyimpute's KNN implementation doesn't actually use Theano, you should still be able to use that portion of the library.
|
Monitor two(2) serial ports at the same time asynchronously in Python
Question: I have two serial ports feeding data into Python. One is feeding GPS strings
(about 4 lines per second) and the other feeding data strings from a gas
monitor (about 1 line every second)
I would like to monitor both gps and gas feeds at the same time and combine
the data in real time. I only need to receive data from the serial ports.
My problem is that I can't seem to figure out how to get both python functions
running at the same time.
I have the threading module and the multiprocessing module installed with
Python 2.7.
Any ideas on a good way to combine serial info? This is my third Python
program ever so please be gentle with me :-)
here's the code:
import threading
import multiprocessing
def readGas():
    global GAScount
    global GASline
    while GAScount < 15:
        GASline = gas.readline()
        GasString = GASline.startswith('$')
        if GasString is True:
            print GASline
            GAScount = GAScount + 1

def readGPS():
    global GPScount
    global GPSline
    while GPScount < 50:
        GPSline = gps.readline()
        GPSstring = GPSline.startswith('$')
        if GPSstring is True:
            print GPSline
            GPScount = GPScount + 1
openGPS()
openGas()
Answer: I think threads would be a good choice in your case, as you don't expect very
high frequencies. It will be much simpler than multiprocessing, because the
memory remains shared between threads and you have less to worry. The only
remaining issue is that your acquisitions won't be exactly at the same time
but rather one right after the other. If you really need synchronized data,
you need to go with multiprocessing.
for threading :
import threading
# create your threads:
gps = threading.Thread(target=openGPS)
gas = threading.Thread(target=openGas)
# start your threads:
gps.start()
gas.start()
And you now have 2 acquisition threads running. Due to Python's Global
Interpreter Lock, this is not truly at the same time but will alternate
between the 2 threads.
More information on threading: <https://pymotw.com/2/threading/>. More
information on the GIL: <https://wiki.python.org/moin/GlobalInterpreterLock>
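Since both readers only receive data, a common pattern is to have each thread push tagged lines onto a single `queue.Queue` and merge them in the main thread. A sketch, with in-memory lists standing in for the serial ports:

```python
import threading
import queue  # named `Queue` on Python 2

def reader(tag, lines, out_q):
    # stands in for `while True: out_q.put((tag, port.readline()))`
    for line in lines:
        out_q.put((tag, line))

merged = queue.Queue()
gps_feed = ["$GPGGA,fix1", "$GPGGA,fix2", "$GPRMC,fix3"]
gas_feed = ["$GAS,reading1"]

threads = [threading.Thread(target=reader, args=("gps", gps_feed, merged)),
           threading.Thread(target=reader, args=("gas", gas_feed, merged))]
for t in threads:
    t.start()
for t in threads:
    t.join()

combined = []
while not merged.empty():
    combined.append(merged.get())
print(len(combined))  # 4 lines from both feeds, each tagged with its source
```

Because every line carries its source tag, the main thread can pair the latest GPS fix with each gas reading as it arrives.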
|
Sympy seems to break down with higher numbers
Question: I've been playing around with sympy and decided to make an arbitrary equations
solver since my finance class was getting a little dreary. I wrote a basic
framework and started playing with some examples, but some work and some don't
for some reason.
from sympy import *
import sympy.mpmath as const
OUT_OF_BOUNDS = "Integer out of bounds."
INVALID_INTEGER = "Invalid Integer."
INVALID_FLOAT = "Invalid Float."
CANT_SOLVE_VARIABLES = "Unable to Solve for More than One Variable."
CANT_SOLVE_DONE = "Already Solved. Nothing to do."
# time value of money equation: FV = PV(1 + i)**n
# FV = future value
# PV = present value
# i = growth rate per period
# n = number of periods
FV, PV, i, n = symbols('FV PV i n')
time_value_money_discrete = Eq(FV, PV*(1+i)**n)
time_value_money_continuous = Eq(FV, PV*const.e**(i*n))
def get_sym_num(prompt, fail_prompt):
    while True:
        try:
            s = input(prompt)
            if s == "":
                return None
            f = sympify(s)
            return f
        except:
            print(fail_prompt)
            continue
continue
equations_supported = [['Time Value of Money (discrete)', [FV, PV, i, n], time_value_money_discrete],
['Time Value of Money (continuous)',[FV, PV, i, n], time_value_money_continuous]]
EQUATION_NAME = 0
EQUATION_PARAMS = 1
EQUATION_EXPR = 2
if __name__ == "__main__":
    while True:
        print()
        for i, v in enumerate(equations_supported):
            print("{}: {}".format(i, v[EQUATION_NAME]))
        try:
            process = input("What equation do you want to solve? ")
            if process == "" or process == "exit":
                break
            process = int(process)
        except:
            print(INVALID_INTEGER)
            continue
        if process < 0 or process >= len(equations_supported):
            print(OUT_OF_BOUNDS)
            continue
        params = [None]*len(equations_supported[process][EQUATION_PARAMS])
        for i, p in enumerate(equations_supported[process][EQUATION_PARAMS]):
            params[i] = get_sym_num("What is {}? ".format(p), INVALID_FLOAT)
        if params.count(None) > 1:
            print(CANT_SOLVE_VARIABLES)
            continue
        if params.count(None) == 0:
            print(CANT_SOLVE_DONE)
            continue
        curr_expr = equations_supported[process][EQUATION_EXPR]
        for i, p in enumerate(params):
            if p != None:
                curr_expr = curr_expr.subs(equations_supported[process][EQUATION_PARAMS][i], params[i])
        print(solve(curr_expr, equations_supported[process][EQUATION_PARAMS][params.index(None)]))
This is the code I have so far. I guess I can strip it down to a basic example
if need be, but I was also wondering if there was a better way to implement
this sort of system. After I have this down, I want to be able to add
arbitrary equations and solve them after inputting all but one parameter.
For example, if I put in (for equation 0), FV = 1000, PV = 500, i = .02, n is
empty I get 35.0027887811465 which is the correct answer. If I redo it and
change FV to 4000, it returns an empty list as the answer.
Another example, when I input an FV, PV, and an n, the program seems to hang.
When I input small numbers, I got RootOf() answers instead of a simple
decimal.
Can anyone help me?
Side note: I'm using SymPy 0.7.6 and Python 3.5.1 which I'm pretty sure are
the latest
Answer: This is a floating point accuracy issue. `solve` by default plugs solutions
into the original equation and evaluates them (using floating point
arithmetic) in order to sort out false solutions. You can disable this by
setting `check=False`. For example, for Hugh Bothwell's code
for fv in range(1870, 1875, 1):
sols = sp.solve(eq.subs({FV:fv}), check=False)
print("{}: {}".format(fv, sols))
which gives
1870: [66.6116466112007]
1871: [66.6386438584579]
1872: [66.6656266802551]
1873: [66.6925950919998]
1874: [66.7195491090752]
|
How to set regression intercept to 0 in Orange
Question: I wrote the below regression code in Python using the Orange library:
import Orange
data = Orange.data.Table("lenses")
learner = Orange.regression.LinearRegressionLearner()
model = learner(data)
print (model.coefficients)
I need to set intercept to zero and I found this code
__init__(name=linear regression, intercept=True, compute_stats=True, ridge_lambda=None, imputer=None, continuizer=None, use_vars=None, stepwise=False, add_sig=0.05, remove_sig=0.2, **kwds)
in this page:
<http://orange.biolab.si/docs/latest/reference/rst/Orange.regression.linear.html>
but I do not know how to use it.
Answer: You just need to set intercept=False.
learner = Orange.regression.LinearRegressionLearner(intercept=False)
|
cannot import matlab file to python code
Question: I am on a project on machine learning. I trained the data in MATLAB R2015a and
obtained a file abc.m. I get the expected result from the MATLAB file when
giving input from the MATLAB command window. I have developed the interface in
PyQt5 and got the file in Python, and want to import the .m file into Python for usage.
I have Anaconda 2.4.1 installed on my computer with Python 3.5, working on
Windows 8.1.
Can anyone help ? totally stuck with the project.
My code is as follows :
import scipy.io as sio
import numpy as np
from sklearn import preprocessing
ab = sio.loadmat('latest.m')
......
But i get the following error :
Traceback (most recent call last):
File "test.py",line 8,in <module>
ab = sio.loadmat('latest.m')
File "C:\anaconda\Lib\site-packages\scipy\io\matlab\mio.py",line 58,in mat_reader_factory
mjv,mnv = get_matfile_version(byte_stream)
File "C:\anaconda\Lib\site-packages\scipy\io\matlab\miobase.py",line 241,in get_matfile_version
raise ValueError('Unknown mat file type,version %s,%s' % ret)
ValueError: Unknown mat file type,version 111,114
Answer: [`loadmat`](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.io.loadmat.html)
loads a MATLAB data file, e.g. the result of some code. MATLAB data files have
`.mat` extension.
Loading a `.m` file means loading a bunch of text that doesn't mean anything
without a MATLAB interpreter! What you want is to load a `.mat` file, a file
containing matrices, strings or whatever has been saved there.
**EDIT** I have just seen @valtuarte 's link to [What is the difference
between .m and .mat files in
MATLAB](https://stackoverflow.com/questions/3947549/what-is-the-difference-
between-m-and-mat-files-in-matlab) . Have a check!
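A round trip makes the distinction concrete. In MATLAB you would create the data file with `save('latest.mat', 'weights')`; here SciPy writes it as well, purely for the demonstration (assuming SciPy and NumPy are installed):

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat, loadmat

# In MATLAB this file would come from:  save('latest.mat', 'weights')
path = os.path.join(tempfile.mkdtemp(), "latest.mat")
savemat(path, {"weights": np.array([1.0, 2.0, 3.0])})

data = loadmat(path)          # works: it's a data file, not a .m script
print(data["weights"].shape)  # loadmat returns at-least-2-D arrays
```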
|
Counting and grouping in Python
Question: I was working on a problem in which I had to write a program that will count
the number of each item needed for the chefs to prepare. The items that a
customer can order are: salad, hamburger, and water. `salad:[# salad]
hamburger:[# hamburger] water:[# water]` for example `If order = "hamburger
water hamburger"` then the function returns `"salad:0 hamburger:2 water:1"`
My code is :
def ite(order, item):
    v = order.split()
    t = []
    for salad in v:
        if salad == item:
            t.append(item)
            v.remove(item)
    return len(t)

def item_order(order):
    s = ite(order, 'salad')
    h = ite(order, 'hamburger')
    w = ite(order, 'water')
    x = 'salad:%s hamburger:%s water:%s' % (s, h, w)
    return x
but when we give the input `item_order('water water water')`, my program
prints
salad:0 hamburger:0 water:2
instead of
salad:0 hamburger:0 water:3
It works fine if there are no two consecutive words. How can I correct this?
Answer: ## Solution
You can use
[collections.Counter](https://docs.python.org/3/library/collections.html?highlight=counter#collections.Counter):
from collections import Counter

def item_order(order, items=('salad', 'hamburger', 'water')):
    counter = Counter(order.split())
    return ' '.join(['{}: {}'.format(item, counter.get(item, 0)) for item in items])
test it:
print(item_order('water water water'))
print(item_order('water salad, salad'))
print(item_order('water hamburger'))
prints:
salad: 0 hamburger: 0 water: 3
salad: 1 hamburger: 0 water: 1
salad: 0 hamburger: 1 water: 1
## Explanation
The items are given as default parameter:
def item_order(order, items=('salad', 'hamburger', 'water')):
This make the function more flexible because you can hand in other items if
desired:
def item_order(order, items=('salad', 'fruit', 'water')):
The use of a tuple is intentional here because mutable default parameters such
as a list may cause unintentional side effects. That is no problem here but
could be the case in general.
After splitting the input string at white spaces into a list, `Counter` will
create a new `counter` instance:
counter = Counter(order.split())
For example:
>>> Counter('water water salad'.split())
Counter({'salad': 1, 'water': 2})
Finally, a list comprehension helps to create a new string:
' '.join(['{}: {}'.format(item, counter.get(item, 0)) for item in items])
The `' '.join` makes a new string from a list of strings, where the list
elements are separated by white space. For example:
>>> ' '.join(['abc', 'xyz', 'uvw'])
'abc xyz uvw'
The method `get()` of the Python dictionary returns the value for the key if
the key is in it, otherwise the default value. For example:
>>> d = {'a': 100, 'b': 200}
>>> d.get('a', 0)
100
>>> d.get('x', 0)
0
Setting this default to `0`, gives a zero count for items not contained in the
order:
counter.get(item, 0))
Finally, the `format()` method helps to put the value for the count in a
string. For example:
>>> '{}: {}'.format('abc', 10)
'abc: 10'
|
Problems with a function and odeint in python
Question: For a few months I started working with python, considering the great
advantages it has. But recently, i used odeint from scipy to solve a system of
differential equations. But during the integration process the implemented
function doesn't work as expected.
In this case, I want to solve a system of differential equations where one of
the initial conditions (x[0]) varies (between 4-5) depending on the value that
the variable reaches during the integration process (It is programmed inside
of the function by means of the if structure).
#Control of oxygen
SO2_lower=4
SO2_upper=5
if x[0] <= SO2_lower:
    x[0] = SO2_upper
When the function is used by odeint, some lines of code inside the function
are obviated, even when the functions changes the value of x[0]. Here is all
my code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.ion()
# Stoichiometric parameters
YSB_OHO_Ox=0.67 #Yield for XOHO growth per SB (Aerobic)
YSB_Stor_Ox=0.85 #Yield for XOHO,Stor formation per SB (Aerobic)
YStor_OHO_Ox=0.63 #Yield for XOHO growth per XOHO,Stor (Aerobic)
fXU_Bio_lys=0.2 #Fraction of XU generated in biomass decay
iN_XU=0.02 #N content of XU
iN_XBio=0.07 #N content of XBio
iN_SB=0.03 #N content of SB
fSTO=0.67 #Stored fraction of SB
#Kinetic parameters
qSB_Stor=5 #Rate constant for XOHO,Stor storage of SB
uOHO_Max=2 #Maximum growth rate of XOHO
KSB_OHO=2 #Half-saturation coefficient for SB
KStor_OHO=1 #Half-saturation coefficient for XOHO,Stor/XOHO
mOHO_Ox=0.2 #Endogenous respiration rate of XOHO (Aerobic)
mStor_Ox=0.2 #Endogenous respiration rate of XOHO,Stor (Aerobic)
KO2_OHO=0.2 #Half-saturation coefficient for SO2
KNHx_OHO=0.01 #Half-saturation coefficient for SNHx
#Other parameters
DT=1/86400.0
def f(x, t):
    # Control of oxygen
    SO2_lower = 4
    SO2_upper = 5
    if x[0] <= SO2_lower:
        x[0] = SO2_upper
    M = np.matrix([[-(1.0-YSB_Stor_Ox), -1, iN_SB, 0, 0, YSB_Stor_Ox],
                   [-(1.0-YSB_OHO_Ox)/YSB_OHO_Ox, -1/YSB_OHO_Ox, iN_SB/YSB_OHO_Ox-iN_XBio, 0, 1, 0],
                   [-(1.0-YStor_OHO_Ox)/YStor_OHO_Ox, 0, -iN_XBio, 0, 1, -1/YStor_OHO_Ox],
                   [-(1.0-fXU_Bio_lys), 0, iN_XBio-fXU_Bio_lys*iN_XU, fXU_Bio_lys, -1, 0],
                   [-1, 0, 0, 0, 0, -1]])
    R = np.matrix([[DT*fSTO*qSB_Stor*(x[0]/(KO2_OHO+x[0]))*(x[1]/(KSB_OHO+x[1]))*x[4]],
                   [DT*(1-fSTO)*uOHO_Max*(x[0]/(KO2_OHO+x[0]))*(x[1]/(KSB_OHO+x[1]))*(x[2]/(KNHx_OHO+x[2]))*x[4]],
                   [DT*uOHO_Max*(x[0]/(KO2_OHO+x[0]))*(x[2]/(KNHx_OHO+x[2]))*((x[5]/x[4])/(KStor_OHO+(x[5]/x[4])))*(KSB_OHO/(KSB_OHO+x[1]))*x[4]],
                   [DT*mOHO_Ox*(x[0]/(KO2_OHO+x[0]))*x[4]],
                   [DT*mStor_Ox*(x[0]/(KO2_OHO+x[0]))*x[5]]])
    Mt = M.transpose()
    MxRm = Mt*R
    MxR = MxRm.tolist()
    return [MxR[0][0],
            MxR[1][0],
            MxR[2][0],
            MxR[3][0],
            MxR[4][0],
            MxR[5][0]]
#ODE solution
t=np.linspace(0.0,3600,3600)
#Initial conditions
y0=np.array([5,176,5,30,100,5])
Var=odeint(f,y0,t,args=(),h0=1,hmin=1,hmax=1,atol=1e-5,rtol=1e-5)
Sol=Var.tolist()
plt.plot(t,Var[:,0])
Thanks very much in advance!!!!!
Answer: Short answer:
You should not modify input state vector inside your ODE function. Instead try
the following and verify your results:
x0 = x[0]
if x0 <= SO2_lower:
    x0 = SO2_upper
# use x0 instead of x[0] in the rest of this function body
I suppose that this is your problem, but I am not sure, since you did not
explain what exactly was wrong with the results. Moreover, you do not change
"initial condition". Initial condition is
y0=np.array([5,176,5,30,100,5])
you just change the input state vector.
Detailed answer:
Your odeint integrator is probably using one of the higher order adaptive
Runge-Kutta methods. This algorithm requires multiple ODE function evaluations
to calculate a single integration step, so changing the input state
vector may lead to undefined results. In C++ boost::odeint it is not even
possible to do so, because the input variable is "const". Python, however, is not as
strict as C++, and I suppose it is possible to make this kind of bug
unintentionally (I did not try it, though).
EDIT:
OK, I understand what you want to achieve.
Your variable x[0] is constrained by modular algebra and it is not possible to
express in the form
x' = f(x,t)
which is one of the possible definitions of an Ordinary Differential
Equation that the odeint library is meant to solve. However, a few possible
"hacks" can be used here to bypass this limitation.
One possibility is to use a fixed step and low order (because for higher order
solvers you need to know, which part of the algorithm you are actually in, see
[RK4](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#The_Runge.E2.80.93Kutta_method)
for example) solver and change your dx[0] equation (in your code it is
MxR[0][0] element) to:
# at the beginning of your system
if (x[0] > S02_lower):  # everything is normal here
    x0 = x[0]
    dx0 = ...  # normal equation for dx0
else:  # x[0] is too low, we must somehow force it to become S02_upper again
    dx0 = (x[0] - S02_upper)/step_size  # assuming that x0_{n+1} = x0_{n} + dx0*step_size
    x0 = S02_upper
# remember to use x0 in the rest of your code and also remember to return dx0
However, I do not recommend this technique, because it makes you strongly
dependent on the algorithm and you must know the exact step size (although, I
may recommend it for saturation constraints). Another possibility is to
perform a single integration step at a time and correct your x0 each time it
is necessary:
// 1 do_step(sys, in, dxdtin, t, out, dt);
// 2 do something with output
// 3 in = out
// 4 return to 1 or finish
Sorry for C++ syntax, here is the exhaustive documentation ([C++ odeint
steppers](http://www.boost.org/doc/libs/master/libs/numeric/odeint/doc/html/boost_numeric_odeint/odeint_in_detail/steppers.html)),
and here is its equivalent in python ([Python ode
class](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode)).
C++ odeint interface is better for your task, however you may achieve exactly
the same in python. Just look for:
integrate(t[, step, relax])
set_initial_value(y[, t])
in docs.
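The "single step, then correct" idea can be sketched without any library at all — a fixed-step explicit Euler loop where the state is clamped back to the upper bound between steps. The dynamics here are a toy stand-in, not the original model:

```python
def euler_step(x, t, f, dt):
    # one explicit Euler step: x_{n+1} = x_n + dt * f(x_n, t_n)
    return x + dt * f(x, t)

def decay(x, t):
    # toy stand-in for the oxygen dynamics: simple exponential decay
    return -0.5 * x

SO2_lower, SO2_upper = 4.0, 5.0
x, t, dt = 5.0, 0.0, 0.1
history = []
for _ in range(100):
    x = euler_step(x, t, decay, dt)
    if x <= SO2_lower:   # correction happens *between* steps,
        x = SO2_upper    # never inside the ODE right-hand side
    t += dt
    history.append(x)

print(min(history) > SO2_lower, max(history) == SO2_upper)  # True True
```

The key point is that the integrator's right-hand side stays a pure function of the state, and the discontinuous reset lives outside it.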
|
Python3 Trying to execute sql query with dynamic column names
Question: I am using `sqlite3` and I am trying to dynamically return specific columns
from database using SELECT query, problem is I keep getting the column names
back instead of the actual rows. Here is an example code
import sqlite3
conn = sqlite3.connect('db_name.db')
c = conn.cursor()
query = 'SELECT ?, ? FROM devices'
columns = ('name','network-id')
c.execute(query, columns)
print(c.fetchall())
This is the result I get:
[('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id'), ('name', 'network_id')]
It is very annoying, I am only trying to get back specific columns from my
results, but I get the column names instead. Any help will be much appreciated
Answer: You cannot use SQL parameters for table or column names, only for literal
values.
Your query is the equivalent of:
SELECT 'name', 'network-id' from devices
Just put the column names directly into the query:
columns = ('name','network-id')
query = 'SELECT %s from devices' % ','.join(columns)
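A self-contained demo of the difference — placeholders bind values, while column names must go into the SQL text itself (check them against a whitelist first if they come from user input, to avoid SQL injection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE devices (name TEXT, network_id TEXT)")
c.execute("INSERT INTO devices VALUES ('router1', 'net42')")

# Placeholders bind *string literals*, so the column names come back verbatim:
wrong_rows = c.execute("SELECT ?, ? FROM devices", ("name", "network_id")).fetchall()
print(wrong_rows)   # [('name', 'network_id')]

# Identifiers have to be built into the statement -- whitelist them first:
columns = ("name", "network_id")
assert set(columns) <= {"name", "network_id"}
right_rows = c.execute("SELECT %s FROM devices" % ", ".join(columns)).fetchall()
print(right_rows)   # [('router1', 'net42')]
```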
|
How to select all data in pymongo?
Question: I want to select all data, or select with a condition, in the collection `random`,
but I can't find any guide to doing this with MongoDB in Python.
Also, I can't show all the data that was selected.
Here is my code:
def mongoSelectStatement(result_queue):
    client = MongoClient('mongodb://localhost:27017')
    db = client.random
    cursor = db.random.find({"gia_tri": "0.5748676522161966"})
    # cursor = db.random.find()
    inserted_documents_count = cursor.count()
    for document in cursor:
        result_queue.put(document)
Answer: There is a quite comprehensive documentation for mongodb. For python (Pymongo)
here is the URL: <https://api.mongodb.org/python/current/>
Note: Consider the version you are running. Since the latest version has new
features and functions.
To verify pymongo version you are using execute the following:
import pymongo
pymongo.version
Now. Regarding the select query you asked for. As far as I can tell the code
you presented is fine. Here is the select structure in mongodb.
First off it is called **find()**.
In pymongo, if you want to select specific rows — **not** really **rows**; in
MongoDB they are called **documents**, but I am saying rows to make it easy to
understand if you are comparing MongoDB to SQL — that is, to select specific
documents from the table (called a **collection** in MongoDB), use the
following structure (I will use **random** as the collection name and assume
the random collection has the following attributes:
**age:10, type:ninja, class:black, level:1903**):
`db.random.find({ "age":"10" })` This will return all documents that have age
10 in them.
you could add more conditions simply by separating with **commas**
`db.random.find({ "age":"10", "type":"ninja" })` This will select all data
with age 10 and type ninja.
if you want to get all data just leave empty as:
db.random.find({})
Now the previous examples display everything (age, type, class, level and
_id). If you want to display **specific** attributes, say only the age, you will
have to add another argument to find called **projection**, e.g. (1 is show, 0
is do not show):
{'age':1}
Note here that this returns **age** as well as **_id**. _id is always returned
by default. You have to explicitly tell it not to return it, as:
db.random.find({ "age":"10", "name":"ninja" }, {"age":1, "_id":0} )
I hope that could get you started. Take a look at the documentation is very
thorough.
|
Looping a Selenium Python script
Question: I have the following script that opens a browser and logs in then closes the
browser:
from selenium import webdriver
browser=webdriver.Ie()
import time
x = 4
for i in range(x):
    browser.get("http://localhost:8080/customercareweb-prototype/")
    username = browser.find_element_by_xpath(".//*[@placeholder='Username']").send_keys("op1")
    password = browser.find_element_by_xpath(".//*[@placeholder='Password']").send_keys("op1")
    time.sleep(2)
    browser.find_element_by_xpath(".//*[@class='dlg-auth-button z-button']").click()
    browser.close()
When i run this i get the error: `TypeError: string indices must be integers`
What am i doing wrong here. I just want to get this script to run for 4 times
in a row.
Answer: I managed to get it to work by wrapping it in a function. But I am still puzzled
why I could not do it with a simple "for" statement.
def login():
    browser = webdriver.Ie()
    browser.get("http://localhost:8080/customercareweb-prototype/")
    username = browser.find_element_by_xpath(".//*[@placeholder='Username']").send_keys("op1")
    password = browser.find_element_by_xpath(".//*[@placeholder='Password']").send_keys("op1")
    time.sleep(2)
    browser.find_element_by_xpath(".//*[@class='dlg-auth-button z-button']").click()

i = login()
for i in range(1):
    login()
|
Robust endless loop for server written in Python
Question: I write a server which handles events and uncaught exceptions during handling
the event must not terminate the server.
The server is a single non-threaded python process.
I want to terminate on these errors types:
* KeyboardInterrupt
* MemoryError
* ...
The list of built in exceptions is long:
<https://docs.python.org/2/library/exceptions.html>
I don't want to re-invent this exception handling, since I guess it was done
several times before.
How to proceed?
1. Have a white-list: A list of exceptions which are ok and processing the next event is the right choice
2. Have a black-list: A list of exceptions which indicate that terminating the server is the right choice.
Hint: This question is not about running a unix daemon in background. It is
not about double fork and not about redirecting stdin/stdout :-)
Answer: I would do this in a similar way you're thinking of, using the 'you shall not
pass' _[Gandalf](https://www.youtube.com/watch?v=V4UfAL9f74I)_ exception
handler _[`except Exception` to catch all non-system-exiting
exceptions](https://docs.python.org/2.7/library/exceptions.html#exceptions.Exception)_
while creating a black-listed `set` of exceptions that should be allowed to
pass and be re-raised.
Using the _Gandalf handler_ will make sure
[`GeneratorExit`](https://docs.python.org/2.7/library/exceptions.html#exceptions.GeneratorExit),
[`SystemExit`](https://docs.python.org/2.7/library/exceptions.html#exceptions.SystemExit)
and
[`KeyboardInterrupt`](https://docs.python.org/2.7/library/exceptions.html#exceptions.KeyboardInterrupt)
(all system-exiting exceptions) pass and terminate the program if no other
handlers are present higher in the call stack. Here is where you can check
with [**`type(e)`**](https://docs.python.org/2.7/library/functions.html#type)
that a `__class__` of a caught exception `e` actually belongs in the set of
black-listed exceptions and
re-[`raise`](https://docs.python.org/2.7/library/exceptions.html#exceptions.KeyboardInterrupt)
it.
As a small demonstration:
import exceptions # Py2.x only
# dictionary holding {exception_name: exception_class}
excptDict = vars(exceptions)
exceptionNames = ['MemoryError', 'OSError', 'SystemError'] # and others
# set containing black-listed exceptions
blackSet = {excptDict[exception] for exception in exceptionNames}
Now `blackSet = {OSError, SystemError, MemoryError}` holding the classes of
the non-system-exiting exceptions we want to **not** handle.
A `try-except` block can now look like this:
try:
    # calls that raise exceptions:
except Exception as e:
    if type(e) in blackSet: raise e  # re-raise
    # else just handle it
An **example** which catches all exceptions using
[`BaseException`](https://docs.python.org/2.7/library/exceptions.html#exceptions.BaseException)
can help illustrate what I mean (this is done _for demonstration purposes
only_, in order to see how this raising will eventually terminate your
program). **Do note**: _I'm **not** suggesting you use `BaseException`; I'm
using it in order to **demonstrate** what exceptions will actually 'pass
through' and cause termination_ (i.e. everything that `BaseException` catches):
for i, j in excptDict.iteritems():
if i.startswith('__'): continue # __doc__ and other dunders
try:
try:
raise j
except Exception as ex:
# print "Handler 'Exception' caught " + str(i)
if type(ex) in blackSet:
raise ex
except BaseException:
print "Handler 'BaseException' caught " + str(i)
# prints exceptions that would cause the system to exit
Handler 'BaseException' caught GeneratorExit
Handler 'BaseException' caught OSError
Handler 'BaseException' caught SystemExit
Handler 'BaseException' caught SystemError
Handler 'BaseException' caught KeyboardInterrupt
Handler 'BaseException' caught MemoryError
Handler 'BaseException' caught BaseException
Finally, to make this Python 2/3 agnostic, you can `try` to `import
exceptions` and, if that fails (as it does in Python 3), fall back to
importing [`builtins`](https://docs.python.org/3.5/library/builtins.html),
which contains all the exceptions; since we search the dictionary by name, it
makes no difference:
try:
import exceptions
excDict = vars(exceptions)
except ImportError:
import builtins
excDict = vars(builtins)
I don't know if there's a smarter way to actually do this; another solution,
instead of a `try-except` with a single `except`, is to have two handlers,
one for the black-listed exceptions and the other for the general case:
try:
# calls that raise exceptions:
except tuple(blackSet) as be: # Must go first, of course.
raise be
except Exception as e:
# handle the rest
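As a runnable sketch of that two-handler layout (the return values stand in for the real actions, purely so the dispatch is visible; in real code the first handler would simply `raise`):

```python
# blackSet holds the exceptions we refuse to handle; everything else
# falls through to the ordinary Exception handler.
blackSet = {OSError, SystemError, MemoryError}

def dispatch(exc):
    try:
        raise exc
    except tuple(blackSet):      # the black-listed handler must come first
        return "re-raised"       # in real code: raise
    except Exception:
        return "handled"

print(dispatch(ValueError("boom")))   # handled
print(dispatch(OSError("boom")))      # re-raised
```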
|
Importing from parent directory gets error
Question: The project structure on my local machine is setup like this:
python/
__init__.py
readText.py
testing/
__init__.py
removeDuplicates.py
In removeDuplicates.py I am trying to import as follows:
from python import readText
This gives: ImportError: No module name 'python'
My `__init__.py` files in both folders are blank, by the way.
Answer: You need the parent directory of your `python` subdirectory to be present in
`sys.path`. If you execute your script from that directory, the import should
work. The easiest general fix is to export the `PYTHONPATH` environment
variable, pointing it at that parent directory.
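If you prefer to fix it from within the script instead of the environment, a minimal sketch (the helper name is made up; the layout matches the question's `python/` and `testing/` structure) is to prepend the parent directory yourself:

```python
import os
import sys

def add_project_root(script_path):
    """Prepend the parent of the script's directory to sys.path.

    Intended for a layout like project_root/python and project_root/testing,
    where script_path is the test script inside testing/.
    """
    root = os.path.abspath(os.path.join(os.path.dirname(script_path), os.pardir))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# Inside testing/removeDuplicates.py you would call add_project_root(__file__)
# before "from python import readText".
```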
|
Chapter 7, Automate the boring stuff with Python, practice project: regex version of strip()
Question: I am reading the book "Automate the boring stuff with Python'. In Chapter 7,
in the project practice: the regex version of strip(), here is my code (I use
Python 3.x):
def stripRegex(x,string):
import re
if x == '':
spaceLeft = re.compile(r'^\s+')
stringLeft = spaceLeft.sub('',string)
spaceRight = re.compile(r'\s+$')
stringRight = spaceRight.sub('',string)
stringBoth = spaceRight.sub('',stringLeft)
print(stringLeft)
print(stringRight)
else:
charLeft = re.compile(r'^(%s)+'%x)
stringLeft = charLeft.sub('',string)
charRight = re.compile(r'(%s)+$'%x)
stringBoth = charRight.sub('',stringLeft)
print(stringBoth)
x1 = ''
x2 = 'Spam'
x3 = 'pSam'
string1 = ' Hello world!!! '
string2 = 'SpamSpamBaconSpamEggsSpamSpam'
stripRegex(x1,string1)
stripRegex(x2,string2)
stripRegex(x3,string2)
And here is the output:
Hello world!!!
Hello world!!!
Hello world!!!
BaconSpamEggs
SpamSpamBaconSpamEggsSpamSpam
So, my regex version of strip() nearly works like the original version. In the
original version, the output is always "BaconSpamEggs" no matter whether you
pass in 'Spam', 'pSam', 'mapS', 'Smpa'... So how do I fix this in the regex
version?
Answer: You could check for multiple characters in the regex like this:
    import re

    charLeft = re.compile(r'^([%s]+)' % 'abc')
    print(charLeft.sub('', "aaabcfdsfsabca"))
    >>> fdsfsabca
Or even better, do it in a single regex:
    def strip_custom(text, x=" "):
        return re.search(' *[{s}]*(.*?)[{s}]* *$'.format(s=x), text).group(1)

    strip_custom(' aaabtestbcaa ', 'abc')
    >>> test
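For completeness, here is one possible order-insensitive version (a sketch, not the book's official solution): treating the argument as a set of characters inside a character class, the way the built-in `strip()` does, makes 'Spam', 'pSam', 'mapS' and 'Smpa' all behave identically.

```python
import re

def strip_regex(text, chars=None):
    """Regex version of str.strip(); chars is treated as a SET of characters."""
    if chars is None:
        return re.sub(r'^\s+|\s+$', '', text)
    cls = re.escape(chars)  # guard against regex metacharacters in chars
    return re.sub(r'^[{c}]+|[{c}]+$'.format(c=cls), '', text)

print(strip_regex('   Hello world!!!   '))                   # Hello world!!!
print(strip_regex('SpamSpamBaconSpamEggsSpamSpam', 'Spam'))  # BaconSpamEggs
print(strip_regex('SpamSpamBaconSpamEggsSpamSpam', 'pSam'))  # BaconSpamEggs
```

Note that the inner `Spam` of `BaconSpamEggs` survives, since only runs anchored to the ends are removed.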
|
Cross class variable only initialized once
Question: I'm having some troubles with the scope in Python. I initialize a variable
(`basePrice`) in a class (`STATIC`). The value is static and the
initialization takes some work, so I only want to do it once. Another class
`Item`, is a class that is created a lot. An `Item` object needs to calculate
its variable `price` by using `basePrice`. How can I initialize `basePrice`
only once and use it in an `Item` object?
class STATIC:
basePrice = 0
def __init__(self):
self.basePrice = self.difficultCalculation()
    def getBasePrice(self):
return self.basePrice
import STATIC
class Item:
price = 0
def __init__(self,price_):
self.price = price_ - STATIC.getBasePrice()
Answer: It might be better to write a module. For example, create the file `STATIC.py`
as shown below.
    def difficultCalculation():
        # details of implementation
        ...

    basePrice = difficultCalculation()
    # you can add other methods, classes and variables
Then, in your file containing `Item`, you can do:
from STATIC import basePrice
This makes `basePrice` accessible anywhere within that file. Because Python
caches imported modules in `sys.modules`, `difficultCalculation()` runs only
once, no matter how many files import it.
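Another sketch, if you would rather keep everything in the class: cache the result in a class attribute so the expensive call runs at most once (`difficult_calculation` here is a hypothetical stand-in for the real computation):

```python
def difficult_calculation():
    # stand-in for the expensive computation (hypothetical)
    difficult_calculation.calls += 1
    return 40

difficult_calculation.calls = 0   # counter, just to prove it runs once

class Item(object):
    _base_price = None  # shared class attribute, computed lazily, once

    def __init__(self, price):
        if Item._base_price is None:
            Item._base_price = difficult_calculation()
        self.price = price - Item._base_price

a = Item(100)
b = Item(50)
print(a.price, b.price)             # 60 10
print(difficult_calculation.calls)  # 1
```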
|
python oauth2 client issues when trying to get authorization token
Question: I am trying to use OAuth2 to get an authorization token using Python to a REST
API. I am successful doing so using CURL but not with python. I am using the
examples provided at the following docs: <https://requests-
oauthlib.readthedocs.org/en/latest/oauth2_workflow.html>
The following is my code:
#!/usr/bin/python
import requests
import requests_oauthlib
from requests_oauthlib import OAuth2Session
from oauthlib.oauth2 import BackendApplicationClient
client_id = 'AAAAAA'
client_secret = 'BBBBBB'
client = BackendApplicationClient(client_id=client_id)
oauth = OAuth2Session(client=client)
token = oauth.fetch_token(token_url='https://example.com/as/token.oauth2', client_id=client_id, client_secret=client_secret)
print token
I am getting the following error:
oauthlib.oauth2.rfc6749.errors.InvalidClientError: (invalid_client) client_id value doesn't match HTTP Basic username value
This is a very basic API that only needs client_id and client_credentials to
get an authorization token.
All information would be greatly appreciated.
Answer: The documentation specifies the following items:

    client_id = r'your_client_id'
    client_secret = r'your_client_secret'
    redirect_uri = 'https://your.callback/uri'

The error text itself is the key:

    (invalid_client) client_id value doesn't match HTTP Basic username value

It means the server received the client credentials twice, once in the
request body and once in an HTTP Basic `Authorization` header, and the two
did not agree. One common fix (a sketch; adjust to your server's setup) is to
pass the Basic auth header explicitly, so only one copy is sent:

    from requests.auth import HTTPBasicAuth

    auth = HTTPBasicAuth(client_id, client_secret)
    token = oauth.fetch_token(token_url='https://example.com/as/token.oauth2', auth=auth)

Try changing it to the above and give it a spin, using raw strings (`r''`)
for the credentials as in the documentation.
|
Convert String into a 2 Byte array (U32) - python
Question: I have a string `text="0000001011001100"`. I want to convert this string into
a 2-byte array, something like this: **(b'\x00\x02')**
byte_array=(socket.htons(text)).to_bytes(2,sys.byteorder)
But this is not working; it gives an error saying an **int** is required. I
have converted the text to an **int**, but then the entire string changes. I
need help with this.
Answer: You can convert the text to an integer and then use the `struct` module:
import struct
text = "0000001011001100"
number = int(text, 2) # 716
result = struct.pack("h", number)
b'\xcc\x02'
# or with > to change bytes order
result = struct.pack(">h", number)
b'\x02\xcc'
see: <https://docs.python.org/3/library/struct.html>
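On Python 3 you can also skip `struct` entirely and use `int.to_bytes`, choosing the byte order directly:

```python
text = "0000001011001100"
number = int(text, 2)                 # 716 == 0x02CC
print(number.to_bytes(2, "big"))      # b'\x02\xcc'
print(number.to_bytes(2, "little"))   # b'\xcc\x02'
```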
|
How to get garbage values in python
Question: This question may not have any practical value, this is just out of curiosity.
In C/C++ when you declare a variable like below:
int c;
variable `c` will have some garbage value in it. As per my understanding,
python doesn't have variables as in C/C++, but rather name-bindings to
objects. So, as I see it, there is no way to get garbage, as you always
first create the object and then assign it a name.
A garbage value is actually just whatever was stored at the memory location
the variable refers to. But I want to know if there is some way to get
garbage values in python. In other words, is there some way to access a
memory location without first initializing it? In my view it should not be
possible, and I appreciate it if this is by design. But I am just curious.
I am still learning both python and C++ and am not an expert so forgive me if
this is a stupid question.
Answer: These things usually depend on the interpreter you are using. For example,
with CPython you could do something like in [this
answer](http://stackoverflow.com/questions/121396/accessing-object-memory-address),
or use C libraries as [this other
answer](http://stackoverflow.com/questions/8250625/access-memory-address-in-python)
suggests.
> to access memory location without first initializing it
Could be done with something like this,
from ctypes import string_at
from sys import getsizeof
from binascii import hexlify
# create a dummy object, just to get a memory location
a = 0x7fff
# and use it as a pivoting point to access other locations
offset = 100
print(hexlify(string_at(id(a)+offset, getsizeof(a))))
Needless to say, this kind of thing has its risks, but it is an interesting
experiment.
|
regex works on pythex but not python2.7, finding unicode representations by regex
Question: I am having a strange regex issue where my regex works on pythex, but not in
python itself. I am using 2.7 right now. I want to remove all unicode
instances like `\x92`, of which there are many (like `'Thomas Bradley
\x93Brad\x94 Garza',`:
import re, requests
def purify(string):
strange_issue = r"""\\t<td><font size=2>G<td><a href="http://facebook.com/KilledByPolice/posts/625590984135709" target=new><font size=2><center>facebook.com/KilledByPolice/posts/625590984135709\t</a><td><a href="http://www.orlandosentinel.com/news/local/lake/os-leesburg-officer-involved-shooting-20130507"""
unicode_chars_rgx = r"[\\][x]\d+"
unicode_matches = re.findall(unicode_chars_rgx, string)
bad_list = [strange_issue]
bad_list.extend(unicode_matches)
for item in bad_list:
string = string.replace(item, "")
return string
name_rgx = r"(?:[<][TDtd][>])|(?:target[=]new[>])(?P<the_deceased>[A-Z].*?)[,]"
urls = {2013: "http://www.killedbypolice.net/kbp2013.html",
2014: "http://www.killedbypolice.net/kbp2014.html",
2015: "http://www.killedbypolice.net/" }
names_of_the_dead = []
for url in urls.values():
response = requests.get(url)
content = response.content
people_killed_by_police_that_year_alone = re.findall(name_rgx, content)
for dead_person in people_killed_by_police_that_year_alone:
names_of_the_dead.append(purify(dead_person))
dead_americans_as_string = ", ".join(names_of_the_dead)
print("RIP, {} since 2013:\n".format(len(names_of_the_dead))) # 3085! :)
print(dead_americans_as_string)
In [95]: unicode_chars_rgx = r"[\\][x]\d+"
In [96]: testcase = "Myron De\x92Shawn May"
In [97]: x = purify(testcase)
In [98]: x
Out[98]: 'Myron De\x92Shawn May'
In [103]: match = re.match(unicode_chars_rgx, testcase)
In [104]: match
How can I get these `\x00` characters out? Thank you
Answer: Certainly not by trying to find things that look like "`\\x00`".
If you want to destroy the data:
>>> re.sub('[\x7f-\xff]', '', "Myron De\x92Shawn May")
'Myron DeShawn May'
More work, but tries to preserve the text as well as possible:
>>> import unidecode
>>> unidecode.unidecode("Myron De\x92Shawn May".decode('cp1251'))
"Myron De'Shawn May"
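On Python 3, a rough equivalent (assuming the source bytes are Windows-1252 encoded, which is a guess based on the `\x92`/`\x93`/`\x94` smart-quote bytes in the data):

```python
raw = b"Myron De\x92Shawn May"

# In Windows-1252, byte 0x92 maps to U+2019 (right single quotation mark).
text = raw.decode("cp1252")
print(text.replace("\u2019", "'"))    # Myron De'Shawn May
```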
|
For loop outputting one character per line
Question: I'm writing a quick python script wrapper to query our crashplan server so I
can gather data from multiple sites then convert that to json for a migration
and I've got most of it done. It's probably a bit ugly, but I'm one step away
from getting the data I need to pass on to the json module so I can format the
data I need for reports.
The script should query ldap, get a list of names from a list of sites, then
create a command (which works).
But when printing the list in a for loop it prints out each character, instead
of each name. If I just print the list it prints out each name on a single
line. This obviously munges up the REST call as the username isn't right.
'''
Crashplan query script
Queries the crashplan server using subprocess calls and formats the output
'''
import subprocess
import json
password = raw_input("What password do you want to use: ")
sitelist = ['US - DC - Washington', 'US - FL - Miami', 'US - GA - Atlanta', 'CA - Toronto']
cmdsites = ""
for each in sitelist:
cmdsites = cmdsites + '(OfficeLocation={})'.format(each)
ldap_cmd = "ldapsearch -xLLL -S OfficeLocation -h ldap.local.x.com -b cn=users,dc=x,dc=com '(&(!(gidNumber=1088))(|%s))' | grep -w 'uid:' | awk {'print $2'}" % cmdsites
users = subprocess.check_output([ldap_cmd], shell=True)
##### EVERYTHING WORKS UP TO THIS POINT #####
for each in users:
# subprocess.call(['curl -X GET -k -u "admin:'+password+'" "https://crashplan.x.com:4285/api/User?username='+each+'@x.com&incBackupUsage=true&strKey=lastBackup"'], shell=True) ### THIS COMMAND WORKS IT JUST GETS PASSED THE WRONG USERNAME
print each #### THIS PRINTS OUT ONE LETTER PER LINE ####
print type(users) #### THIS PRINTS OUT ONE NAME PER LINE ####
Answer: You get the output as a single string which, when iterated, produces _one
character per iteration_.
You should [split it by line
breaks](https://docs.python.org/2/library/stdtypes.html#str.splitlines):
for each in users.splitlines():
print each
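A quick sketch of the difference (the names here are made up, not real ldapsearch output):

```python
# "output" stands in for what subprocess.check_output returned.
output = "alice\nbob\ncarol\n"

chars = [c for c in output]      # iterating a str yields single characters
print(chars[:5])                 # ['a', 'l', 'i', 'c', 'e']

names = output.splitlines()      # splitting yields one name per line
print(names)                     # ['alice', 'bob', 'carol']
```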
|
nosetests does another output (ImportError) then simple unittest (No Error) Why?
Question: When I run a simple test with unittest alone, it does not show any error. But
when I run my test files through nosetests, I get an ImportError. Here is the
necessary information.
# Project Structure:
-rwxrwxr-x __init__.py
-rw-rw-r-- __init__.pyc
drwxrwxr-x romannumeralconverter
drwxrwxr-x tests
./romannumeralconverter:
-rwxrwxr-x __init__.py
-rw-rw-r-- __init__.pyc
-rwxrwxr-x romannumeralconverter.py
-rw-rw-r-- romannumeralconverter.pyc
./tests:
-rwxrwxr-x __init__.py
-rw-rw-r-- __init__.pyc
-rw-rw-r-- romannumeralconvertertest.py
-rw-rw-r-- romannumeralconvertertest.pyc
# Testfile - romannumeralconvertertest.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
from romannumeralconverter.romannumeralconverter import RomanNumeralConverter
class RomanNumeralConverterTest(unittest.TestCase):
def test_parsing_millenia(self):
"""Testing if millenia"""
value = RomanNumeralConverter("M")
self.assertEquals(1000, value.convert_to_decimal())
#@unittest.skip("demonstrating skipping")
def test_parsing_century(self):
value = RomanNumeralConverter("C")
self.assertEquals(100, value.convert_to_decimal())
def test_parsing_half_century(self):
value = RomanNumeralConverter("L")
self.assertEquals(50, value.convert_to_decimal())
if __name__ == "__main__":
suite = unittest.TestLoader().loadTestsFromTestCase(RomanNumeralConverterTest)
unittest.TextTestRunner(verbosity=3).run(suite)
# Appfile: - romannumeralconverter.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
class RomanNumeralConverter(object):
def __init__(self, roman_numeral):
self.roman_numeral = roman_numeral
self.digit_map = {"M": 1000, "D": 500, "C": 100, "L": 50, "X": 10, "V": 5, "I": 1}
self.i_rule = ('V','X')
self.x_rule = ('L','C')
self.c_rule = ('D','M')
self.rules = {"I": self.i_rule, "X": self.x_rule, "C": self.c_rule}
def zwei_convert_to_decimal(self):
val = 0
oldelement = 0
for char in self.roman_numeral:
if oldelement != 0 and self.digit_map[char] > self.digit_map[oldelement]:
pass
oldelement = char
def convert_to_decimal(self):
val = 0
for char in self.roman_numeral:
val += self.digit_map[char]
return val
# Normal Test output:
▶ python romannumeralconvertertest.py
test_parsing_century (__main__.RomanNumeralConverterTest) ... ok
test_parsing_half_century (__main__.RomanNumeralConverterTest) ... ok
test_parsing_millenia (__main__.RomanNumeralConverterTest)
Testing if millenia ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK
# nosetests output:
▶ nosetests romannumeralconvertertest.py
E
======================================================================
ERROR: Failure: ImportError (cannot import name RomanNumeralConverter)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 420, in loadTestsFromName
addr.filename, addr.module)
File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/home/username/Development/learning/romannumeralconverter/tests/romannumeralconvertertest.py", line 5, in <module>
from romannumeralconverter.romannumeralconverter import RomanNumeralConverter
ImportError: cannot import name RomanNumeralConverter
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
# sys.path
/home/username/Development/learning/romannumeralconverter/tests
/home/username/Development/learning/romannumeralconverter/tests
/home/username/Development/learning/romannumeralconverter
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/home/username/.local/lib/python2.7/site-packages
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/gtk-2.0
/usr/lib/python2.7/dist-packages/ubuntu-sso-client
Answer: By default nose makes some adjustments to the `sys.path`. There is one
possible solution you can find here: [Accepted answer of - Python Nose Import
Error](http://stackoverflow.com/questions/3073259/python-nose-import-
error/3073368#3073368).
From your project structure, if you delete the `__init__.py` from:
/home/username/Development/learning/romannumeralconverter
directory, it should work.
|
open a browser page and take screenshot every n hour in python 3
Question: I want to open a web page and then take a screenshot every 2 hours via python.
Here is my code to open a page at a 2-hour interval:
import time
import webbrowser
total_breaks = 12
break_count = 0
while(break_count < total_breaks):
time.sleep(7200)
webbrowser.open("https://mail.google.com/mail/u/2")
break_count = break_count + 1
i followed [Take a screenshot of open website in python
script](http://stackoverflow.com/questions/12563350/take-a-screenshot-of-open-website-in-python-script)
but didn't get any success. I am using python 3.5. I found the module
wxPython, but it supports only 2.x. So is there a way to take a screenshot
every 2 hours using python 3?
Answer: Here's what you have to do with Selenium to get started.
1. Install Selenium using pip by `pip install selenium` in the command prompt.
* * *
2. Make sure you have the python path in your environment variables.
* * *
3. Run this script by changing the email and the password below.
* * *
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox()
browser.implicitly_wait(30)
browser.get('https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1<mpl=default<mplcache=2&emr=1&osid=1#identifier')
email = browser.find_element_by_xpath('//*[@id="Email"]')
email.clear()
email.send_keys("[email protected]") # change email
email.send_keys(Keys.RETURN)
password = browser.find_element_by_xpath('//*[@id="Passwd"]')
password.clear()
password.send_keys("password") # Change Password
password.send_keys(Keys.RETURN)
time.sleep(10)
browser.save_screenshot('screen_shot.png')
browser.close()
4. And finally you can run `schtasks` on the saved python file from the command prompt. `schtasks` is the `CRON` equivalent on Windows.
* * *
schtasks /Create /SC MINUTE /MO 120 /TN screenshot /TR "PATH_TO_PYTHON_FILE\FILE.py"
|
Vlfeat for Ipython
Question: I've been trying to set up the vlfeat library for Jupyter, which comes with
anaconda. I've installed it from <https://anaconda.org/menpo/vlfeat> but
can't import the library in the notebook. Can someone guide me on how to set
it up?
Answer: Try:
from cyvlfeat import sift
That should get the module loaded.
|
python pandas and matplotlib installation conflict
Question: I am using Mac OSX Yosemite 10.10.5 and I am trying to practice data science
with python on my laptop. I am using python 3.5.1 in a virtualenv; however,
when I install pandas and matplotlib, both of them fail on import with the
same error. The output is:
>>> import matplotlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/site-packages/matplotlib/__init__.py", line 1131, in <module>
rcParams = rc_params()
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/site-packages/matplotlib/__init__.py", line 975, in rc_params
return rc_params_from_file(fname, fail_on_error)
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/site-packages/matplotlib/__init__.py", line 1100, in rc_params_from_file
config_from_file = _rc_params_in_file(fname, fail_on_error)
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/site-packages/matplotlib/__init__.py", line 1018, in _rc_params_in_file
with _open_file_or_url(fname) as fd:
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/site-packages/matplotlib/__init__.py", line 1000, in _open_file_or_url
encoding = locale.getdefaultlocale()[1]
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/locale.py", line 559, in getdefaultlocale
return _parse_localename(localename)
File "/Users/deanarmada/.virtualenvs/python3-data-science/lib/python3.5/locale.py", line 487, in _parse_localename
raise ValueError('unknown locale: %s' % localename)
ValueError: unknown locale: UTF-8
Answer: Just run:
export LC_ALL=C
before starting python from the terminal. To make the fix permanent, you can
add that line to your shell profile (e.g. `~/.bash_profile`).
|
text extraction line splitting across multiple lines with python
Question: I have the following code:
f = open('./dat.txt', 'r')
array = []
for line in f:
# if "1\t\"Overall evaluation" in line:
# words = line.split("1\t\"Overall evaluation")
# print words[0]
number = int(line.split(':')[1].strip('"\n'))
print number
This is capable of grabbing the last int from my data, which looks like this:
299 1 "Overall evaluation: 3
Invite to interview: 3
Strength or novelty of the idea (1): 4
Strength or novelty of the idea (2): 3
Strength or novelty of the idea (3): 3
Use or provision of open data (1): 4
Use or provision of open data (2): 3
""Open by default"" (1): 2
""Open by default"" (2): 3
Value proposition and potential scale (1): 4
Value proposition and potential scale (2): 2
Market opportunity and timing (1): 4
Market opportunity and timing (2): 4
Triple bottom line impact (1): 4
Triple bottom line impact (2): 2
Triple bottom line impact (3): 2
Knowledge and skills of the team (1): 3
Knowledge and skills of the team (2): 4
Capacity to realise the idea (1): 4
Capacity to realise the idea (2): 3
Capacity to realise the idea (3): 4
Appropriateness of the budget to realise the idea: 3"
299 2 "Overall evaluation: 3
Invite to interview: 3
Strength or novelty of the idea (1): 3
Strength or novelty of the idea (2): 2
Strength or novelty of the idea (3): 4
Use or provision of open data (1): 4
Use or provision of open data (2): 3
""Open by default"" (1): 3
""Open by default"" (2): 2
Value proposition and potential scale (1): 4
Value proposition and potential scale (2): 3
Market opportunity and timing (1): 4
Market opportunity and timing (2): 3
Triple bottom line impact (1): 3
Triple bottom line impact (2): 2
Triple bottom line impact (3): 1
Knowledge and skills of the team (1): 4
Knowledge and skills of the team (2): 4
Capacity to realise the idea (1): 4
Capacity to realise the idea (2): 4
Capacity to realise the idea (3): 4
Appropriateness of the budget to realise the idea: 2"
364 1 "Overall evaluation: 3
Invite to interview: 3
...
I also need to grab the "record identifier" which in the above example would
be `299` for the first two instances and then `364` for the next one.
The commented-out code above, if I delete the last lines and just use it,
like so:
f = open('./dat.txt', 'r')
array = []
for line in f:
if "1\t\"Overall evaluation" in line:
words = line.split("1\t\"Overall evaluation")
print words[0]
# number = int(line.split(':')[1].strip('"\n'))
# print number
can grab the record identifiers.
but I'm having trouble putting the two together.
Ideally what I want is something like the following:
368
=2+3+3+3+4+3+2+3+2+3+2+3+2+3+2+3+2+4+3+2+3+2
=2+3+3+3+4+3+2+3+2+3+2+3+2+3+2+3+2+4+3+2+3+2
and so on for all records.
How can I combine the above two script components to achieve that?
Answer: Regex is the ticket. You can do it with two patterns. Something like this:
import re
with open('./dat.txt') as fin:
for line in fin:
ma = re.match(r'^(\d+) \d.+Overall evaluation', line)
if ma:
print("record identifier %r" % ma.group(1))
continue
ma = re.search(r': (\d+)$', line)
if ma:
print(ma.group(1))
continue
print("unrecognized line: %s" % line)
Note: The last print statement is not part of your requirements, but whenever
I debug regex, I always add some sort of catchall to assist with debugging bad
regex statements. Once I get my patterns straight, I remove the catchall.
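Building on that, a sketch of how the two patterns can be combined to group scores per record identifier (the sample here is a hypothetical miniature of the real file, just to show the grouping idea):

```python
import re

sample = '''299 1 "Overall evaluation: 3
Invite to interview: 3
Appropriateness of the budget to realise the idea: 3"
364 1 "Overall evaluation: 3'''

records = {}
current = None
for line in sample.splitlines():
    m = re.match(r'^(\d+) \d+ "Overall evaluation: (\d+)', line)
    if m:
        current = m.group(1)                        # record identifier
        records.setdefault(current, []).append(int(m.group(2)))
        continue
    m = re.search(r': (\d+)"?$', line)
    if m and current:
        records[current].append(int(m.group(1)))    # per-line score

print(records)  # {'299': [3, 3, 3], '364': [3]}
```

From `records` it is then trivial to build the `=2+3+...` sum lines per identifier.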
|
flask app only shows list of files with apache2
Question: I'm trying to host a flask app with an apache2 server. The server works, but
I'm only seeing a list of files, the wonderful "Index of" page. My code is
pretty simple. This is my hello.py file in /var/www/flask_dev:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello World!'
if __name__ == '__main__':
app.run(host='0.0.0.0')
I also created an apache config file located in /etc/apache2/sites-
available/flask_dev.conf:
ServerName example.com
<VirtualHost *:80>
ServerAdmin webmaster@localhost
WSGIDaemonProcess hello user=www-data group=www-data threads=5 python-path=/var/www/flask_dev
WSGIScriptAlias / /var/www/flask_dev/start.wsgi
<Directory /var/www/flask_dev>
WSGIProcessGroup hello
WSGIApplicationGroup %{GLOBAL}
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
And the needed wsgi file also located in /var/www/flask_dev/start.wsgi:
from hello import app as application
import sys
sys.stdout = sys.stderr
I'm not sure what I did wrong, I just followed a simple tutorial.
Thanks for your help :)
Answer: You probably did not install the `mod_wsgi` module for Apache.
<http://flask.pocoo.org/docs/0.10/deploying/mod_wsgi/>
Apache needs to import the `mod_wsgi` module for it to work with python.
Further instructions for installation can be found at.
<https://code.google.com/p/modwsgi/wiki/QuickInstallationGuide>
Once installed, edit your `httpd.conf` with `LoadModule wsgi_module
modules/mod_wsgi.so`
If you are on Windows, you will have to download the appropriate `mod_wsgi.so`
for your python version and architecture. Rename the file to `mod_wsgi.so` if
it has any python-specific version naming, and point the `LoadModule` line in
your conf at it.
|
Python daemon starts correctly with init-script but fails on startup
Question: I have some python daemons for raspberry pi applications running on startup
with init scripts.
The init script runs fine from the console, and starts and ends the background
process correctly.
The script was made to autostart with `sudo insserv GartenwasserUI`.
It starts on startup, as shown by the LCD backlight coming on, but it is not
in the process list after logon. A manual start with `sudo service
GartenwasserUI start` works immediately.
What could be wrong?
Here is the script:
#!/bin/sh
### BEGIN INIT INFO
# Provides: GartenwasserUI
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: GartenwasserUI acts as a mqtt client for LCD displaying and switching gpio through mqtt
# Description: Put a long description of the service here
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DIR=/usr/local/bin/Gartenwasser
DAEMON=$DIR/GartenwasserUI.py
DAEMON_NAME=GartenwasserUI
# Add any command line options for your daemon here
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using the Raspberry Pi GPIO from Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
log_daemon_msg "Starting system $DAEMON_NAME daemon"
cd $DIR
#python ./$DAEMON_NAME.py &
start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS --verbose -stdout /var/log/GartenwasserUI.log
log_end_msg $?
}
do_stop () {
log_daemon_msg "Stopping system $DAEMON_NAME daemon"
start-stop-daemon --stop --pidfile $PIDFILE --retry 10
log_end_msg $?
}
case "$1" in
start|stop)
do_${1}
;;
restart|reload|force-reload)
do_stop
do_start
;;
status)
status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
;;
*)
echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
exit 1
;;
esac
exit 0
And the script itself:
#!/usr/bin/env python
__author__ = "Bernd Gewehr"
# import python libraries
import os
import signal
import sys
import time
# import libraries
import lib_mqtt as MQTT
from Adafruit_CharLCDPlate import Adafruit_CharLCDPlate
#DEBUG = False
DEBUG = True
MQTT_TOPIC_IN = "/Gartenwasser/#"
MQTT_TOPIC = "/Gartenwasser"
MQTT_QOS = 0
VALVE_STATE = [0, 0, 0, 0, 0]
def on_message(mosq, obj, msg):
"""
Handle incoming messages
"""
topicparts = msg.topic.split("/")
if DEBUG:
print msg.topic
print topicparts
for i in range(0,len(topicparts)):
print i, topicparts[i]
print msg.payload
pin = int('0' + topicparts[len(topicparts) - 1])
value = int(msg.payload)
if topicparts[2] == "in":
if pin == 29:
VALVE_STATE[0] = value
if pin == 31:
VALVE_STATE[1] = value
if pin == 33:
VALVE_STATE[2] = value
if pin == 35:
VALVE_STATE[3] = value
Message = 'V1: ' + str(VALVE_STATE[0]) + ' V2: ' + str(VALVE_STATE[1]) + '\nV3: ' + str(VALVE_STATE[2]) + ' V4: ' + str(VALVE_STATE[3])
lcd.clear()
lcd.message(Message)
# End of MQTT callbacks
def cleanup(signum, frame):
"""
Signal handler to ensure we disconnect cleanly
in the event of a SIGTERM or SIGINT.
"""
# Cleanup modules
MQTT.cleanup()
lcd.stop()
# Exit from application
sys.exit(signum)
def loop():
"""
The main loop in which we mow the lawn.
"""
while True:
time.sleep(0.08)
buttonState = lcd.buttons()
for b in btn:
if (buttonState & (1 << b[0])) != 0:
if DEBUG: print 'Button pressed for GPIO ' + str(b[1])
if b[1] > 0: MQTT.mqttc.publish(MQTT_TOPIC + '/in/' + str(b[1]), abs(VALVE_STATE[b[2]]-1), qos=0, retain=True)
time.sleep(.5)
break
# Use the signal module to handle signals
for sig in [signal.SIGTERM, signal.SIGINT, signal.SIGHUP, signal.SIGQUIT]:
signal.signal(sig, cleanup)
# Initialise our libraries
lcd = Adafruit_CharLCDPlate()
lcd.backlight(True)
MQTT.init()
MQTT.mqttc.on_message = on_message
MQTT.mqttc.subscribe(MQTT_TOPIC_IN, qos=MQTT_QOS)
# Clear display and show greeting, pause 1 sec
lcd.clear()
lcd.message("Gartenwasser\nstartet...")
time.sleep(1)
Message = 'V1: ' + str(VALVE_STATE[0]) + ' V2: ' + str(VALVE_STATE[1]) + '\nV3: ' + str(VALVE_STATE[2]) + ' V4: ' + str(VALVE_STATE[3])
lcd.clear()
lcd.message(Message)
# Cycle through backlight colors
#col = (lcd.RED, lcd.YELLOW, lcd.GREEN, lcd.TEAL,
# lcd.BLUE, lcd.VIOLET, lcd.WHITE, lcd.OFF)
#for c in col:
# lcd.ledRGB(c)
# sleep(.5)
# assign GPIO & Status index of VALVA_STATUS
btn = ((lcd.LEFT, 29, 0),
(lcd.UP, 31, 1),
(lcd.DOWN, 33, 2),
(lcd.RIGHT, 35, 3),
(lcd.SELECT, 0, 4))
# start main procedure
loop()
Answer: I found the solution: when the machine starts up, the mosquitto service starts
with S02. Since the python daemons' init scripts had no information that they
depend on mosquitto, they were started as S02 as well.
The change that solved the problem:
Set the dependencies in the LSB header of the init script:
# Required-Start: $remote_fs $syslog mosquitto
After that, both Python daemons were executed correctly as S03.
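For reference, a sketch of the complete LSB header that the `Required-Start` line lives in (the service name and description here are illustrative, not taken from the actual script):

```
### BEGIN INIT INFO
# Provides:          gartenwasser-daemon
# Required-Start:    $remote_fs $syslog mosquitto
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: MQTT-driven valve control daemon
### END INIT INFO
```

On Debian-based systems the S/K start ordering is typically regenerated from these headers by `insserv` (usually run via `update-rc.d`), which is what bumps the dependent daemons from S02 to S03.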
|
Python 3.4: Unknown format code 'x'
Question: I have an issue with a packet sniffer in Python 3.
Python version: 3.4
I followed a tutorial whose code works for others, but not on my computer. The code is
supposed to read the MAC addresses, convert them to strings, and in the main() method
print the destination MAC, source MAC and protocol.
code: sniffer_demo.py
import socket
import struct
import textwrap
def main():
conn = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(3))
while True:
raw_data, addr = conn.recvfrom(65536)
# pass the raw bytes to the method ethernet_frame
dest_mac, src_mac, eth_proto, data = ethernet_frame(raw_data)
print('\nEthernet Frame:')
print('Destination: {}, Source: {}, Protocol: {}'.format(dest_mac, src_mac, eth_proto))
# Unpack ethernet frame
def ethernet_frame(data):
dest_mac, src_mac, proto = struct.unpack('! 6s 6s H', data[:14])
return get_mac_addr(dest_mac), get_mac_addr(src_mac), socket.htons(proto), data[14:]
# Return properly formatted MAC address: (ie AA:BB:CC:DD:EE:FF)
def get_mac_addr(bytes_addr):
bytes_str = map('{:02x}'.format, bytes_addr)
return ':'.join(bytes_str).upper()
main()
The error is:
Traceback (most recent call last):
File "sniffer_demo.py", line 28, in <module>
main()
File "sniffer_demo.py", line 11, in main
dest_mac, src_mac, eth_proto, data = ethernet_frame(raw_data)
File "sniffer_demo.py", line 19, in ethernet_frame
return get_mac_addr(dest_mac), get_mac_addr(src_mac), socket.htons(proto), data[14:]
File "sniffer_demo.py", line 24, in get_mac_addr
bytes_str = map('{:02x}'.format, bytes_addr)
ValueError: Unknown format code 'x' for object of type 'str'
How to fix that?
Answer: While trying to reproduce this, your code works exactly as expected for me. I do
not see any shebang line. Are you sure you are executing it with `python3`?
Sample output from `# python3.4 snif2.py`:
Ethernet Frame:
Destination: A4:17:31:xx:xx:xx, Source: 00:0C:F6:xx:xx:xx, Protocol: 8
Ethernet Frame:
Destination: 00:0C:F6:xx:xx:xx, Source: A4:17:31:xx:xx:xx, Protocol: 8
^CTraceback (most recent call last):
File "snif2.py", line 27, in <module>
main()
File "snif2.py", line 9, in main
raw_data, addr = conn.recvfrom(65536)
KeyboardInterrupt
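The error message in the question is the one Python 2 gives when the `x` format code is applied to a string. Under Python 3, iterating a `bytes` object yields integers, which `'{:02x}'` accepts, so the same code works there. A quick way to see the difference:

```python
# Under Python 3, iterating a bytes object yields ints,
# which the 'x' format code accepts.
addr = b'\xa4\x17\x31\x00\x0c\xf6'
mac = ':'.join(map('{:02x}'.format, addr)).upper()
print(mac)  # A4:17:31:00:0C:F6

# Under Python 2 the same expression iterates a str, yielding
# 1-character strings, and '{:02x}'.format('a') raises
# ValueError: Unknown format code 'x' for object of type 'str'
```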
|
NetBeans complains, but the code runs
Question: I am new to python. I am writing programs in NetBeans.
* NetBeans 8.1
* Python Plugin for NetBeans
* Python 3.5.1
* Plugin is set up for 3.5.1, instead of the default 2.7
NetBeans complains when I write the statement
print ("_ ", end='')
The error is
no viable alternative at input '='
It appears that NetBeans is checking for 2.7 syntax, instead of 3.5. I am able
to run the code, so NetBeans is using 3.5 to execute.
How do I configure NetBeans so it uses the correct syntax checking?
* * *
After the recommendation of @alecxe, I reported a bug to NetBeans.
NetBeans does not support python 3.x. The plugin runs the correct version, but
the IDE syntax checking is linked to 2.x.
> Thank you for your report. Note that we do not officially support Python 3.x
> yet. However, It is a high-importance task on our nbPython Jira board...
> Marking this bug as Duplicate. Suggest you follow
> [Bug#229940](https://netbeans.org/bugzilla/show_bug.cgi?id=229940) for
> notification.
PS. PyCharm is great.
Answer: The problem is reproducible on my end too. Even if the default Python
Environment is set to Python3.5 and the Project Interpreter is also set to
Python3.5, it still uses Python2 specific syntax checks. For example, it does
not highlight the `print` if it is used as a statement and not function:
[](http://i.stack.imgur.com/jjIEXm.png)
I don't think this particular behavior is configurable and **this is a bug**
(I suspect the bundled Jython is used for the "live" syntax checking). You
should definitely file an issue
[here](https://netbeans.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=STARTED&bug_status=REOPENED&list_id=648030&order=changeddate%20DESC%2Cbug_status%2Cpriority%2Cassigned_to%2Cbug_id&product=python&query_based_on=&query_format=advanced).
* * *
External tools like `PyLint` might help, but it is [not yet
integrated](https://netbeans.org/bugzilla/show_bug.cgi?id=161404).
* * *
And, just saying, PyCharm has a completely free community edition.
|
Kivy does not detect OpenGL 2.0
Question: I have decided to do some programming in Kivy cross platform and installed
Kivy on my computer successfully. The problem is that when I run my code, I
get this error:
[INFO ] [Kivy ] v1.9.1
[INFO ] [Python ] v3.4.4 (v3.4.4:737efcadf5a6, Dec 20 2015, 20:20:57) [MSC v.1600 64 bit (AMD64)]
[INFO ] [Factory ] 179 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_sdl2 (img_pil, img_ffpyplayer ignored)
[INFO ] [OSC ] using <thread> for socket
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] OpenGL version <b'1.1.0'>
[INFO ] [GL ] OpenGL vendor <b'Microsoft Corporation'>
[INFO ] [GL ] OpenGL renderer <b'GDI Generic'>
[INFO ] [GL ] OpenGL parsed version: 1, 1
[CRITICAL ] [GL ] Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.1
Version: b'1.1.0'
Vendor: b'Microsoft Corporation'
Renderer: b'GDI Generic'
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
The application will leave now.
And this error box pops out:

I have checked the OpenGL version of my GPU via GPU Caps Viewer, which reports support
up to OpenGL 2.1, but Kivy somehow doesn't detect it and falls back to Microsoft's GDI
Generic renderer instead. I did some research on the internet and found that the usual
fix is to update the graphics driver from the card manufacturer's site, but this didn't
work in my case. I have already updated my graphics drivers (I am running an NVIDIA
GeForce GT 435M on 64-bit Windows 8).
**My question is:** Is there a way to let Kivy switch from GDI Generic driver
to NVIDIA driver? Or is there a problem somewhere else?
Answer: On Windows 7 Pro 32-bit, adding `Config.set('graphics', 'multisamples', '0')`
solved the error for me.
import kivy
kivy.require('1.9.1') # replace with your current kivy version !
from kivy.app import App
from kivy.uix.label import Label
# add the following 2 lines to solve OpenGL 2.0 bug
from kivy import Config
Config.set('graphics', 'multisamples', '0')
class MyApp(App):
def build(self):
return Label(text='Hello world')
if __name__ == '__main__':
MyApp().run()
After the change, the OpenGL version is reported correctly:
> [INFO ] [GL ] GLEW initialization succeeded
>
> [INFO ] [GL ] OpenGL version <2.1.0 - Build 8.15.10.2281>
|
Travis throws Python syntax error from a print statement in node-sass when using pytest
Question: I'm having a strange problem with Travis when testing a Django app with
[pytest-django](https://pypi.python.org/pypi/pytest-django). All my tests pass
locally and apparently on travis as well but no matter what I do, I get errors
from `node-sass` and `node-gyp` each time.
None of my tests use any node modules (if that is even possible). I do use gulp
with gulp-sass, but that seems to work fine when it runs before the tests.
Here is the error output:
$ py.test --ds=<project>.settings.travis
============================= test session starts ==============================
platform linux -- Python 3.4.2 -- py-1.4.26 -- pytest-2.6.4
django settings: <project>.settings.travis (from command line option)
plugins: django
collected 13 items / 7 errors
<project>/tests/test_<suite_1>.py ..........
<project>/tests/test_<suite_2>.py ...
==================================== ERRORS ====================================
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/MSVSSettings_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/common_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/easy_xml_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/input_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/generator/msvs_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/generator/ninja_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
ERROR collecting node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/generator/xcode_test.py
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/_pytest/python.py:463: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=True)
../../../virtualenv/python3.4.2/lib/python3.4/site-packages/py/_path/local.py:629: in pyimport
__import__(pkgpath.basename)
E File "/home/travis/build/<user>/<project>/node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 37
E print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
E ^
E SyntaxError: invalid syntax
===================== 13 passed, 7 error in 20.94 seconds ======================
Answer: The problem turned out to be the (old) version of `pytest` already installed
in Travis satisfying the dependency in `pytest-django`. Adding the correct
version of `pytest` to `requirements.txt` fixed the issue.
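The underlying symptom was pytest collecting the Python 2 test files that ship inside `node_modules`. Independently of the version pin, pytest can be told never to descend into that tree; a hedged sketch of a `pytest.ini` (section name per pytest 2.x conventions; the extra directory names are illustrative):

```ini
# pytest.ini (or the [pytest] section of setup.cfg)
# keeps pytest from collecting tests inside node_modules at all
[pytest]
norecursedirs = node_modules .git .tox
```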
|
Replace multiple text values in a single file
Question: Python 3 - an attempt at a Cisco router deployment script. I am attempting to
replace multiple text values in an input file, 'router-input.txt'.
Unfortunately, I can't figure out how to replace multiple values in a single
file. After running the code below, only the last value, IOSOLD, is
replaced.
import fileinput
HOSTNAME = input("Hostname: ")
IOSCURRENT = input("Current IOS image: ")
IOSOLD = input("Old IOS image: ")
f = open("router-input.txt",'r')
filedata = f.read()
f.close()
newdata = filedata.replace("$HOSTNAME", HOSTNAME )
newdata = filedata.replace("$IOSCURRENT", IOSCURRENT )
newdata = filedata.replace("$IOSOLD", IOSOLD )
f = open('output.txt','w')
f.write(newdata)
f.close()
Answer: You keep editing `filedata` and saving it in `newdata`. Each time, you're
overwriting your previous changes. Try:
newdata = filedata.replace("$HOSTNAME", HOSTNAME )
newdata = newdata.replace("$IOSCURRENT", IOSCURRENT )
newdata = newdata.replace("$IOSOLD", IOSOLD )
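If more placeholders are added later, the same chain can be written as a loop over a dict of replacements (the sample values and file contents below are illustrative):

```python
# Map each placeholder to its replacement; the loop applies them in turn.
replacements = {
    "$HOSTNAME": "router1",
    "$IOSCURRENT": "c2900-ios155.bin",
    "$IOSOLD": "c2900-ios124.bin",
}

filedata = ("hostname $HOSTNAME\n"
            "boot system flash $IOSCURRENT\n"
            "no boot system flash $IOSOLD")

newdata = filedata
for placeholder, value in replacements.items():
    newdata = newdata.replace(placeholder, value)

print(newdata)
```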
|
How to install and use rpy2 on Ubuntu
Question: I am trying to use Python to call R through rpy2. I am working on Ubuntu
15.10. I have installed Python 3.5.1 as part of Anaconda 2.4.1 (64bit), R and
rpy2 version 2.7.6. When I try $ python -m 'rpy2.tests' in the terminal, I
get the following error:
Traceback (most recent call last):
File "/home/thirsty/anaconda3/lib/python3.5/runpy.py", line 170, in _run_module_as_main
"__main__", mod_spec)
File "/home/thirsty/anaconda3/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/thirsty/anaconda3/lib/python3.5/site-packages/rpy2/tests.py", line 23, in <module>
import rpy2.tests_rpy_classic
File "/home/thirsty/anaconda3/lib/python3.5/site-packages/rpy2/tests_rpy_classic.py", line 3, in <module>
import rpy2.rpy_classic as rpy
File "/home/thirsty/anaconda3/lib/python3.5/site-packages/rpy2/rpy_classic.py", line 5, in <module>
import rpy2.rinterface as ri
File "/home/thirsty/anaconda3/lib/python3.5/site-packages/rpy2/rinterface/__init__.py", line 99, in <module>
from rpy2.rinterface._rinterface import *
ImportError: /home/thirsty/anaconda3/bin/../lib/libreadline.so.6: undefined symbol: PC
Answer: I have resolved the issue. The versions of Python and rpy2 are fine. The
command $ python -m 'rpy2.tests' is probably not the right way to test rpy2. After
starting a Python shell, typing import rpy2.robjects as robjects worked
without any errors and I was able to read files using R.
|
Python: With Scrapy Script- Is this the best way to scrape urls from forums?
Question: What I want to do:
* Scrape all urls from this website: <http://www.captainluffy.net/> (my friend's website, which I have permission to scrape urls from)
* However, I can't just brute-force everything, as I'll end up with lots of duplicate links (98% being duplicates)
* Even if I make my log file only contain unique urls, it still could be a couple of million links (which will take quite some time to get).
I can pause/resume my scrapy script thanks to this:
<http://doc.scrapy.org/en/latest/topics/jobs.html>
I've set the script so it splits every 1,000,000 records.
And the Python dictionary only checks url keys for duplicates within each text
file. So at the very least, the urls within each file will be unique. If I had
a bigger dictionary, it would tremendously slow down the process IMO. Having 1
duplicate (every 1,000,000 logs) is better than thousands.
This is the Python script code I'm currently using:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field
class MyItem(Item):
url=Field()
f=open("items0"+".txt","w")
num=open("number.txt","w")
class someSpider(CrawlSpider):
name = "My script"
domain=raw_input("Enter the domain:\n")
allowed_domains = [domain]
starting_url=raw_input("Enter the starting url with protocol:\n")
start_urls = [starting_url]
i=0
j=0
dic={}
global f
rules = (Rule(LxmlLinkExtractor(allow_domains=(domain)), callback='parse_obj', follow=True),)
def parse_obj(self,response):
for link in LxmlLinkExtractor(allow_domains=(self.domain)).extract_links(response):
item = MyItem()
item['url'] = link.url
if self.dic.has_key(item['url']):
continue
global f
global num
f.write(item['url']+"\n")
self.dic[item['url']]=True
self.i+=1
if self.i%1000000==0:
self.j+=1
f.close()
f=open("items"+str(self.j)+".txt","w")
num.write(str(self.j+1)+"\n")
Does anybody have a better method to scrape?
How many log files do you estimate my scrapy script will produce from a website
like this?
Answer: Scrapy drops duplicate requests via
[DUPEFILTER_CLASS](http://doc.scrapy.org/en/latest/topics/settings.html#dupefilter-
class); the default setting is
[RFPDupeFilter](https://github.com/scrapy/scrapy/blob/1.0/scrapy/dupefilters.py#L28),
which works similarly to your method but does not split the seen urls across many files.
I have created a POC.
# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
class ExampleSpider(CrawlSpider):
name = "ExampleSpider"
allowed_domains = ["www.example.com", "www.iana.org"]
start_urls = (
'http://www.example.com/',
)
rules = (Rule(LxmlLinkExtractor(allow_domains=allowed_domains), callback='parse_obj', follow=True),)
log_file = open('test.log', 'a')
def parse_obj(self, response):
#self.logger.info(response.url)
self.logger.info(self.settings['DUPEFILTER_CLASS'])
self.log_file.write(response.url + '\n')
Run it with `scrapy crawl ExampleSpider -s DUPEFILTER_DEBUG=1`, there should
be some debug info like following.
> [scrapy] DEBUG: Filtered duplicate request: <GET
> <http://www.iana.org/about/framework>>
|
Can't seem to get rid of the error message no matter what I've tried. From "Hello! Python" book
Question: Here's the error I'm getting when I try to run the code below:
Traceback (most recent call last):
File "/Users/JPagz95/Documents/Hunt_the_Wumpus_3.py", line 76, in <module>
visit_cave(0)
File "/Users/JPagz95/Documents/Hunt_the_Wumpus_3.py", line 15, in visit_cave
unvisited_caves.remove(cave_number)
AttributeError: 'range' object has no attribute 'remove'
I understand that int and range objects don't have those methods, but these
variables shouldn't end up being of those types in the first place. I realize
it's a lot of code, but I feel like the problem might be in more than one place.
This code is based on chapter two of the "Hello! Python" book, if that helps.
I would appreciate any help I can get!
from random import choice
cave_numbers = range(0,20)
unvisited_caves = range(0,20)
visited_caves = []
def create_tunnel(cave_from, cave_to):
""" create a tunnel between cave_from and cave_to """
caves[cave_from].append(cave_to)
caves[cave_to].append(cave_from)
def visit_cave(cave_number):
""" mark a cave as visited """
visited_caves.append(cave_number)
unvisited_caves.remove(cave_number)
def choose_cave(cave_list):
""" pick a cave from a list, provided that the
cave has less than 3 tunnels """
cave_number = choice(cave_list)
while len(caves[cave_number]) >= 3:
cave_number = choice(cave_list)
return cave_number
def print_caves():
""" print out the current cave structure """
for number in cave_numbers:
print (number, ":", caves[number])
print ('----------')
def setup_caves(cave_numbers):
""" create a starting list of caves """
caves = []
for number in cave_numbers:
caves.append([number])
return caves
def link_caves():
""" make sure all of the caves are connected with two way tunnels"""
while unvisited_caves != []:
this_cave = choose_cave(visited_caves)
next_cave = choose_cave(unvisited_caves)
create_tunnel(this_cave, next_cave)
visit_cave(next_cave)
def finish_caves():
""" link the rest of the caves with one way tunnels """
for cave in cave_numbers:
while len(caves[cave]) < 3:
passage_to = choose_cave(cave_numbers)
caves[cave].append(passage_to)
def print_location(player_location):
""" tell the player about where they are """
print ("You are in a cave ", player_location)
print ("From here you can see caves: ")
print (caves[player_location])
if wumpus_location in caves[player_location]:
print ("I smell a wumpus!")
def get_next_location():
""" Get the player's next location """
print ("Which cave next?")
player_input = input(">")
if (not player_input.isdigit() or
int(player_input) not in
caves[player_location]):
print(player_input + "?")
print ("That's not a direction I can see!")
return None
else:
return int(player_input)
caves = setup_caves(cave_numbers)
visit_cave(0)
print_caves()
link_caves()
print_caves()
finish_caves()
wumpus_location = choice(cave_numbers)
player_location = choice(cave_numbers)
while player_location == wumpus_location:
player_location = choice(cave_numbers)
print ("Welcome to Hunt the Wumpus!")
print ("You can see ", len(cave_numbers), " caves")
print ("To play, just type the number")
print ("of the cave you wish to enter next")
while True:
print_location(player_location)
new_location = get_next_location()
if new_location != None:
player_location = new_location
if player_location == wumpus_location:
print ("Aargh! You got eaten by a wumpus!")
Answer: Make sure you declare as
unvisited_caves = list(range(0,20))
By default `range(0,20)` returns an object that "generates" the values from 0 up to
(but not including) 20. It is much more memory efficient because it only needs to
remember the start and end values (0 and 20) along with the step size (in your case 1).
Contrast this with a **list**, which must store every one of its values, and
from which you can remove individual values.
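A quick demonstration of the difference:

```python
r = range(0, 5)
caves = list(range(0, 5))    # a real list: [0, 1, 2, 3, 4]

caves.remove(2)              # lists support remove()
print(caves)                 # [0, 1, 3, 4]

# range objects do not have a remove() method at all:
print(hasattr(r, 'remove'))  # False
```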
|
Convert Json Dictionary Objects to Python dictionary
Question: I have a Json file with dictionary like objects
{"d1a": 91, "d1b": 2, "d1c": 1, "d1d": 5, "d1e": 7, "d1f": 77, "d1e": 999}
{"d2a": 1, "d2b": 2, "d2c": 3, "d2d": 4, "d2e": 5, "d2f": 6, "d2e": 7}
{"d3a": 1, "d3b": 2, "d3c": 3, "d3d": 4, "d3e": 5, "d3f": 6, "d3e": 7}
I want to convert this into a python dictionary with the same format
with open("myfile.json", "r") as myfile:
# not sure how the conversion will start
Answer: If this is how the file contents look, the file is not valid JSON as a
whole, but each individual line is.
You can read the file _line by line_ and call `json.loads()`:
import json
with open("myfile.json", "r") as myfile:
for line in myfile:
print(json.loads(line))
And you can build a list of dictionaries by using a [list
comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-
comprehensions):
objs = [json.loads(line) for line in myfile]
You can also call `loads()` just once if you surround the contents with
`[` and `]` and join the lines with commas:
with open("myfile.json") as myfile:
data = "[" + ",".join(myfile.readlines()) + "]"
print(json.loads(data))
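A self-contained version of the line-by-line approach, using an in-memory file in place of `myfile.json` (the sample data is shortened from the question):

```python
import io
import json

raw = '{"d1a": 91, "d1b": 2}\n{"d2a": 1, "d2b": 2}\n'
myfile = io.StringIO(raw)  # stands in for open("myfile.json")

# one dict per line, collected into a list
objs = [json.loads(line) for line in myfile]
print(objs[0]["d1a"])  # 91
```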
|
Check type existence in Python module
Question: What would be the most Pythonic way of determining if an object's type is
contained in a specific module?
For example, let's say I want to match date, time, and datetime classes from
the datetime module.
import datetime
mylist = [obj1, obj2, obj3, ...]
for obj in mylist:
if type(obj) in [datetime.datetime, datetime.time, datetime.date]:
#Do thing
It seems silly to build a list of the three types on every loop iteration. Is
there some simpler way of saying "if type(obj) is in some module"?
Answer: [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance)
can check against multiple types:
> `isinstance(object, classinfo)`
>
> Return true if the object argument is an instance of the `classinfo`
> argument, or of a (direct, indirect or virtual) subclass thereof. If object
> is not an object of the given type, the function always returns false.
> **If`classinfo` is a tuple of type objects (or recursively, other such
> tuples), return true if object is an instance of any of the types**. If
> `classinfo` is not a type or tuple of types and such tuples, a `TypeError`
> exception is raised.
if isinstance(obj, (datetime.datetime, datetime.time, datetime.date)):
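Applied to the loop from the question (the sample objects in `mylist` are illustrative):

```python
import datetime

mylist = [datetime.date(2016, 1, 24), datetime.time(12, 30), "not a date", 42]

# Build the tuple once instead of a fresh list on every iteration.
DATETIME_TYPES = (datetime.datetime, datetime.date, datetime.time)

matches = [obj for obj in mylist if isinstance(obj, DATETIME_TYPES)]
print(len(matches))  # 2
```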
|
How can I animate text character by character in Tkinter?
Question: So I know that in the Python console you can use this:
for char in introstring:
sleep(0.2)
sys.stdout.write(char)
sys.stdout.flush()
and it will display the text in the window character by character, pausing 0.2
seconds between characters.
How can I transfer this into a Tkinter window? For example, I have the
text:
canvas.create_text((720,454),text="Just in case you start to feel sniffily, or something",fill="white", font=('arial'))
Is there a way to get the same animation in the Tkinter GUI I have set
up? I've heard something about the `.after` method, but I can't seem to work out
how it would apply here.
Answer: Here's a very simple example that prints text character by character to a
canvas, with a delay of 500 milliseconds between characters.
import Tkinter as tk
root = tk.Tk()
canvas = tk.Canvas(root)
canvas.pack()
canvas_text = canvas.create_text(10, 10, text='', anchor=tk.NW)
test_string = "This is a test"
#Time delay between chars, in milliseconds
delta = 500
delay = 0
for i in range(len(test_string) + 1):
s = test_string[:i]
update_text = lambda s=s: canvas.itemconfigure(canvas_text, text=s)
canvas.after(delay, update_text)
delay += delta
root.mainloop()
This code has been tested on Python 2.6. To run it on Python 3 you need to
change the `import` statement to `import tkinter as tk`
* * *
Here's a more sophisticated example that displays text typed into an Entry
widget. The text is shown when `Enter` is pressed in the Entry widget.
#!/usr/bin/env python
''' animate text in a tkinter canvas
See http://stackoverflow.com/q/34973060/4014959
Written by PM 2Ring 2016.01.24
'''
import Tkinter as tk
class Application(object):
def __init__(self):
root = tk.Tk()
self.canvas = tk.Canvas(root)
self.canvas.pack()
self.canvas_text = self.canvas.create_text(10, 10, text='', anchor=tk.NW)
self.entry = tk.Entry(root)
self.entry.bind("<Return>", self.entry_cb)
self.entry.pack()
root.mainloop()
def animate_text(self, text, delta):
''' Animate canvas text with a time delay given in milliseconds '''
delay = 0
for i in range(len(text) + 1):
update_text = lambda s=text[:i]: self.canvas.itemconfigure(self.canvas_text, text=s)
self.canvas.after(delay, update_text)
delay += delta
def entry_cb(self, event):
self.animate_text(self.entry.get(), 250)
app = Application()
|
Python - CSV to Matrix
Question: Can you help me with this problem?
I'm new to programming and want to find out how to create a matrix, which
looks like this:
matrix = {"hello":["one","two","three"],
"world": ["five","six","seven"],
"goodbye":["one","two","three"]}
I want to import a CSV which has all the strings (one, two, three, ...) in it.
I tried the split method, but I'm not getting there... Another
problem is the names of the categories (hello, world, goodbye).
Do you have any suggestions?
Answer: Have you looked into the `csv` module?
<https://docs.python.org/2/library/csv.html>
import csv
TEST_TEXT = """\
hello,one,two,three
world,four,five,six
goodbye,one,two,three"""
TEST_FILE = TEST_TEXT.split("\n")
#file objects iterate over newlines anyway
#so this is how it would be when opening a file
#this would be the minimum needed to use the csv reader object:
for row in csv.reader(TEST_FILE):
print(row)
#or to get a list of all the rows you can use this:
as_list = list(csv.reader(TEST_FILE))
#splitting off the first element and using it as the key in a dictionary
dict_I_call_matrix = {row[0]:row[1:] for row in csv.reader(TEST_FILE)}
print(dict_I_call_matrix)
without_csv = [row.split(",") for row in TEST_FILE] #...row in TEST_TEXT.split("\n")]
matrix = [row[1:] for row in without_csv]
labels = [row[0] for row in without_csv]
|
scrapy "Missing scheme in request url"
Question: Here's my code below-
import scrapy
from scrapy.http import Request
class lyricsFetch(scrapy.Spider):
name = "lyricsFetch"
allowed_domains = ["metrolyrics.com"]
print "\nEnter the name of the ARTIST of the song for which you want the lyrics for. Minimise the spelling mistakes, if possible."
artist_name = raw_input('>')
print "\nNow comes the main part. Enter the NAME of the song itself now. Again, try not to have any spelling mistakes."
song_name = raw_input('>')
artist_name = artist_name.replace(" ", "_")
song_name = song_name.replace(" ","_")
first_letter = artist_name[0]
print artist_name
print song_name
start_urls = ["www.lyricsmode.com/lyrics/"+first_letter+"/"+artist_name+"/"+song_name+".html" ]
print "\nParsing this link\t "+ str(start_urls)
def start_requests(self):
yield Request("www.lyricsmode.com/feed.xml")
def parse(self, response):
lyrics = response.xpath('//p[@id="lyrics_text"]/text()').extract()
with open ("lyrics.txt",'wb') as lyr:
lyr.write(str(lyrics))
#yield lyrics
print lyrics
I get the correct output when I use the scrapy shell; however, whenever I try
to run the script using `scrapy crawl` I get the ValueError below. What am I doing
wrong? I searched this site and others and came up with nothing. I got
the idea of yielding a request from another question on here, but it
still didn't work. Any help?
My traceback-
Enter the name of the ARTIST of the song for which you want the lyrics for. Minimise the spelling mistakes, if possible.
>bullet for my valentine
Now comes the main part. Enter the NAME of the song itself now. Again, try not to have any spelling mistakes.
>your betrayal
bullet_for_my_valentine
your_betrayal
Parsing this link ['www.lyricsmode.com/lyrics/b/bullet_for_my_valentine/your_betrayal.html']
2016-01-24 19:58:25 [scrapy] INFO: Scrapy 1.0.3 started (bot: lyricsFetch)
2016-01-24 19:58:25 [scrapy] INFO: Optional features available: ssl, http11
2016-01-24 19:58:25 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'lyricsFetch.spiders', 'SPIDER_MODULES': ['lyricsFetch.spiders'], 'BOT_NAME': 'lyricsFetch'}
2016-01-24 19:58:27 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-01-24 19:58:28 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-01-24 19:58:28 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-01-24 19:58:28 [scrapy] INFO: Enabled item pipelines:
2016-01-24 19:58:28 [scrapy] INFO: Spider opened
2016-01-24 19:58:28 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-01-24 19:58:28 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-01-24 19:58:28 [scrapy] ERROR: Error while obtaining start requests
Traceback (most recent call last):
File "C:\Users\Nishank\Miniconda2\lib\site-packages\scrapy\core\engine.py", line 110, in _next_request
request = next(slot.start_requests)
File "C:\Users\Nishank\Desktop\SNU\Python\lyricsFetch\lyricsFetch\spiders\lyricsFetch.py", line 26, in start_requests
yield Request("www.lyricsmode.com/feed.xml")
File "C:\Users\Nishank\Miniconda2\lib\site-packages\scrapy\http\request\__init__.py", line 24, in __init__
self._set_url(url)
File "C:\Users\Nishank\Miniconda2\lib\site-packages\scrapy\http\request\__init__.py", line 59, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: www.lyricsmode.com/feed.xml
2016-01-24 19:58:28 [scrapy] INFO: Closing spider (finished)
2016-01-24 19:58:28 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 1, 24, 14, 28, 28, 231000),
'log_count/DEBUG': 1,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'start_time': datetime.datetime(2016, 1, 24, 14, 28, 28, 215000)}
2016-01-24 19:58:28 [scrapy] INFO: Spider closed (finished)
Answer: As @tintin said, you are missing the `http` scheme in the URLs. Scrapy needs
fully qualified URLs in order to process the requests.
As far as I can see, you are missing the scheme in:
start_urls = ["www.lyricsmode.com/lyrics/ ...
and
yield Request("www.lyricsmode.com/feed.xml")
In case you are parsing URLs from the HTML content, you should use `urljoin`
to ensure you get a fully qualified URL, for example:
next_url = response.urljoin(href)
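Scrapy's `response.urljoin` is built on the standard library's `urljoin`, so the behavior is easy to check on its own (a sketch; the page and feed paths are taken from the traceback above):

```python
from urllib.parse import urljoin  # Python 3; on Python 2 use urlparse.urljoin

base = "http://www.lyricsmode.com/lyrics/b/bullet_for_my_valentine/your_betrayal.html"
# A relative or root-relative href is resolved against the page URL, yielding
# the fully qualified URL (with the http scheme) that Request requires.
print(urljoin(base, "/feed.xml"))  # → http://www.lyricsmode.com/feed.xml
```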
|
Fitting a curve python
Question: I am trying to fit a curve in python with this function
def func(x,a,c,d,e):
return a*((x/45)**c)*((1+(x/45)**d)/2)**((e-c)/d)
but I get this error: TypeError: unsupported operand type(s) for /: 'list' and
'int'
What should I do?
Answer: You have to cast `x` as a numpy array.
import numpy as np
def func(x,a,c,d,e):
x=np.array(x)
return a*((x/45)**c)*((1+(x/45)**d)/2)**((e-c)/d)
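A self-contained sketch of the fix (NumPy assumed installed; the sample inputs and parameters are arbitrary). `np.asarray` accepts a plain list and avoids a copy when `x` is already an array:

```python
import numpy as np

def func(x, a, c, d, e):
    x = np.asarray(x, dtype=float)  # lists now work; '/' becomes elementwise
    return a * ((x / 45) ** c) * ((1 + (x / 45) ** d) / 2) ** ((e - c) / d)

# With a = c = d = e = 1 the formula reduces to x/45, which makes the cast
# easy to verify by hand.
print(func([45, 90], 1, 1, 1, 1))  # → [1. 2.]
```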
|
Django, AttributeError: 'module' object has no attribute
Question: Excuse me for my English. I am starting a Django project (I'm a beginner with Django &
Python) using Django REST Framework.
My project :
# tree -I 'env'
.
├── api
│ ├── admin.py
│ ├── admin.pyc
│ ├── apps.py
│ ├── apps.pyc
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── migrations
│ │ ├── 0001_initial.py
│ │ ├── 0001_initial.pyc
│ │ ├── __init__.py
│ │ └── __init__.pyc
│ ├── models.py
│ ├── models.pyc
│ ├── serializers.py
│ ├── serializers.pyc
│ ├── tests.py
│ ├── tests.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── views.py
│ └── views.pyc
├── db.sqlite3
├── hugs
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
└── manage.py
My project (Hugs) urls.py
from django.conf.urls import url, include
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^', include('api.urls')),
]
My app (api) urls.py :
from django.conf.urls import url
from . import views
urlpatterns = ['',
url(r'^snippets/$', views.snippet_list),
url(r'^snippets/(?P<pk>[0-9]+)/$', views.snippet_detail),
]
My app views (api/views.py) :
from django.shortcuts import render
# Create your views here.
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser
from api.models import Snippet
from api.serializers import SnippetSerializer
class JSONResponse(HttpResponse):
"""
An HttpResponse that renders its content into JSON.
"""
def __init__(self, data, **kwargs):
content = JSONRenderer().render(data)
kwargs['content_type'] = 'application/json'
super(JSONResponse, self).__init__(content, **kwargs)
@csrf_exempt
def snippet_list(request):
....
....
Error :
AttributeError: 'module' object has no attribute 'snippet_list'
Callstack :
# ./manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x7fe26a8ba050>
Traceback (most recent call last):
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 116, in inner_run
self.check(display_num_errors=True)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 10, in check_url_config
return check_resolver(resolver)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 19, in check_resolver
for pattern in resolver.url_patterns:
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 417, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 410, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/var/www/html/web-hugs/developments/darksite/hugs/urls.py", line 21, in <module>
url(r'^', include('api.urls')),
File "/var/www/html/web-hugs/developments/darksite/env/local/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 52, in include
urlconf_module = import_module(urlconf_module)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/var/www/html/web-hugs/developments/darksite/api/urls.py", line 5, in <module>
url(r'^snippets/$', views.snippet_list),
AttributeError: 'module' object has no attribute 'snippet_list'
Thanking you in advance ! :)
Good day !
Answer: `snippet_list` should not be defined inside JSONResponse: unindent it.
Note that Django has provided a JSONResponse class since 1.8, so there is no need
to define your own.
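The root cause is plain Python, not Django: a `def` nested inside a class body becomes an attribute of that class, not of the module, so `views.snippet_list` does not exist. A minimal sketch of the corrected layout (no Django required; the stand-in class and return values are made up):

```python
import sys

class JSONResponse(object):
    def helper(self):            # lives on the class: JSONResponse.helper
        return "inside class"

def snippet_list(request):       # lives on the module: importable as views.snippet_list
    return "at module level"

module = sys.modules[__name__]
print(hasattr(module, "snippet_list"))        # → True
print(hasattr(JSONResponse, "snippet_list"))  # → False
```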
|
Pants includes OS X specific Python wheels
Question: **TLDR** : Pants fetches OS X specific wheels because I'm developing on a Mac. How
can I avoid this, or specify that I will deploy to Ubuntu?
**Full story** :
Trying to package a Python application with Pants. Going great so far, but ran
into a problem which I've been stuck at for a while. I'm developing on a
macbook but deploying to EC2 Ubuntu.
Here's what I've done so far:
1. Created virtualenv.
2. Added BUILD files to applications, with the [suggested 3rd party pattern](https://pantsbuild.github.io/3rdparty_py.html) for third party packages.
3. Ran `./pants run.py backend:admin_server` which runs fine and generated `dist/admin_server.pex`
4. Scp that .pex onto a fresh EC2 Ubuntu box.
However when I run the application there, I get:
Failed to execute PEX file, missing compatible dependencies for:
mysql-python
pycrypto
The problem seems to be that Pants takes OS X specific wheels for these 2:
pex: - MySQL_python-1.2.5-cp27-none-macosx_10_11_intel.whl pex: -
pycrypto-2.6.1-cp27-none-macosx_10_11_intel.whl
How can I avoid that, or specify which OS they should run on?
Here's the full output:
ubuntu@ip-***:~$ export PEX_VERBOSE=1
ubuntu@ip-***:~$ python admin_server.pex
pex: Found site-library: /usr/local/lib/python2.7/dist-packages
pex: Found site-library: /usr/lib/python2.7/dist-packages
pex: Tainted path element: /usr/local/lib/python2.7/dist-packages
pex: Tainted path element: /usr/lib/python2.7/dist-packages
pex: Scrubbing from site-packages: /usr/local/lib/python2.7/dist-packages
pex: Scrubbing from site-packages: /usr/lib/python2.7/dist-packages
pex: Scrubbing from user site: /home/ubuntu/.local/lib/python2.7/site-packages
pex: Failed to resolve a requirement: MySQL-python==1.2.5
pex: Failed to resolve a requirement: pycrypto==2.6.1
pex: Unresolved requirements:
pex: - mysql-python
pex: - pycrypto
pex: Distributions contained within this pex:
pex: - six-1.10.0-py2.py3-none-any.whl
pex: - protobuf-2.6.1-py2.7.egg
pex: - setuptools-19.5-py2.py3-none-any.whl
pex: - MySQL_python-1.2.5-cp27-none-macosx_10_11_intel.whl
pex: - pycrypto-2.6.1-cp27-none-macosx_10_11_intel.whl
pex: - futures-3.0.4-py2-none-any.whl
pex: - webapp2-2.5.2-py2-none-any.whl
pex: - requests-2.9.0-py2.py3-none-any.whl
pex: - jmespath-0.9.0-py2.py3-none-any.whl
pex: - beautifulsoup4-4.4.1-py2-none-any.whl
pex: - python_dateutil-2.4.2-py2.py3-none-any.whl
pex: - boto3-1.2.3-py2.py3-none-any.whl
pex: - WebOb-1.5.1-py2.py3-none-any.whl
pex: - cssutils-1.0.1-py2-none-any.whl
pex: - webapp2_static-0.1-py2-none-any.whl
pex: - Paste-2.0.2-py2-none-any.whl
pex: - docutils-0.12-py2-none-any.whl
pex: - botocore-1.3.22-py2.py3-none-any.whl
pex: - protobuf_to_dict-0.1.0-py2-none-any.whl
Failed to execute PEX file, missing compatible dependencies for:
mysql-python
pycrypto
PS: to make sure I didn't include my versions of the python libraries, I pip
uninstalled both PyCrypto and MySQL-Python.
Answer: One of the nice things about distributing your project as a PEX file is that
you can prepare it to run on multiple platforms. For example, one PEX can run
on both Linux and Mac platforms. For many projects, there is nothing special
to do other than build a PEX. But when your project has dependencies on
platform specific binary code, you will need to perform some extra steps.
One example of a library that contains platform specific code is the `psutil`
library. It contains C code that is compiled into a shared library when the
module is installed. To create a PEX file that contains such dependencies, you
must first provide a pre-built version of that library for all platforms other
than the one where you are running pants.
The easiest way to pre-build libraries is to use the pip tool to build wheels.
This recipe assumes the following:
* You want to build a multi-platform pex to run on both Linux and Mac.
* You are going to pre-build the libraries in the Linux environment, then build the PEX on the Mac environment.
* Your project directory lives under ~/src/cookbook
Let’s take a simple program that references a library and create a pex from
it.
# src/python/ps_example/main.py
import psutil
for proc in psutil.process_iter():
try:
pinfo = proc.as_dict(attrs=['pid', 'name'])
except psutil.NoSuchProcess:
pass
else:
print(pinfo)
With Pants, you can define an executable by defining a python_binary target in
a BUILD file:
# src/python/ps_example/BUILD
python_binary(name='ps_example',
source = 'main.py',
dependencies = [
':psutil', # defined in requirements.txt
],
)
# Defines targets from specifications in requirements.txt
python_requirements()
In the same directory, list the python libraries in a requirements.txt file:
# src/python/ps_example/requirements.txt
psutil==3.1.1
Now, to make the multi-platform pex, you'll need access to a Linux box to
create the Linux version of the psutil wheel. Copy the requirements.txt file to
the Linux machine, then execute the pip tool:
linux $ mkdir ~/src/cookbook/wheelhouse
linux $ pip wheel -r src/python/multi-platform/requirements.txt \
--wheel-dir=~/src/cookbook/wheelhouse
This will create a platform specific wheel file.
linux $ ls ~/src/cookbook/wheelhouse/
psutil-3.1.1-cp27-none-linux_x86_64.whl
Now, you will need to copy the platform specific wheel over to the machine
where you want to build your multi-platform pex (in this case, your mac
laptop). If you use this recipe on a regular basis, you will probably want to
configure a Python repository to store your pre-built libraries.
We’ll use the same BUILD file setup as above, but modify python_binary to
specify the `platforms=` parameter.
# src/python/ps_example/BUILD
python_binary(name='ps_example',
source = 'main.py',
dependencies = [
':psutil', # defined in requirements.txt
],
platforms=[
'linux-x86_64',
'macosx-10.7-x86_64',
],
)
# Defines targets from specifications in requirements.txt
python_requirements()
You will also need to tell pants where to find the pre-built python packages.
Edit `pants.ini` and add:
[python-repos]
repos: [
"%(buildroot)s/wheelhouse/"
]
Now, copy the file `psutil-3.1.1-cp27-none-linux_x86_64.whl` over to the mac
workstation and place it in a directory named `wheelhouse/` under the root of
your repo.
Once this is done you can now build the multi-platform pex with
mac $ ./pants binary src/python/ps_example
You can verify that libraries for both mac and Linux are included in the pex
by unzipping it:
mac $ unzip -l dist/ps_example.pex | grep psutil
17290 12-21-15 22:09 .deps/psutil-3.1.1-cp27-none-linux_x86_64.whl/psutil-3.1.1.dist-info/DESCRIPTION.rst
19671 12-21-15 22:09 .deps/psutil-3.1.1-cp27-none-linux_x86_64.whl/psutil-3.1.1.dist-info/METADATA
1340 12-21-15 22:09 .deps/psutil-3.1.1-cp27-none-linux_x86_64.whl/psutil-3.1.1.dist-info/RECORD
103 12-21-15 22:09
... .deps/psutil-3.1.1-cp27-none-macosx_10_11_intel.whl/psutil-3.1.1.dist-info/DESCRIPTION.rst
19671 12-21-15 22:09 .deps/psutil-3.1.1-cp27-none-macosx_10_11_intel.whl/psutil-3.1.1.dist-info/METADATA
1338 12-21-15 22:09 .deps/psutil-3.1.1-cp27-none-macosx_10_11_intel.whl/psutil-3.1.1.dist-info/RECORD
109 12-21-15 22:09
...
|
How to Make Tkinter Canvas Transparent
Question: In my program, I am trying to overlay different canvas geometries on top of an
image. However, my problem is that the canvas itself has a color that blocks
most of the image. How can I make this canvas transparent, so that only the
geometries that I draw are visible? Here is my code in case my explanation did
not suffice.
#!/usr/bin/python
import tkinter
from tkinter import *
from PIL import Image, ImageTk
root = Tk()
root.title("Infrared Camera Interface")
root.resizable(width=FALSE, height=FALSE)
class MyApp:
def __init__(self, parent):
#Set the dimensions of the window
parent.minsize(width=600, height=600)
parent.maxsize(width=600, height=600)
#Prepare the image object being loaded into the stream camera
self.imgName = 'Dish.png'
self.img = Image.open(self.imgName)
self.img = self.img.resize((560, 450), Image.ANTIALIAS)
#Display the image onto the stream
self.displayimg = ImageTk.PhotoImage(self.img)
self.imglabel = Label(root, image=self.displayimg).place(x=0, y= 0)
self.C = tkinter.Canvas(root, bg="white", height=560, width=450)
coord = 10, 50, 240, 210
arc = self.C.create_arc(coord, start=0, extent=150, fill="red")
self.C.pack()
myapp = MyApp(root)
root.mainloop()
[](http://i.stack.imgur.com/W5VPu.png)
Answer: As far as I know, you can't make a Tkinter canvas transparent.
But you can draw the image directly on the canvas instead, using
[canvas.create_image()](http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.create_image-
method), and then draw your geometries on top of it.
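A minimal sketch of that approach (the `Dish.png` filename and arc coordinates come from the question; `show_overlay` is a name of my choosing). `tk.PhotoImage` reads PNG directly with Tk 8.6+, so PIL is not strictly required here:

```python
import tkinter as tk

def show_overlay(root, image_path="Dish.png"):
    """Draw the image as a canvas item, then draw shapes over it."""
    canvas = tk.Canvas(root, width=560, height=450, highlightthickness=0)
    # Keep a reference to the PhotoImage on the canvas so it is not
    # garbage-collected (a classic Tkinter pitfall).
    canvas.img = tk.PhotoImage(file=image_path)
    canvas.create_image(0, 0, image=canvas.img, anchor="nw")
    # Items created later are stacked on top, so the arc sits over the image.
    canvas.create_arc(10, 50, 240, 210, start=0, extent=150, fill="red")
    canvas.pack()
    return canvas

# To run interactively: root = tk.Tk(); show_overlay(root); root.mainloop()
```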
|
how to predict unique list in lists?
Question: This is a list of tile placements. Each integer stands for the id of a tile. Each
time an integer is added to a new list, it means that a new tile has been placed.
When a tile is removed, the last integer is removed from the new list. I want
every list to be unique at the moment a tile is placed; a list doesn't have to
be unique when a tile is removed. The code places these tiles in a for loop.
So in this example the last list of the lists is wrong, because it wasn't
unique when a tile was placed. Is there a way to exclude ids that would make
the new list non-unique? So for this example, is there a way to exclude the
id 18 before it is added to the list?
I know this is a very vague question, but I am new to Python and can't make
the code of this assignment any easier. I hope someone can help me with it.
[[1, 2, 3, 13, 4, 5, 6, 7, 17],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 10],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 10, 18],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 11],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 11, 18],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 10],
[1, 2, 3, 13, 4, 5, 6, 7, 17, 8, 15, 9, 10, 18]]
The lists must be in this order. So for example, I have had these lists:
[[1, 2, 3, 13, 4, 5, 6, 7, 17],
[1, 2, 3, 13, 4, 5, 6, 7],
[1, 2, 3, 13, 4, 5, 6, 7, 8],
[1, 2, 3, 13, 4, 5, 6, 7],
[1, 2, 3, 13, 4, 5, 6, 7, 19],
[1, 2, 3, 13, 4, 5, 6, 7]]
I want to exclude the ids 17, 8, 19.
So for [1, 2, 3, 13, 4, 5, 6, 7] the output must look like this (I don't care
whether the output is a list or integers):
[17,8,19]
But when I have this list [1, 2, 3, 13, 4, 5, 6] in lists
[[1, 2, 3, 13, 4, 5, 6, 7, 17],
[1, 2, 3, 13, 4, 5, 6],
[1, 2, 3, 13, 4, 5, 6, 7, 8],
[1, 2, 3, 13, 4, 5, 6, 7],
[1, 2, 3, 13, 4, 5, 6, 7, 19],
[1, 2, 3, 13, 4, 5, 6, 7]]
The output is this:
[7]
I hope this makes it clearer.
Answer: I tried with `itertools` and `collections`. Pass a list, a sub-list index,
and the value to be added to the `adder` function; if uniqueness is preserved,
`adder` keeps the appended value, otherwise it returns the list intact.
`compare_func` returns `True` when two sub-lists differ, and `all` checks
every pair.
import collections,itertools
compare_func = lambda x, y: collections.Counter(x) != collections.Counter(y)
lst = [[1, 2, 3],[1, 2, 3,4]]
def adder(mylist, indx, val):
    mylist[indx].append(val)
    # compare every pair of sub-lists; keep the append only if all pairs differ
    if all(compare_func(*i) for i in itertools.combinations(mylist, 2)):
        print "Added item"
    else:
        print "Did not add item"
        mylist[indx].pop()
    return mylist
Now run `print adder(lst,0,4)`
Output-
Did not add item
[[1, 2, 3], [1, 2, 3, 4]]
But if run
`print adder(lst,1,4)`
Output-
Added item
[[1, 2, 3], [1, 2, 3, 4, 4]]
* * *
## EDIT
After OP cleared question i added this portion-
Try using `set` as below-
import collections,itertools
data = [[1, 2, 3, 13, 4, 5, 6, 7, 17],
[1, 2, 3, 13, 4, 5, 6, 7],
[1, 2, 3, 13, 4, 5, 6, 7, 8],
[1, 2, 3, 13, 4, 5, 6, 7],
[1, 2, 3, 13, 4, 5, 6, 7, 19],
[1, 2, 3, 13, 4, 5, 6, 7]]
interscntion = set.intersection(*map(set,data))
d = collections.Counter([i for j in data for i in j if i not in list(interscntion)])
if len(set(it[1] for it in d.most_common()))>1:
print [max(d.most_common(),key=lambda x:x[1])[0]]
else:
print [j[0] for j in d.most_common()]
Output-
[8, 17, 19]
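A simpler prefix-based sketch (the function name is my own): for a given current placement `current`, the ids to exclude are exactly the elements that were placed immediately after that same prefix earlier in the history. It reproduces both outputs from the question:

```python
def excluded_ids(data, current):
    # Collect the element that followed `current` in every earlier sequence
    # that starts with the same prefix and is at least one tile longer.
    n = len(current)
    return sorted({seq[n] for seq in data if len(seq) > n and seq[:n] == current})

data = [[1, 2, 3, 13, 4, 5, 6, 7, 17],
        [1, 2, 3, 13, 4, 5, 6, 7],
        [1, 2, 3, 13, 4, 5, 6, 7, 8],
        [1, 2, 3, 13, 4, 5, 6, 7],
        [1, 2, 3, 13, 4, 5, 6, 7, 19],
        [1, 2, 3, 13, 4, 5, 6, 7]]

print(excluded_ids(data, [1, 2, 3, 13, 4, 5, 6, 7]))  # → [8, 17, 19]
print(excluded_ids(data, [1, 2, 3, 13, 4, 5, 6]))     # → [7]
```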
|
Storing JSON data in Aerospike in Python
Question: I'm trying to retrieve response from ip-api.com for most IP ranges. But I want
to store that data in Aerospike but I'm having some errors.
Here is the Python script
# import the module
from __future__ import print_function
import aerospike
import urllib2
config = {
'hosts': [ ('127.0.0.1', 3000) ]
}
try:
client = aerospike.client(config).connect()
except:
import sys
print("failed to connect to the cluster with", config['hosts'])
sys.exit(1)
key = ('ip', 'hit', 'trial')
try:
for i in range(0,255):
for j in range(0,255):
for k in range(0,255):
for l in range(0,255):
if not((i == 198 and j == 168) or (i == 172 and j > 15 and j < 32) or (i == 10)):
response = urllib2.urlopen('http://ip-api.com/json/'+str(i)+'.'+str(j)+'.'+str(k)+'.'+str(l))
html = response.read()
client.put(key, html)
except Exception as e:
import sys
print("error: {0}".format(e), file=sys.stderr)
client.close()
I'm new to Python as well as Aerospike, infact any no-SQL databases. Any help
would be appreciated.
Answer: From the Aerospike perspective the code is right, except that you would want to change
html = response.read()
client.put(key, html)
to
import json
client.put(key, json.load(response))
The response is a JSON string, which needs to be parsed into a Python dict before it is stored.
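For illustration, the same conversion with `json.loads` on a literal string (`json.load` is the file-object variant used above; the sample fields mimic the shape of an ip-api.com response but are made up here):

```python
import json

payload = '{"status": "success", "country": "Canada", "query": "24.48.0.1"}'
record = json.loads(payload)  # str -> dict, ready to be stored as bins
print(record["country"])  # → Canada
```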
|
How do I take a Python List of Dictionary Values and display in Kivy UI as a table (Using ListView Widget)?
Question: **Background:** I am working in Python 2.7.10 on Red Hat Linux 6. I have Kivy
1.9.2 installed and I am developing an app that will display some data from
Oracle Database tables. I am using cx_Oracle to connect and query my Oracle
Database.
Currently, I am able to query my database and return a list of tuples that I
am converting into a list of dictionaries.
See "Figure 1" below for the dictionary of values I'd like to display in a
ListView widget.
**Problem:** I've spent some time searching and have referenced Kivy's
documentation on ListProperty, DictProperty as well as ListAdapter and
DictAdapter at the following links: <https://kivy.org/docs/api-
kivy.properties.html>
<https://kivy.org/docs/api-kivy.adapters.adapter.html>
I have not been able to find a source that explains the exact case I am
working with here:
**I have a list of Dictionary key, value pairs for each row from the database
that I am returning. How can I take this list of Dictionary key, value pairs
and successfully display as ListItemLabels formatted like a result returned
from the database?**
Error: The error I am receiving is `ValueError: too many values to unpack`
that can be seen in "Figure 4" below
Please let me know what other information might be helpful. Thanks
======================================
Figure 1 - List of Dictionary Values
[{'PLAYER_NAME': 'NAME', 'LOST': 'LOST', 'GP': 'GP', 'CAR': 'CAR', 'LNG': 'LNG', 'TEAM': 'Nebraska', 'YDSG': 'YDS/G', 'TD': 'TD', 'FUM': 'FUM', 'YDS': 'YDS'},
{'PLAYER_NAME': 'Homerecord', 'LOST': '0', 'GP': '7', 'CAR': '262', 'LNG': '55', 'TEAM': 'Nebraska', 'YDSG': '174.3', 'TD': '14', 'FUM': '0', 'YDS': '1220'},
{'PLAYER_NAME': 'Awayrecord', 'LOST': '0', 'GP': '5', 'CAR': '172', 'LNG': '69', 'TEAM': 'Nebraska', 'YDSG': '158.8', 'TD': '6', 'FUM': '0', 'YDS': '794'},
{'PLAYER_NAME': 'vsAPrankedteams', 'LOST': '0', 'GP': '2', 'CAR': '74', 'LNG': '21', 'TEAM': 'Nebraska', 'YDSG': '158', 'TD': '5', 'FUM': '0', 'YDS': '316'},
{'PLAYER_NAME': 'vsUSArankedteams', 'LOST': '0', 'GP': '2', 'CAR': '74', 'LNG': '21', 'TEAM': 'Nebraska', 'YDSG': '158', 'TD': '5', 'FUM': '0', 'YDS': '316'},
{'PLAYER_NAME': 'vs.ConferenceTeams', 'LOST': '0', 'GP': '8', 'CAR': '289', 'LNG': '69', 'TEAM': 'Nebraska', 'YDSG': '154.4', 'TD': '15', 'FUM': '0', 'YDS': '1235'},
{'PLAYER_NAME': 'vs.non-ConferenceTeams', 'LOST': '0', 'GP': '4', 'CAR': '145', 'LNG': '32', 'TEAM': 'Nebraska', 'YDSG': '194.8', 'TD': '5', 'FUM': '0', 'YDS': '779'},
{'PLAYER_NAME': 'Inwins', 'LOST': '0', 'GP': '5', 'CAR': '189', 'LNG': '69', 'TEAM': 'Nebraska', 'YDSG': '211.2', 'TD': '10', 'FUM': '0', 'YDS': '1056'},
{'PLAYER_NAME': 'Inlosses', 'LOST': '0', 'GP': '7', 'CAR': '245', 'LNG': '55', 'TEAM': 'Nebraska', 'YDSG': '136.9', 'TD': '10', 'FUM': '0', 'YDS': '958'},
{'PLAYER_NAME': 'September', 'LOST': '0', 'GP': '4', 'CAR': '145', 'LNG': '32', 'TEAM': 'Nebraska', 'YDSG': '194.8', 'TD': '5', 'FUM': '0', 'YDS': '779'},
{'PLAYER_NAME': 'October', 'LOST': '0', 'GP': '5', 'CAR': '177', 'LNG': '69', 'TEAM': 'Nebraska', 'YDSG': '149', 'TD': '9', 'FUM': '0', 'YDS': '745'},
{'PLAYER_NAME': 'November', 'LOST': '0', 'GP': '3', 'CAR': '112', 'LNG': '38', 'TEAM': 'Nebraska', 'YDSG': '163.3', 'TD': '6', 'FUM': '0', 'YDS': '490'},
{'PLAYER_NAME': 'Finalmargin0-7', 'LOST': '0', 'GP': '6', 'CAR': '214', 'LNG': '55', 'TEAM': 'Nebraska', 'YDSG': '153.8', 'TD': '9', 'FUM': '0', 'YDS': '923'},
{'PLAYER_NAME': 'Finalmargin8-14', 'LOST': '0', 'GP': '3', 'CAR': '106', 'LNG': '28', 'TEAM': 'Nebraska', 'YDSG': '152', 'TD': '5', 'FUM': '0', 'YDS': '456'},
{'PLAYER_NAME': 'Finalmargin15+', 'LOST': '0', 'GP': '3', 'CAR': '114', 'LNG': '69', 'TEAM': 'Nebraska', 'YDSG': '211.7', 'TD': '6', 'FUM': '0', 'YDS': '635'}]
Figure 2 - The Python Code I am working with
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.tabbedpanel import TabbedPanel
from kivy.uix.gridlayout import GridLayout
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.listview import * #ListItemLabel, ListItemButton
from kivy.lang import Builder
from kivy.properties import *
from kivy.event import *
import cx_Oracle
import os
import json
from decimal import Decimal
os.environ["ORACLE_HOME"] = "/u01/app/oracle..." #related to cx_Oracle
os.environ["LD_LIBRARY_PATH"] = "/u01/app/oracle..." #related to cx_Oracle
print(os.environ["ORACLE_HOME"])
print(os.environ["LD_LIBRARY_PATH"])
class TabData(TabbedPanel): #Root Widget
first = ListProperty()
search_input = ObjectProperty()
def on_enter(self):
self.return_data()
def query(self):
search = TabData()
con = cx_Oracle.connect('SCOTT/*******@localhost/j1db') #cx_Oracle connection object
cur = con.cursor()
statement = 'select * from FBS_SPLT_RUSH where TEAM = :t'
exe = cur.execute(statement, {'t': str(self.search_input.text)})
columns = [i[0] for i in cur.description]
exe2 = [dict(zip(columns, row)) for row in cur]
return exe2
def return_data(self):
for row in self.query():
self.first.append(row)
print(self.first)
print self.search_input
return self.first
def args_converter(self, index, data_item):
key, value = data_item
for key, value in data_item:
return {'text': (key, value)}
class TeamStatsApp(App):
def build(self):
return Builder.load_file('/usr/games/team stats/TeamStats.kv')
if __name__ == '__main__':
TeamStatsApp().run()
Figure 3 - .kv kivy file I have set up to display ListView and some other
widgets
#: kivy 1.0
#: import main main
#: import ListAdapter kivy.adapters.listadapter.ListAdapter
#: import DictAdapter kivy.adapters.dictadapter.DictAdapter
#: import sla kivy.adapters.simplelistadapter
#: import Label kivy.uix.label
#: import ListItemLabel kivy.uix.listview.ListItemLabel
#: import ListItemButton kivy.uix.listview.ListItemButton
#: import CompositeListItem kivy.uix.listview.CompositeListItem
#: import ut kivy.utils
TabData:
id: rootrun
do_default_tab: False
search_input: search_box
TabbedPanelItem:
text: "hello"
BoxLayout:
orientation: "vertical"
TextInput:
id: search_box
focus: True
size_hint_y: .1
multiline: False
on_text_validate: root.on_enter()
Button:
size_hint_y: .1
text: "Return"
on_press: root.return_data()
GridLayout:
cols: 5
ListView:
adapter:
ListAdapter(data=root.first, cls=ListItemButton, args_converter=root.args_converter)
Figure 4 - Log with Error after running this code
[ [1;32mINFO [0m ] [Base ] Leaving application in progress...
Traceback (most recent call last):
File "/usr/games/team stats/main.py", line 57, in <module>
TeamStatsApp().run()
File "/usr/local/lib/python2.7/site-packages/kivy/app.py", line 828, in run
runTouchApp()
File "/usr/local/lib/python2.7/site-packages/kivy/base.py", line 487, in runTouchApp
EventLoop.window.mainloop()
File "/usr/local/lib/python2.7/site-packages/kivy/core/window/window_sdl2.py", line 622, in mainloop
self._mainloop()
File "/usr/local/lib/python2.7/site-packages/kivy/core/window/window_sdl2.py", line 365, in _mainloop
EventLoop.idle()
File "/usr/local/lib/python2.7/site-packages/kivy/base.py", line 327, in idle
Clock.tick()
File "/usr/local/lib/python2.7/site-packages/kivy/clock.py", line 515, in tick
self._process_events()
File "/usr/local/lib/python2.7/site-packages/kivy/clock.py", line 647, in _process_events
event.tick(self._last_tick, remove)
File "/usr/local/lib/python2.7/site-packages/kivy/clock.py", line 406, in tick
ret = callback(self._dt)
File "/usr/local/lib/python2.7/site-packages/kivy/uix/listview.py", line 950, in _spopulate
self.populate()
File "/usr/local/lib/python2.7/site-packages/kivy/uix/listview.py", line 998, in populate
item_view = self.adapter.get_view(index)
File "/usr/local/lib/python2.7/site-packages/kivy/adapters/listadapter.py", line 211, in get_view
item_view = self.create_view(index)
File "/usr/local/lib/python2.7/site-packages/kivy/adapters/listadapter.py", line 228, in create_view
item_args = self.args_converter(index, item)
File "/usr/games/team stats/main.py", line 47, in args_converter
key, value = data_item
ValueError: too many values to unpack
Answer: I would suggest doing it without a ListView, which might be clearer for
somebody new to Kivy. Here is a simple, minimalistic example of creating a
table out of list of dicts:
test.kv:
#:kivy 1.9.0
<PlayerRecord>:
size_hint_y: None
height: '30dp'
width: '100dp'
canvas.before:
Color:
rgb: 0.2, 0.2, 0.2
Rectangle:
pos: self.pos
size: self.size
<TableHeader>
size_hint_y: None
height: '30dp'
width: '100dp'
canvas.before:
Color:
rgb: 0.5, 0.5, 0.5
Rectangle:
pos: self.pos
size: self.size
AnchorLayout:
anchor_x: 'center'
anchor_y: 'center'
ScrollView:
size_hint_y: None
height: '200dp'
MyGrid:
cols: 3
size_hint_y: None
height: self.minimum_height
spacing: '1dp'
main.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
class TableHeader(Label):
pass
class PlayerRecord(Label):
pass
class MyGrid(GridLayout):
def __init__(self, **kwargs):
super(MyGrid, self).__init__(**kwargs)
self.fetch_data_from_database()
self.display_scores()
def fetch_data_from_database(self):
self.data = [
{'name': 'name', 'score': 'score', 'car': 'car'},
{'name': 'przyczajony', 'score': '1337', 'car': 'Fiat 126p'},
{'name': 'Krusader Jake', 'score': '777', 'car': 'Ford'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'},
{'name': 'dummy', 'score': '0', 'car': 'none'}
]
def display_scores(self):
self.clear_widgets()
for i in xrange(len(self.data)):
if i < 1:
row = self.create_header(i)
else:
row = self.create_player_info(i)
for item in row:
self.add_widget(item)
def create_header(self, i):
first_column = TableHeader(text=self.data[i]['name'])
second_column = TableHeader(text=self.data[i]['score'])
third_column = TableHeader(text=self.data[i]['car'])
return [first_column, second_column, third_column]
def create_player_info(self, i):
first_column = PlayerRecord(text=self.data[i]['name'])
second_column = PlayerRecord(text=self.data[i]['score'])
third_column = PlayerRecord(text=self.data[i]['car'])
return [first_column, second_column, third_column]
class Test(App):
pass
Test().run()
* * *
In order to set a number of columns based on number of keys in a row, simply
move `cols` property from kv to py file, and attach number of keys to it:
main.py (fragment):
from kivy.properties import NumericProperty
...
class MyGrid(GridLayout):
cols = NumericProperty()
def fetch_data_from_database(self):
self.data = [{...},...]
self.cols = len(self.data[0].keys())
...
def create_header(self, i):
cols = []
row_keys = self.data[i].keys()
row_keys.reverse()
for key in row_keys:
cols.append(TableHeader(text=self.data[i][key]))
return cols
def create_player_info(self, i):
cols = []
row_keys = self.data[i].keys()
row_keys.reverse()
for key in row_keys:
cols.append(PlayerRecord(text=self.data[i][key]))
return cols
...
Keys were returned in order _car-score-name_ , so I put `.reverse()` to fix
it.
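The `.reverse()` call works here only by accident of Python 2's arbitrary dict ordering. A more robust sketch is to keep an explicit column list and index each row with it (column names taken from the example data above):

```python
columns = ['name', 'score', 'car']  # fixed display order, independent of dict key order

row = {'name': 'przyczajony', 'score': '1337', 'car': 'Fiat 126p'}
cells = [row[key] for key in columns]  # one cell per column, in display order
print(cells)  # → ['przyczajony', '1337', 'Fiat 126p']
```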
|
Where to save python modules
Question: I'm just learning about modules in python 3.5. While I can usually install and
import packages using sudo pip install {package}, I can't seem to figure out
how to import my own files.
I made a test.py file with a single definition to test. I saved it to the
site-packages folder. I can't seem to import from there. I need help
understanding how to import files.
I read online about possibly using sys.path; however, I don't know how that
works.
Answer: If I had the following file structure:
/home/foo
/home/foo/__init__.py
/home/foo/bar.py
and I wanted to
import foo.bar
foo.bar.somefn()
I would first need to do:
import sys
sys.path.append("/home")  # the directory that contains the foo package, not the package itself
Notice the `__init__.py` file, which tells Python to look for submodules. We
don't necessarily need it in this instance, but it's good practice to have:
[What is __init__.py for?](http://stackoverflow.com/questions/448271/what-is-
init-py-for)
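A self-contained sketch of the whole flow, building a throwaway `foo` package in a temp directory (names mirror the layout above). Note that the path appended to `sys.path` is the *parent* of the package directory:

```python
import os
import sys
import tempfile

# Build <tmp>/foo/{__init__.py, bar.py} on the fly.
base = tempfile.mkdtemp()
pkg = os.path.join(base, "foo")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("def somefn():\n    return 'hello from foo.bar'\n")

sys.path.append(base)  # the directory *containing* foo, not foo itself
import foo.bar
print(foo.bar.somefn())  # → hello from foo.bar
```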
However, since this can get repetitive, daunting and tough to track, there are
lots of tools available to help you setup as your code expands to have
dependencies and lots of files/folders. I suggest you read a bit about
pip/disttools/easy-install and how to make a package with a 'setup.py'.
[What is setup.py?](http://stackoverflow.com/questions/1471994/what-is-setup-
py)
In addition, you might want to explore the world of Virtual Environments and
deployment solutions such as Buildout and Anaconda. These help keep the code
separated as you have several projects on the go with different needs, or even
different python builds and platform requirements. When it comes time to
deploy your code to other machines, you'll have a much easier time replicating
the needs of the project.
Below is a series of articles with info explaining everything from packages,
pip and setup.py to starting up your work environment with Buildout. Seems
like a great series:
<http://reinout.vanrees.org/weblog/tags/softwarereleasesseries.html>
The official docs for making a setup.py:
<https://docs.python.org/3/distutils/setupscript.html>
|
How to create a new data object in memory, rather than pointing to one? (in Python 3)
Question: As an illustration of my question, say I want to swap two elements in an
array:
# Array Integer Integer -> Array
# I want to swap the values at locations i1 and i2.
# I want to return the array with values swapped.
def swap(A, i1, i2):
    newA = A
    newA[i1] = A[i2]
    newA[i2] = A[i1]
    return newA
Run this code, and an array is returned with only one value changed:
> testArray = [1, 2, 3, 4]
> swap(testArray, 0, 1)
[2, 2, 3, 4]
Also, if I now check what testArray is (I want it to still be [1, 2, 3, 4]):
> testArray
[2, 2, 3, 4]
So my questions are:
1. I guess newA = A uses a pointer to A. I'm used to programming in a style where I return a new data structure each time. I'd like to create a whole new array, newA, which just has the same values as A. Then I can let garbage collection take care of newA later. Can I do this in python?
2. What is newA = A really doing?
3. Why would someone create a new variable (like newA) to point to the old one (A)? Why wouldn't they just mutate A directly?
4. And why does the syntax behave differently for atomic data?
i.e.
a = 1
b = a # this same syntax doesn't seem to be a pointer.
b = 2
> a
1
Answer: If it is a list of integers then you can do:
def swap(A, i1, i2):
    temp = A[i1]
    A[i1] = A[i2]
    A[i2] = temp
    return A
or, more pythonic:
def swap(A, i1, i2):
    A[i1], A[i2] = A[i2], A[i1]
    return A
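Both versions mutate the caller's list in place. If you want the original left
untouched, which is what the question asks for, swap on a copy instead (a
minimal sketch):

```python
def swapped(A, i1, i2):
    """Return a new list with the values at i1 and i2 exchanged."""
    B = A[:]  # shallow copy, so the caller's list is untouched
    B[i1], B[i2] = B[i2], B[i1]
    return B

testArray = [1, 2, 3, 4]
print(swapped(testArray, 0, 1))  # [2, 1, 3, 4]
print(testArray)                 # [1, 2, 3, 4] -- original unchanged
```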
-
newA = A
this creates an "alias" - both variables refer to the same list in memory. When
you change a value through A, you also see the change through newA.
See the visualization on PythonTutor.com (it is a long link with the Python code embedded):
[http://pythontutor.com/visualize.html#code=A+%3D+%5B1,+2,+3,+4%5D%0A%0AnewA+%3D+A&mode=display&origin=opt-
frontend.js&cumulative=false&heapPrimitives=false&textReferences=false&py=2&rawInputLstJSON=%5B%5D&curInstr=2](http://pythontutor.com/visualize.html#code=A+%3D+%5B1,+2,+3,+4%5D%0A%0AnewA+%3D+A&mode=display&origin=opt-
frontend.js&cumulative=false&heapPrimitives=false&textReferences=false&py=2&rawInputLstJSON=%5B%5D&curInstr=2)
-
To create copy you can use `slicing`
newA = A[:] # python 2 & 3
or
import copy
newA = copy.copy(A)
newA = copy.deepcopy(A)
or on Python 3
newA = A.copy()
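Note that `copy.copy` and slicing are shallow copies; for nested lists only
`deepcopy` gives full independence. A quick sketch:

```python
import copy

A = [[1, 2], [3, 4]]
shallow = copy.copy(A)      # new outer list, but the same inner lists
deep = copy.deepcopy(A)     # fully independent copy, inner lists included

A[0][0] = 99
print(shallow[0][0])  # 99 -- the inner list is shared with A
print(deep[0][0])     # 1  -- the deep copy is unaffected
```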
-
In Python every variable holds a reference to an object, and assignment copies
the reference, never the object. The difference you noticed comes from
mutability: `int` and `float` objects are immutable, so rebinding `b = 2` makes
`b` point at a new object and cannot touch `a`, while a list can be mutated in
place through any name that refers to it. Passing a reference (to a function or
class) is also much cheaper than cloning all the data first.
a = 1
b = a          # copies the reference; ints are immutable, so rebinding b can't affect a

a = [1, 2, 3]  # a list is mutable; in-place changes are visible through every name
b = a          # copies the reference to the same list
|
Python/Kivy Attribute Error
Question: I'm working on some code to create a UI for a touchscreen in Python/Kivy. I'm
new to both, and am having a bit of trouble with it. I'm getting an
AttributeError raised on `return PtWidg()`, but the console isn't giving me
anything super helpful to work off of:
Traceback (most recent call last):
File "/Users/revascharf/Documents/COLLEGE WORK/SENIOR YEAR/touchscreenInterface/touchUI.py", line 30, in <module>
ptApp().run()
File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/app.py", line 802, in run
root = self.build()
File "/Users/revascharf/Documents/COLLEGE WORK/SENIOR YEAR/touchscreenInterface/touchUI.py", line 26, in build
return PtWidg()
File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/uix/widget.py", line 320, in __init__
Builder.apply(self, ignored_consts=self._kwargs_applied_init)
File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/lang.py", line 1970, in apply
self._apply_rule(widget, rule, rule, ignored_consts=ignored_consts)
File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/lang.py", line 2044, in _apply_rule
cls = Factory_get(cname)
File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/factory.py", line 130, in __getattr__
raise AttributeError
AttributeError
Process finished with exit code 1
This is my python file, **touchUI.py:**
import kivy
import datetime
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
kivy.require('1.9.0')
from kivy.uix.image import Image

class SendButton(Button):
    def on_press(self):
        now = datetime.datetime.now()
        self.text = 'minute is ' + str(now.minute)

class PtWidg(Widget):
    pass

class ptApp(App):
    def build(self):
        return PtWidg()

if __name__ == '__main__':
    ptApp().run()
And here is the contents of my .kv file, **pt.kv:**
#kivy 1.9.0

<sendButton>:
    size: 40, 30
    #pos: center_x + width / 4, center_y - height / 4

<PtWidg>:
    Image:
        center_x: root.width / 4
        top: root.top - 50
        source: 'SensoryWalkLogo.png'
        height: db(50)
        width: db(50)
    sendButton:
        center_x: root.width - root.width / 4
        top: root.top - 50
        text: 'Send minute to MSP430'
        font_size: 40
Really, any tips or tricks would help me out a lot. Thank you!
Answer: Your code has these errors:
1. You wrote `sendButton` instead of `SendButton` in kv file.
2. Also in the kv file, you wrote `db(50)` instead of `dp(50)`. Using the string `'50dp'` would also be a valid option.
|
List into json format using Python
Question: I have a list like below...
lst = ['dosa','idly','sambar']
I need to convert the above data to the format below using Python.
[{'menuitem': 'idly'},
{'menuitem': 'dosa'},
{'menuitem': 'sambar'},
]
Thanks.
Answer: Using [list
comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-
comprehensions), make a list of dictionaries:
>>> lst = ['dosa','idly','sambar']
>>> [{'menuitem': x} for x in lst]
[{'menuitem': 'dosa'}, {'menuitem': 'idly'}, {'menuitem': 'sambar'}]
Then, convert it to json using
[`json.dumps`](https://docs.python.org/3/library/json.html#json.dumps):
>>> import json
>>> json.dumps([{'menuitem': x} for x in lst])
'[{"menuitem": "dosa"}, {"menuitem": "idly"}, {"menuitem": "sambar"}]'
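To check the round trip, `json.loads` parses the string back into the same
structure:

```python
import json

lst = ['dosa', 'idly', 'sambar']
menu = [{'menuitem': x} for x in lst]
payload = json.dumps(menu)

# json.loads restores the original list of dicts from the JSON string
assert json.loads(payload) == menu
print(payload)  # [{"menuitem": "dosa"}, {"menuitem": "idly"}, {"menuitem": "sambar"}]
```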
|
Python unittest failing to resolve import statements
Question: I have a file structure that looks like the following
project
    src
        __init__.py
        main.py
        module.py
        secondary.py
    test
        test_module.py
## module.py
import secondary
x = False
## secondary.py
pass
## test_module.py
from unittest import TestCase
from src import module

class ModuleTest(TestCase):
    def test_module(self):
        self.assertTrue(module.x)
Invoking `python3 -m unittest discover` in `/project/` gives an error:
File "/Users/Me/Code/project/test/test_module.py", line 6, in <module>
from src import module
File "/Users/Me/Code/project/src/module.py", line 1, in <module>
import secondary
ImportError: No module named 'secondary'
What can I do so that `secondary.py` is imported without error?
Answer: In Python 3 (and Python 2 with `from __future__ import absolute_import`), you
must be explicit about what module you want when importing another module from
the same package. The syntax you're using in `module.py` (`import secondary`)
only works if `secondary` is a top-level module in a folder in the Python
module search path.
To explicitly request a relative import from your own package, use `from .
import secondary` instead. Or, make an absolute import, using the name of the
package as well as the module (`from src import secondary`, or `import
src.secondary` and use `src.secondary` elsewhere in the module instead of just
`secondary`).
|
Python 3.4 can't find OpenSSL
Question: I'm trying to use WebSockets in Python 3.4 (Windows 7). This is a test script:
from twisted.internet import reactor
from autobahn.twisted.websocket import WebSocketClientFactory, WebSocketClientProtocol, connectWS
import json
class ClientProtocol(WebSocketClientProtocol):
    def onConnect(self, response):
        print("Server connected: {0}".format(response.peer))

    def initMessage(self):
        message_data = [{"type": "subscribe", "product_id": "BTC-USD"}]
        message_json = json.dumps(message_data)
        print("sendMessage: " + message_json)
        self.sendMessage(message_json)

    def onOpen(self):
        print("onOpen calls initMessage()")
        self.initMessage()

    def onMessage(self, msg, binary):
        print("Got echo: " + msg)

    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {0}".format(reason))

if __name__ == '__main__':
    factory = WebSocketClientFactory("wss://ws-feed.exchange.coinbase.com")
    factory.protocol = ClientProtocol
    connectWS(factory)
    reactor.run()
When I start it I have an error:
F:\python>wss.py
Traceback (most recent call last):
File "F:\python\wss.py", line 24, in <module>
connectWS(factory)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\autobahn-0.11.0-py3
.5.egg\autobahn\twisted\websocket.py", line 519, in connectWS
from twisted.internet import ssl
File "C:\Program Files (x86)\Python35-32\lib\site-packages\twisted-15.5.0-py3.
5.egg\twisted\internet\ssl.py", line 59, in <module>
from OpenSSL import SSL
ImportError: No module named 'OpenSSL'
But when I tried to install OpenSSL, an error appeared:
F:\python>easy_install openssl
Searching for openssl
Reading https://pypi.python.org/simple/openssl/
Couldn't find index page for 'openssl' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or download links found for openssl
error: Could not find suitable distribution for Requirement.parse('openssl')
How can I start this test code?
Answer: Try to install pyOpenSSL package: `pip install pyOpenSSL`
|
Python replace / with \
Question: I write some simple Python script and I want to replace all characters `/`
with `\` in text variable. I have problem with character `\`, because it is
escape character. When I use `replace()` method:
unix_path='/path/to/some/directory'
unix_path.replace('/','\\')
then it returns following string: `\\path\\to\\some\\directory`. Of course, I
can't use: `unix_path.replace('/','\')`, because `\` is escape character.
When I use regular expression:
import re
unix_path='/path/to/some/directory'
re.sub('/', r'\\', unix_path)
then it has same results: `\\path\\to\\some\\directory`. I would like to get
this result: `\path\to\some\directory`.
Note: I aware of `os.path`, but I did not find any feasible method in this
module.
Answer: You missed something: it is _shown_ as `\\` by the Python interpreter,
but your result is correct: `'\\'` is just how Python represents the character
`\` in a normal string. That's strictly equivalent to `\` in a raw string,
e.g. `'some\\path'` is the same as `r'some\path'`.
And also: Python on Windows knows very well how to use `/` in paths.
You can use the following trick though, if you want your display to be
OS-dependent:
In [0]: os.path.abspath('c:/some/path')
Out[0]: 'c:\\some\\path'
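To convince yourself that each backslash really is a single character, compare
`repr`, `print` and `len` on the replaced string:

```python
unix_path = '/path/to/some/directory'
win_path = unix_path.replace('/', '\\')

print(repr(win_path))  # shows the escaped form: '\\path\\to\\some\\directory'
print(win_path)        # prints the real contents: \path\to\some\directory
print(len(win_path))   # 23 -- same length as the original, one char per backslash
```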
|
how to connect spark streaming with cassandra?
Question: I'm using
Cassandra v2.1.12
Spark v1.4.1
Scala 2.10
and cassandra is listening on
rpc_address:127.0.1.1
rpc_port:9160
For example, to connect kafka and spark-streaming, while listening to kafka
every 4 seconds, I have the following spark job
sc = SparkContext(conf=conf)
stream=StreamingContext(sc,4)
map1={'topic_name':1}
kafkaStream = KafkaUtils.createStream(stream, 'localhost:2181', "name", map1)
And spark-streaming keeps listening to kafka broker every 4 seconds and
outputs the contents.
Same way, **I want spark streaming to listen to cassandra and output the
contents of the specified table every say 4 seconds**.
**How to convert the above streaming code to make it work with cassandra
instead of kafka?**
* * *
# The non-streaming solution
**I can obviously keep running the query in an infinite loop but that's not
true streaming right?**
spark job:
from __future__ import print_function
import time
import sys
from random import random
from operator import add
from pyspark.streaming import StreamingContext
from pyspark import SparkContext,SparkConf
from pyspark.sql import SQLContext
from pyspark.streaming import *
sc = SparkContext(appName="sparkcassandra")
while(True):
    time.sleep(5)
    sqlContext = SQLContext(sc)
    stream = StreamingContext(sc, 4)
    lines = stream.socketTextStream("127.0.1.1", 9160)
    sqlContext.read.format("org.apache.spark.sql.cassandra")\
        .options(table="users", keyspace="keyspace2")\
        .load()\
        .show()
run like this
sudo ./bin/spark-submit --packages \
datastax:spark-cassandra-connector:1.4.1-s_2.10 \
examples/src/main/python/sparkstreaming-cassandra2.py
and I get the **table values**, which roughly look like
lastname|age|city|email|firstname
**So what's the correct way of "streaming" the data from cassandra?**
* * *
Answer: Currently the "Right Way" to stream data from C* is not to Stream Data from C*
:) Instead it usually makes much more sense to have your message queue (like
Kafka) in front of C* and Stream off of that. C* doesn't easily support
incremental table reads although this can be done if the clustering key is
based on insert time.
If you are interested in using C* as a streaming source be sure to check out
and comment on <https://issues.apache.org/jira/browse/CASSANDRA-8844> Change
Data Capture
Which is most likely what you are looking for.
If you are actually just trying to read the full table periodically and do
something you may be best off with just a cron job launching a batch operation
as you really have no way of recovering state anyway.
|