Hook to perform actions after loaddata command (loading fixtures)
Question: There's a `post_syncdb` signal for performing actions after syncdb.
Is there a similar hook to perform some actions after loading fixtures, i.e.
after the `python manage.py loaddata` command?
I have a script that creates a new database, runs migrate (syncdb) and loads
data from JSON fixtures. After all this, I want to create groups & permissions
for users that have been created. Where do I plug that code in?
P.S. Use `post_migrate` instead of `post_syncdb` on Django 1.7+.
Answer: [Read the source, Luke](http://blog.codinghorror.com/learn-to-read-the-source-
luke/).
Research how `post_migrate` (or `post_syncdb`) signal is fired in the
management command, see:
* [`emit_post_migrate_signal()`](https://github.com/django/django/blob/master/django/core/management/commands/migrate.py#L197) call at the end of the `handle()` method
* how [`emit_post_migrate_signal()`](https://github.com/django/django/blob/master/django/core/management/sql.py#L265-285) is responsible for sending `models.signals.post_migrate` signal
From what we've seen, here is what you should try:
* [create a custom signal](https://docs.djangoproject.com/en/dev/topics/signals/#defining-and-sending-signals) (and listener where you would create groups & permissions)
* [create a custom management command](https://docs.djangoproject.com/en/dev/howto/custom-management-commands/) subclassing `loaddata` `Command` and overriding `handle()` method:
from django.core.management.commands.loaddata import Command
class MyCommand(Command):
def handle(self, *fixture_labels, **options):
super(MyCommand, self).handle(*fixture_labels, **options)
my_signal.send(sender=self.__class__, my_argument=my_argument_value)
Haven't personally tested this. Hope it works for you.
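The custom-signal half of that recipe can be sketched without Django at all; the `Signal` class below is a minimal stand-in for `django.dispatch.Signal`, and `post_loaddata` is a hypothetical name for the signal you would define:

```python
# Minimal stand-in for django.dispatch.Signal: a signal is essentially
# a list of receivers that send() calls in order.
class Signal(object):
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Like Django's Signal.send(), return (receiver, response) pairs
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]

post_loaddata = Signal()  # hypothetical custom signal
created = []

def create_groups_and_permissions(sender, **kwargs):
    # In the real project this listener would create Groups & Permissions
    created.append(kwargs["fixtures"])

post_loaddata.connect(create_groups_and_permissions)
post_loaddata.send(sender=None, fixtures=["users.json"])
print(created)  # -> [['users.json']]
```

In the real command you would use `django.dispatch.Signal` instead and fire `post_loaddata.send(...)` from the overridden `handle()` shown above.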
|
How can I isolate a Python dictionary from a list for comparison?
Question: I have a Python function that takes as arguments a player's name and score and
determines whether this is the player's highest score. It does so by comparing
the arguments against a shelve object.
The shelve should only store the high score for each player; there should not
be two scores for someone named "Joe", for example.
Unfortunately, I can't figure out how to isolate a dict from the list of dicts
(shelf) for comparison against the incoming player dict.
Here's my code:
import shelve
import os
def highscore(player_name, player_score):
"""
Function to return the high score from our
persistent storage of score records for a given
person.
"""
# Get our working directory
working_dir = os.getcwd()
# Create our shelf object for a player
highscore_fn = os.path.join(working_dir, 'highscore.shelve')
# Set our player info
player = {'name': player_name, 'score': player_score}
with shelve.open(highscore_fn, writeback=True) as shelf:
# Check if any records exist in the shelf
if len(shelf) == 0:
# Assign the shelf to an empty list
shelf['player_data'] = []
# Append player data to shelf
shelf['player_data'].append(player)
# Current high score for player
high_score = player.get('score')
else:
# Loop through our player data list
for data in shelf['player_data']:
# Check to see if we have data for a player
if player['name'] in data['name']:
existing_record = data
# Compare the player's new score against previous score
if player.get('score') > existing_record.get('score'):
high_score = player.get('score')
# Update our record for the player
existing_record.update(player)
else:
high_score = existing_record.get('score')
else:
high_score = player.get('score')
shelf['player_data'].append(player)
# Return the high score
return high_score
Any tips would be appreciated!
Answer: If you don't have another reason to use a list of dicts, just using a dict of
dicts (or even a simple dict) would greatly simplify your code. Assuming your
shelf looks something like
{
"user_data": {
"joe": {"score": 2999, "name":"joe"},
"walt": {"score": 1784, "name":"walt"},
"bunny": {"score": 87441, "name":"bunny"},
# etc
},
}
Then your code would look like:
player = {'name': player_name, 'score': player_score}
high_score = player_score
with shelve.open(highscore_fn, writeback=True) as shelf:
user_data = shelf["user_data"]
# Check if any records exist in the shelf
found = user_data.get(player_name)
if found:
if found["score"] < player_score:
user_data[player_name] = player
else:
high_score = found["score"]
else:
user_data[player_name] = player
shelf["user_data"] = user_data
return high_score
Note that if the shelf only contains "user_data", you can get rid of this
level and directly store your dicts in the shelf itself. Also if you only have
scores to save, you can turn your dict of dicts into a simple dict, ie:
=> your shelf:
{
"joe": 2999,
"walt": 1784,
"bunny": 87441,
# etc
}
=> your code:
high_score = player_score
with shelve.open(highscore_fn, writeback=True) as shelf:
# Check if any records exist in the shelf
found = shelf.get(player_name, 0)
if found > player_score:
high_score = found
else:
shelf[player_name] = player_score
return high_score
EDIT: The following code JustWorks(tm) on 2.7.3:
# scores.py
import shelve
DATA = {
"user_data": {
"joe": {"score": 2999, "name":"joe"},
"walt": {"score": 1784, "name":"walt"},
"bunny": {"score": 87441, "name":"bunny"},
# etc
},
}
class Score(object):
def __init__(self, path):
self.path = path
def init_data(self, data):
shelf = shelve.open(self.path)
shelf["user_data"] = data["user_data"]
shelf.close()
def read_data(self):
d = {}
shelf = shelve.open(self.path)
d["user_data"] = shelf["user_data"]
shelf.close()
return d
def highscore(self, name, score):
player = {'name': name, 'score': score}
high_score = score
shelf = shelve.open(self.path)
user_data = shelf["user_data"]
found = user_data.get(name)
if found:
if found["score"] < score:
user_data[name] = player
else:
high_score = found["score"]
else:
user_data[name] = player
shelf["user_data"] = user_data
shelf.sync()
shelf.close()
return high_score
>>> import scores
>>> s = scores.Score("scores.dat")
>>> s.init_data(scores.DATA)
>>> s.read_data()
{'user_data': {'walt': {'score': 1784, 'name': 'walt'}, 'joe': {'score': 2999, 'name': 'joe'}, 'bunny': {'score': 87441, 'name': 'bunny'}}}
>>> s.highscore("walt", 10000)
10000
>>> s.read_data()
{'user_data': {'walt': {'score': 10000, 'name': 'walt'}, 'joe': {'score': 2999, 'name': 'joe'}, 'bunny': {'score': 87441, 'name': 'bunny'}}}
|
How to authenticate by Access Token in code using python-social-auth
Question: I have a REST API and I need to authenticate users via Facebook Login API.
Access Token should be obtained in mobile app (I guess) and then sent to the
server. So I have found some code in old tutorial and I can't make it work.
Here's the code:
from social.apps.django_app.utils import strategy
from django.contrib.auth import login
from django.views.decorators.csrf import csrf_exempt
from rest_framework.decorators import api_view, permission_classes
from rest_framework import permissions, status
from django.http import HttpResponse as Response
@strategy()
def auth_by_token(request, backend):
user=request.user
user = backend.do_auth(
access_token=request.DATA.get('access_token'),
user=user.is_authenticated() and user or None
)
if user and user.is_active:
return user
else:
return None
@csrf_exempt
@api_view(['POST'])
@permission_classes((permissions.AllowAny,))
def social_register(request):
auth_token = request.DATA.get('access_token', None)
backend = request.DATA.get('backend', None)
if auth_token and backend:
try:
user = auth_by_token(request, backend)
except Exception, err:
return Response(str(err), status=400)
if user:
login(request, user)
return Response("User logged in", status=status.HTTP_200_OK)
else:
return Response("Bad Credentials", status=403)
else:
return Response("Bad request", status=400)
When I try to send POST request with parameters, I get this error:
'unicode' object has no attribute 'do_auth'
I found an example in [official documentation](http://python-social-
auth.readthedocs.org/en/latest/use_cases.html#signup-by-oauth-access-token),
it uses `@psa('social:complete')` decorator:
from django.contrib.auth import login
from social.apps.django_app.utils import psa
# Define an URL entry to point to this view, call it passing the
# access_token parameter like ?access_token=<token>. The URL entry must
# contain the backend, like this:
#
# url(r'^register-by-token/(?P<backend>[^/]+)/$',
# 'register_by_access_token')
@psa('social:complete')
def register_by_access_token(request, backend):
# This view expects an access_token GET parameter, if it's needed,
# request.backend and request.strategy will be loaded with the current
# backend and strategy.
token = request.GET.get('access_token')
user = backend.do_auth(request.GET.get('access_token'))
if user:
login(request, user)
return 'OK'
else:
return 'ERROR'
But what if I need to pass backend name in request body?
Answer: It should be `request.backend.do_auth(request.GET.get('access_token'))`.
I've updated the docs with the right snippet
<http://psa.matiasaguirre.net/docs/use_cases.html#signup-by-oauth-access-
token>.
|
Send a packet of data literally via UDP in Python
Question: I want my UDP data packet to contain literally this information, for example:
data = "83053163021478010102010370020000000000"
I'm using the following code to send it, which works fine (I can see it going
out in Wireshark):
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.sendto(data, (IP, PORT))
When I look at the data packet in Wireshark, I want the (Wireshark) data
packet to equal `data`. How do I declare/convert the data type to make this happen?
Answer: I think this should do the trick:
import codecs
data = codecs.decode("83053163021478010102010370020000000000", "hex_codec")
Then you can send `data` same as you are doing now.
ref: [How to create python bytes object from long hex
string?](http://stackoverflow.com/questions/443967/how-to-create-python-bytes-
object-from-long-hex-string)
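On Python 3 the same conversion is spelled `bytes.fromhex` (a sketch; note that on Python 3 `sendto` also requires a `bytes` object, not a `str`):

```python
# Turn the hex string into the raw bytes that appear verbatim on the wire.
data = bytes.fromhex("83053163021478010102010370020000000000")
print(len(data))  # 38 hex digits -> 19 raw bytes
```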
|
Calling a C library module with ctypes gives false results
Question: I created a simple C library file to test how I can access C functions with
ctypes.
The function in the C file named tetr.c reads:
double square(double x){
return x*x;
}
So it returns the square of the number.
I've compiled it to an object file using the following:
gcc -c tetr.c -o tetr.o
and to a shared object file using:
gcc -shared -Wl,-soname,library.so -o tetr.so tetr.o
So far so good. I open the ipython shell and load my library into the variable
`lib`:
import ctypes
lib = ctypes.cdll.LoadLibrary("tetr.so")
Naturally, when I'd call `lib.square(ctypes.c_double(2.0))` it should give me
`4.0`. Instead I get `1`. And same happens with any number I insert in the
function square, even zero.
A side note: I used `ctypes.c_double` in the call because without it I get
zero when I pass an integer, and
ArgumentError: argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1
when I pass a float/double.
Any ideas of what I might be doing wrong?
Answer: You need to tell `ctypes` that the function returns a double. It can't figure
that out itself, so unless you tell it otherwise it assumes every function's
return type is `int`. See
<https://docs.python.org/2/library/ctypes.html#return-types>
For example:
import ctypes
lib = ctypes.cdll.LoadLibrary("tetr.so")
lib.square.restype = ctypes.c_double
ret = lib.square(ctypes.c_double(2.0))
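Setting `argtypes` as well lets you drop the explicit `c_double()` wrapper on every call. Since `tetr.so` only exists on the asker's machine, the sketch below demonstrates the same pattern against the C math library's `sqrt()`, which also returns a `double` (assuming a system where libm can be located):

```python
import ctypes
import ctypes.util

# Without restype, ctypes would hand back a meaningless int here too.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double     # tell ctypes the return type
libm.sqrt.argtypes = [ctypes.c_double]  # plain floats now convert automatically
print(libm.sqrt(2.0))  # -> 1.4142135623730951
```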
|
Python Custom Exception Handling
Question: After much Googling to figure out what's going on, here it is:
I have a custom validation exception which takes a request and response
class ValidationException(Exception):
message = "Caught Validation Exception"
def __init__(self, request, response):
self.details = {
"request": request,
"response": response
}
super(ValidationException, self).__init__(self.message, self.details)
I have an exception handler which will raise an instance of it on some
condition:
class handler:
    def handleException(self, request, response):
        if something:
            raise ValidationException(request, response)
The handler is called in the event we encounter an issue in a post
class Poster:
def post(data):
if self.last_response.status_code not in self.valid_post_codes:
self.exception_handler.handleException(self.last_request, self.last_response)
The problem is, I'm raising the ValidationException and seeing it in my
traceback, but it doesn't seem to get caught where I want it.
def testThis(self):
try:
self.poster.post(json.dumps({}))
except ValidationException:
print "got validation"
except Exception:
print "got exception"
Result: "got exception"
traceback
lib/service/pas/api/order.py line 24 in postOrder
return self.post()
lib/service/base.py line 42 in post
self.exception_handler.handleException(self.last_request, self.last_response)
lib/service/exception/handler.py line 14 in handleException
raise ValidationException(request, response)
ValidationException:
For what it's worth:
assertRaises(ValidationException, self.poster.post, json.dumps({}))
only catches Exception as well. Any ideas? :\ Any help is greatly appreciated!
Thanks in advance
Answer: Well well well... So..
My IDE prefixed my import with "lib" which imported
Exceptions.ValidationException.
When I threw my.own.ValidationException elsewhere, it wasn't being caught, as
it wasn't of the same type. It just so turned out there happened to be another
ValidationException I didn't know about...
That's amazing, NOT!
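The trap generalizes: two exception classes that merely share a name are unrelated types, so `except` on one never catches the other. A minimal reproduction, with both classes defined inline here to stand in for the two modules:

```python
class ValidationException(Exception):
    pass

TheirValidationException = ValidationException  # "the other module's" class

class ValidationException(Exception):  # rebinds the name: a new, unrelated type
    pass

try:
    raise TheirValidationException("raised elsewhere")
except ValidationException:   # the rebound class: does NOT match
    caught = "got validation"
except Exception:             # only the common base class matches
    caught = "got exception"

print(caught)  # -> got exception
```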
|
Python 2.7 accessibility for blind
Question: Hello, I am planning to create a program in Python 2.7 using a tkinter
GUI. I am looking for guidance on the best method to play text as audio, in
order to aid people with visual difficulties.
The text that will need to be played would be text on buttons and text within
textboxes. Are there any libraries I can import that can help me achieve this?
Thanks.
Answer: The answer appears to be 'no'. According to tcl/tk developer [Kevin
Walzer](https://mail.python.org/pipermail/tkinter-
discuss/2013-September/003480.html) "Tk doesn't support [screen readers]. I've
looked into it a bit and it seems like a huge project to implement on a cross-
platform basis." See link for a bit more.
|
Change cwd before running tests
Question: I have a bunch of `unittest` test cases in separate directories. There is also
a directory which just contains helper scripts for the tests. So my file tree
looks like this
test_dir1
test_dir2
test_dir3
helper_scripts
Each python file in `test_dir*` will have these lines:
import sys
sys.path.append('../helper_scripts')
import helper_script
This all works fine, as long as I run the tests from within their directory.
However, I would like to be at the project root and just run:
py.test
and have it traverse all the directories and run each test it finds. The
problem is that the tests are being run from the wrong directory, so the
`sys.path.append` doesn't append the `helper_scripts` directory, it appends
the parent of the project root. This makes all the imports fail with an
`ImportError`.
Is there a way to tell `py.test` to run the test scripts from their directory?
ie. change the cwd before executing them? If not, is there another test runner
I can use that will?
Answer:
os.chdir("newdir")
will change your current working directory
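If you want the directory change scoped to each test rather than global, a small context manager (plain Python, no py.test dependency; `working_directory` is a name invented here) restores the old cwd afterwards:

```python
import os
from contextlib import contextmanager

@contextmanager
def working_directory(path):
    """Temporarily chdir to `path`, restoring the previous cwd on exit."""
    previous = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(previous)

# usage sketch: run each test from its own directory
# with working_directory("test_dir1"):
#     run_the_tests()
```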
|
Get all possible combinations of rows in a matrix
Question: I'm setting up a simple sentence generator in python, to create as many word
combinations as possible to describe a generic set of images involving robots.
(It's a long story :D)
It outputs something like this: **'Cyborg Concept Downloadable Illustration'**
Amazingly, the random generator I wrote only produces up to 255 unique
combinations. Here is the script:
import numpy
from numpy import matrix
from numpy import linalg
import itertools
from pprint import pprint
import random
m = matrix( [
['Robot','Cyborg','Andoid', 'Bot', 'Droid'],
['Character','Concept','Mechanical Person', 'Artificial Intelligence', 'Mascot'],
['Downloadable','Stock','3d', 'Digital', 'Robotics'],
['Clipart','Illustration','Render', 'Image', 'Graphic'],
])
used = []
i = 0
def make_sentence(m, used):
sentence = []
i = 0
while i <= 3:
word = m[i,random.randrange(0,4)]
sentence.append(word)
i = i+1
return ' '.join(sentence)
def is_used(sentence, used):
if sentence not in used:
return False
else:
return True
sentences = []
i = 0
while i <= 1000:
sentence = make_sentence(m, used)
if(is_used(sentence, used)):
continue
else:
sentences.append(sentence)
print str(i) + ' ' +sentence
used.append(sentence)
i = i+1
Using `randint` instead of `randrange`, I get up to 624 combinations
(instantly) then it hangs in an infinite loop, unable to create more combos.
I guess the question is, is there a more appropriate way of determining all
possible combinations of a matrix?
Answer: You can make use of **itertools** to get all possible combinations of
the matrix. Here is an example showing how itertools works:
import itertools
mx = [
['Robot','Cyborg','Andoid', 'Bot', 'Droid'],
['Character','Concept','Mechanical Person', 'Artificial Intelligence', 'Mascot'],
['Downloadable','Stock','3d', 'Digital', 'Robotics'],
['Clipart','Illustration','Render', 'Image', 'Graphic'],
]
for combination in itertools.product(*mx):
print combination
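This also explains the numbers observed: `random.randrange(0, 4)` yields only 0-3, so just 4 of the 5 words per row are reachable and there are 4**4 = 256 distinct sentences (hence the loop stalling near index 255), while the inclusive `randint(0, 4)` reaches all 5 columns for 5**4 = 625 sentences (stalling near 624). `itertools.product` enumerates them all directly, no trial-and-error needed:

```python
import itertools

mx = [
    ['Robot', 'Cyborg', 'Andoid', 'Bot', 'Droid'],
    ['Character', 'Concept', 'Mechanical Person', 'Artificial Intelligence', 'Mascot'],
    ['Downloadable', 'Stock', '3d', 'Digital', 'Robotics'],
    ['Clipart', 'Illustration', 'Render', 'Image', 'Graphic'],
]

# One sentence per element of the Cartesian product of the four rows
sentences = [' '.join(combo) for combo in itertools.product(*mx)]
print(len(sentences))  # -> 625, i.e. 5**4, every one unique
```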
|
How can I stream a GET request line-by-line?
Question: I would like to send a GET request to a server, streamed almost in
'realtime' using chunked transfer encoding, that I can completely modify line-by-
line. For example:
SendChunks = SomeHTTPLibrary.SendData;
SendChunks(Example.org, "5\r\n")
SendChunks(Example.org, "Hello\r\n")
SendChunks(Example.org, "7\r\n")
SendChunks(Example.org, "Goodbye\r\n")
SendChunks(Example.org, "0\r\n")
Where I am right now, I don't even care about listening for a response. It
doesn't need to be in C++, I'm comfortable with Python, Javascript, PHP or
anything similar.
Answer: Firstly, you shouldn't be sending a request body along with a GET request. I
think technically you can, but if the server does anything with it then it's
non-compliant. See <http://stackoverflow.com/a/983458/241294>.
From your question it looks as though you already know that you need chunked
transfer encoding. Here is a crude example of how you can achieve this in
python, but with a `POST` request instead of a `GET` request (code hacked from
[here](http://stackoverflow.com/a/26673087/241294)):
import httplib
conn = httplib.HTTPConnection('Example.org')
conn.connect()
conn.putrequest('POST', '/post')
conn.putheader('Transfer-Encoding', 'chunked')
conn.endheaders()
conn.send("5\r\n")
conn.send("hello\r\n")
conn.send("7\r\n")
conn.send("Goodbye\r\n")
conn.send("0\r\n\r\n")
resp = conn.getresponse()
print(resp.status, resp.reason, resp.read())
conn.close()
For a nicer example with a python chunking function see [How to force
http.client to send chunked-encoding HTTP body in
python?](http://stackoverflow.com/q/9237961/241294).
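The chunk framing itself (size in hex, CRLF, the data, CRLF, and a terminating zero-size chunk) is simple enough to generate by hand; a small helper, using a name invented here, might look like:

```python
def chunked(payloads):
    """Frame an iterable of strings as HTTP/1.1 chunked transfer coding."""
    for payload in payloads:
        # each chunk: size in hexadecimal, CRLF, the data itself, CRLF
        yield "%x\r\n%s\r\n" % (len(payload), payload)
    # terminating chunk: zero size, then the final empty line
    yield "0\r\n\r\n"

body = "".join(chunked(["hello", "Goodbye"]))
print(repr(body))  # -> '5\r\nhello\r\n7\r\nGoodbye\r\n0\r\n\r\n'
```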
|
Python- TypeError object int is not iterable
Question: Here is my code. When I run it, I get an error on line 19 (the for loop):
TypeError: object 'int' is not iterable.
import fb
from facepy import GraphAPI
token=""# access token here.
facebook=fb.graph.api(token)
graph1 = GraphAPI(token)
vid="" #page id here
query=str(vid)+"/feed"
r=graph1.get(query)
count=0
nos=input("Enter number of posts: ")
for indid in nos:
count=count+1
facebook.publish(cat="feed",id=indid,message="Hi"+str(count))
time.sleep(6)
print("Wall post:"+str(count))
else :
print("No posts made.")
Please tell me what's wrong.
Answer: Well, the error says it all: you try to iterate over an `int` in the `for`
loop of this code:
nos=input("Enter number of posts: ") # nos is an int here
for indid in nos: # and this is not how you iterate over an int
count=count+1
facebook.publish(cat="feed",id=indid,message="Hi"+str(count))
make a range instead:
for count in range(0, nos):
facebook.publish(cat="feed",id=count,message="Hi"+str(count))
Furthermore, I don't know what you are trying to do with `indid`. Maybe you
also want to ask for the post id you want to change...
|
How to draw line segment on FITS figure using APLpy or python 2.7?
Question: I want to draw a line segment joining two points on a FITS figure.
(x,y) co-ordinates of these points are (200,250) & (300,400).
I am using APLpy for this.
My code is:
import matplotlib.pyplot as plt
import aplpy
import numpy as np
fig = aplpy.FITSFigure('test.fits')
fig.show_grayscale()
a=np.ndarray(shape=(2,2))
a[0][0]=200
a[0][1]=250
a[1][0]=300
a[1][1]=400
fig.show_lines(a)
plt.show()
I am using "fig.show_lines()" function of APLpy described on following web-
page: <http://aplpy.readthedocs.org/en/latest/quick_reference.html#shapes>
It says 'use lists of numpy arrays' as argument to show_lines().
But I got following error message:
Traceback (most recent call last):
File "draw.py", line 16, in <module>
fig.show_lines(a)
File "<string>", line 2, in show_lines
File "/home/swapnil/anaconda/lib/python2.7/site-packages/aplpy/decorators.py", line 25, in _auto_refresh
return f(*args, **kwargs)
File "/home/swapnil/anaconda/lib/python2.7/site-packages/aplpy/aplpy.py", line 1275, in show_lines
xp, yp = wcs_util.world2pix(self._wcs, line[0, :], line[1, :])
IndexError: too many indices
Any help will be appreciated.
Thanks.
Answer: I understand that it should be a list of `2xN` numpy arrays, where in
each array the first row holds the x coordinates and the second row the y
coordinates of one line:
    line = np.array([[200, 300], [250, 400]])  # x coords on top, y coords below
    fig.show_lines([line])
HTH,
Germán.
|
Django Python PIL save image - broken image
Question: I am overriding the `save_model` method of a ModelAdmin to resize the
image being uploaded via the admin page to a width of 650:
def save_model(self, request, obj, form, change):
basewidth = 650
img = PIL.Image.open(form.cleaned_data['image_file'])
if img.size[0] > basewidth:
wpercent = (basewidth / float(img.size[0]))
hsize = int((float(img.size[1]) * float(wpercent)))
img = img.resize((basewidth, hsize), PIL.Image.ANTIALIAS)
img_filefield = getattr(obj, 'image_file')
random_image_name = ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(30)) + '.jpeg'
img.save(random_image_name)
img_filefield.save(random_image_name, ContentFile(img))
obj.save()
else:
obj.save()
It saves the image, but the image is broken: just a black image showing
`"invalid image"` if I open it.
What am I doing wrong in the above code?
Answer: I did not know that PIL Images are a different type than Django's
ImageField. Thanks to Skitz's [answer](http://stackoverflow.com/a/4544525/903790), I
could solve it this way:
def save_model(self, request, obj, form, change):
basewidth = 650
img = PIL.Image.open(form.cleaned_data['image_file'])
if img.size[0] > basewidth:
wpercent = (basewidth / float(img.size[0]))
hsize = int((float(img.size[1]) * float(wpercent)))
img = img.resize((basewidth, hsize), PIL.Image.ANTIALIAS)
img_file_lang = getattr(obj, 'image_file')
random_image_name = ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(30)) + '.jpeg'
image_io = StringIO.StringIO()
img.save(image_io, format='JPEG')
img_file_lang.save(random_image_name, ContentFile(image_io.getvalue()))
obj.save()
else:
obj.save()
Don't forget to `import StringIO`.
|
Avoid click to get out of wxPython TreeCtrl in a Notebook
Question: Below is very simple wxPython code creating a Notebook inside which are
several panels containing TreeCtrl objects.
Using it, I get behavior I would like to avoid:
When I click in a tree, then I cannot switch directly to another page of the
notebook without clicking first outside the tree. This means that it needs two
clicks to change the notebook page: One to get outside the tree, another to
switch the page.
I would like to be able to do this in one single click.
The code:
import wx
class TestFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY)
# Create the notebook
notebook = wx.Notebook(self)
# Put panels in the notebook
notebook.AddPage(TestPanel(notebook), "Page 1")
notebook.AddPage(TestPanel(notebook), "Page 2")
# Display the window
self.Show(True)
class TestPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
# Create the sizer
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
# Create the tree
tree = wx.TreeCtrl(self)
sizer.Add(tree, 1, wx.EXPAND)
# Create nodes in the tree
root = tree.AddRoot("root")
tree.AppendItem(root, "item 1")
tree.AppendItem(root, "item 2")
tree.AppendItem(root, "item 3")
# Expand the root node
tree.Expand(root)
if __name__ == "__main__":
# Create an application without redirection of stdout/stderr to a window
application = wx.App(False)
# Open a main window
frame = TestFrame()
# Launch the application
application.MainLoop()
Answer: This looks like [this bug](http://trac.wxwidgets.org/ticket/16055) which
should be fixed in 3.0.2. If you're using an earlier version, please upgrade.
|
PHP UTF8 decode not working for out returned from python
Question: I get a reply from a Python server. Basically, what I am doing is
sending an article, and the Python code sends me back the important tags in the
article. The reply I get is like this:
"keywords": "[u'Smartphone', u'Abmessung', u'Geh\xe4userand']"
So I want to UTF-8 decode the **Geh\xe4userand** string. I read in some post
that I have to put it in quotes and do the decoding, but it's not working. My
code is:
$tags = str_replace("'",'"',$tags);
$tags = preg_replace('/\[*\s*u(".*?")\]*/', "$1", $tags);
$tags = explode(',', $tags);
foreach ($tags as $tag) {
pr(utf8_encode($tag));
}
die;
The output I am getting is:
<pre>"Smartphone"</pre><pre>"Abmessung"</pre><pre>"Geh\xe4userand"</pre>
I don't have access to the Python code.
Answer: If at all feasible, fix the Python code instead; it is sending you a Python
list literal with a Unicode escape, not UTF8. Ideally it should send you JSON
instead.
The `\xe4` character sequence encodes the codepoint U+00E4, but it is using 4
literal ASCII characters (`\`, `x`, `e`, `4`).
Other Python literal rules:
* It'll use either single quotes or double quotes, depending on the contents, with a preference for single quotes. As a result you may have to handle escaped `\'` single quotes.
* Newlines, carriage returns and tabs are escaped to `\n`, `\r` and `\t` respectively.
* All other non-printable Latin-1 characters are escaped to `\xhh`, a two-digit hexadecimal encoding of the codepoint.
* If the literal starts with `u` it is a Unicode string, not a byte string, and any codepoint outside the Latin-1 subset but part of the Basic Multilingual Plane is escaped to `\uhhhh`, a four-digit hexadecimal encoding of the codepoint in the range U+0100 through to U+FFFF
* In a Unicode string you'll also find `\Uhhhhhhhh`, a eight-digit hexadecimal encoding non-BMP unicode codepoints in the range U+00010000 through to U+0001FFFF.
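These rules are what Python's `repr()` applied when the server stringified the list; Python 3's `ascii()` reproduces the same escaping (minus the `u` prefix), and `unicode_escape` decoding undoes it on the receiving side:

```python
tags = ['Smartphone', 'Abmessung', 'Geh\xe4userand']
# ascii() escapes non-ASCII codepoints exactly like the server's output
print(ascii(tags))  # -> ['Smartphone', 'Abmessung', 'Geh\xe4userand']

# Going the other way: interpret the literal backslash-x-e-4 sequence
# that leaked into the reply as the codepoint U+00E4.
leaked = 'Geh\\xe4userand'
print(leaked.encode('ascii').decode('unicode_escape'))  # -> Gehäuserand
```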
|
python-mplayer not opening mp3 file
Question: Hi, I am trying to build a small audio player, integrating mplayer into Python.
I thought python-mplayer could do the job, but I cannot get it to work. Any
idea?
It seems like `p.loadfile` doesn't work, as `ps ax` shows `/usr/bin/mplayer -slave -idle
-really-quiet -msglevel global=4 -input nodefault-bindings -noconfig all`
import pygame
import os
import subprocess
import sys
import time
import subprocess
from mplayer import *
audiofile = "/home/user/1.mp3"
gfx_rev_normal = pygame.image.load('rev_normal.png')
pygame.init()
screen = pygame.display.set_mode((800, 480))
#background = pygame.Surface(screen.get_size())
#background = background.convert()
#background.fill = ((255, 255, 255))
font = pygame.font.SysFont("monospace", 36)
source_text = font.render("Audio Player", 1, (255, 255, 255))
text_width = source_text.get_width()
source_text_x = screen.get_width() / 2
source_text_x = source_text_x - (text_width/2)
source_text_y = 10
screen.blit(source_text,(source_text_x, source_text_y))
Player.exec_path="/usr/bin/mplayer"
p = Player()
p.loadfile('audiofile')
#p.pause()
p.volume=100
running = True
while running:
time.sleep(0.1)
print p.stream_pos
screen.blit(gfx_rev_normal,(30,120))
pygame.display.flip()
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
running = False
Answer: I'm not familiar with the mplayer API, but maybe it's just this small
oversight:
p.loadfile('audiofile')
which should be
p.loadfile(audiofile)
as the path of your file is in the variable audiofile, not the string
`'audiofile'`.
|
error when try to install flask in the virtual enviroment
Question: I just configured the environment to develop a Flask-based web app.
Everything went smoothly, but when I run my hello world app, the Python
interpreter tells me there is no module named flask:
Traceback (most recent call last):
File "hello.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
but I definitely installed Flask.
When I get the error, I just run the command in the virtual environment, `sudo
pip install flask`. Then the console shows this message:
(venv)ubuntu@localhost:/var/www/demoapp$ sudo pip install flask
Requirement already satisfied (use --upgrade to upgrade): flask in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in /usr/local/lib/python2.7/dist-packages (from flask)
Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in /usr/local/lib/python2.7/dist-packages (from flask)
Requirement already satisfied (use --upgrade to upgrade): itsdangerous>=0.21 in /usr/local/lib/python2.7/dist-packages (from flask)
Requirement already satisfied (use --upgrade to upgrade): markupsafe in /usr/local/lib/python2.7/dist-packages (from Jinja2>=2.4->flask)
Cleaning up...
Who can tell me how I can run my hello world Flask app?
Answer: When you run
> $ sudo pip install ...
the system pip is used, so the package goes into the system site-packages, not
into your venv. To install Flask in the current virtual environment, just run
> $ pip install flask
or, explicitly:
> $ /path/to/venv/bin/pip install flask
Alternatively, make your venv able to load global system packages by passing
the `--system-site-packages` option when creating the virtual environment.
|
how to scrape imbeded script on webpage in python
Question: For example, I have webpage
[http://www.amazon.com/dp/1597805483](http://rads.stackoverflow.com/amzn/click/1597805483).
I want to use xpath to scrape this sentence `Of all the sports played across
the globe, none has more curses and superstitions than baseball, America’s
national pastime.`
page = requests.get(url)
tree = html.fromstring(page.text)
feature_bullets = tree.xpath('//*[@id="iframeContent"]/div/text()')
print feature_bullets
Nothing is returned by the above code. The reason is that the XPath seen in the
browser is evaluated against the rendered DOM, which differs from the page
source. But I don't know how to get a working XPath from the source code.
Answer: There are a lot of things involved in building the page you are web-scraping.
As for the description specifically, the underlying HTML is constructed inside a
javascript function:
<script type="text/javascript">
P.when('DynamicIframe').execute(function (DynamicIframe) {
var BookDescriptionIframe = null,
bookDescEncodedData = "%3Cdiv%3E%3CB%3EA%20Fantastic%20Anthology%20Combining%20the%20Love%20of%20Science%20Fiction%20with%20Our%20National%20Pastime%3C%2FB%3E%3CBR%3E%3CBR%3EOf%20all%20the%20sports%20played%20across%20the%20globe%2C%20none%20has%20more%20curses%20and%20superstitions%20than%20baseball%2C%20America%26%238217%3Bs%20national%20pastime.%3Cbr%3E%3CBR%3E%3CI%3EField%20of%20Fantasies%3C%2FI%3E%20delves%20right%20into%20that%20superstition%20with%20short%20stories%20written%20by%20several%20key%20authors%20about%20baseball%20and%20the%20supernatural.%20%20Here%20you%27ll%20encounter%20ghostly%20apparitions%20in%20the%20stands%2C%20a%20strangely%20charming%20vampire%20double-play%20combination%2C%20one%20fan%20who%20can%20call%20every%20shot%20and%20another%20who%20can%20see%20the%20past%2C%20a%20sad%20alternate-reality%20for%20the%20game%27s%20most%20famous%20player%2C%20unlikely%20appearances%20on%20the%20field%20by%20famous%20personalities%20from%20Stephen%20Crane%20to%20Fidel%20Castro%2C%20a%20hilariously%20humble%20teenage%20phenom%2C%20and%20much%20more.%20In%20this%20wonderful%20anthology%20are%20stories%20from%20such%20award-winning%20writers%20as%3A%3CBR%3E%3CBR%3EStephen%20King%20and%20Stewart%20O%26%238217%3BNan%3Cbr%3EJack%20Kerouac%3CBR%3EKaren%20Joy%20Fowler%3CBR%3ERod%20Serling%3CBR%3EW.%20P.%20Kinsella%3CBR%3EAnd%20many%20more%21%3CBR%3E%3CBR%3ENever%20has%20a%20book%20combined%20the%20incredible%20with%20great%20baseball%20fiction%20like%20%3CI%3EField%20of%20Fantasies%3C%2FI%3E.%20This%20wide-ranging%20collection%20reaches%20from%20some%20of%20the%20earliest%20classics%20from%20the%20pulp%20era%20and%20baseball%27s%20golden%20age%2C%20all%20the%20way%20to%20material%20appearing%20here%20for%20the%20first%20time%20in%20a%20print%20edition.%20Whether%20you%20love%20the%20game%20or%20just%20great%20fiction%2C%20these%20stories%20will%20appeal%20to%20all%2C%20as%20the%20writers%20in%20this%20anthology%20bring%20great%20storytelling%20of%20the%20
strange%20and%20supernatural%20to%20the%20plate%2C%20inning%20after%20inning.%3CBR%3E%3C%2Fdiv%3E",
bookDescriptionAvailableHeight,
minBookDescriptionInitialHeight = 112,
options = {};
...
</script>
The idea here would be to get the script tag's text, extract the description
value using regular expressions, unquote the HTML, parse it with `lxml.html`
and get the `.text_content()`:
import re
from urlparse import unquote
from lxml import html
import requests
url = "http://rads.stackoverflow.com/amzn/click/1597805483"
page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36'})
tree = html.fromstring(page.content)
script = tree.xpath('//script[contains(., "bookDescEncodedData")]')[0]
match = re.search(r'bookDescEncodedData = "(.*?)",', script.text)
if match:
description_html = html.fromstring(unquote(match.group(1)))
print description_html.text_content()
Prints:
A Fantastic Anthology Combining the Love of Science Fiction with Our National Pastime.
Of all the sports played across the globe, none has more curses and superstitions than baseball, America’s national pastime.Field of Fantasies delves right into that superstition with short stories written by several key authors about baseball and the supernatural.
Here you'll encounter ghostly apparitions in the stands, a strangely charming vampire double-play combination, one fan who can call every shot and another who can see the past, a sad alternate-reality for the game's most famous player, unlikely appearances on the field by famous personalities from Stephen Crane to Fidel Castro, a hilariously humble teenage phenom, and much more.
In this wonderful anthology are stories from such award-winning writers as:Stephen King and Stewart O’NanJack KerouacKaren Joy FowlerRod SerlingW. P. KinsellaAnd many more!Never has a book combined the incredible with great baseball fiction like Field of Fantasies.
This wide-ranging collection reaches from some of the earliest classics from the pulp era and baseball's golden age, all the way to material appearing here for the first time in a print edition. Whether you love the game or just great fiction, these stories will appeal to all, as the writers in this anthology bring great storytelling of the strange and supernatural to the plate, inning after inning.
* * *
Similar solution, but using
[`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/):
import re
from urlparse import unquote
from bs4 import BeautifulSoup
import requests
url = "http://rads.stackoverflow.com/amzn/click/1597805483"
page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36'})
soup = BeautifulSoup(page.content)
script = soup.find('script', text=lambda x:'bookDescEncodedData' in x)
match = re.search(r'bookDescEncodedData = "(.*?)",', script.text)
if match:
description_html = BeautifulSoup(unquote(match.group(1)))
print description_html.text
* * *
Alternatively, you can take a high-level approach and use a real browser with
the help of [`selenium`](http://selenium-python.readthedocs.org/):
from selenium import webdriver
url = "http://rads.stackoverflow.com/amzn/click/1597805483"
driver = webdriver.Firefox()
driver.get(url)
iframe = driver.find_element_by_id('bookDesc_iframe')
driver.switch_to.frame(iframe)
print driver.find_element_by_id('iframeContent').text
driver.close()
Produces much more nicely formatted output:
A Fantastic Anthology Combining the Love of Science Fiction with Our National Pastime
Of all the sports played across the globe, none has more curses and superstitions than baseball, America’s national pastime.
Field of Fantasies delves right into that superstition with short stories written by several key authors about baseball and the supernatural. Here you'll encounter ghostly apparitions in the stands, a strangely charming vampire double-play combination, one fan who can call every shot and another who can see the past, a sad alternate-reality for the game's most famous player, unlikely appearances on the field by famous personalities from Stephen Crane to Fidel Castro, a hilariously humble teenage phenom, and much more. In this wonderful anthology are stories from such award-winning writers as:
Stephen King and Stewart O’Nan
Jack Kerouac
Karen Joy Fowler
Rod Serling
W. P. Kinsella
And many more!
Never has a book combined the incredible with great baseball fiction like Field of Fantasies. This wide-ranging collection reaches from some of the earliest classics from the pulp era and baseball's golden age, all the way to material appearing here for the first time in a print edition. Whether you love the game or just great fiction, these stories will appeal to all, as the writers in this anthology bring great storytelling of the strange and supernatural to the plate, inning after inning.
|
Why does sys.getsizeof() not return [size] in file.read([size]) in Python
Question: I have a large binary file that I would like to read in and unpack using
`struct.unpack()`. The file consists of a number of lines, each 2957 bytes long. I
read in the file using the following code:
with open("bin_file", "rb") as f:
line = f.read(2957)
My question is: why is the size returned by:
import sys
sys.getsizeof(line)
not equal to 2957 (in my case it is 2978)?
Answer: You misunderstand what `sys.getsizeof()` _does_. It returns the amount of
memory Python uses for a string object, not length of the line.
Python string objects track reference counts, the object type and other
metadata together with the actual characters, so 2978 bytes is _not_ the same
thing as the string length.
See the [`stringobject.h` definition of the
type](https://hg.python.org/cpython/file/3f13f1620d70/Include/stringobject.h#l35):
typedef struct {
PyObject_VAR_HEAD
long ob_shash;
int ob_sstate;
char ob_sval[1];
/* Invariants:
* ob_sval contains space for 'ob_size+1' elements.
* ob_sval[ob_size] == 0.
* ob_shash is the hash of the string or -1 if not computed yet.
* ob_sstate != 0 iff the string object is in stringobject.c's
* 'interned' dictionary; in this case the two references
* from 'interned' to this object are *not counted* in ob_refcnt.
*/
} PyStringObject;
where `PyObject_VAR_HEAD` is defined in
[`object.h`](https://hg.python.org/cpython/file/3f13f1620d70/Include/object.h#l77),
where the standard `ob_refcnt`, `ob_type` and `ob_size` fields are all
defined.
So a string of length 2957 takes 2958 bytes (string length + null) and the
remaining 20 bytes you see are to hold the reference count, the type pointer,
the object 'size' (string length here), the cached string hash and the
interned state flag.
Other object types will have different memory footprints, and the exact sizes
of the C types used differ from platform to platform as well.
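If what you actually want is the number of bytes read, use `len()` instead. A small sketch of the distinction (the exact overhead below varies by interpreter and platform, so it is only asserted to be positive):

```python
import sys

line = b"x" * 2957  # stands in for f.read(2957)

# len() counts the bytes that were read; getsizeof() adds the per-object
# bookkeeping (reference count, type pointer, cached hash, ...), whose
# exact size depends on the Python version and platform.
assert len(line) == 2957
assert sys.getsizeof(line) > len(line)
```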
|
Python - urllib2 > how to escape HTTP errors
Question: I am making a Python app and I want to read a file from the net. This is
the code that I am using to read it:
urllib2.urlopen("http://example.com/check.txt").read()
everything works great, but when I point it to a URL that does not exist, it
gives an HTTP 404: Not Found error, and that is normal.
The problem is that the app is designed to work on Windows, so it will be
compiled. On Windows, when the app tries to get a file from a URL that does not
exist, the app crashes and shows an error window + it creates a log that
contains the HTTP 404: Not Found error.
I tried to catch this error but I failed. This is the full code:
import urllib2
file = urllib2.urlopen("http://example.com/check.txt")
try:
file.read()
except urllib2.URLError:
print "File Not Found"
else:
print "File is found"
please, if you know how to escape this error, help me.
Answer: You should apply the try..except around the urlopen, not the read.
Try this
import urllib2
try:
    fh = urllib2.urlopen('http://example.com/check.txt')
    print fh.read()
except urllib2.HTTPError, e:
    print e.code
except urllib2.URLError, e:
    print e.reason
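For completeness, under Python 3 the same pattern is spelled with `urllib.request` and `urllib.error`. Two details worth noting: `URLError` carries a `reason` attribute rather than a `code`, and `HTTPError` is a subclass of `URLError`, so it must be caught first. A sketch (the wrapper function is my own):

```python
import urllib.error
import urllib.request

def fetch(url):
    """Return the body of url, or None if the request fails."""
    try:
        with urllib.request.urlopen(url) as fh:
            return fh.read()
    except urllib.error.HTTPError as e:   # the server answered with 4xx/5xx
        print("HTTP error:", e.code)
    except urllib.error.URLError as e:    # DNS failure, refused connection, ...
        print("URL error:", e.reason)
    return None
```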
|
Python Using Lists to create program
Question: I have an assignment in class to write a program using lists in Python
"Create a program that prompts the user for a vocabulary word. Then prompts
user to enter the word's definition. Ask user if they want to enter more words
and definitions. When they are done entering all words and definitions, print
out all the words along with their definition."
I know I need to have a nested list to store the user input. But my question
is how am I going to get the user input and store it into a nested list? I
also know that I need to use a loop to take in all the inputs for words and
definitions, but I'm confused on how to do so.
* * *
myvar=str(print(input("Type a Word.")))
myvar2=str(print(input("Type the word's definition.")))
myvar3=input(print("If you want to enter another word, enter Y, if not enter N"))
mylist=[[myvar,myvar2]]
while myvar3=='Y':
myvar4=str(print(input("Enter your next word.")))
myvar5=str(print(input("Enter the word's definition.")))
mylist.append([myvar4,myvar5])
myvar3=input(print("If you want to enter another word, enter Y, if not enter N"))
print(mylist)
I think this works, is there anything wrong with this? Do I need to make it to
where if they enter "N" it does something to end the loop? Or does the loop
just end as long as it doesn't equal 'Y'?
Answer: If you're using Python 3.x, getting input from a user is very simple. This is
accomplished using the input() function.
This will prompt input from the user, printing the string passed to input()
before the caret:
input("Please enter a word: ")
The user types whatever they feel, then hits Enter. When they hit enter,
input() _returns_ the text they've entered. So, you can store the value the
user typed with something like this:
user_word = input("Please enter a word: ")
And a definition can be entered into a separate variable like this:
user_definition = input("Please enter a definition: ")
Then, you can use one of Python's built-in data types to store both values,
and, just as importantly, to build a logical association between them, before
you prompt them for their next word.
Here's the documentation on the [input and
output](http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/io.html).
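Putting those pieces together, the loop could be sketched as below. Note that the `str(print(input(...)))` wrappers in the question are unnecessary: `input()` already returns the entered string, while `print()` returns `None`. The `ask` parameter here is my own addition, an injectable stand-in for the built-in `input()` so the loop is easy to test:

```python
def collect_vocab(ask=input):
    """Collect [word, definition] pairs until the user stops answering 'Y'."""
    entries = []
    while True:
        word = ask("Type a word: ")
        definition = ask("Type the word's definition: ")
        entries.append([word, definition])
        # anything other than 'Y' (case-insensitive) ends the loop
        if ask("Enter another word? (Y/N): ").strip().upper() != "Y":
            return entries
```

Afterwards you can print everything with `for word, definition in collect_vocab(): print(word, "-", definition)`.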
|
Does pygtk garbage-collect runtime-created functions connected to signals?
Question: I'm using PyGtk.
Will a **runtime-generated** function connected to the signal "drag_data_get"
of a widget be **garbage-collected** when the widget is **destroyed** ?
Same question about the Gtk.TargetList that are created and associated with
drag source/dest target?
I did found [Python and GTK+: How to create garbage collector friendly
objects?](http://stackoverflow.com/questions/6866578/python-and-gtk-how-to-
create-garbage-collector-friendly-objects?rq=1) but it does not help too much.
Answer: In short: yes, it does; dynamically created functions are garbage-collected just
like any other Python object created at run time.
Longer answer: For resources managed by the garbage collector, such as objects
not tied to an external resource, Python and PyGTK will correctly dispose of
unused objects. For external resources, such as open files or running threads,
you need to take steps to ensure their correct cleanup. To answer your
question precisely, it would be useful to see concrete code. In general, the
following things apply to Python and GTK:
* Python objects, including dynamically created functions, are deallocated some time after they can no longer be reached from Python. In some cases deallocation happens immediately after the object becomes unreachable (if the object is not involved in reference cycles), while in others you must wait for the garbage collector to kick in.
* Destroying a widget causes GTK resources associated with the widget to be cleared immediately. The object itself can remain alive. Callbacks reachable through the widget should be dereferenced immediately and, provided nothing else holds on to them from Python, soon deallocated.
You can use the weak reference type from the `weakref` module to test this.
For example:
>>> import gtk
>>>
>>> def report_death(obj):
... # arrange for the death of OBJ to be announced
... def announce(wr):
... print 'gone'
... import weakref
... report_death.wr = weakref.ref(obj, announce)
...
>>> def make_dynamic_handler():
... def handler():
... pass
... # for debugging - we want to know when the handler is freed
... report_death(handler)
... return handler
...
>>> w = gtk.Window()
>>> w.connect('realize', make_dynamic_handler())
10L
>>> w.destroy()
gone
Now, if you change the code to `handler` to include a circular reference, e.g.
by modifying it to mention itself:
def handler():
handler # closure with circular reference
...the call to destroy will no longer cause `gone` to be immediately printed -
that will require the program to keep working, or an explicit call to
`gc.collect()`. In most Python and PyGTK programs automatic deallocation "just
works" and you don't need to make an effort to help it.
Ultimately, the only **reliable** test whether there is a memory leak is
running the suspect code in an infinite loop and monitoring the memory
consumption of the process - if it grows without bounds, something is not
getting deallocated and you have a memory leak.
|
Can't install SciPy on production server
Question: I'm trying to install SciPy on an Ubuntu machine in the cloud. Here are the
steps I followed:
* sudo pip install numpy
* sudo apt-get install gfortran
* sudo apt-get install libblas-dev
* sudo apt-get install liblapack-dev
* sudo apt-get install g++
* sudo pip install scipy
For your information, it's Ubuntu 14.04, Python 2.7.6. I did not install
Python; it was already on the machine, just like pip and easy_install. Here is
the pip.log:
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
error: Command "/usr/bin/gfortran -Wall -g -L/opt/bitnami/common/lib -L/opt/bitnami/common/lib build/temp.linux-x86_64-2.7/numpy/core/blasdot/_dotblas.o -L/usr/lib
-L/opt/bitnami/python/lib -Lbuild/temp.linux-x86_64-2.7 -lopenblas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/core/_dotblas.so" failed with exit status 1
----------------------------------------
Cleaning up... Removing temporary dir /tmp/pip_build_root... Command /opt/bitnami/python/bin/.python2.7.bin -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/numpy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Lm8nxq-record/install-record.txt
--single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/numpy Exception information:
Traceback (most recent call last):
File "/opt/bitnami/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/basecommand.py", line 130, in main
status = self.run(options, args) File "/opt/bitnami/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/commands/install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path) File "/opt/bitnami/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/req.py", line 1435, in install
requirement.install(install_options, global_options, *args, **kwargs) File "/opt/bitnami/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/req.py", line 706, in install
cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File "/opt/bitnami/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 697, in call_subprocess
% (command_desc, proc.returncode, cwd)) InstallationError: Command /opt/bitnami/python/bin/.python2.7.bin -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/numpy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Lm8nxq-record/install-record.txt
--single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/numpy
pip file:
#!/opt/bitnami/python/bin/.python2.7.bin
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.5.6','console_scripts','pip'
__requires__ = 'pip==1.5.6'
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
)
easy_install file:
#!/usr/bin/env /opt/bitnami/python/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'distribute==0.6.34','console_scripts','easy_install'
__requires__ = 'distribute==0.6.34'
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('distribute==0.6.34', 'console_scripts', 'easy_install')()
)
Please help.
Is this caused by the gfortran compiler flags? Just a suspicion.
Answer: # Ubuntu packages
The SciPy manual [suggests the Ubuntu version instead of the pip
version](http://www.scipy.org/install.html#ubuntu-debian):
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
# pip
Another approach would be [installing the
prerequisites](http://stackoverflow.com/a/15355787/51197):
sudo apt-get install python-pip python-dev build-essential
sudo pip install numpy
sudo apt-get install libatlas-base-dev gfortran
sudo pip install scipy
I've just tested it on an AWS Ubuntu 14.04 machine, and everything seems to
work, after a very long compilation process with tons of warnings.
I generally recommend the latter, because it works in any virtualenv, while the
former installs the packages system-wide.
|
Running python unittest in the console
Question: I have the following package structure
my-base-project
-> package1
__init__.py
MyScript.py
-> test
__init__.py
TestMyScript.py
I'd like to run `TestMyScript.py` in the console. Therefore I cd into
`my-base-project/test` and execute `python TestMyScript.py`. However, I'm
getting the error:
user@computer:~/my-base-project/test$ python TestMyScript.py
Traceback (most recent call last):
File "TestMyScript.py", line 4, in <module>
from package1 import MyScript
ImportError: No module named package1
How do I run these tests?
Answer: From [this SO question](http://stackoverflow.com/questions/4580101/python-add-
pythonpath-during-command-line-module-run), consider adding the directory you
need to the PYTHONPATH:
import sys
sys.path.append('your certain directory')
Maybe you want to add the parent directory
sys.path.append('..')
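Alternatively, assuming both `package1` and `test` contain an `__init__.py`, you can avoid path manipulation entirely by running the tests from the project root with the `-m` switch, which puts the current directory on `sys.path` automatically (the path below is hypothetical):

```shell
cd ~/my-base-project        # hypothetical project root
python -m unittest test.TestMyScript
```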
|
Writing generated numbers in text file in Python
Question: So I tried browsing other posts, and even asked a friend before resorting to
actually asking here. I have a homework assignment which calls for me to
create a program in Python, that generates random numbers based on how many
numbers a user inputs. For example, if they input that they want to generate 5
numbers... the program will do just that. Now my issue is that I created the
following:
import random
def main():
howMany = 0
numbers = 0
howMany = int(input('How many numbers would you like to generate?: '))
infile = open ('rand_write.txt', 'w')
for n in range(1,howMany):
numbers = random.randint(1,115)
infile.write(str(numbers))
infile.close()
main()
Everything works alright, until it's time to actually get the 5 numbers onto a
text file. I can't... for the life of me.. figure out just what I'm doing
wrong. The program writes to the text file, but it only writes a random
number, not 5. I would greatly appreciate any pointers and guidance in
figuring out what I should do to solve this issue. Thank you very much!
Answer: Your indentation is wrong. You need to indent
`infile.write(str(numbers))` so that it sits inside the loop body and executes
on each iteration; otherwise you end up writing only the last number. Note also
that `range(1, howMany)` yields only `howMany - 1` values, so use
`range(howMany)` to get exactly `howMany` numbers.
You may also want to write some separator between the numbers.
Finally, you may want to make just one call to generate all the random numbers,
e.g. with NumPy (`random_integers`, like `random.randint`, includes both
endpoints):
import numpy
numpy.random.random_integers(1, 115, howMany)
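A corrected sketch of the whole program (the function name and the newline separator are my own choices); `range(how_many)` yields exactly `how_many` values, and the write statement sits inside the loop:

```python
import random

def write_random_numbers(path, how_many, low=1, high=115):
    """Write how_many random integers in [low, high] to path, one per line."""
    with open(path, 'w') as outfile:
        for _ in range(how_many):           # exactly how_many iterations
            outfile.write('%d\n' % random.randint(low, high))
```

Call it as `write_random_numbers('rand_write.txt', int(input('How many numbers would you like to generate?: ')))`.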
|
Sympy cannot find the laplace transform of sinh (t)
Question: I have been using **sympy** for quite a few months now, and recently I have
run into a problem.
The following code is supposed to calculate the Laplace Transform of
**sinh(t)**.
from sympy import *
from sympy.abc import *
laplace_transform(sinh(t), t, s, noconds = True)
But, it fails to calculate anything and returns: `LaplaceTransform(sinh(t), t,
s)`
Can anyone tell me what's wrong?
(I am using `sympy-0.7.5` with `Python-2.7.7` on Windows 7 SP1 x64)
Answer: Apparently it doesn't try to rewrite the function. If you do so explicitly you
will find your answer:
>>> sinh(t).rewrite(exp)
exp(t)/2 - exp(-t)/2
>>> laplace_transform(_, t, s, noconds=True)
1/(s**2 - 1)
|
Passing string rather than function in django url pattern
Question: In the [Django
docs](https://docs.djangoproject.com/en/1.7/topics/http/urls/#passing-strings-
instead-of-callable-objects) it says about url patterns:
> It is possible to pass a string containing the path to a view rather than
> the actual Python function object. This alternative is supported for the
> time being, though is not recommended and will be removed in a future
> version of Django.
Does anyone have any insight as to why this the case? I find this alternative
to be quite handy and can't find anything explaining why this is a bad (or, at
least, less than ideal) idea.
Answer: I think the [1.8 Release
Notes](https://github.com/django/django/blob/master/docs/releases/1.8.txt#L695)
in the repo explain it quite well. Here's a summary of the main points:
> In the modern era, we have updated the tutorial to instead recommend
> importing your views module and referencing your view functions (or classes)
> directly. This has a number of advantages, all deriving from the fact that
> we are using normal Python in place of "Django String Magic": the errors
> when you mistype a view name are less obscure, IDEs can help with
> autocompletion of view names, etc.
>
> Thus `patterns()` serves little purpose and is a burden when teaching new
> users (answering the newbie's question "why do I need this empty string as
> the first argument to `patterns()`?"). For these reasons, we are deprecating
> it. Updating your code is as simple as ensuring that `urlpatterns` is a list
> of :func:`django.conf.urls.url` instances.
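Concretely, the recommended style means importing the views module and referencing the callables in a plain list; a sketch of such a `urls.py` config module (the app and view names are hypothetical):

```python
# urls.py -- reference view callables directly instead of dotted strings
from django.conf.urls import url

from myapp import views  # hypothetical app

urlpatterns = [
    url(r'^articles/$', views.article_list),  # callable, not 'myapp.views.article_list'
    url(r'^articles/(?P<pk>\d+)/$', views.article_detail),
]
```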
|
Python - NameError
Question: I have the following code that uses 3 strings 'us dollars','euro',
'02-11-2014', and a number to calculate the exchange rate for that given date.
I modified the code to pass those arguments but I get an error when I try to
call it with
python currencyManager.py "us dollars" "euro" 100 "02-11-2014"
* * *
Traceback (most recent call last):
File "currencyManager.py", line 37, in <module>
currencyManager(currTo,currFrom,currAmount,currDate)
NameError: name 'currTo' is not defined
I'm fairly new to Python so my knowledge is limited. Any help would be greatly
appreciated. Thanks. Also the version of Python I'm using is 3.4.2.
import urllib.request
import re
def currencyManager(currTo,currFrom,currAmount,currDate):
try:
currency_to = currTo #'us dollars'
currency_from = currFrom #'euro'
currency_from_amount = currAmount
on_date = currDate # Day-Month-Year
currency_from = currency_from.replace(' ', '+')
currency_to = currency_to.replace(' ', '+')
url = 'http://www.wolframalpha.com/input/?i=' + str(currency_from_amount) + '+' + str(currency_from) + '+to+' + str(currency_to) + '+on+' + str(on_date)
req = urllib.request.Request(url)
output = ''
urllib.request.urlopen(req)
page_fetch = urllib.request.urlopen(req)
output = page_fetch.read().decode('utf-8')
search = '<area shape="rect.*href="\/input\/\?i=(.*?)\+.*?&lk=1'
result = re.findall(r'' + search, output, re.S)
if len(result) > 0:
amount = float(result[0])
print(str(amount))
else:
print('No match found')
except URLError as e:
print(e)
currencyManager(currTo,currFrom,currAmount,currDate)
Answer: The command line
python currencyManager.py "us dollars" "euro" 100 "02-11-2014"
does not automatically assign "us dollars" "euro" 100 "02-11-2014" to
currTo,currFrom,currAmount,currDate. Instead the command line arguments are
stored in a list,
[sys.argv](https://docs.python.org/2/library/sys.html#sys.argv). You need to
parse sys.argv and/or pass its values on to the call to currencyManager:
For example, change
currencyManager(currTo,currFrom,currAmount,currDate)
to
import sys
currencyManager(*sys.argv[1:5])
* * *
The first element in `sys.argv` is the script name. Thus `sys.argv[1:5]`
consists of the next 4 arguments after the script name (assuming 4 arguments
were entered on the command line.) You may want to check that the right number
of arguments are passed on the command line and that they are of the right
type. [The `argparse` module](https://docs.python.org/3/library/argparse.html)
can help you here.
The `*` in `*sys.argv[1:5]` [unpacks the
list](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-
python/) `sys.argv[1:5]` and passes the items in the list as arguments to the
function `currencyManager`.
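As mentioned, `argparse` can handle the checking and conversion for you. A hedged sketch (the function and argument names are my own, not part of the original script):

```python
import argparse

def parse_currency_args(argv=None):
    """Parse the four positional command-line arguments.

    With argv=None, argparse reads sys.argv[1:], so the script can simply
    call parse_currency_args() from __main__.
    """
    parser = argparse.ArgumentParser(
        description='Look up a historical exchange rate.')
    parser.add_argument('currency_to')         # e.g. "us dollars"
    parser.add_argument('currency_from')       # e.g. "euro"
    parser.add_argument('amount', type=float)  # converted to float for you
    parser.add_argument('date')                # Day-Month-Year
    return parser.parse_args(argv)
```

Then `args = parse_currency_args()` followed by `currencyManager(args.currency_to, args.currency_from, args.amount, args.date)` replaces the manual `sys.argv` slicing.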
|
I would like to check if an input is python code
Question: I would like to check if an input is code before joining it to a larger
variable to eventually execute, is there any way I can do this? For example:
import readline
while True:
codelines=[]
code=raw_input(">>> ")
if code.iscode():
codelines.append(code)
elif x=="end":
break
else:
print "Not usable code."
fullcode="\n".join(codelines)
try:
exec fullcode
except Exception, e:
print e
But I know of no command that works like `.iscode()`
Answer: You could try parsing the input with
[`ast.parse`](https://docs.python.org/3/library/ast.html#ast.parse):
import ast

codelines = []
while True:
    code = raw_input(">>> ")
    if code == "end":  # Check the sentinel before parsing:
        break          # "end" is itself a valid Python expression.
    try:
        ast.parse(code)  # Try to parse the string.
    except SyntaxError:  # If we get here, the string contains invalid code.
        print "Not usable code."
    else:  # Otherwise, the string was valid, so we add it to the list.
        codelines.append(code)
The function will raise a `SyntaxError` if the string is non-parseable
(contains invalid Python code).
|
Cannot import MySQLdb - python - Windows 8.1
Question: I am trying to import MySQLdb in python. I checked and followed all possible
solutions but I am still not able to import it. I have Windows 8.1.
So I started fresh: I installed the latest version of Python (2.7.8), set the
PATH and PYTHONPATH variables, and then tried installing
MySQL-python-1.2.5.win-amd64-py2.7 from the link
(<https://pypi.python.org/pypi/MySQL-python/>)
This is the error I get
>>> import MySQLdb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named MySQLdb
I have tried searching and following many links. Nothing worked!! Can someone
please help me with this.
Thanks
Answer: Issue resolved. I didn't have a C compiler / Visual Studio on my laptop - this
link might help <http://mysql-python.blogspot.com/2012/11/is-mysqldb-hard-to-
install.html> I installed MinGW -
<http://sourceforge.net/projects/mingw/?source=typ_redirect> Selected the c++
option
I uninstalled Anaconda and Python and installed Anaconda again, so Python 2.7.7
got installed along with Anaconda. Did a conda init,
conda install pip
pip install mysql-python
and then import MySQLdb
No Error!! Hope this helps!!
====================================================================
Update - Jan 2, 2015
If you want to try installing mysql-python package using conda instead of pip,
you can try the following, which worked for me.
conda install binstar
binstar search -t conda mysql-python
This will show you 10 different packages for different OSes. The krisvanneste
mysql-python package is for win-64.
To know more about this package use the command
binstar show krisvanneste/mysql-python
This will show you the command to install the mysql-python package which
happens to be
conda install --channel https://conda.binstar.org/krisvanneste mysql-python
This will install the required package. Now trying import MySQLdb in python
wont throw error.
|
Python Pandas Data Formatting
Question: I am in some sort of Python Pandas datetime purgatory and cannot seem to
figure out why the below throws an error. I have a simple date, a clear format
string, and a thus far unexplained ValueError. I've done quite a bit of
searching, and can't seem to get to the bottom of this.
On top of the issue below, what is the concept surrounding the format string
referred to as? In other words, where can I learn more about how the %m, %d,
and %Y can be changed and reconfigured to specify different formats?
Thanking you in advance from purgatory.
In [19]: import pandas as pd
In [20]: date = '05-01-11'
In [21]: print type(date)
<type 'str'>
In [22]: pd.to_datetime(date, format = '%m-%d-%Y')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-22-24aff1dbfb25> in <module>()
----> 1 pd.to_datetime(date, format = '%m-%d-%Y')
/Users/amormachine/anaconda/lib/python2.7/site-packages/pandas/tseries/tools.pyc in to_datetime(arg, errors, dayfirst, utc, box, format, coerce, unit, infer_datetime_format)
323 return _convert_listlike(arg, box, format)
324
--> 325 return _convert_listlike(np.array([ arg ]), box, format)[0]
326
327 class DateParseError(ValueError):
/Users/amormachine/anaconda/lib/python2.7/site-packages/pandas/tseries/tools.pyc in _convert_listlike(arg, box, format)
311 return DatetimeIndex._simple_new(values, None, tz=tz)
312 except (ValueError, TypeError):
--> 313 raise e
314
315 if arg is None:
ValueError: time data '05-01-11' does not match format '%m-%d-%Y' --> THE HELL IT DOESN'T!
Answer: `%Y` is a [four-digit year](https://docs.python.org/2/library/datetime.html).
You should find that `%y` will work.
|
How to convert Excel data into mysql without installing the plugin
Question: I have an Excel file which contains details about a database, with 8 columns &
8000 rows. This data should be converted into MySQL. **I would like to use
Python, but am not sure which library would support this conversion.** The file
I have is .xls, and I want a .sql file as output. Could anyone help me with the
Python code or suggest any other alternative?
Answer: To convert an Excel file to an SQL file, Python is one of the best options.
There is the **xlrd** library; the following is used to load the Excel file:
import xlrd
import MySQLdb
book = xlrd.open_workbook("<file_name>")
sheet = book.sheet_by_name("<sheet_Name>")
database = MySQLdb.connect(host="localhost", user="root", passwd="<Password>", db="<db_name>")
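The answer stops after loading the workbook and connecting. Since the question asks for a `.sql` file, here is a hedged sketch of the remaining step; the table name and the naive quoting are my assumptions, and for inserting directly into MySQL you should prefer parameterized queries through the MySQLdb cursor rather than string formatting:

```python
def rows_to_sql(rows, table='my_table'):
    """Turn a list of row-value lists (e.g. sheet.row_values(i) for each i
    in range(sheet.nrows)) into a string of INSERT statements."""
    statements = []
    for row in rows:
        # naive quoting: double any single quotes inside the values
        values = ', '.join("'%s'" % str(v).replace("'", "''") for v in row)
        statements.append('INSERT INTO %s VALUES (%s);' % (table, values))
    return '\n'.join(statements)
```

Write the result with something like `open('dump.sql', 'w').write(rows_to_sql(rows))` and feed the file to the `mysql` client.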
|
Mongo UUID python vs java format
Question: I have an application that sends requests to a restAPI, where a java process
stores the data in mongo. When I try to read this data back using pymongo,
reading the database directly, it gets the UUIDs differently (seems it is due
to different encoding in java/python).
Is there a way to convert this UUID back and forth?
EDIT:
A few examples:
> in java: 38f51c1d-360e-42c1-8f9a-3f0a9d08173d,
> 1597d6ea-8e5f-473b-a034-f51de09447ec
>
> in python: c1420e36-1d1c-f538-3d17-089d0a3f9a8f,
> 3b475f8e-ead6-9715-ec47-94e01df534a0
thanks,
Answer: I spent a day of my life trying to tackle this same issue...
The root problem is likely that your Java code is storing the UUIDs in the
Mongo database with the Java drivers using the legacy UUID3 standard. To
verify, you just login with the Mongo shell and look at the raw output of your
UUIDs. If there's a 3, then that's the issue.
db.my_collection_name.find().limit(1)
...BinData(3,"blahblahblahblahblah"),...
With UUID3, Mongo decided to do everything differently across their drivers
depending on the language. (thanks Mongo…) It wasn’t until UUID4 that Mongo
decided to standardize across all their different drivers for various
languages. Ideally you should probably switch to UUID4, but that’s a more
impactful solution, so not necessarily practical. REFERENCE:
<http://3t.io/blog/best-practices-uuid-mongodb/>
Not to worry, there’s hope! The magic technique to make it all work involves
simply pulling the collection with the JAVA_LEGACY uuid specification in the
CodecOptions.
my_collection = db.get_collection('MyCollectionName', CodecOptions(uuid_representation=JAVA_LEGACY))
After that you can query with the UUIDs from your APIs and your query results
will also have the UUIDs in the same format as your APIs.
Here is a complete query example using this technique.
import pprint
import uuid
from bson.binary import JAVA_LEGACY
from bson.codec_options import CodecOptions
from pymongo import MongoClient
PP = pprint.PrettyPrinter(indent=2)
client = MongoClient('localhost', 27017)
db = client.my_database
# REFERENCES: http://3t.io/blog/best-practices-uuid-mongodb/ | http://api.mongodb.org/python/current/api/bson/binary.html
my_collection = db.get_collection('my_collection', CodecOptions(uuid_representation=JAVA_LEGACY))
my_java_uuid3 = "bee4ecb8-11e8-4267-8885-1bf7657fe6b7"
results = list(my_collection.find({"my_uuid": uuid.UUID(my_java_uuid3)}))
if results and len(results) > 0:
for result in results:
PP.pprint(result)
|
Inter-thread communication with python: Plotting memory-consumption using separate python thread
Question: Within a python script I call different functions sequentially (let's say
func_a, func_b, func_c), which process a given set of input data.
The execution takes about 30 min.
Within these 30 minutes, I want to track and plot the memory consumption of my
program.
I considered to do this within a separate thread (e.g. my_tracking_thread),
which checks for the current memory usage.
[E.g. like this:](http://fa.bianp.net/blog/2013/different-ways-to-get-memory-
consumption-or-lessons-learned-from-memory_profiler/)
import psutil
process = psutil.Process(os.getpid())
mem = process.get_memory_info()[0] / float(2 ** 20)
Plotting the collected data of my_tracking_thread with matplotlib, I would
like to include as additional information the time-stamps, at which the
different functions started (func_a@4:14, func_b@6:23, func_c@25:48).
Therefore the question: how do I notify my_tracking_thread that
func_{a|b|c} has started?
Any help appreciated.
Answer: It's difficult to see how you could do that from the other thread. Do you need
to? I'd suggest wrapping the functions in decorators that log the time they start
and finish, and then merging in the start/finish information when you generate
the plot of the memory-consumption data.
import collections
import functools
import time
LogEvent = collections.namedtuple('LogEvent', 'function event timestamp')
events = []
def log_start_finish(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        events.append(LogEvent(func.__name__, 'start', time.time()))
        result = func(*args, **kwargs)
        events.append(LogEvent(func.__name__, 'finish', time.time()))
        return result
    return wrapper
@log_start_finish
def func_a(...):
....
@log_start_finish
def func_b(...):
....
@log_start_finish
def func_c(...):
....
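A minimal sketch of the sampling side, for completeness: the tracker is a daemon thread with a pluggable `sample_fn`, so it can be tried without psutil (swap in the `psutil` memory call from the question for real measurements):

```python
import threading
import time

def start_tracker(sample_fn, interval=0.05):
    """Record (timestamp, sample_fn()) tuples until stop is set."""
    samples = []
    stop = threading.Event()

    def run():
        while not stop.is_set():
            samples.append((time.time(), sample_fn()))
            stop.wait(interval)

    worker = threading.Thread(target=run)
    worker.daemon = True
    worker.start()
    return samples, stop

# Dummy sampler standing in for the psutil memory reading:
samples, stop = start_tracker(lambda: 42.0, interval=0.01)
time.sleep(0.1)
stop.set()
print(len(samples) > 0)
```

Plotting then only needs to overlay `events` (from the decorator) on the `samples` timeline.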
|
Access USB device info with ctypes?
Question: I am using python with `ctypes` to somehow access information about a USB
device that is connected to the PC. Is this achievable from a .dll? I try to
find things like where it's mounted, its vendor, etc.
An example:
>>> from ctypes import windll
>>> windll.kernel32
<WinDLL 'kernel32', handle 77590000 at 581b70>
But how do I find which .dll is the right one? I googled around but there
doesn't seem to be anything.
Answer: In the end I used a simpler methodology.
I use the `winreg` module that ships with Python to get access to the Windows
registry. `HKEY_LOCAL_MACHINE\SYSTEM\MountedDevices` keeps track of all
devices mounted (currently connected or not). So I get all the device
information from there and to check if the device is currently connected I
simply `os.path.exists` the storage letter of the device (ie. `G:`). The
storage letter can be obtained from the key `MountedDevices`.
Example:
# Make it work for Python2 and Python3
import sys
if sys.version_info[0] < 3:
    from _winreg import *
else:
    from winreg import *
# Get DOS devices (connected or not)
def get_dos_devices():
ddevs=[dev for dev in get_mounted_devices() if 'DosDevices' in dev[0]]
return [(d[0], regbin2str(d[1])) for d in ddevs]
# Get all mounted devices (connected or not)
def get_mounted_devices():
    devs = []
    mounts = OpenKey(HKEY_LOCAL_MACHINE, r'SYSTEM\MountedDevices')
    for i in range(QueryInfoKey(mounts)[1]):
        devs += [EnumValue(mounts, i)]
    return devs
# Decode registry binary (UTF-16-LE) to a readable string
def regbin2str(data):
    s = ''
    for i in range(0, len(data), 2):
        # bytes indexing yields int on Python 3 and str on Python 2
        byte = data[i] if isinstance(data[i], int) else ord(data[i])
        if byte < 128:
            s += chr(byte)
    return s
Then simply run:
get_dos_devices()
|
Python cannot allocate memory using multiprocessing.pool
Question: My code (part of a genetic optimization algorithm) runs a few processes in
parallel, waits for all of them to finish, reads the output, and then repeats
with a different input. Everything was working fine when I tested with 60
repetitions. Since it worked, I decided to use a more realistic number of
repetitions, 200. I received this error:
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 302, in _handle_workers
pool._maintain_pool()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 206, in _maintain_pool
self._repopulate_pool()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 199, in _repopulate_pool
w.start()
File "/usr/lib/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 120, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Here is a snippet of my code that uses pool:
def RunMany(inputs):
from multiprocessing import cpu_count, Pool
proc=inputs[0]
pool=Pool(processes = proc)
results=[]
for arg1 in inputs[1]:
for arg2 in inputs[2]:
for arg3 in inputs[3]:
results.append(pool.apply_async(RunOne, args=(arg1, arg2, arg3)))
casenum=0
datadict=dict()
for p in results:
#get results of simulation once it has finished
datadict[casenum]=p.get()
casenum+=1
return datadict
The RunOne function creates an object of a class I created, uses a
computationally heavy python package to solve a chemistry problem that takes
about 30 seconds, and returns the object with the output of the chemistry
solver.
So, my code calls RunMany in serial, and RunMany then calls RunOne in
parallel. In my testing, I've called RunOne using 10 processors (the computer
has 16) and a pool of 20 calls to RunOne. In other words,
len(arg1)*len(arg2)*len(arg3)=20. Everything worked fine when my code called
RunMany 60 times, but I ran out of memory when I called it 200 times.
Does this mean some process isn't correctly cleaning up after itself? Do I
have a memory leak? How can I determine if I have a memory leak, and how do I
find out the cause of the leak? The only item that is growing in my
200-repetition loop is a list of numbers that grows from 0 size to a length of
200. I have a dictionary of objects from a custom class I've built, but it is
capped at a length of 50 entries - each time the loop executes, it deletes an
item from the dictionary and replaces it with another item.
**Edit:** Here is a snippet of the code that calls RunMany
for run in range(nruns):
#create inputs object for RunMany using genetic methods.
#Either use starting "population" or create "child" inputs from successful previous runs
datadict = RunMany(inputs)
sumsquare=0
for i in range(len(datadictsenk)): #input condition
sumsquare+=Compare(datadict[i],Target[i]) #compare result to target
with open(os.path.join(mainpath,'Outputs','output.txt'),'a') as f:
f.write('\t'.join([str(x) for x in [inputs.name, sumsquare]])+'\n')
Objective.append(sumsquare) #add sum of squares to list, to be plotted outside of loop
population[inputs]=sumsquare #add/update the model in the "population", using the inputs object as a key, and it's objective function as the value
if len(population)>initialpopulation:
population = PopulationReduction(population) #reduce the "population" by "killing" unfit "genes"
avgtime=(datetime.datetime.now()-starttime2)//(run+1)
remaining=(nruns-run-1)*avgtime
print(' Finished '+str(run+1)+' / ' +str(nruns)+'. Elapsed: '+str(datetime.datetime.now().replace(microsecond=0)-starttime)+' Remaining: '+str(remaining)+' Finish at '+str((datetime.datetime.now()+remaining).replace(microsecond=0))+'~~~', end="\r")
Answer: As shown in the comments to my question, the answer came from Puciek.
The solution was to close the pool of processes after it is finished. I
thought that it would be closed automatically because the `results` variable
is local to `RunMany`, and would be deleted after `RunMany` completed.
However, python doesn't always work as expected.
The fixed code is:
def RunMany(inputs):
from multiprocessing import cpu_count, Pool
proc=inputs[0]
pool=Pool(processes = proc)
results=[]
for arg1 in inputs[1]:
for arg2 in inputs[2]:
for arg3 in inputs[3]:
results.append(pool.apply_async(RunOne, args=(arg1, arg2, arg3)))
#new section
pool.close()
pool.join()
#end new section
casenum=0
datadict=dict()
for p in results:
#get results of simulation once it has finished
datadict[casenum]=p.get()
casenum+=1
return datadict
|
OrderedDict won't sort within a class
Question: I have a parent class, and I want to keep a registry (in the form of a
dictionary) of all instances of its sub-classes. Easy, but I want the registry
to sort itself based on its keys, which are the arguments of the 2 sub-classes
on initialisation. This is my code in simplified form:
from collections import OrderedDict
class Parent:
_registry = OrderedDict()
def __init__(self):
# add each sub-class instance to the registry & sort the registry
self._registry.update({self._num:self})
self._registry = OrderedDict(sorted(self._registry.items()))
class Foo(Parent):
def __init__(self, number):
self._num = number
Parent.__init__(self)
# then do some stuff
class Bar(Parent):
def __init__(self, number):
self._num = number
Parent.__init__(self)
# then do some other stuff
...
But, although the registry updates itself with the new sub-class objects, it
does not sort itself.
>>> a = Foo(3)
>>> Parent._registry # check to see if a was added to the registry
OrderedDict([(3, <Foo instance at 0x00A19C0C8>)])
>>> b = Bar(1)
>>> Parent._registry # check to see if b was inserted before a in the registry
OrderedDict([(3, <Foo instance at 0x00A19C0C8>), (1, <Bar instance at 0x00A19C1C8>)])
`b` comes after `a` in the registry!
If I do it manually in the iPython console, it works:
>>> Parent._registry = OrderedDict(sorted(Parent._registry.items()))
OrderedDict([(1, <Bar instance at 0x00A19C1C8>), (3, <Foo instance at 0x00A19C0C8>)])
Why won't it sort itself? I need it to, because later on, things have to
happen to those objects in strict order of their `number` arguments.
Answer: That's because:
self._registry = OrderedDict(sorted(self._registry.items()))
creates a new attrbute on the instance, this doesn't affect
`Parent._registry`.
Replace that line with:
Parent._registry = OrderedDict(sorted(self._registry.items()))
Here `self._registry.items()` can fetch the value of `Parent._registry` but
that doesn't mean assignment to `self._registry` will affect
`Parent._registry`.
* * *
Another way to do it using `self._registry` itself:
def __init__(self):
items = sorted(self._registry.items() + [(self._num, self)]) #collect items
self._registry.clear() #clean the dict
self._registry.update(items) #now update it
|
Get list of all possible dict configs in Python
Question: I have dict that describes possible config values, e.g.
{'a':[1,2], 'b':[3,4,5]}
I want to generate list of all acceptable configs, e.g.
[{'a':1, 'b':3},
{'a':1, 'b':4},
{'a':1, 'b':5},
{'a':2, 'b':3},
{'a':2, 'b':4},
{'a':2, 'b':5}]
I've looked through the docs and SO and it certainly seems to involve
`itertools.product`, but I can't get it without a nested for loop.
Answer: You don't need a nested `for` loop here:
from itertools import product
[dict(zip(d.keys(), combo)) for combo in product(*d.values())]
`product(*d.values())` produces your required value combinations, and
`dict(zip(d.keys(), combo))` recombines each combination with the keys again.
Demo:
>>> from itertools import product
>>> d = {'a':[1,2], 'b':[3,4,5]}
>>> list(product(*d.values()))
[(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
>>> [dict(zip(d.keys(), combo)) for combo in product(*d.values())]
[{'a': 1, 'b': 3}, {'a': 1, 'b': 4}, {'a': 1, 'b': 5}, {'a': 2, 'b': 3}, {'a': 2, 'b': 4}, {'a': 2, 'b': 5}]
>>> from pprint import pprint
>>> pprint(_)
[{'a': 1, 'b': 3},
{'a': 1, 'b': 4},
{'a': 1, 'b': 5},
{'a': 2, 'b': 3},
{'a': 2, 'b': 4},
{'a': 2, 'b': 5}]
|
How do i generate secure session_id on python?
Question: I'm trying to generate a secure session_id in Python 3.
First of all, I just generate the md5 hash of a timestamp (including
microseconds) and then write the value via a cookie. But this method is
hijackable: for example, extract the session_id value from browser A's cookie,
then set that session_id value in browser B's cookie, and it works.
To be more secure, I switched from the md5 hash to decryptable encryption. That is,
session_id = decryptable_encryption(key: USER-AGENT, value: timestamp), written via a
cookie. That works for the same browser, but not across different browsers.
Encrypting with the IP address is not good either, because mobile environments
change IP addresses constantly.
How do I generate a more secure session_id? I want to know about the
mechanism of generating session ids. Please let me know.
Answer: With UUID4 you can generate a unique ID, and it is easy to use. (Convert it
to a string to assign it to a cookie.)
>>> from uuid import uuid4
>>> uuid4()
UUID('6f4a1f4d-1315-4e3e-a737-14f005f86b8c')
>>>
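A short sketch of the conversion to a string for the cookie; note that in CPython `uuid4()` draws its randomness from `os.urandom()`, which is what makes it suitable for session IDs:

```python
from uuid import uuid4

# uuid4() is backed by os.urandom() in CPython, so the value is unpredictable.
session_id = str(uuid4())
print(session_id)       # e.g. '6f4a1f4d-1315-4e3e-a737-14f005f86b8c'
print(len(session_id))  # 36
```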
|
What is the difference between spark-submit and pyspark?
Question: If I start up pyspark and then run this command:
import my_script; spark = my_script.Sparker(sc); spark.collapse('./data/')
Everything is A-ok. If, however, I try to do the same thing through the
commandline and spark-submit, I get an error:
Command: /usr/local/spark/bin/spark-submit my_script.py collapse ./data/
File "/usr/local/spark/python/pyspark/rdd.py", line 352, in func
return f(iterator)
File "/usr/local/spark/python/pyspark/rdd.py", line 1576, in combineLocally
merger.mergeValues(iterator)
File "/usr/local/spark/python/pyspark/shuffle.py", line 245, in mergeValues
for k, v in iterator:
File "/.../my_script.py", line 173, in _json_args_to_arr
js = cls._json(line)
RuntimeError: uninitialized staticmethod object
my_script:
...
if __name__ == "__main__":
args = sys.argv[1:]
if args[0] == 'collapse':
directory = args[1]
from pyspark import SparkContext
sc = SparkContext(appName="Collapse")
spark = Sparker(sc)
spark.collapse(directory)
sc.stop()
Why is this happening? What's the difference between running pyspark and
running spark-submit that would cause this divergence? And how can I make this
work in spark-submit?
EDIT: I tried running this from the bash shell by doing `pyspark my_script.py
collapse ./data/` and I got the same error. The only time when everything
works is when I am in a python shell and import the script.
Answer: spark-submit sends your code to the workers in a cluster for execution.
check: <http://spark.apache.org/docs/latest/submitting-applications.html>
|
python math.acos inverse cosine issues
Question: I need to find an angle of a triangle with only three coordinate points on a
plane. In regular trigonometry, at the end of the equation I would use:
cos = (a ** 2) - (b ** 2) - (c ** 2) / -2 * b * c
I used the `**` operator to raise to a power, and `sideA`, `sideB` and `sideC`
are the lengths of the sides of the triangle.
I am currently using `math.acos()` to find the angle, but I'm getting a math
domain error. Is `math.acos()` the right function to use for what I understand
as the inverse cosine?
Here is a excerpt of code from my program:
x = 100
y = 100
centerX = x + 50
centerY = y + 50
if event.type == MOUSEMOTION:
mousex, mousey = event.pos
sideA = math.sqrt((x - mousex)**2)+((y - mousey)**2)
sideB = math.sqrt((centerX - mousex)**2)+((centerY - mousey)**2)
sideC = math.sqrt((centerX - x)**2)+((centerY - y)**2)
cos = float(sideA**2)-(sideB**2)-(sideC**2)/(-2*(sideB*sideC))
angle = math.acos(cos)
print angle
What am I doing wrong? When I enter the numbers from my program into my
calculator I get the right angle.
Answer: Your problem here is that your code is formatted so badly, you can't see the
errors with parentheses that you're making.
## Error 1
For instance, this line:
sideA = math.sqrt((x - mousex)**2)+((y - mousey)**2)
when formatted properly, looks like this:
sideA = math.sqrt((x - mousex) ** 2) + ((y - mousey) ** 2)
and when you remove the redundant parentheses, you can see what's happening
even more clearly:
sideA = math.sqrt((x - mousex) ** 2) + (y - mousey) ** 2
You're only passing the square of _one_ of your sides to `math.sqrt()`, and
just adding the square of the second side to it. It should be:
sideA = math.sqrt((x - mousex) ** 2 + (y - mousey) ** 2)
or even better:
sideA = math.hypot(x - mousex, y - mousey)
## Error 2
Then this line:
cos = float(sideA**2)-(sideB**2)-(sideC**2)/(-2*(sideB*sideC))
has a similar problem - you're missing parentheses around those first three
terms, and you're only dividing the square of side C by 2bc. It should be:
cos = (sideA ** 2 - sideB ** 2 - sideC ** 2) / ( -2 * sideB * sideC)
## Solution
As a result of the above, you're not calculating the cosine correctly, so what
you're passing to `math.acos()` is way out of an allowable range for a cosine
(a cosine will always be in the range `-1 <= cos A <= 1`), so it's giving you
that domain error. Printing out your values would have helped see you were
getting something really strange, here.
Here's a fixed and working version of your program, modified to just set
values directly for `mousex` and `mousey`:
#!/usr/bin/env python
import math
x, y = 100, 100
centerX, centerY = x + 50, y + 50
mousex, mousey = 100,150
sideA = math.hypot(x - mousex, y - mousey);
sideB = math.hypot(centerX - mousex, centerY - mousey)
sideC = math.hypot(centerX - x, centerY - y)
cosA = (sideB ** 2 + sideC ** 2 - sideA ** 2) / (2 * sideB * sideC)
angle = math.acos(cosA)
print "sideA: %.2f, sideB: %.2f, sideC: %.2f" % (sideA, sideB, sideC)
print "cosA: %.6f" % (cosA)
print "angle: %.2f radians, %.2f degrees" % (angle, math.degrees(angle))
which outputs:
paul@horus:~/src/sandbox$ ./angle.py
sideA: 50.00, sideB: 50.00, sideC: 70.71
cosA: 0.707107
angle: 0.79 radians, 45.00 degrees
paul@horus:~/src/sandbox$
I've taken the liberty of rearranging your cosine rule calculation slightly to
eliminate the need to negate the denominator.
|
Python Find all letter chars in string and surround with text
Question: So let's say I have a string that says "m * x + b". I want to find any letter
chars, other than x, and surround them with text.
In this example, output should be "var['m'] * x + var['b']"
Answer: A tiny regular expression solves your problem:
import re
s = "m * x + b"
print re.sub("([a-wyzA-Z])", r"var['\1']", s)
Output:
var['m'] * x + var['b']
Explanation:
* `[a-wyzA-Z]` matches all characters within the brackets: a-w, y, z and A-Z (so basically every letter but x)
* `(...)` makes the found match accessible later via `\1`
* `r"var['\1']"` is the replacement, referring to the matched group `\1`
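If variable names can be longer than one letter, a word-boundary pattern with a negative lookahead is an alternative (a sketch):

```python
import re

s = "m * x + b"
# \b(?!x\b) skips exactly the name "x"; \w* allows multi-letter names.
print(re.sub(r"\b(?!x\b)([a-zA-Z_]\w*)\b", r"var['\1']", s))
# var['m'] * x + var['b']
```

This also handles inputs like `"slope * x + intercept"` without listing character ranges by hand.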
|
os.command giving Attribute error
Question: Hi, I am trying to run a C executable (*.o) using python 2.6.5 as follows
import os
import sys
file_type = os.command('./file_type.o %s.txt 2>&1' % file_name)
And, it gives the error message :
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'command'
Being a beginner this might be a very trivial question, I request for the
direction / help.
Answer: There is no function named `command` in the `os` module, hence the error. You
probably meant to call
[`os.system()`](https://docs.python.org/2/library/os.html#os.system) instead.
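One more observation: the original assigns the command's result to a variable, but `os.system()` only returns the exit status. If the goal was to capture the output (which the `2>&1` redirection suggests), `subprocess` may be closer to what's wanted. A sketch with `echo` standing in for the unavailable `./file_type.o`:

```python
import subprocess

# 'echo captured' stands in for './file_type.o %s.txt' % file_name.
# stderr=subprocess.STDOUT folds stderr into the captured output,
# playing the role of the shell's 2>&1.
output = subprocess.check_output("echo captured", shell=True,
                                 stderr=subprocess.STDOUT)
print(output.strip())  # b'captured'
```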
|
Python SQLite3 syntax error - can't figure out what's wrong
Question: I have a variation of the code below which I've re-written several times and
get the same error. `OperationalError: near ".": syntax error` which I googled
and removed a primary key and checked that all field names started with a
lower case letter and had no spaces. I'm a complete beginner with SQLite so
any help would be very much appreciated.
ETF = {'': '1', '#_of_holdings': '31', '%_in_top_10': '46.32%', '1_week': '-2.14%', '1_year': '3.86%', '200-day': '10.53%', '3_year': '39.32%', '4_week': '-6.65%', '5_year': 'n/a', 'annual_dividend_rate': '$0.18', 'annual_dividend_yield_%': '0.49%', 'assets': '13770', 'avg._vol': '4233', 'beta': '0.99', 'commission_free': 'Not Available', 'concentration': 'C', 'dividend': '$0.02', 'dividend_date': '2014-09-24', 'er': '1.25%', 'etfdb_category': 'Global Equities', 'expenses': 'C', 'inception': '2010-07-20', 'inverse': 'No',
'leveraged': 'No', 'liquidity': 'C', 'lower_bollinger': '$36.16', 'lt_cap_gains': '15%', 'name': 'WCM/BNY Mellon Focused Growth ADR ETF', 'overall': 'C', 'p/e_ratio': '23.58', 'performance': 'A-', 'price': '36.10', 'resistance_1': '$36.10', 'rsi': '33', 'scoredividend': 'C', 'st_cap_gains': '35%', 'support_1': '$36.10', 'symbol': 'AADR', 'tax_form': '1099', 'upper_bollinger': '$38.80', 'value': '152.8113', 'volatility': 'C',
'ytd': '-3.23%'}
import sqlite3
conn = sqlite3.connect('sample.sqlite')
cursor = conn.cursor()
cursor.execute('''CREATE TABLE static_data (p4w real, tax_form text, resistance_1 real, dividend_date date, \
expenses_rating text, avg_vol real, p5y real, scoredividend text, concentration text, expense_ratio real, \
inverse text, upper_bollinger real, p_e_ratio real, leveraged text, performance_rating text, pytd real, \
volatility text, price real, rsi real, lt_cap_gains real, holdings real, symbol real, overall_rating text,\
p1y real, beta real, p3y real, dividend_yield real, value real, inception date, dividend real, in_top_10 real,\
assets real, name text, st_cap_gains real, etfdb_category real, annual_dividend_rate real, support_1 real, \
lower_bollinger real, DMA200 real, liquidity text, p1w real, commission_free text)''')
conn.commit()
fieldnames = ['p4w','tax_form','resistance_1','dividend_date','expenses_rating','avg_vol',
'p5y','scoredividend','concentration','expense_ratio','inverse','upper_bollinger',
'p_e_ratio','leveraged','performance_rating','pytd','volatility','price','rsi',
'lt_cap_gains','holdings','symbol','overall_rating','p1y','beta','p3y','dividend_yield',
'value','inception','dividend','in_top_10','assets','name','st_cap_gains','etfdb_category',
'annual_dividend_rate','support_1','lower_bollinger','DMA200','liquidity','p1w',
'commission_free']
dictnames = ['4_week','tax_form','resistance_1','dividend_date','expenses','avg._vol',
'5_year','scoredividend','concentration','er','inverse','upper_bollinger','p/e_ratio',
'leveraged','performance','ytd','volatility','price','rsi','lt_cap_gains','#_of_holdings',
'symbol','overall','1_year','beta','3_year','annual_dividend_yield_%','value','inception',
'dividend','%_in_top_10','assets','name','st_cap_gains','etfdb_category',
'annual_dividend_rate','support_1','lower_bollinger','200-day','liquidity','1_week',
'commission_free']
fieldmap = zip(fieldnames,dictnames)
SQL_STRING = '''INSERT INTO static_data (%(colnames)s) values (%(dictfields)s);'''
colnames = ','.join(fieldnames)
dictnames = [":"+i for i in dictnames]
dictfields = ','.join(dictnames)
SQL_STRING_ETF = SQL_STRING % dict(colnames=colnames,dictfields=dictfields)
cursor.execute(SQL_STRING_ETF, ETF)
Full traceback is below
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2820, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-41-64b4fceab545>", line 36, in <module>
cursor.execute(SQL_STRING_ETF, ETF)
OperationalError: near ".": syntax error
`SQL_STRING_ETF` evaluates to
'INSERT INTO static_data (p4w,tax_form,resistance_1,dividend_date,expenses_rating,avg_vol,p5y,scoredividend,concentration,expense_ratio,inverse,upper_bollinger,p_e_ratio,leveraged,performance_rating,pytd,volatility,price,rsi,lt_cap_gains,holdings,symbol,overall_rating,p1y,beta,p3y,dividend_yield,value,inception,dividend,in_top_10,assets,name,st_cap_gains,etfdb_category,annual_dividend_rate,support_1,lower_bollinger,DMA200,liquidity,p1w,commission_free) values (:4_week,:tax_form,:resistance_1,:dividend_date,:expenses,:avg._vol,:5_year,:scoredividend,:concentration,:er,:inverse,:upper_bollinger,:p/e_ratio,:leveraged,:performance,:ytd,:volatility,:price,:rsi,:lt_cap_gains,:#_of_holdings,:symbol,:overall,:1_year,:beta,:3_year,:annual_dividend_yield_%,:value,:inception,:dividend,:%_in_top_10,:assets,:name,:st_cap_gains,:etfdb_category,:annual_dividend_rate,:support_1,:lower_bollinger,:200-day,:liquidity,:1_week,:commission_free);'
Answer: There are some invalid parameter names:
INSERT INTO static_data(...)
values (...,:avg._vol,:5_year,...,:p/e_ratio,...,:#_of_holdings,...,:1_year,...,
:3_year,:annual_dividend_yield_%,...,:%_in_top_10,...,:200-day,...,
:1_week,:commission_free);
Parameter names must be valid SQL identifiers, i.e., they must not begin with a
digit and cannot contain any special characters except `_`.
You already have valid names for the table field names; just use them instead
of `dictnames` (and in the `ETF` dictionary).
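If renaming every key by hand is tedious, valid parameter names can also be derived mechanically from the existing keys; a sketch (the rewritten names must then be used consistently in both the dictionary and the SQL string):

```python
import re

def to_param_name(name):
    """Turn an arbitrary dict key into a valid SQL parameter name."""
    name = re.sub(r"\W", "_", name)   # replace /, %, #, ., - etc. with _
    if name[0].isdigit():             # identifiers must not start with a digit
        name = "p" + name
    return name

print(to_param_name("avg._vol"))   # avg__vol
print(to_param_name("1_week"))     # p1_week
print(to_param_name("p/e_ratio"))  # p_e_ratio
```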
|
IF and ELSE Exception in Python3
Question: I am trying to catch an exception in two branches (the if and the else
separately).
This is what I am working on:
from math import *
from decimal import Decimal
def add(self, *args):
try:
if all(isinstance(n, int) for n in args):
print(sum(int(n) for n in args))
else:
print(fsum(Decimal(n) for n in args))
except (NameError, SyntaxError) as e:
print("Error! {}".format(e))
def main():
add(a)
if __name__ == '__main__': main()
Both the if and the else can raise two exceptions, `NameError` and `SyntaxError`; if I
call `add(a)`, it gives me `NameError` as the exception, but the `except` is
not catching the error.
How should I catch the exceptions for both of them separately?
Answer: From what I understand, you can catch them in separate `except` clauses:
except NameError as e:
    print("A NameError occurred")
    print("Error! {}".format(e))
except SyntaxError as f:
    print("A SyntaxError occurred")
    print("Error! {}".format(f))
Note, though, that in your example the `NameError` is raised while evaluating
the argument `a` at the call site, before the body of `add` (and its `try`)
ever runs, so an `except` inside `add` cannot catch it; you would have to wrap
the call itself in a `try` block.
|
How to avoid nested "with" statements when working with multiple files in Python
Question: When working with multiple files in Python code can get ugly when using the
recommended style:
with open("foo.txt") as foo:
    with open("bar.txt", "w") as bar:
        with open("baz.txt", "w") as baz:
            # Read from foo, write different output to bar and baz
That's three indentation levels just for working with files! The alternative
would be this
foo = open("foo.txt")
bar = open("bar.txt", "w")
baz = open("baz.txt", "w")
# Read from foo, write different output to bar and baz
foo.close()
bar.close()
baz.close()
I have a feeling that either of these examples could be refactored to
something more elegant. Any examples?
Answer: Python 2.7 and up let you specify multiple context managers in one `with`
statement:
with open("foo.txt") as foo, open("bar.txt", "w") as bar, open("baz.txt", "w") as baz:
    # Read from foo, write different output to bar and baz
The line _does_ get long, and you cannot use parentheses to keep that below 80
characters. You can use `\` backslash continuations however:
with open("foo.txt") as foo,\
     open("bar.txt", "w") as bar,\
     open("baz.txt", "w") as baz:
    # Read from foo, write different output to bar and baz
Another option would be to use [`contextlib.ExitStack()` context
manager](https://docs.python.org/3/library/contextlib.html#contextlib.ExitStack)
(only in Python 3.3 and up):
from contextlib import ExitStack
with ExitStack() as stack:
foo = stack.enter_context(open("foo.txt"))
bar = stack.enter_context(open("bar.txt"))
baz = stack.enter_context(open("baz.txt"))
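`ExitStack` also covers the case where the number of files isn't known in advance; a sketch using throwaway temporary files:

```python
import os
import tempfile
from contextlib import ExitStack

# Create a few throwaway files to open:
paths = []
for _ in range(3):
    handle, path = tempfile.mkstemp()
    os.close(handle)
    paths.append(path)

with ExitStack() as stack:
    files = [stack.enter_context(open(p)) for p in paths]
    # ... read from / write to the files here ...

print(all(f.closed for f in files))  # True
for p in paths:
    os.remove(p)
```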
|
How to parse Date(928142400000+0200)?
Question: I have JSON response object with string representing date and time:
"event":{
"type":"Type",
"date-time":"\/Date(928142400000+0200)\/",
},
I am not sure:
* what format is that
* how can I parse it in python app
* how can I convert python date into this format
Any suggestions?
Answer: `928142400000` is the time in milliseconds since the UNIX epoch, `+0200` is
the timezone.
With the [`dateutil` library](https://labix.org/python-
dateutil#head-8bf499d888b70bc300c6c8820dc123326197c00f) or
[`datetime.timezone()`
objects](https://docs.python.org/3/library/datetime.html#timezone-objects) you
can model the timezone offset, the timestamp itself is parsable with
[`datetime.datetime.fromtimestamp()`](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromtimestamp),
provided you divide the value by 1000.0:
import datetime
import re
timestamp_parse = re.compile(r'Date\((\d+)([+-]\d{4})\)')
timestamp, offset = timestamp_parse.search(datetime_value).groups()
tzoffset = datetime.timedelta(hours=int(offset[1:3]), minutes=int(offset[3:]))
if offset[0] == '-':
tzoffset *= -1
tzoffset = datetime.timezone(tzoffset)
dt = datetime.datetime.fromtimestamp(int(timestamp) / 1000.0, tz=tzoffset)
The [`dateutil.tz.tzoffset()` object](https://labix.org/python-
dateutil#head-8bf499d888b70bc300c6c8820dc123326197c00f) version is similar:
import datetime
import re
import dateutil.tz
timestamp_parse = re.compile(r'Date\((\d+)([+-]\d{4})\)')
timestamp, offset = timestamp_parse.search(datetime_value).groups()
tzoffset = int(offset[1:3]) * 3600 + int(offset[3:]) * 60
if offset[0] == '-':
tzoffset *= -1
tzoffset = dateutil.tz.tzoffset(None, tzoffset)
dt = datetime.datetime.fromtimestamp(int(timestamp) / 1000.0, tz=tzoffset)
Demo:
>>> import datetime
>>> import re
>>> datetime_value = "/Date(928142400000+0200)/"
>>> timestamp_parse = re.compile(r'Date\((\d+)([+-]\d{4})\)')
>>> timestamp, offset = timestamp_parse.search(datetime_value).groups()
>>> tzoffset = datetime.timedelta(hours=int(offset[1:3]), minutes=int(offset[3:]))
>>> if offset[0] == '-':
... tzoffset *= -1
...
>>> tzoffset = datetime.timezone(tzoffset)
>>> datetime.datetime.fromtimestamp(int(timestamp) / 1000.0, tz=tzoffset)
datetime.datetime(1999, 5, 31, 11, 20, tzinfo=datetime.timezone(datetime.timedelta(0, 7200)))
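The pieces can be wrapped into a pair of round-trip helpers. Note that passing the timezone directly into `fromtimestamp()` keeps the result in the encoded offset rather than the machine's local time (a sketch):

```python
import datetime
import re

TS_RE = re.compile(r"Date\((\d+)([+-]\d{4})\)")

def parse_ms_date(value):
    """'/Date(ms+HHMM)/' -> timezone-aware datetime."""
    ts, offset = TS_RE.search(value).groups()
    delta = datetime.timedelta(hours=int(offset[1:3]), minutes=int(offset[3:]))
    if offset[0] == "-":
        delta *= -1
    return datetime.datetime.fromtimestamp(int(ts) / 1000.0,
                                           tz=datetime.timezone(delta))

def format_ms_date(dt):
    """Timezone-aware datetime -> '/Date(ms+HHMM)/'."""
    minutes = int(dt.utcoffset().total_seconds()) // 60
    sign, minutes = ("+", minutes) if minutes >= 0 else ("-", -minutes)
    return "/Date(%d%s%02d%02d)/" % (dt.timestamp() * 1000, sign,
                                     minutes // 60, minutes % 60)

s = "/Date(928142400000+0200)/"
print(parse_ms_date(s).isoformat())           # 1999-05-31T11:20:00+02:00
print(format_ms_date(parse_ms_date(s)) == s)  # True
```

`format_ms_date()` also answers the last part of the question: converting a Python datetime back into this wire format.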
|
Why is the assert not getting imported here
Question: I have the following unit test case.
import unittest
from webPageTestUtils import WebPageTestProcessor
class webPageUtilsTest(unittest.TestCase):
def setUp(self):
self.webPageTestProcessor = WebPageTestProcessor()
def test_submitTest(self):
url = None;
assertEqual(self.webPageTestProcessor.submitTest(url),None)
if __name__ == '__main__':
unittest.main()
Upon execution I get an error saying.
`NameError: global name 'assertEqual' is not defined`
I have subclassed unittest.TestCase, so why is Python not recognizing
assertEqual?
Answer: It's a method on `self`:
self.assertEqual(self.webPageTestProcessor.submitTest(url), None)
|
Calling Cython C functions from Python
Question: I have a [`Cython`](http://cython.org/) file called `foo.pyx` containing the
following functions:
def add_one(int n):
cdef int m = n + 1
return m
cdef int c_add_one(int n):
return n + 1
I build this `pyx` file using `cython -a foo.pyx` and can then do:
>>> import foo
>>> foo.add_one(5)
6
>>> foo.c_add_one(5)
AttributeError: 'module' object has no attribute 'c_add_one'
So it looks like I can't call `c_add_one` from python. What are the advantages
of declaring a function using `cdef`?
Answer: 1. **Speed** : Cython and C can call `cdef` and `cpdef` functions much faster than regular functions. Regular `def` functions are full-blown Python objects. They require reference counting, the GIL, etc.
2. **Encapsulation** : If a function is declared `cdef`, it will not be exposed to Python users. This is beneficial if the function was never intended for public consumption (e.g. because it doesn't check its parameters, or relates to an implementation detail which might change in the future). Users can still circumvent this via C/Cython, of course, but doing so is more of a hassle than most will bother with. In this way, it's similar to double-underscore name mangling.
|
python global parameters file
Question: I have a python script that calls multiple functions from other scripts. I
call the script giving it as a parameter a settings file with the format:
var1 = XXX
var2 = YYY
...
I would like to make those variables global so they can be used by every
function I call, but without having to pass them explicitely. What is the best
way to do this?
Thank you.
Answer: The simple way is using a settings (e.g. `settings.py`) module as suggested.
You simply define the variables in the module and they will be available to
any module that imports the `settings` module via `settings.var_name`.
Example:
file `settings.py`:
var_name = "value"
file `m.py`:
#!/usr/bin/env python
import settings
print settings.var_name
A preferred way is using the environment (child process inherits the
environment) to pass it in a global context. You can use
[os.environ](https://docs.python.org/2/library/os.html#process-parameters) for
this purpose.
An example of how to do it:
file `a.py`:
#!/usr/bin/env python
import os
import b
os.environ['var_name'] = 'value'
b.echo()
file `b.py`:
import os
def echo():
print os.environ['var_name']
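A quick way to see that a child process really does inherit the environment (a sketch; `VAR_NAME` is just an example name, not from the question):

```python
import os
import subprocess
import sys

# Set a variable in this (parent) process's environment.
os.environ['VAR_NAME'] = 'value'

# Any child process spawned afterwards inherits that environment.
out = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ['VAR_NAME'])"])
print(out.decode().strip())
```

The child reads the variable back without it ever being passed explicitly.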
|
python termination error when ctypes dll calls printf
Question: I am developing a python system with some core dlls accessed via ctypes. I
have reduced the problem to this condition: execute a module that loads (no
need to call) two dlls, one of which calls printf -- this error will occur on
exit.
> This application has requested the Runtime to terminate it in an unusual
> way. Please contact the application's support team for more information.
My environment:
\- Windows 7, SP1 \- Python 2.7.8 \- MinGW v 3.20
This test case is adapted from a tutorial on writing dlls with MinGW:
/* add_core.c */
__declspec(dllexport) int sum(int a, int b) {
return a + b;
}
/* sub_core.c */
#include <stdio.h>
__declspec(dllexport) int sum(int a, int b) {
printf("Hello from sub_core.c");
return a - b;
}
prog.py
import ctypes
add_core_dll = ctypes.cdll.LoadLibrary('add_core.dll')
add_core_dll = ctypes.cdll.LoadLibrary('sub_core.dll')
> make
gcc -Wall -O3 -g -ansi -c add_core.c -o add_core.o
gcc -g -L. -ansi -shared add_core.o -o add_core.dll
gcc -Wall -O3 -g -ansi -c sub_core.c -o sub_core.o
gcc -g -L. -ansi -shared sub_core.o -o sub_core.dll
>python prog.py
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
and pops up a message dialog to the same effect: "python.exe has stopped
working ...".
Note that the programs execute as expected and produce normal output. This
error at termination is just a big nuisance I'd like to be rid of.
Answer: The same happens for: Windows 7 Enterprise SP1, Python 2.7.11,
mingw32-g++.exe 5.3.0.
|
What is wrong with my python code?
Question: I'm trying to make a guessing game in python.
from random import randint
print "\nI'm thinking of a number, you have to guess what it is.\n"
num = randint(1,100)
guess = 0
while guess != num:
guess = raw_input("Guess the number \n")
if guess < num:
print "Guess higher next time \n"
elif guess > num:
print "Guess lower next time \n"
elif guess == num:
print "That's correct \n"
The problem is no matter what number I enter, I keep getting the "guess a
lower number" each time. So what's wrong?
Answer: With this:
guess = raw_input("Guess the number \n")
`guess` is a string, not a number.
You can do this:
guess = int(raw_input("Guess the number \n"))
to get an int.
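Note that `int()` raises `ValueError` if the player types something that isn't a number, so a small guard helps (a sketch; the helper name is mine, not part of the question):

```python
def parse_guess(text):
    # Return the guess as an int, or None if the input isn't a number.
    try:
        return int(text)
    except ValueError:
        return None
```

You can keep prompting until `parse_guess` returns something other than `None`.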
|
Offset key WORD for cipher in Python
Question: So I am half way through a Computing GCSE controlled assessment at school.
I have completed task one of my controlled assessment which was to create a
Caeser Cipher which encrypts a message by offsetting each letter in the
message by a key number which the user must enter.
Task two is to do the same, but using a key WORD.
This means that the keyword string must be multiplied until it is the same
length as the message which the user enters(from part one). For example:
keyword=gcse
usersmessage= Hello, my name is Jake
The keyword must be multiplied so it looks something like this compared to the
message:
**gcsegcsegcsegcsegcsegc**
**Hello, my name is Jake**
Now the above message and key word are the same length (22 characters,
although they don't look it)
Next, each ascii value of each character must be added together, meaning the
first letter of each string is added together, then the second, then third
etc..
Until each letter of the message now has a new value, once the program has
carried out this process it will need to print the **encrypted** message onto
the screen for the user to see. Can anybody help with this? It's hard to
explain here but hopefully somebody will know what I'm on about :) The main
issue I am having is that I cannot multiply a string by a number with a
decimal place, are there any ways around this? If nobody can answer the
question please tell me how I could carry this out, thanks.
Answer: Don't use multiplying here, use
[`itertools.cycle`](https://docs.python.org/2/library/itertools.html#itertools.cycle)
together with iterating over the string:
>>> from itertools import cycle
>>> word = cycle('gcse')
>>> message = 'Hello, my name is Jake'
>>> ''.join([next(word) for c in message])
'gcsegcsegcsegcsegcsegc'
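Combining `cycle` with the ASCII-addition step the task describes gives a full encryption sketch (the `% 128` wrap-around is my assumption to keep results in character range, since the task doesn't specify it):

```python
from itertools import cycle

def keyword_encrypt(message, keyword):
    # Pair each message character with the repeating keyword characters,
    # add their ordinal (ASCII) values, and wrap with % 128 to stay in range.
    return ''.join(chr((ord(m) + ord(k)) % 128)
                   for m, k in zip(message, cycle(keyword)))
```

`zip` stops at the shorter iterable, so the keyword never needs to be multiplied out to the message length, which sidesteps the decimal-multiplication problem entirely.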
|
Python ctypes identifying dll function result
Question: I have some functions according to a DLL's documentation (there are more):
# there are 2 lines in the other example like this
# define CALLTYPE __stdcall
# define pLSenvL void*
pLSenvL CALLTYPE LScreateEnvL()
int LSopenLogFileL( pLSenvL pL, char *pcLogFile)
int CALLTYPE LSsetPointerL(pLSenvL pL, double* pdPointer, int* pnPointersNow)
int LSexecuteScriptL( pLSenvL pL, char* pcScript)
int LScloseLogFileL( pLSenvL pL)
void LSclearPointers( pLSenvL pL)
What I've done so far is this:
from ctypes import *
api = CDLL("PATH_TO_DLL")
pLEnv = api.LScreateEnvL()
script = "STRING FULL OF COMMANDS"
print api.LSexecuteScriptL(pLEnv, script)
and it works, but now I want to replicate an example I found:
void CSDlg::OnSolve()
{
int nError, nPointersNow;
CString csScript, cs;
double dNeeds[1], dStart[1];
dNeeds[ 0] = (double) nNeedsM;
pLSenvL pL;
pL = LScreateEnvL();
nError = LSopenLogFileL( pL, "log_file.log");
nError = LSsetPointerL( pL, dNeeds, &nPointersNow);
nError = LSsetPointerL( pL, dStart, &nPointersNow);
csScript = "SET ECHOIN 1\n";
// Run the script
nError = LSexecuteScriptL( pL, (LPCTSTR) csScript);
// Close the log file
LScloseLogFileL( pL);
csStartM.Format( "%d", (int) dStart[0]);
}
So far I've done this:
nError = c_int
nPointersNow = c_int
dNeeds = c_double()
#I'm just setting a random value
dNeeds = [c_double(10)]
pLEnv = api.LScreateEnvL()
nError = api.LSopenLogFileL(pLEnv, "log_file.log")
# here I got
# Procedure called with not enough arguments (8 bytes missing) or wrong calling convention
nError = api.LSsetPointerL(pLEnv, byref(dNeeds), nPointersNow)
# and here I got
# byref argument must be a ctypes instance, not 'list'
So I've searched and I had to do something like this
#now here comes my problem
#according to documentation it is
# int CALLTYPE LSsetPointerL(pLSenvL pL, double* pdPointer, int* pnPointersNow)
api.LSsetPointerL.restype = c_int
api.LSsetPointerL.argtypes = [ ¿?, c_double, c_int]
* What should go as first element in that array of argtypes??
* Is there something I have to worry about the CALLTYPE definition?
Thanks in advance
Answer: "`Procedure called with not enough arguments (8 bytes missing) or wrong
calling convention`" refers to how you called the DLL. You used CDLL, but it
says "`# define CALLTYPE __stdcall`", so you should used WinDLL instead ([see
ctypes docs](https://docs.python.org/2/library/ctypes.html)). This sometimes
works regardless but seems to be failing on the more complicated calls.
You are not instantiating `nPointersNow`; you probably mean `nPointersNow = c_int()`.
You can't set `dNeeds` to be a list (and `argtypes` takes ctypes *types*, not
instances). Maybe you mean:
dNeeds = c_double(10)
nError = api.LSsetPointerL(pLEnv, byref(dNeeds), byref(nPointersNow))
And the type `plSenvL` should be defined somewhere in your docs.
`LScreateEnvL` returns that type, as shown here:
pLSenvL CALLTYPE LScreateEnvL()
pLSenvL __stdcall LScreateEnvL() (<- what the above line really means)
so you need to know what it is. We can guess that it's a pointer (an integer)
to something called `LSenvL`, which you will possibly need to make a structure
for. Not hard, but you need to know how it's defined. That said, you may be
able to avoid it since the first function returns it. So if you don't need to
use it directly you could try and dodgy it like this:
LScreateEnvL = api.LScreateEnvL
LScreateEnvL.restype = c_int # just assume it's an int
LSopenLogFileL = api.LSopenLogFileL
LSopenLogFileL.restype = c_int
LSopenLogFileL.argtypes = (c_int, # again just assume
c_char_p)
LSsetPointerL = api.LSsetPointerL
LSsetPointerL.restype = c_int
LSsetPointerL.argtypes = (c_int, # assume int
POINTER(c_double),
POINTER(c_int))
pLSenvL = LScreateEnvL()
dPointer = c_double()
nPointersNow = c_int()
nError = LSsetPointerL(pLSenvL,
byref(dPointer),
byref(nPointersNow))
|
Server side execution (execution back-end image processing )
Question: I tried to build a web app with the Python Django module. The task is: if anyone
goes to the link, it will process an image with a threshold and save it to a server folder.
My code is(views.py):-
from django.http import HttpResponse
import numpy as np
import cv2
import Image
from PIL import Image
import tesseract
import ctypes
import os
import ImageDraw
def index(request):
im_gray = cv2.imread('Rimage.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)
(thresh, im_bw) = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
thresh = 100
im_bw = cv2.threshold(im_gray, thresh, 255, cv2.THRESH_BINARY)[1]
cv2.imwrite('bw_image.png', im_bw)
return HttpResponse("Hello, world. You're at the polls index.")
this code is not working error :-
Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
98. resolver_match = resolver.resolve(request.path_info)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in resolve
343. for pattern in self.url_patterns:
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in url_patterns
372. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py" in urlconf_module
366. self._urlconf_module = import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py" in import_module
37. __import__(name)
File "/home/meraj/Desktop/project/web/mysite/mysite/urls.py" in <module>
5. url(r'^polls/', include('polls.urls')),
File "/usr/local/lib/python2.7/dist-packages/django/conf/urls/__init__.py" in include
28. urlconf_module = import_module(urlconf_module)
File "/usr/lib/python2.7/importlib/__init__.py" in import_module
37. __import__(name)
File "/home/meraj/Desktop/project/web/mysite/polls/urls.py" in <module>
3. from polls import views
Exception Type: IndentationError at /polls/
Exception Value: unindent does not match any outer indentation level (views.py, line 20)
There is nothing wrong with my code as it is executing locally, but when i
tried on django framework it is not working. Am I going on correct direction ?
I am new to python web framework so I don't have any idea about this. thanks
in advance
Answer: > "locally versus django framework".
Does this mean you are editing on one machine and copying the file to another?
The problem is an IndentationError, according to your output, so perhaps this
is a space/tab or linebreak issue (one Windows/one Linux?). If the former, try
replacing all tabs with spaces in your editor and ensure the indentation is
still correct afterwards. If the latter, try running dos2unix on the file.
|
How to run f2py in macosx
Question: Hi I am trying to use f2py in macosx.
I have a Homebrew Python installation and I have installed numpy using pip.
write in a python script `import numpy.f2py`it works well. How can I solve
this problem runing f2py directly from terminal? Thank you!
Answer: I had a similar problem (installed numpy with pip on macosx, but got f2py not
found). In my case f2py was indeed in a location on my $PATH
(/Users/_username_ /Library/Python/2.7/bin), but had no execute permissions
set. Once that was fixed all was fine.
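To demonstrate the fix without touching a real install, here is a sketch using a stand-in script (all paths are stand-ins; substitute the directory where pip actually placed f2py, e.g. `~/Library/Python/2.7/bin`):

```shell
# Create a stand-in "f2py" script without execute permission.
mkdir -p /tmp/f2py_demo
printf '#!/bin/sh\necho f2py-stub\n' > /tmp/f2py_demo/f2py
chmod -x /tmp/f2py_demo/f2py

# The fix: add execute permission and put the directory on PATH.
chmod +x /tmp/f2py_demo/f2py
export PATH="/tmp/f2py_demo:$PATH"
f2py
```

With the real path in place of `/tmp/f2py_demo`, `f2py` becomes callable directly from the terminal.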
|
Precarious Popen Piping
Question: I want to use `subprocess.Popen` to run a process, with the following
requirements.
1. I want to pipe the `stdout` and `stderr` back to the caller of `Popen` as the process runs.
2. I want to kill the process after `timeout` seconds if it is still running.
I have come to the conclusion that a flaw in the `subprocess` API means it
cannot fulfill these two requirements at the same time. Consider the following
toy programs:
# chatty.py
while True:
print 'Hi'
# silence.py
while True:
pass
# caller.py
import subprocess
import time
def go(command, timeout=60):
proc = subprocess.Popen(command, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
start = time.time()
while proc.poll() is None:
print proc.stdout.read(1024) # <----- Line of interest
if time.time() - start >= timeout:
proc.kill()
break
else:
time.sleep(1)
Consider the marked line above.
* If it is included, `go('python silence.py')` will hang forever - not for just 60 seconds - because `read` is a blocking call until either `1024` bytes or end of stream, and neither ever comes.
* If it is commented, `go('python chatty.py')` will be printing out `'Hi'` over and over, but how can it be streamed back as it is generated? `proc.communicate()` blocks until end of stream.
I would be happy with a solution that replaces requirement (1) above with "In
the case where a timeout did not occur, I want to get `stdout` and `stderr`
once the algorithm finishes." Even this has been problematic. My
implementation attempt is below.
# speech.py
for i in xrange(0, 10000):
print 'Hi'
# caller2.py
import subprocess
import time
def go2(command, timeout=60):
proc = subprocess.Popen(command, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
start = time.time()
while True:
if proc.poll() is not None:
print proc.communicate()
break
elif time.time() - start >= timeout:
proc.kill()
break
else:
time.sleep(1)
But even this still has problems. Even though `python speech.py` runs in just
a couple seconds, `go2('python speech.py')` takes the full 60 seconds. This is
because the call to `print 'Hi'` in `speech.py` is blocking until
`proc.communicate()` is called when the process is killed. Since
`proc.stdout.read` had the problem demonstrated before with `silence.py`, I'm
really at a loss for how to get this working.
How can I get both the `stdout` and `stderr` and the timeout behavior?
Answer: The trick is to setup a side-band timer to kill the process. I wrote up a
program half way between chatty and silent:
import time
import sys
for i in range(10,0,-1):
print i
time.sleep(1)
And then a program to kill it early:
import subprocess as subp
import threading
import signal
proc = subp.Popen(['python', 'longtime.py'], stdout=subp.PIPE,
stderr=subp.PIPE)
timer = threading.Timer(3, lambda proc: proc.send_signal(signal.SIGINT),
args=(proc,))
timer.start()
out, err = proc.communicate()
timer.cancel()
print proc.returncode
print out
print err
and it output:
$ python killer.py
1
10
9
8
Traceback (most recent call last):
File "longtime.py", line 6, in <module>
time.sleep(1)
KeyboardInterrupt
Your timer could be made fancier, like trying increasingly bad signals til the
process completes, but you get the idea.
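For completeness, newer Python (3.3+) lets `communicate()` itself take a timeout, raising `TimeoutExpired` so you can kill the process and still collect whatever output it produced (a sketch using a trivial child process):

```python
import subprocess
import sys

proc = subprocess.Popen([sys.executable, '-c', 'print("Hi")'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    # Wait at most 60 seconds for the process to finish.
    out, err = proc.communicate(timeout=60)
except subprocess.TimeoutExpired:
    # Process ran too long: kill it, then drain its pipes.
    proc.kill()
    out, err = proc.communicate()
```

This avoids the side-band timer thread entirely when a recent Python is available.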
|
Python - Calling lines from a text file to compile a pattern search of a second file
Question: Forgive me if this is asked and answered. If so, chalk it up to my being new
to programming and not knowing enough to search properly.
I have a need to read in a file containing a series of several hundred
phrases, such as names or email addresses, one per line, to be used as part of
a compiled search term - pattern = re.search(name). The 'pattern' variable
will be used to search another file of over 5 million lines to identify and
extract select fields from relevant lines.
The text of the name file being read in for variable would be in the format
of:
John\n
Bill\n
[email protected]\n
Sally\n
So far I have the below code which does not error out, but also does not
process and close out. If I pass the names manually using slightly different
code with a sys.argv[1], everything works fine. The code (which should be) in
bold is the area I am having problems with - starting at "lines = open...."
import sys
import re
import csv
import os
searchdata = open("reallybigfile", "r")
Certfile = csv.writer(open('Certfile.csv', 'ab'), delimiter=',')
**lines = open("Filewithnames.txt", 'r')
while True:
for line in lines:
line.rstrip('\n')
lines.seek(0)
for nam in lines:
pat = re.compile(nam)**
for f in searchdata.readlines():
if pat.search(f):
fields = f.strip().split(',')
Certfile.writerow([nam, fields[3], fields[4]])
lines.close()
The code at the bottom (starting "for f in searchdata.readlines():") locates,
extracts and writes the fields fine. I have been unable to find a way to read
in the Filewithnames.txt file and have it use each line. It either hangs, as
with this code, or it reads all lines of the file to the last line and returns
data only for the last line, e.g. 'Sally'.
Thanks in advance.
Answer: `while True` is an infinite loop, and there is no way to break out of it that
I can see. That will definitely cause the program to continue to run forever
and not throw an error.
Remove the `while True` line and de-indent that loop's code, and see what
happens.
EDIT:
I have resolved a few issues, as commented, but I will leave you to figure out
the precise regex you need to accomplish your goal.
import sys
import re
import csv
import os
searchdata = open("c:\\dev\\in\\1.txt", "r")
# Certfile = csv.writer(open('c:\\dev\\Certfile.csv', 'ab'), delimiter=',') #moved to later to ensure the file will be closed
lines = open("c:\\dev\\in\\2.txt", 'r')
pats = [] # An array of compiled patterns
# Add additional conditioning/escaping of input here.
for nam in lines:
    pats.append(re.compile(nam.strip())) # strip the trailing newline before compiling
with open('c:\\dev\\Certfile.csv', 'ab') as outfile: #This line opens the file
Certfile = csv.writer(outfile, delimiter=',') #This line interprets the output into CSV
for f in searchdata.readlines():
for pat in pats: #A loop for processing all of the patterns
if pat.search(f) is not None:
fields = f.strip().split(',')
Certfile.writerow([pat.pattern, fields[3], fields[4]])
lines.close()
searchdata.close()
First of all, make sure to close all the files, including your output file. As
stated before, the `while True` loop was causing you to run infinitely. You
need a regex or set of regexes to cover all of your possible "names." The code
is simpler to do a set of regexes, so that is what I have done here. This may
not be the most efficient. This includes a loop for processing all of the
patterns.
I believe you need additional parsing of the input file to give you clean
regular expressions. I have left some space for you to do that.
Hope that helps!
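For the conditioning/escaping step, one option (a sketch with stand-in names) is to escape each name with `re.escape` and join them into a single alternation, so only one pattern runs per line of the big file:

```python
import re

# Stand-ins for the lines of Filewithnames.txt.
names = ['John', 'Bill', 'bob@example.com', 'Sally']

# re.escape neutralises regex metacharacters such as the '.' in an email
# address, and '|' joins the names into one alternation pattern.
pattern = re.compile('|'.join(re.escape(name) for name in names))

match = pattern.search('mail from bob@example.com,x,y,field3,field4')
```

`match.group(0)` then tells you which name was found, replacing the inner loop over patterns.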
|
Debugging issues in importing modules in Python
Question: Is importing a specific function from a module faster than importing
the whole module?
That is, is **from module import x** faster than **import module**?
Answer: No, it shouldn't be faster, and that shouldn't matter anyway: importing things
is not usually considered a performance-critical operation, so you can expect
it to be fairly slow compared to other things you can do in Python. If you
require importing to be very fast, probably something is wrong with your
design.
|
Command 'makemessages' error
Question: I newbe in django and python. My project created under PyTools for Visual
Studio 2013. For localization I create 'locale' folder on manage.py level. And
I try run the following command:
.\ClarisPyEnv\Scripts\python.exe manage.py makemessages -l he
And I got the error:
Exception in thread Thread-2377:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 921, in _bootstrap_inner
self.run()
File "C:\Python34\lib\threading.py", line 869, in run
self._target(*self._args, **self._kwargs)
File "C:\Python34\lib\subprocess.py", line 1170, in _readerthread
buffer.append(fh.read())
File "C:\Python34\lib\encodings\cp1255.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 766: char
acter maps to <undefined>
Traceback (most recent call last):
File "manage.py", line 17, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\__init__.py", line 385, in execute_from_co
mmand_line
utility.execute()
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 338, in execute
output = self.handle(*args, **options)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 533, in handle
return self.handle_noargs(**options)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\commands\makemessages.py", line 290, in ha
ndle_noargs
self.write_po_file(potfile, locale)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\commands\makemessages.py", line 402, in wr
ite_po_file
msgs, errors, status = popen_wrapper(args)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\utils.py", line 25, in popen_wrapper
output, errors = p.communicate()
File "C:\Python34\lib\subprocess.py", line 959, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "C:\Python34\lib\subprocess.py", line 1234, in _communicate
stdout = stdout[0]
IndexError: list index out of range
What this means ? Where a problem? help me please!
Thank Alex
Answer: You don't need to specify unicode in Python 3; Unicode is the default.
`from django.utils.translation import ugettext as _`
`_('חתול')`
is enough for the translation and encoding to work.
|
Treebank-style tree parser python
Question: Recently i have been trying to parse syntactic trees returned by the [stanford
parser](http://nlp.stanford.edu/software/lex-parser.shtml) in python. I have
been trying to do that with nltk `tree =
Tree.parse(result['sentences'][0]['parsetree'])` and the parsing succeeds but
the tree class of nltk offers very few processing methods. I will need methods
like `tree.isPrePreTerminal()` which are not included from what i understand.
I found [this](http://nlp.stanford.edu/~mcclosky/software/PyInputTree/)
alternative but it seems that it doesnt like 64bit architectures and it gives
me this error `ImportError: InputTree/_InputTree.so: wrong ELF class:
ELFCLASS32` even though i compiled with the `-m64` flag. I have been looking
into this the last 2 days, if you know a way to make the above module to work
with 64bit systems or an alternate library or at least a good nltk.tree
documentation so that i implement the methods myself please let me know.
Answer: Unfortunately, `PyInputTree` is no longer maintained. However, the `InputTree`
class from the `Charniak` parser lives on in wrapped form as the [`Tree` class
in BLLIP Parser](https://github.com/BLLIP/bllip-parser/blob/master/README-
python.rst#the-tree-class). It doesn't implement `isPrePreTerminal()` but
here's one way to do it:
import bllipparser
def is_prepreterminal(node):
"""Returns True iff all children of this node are preterminals."""
subtrees = node.subtrees()
return len(subtrees) > 0 and \
all(subtree.is_preterminal() for subtree in subtrees)
# testing code
tree = bllipparser.Tree('(S1 (S (NP (DT This)) (VP (VBZ is) (NP (DT a) (ADJP (RB fairly) (JJ simple)) (NN parse) (NN tree))) (. .)))')
for subtree in tree.all_subtrees():
print subtree, is_prepreterminal(subtree)
See [`bllipparser`](https://pypi.python.org/pypi/bllipparser) on PyPI for more
information.
|
To unzip a file
Question: I want to unzip a file of type *.sec.gz, which I assumed is a zipfile, but I'm
getting BadZipfile. Can someone guide me to resolve this? The file present in the folder
is of type *.sec. Thanks in advance.
import zipfile
def unzip(path):
zfile = zipfile.ZipFile(path)
for name in zfile.namelist():
(dirname, filename) = os.path.split(name)
if filename == '':
# directory
if not os.path.exists(dirname):
os.mkdir(dirname)
else:
# file
fd = open(name, 'w')
fd.write(zfile.read(name))
fd.close()
zfile.close()
k=unzip('C://test//08October2014//DATA_INTV_NEW//Oct0814//1.sec.gz')
Output:
BadZipfile Traceback (most recent call last)
<ipython-input-7-5134b63e752e> in <module>()
27 zfile.close()
28
---> 29 k=unzip('C://test//08October2014//DATA_INTV_NEW//Oct0814//1.sec.gz')
<ipython-input-7-5134b63e752e> in unzip(path)
13
14 def unzip(path):
---> 15 zfile = zipfile.ZipFile(path)
16 for name in zfile.namelist():
17 (dirname, filename) = os.path.split(name)
C:\Python27\Lib\zipfile.pyc in __init__(self, file, mode, compression, allowZip64)
768 try:
769 if key == 'r':
--> 770 self._RealGetContents()
771 elif key == 'w':
772 # set the modified flag so central directory gets written
C:\Python27\Lib\zipfile.pyc in _RealGetContents(self)
809 raise BadZipfile("File is not a zip file")
810 if not endrec:
--> 811 raise BadZipfile, "File is not a zip file"
812 if self.debug > 1:
813 print endrec
BadZipfile: File is not a zip file
Answer: The error message is completely accurate: that is not a zip file. It is a gzip
file, which is something completely different. You should use the [gzip
module](https://docs.python.org/2/library/gzip.html).
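A minimal sketch of decompressing such a file with the gzip module (the paths are stand-ins, and the sample archive is created first so the snippet is self-contained):

```python
import gzip
import shutil

# Create a sample .gz file as a stand-in for 1.sec.gz.
with gzip.open('1.sec.gz', 'wb') as f:
    f.write(b'tick data')

# Decompress 1.sec.gz to 1.sec. A gzip file holds a single compressed
# stream, so there is no member list to iterate as with zipfile.
with gzip.open('1.sec.gz', 'rb') as f_in, open('1.sec', 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)
```

Unlike a zip archive, there are no directories or filenames inside, so the whole `namelist()` loop from the question goes away.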
|
Opening multiple excel files using Python
Question: I have multiple excel files in a directory and would like to open those files
at the same time to perform write operations (to, for example, write "Hi" in the
first row of all the excel files). Is there any way to do so in Python?
Answer: You can use:
import csv
with open('A.csv', 'r') as a, open('B.csv', 'r') as b:  # 'r' to read; 'w' would truncate the files
    a_reader = csv.reader(a)
    b_reader = csv.reader(b)
...
Now you can iterate through the both files.
The python module
[fileinput](https://docs.python.org/2/library/fileinput.html) can be of help
also, it allows to iterate over lines of multiple files passed as arguments.
|
How to script django shell operations?
Question: I'd like to create a script (.sh or python, not important) that can do the
following:
heroku pg:reset DATABASE_URL
heroku run python manage.py migrate
heroku run python manage.py shell
> from myapp.scenarios import *; reset_demo_data(); exit()
Line 1 to 3 are UNIX commands, but Line 4 is python to be executed in the
opened Django shell.
I tried stuff with `|` and `>` to "inject" the python code in the command but
nothing worked. I guess it's quite easy to do but I can't figure out how..
Thanks.
Answer: I guess the best option would be to write a [custom management
command](https://docs.djangoproject.com/en/1.7/howto/custom-management-
commands/).
Your script could then look like:
heroku pg:reset DATABASE_URL
heroku run python manage.py migrate
heroku run python manage.py reset_demo
when the management command is something like:
from django.core.management.base import BaseCommand
from myapp.scenarios import *
class Command(BaseCommand):
def handle(self, *args, **options):
reset_demo_data()
|
python input() not working! (raspberry pi camera program)
Question: I am on my raspberry pi making a camera program (code)
import picamera
import sys
import time
question=input('do you want to take a picture(pic) take a timed picture(time) or a video(vid)')
if question=='pic':
time=0
elif question=='time':
time=1
elif question=='vid':
print ('coming soon')
sys.exit()
num=0
camera = picamera.PiCamera()
camera.start_preview()
input('press any key to take picture')
if time==1 :
print ('5')
time.sleep(1)
print ('4')
time.sleep(1)
print ('3')
time.sleep(1)
print ('2')
time.sleep(1)
print ('1')
time.sleep(1)
print ('0')
camera.capture(num)
num=num + 1
and when I ran it, it asked
> Do you want to take a picture(pic) take a timed picture(time) or a
> video(vid)
just as it was suppose to I replied
> pic
then it gave me this error
Traceback (most recent call last):
File "/home/pi/Desktop/camera.py", line 5, in <module>
question=input('do you want to take a picture(pic) take a timed picture(time) or a video(vid)')
File "<string>", line 1, in <module>
NameError: name 'pic' is not defined
What happened?
Answer: As Cyber said in the comments, use Python 2's raw_input() function instead
of input().
In Python 2, input() evaluates whatever you type as a Python expression, so typing
pic is treated as a lookup of an undefined variable, which raises the NameError.
raw_input() simply returns everything typed on STDIN as a str until the user hits
enter.
question = raw_input("Do you want to take a picture(pic) take a timed picture(time) or a video(vid)?")
|
python urllib2 and unicode
Question: I would like to collect information from the results given by a search engine.
But I can only write text instead of unicode in the query part.
import urllib2
a = "바둑"
a = a.decode("utf-8")
type(a)
#Out[35]: unicode
url = "http://search.naver.com/search.naver?where=nexearch&query=%s" %(a)
url2 = urllib2.urlopen(url)
give this error
#UnicodeEncodeError: 'ascii' codec can't encode characters in position 39-40: ordinal not in range(128)
Answer: Encode the Unicode data to UTF-8, then URL-encode:
from urllib import urlencode
import urllib2
params = {'where': 'nexearch', 'query': a.encode('utf8')}
params = urlencode(params)
url = "http://search.naver.com/search.naver?" + params
response = urllib2.urlopen(url)
Demo:
>>> from urllib import urlencode
>>> a = u"바둑"
>>> params = {'where': 'nexearch', 'query': a.encode('utf8')}
>>> params = urlencode(params)
>>> params
'query=%EB%B0%94%EB%91%91&where=nexearch'
>>> url = "http://search.naver.com/search.naver?" + params
>>> url
'http://search.naver.com/search.naver?query=%EB%B0%94%EB%91%91&where=nexearch'
Using
[`urllib.urlencode()`](https://docs.python.org/2/library/urllib.html#urllib.urlencode)
to build the parameters is easier, but you can also just escape the `query`
value with
[`urllib.quote_plus()`](https://docs.python.org/2/library/urllib.html#urllib.quote_plus):
from urllib import quote_plus
encoded_a = quote_plus(a.encode('utf8'))
url = "http://search.naver.com/search.naver?where=nexearch&query=%s" % encoded_a
|
Python how to delete lowercase words from a string that is in a list
Question: My question is: how do I delete all the lowercase words from a string that is
an element in a list? For example, if I have this list: `s = ["Johnny and
Annie.", "She and I."]`
what do I have to write to make python return `newlist = ["Johnny Annie", "She
I"]`
I've tried this, but it sadly doesn't work:
def test(something):
newlist = re.split("[.]", something)
newlist = newlist.translate(None, string.ascii_lowercase)
for e in newlist:
e = e.translate(None, string.ascii_lowercase)
Answer:
>>> s = ["Johnny and Annie.", "She and I."]
You can check if the word is lowercase using `islower()` and using `split` to
iterate word by word.
>>> [' '.join(word for word in i.split() if not word.islower()) for i in s]
['Johnny Annie.', 'She I.']
To remove punctuation as well
>>> import string
>>> [' '.join(word.strip(string.punctuation) for word in i.split() if not word.islower()) for i in s]
['Johnny Annie', 'She I']
|
MapReduce is not sorting
Question: I'm using Python to develop a MapReduce program. When I run map.py and
reduce.py from the command line:
cat passengers.dat | python map.py | sort | python reduce.py
The result is good. But if I try to use mapreduce:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.10.0-1.jar -input passengers.dat -output /out -file map.py -file reduce.py -mapper map.py -reducer reduce.py
The result is wrong; it seems the input is not sorted before it reaches the reducer.
I drew this conclusion because, on the command line:
cat passengers.dat | python map.py | sort
The result is:
141181 2014 5 1 0 STA 267
141181 2014 5 1 1 END 1031
141181 2014 5 1 4 STA 1031
141181 2014 5 1 5 END 267
But when I rewrite my reducer to simply echo the lines it receives, to see
what the problem is:
#!/usr/bin/env python
import sys
for line in sys.stdin:
print line
The result is not sorted:
141181 2014 5 1 1 END 1031
141181 2014 5 1 0 STA 267
141181 2014 5 1 5 END 267
141181 2014 5 1 4 STA 1031
the fifth column is out of order! Why?
Thank you.
I post my SOLUTION below!!
Answer: I solved the problem. MapReduce sorts on the key of each key-value pair,
and my mapper was not putting all of the sort-relevant columns into the key.
The solution I implemented is to rewrite the mapper to emit a composite key:
return 'value1#value2#value3#value4#value5#value6 \t value7'
This way MapReduce sorts the whole composite key, and I can split it apart in
the reducer. I hope it will be useful.
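A sketch of that mapper function for Hadoop streaming (the column layout and the `#` separator follow the sample data above; the function name is mine):

```python
def to_key_value(fields):
    # Join the first six columns with '#' into one composite key; Hadoop
    # streaming then sorts on the whole key, and the reducer can split the
    # key on '#' (and the line on '\t') to recover the original columns.
    return '#'.join(fields[:6]) + '\t' + fields[6]

sample = "141181 2014 5 1 0 STA 267"
print(to_key_value(sample.split()))
```

In the real mapper you would call this for every line read from `sys.stdin` and print the result.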
|
Best strategy for merging a lot of data frames using pandas
Question: I'm trying to merge many data frames (a few thousand one-column TSV files)
into a single CSV file using pandas. I'm new to pandas (and Python, for that
matter) and could use some input or direction.
My data frames are observational data on a list scraped from the web and do
not contain headers. For example:
data frame 1:
bluebird 34
chickadee 168
eagle 10
hawk 67
sparrow 2
data frame 2:
albatross 56
bluebird 78
hawk 3
pelican 19
sparrow 178
What I'm looking to do is simply create a master file with all of the individual
observations:
albatross 0 56
bluebird 34 78
chickadee 168 0
eagle 10 0
hawk 67 3
pelican 0 19
sparrow 2 178
I've tried to merge the data frames one at a time using pandas:
import pandas as pd
df1 = pd.read_table("~/home/birds1.tsv", sep='\t')
df2 = pd.read_table("~/home/birds2.tsv", sep='\t')
merged = df1.merge(df1, df2, how="left").fillna("0")
merged.to_csv("merged.csv", index=False)
but I am only getting one column. I don't have a master list of "birds", but I
can concatenate all the data and sort on unique names for a dictionary list if
this is needed.
What should my strategy be for merging a few thousand files?
Answer: Look at the docs for [merge](http://pandas.pydata.org/pandas-
docs/dev/merging.html#database-style-dataframe-joining-merging), when called
from a frame, the first parameter is the 'other' frame, and the second is
which variables you want to merge on (not actually sure what happens when you
pass a DataFrame).
But, assuming your bird column is called 'bird', what you probably want is:
In [412]: df1.merge(df2, on='bird', how='outer').fillna(0)
Out[412]:
bird value_x value_y
0 bluebird 34 78
1 chickadee 168 0
2 eagle 10 0
3 hawk 67 3
4 sparrow 2 178
5 albatross 0 56
6 pelican 0 19
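For a few thousand files, one workable strategy is to read each file, give its value column a unique name, and fold all the frames together with `functools.reduce` and an outer merge. A sketch with inline frames standing in for the files (the column names are my assumptions):

```python
from functools import reduce
import pandas as pd

# stand-ins for pd.read_table(path, sep='\t', header=None, names=['bird', 'count'])
frames = [
    pd.DataFrame({'bird': ['bluebird', 'chickadee', 'hawk'], 'count': [34, 168, 67]}),
    pd.DataFrame({'bird': ['albatross', 'bluebird', 'hawk'], 'count': [56, 78, 3]}),
]

# give every value column a unique name so the merge keeps them all
for i, df in enumerate(frames):
    df.rename(columns={'count': 'count_%d' % i}, inplace=True)

# fold the list of frames into one wide frame, one column per input file
merged = reduce(lambda left, right: left.merge(right, on='bird', how='outer'), frames)
merged = merged.fillna(0).sort_values('bird')
print(merged)
```

With very many files it may be faster to concatenate everything in long form with `pd.concat` and then pivot, but the reduce approach mirrors the two-frame merge shown above.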
|
importing with * (asterisk) versus as a namespace in python
Question: I know that it's bad form to use `import *` in Python, and I don't plan to
make a habit of it. However, I recently came across some curious behaviour that I
don't understand, and wondered if someone could explain it to me.
Lets say I have three python scripts. The first, `first_script.py`, comprises:
MESSAGE = 'this is from the first script'
def print_message():
print MESSAGE
if __name__ == '__main__':
print_message()
Obviously running this script gives me the contents of MESSAGE. I have a
second script called `second_script.py`, comprising:
import first_script
first_script.MESSAGE = 'this is from the second script'
if __name__ == '__main__':
first_script.print_message()
The behaviour (prints `this is from the second script`) makes sense to me.
I've imported `first_script.py`, but overwritten a variable within its
namespace, so when I call `print_message()` I get the new contents of that
variable.
However, I also have `third_script.py`, comprising:
from first_script import *
MESSAGE = 'this is from the third script'
if __name__ == '__main__':
print MESSAGE
print_message()
The first line this produces is understandable, but the second doesn't make
sense to me. My intuition was that because I've imported into my main
namespace via * in the first line, I have a global variable called `MESSAGE`.
Then in the second line I overwrite `MESSAGE`. Why then does the function
(imported from the first script) produce the OLD output, especially given the
output of `second_script.py`. Any ideas?
Answer: This has to do with `Scope`. For a very excellent description of this, please
see [Short Description of Python Scoping
Rules](http://stackoverflow.com/questions/291978/short-description-of-python-
scoping-rules)
For a detailed breakdown with tons of examples, see
<http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/tutorials/scope_resolution_legb_rule.ipynb>
_Here's the details on your specific case:_
The `print_message` function being called from your third test file is being
asked to print out some `MESSAGE` object. This function will use the standard
`LEGB` resolution order to identify which object this refers to. `LEGB` refers
to `Local, Enclosing function locals, Global, Builtins`.
1. Local - Here, there is no `MESSAGE` defined within the `print_message` function.
2. Enclosing function locals - There are no functions wrapping this function, so this is skipped.
3. Global - Any explicitly declared variables in the outer code. It finds `MESSAGE` defined in the global scope of the `first_script` module. _Resolution then stops, but i'll include the others for completeness._
4. Built-ins - The list of python built-ins, [found here](https://docs.python.org/2/library/functions.html#id).
So, you can see that resolution of the variable `MESSAGE` will cease
immediately in `Global`, since there was something defined there.
Another resource that was pointed out to me for this is [Lexical scope vs
Dynamic
scope](http://en.wikipedia.org/wiki/Scope_%28computer_science%29#Lexical_scoping_vs._dynamic_scoping),
which may help you understand scope better.
HTH
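The distinction can also be seen without any files by simulating the import by hand (a small sketch; `types.ModuleType` stands in for `first_script`, and the function returns rather than prints so the values are easy to check):

```python
import types

# build a throwaway module equivalent to first_script
first = types.ModuleType('first_script')
exec("MESSAGE = 'this is from the first script'\n"
     "def print_message():\n"
     "    return MESSAGE\n", first.__dict__)

MESSAGE = first.MESSAGE                    # what 'from first_script import *' does
MESSAGE = 'this is from the third script'  # rebinds only *our* name

print(first.print_message())               # still the first script's global

first.MESSAGE = 'this is from the second script'  # what second_script does
print(first.print_message())               # now it changes
```

The imported name and the module's global are two separate bindings to the same object; rebinding one never affects the other.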
|
Python Minimising function with Nelder-Mead algorithm
Question: I'm trying to minimize a function `mymodel` with the Nelder-Mead algorithm to
fit my data. This is done in the `myfit` function with scipy's
`optimize.fmin`. I think I'm quite close, but I must be missing something,
because I keep getting an error:
'operands could not be broadcast together with shapes (80,) (5,)'.
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
from scipy import special
def mymodel(c,t,y):
"""
This function specifies the form to be minimized by fmins in myfit.
c is a 1 x 5 array containing the fit parameters.
"""
m = (np.sin(np.exp(-c[1]*t)*c[0]/2.0))**2
# compute complete elliptic integral of the first kind with ellipk
w = np.pi*c[2]/2.0/special.ellipk(m)
dt = t[1] - t[0]
phase = np.cumsum(w)*dt
z = np.sum((y - c[0] * np.exp(-c[1]*t) * np.cos(phase+c[3])-c[4])**2)
return z
def myfit(c, pos):
"""
Fitting procedure for the amplitude decay of the undriven pendulum
initial fit parameters:
c[0]=theta_m, c[1]=alpha, c[2]=omega_0, c[3]=phi, c[4]=const.
pos = the position data
"""
# convert data to seconds
t = 0.001*np.arange(0,len(pos))
dt = t[1] - t[0]
# Minimise the function mymodel using Nelder-Mead algorithm
c = optimize.fmin(mymodel, c, args=(t,y), maxiter=5000, full_output=True)
m = (np.sin(np.exp(-c[1]*t)*c[0]/2.0))**2
# change of frequency with amplitude
w = np.pi*c[2]/2.0/special.ellipk(m)
phase = np.cumsum(w)*dt
# use values from fmin
fit = c[0]*np.exp(-c[1]*t)*np.cos(phase+c[3])+c[4]
return t, c, fit
t = np.array([ 0., 15., 30., 45., 60., 75., 90., 105.,
120., 135., 150., 165., 180., 195., 210., 225.,
240., 255., 270., 285., 300., 315., 330., 345.,
360., 375., 390., 405., 420., 435., 450., 465.,
480., 495., 510., 525., 540., 555., 570., 585.,
600., 615., 630., 645., 660., 675., 690., 705.,
720., 735., 750., 765., 780., 795., 810., 825.,
840., 855., 870., 885., 900., 915., 930., 945.,
960., 975., 1005., 1020., 1035., 1050., 1065., 1080.,
1095., 1110., 1125., 1140., 1155., 1170., 1185., 1200.,
])
pos = np.array([ 28.95, 28.6 , 28.1 , 27.5 , 26.75, 25.92, 24.78, 23.68,
22.5 , 21.35, 20.25, 19.05, 17.97, 16.95, 15.95, 15.1 ,
14.45, 13.77, 13.3 , 13. , 12.85, 12.82, 12.94, 13.2 ,
13.6 , 14.05, 14.65, 15.45, 16.1 , 16.9 , 17.75, 18.7 ,
19.45, 20.3 , 21.1 , 21.9 , 22.6 , 23.25, 23.75, 24.2 ,
24.5 , 24.75, 24.88, 24.9 , 24.8 , 24.65, 24.35, 23.9 ,
23.55, 22.95, 22.5 , 21.98, 21.3 , 20.65, 20.05, 19.4 ,
18.85, 18.3 , 17.8 , 17.35, 16.95, 16.6 , 16.35, 16.2 ,
16.1 , 16.1 , 16.35, 16.5 , 16.75, 17.02, 17.4 , 17.75,
18.3 , 18.65, 19.1 , 19.55, 20. , 20.45, 20.85, 21.25,
])
# fitting with myfit function
c = np.array([1,1,1,1,1]) # initial guess
t, c, fit = myfit(c, pos)
plt.plot(t,fit)
plt.show()
Answer: The problem is that `fmin`, called with `full_output=True`, does not
return just the optimized parameters: it returns a tuple containing the
parameters along with additional information.
The parameters are the first value in the tuple. You could just select this
with `[0]`:
c = optimize.fmin(mymodel, c, args=(t,pos), maxiter=5000, full_output=True)[0]
Or simply remove the `full_output=True` parameter:
c = optimize.fmin(mymodel, c, args=(t,pos), maxiter=5000)
<http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin.html>
|
Python - Text in form disappears
Question: In the following code, I would like the text to stay in the form when the
submit button is pressed. Currently the text disappears when submit is
pressed.
Here is a tutorial link I've been following:
<https://www.udacity.com/course/viewer#!/c-cs253/l-48736183/e-48754026/m-48717294>
import webapp2
form = """
<!DOCTYPE html>
<head>
<title>Unit 2 Rot 13</title>
</head>
<body>
<h2>Enter some text to ROT13:</h2>
<form method = "post">
<textarea name="text" value="%(word)s" style="height: 100px; width: 400px;"></textarea>
<br>
<input type="submit">
</form>
</body>
</html>
"""
class MainHandler(webapp2.RequestHandler):
def write_form(self, word=""):
self.response.out.write(form % {"word": word})
def get(self):
self.write_form()
def post(self):
string = self.request.get("word")
self.write_form(string)
app = webapp2.WSGIApplication([('/', MainHandler),
], debug=True)
Answer:
    string = self.request.get("text")
since your form item name is `text`. You have no POST variable named `word`,
so trying to get it returns an empty string. This harkens back to the fact
that you should use good identifiers.
Note also that `<textarea>` ignores a `value` attribute; to redisplay the
submitted text, put the placeholder between the tags instead:
    <textarea name="text" style="height: 100px; width: 400px;">%(word)s</textarea>
|
Python-selecting specified items from dictionary list
Question: Let's say I have list of cars:
car=[{'model':'ferrari', 'color': 'red', 'price':1200},
{'model':'lamborgini', 'color': 'blue', 'price':2000},
{'model':'ferrari', 'color': 'yellow', 'price':1000},
{'model':'ferrari', 'color': 'yellow', 'price':500}]
I want to select the cheapest car for every model-color combination(cheapest
red lamborgini, cheapest green ferrari etc) and put them in new list.
The output should be:
[{'model':'ferrari', 'color': 'red', 'price':1200},
{'model':'lamborgini', 'color': 'blue', 'price':2000},
{'model':'ferrari', 'color': 'yellow', 'price':500}]
How can I do this?
Answer: It may be a good idea to create a helper data structure.
Here I use a dictionary with (model, color) tuples as keys:
>>> car = [ {'model':'ferrari', 'color': 'red', 'price':1200},
... {'model':'lamborgini', 'color': 'blue', 'price':2000},
... {'model':'ferrari', 'color': 'yellow', 'price':1000},
... {'model':'ferrari', 'color': 'yellow', 'price':500} ]
>>> from operator import itemgetter
>>> from collections import defaultdict
>>> D = defaultdict(list)
>>> for item in car:
... D[item['model'], item['color']].append(item)
...
>>> min(D['ferrari', 'yellow'], key=itemgetter('price'))
{'color': 'yellow', 'model': 'ferrari', 'price': 500}
This means you don't need to scan the entire collection every time you make a
query
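Building on that, the full requested list can be produced in one pass by keeping only the cheapest entry seen so far for each (model, color) key (a sketch using the sample data):

```python
cars = [{'model': 'ferrari', 'color': 'red', 'price': 1200},
        {'model': 'lamborgini', 'color': 'blue', 'price': 2000},
        {'model': 'ferrari', 'color': 'yellow', 'price': 1000},
        {'model': 'ferrari', 'color': 'yellow', 'price': 500}]

cheapest = {}
for c in cars:
    key = (c['model'], c['color'])
    # keep this car only if it is the first, or the cheapest, seen for its key
    if key not in cheapest or c['price'] < cheapest[key]['price']:
        cheapest[key] = c

result = list(cheapest.values())
print(result)
```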
|
Python solving 2nd order ODE with quad function
Question: I am studying the dynamics of a damped, driven pendulum with a second-order
ODE defined like
[so](http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson2/Demos.html), and
specifically I am programming:
d^2y/dt^2 + c * dy/dt + sin(y) = a * cos(wt)
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
def pendeq(t,y,a,c,w):
y0 = -c*y[1] - np.sin(y[0]) + a*np.cos(w*t)
y1 = y[1]
z = np.array([y0, y1])
return z
a = 2.1
c = 1e-4
w = 0.666667 # driving angular frequency
t = np.array([0, 5000]) # interval of integration
y = np.array([0, 0]) # initial conditions
yp = integrate.quad(pendeq, t[0], t[1], args=(y,a,c,w))
This problem does look quite similar to [Need help solving a second order non-
linear ODE in python](http://stackoverflow.com/questions/19779217/need-help-
solving-a-second-order-non-linear-ode-in-python), but I am getting the error
Supplied function does not return a valid float.
What am I doing wrong??
Answer: `integrate.quad` requires that the function supplied (`pendeq`, in your case)
**returns only a float**. Your function is returning an array.
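If the goal is the pendulum's trajectory rather than a single number, `scipy.integrate.odeint` is likely the tool `quad` is being mistaken for: it integrates a system of first-order ODEs over a time grid. A sketch with the question's parameters (note that `odeint` expects the signature `f(y, t, ...)`):

```python
import numpy as np
from scipy.integrate import odeint

def pendeq(y, t, a, c, w):
    # y[0] = angle, y[1] = angular velocity
    return [y[1], -c * y[1] - np.sin(y[0]) + a * np.cos(w * t)]

a, c, w = 2.1, 1e-4, 0.666667
t = np.linspace(0, 50, 2000)
sol = odeint(pendeq, [0.0, 0.0], t, args=(a, c, w))
print(sol.shape)  # one row per time point: (angle, velocity)
```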
|
Python module manipulation with other modules
Question: I was fooling around with Python modules today and I found something
interesting. Suppose you have a module like this:
# mymodule
value = "default"
def setVal(new):
value = new
def getVal():
return value
Now, suppose you have another module that imports mymodule. When you use the
module in the Python shell, this happens:
>>> mymodule.setVal("new value")
>>> mymodule.getVal()
'default'
>>> mymodule.value = "new value"
>>> mymodule.getVal()
'new value'
Why does setting the value of the variable in a module using a function not
work, but assigning it using "=" does?
Answer: In your setVal, `value` is local to the function, and 'disappears' when the
function returns. To make it global to the module, you must declare it so.
This will do as you hoped, even though it is unnecessary (as is getVal()).
def setVal(new):
global value
value = new
|
VIM/Python: visualmode/mode detection
Question:
function! Delete() range
python3 << EOF
import vim
vim.command('let x = visualmode()')
mode = vim.eval('x')
EOF
endfunction
I'm not able to detect the mode properly in Vim (xterm). It can't distinguish
between visual and normal mode: I always get 'n' or '^V', regardless of what
mode I'm actually in. I invoke the function like so:
:map <kDel> :call Delete()<CR>
:imap <kDel> <Esc>l:call Delete()<CR>
:vmap <kDel> <Esc>:call Delete()<CR>
How exactly are these functions used?
Answer: This simplified version of your function (please use testable examples):
function! Delete() range
python << EOF
import vim
print vim.eval('visualmode()')
EOF
endfunction
does exactly what it is supposed to do:
* if the last visual mode was plain _visual_ mode, it echoes `v`,
* if the last visual mode was _visual-line_ mode, it echoes `V`,
* if the last visual mode was plain _visual-block_ mode, it echoes `^V`.
Of course, you would do the following in your real function:
mode = vim.eval('visualmode()')
What, _exactly_ , do you expect and what, _exactly_ , are you trying to
achieve?
\--- edit ---
From `:help visualmode()`, emphasis mine:
> The result is a String, which describes **_the last_** Visual mode used in
> the current buffer.
So it is obvious that `visualmode()` can't be used to know the _current_ mode.
The right function is `mode()`.
From `:help mode()`, emphasis mine:
> This is useful in the 'statusline' option or when used with remote_expr()
> **_In most other places it always returns "c" or "n"._**
So using straight `mode()` in most contexts will be useless, as demonstrated
by the mappings below that all put you in normal mode before calling `mode()`
so you always get `n`.
function! Delete() range
python << EOF
import vim
print vim.eval('mode()')
EOF
endfunction
nmap <key> :call Delete()<CR> --> n
imap <key> <Esc>:call Delete()<CR> --> n
xmap <key> <Esc>:call Delete()<CR> --> n
For `mode()` to return the value you want, you need to be in an _expression_
context which allows you to _evaluate_ stuff without leaving the current mode:
nmap <expr> <key> Delete() --> n
imap <expr> <key> Delete() --> i
xmap <expr> <key> Delete() --> v, V or ^V
|
How to read & export certain files from a Python GUI-prompted directory?
Question: OK guys,
I'm currently working on file reading and processing with Python & OpenCV's
GUI feature. The feature will prompt the user to select a directory path
for a folder containing 340 JPEG images, which I labelled "frame1" to
"frame340". Then, I want to select several frames, process them, and save the
processed ones in a different directory.
My big issue is, I'm trying to get only frame87, frame164, and frame248 from
this folder of 340 images, and Python just keeps returning an error claiming
that the "directory name is invalid", like this:
Traceback (most recent call last):
File "C:\Users\maxwell_hamzah\Documents\Python27\imageReadBeta.py", line 25, in <module>
imgRead = os.listdir(str(dirname) + "/frame"+ str(i) + ".jpg")
WindowsError: [Error 267] The directory name is invalid: 'C:/Users/maxwell_hamzah/Documents/Python27/toby arm framed/frame87.jpg/*.*'
To help familiarize with the situation, here's what my work looks like:
import os
import numpy as np
import cv2
from matplotlib import pyplot as plt
from skimage import color, data, restoration
import Tkinter, tkFileDialog
# first, we setup the Tkinter features for file-reading
root = Tkinter.Tk()
root.withdraw()
# prompt user to ask about the file directory
dirname = tkFileDialog.askdirectory\
(parent=root,initialdir="/",title='Pick FRAMES directory')
X = [] # initiate an array to store read images
frameIndex = [87, 163, 248] #this index is which frames we are interested in
imgRead = ""
temp = []
# we begin to read only frame87, frame163, and frame248
for i in frameIndex:
imgRead = os.listdir(str(dirname) + "/frame"+ str(i) + ".jpg")
temp = cv2.imread(imgRead, -1)
X.append(temp)
I'm totally stuck on how to fix this bug, especially the for loop part,
where the error comes from. Python keeps freaking out on the imgRead variable,
claiming that the directory is invalid. Plus, I'm also wondering how to
"export" processed files to other directories (e.g. saving processed images
from "My Pictures" to "My Music").
Really appreciate your help, guys.
Maxwell
Answer: In the last block, you call a method to list files, which is expecting a
directory, but you pass it a file path. That's a bug, and actually you don't
need that here in the first place:
for i in frameIndex:
imgRead = "{0}/frame{1}.jpg".format(dirname, i)
temp = cv2.imread(imgRead, -1)
X.append(temp)
As to moving files in Python, that's a pretty classic need, there's plenty of
doc out there. [One
example](http://stackoverflow.com/questions/8858008/moving-a-file-in-python).
|
Comparing 2 images/pictures, and mark the difference
Question: I am learning to compare 2 images/pictures. I found the post [Compare two
images the python/linux
way](http://stackoverflow.com/questions/1927660/compare-two-images-the-python-
linux-way) very useful, but I have 2 questions regarding the technique, so
please bear with me as I ask both questions here.
Question 1:
The post shows ways to compare 2 pictures/images. Probably the easiest way is:
from PIL import Image
from PIL import ImageChops
im1 = Image.open("file1.jpg")
im2 = Image.open("file2.jpg")
diff = ImageChops.difference(im2, im1).getbbox()
print diff
when I have 2 look alike pictures and run above, it give result:
(389, 415, 394, 420)
It’s the position on the picture where the difference in 2 pictures lies. So
my question is, would it be possible to **mark the difference** on the picture
(for example, draw a circle)?
Question 2:
import math, operator
from PIL import Image
def compare(file1, file2):
image1 = Image.open(file1)
image2 = Image.open(file2)
h1 = Image.open("image1").histogram()
h2 = Image.open("image2").histogram()
rms = math.sqrt(reduce(operator.add, map(lambda a,b: (a-b)**2, h1, h2))/len(h1))
if __name__=='__main__':
import sys
file1 = ('c:\\a.jpg') # added line
file2 = ('c:\\b.jpg') # added line
file1, file2 = sys.argv[1:]
print compare(file1, file2)
when I run above, it gives an error “ValueError: need more than 0 values to
unpack”, and the problem lies in this line:
file1, file2 = sys.argv[1:]
how can I have it corrected? and I tried below it neither works.
print compare('c:\\a.jpg', 'c:\\b.jpg')
Thanks.
## \-----------------------
added question following Matt's help.
Your answer can draw a rectangle marking the difference between the 2
images/pictures. But when the 2 images look generally the same with small
differing spots spread around, it draws one big rectangle covering the whole
area that includes all the spots. Is there a way to mark the differences
individually?
thanks again.
Answer: **Regarding your first question:**
import ImageDraw
draw = ImageDraw.Draw(im2)
draw.rectangle(diff)
im2.show()
**Regarding your second question:**
The error states, that `sys.argv` does not contain enough values to be
assigned to `file1` and `file2`. You need to pass the names of the two files
you want to compare to you python script (the variable `sys.arv` contains the
name of your script and all the command line parameters):
python name_of_your_script.py file1.jpg file2.jpg
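For the follow-up about marking scattered differences individually, one approach (a sketch on synthetic arrays; with real images you would start from `np.asarray(Image.open(...))`) is to threshold the difference and label its connected regions, getting one bounding box per spot:

```python
import numpy as np
from scipy import ndimage

# two synthetic grayscale "images" differing in two separate spots
img1 = np.zeros((50, 50), dtype=np.uint8)
img2 = img1.copy()
img2[5:8, 5:8] = 255        # first difference
img2[30:35, 40:44] = 255    # second difference

diff = np.abs(img1.astype(int) - img2.astype(int)) > 0
labels, n = ndimage.label(diff)          # one integer label per connected spot
boxes = ndimage.find_objects(labels)     # one (row_slice, col_slice) per spot
print(n, boxes)
```

Each box can then be converted to (left, upper, right, lower) coordinates and passed to `ImageDraw.Draw(im2).rectangle(...)`, one rectangle per spot.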
|
How to customize a folder's icon via Python?
Question: As [this SU answer](http://superuser.com/a/410091/35237) notes, in order to
change a folder's icon, one has to change a folder's attribute to read-only or
system, and have its `desktop.ini` contain something like
[.ShellClassInfo]
IconResource=somePath.dll,0
While it would be straightforward to use `win32api.SetFileAttributes(dirpath,
win32con.FILE_ATTRIBUTE_READONLY)` and create the `desktop.ini` from scratch,
I'd like to preserve other customizations present in a potentially existing
`desktop.ini`. But should I use
[ConfigParser](https://docs.python.org/2/library/configparser.html) for that
or does e.g. `win32api` (or maybe `ctypes.win32`) provide native means to do
so?
Answer: Ok, so from [this thread](http://stackoverflow.com/questions/15274013/how-to-
make-changes-using-configparser-in-ini-file-persistent), I managed to get
something working. I hope that it will help you.
Here is my base desktop.ini file:
[.ShellClassInfo]
IconResource=somePath.dll,0
[Fruits]
Apple = Blue
Strawberry = Pink
[Vegies]
Potatoe = Green
Carrot = Orange
[RandomClassInfo]
foo = somePath.ddsll,0
Here is the script I use:
from ConfigParser import RawConfigParser

desired = {"Fruits": {"Apple": "Green", "Strawberry": "Red"}, "Vegies": {"Carrot": "Orange"}}

# Get a config object
config = RawConfigParser()

# Read the file 'desktop.ini'
config.read(r'C:\Path\To\desktop.ini')

for section in desired:
    for option in desired[section]:
        try:
            # Read the current value of this section/option pair
            currentVal = config.get(section, option)
            print "Current value of " + section + " - " + option + ": " + currentVal
            # If the value is not the desired one, replace it
            if currentVal != desired[section][option]:
                print "Replacing value of " + section + " - " + option + ": " + desired[section][option]
                config.set(section, option, desired[section][option])
        except:
            print "Could not find " + section + " - " + option

# Rewrite the configuration to the .ini file
with open(r'C:\Path\To\desktop.ini', 'w') as myconfig:
    config.write(myconfig)
Here is the output desktop.ini file:
[.ShellClassInfo]
iconresource = somePath.dll,0
[Fruits]
apple = Green
strawberry = Red
[Vegies]
potatoe = Green
carrot = Orange
[RandomClassInfo]
foo = somePath.ddsll,0
The only problem I have is that the options are losing their uppercase first
letter.
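That lowercasing is the parser's documented default: `RawConfigParser` runs every option name through `optionxform`, which is `str.lower` unless you override it. Setting it to the identity preserves case (Python 3 shown; in Python 2 the module is `ConfigParser` and you would use `readfp` instead of `read_string`):

```python
from configparser import RawConfigParser

config = RawConfigParser()
config.optionxform = str   # identity transform: keep option names as written

config.read_string("[Fruits]\nApple = Blue\nStrawberry = Pink\n")
print(config.options('Fruits'))
```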
|
Matplotlib: add zoomed region of a graph with anisotropic (axis-dependent) zoom
Question: I'm trying to get a plot done using `Python.matplotlib` in which I would add
to a first plot a zoomed region in a box located in the lower right corner.
Looking at documentation and examples, I know that this is usually done using
`zoomed_inset_axes` but it seems to only take one factor for zooming/dilating
for both directions when I'd like to dilate only along
the y axis (looking at the attached picture it is easy to guess why). So I
tried to give a bbox scaled as I would like but it doesn't seem to do the
trick. Is there an axis transformation or a data operation that could help me?
The red arrows and box described the size I would like my box to take (more or
less, I'll tune later of course).
Edit:
I've been playing with it a bit more and my problem seems to be that the two
graphs, in the current state, do share the same y-axis scale.
Here is my code, by the way, should have been included in the first place:
ax = plt.gca()
axins = zoomed_inset_axes(ax, 1, loc=4)
axins.scatter(x,y, lw=0.1,c="#1B9E77",alpha=0.8)
x1, x2, y1, y2 = -75, 5200, -0.31, -0.18 #coordinates of the region I'm zooming in
axins.plot(N.linspace(x1,x2,100),[-0.25]*100,c="#EFD807",lw=3) #yellow line
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
mark_inset(ax, axins, loc1=3, loc2=1, fc="none", ec="0.5")
Answer: An alternative method to `zoomed_inset_axes` is to do it 'manually'. I'm sure
there are shortcomings to this method, so if people know of them please
comment. I am by no means suggesting this is the best method, just one I've
seen around.
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1.inset_locator import mark_inset  # needed below
x = np.linspace(0, 100, 1000)
y = np.sin(x)
ax = plt.subplot(111)
ax.plot(x, y)
ax.set_ylim(-50, 10)
ax2 = plt.axes([0.2, 0.2, .6, .2])
ax2.plot(x, y)
mark_inset(ax, ax2, loc1=2, loc2=1, fc="none", ec="0.5")
plt.show()

This answer is adapted from [this
original](http://stackoverflow.com/questions/13583153/how-to-zoomed-a-portion-
of-image-and-insert-in-the-same-plot-in-matplotlib) SO post.
An altenative method is proposed in [this
SO](http://stackoverflow.com/questions/24035118/different-x-and-y-scale-in-
zoomed-inset-matplotlib) post.
|
connected components attributes in python
Question: I want to compute statistics on the connected components of a binary image. In
matlab we have
> Shape Measurements
'Area' 'EulerNumber' 'Orientation'
'BoundingBox' 'Extent' 'Perimeter'
'Centroid' 'Extrema' 'PixelIdxList'
'ConvexArea' 'FilledArea' 'PixelList'
'ConvexHull' 'FilledImage' 'Solidity'
'ConvexImage' 'Image' 'SubarrayIdx'
'Eccentricity' 'MajorAxisLength'
'EquivDiameter' 'MinorAxisLength'
Is there any equivalent in python?
Thanks
Answer: Just answered a [similar
question](http://stackoverflow.com/questions/37147762/python-connected-
components-with-pixel-list/38706369#38706369). Use the
[`regionprops`](http://scikit-
image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops)
function in scikit-image to get the CC properties in Python.
from scipy.ndimage.measurements import label
from skimage.measure import regionprops
labeled, num_features = label(img)  # label() returns (array, count); don't shadow it
props = regionprops(labeled)
# get centroid of second object
centroid = props[1].centroid
# get eccentricity of first object
ecc = props[0].eccentricity
The shape measurements output by `regionprops` include all the features listed
above in the question. The `'PixelIdxList'` equivalent in Python is the
`coords` property output by `regionprops`.
|
python 2.7 requests.get() returning cookie raising TypeError
Question: I'm doing a simple HTTP requests authentication against our internal server,
getting the cookie back, then hitting a Cassandra RESTful server to get data.
The second requests.get() chokes when the cookie is passed to it.
I have a curl script that extracts the data successfully, I'd rather work with
the response JSON data in pure python.
Any clues to what I've doing wrong below? I dump the cookie, it looks fine,
very similar to my curl cookie.
Craig
* * *
import requests
import rtim
# this makes the auth and gets the cookie returned, save the cookie
myAuth = requests.get(rtim.rcas_auth_url, auth=(rtim.username, rtim.password),verify=False)
print myAuth.status_code
authCookie=myAuth.headers['set-cookie']
IXhost='xInternalHostName.com:9990'
mylink='http://%s/v1/BONDISSUE?format=JSONARRAY&issue.isin=%s' % (IXhost, 'US3133XK4V44')
# chokes on next line .... doesn't like the Cookie format
r = requests.get(mylink, cookies=authCookie)
(Pdb) next
TypeError: 'string indices must be integers, not str'
Answer: I think the problem is on the last line:
`r = requests.get(mylink, cookies=authCookie) `
requests assumes that the `cookies` parameter is a dictionary, but you are
passing a string object `authCookie` to it.
The exception is raised when requests tries to treat the string `authCookie` as
a dictionary.
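Two ways around it: responses from `requests` carry a cookie jar that can be passed straight back in (`r = requests.get(mylink, cookies=myAuth.cookies)`), or, if you only have the raw `Set-Cookie` header string, it can be parsed into the plain dict that `requests` expects using only the standard library (a sketch with a hypothetical cookie value):

```python
from http.cookies import SimpleCookie  # the 'Cookie' module in Python 2

raw = 'sessionid=abc123; Path=/; HttpOnly'   # hypothetical Set-Cookie value
jar = SimpleCookie()
jar.load(raw)

# a plain {name: value} dict, which the cookies= parameter of requests accepts
cookies = {name: morsel.value for name, morsel in jar.items()}
print(cookies)
```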
|
Writing hex data into a file
Question: I'm trying to write hex data taken from an ASCII file to a newly created
binary file.
ascii file example:
98 af b7 93 bb 03 bf 8e ae 16 bf 2e 52 43 8b df
4f 4e 5a e4 26 3f ca f7 b1 ab 93 4f 20 bf 0a bf
82 2c dd c5 38 70 17 a0 00 fd 3b fe 3d 53 fc 3b
28 c1 ff 9e a9 28 29 c1 94 d4 54 d4 d4 ff 7b 40
my code
hexList = []
with open('hexFile.txt', 'r') as hexData:
line=hexData.readline()
while line != '':
line = line.rstrip()
lineHex = line.split(' ')
for i in lineHex:
hexList.append(int(i, 16))
line = hexData.readline()
with open('test', 'wb') as f:
for i in hexList:
f.write(hex(i))
I thought `hexList` already held the converted data and that `f.write(hex(i))`
would write that data to the file as bytes, but Python writes it as ASCII
text. Final output: `0x9f0x2c0x380x590xcd0x110x7c0x590xc90x30xea0x37`, which
is wrong!
Where is the issue?
Answer: Use
[`binascii.unhexlify`](https://docs.python.org/3/library/binascii.html#binascii.unhexlify):
>>> import binascii
>>> binascii.unhexlify('9f')
'\x9f'
>>> hex(int('9f', 16))
'0x9f'
* * *
import binascii
with open('hexFile.txt') as f, open('test', 'wb') as fout:
for line in f:
fout.write(
binascii.unhexlify(''.join(line.split()))
)
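In Python 3 the same conversion is built in: `bytes.fromhex` accepts the space-separated pairs directly, so each line can be converted without any manual splitting:

```python
line = "98 af b7 93 bb 03 bf 8e ae 16 bf 2e 52 43 8b df"
data = bytes.fromhex(line)   # fromhex skips the spaces between pairs
print(len(data), data[:4])
```

Writing `data` with `open('test', 'wb')` then produces the raw bytes, just like `unhexlify`.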
|
cannot instantiate a class even though it is in a module
Question: I want to use [`pycvss`](https://pypi.python.org/pypi/pycvss/1.0.2) so I
installed it via `pip`.
Instantiating the `Cvss()` class fails, though:
>>> import pycvss
>>> c = pycvss.Cvss()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'Cvss'
When inspecting the module it indeed looks quite empty:
>>> dir(pycvss)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__']
But when I check where python looks for the module:
>>> print(pycvss.__file__)
/usr/local/python-2.7.8/lib/python2.7/site-packages/pycvss/__init__.pyc
it looks like that,
`/usr/local/python-2.7.8/lib/python2.7/site-packages/pycvss/pycvss.py`
does define `Cvss():
(...)
class Cvss(object):
"""Common Vulnerability Scoring System.
Use this class to set base, temporal and environmental vectors and
compute scores.
Cf module level documentation for sample usage.
"""
_BASE_VECTOR = (AV, AC, Au, C, I, A)
(...)
I must be missing something obvious, but the more I look, the less I see
(other modules are fine, including those installed by `pip` like for instance
`requests`).
Answer: The [`__init__`
file](https://github.com/attwad/pycvss/blob/master/pycvss/__init__.py) in that
project is _empty_. You need to import the [nested `pycvss`
module](https://github.com/attwad/pycvss/blob/master/pycvss/pycvss.py):
from pycvss import pycvss
c = pycvss.Cvss()
The documentation is rather unclear about this; I'd file a [documentation
issue](https://github.com/attwad/pycvss/issues?q=is%3Aopen+is%3Aissue) with
the project.
Personally, I'd add one line to the `__init__.py` file here:
from pycvss import Cvss
and update the documentation to illustrate how to import that class.
Note that the project expects the [`enum`
library](https://docs.python.org/3/library/enum.html) to be available. If you
are not using Python 3.4 or newer, you'll have to install the [`enum34`
backport package](https://pypi.python.org/pypi/enum34) too.
|
Python exiting multiple threads
Question: I'm trying to see how multiple threads work in order to use them in an
automation project. I can run the threads but I cannot find a way to exit
the two threads completely: the threads restart after each keyboard interrupt.
Is there a way to exit both threads with a keyboard interrupt?
import thread
from time import sleep
    # parameters when starting
temp_c = 32
T_hot = 30
T_cold = 27
interval_temp = 2
def ctrl_fan(temp_c, T_hot,interval_temp):
while True:
if temp_c >= T_hot:
print 'refreshing'
else:
print ' fan stopped'
sleep(interval_temp)
print 'shutting everything off'
def ctrl_light(temp_c, T_cold,interval_temp):
while True:
if temp_c <= T_cold:
print 'warming'
else:
print 'light stopped'
sleep(interval_temp)
print 'shutting everything off'
try:
thread.start_new_thread(ctrl_fan, (temp_c, T_hot,interval_temp, ) )
sleep(1)
thread.start_new_thread(ctrl_light, (temp_c, T_cold,interval_temp, ) )
except (KeyboardInterrupt, SystemExit):
thread.exit()
print "Error: unable to start thread"
Answer: Sure,
Firstly I'd recommend using the slightly higher level
[`threading`](https://docs.python.org/2/library/threading.html) module instead
of the `thread` module.
To start a thread with `threading` use the following
import threading
t = threading.Thread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t.start()
There are a few things you'll need to do to get the program to exit with a
`Ctrl-C` interrupt.
Firstly you will want to set the threads to be
[daemon](https://docs.python.org/2/library/threading.html#threading.Thread.daemon),
so that they allow the program to exit when the main thread exits (`t.daemon =
True`)
You will also want the main thread to wait for the threads to complete; you can
use `t.join()` to do this. However, this won't raise a `KeyboardInterrupt`
exception until the thread finishes. There is a workaround for this, though:
while t.is_alive():
t.join(1)
Providing a timeout value gets around this.
I'd be tempted to pull this together into a subclass, to get the behaviour you
want
import threading
class CustomThread(threading.Thread):
def __init__(self, *args, **kwargs):
threading.Thread.__init__(self, *args, **kwargs)
self.daemon = True
def join(self, timeout=None):
if timeout is None:
while self.is_alive():
threading.Thread.join(self, 10)
else:
return threading.Thread.join(self, timeout)
t1 = CustomThread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t1.start()
t2 = CustomThread(target=ctrl_light, args=(temp_c, T_cold, interval_temp))
t2.start()
t1.join()
t2.join()
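A minimal, self-contained sketch of the same daemon-plus-timeout-join pattern (using a finite loop instead of `while True` so it runs to completion on its own):

```python
import threading
import time

results = []

def worker(label, count, interval):
    # stand-in for ctrl_fan / ctrl_light: a finite loop for demonstration
    for _ in range(count):
        results.append(label)
        time.sleep(interval)

t = threading.Thread(target=worker, args=('fan', 3, 0.01))
t.daemon = True   # lets the program exit even if the thread is still running
t.start()

# joining with a timeout keeps the main thread responsive to Ctrl-C
while t.is_alive():
    t.join(0.1)

print(results)
```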
|
Testing in Django 1.7 throws warning: RemovedInDjango18Warning
Question: When I do my tests with Django 1.7.1 it throws the next warning:
/usr/local/lib/python2.7/dist-packages/django/test/_doctest.py:59:
RemovedInDjango18Warning: The django.test._doctest module is deprecated;
use the doctest module from the Python standard library instead.
RemovedInDjango18Warning)
I also tried adding in the settings.py file this line:
TEST_RUNNER = 'django.test.runner.DiscoverRunner'
But still throws the warning.
I write down the code from my test model file:
from django.test import TestCase
from myproject import tests, models
class TestModels(TestCase):
def test_rol(self):
rol = tests.create_rol()
rol.save()
self.assertTrue(isinstance(rol, models.Rol))
self.assertEqual(rol.name, rol.__unicode__())
I have read the docs from the Django Web page:
<https://docs.djangoproject.com/en/1.7/topics/testing/overview/> but still
can't get a solution.
I am using Django-nose.
How can I solve this?
Thanks
Answer: This is a problem in the `django-nose` package (see [GitHub issue
#149](https://github.com/django-nose/django-nose/issues/149)). It has been
fixed in master, but there is no PyPI release for it yet. For now, if you
don't want to see the warning, you can point your `requirements.txt` file to
the fixed version:
-e git://github.com/django-nose/django-nose.git@154b663097e8f3131fe5d1cdd8a8df2e388450ac#egg=django_nose
... instead of ...
django-nose==1.2
**Update: django-nose 1.3 has been released and contains this fix.**
|
Communicating with the outside world from within an atomic database transaction
Question: I am implementing an import tool (Django 1.6) that takes a potentially very
large CSV file, validates it and depending on user confirmation imports it or
not. Given the potential large filesize, the processing of the file is done
via flowy (a python wrapper over Amazon's SWF). Each import job is saved in a
table in the DB and the workflow, which is quite simple and consists of only
one activity, basically calls a method that runs the import and saves all
necessary information about the processing of the file in the job's record in
the database.
The tricky thing is: We now have to make this import atomic. Either all
records are saved or none. But one of the things saved in the import table is
the progress of the import, which is calculated based on the position of the
file reader:
progress = (raw_data.tell() * 100.0) / filesize
And this progress is used by an AJAX progressbar widget in the client side. So
simply adding `@transaction.atomic` to the method that loops through the file
and imports the rows is not a solution, because the progress will only be
saved on commit.
The CSV files only contain one type of record and affect a single table. If I
could somehow do a transaction only on this table, leaving the job table free
for me to update the progress column, it would be ideal. But from what I've
found so far it seems impossible. The only solution I could think of so far is
opening a new thread and a new database connection inside it every time I need
to update the progress. But I keep wondering… will this even work? Isn't there
a simpler solution?
Answer: One simple approach would be to use the `READ UNCOMMITTED` transaction
isolation level. That _could_ allow dirty reads, which would allow your other
processes to see the progress even though the transaction hasn't been
committed. However, whether this works or not will be database-dependent. (I'm
not familiar with MySQL, but this wouldn't work in PostgreSQL because `READ
UNCOMMITTED` works the same way as `READ COMMITTED`.)
Regarding your proposed solution, you don't necessarily need a new thread, you
really just need a fresh connection to the database. One way to do that in
Django might be to take advantage of the [multiple database
support](https://docs.djangoproject.com/en/dev/topics/db/multi-db/). I'm
imagining something like this:
1. As described in the documentation, add a new entry to `DATABASES` with a different name, but the same setup as `default`. From Django's perspective we are using multiple databases, even though we in fact just want to get multiple connections to the same database.
2. When it's time to update the progress, do something like:
JobData.objects.using('second_db').filter(id=5).update(progress=0.5)
That should take place in its own autocommitted transaction, allowing the
progress to be seen by your web server.
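As a sketch of step 1 (the engine name and credentials here are placeholders, not taken from the question):

```python
# settings.py (sketch): a second alias that mirrors ``default`` so Django
# opens a separate connection to the very same database
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # placeholder engine
        'NAME': 'myapp',
        'USER': 'myapp',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}
DATABASES['second_db'] = dict(DATABASES['default'])
```

With this in place, `.using('second_db')` gets its own connection, and writes on it commit independently of the long-running import transaction.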
Now, does this work? I honestly don't know, I've never tried anything like it!
|
Python: onkeypress without turtle window?
Question: My problem at this time is, that I want to detect a keypress through the
command
onkeypress(fun,"key")
but when I import `onkeypress` and `listen` from turtle, a turtle window pops
up when I run my program. Do you know how to close it again, or how to keep it
from appearing at all? Thanks in advance for any answers; sorry for my bad
English (I'm German and 13)
Answer: It might be hard to find experts in python
[turtle](https://docs.python.org/3/library/turtle.html). However, if you are
not limited to that library, you may use the following code to get the key
pressed by the user (last call actually gets the key for you):
class _Getch:
"""Gets a single character from standard input. Does not echo to the
screen."""
def __init__(self):
try:
self.impl = _GetchWindows()
except ImportError:
self.impl = _GetchUnix()
def __call__(self): return self.impl()
class _GetchUnix:
def __init__(self):
import tty, sys
def __call__(self):
import sys, tty, termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
class _GetchWindows:
def __init__(self):
import msvcrt
def __call__(self):
import msvcrt
return msvcrt.getch()
getch = _Getch()
Code comes from [this article](http://code.activestate.com/recipes/134892/)
|
class ForkAwareLocal(threading.local): AttributeError: 'module' object has no attribute 'local'
Question: I'm a python newbie. Am trying this code snippet from the manual, but am
getting this error. Cannot figure out why. Any help will be appreciated. Thx
## Abhi
## code snippet
    #!/usr/bin/python
# -*- coding: utf-8 -*-
from multiprocessing import Pool
def f(x):
return x*x
p = Pool(1)
p.map(f, [1, 2, 3])
* * *
## Error
[root@localhost mpls-perf]# python thr_1.py
Traceback (most recent call last):
File "thr_1.py", line 4, in <module>
from multiprocessing import Pool
File "/usr/lib64/python2.7/multiprocessing/__init__.py", line 65, in <module>
from multiprocessing.util import SUBDEBUG, SUBWARNING
File "/usr/lib64/python2.7/multiprocessing/util.py", line 340, in <module>
class ForkAwareLocal(threading.local):
AttributeError: 'module' object has no attribute 'local'
Exception AttributeError: '_shutdown' in <module 'threading'
from '/root/nfs/zebos/tests/mpls- perf/threading.pyc'> ignored
* * *
OS etc:

    [root@localhost mpls-perf]# uname -a
    Linux localhost.localdomain 3.4.4464bit-smp-xp1.1-allpatch #1 SMP Wed Oct 15 17:34:02 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
    [root@localhost mpls-perf]# python -V
    Python 2.7.5
Answer: You appear to have a file called `threading.py` (the traceback shows it
at `/root/nfs/zebos/tests/mpls-perf/threading.pyc`). It is being imported by
`multiprocessing` instead of the built-in `threading` module. Rename your file
to something else and delete the stale `.pyc`.
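A quick way to confirm (or rule out) this kind of shadowing is to check which file a module was actually loaded from:

```python
import threading

# if this prints something outside the standard library
# (e.g. your own project directory), the module is shadowed
print(threading.__file__)
```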
|
Replacing JSON value in python
Question: **EDIT: sorry, I had a hard-to-see uppercase/lowercase typo. Please
someone delete this question.**
I am trying to change the value of a json object with simplejson. The problem
is that instead of replacing it, it is adding another entry with the same key.
{
"main" : "value_to_replace"
}
and after doing this in python:
json["main"] = "replaced"
becomes
{
"main" : "value_to_replace",
"main" : "replaced"
}
which is infact still valid json.
Answer: it works for me.
    import simplejson as json

    # avoid shadowing the builtin ``str``
    raw = """{
        "main" : "value_to_replace"
    }"""

    data = json.loads(raw)
    print data
    data["main"] = "test"
    print data
Output:
(test)alexandr@alexandr:~/Desktop$ python test.py
{'main': 'value_to_replace'}
{'main': 'test'}
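That matches how Python dicts work: a dict cannot hold two entries with the same key, so the duplicate-key output in the question cannot come from a plain dict. Even parsing JSON text that *does* contain a duplicate key collapses it to one entry, with the last value winning (shown here with the stdlib `json`, which behaves like simplejson in this respect):

```python
import json

data = json.loads('{"main": "value_to_replace", "main": "replaced"}')
print(data)  # only one entry survives; the last value wins
```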
|
Python module function not defined
Question: I am trying to import a module in my python script and I can't make it work.
So I have my python script: /home/user/pythonscript/oneDir/onescript.py And I
would like to use a script that is a directory higher in hierarchy:
/home/user/pythonscript/common.py So I did the following at the top of my
onescript.py:
import sys
sys.path.insert(1,'/home/user/pythonscript')
import common
In my common.py file, I have a function onecConnect, and when I try to run
onescript.py, which uses onecConnect function, I get the following error:
    NameError: name 'onecConnect' is not defined
Anyone can see what I do wrong or forgot to do? Thanks
Answer: Make sure there are `__init__.py` in all directories, go to
`/home/user/pythonscript` and run Python code from there. So:
python oneDir/onescript.py
In `onescript.py` you can do:
from common import onecConnect
The rules are:
* Always run a Python script from the highest possible directory (not the deepest into the project).
* Always have full import lines, no relative imports.
This keeps the problems away.
|
All possible combinations of dictionary values given input string. Python
Question: I'm trying to get all possible strings from the values in a dictionary given a
particular key. For example, 'A' could mean either 'aaa','aba', or 'aac' and
'B' could mean either 'bbb','bab', or 'bbc', etc.
I've given an example of the output where it shows most combinations of the
possible strings
import itertools
in_ = 'ABC'
D = {'A':['aaa','aba','aac'],'B':['bbb','bab','bbc'],'C':['ccc','cac','ccb']}
#out_ = ['aaabbbccc','ababbbccc','aacbbbccc','aaababccc','aaabbcccc','aaabbbcac','aaabbbccb'...]
I started writing the code but it started to get REALLY messy and I feel that
there is a way to use itertools or something more pythonic to achieve this
goal
output = []
for char in in_:
out_string = ''
while char:
for v in D[char]:
while v:
for char2 in in_:
out_string
#not pythonic . . .
Answer: Well, you've got `itertools` imported there. Let's use it! We want to take the
Cartesian product `D['A'] × D['B'] × D['C']`, so we'll do precisely that using
`itertools.product`.
import itertools
in_ = 'ABC'
D = {'A':['aaa','aba','aac'],'B':['bbb','bab','bbc'],'C':['ccc','cac','ccb']}
iterables = [D[character] for character in in_]
out_ = [''.join(tup) for tup in itertools.product(*iterables)]
Now, `out_` is:
['aaabbbccc', 'aaabbbcac', 'aaabbbccb', 'aaababccc', 'aaababcac', 'aaababccb',
'aaabbcccc', 'aaabbccac', 'aaabbcccb', 'ababbbccc', 'ababbbcac', 'ababbbccb',
'abababccc', 'abababcac', 'abababccb', 'ababbcccc', 'ababbccac', 'ababbcccb',
'aacbbbccc', 'aacbbbcac', 'aacbbbccb', 'aacbabccc', 'aacbabcac', 'aacbabccb',
'aacbbcccc', 'aacbbccac', 'aacbbcccb']
Is that the result you were going for?
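As a quick check on the size of that result: the Cartesian product of three 3-element lists must yield 3 × 3 × 3 = 27 combinations, and the first one is simply the first choice from each list:

```python
import itertools

in_ = 'ABC'
D = {'A': ['aaa', 'aba', 'aac'], 'B': ['bbb', 'bab', 'bbc'], 'C': ['ccc', 'cac', 'ccb']}
out_ = [''.join(tup) for tup in itertools.product(*(D[c] for c in in_))]

assert len(out_) == 27          # 3 * 3 * 3
assert out_[0] == 'aaabbbccc'   # first combination
```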
|
updating djangocms database entry via script not working with cronjob
Question: I have a python script which automatically updates a database entry of the
`djangocms_text_ckeditor_text` table. I'm using djangocms 3 on debian wheezy.
When running this script from the bash with `trutty:~$ ./update.py` it works
and the database entry gets updated. However, when running the same script
with a cronjob (specified in `crontab -e -u trutty`), the entry does not get
updated although the script runs.
My script looks like this:
#!/home/trutty/v/bin/python
...
    from django.conf import settings
from djangocms_text_ckeditor.models import Text
from cms.models.pluginmodel import CMSPlugin
...
c = CMSPlugin.objects.filter(placeholder_id=8, parent_id__isnull=True)
if c:
t = Text.objects.get(pk=c.first().id)
t.body = ...
t.save()
...
What am I missing?
Answer: I now get the page object and save it right after `t.save()`.
from cms.models.pagemodel import Page
...
    p = Page.objects.get(...)
...
t.save()
p.save()
|
Configuring OpenCV with Python on Mac, but failed to compile
Question: I was trying to configure OpenCV with Python 2.7 on a Mac OS X
environment. I used Homebrew to install OpenCV, and it works perfectly with
C++, but when I attempt to run a Python file by typing `python test.py`, it
gives me the error:
    File "test.py", line 1, in <module>
        import cv
    ImportError: No module named cv
I tried the solution by adding `export
PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH` into
`.bash_profile` for my home folder.
However, it does not solve my compiling issue.
Any solution for this issue? Thanks.
Answer: According to solution found in comments, it's because you have unmet `numpy`
dependency. Use following:
sudo pip install numpy
brew install opencv
|
Python Selenium to select "menuitem" from "menubar"
Question: I have a code that clicks a button on a web page, which pops up a `menubar`. I
would like to select a `menuitem` from the choices that appear, and then
`click` the `menuitem` (if possible); however, I'm at a roadblock.
Here is the relevant part of the code so far:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('URL')
Btn = driver.find_element_by_id('gwt-debug-BragBar-otherDropDown')
Btn.click() #this works just fine
MenuItem = driver.find_element_by_id('gwt-uid-463') #I'm stuck on this line
MenuItem.click()
Here is the error it's throwing, based on what I have written:
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element
Note: it appears that the `id` for this element changes each time the page
loads (which is probably the cause of the error). I've tried searching for the
element by `find_element_by_class_name` as well, but it has a compound class
name and I keep getting an error there, too.
Here's the code of the `menubar`:
<div class="gux-combo gux-dropdown-c" role="menubar" id="gwt-debug-BragBar-otherMenu">
and the `menuitem` I want:
<div class="gux-combo-item gux-combo-item-has-child" id="gwt-uid-591" role="menuitem" aria-haspopup="true">text</div>
I'm looking for a way to select the `menuitem`. Thanks!
Answer: Try this xpath
    driver.find_element_by_xpath("//div[@role='menuitem' and .='text']").click()
**It will look for a `div` element whose `role` attribute is `menuitem` and
whose exact text is `text`.**
Say, there is a menuitem "Lamborghini AvenTaDor" under your menu. So, the code
for that will become:
    driver.find_element_by_xpath("//div[@role='menuitem' and .='Lamborghini AvenTaDor']").click()
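Selenium aside, you can sanity-check an XPath of this shape offline. The stdlib `xml.etree.ElementTree` does not support the `and` operator inside a predicate, but chained predicates express the same condition (requires Python 3.7+ for the `[.='text']` text predicate; the snippet below is made up for illustration, not the real page):

```python
import xml.etree.ElementTree as ET

snippet = """<div>
  <div id="gwt-uid-591" role="menuitem">text</div>
  <div id="gwt-uid-592" role="menuitem">other</div>
</div>"""

root = ET.fromstring(snippet)
# chained predicates: role attribute AND exact text content
matches = root.findall(".//div[@role='menuitem'][.='text']")
print([m.get('id') for m in matches])
```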
|
Insert the $currentDate on mongodb with pymongo
Question: I need to test the accuracy of a MongoDB server. I am trying to
insert a sequence of data and record the moment it was sent to the database, to
know when it was inserted. I'm trying this:
#!/usr/bin/python
from pymongo import Connection
from datetime import date, timedelta, datetime
class FilterData:
@classmethod
def setData(self, serialData):
try:
con = Connection('IP_REMOTE', 27017, safe=True)
db = con['resposta']
inoshare = db.resposta
inoshare.insert(serialData)
con.close()
except Exception as e:
print "Erro no filter data: ", e.message, e.args
obj = FilterData()
inicio = datetime.now()
termino = inicio + timedelta(seconds=10)
contador = 1
while inicio <= termino:
print contador, inicio.strftime('%d-%m-%Y %H:%M:%S')
pacote = {'contador':contador, 'datahora':$currentDate()}
obj.setData(pacote)
contador += 1
But the variables of mongodb (using $) are not recognized in python. How to
proceed to accomplish this integration?
**Obs: IP_REMOTE = my valid IP on REMOTE server**
I then tried the following, but it only inserts a single record.
#!/usr/bin/python
from pymongo import Connection
from datetime import date, timedelta, datetime
import time
class FilterData:
def __init__(self):
self.con = Connection('54.68.148.224', 27017, safe=True)
self.db = self.con['resposta']
self.inoshare = self.db.resposta
def setData(self, serialData):
try:
self.inoshare.update({}, serialData, upsert=True)
except Exception as e:
print "Erro no filter data: ", e.message, e.args
def desconect(self):
self.con.close()
obj = FilterData()
inicio = datetime.now()
termino = inicio + timedelta(seconds=30)
while inicio <= termino:
print inicio.strftime('%d-%m-%Y %H:%M:%S')
pacote = {'$currentDate': {'datahora': { '$type': 'date' }}}
obj.setData(pacote)
inicio = datetime.now()
time.sleep(1)
obj.desconect()
Answer: Operator expressions in MongoDB are represented in the data structure as a
string. These are also "update operators", so
[**`$currentDate`**](http://docs.mongodb.org/manual/reference/operator/update/currentDate/)
is meant to be used in the "update object" portion of an
[`.update()`](http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update)
method.
So something like this to insert a new record with the "$currentDate" from the
server:
db = con['resposta']
inoshare = db.resposta
inoshare.update({}, {
'$currentDate': {
'datahora': { '$type': 'date' }
}
},upsert=True)
Presuming of course there is nothing in your collection. Otherwise make sure
the "query" portion of the `.update()` statement does not match a document
when you want to "insert"/"upsert" as it were.
All the options in the MongoDB manual pages are shown in the JSON notation of
the MongoDB shell; however, this is not that different from the notation of
many dynamically typed languages such as Python, Ruby and Perl.

BTW, unless you are **really** testing in distinct scripts, do not make a
connection and disconnect before and after every operation. Database
connections should stay open for the life-cycle of your application.
|
Append to XML structure in python
Question: I would like to change/add a custom subelement to an xml which was generated
by my script.
The top element is AAA:
top = Element('AAA')
The collected_lines looks like this:
[['TY', ' RPRT'], ['A1', ' Peter'], ['T3', ' Something'], ['ER', ' ']]
Then I enumerate all lines one-by-one and create a SubElement for `top`:
for line in enumerate(collected_lines):
child = SubElement(top, line[0])
child.text = line[1]
Output:
<?xml version="1.0" ?>
<AAA>
<TY> RPRT</TY>
<A1> Peter</A1>
<T3> Something</T3>
<ER> </ER>
<TY> RPRT2</TY>
<A1> Peter</A1>
<T3> Something2</T3>
<ER> </ER>
<TY> RPRT2</TY>
<A1> Peter</A1>
<T3> Something2</T3>
<ER> </ER>
</AAA>
And I would like to add `<ART>` element to the `top` element and then print
the xml like this:
<?xml version="1.0" ?>
<AAA>
<ART>
<TY> RPRT</TY>
<A1> Peter</A1>
<T3> Something</T3>
<ER> </ER>
</ART>
<ART>
<TY> RPRT2</TY>
<A1> Peter</A1>
<T3> Something2</T3>
<ER> </ER>
</ART>
<ART>
<TY> RPRT2</TY>
<A1> Peter</A1>
    <T3> Something2</T3>
</ART>
</AAA>
I tried to do it with an if statement, like:
if "TY" in line:
"append somehow before TY element, <ART>"
if "ER" in line:
"append somehow after ER element, </ART>"
Is there a simple way to solve this?
Answer: Just reassign the `top` element and use `insert`:
top = ET.Element('AAA')
# by the way you need index, element on enumerate
for i, line in enumerate(collected_lines):
child = ET.SubElement(top, line[0])
child.text = line[1]
art = top
art.tag = 'ART'
top = ET.Element('AAA')
top.insert(1, art)
ET.tostring(top)
'<AAA><ART><TY> RPRT</TY><A1> Peter</A1><T3> Something</T3><ER> </ER></ART></AAA>'
* * *
As @twasbrillig pointed out, you don't even need `enumerate`; a simple `for`
loop will do:
...
for line in collected_lines:
child = ET.SubElement(top, line[0])
child.text = line[1]
...
### Another update
The OP edited the question to also ask how to handle multiple sections, as in
the previous example; this can be achieved with normal Python logic:
import xml.etree.ElementTree as ET
s = '''<?xml version="1.0" ?>
<AAA>
<TY> RPRT</TY>
<A1> Peter</A1>
<T3> Something</T3>
<ER> </ER>
<TY> RPRT2</TY>
<A1> Peter</A1>
<T3> Something2</T3>
<ER> </ER>
<TY> RPRT2</TY>
<A1> Peter</A1>
<T3> Something3</T3>
<ER> </ER>
</AAA>'''
top = ET.fromstring(s)
# assign a new Element to replace top later on
new_top = ET.Element('AAA')
# get all indexes where TY, ER are at
ty = [i for i,n in enumerate(top) if n.tag == 'TY']
er = [i for i,n in enumerate(top) if n.tag == 'ER']
# top[x:y] will get all the sibling elements between TY, ER (from their indexes)
nodes = [top[x:y] for x,y in zip(ty,er)]
# then loop through each nodes and insert SubElement ART
# and loop through each node and insert into ART
for node in nodes:
art = ET.SubElement(new_top, 'ART')
for each in node:
art.insert(1, each)
# replace top Element by new_top
top = new_top
# you don't need lxml, I just used it to pretty_print the xml
from lxml import etree
# you can just ET.tostring(top)
print etree.tostring(etree.fromstring(ET.tostring(top)), \
xml_declaration=True, encoding='utf-8', pretty_print=True)
<?xml version='1.0' encoding='utf-8'?>
<AAA>
<ART><TY> RPRT</TY>
<T3> Something</T3>
<A1> Peter</A1>
</ART>
<ART><TY> RPRT2</TY>
<T3> Something2</T3>
<A1> Peter</A1>
</ART>
<ART><TY> RPRT2</TY>
<T3> Something3</T3>
<A1> Peter</A1>
</ART>
</AAA>
|
Running Flask app on Heroku
Question: I'm trying to run a Flask app on Heroku and I'm getting some frustrating
results. I'm not interested in the ops side of things. I just want to upload
my code and have it run. Pushing to the Heroku git remote works fine (`git
push heroku master`), but when I tail the logs (`heroku logs -t`) I see the
following error:
2014-11-08T15:48:50+00:00 heroku[slug-compiler]: Slug compilation started
2014-11-08T15:48:58+00:00 heroku[slug-compiler]: Slug compilation finished
2014-11-08T15:48:58.607107+00:00 heroku[api]: Deploy 2ba1345 by <my-email-address>
2014-11-08T15:48:58.607107+00:00 heroku[api]: Release v5 created by <my-email-address>
2014-11-08T15:48:58.723704+00:00 heroku[web.1]: State changed from crashed to starting
2014-11-08T15:49:01.458713+00:00 heroku[web.1]: Starting process with command `gunicorn app:app`
2014-11-08T15:49:02.538539+00:00 app[web.1]: bash: gunicorn: command not found
2014-11-08T15:49:03.340833+00:00 heroku[web.1]: Process exited with status 127
2014-11-08T15:49:03.355031+00:00 heroku[web.1]: State changed from starting to crashed
2014-11-08T15:49:04.462248+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=blueprnt.herokuapp.com request_id=e7f92595-b202-4cdb-abbc-309dcd3a04bc fwd="54.163.35.91" dyno= connect= service= status=503 bytes=
Here's the pertinent files:
**Procfile**
web: gunicorn app:app
heroku ps:scale web
**requirements.txt**
Flask==0.10.1
Flask-Login==0.2.11
Flask-WTF==0.10.2
Jinja2==2.7.3
MarkupSafe==0.23
Unidecode==0.04.16
WTForms==2.0.1
Werkzeug==0.9.6
awesome-slugify==1.6
blinker==1.3
gnureadline==6.3.3
gunicorn==19.1.1
ipdb==0.8
ipython==2.3.0
itsdangerous==0.24
peewee==2.4.0
py-bcrypt==0.4
pytz==2014.7
regex==2014.10.24
wsgiref==0.1.2
wtf-peewee==0.2.3
**app.py (the run portion)**
# Run application
if __name__ == '__main__':
# from os import environs
# app.run(debug=False, port=environ.get("PORT", 5000), processes=2)
app.run(debug=True, port=33507)
I've tried both the answer from [this
thread](http://stackoverflow.com/questions/13714205/deploying-flask-app-to-
heroku) and from [this
thread](http://stackoverflow.com/questions/21079474/flask-app-wont-run-on-
heroku). When I try to shell into Heroku to investigate (`heroku run bash`) it
appears that something is wrong with my app's environment:
(blueprnt)☀ website [master] heroku run bash
/Users/andymatthews/.rvm/gems/ruby-1.9.3-p125/gems/heroku-3.15.0/lib/heroku/helpers.rb:91: warning: Insecure world writable dir /usr/local in PATH, mode 040777
Running `bash` attached to terminal... up, run.8540
~ $ ls
Gruntfile.js __init__.py fixtures models.py requirements.txt static vendor
Procfile app.py forms.py node_modules settings.py templates views
README.md blueprnt.db mixins.py package.json site-theme-assets.zip utils.py
~ $ pwd
/app
~ $ whoami
u15880
~ $ which pip
~ $ which git
/usr/bin/git
~ $ pip install -r requirements.txt
bash: pip: command not found
Would really love some assistance. In the past when I've deployed apps to
Heroku, I haven't had any problem. But this app is more complicated than those
others.
Answer: this isn't an answer but had to include a decent sized code block
Part of the problem with your current traceback is it does not really provide
relevant information for debugging. You can log the errors on heroku then use
`heroku log` to get a more substantial traceback of your app's errors.
Do this by adding a handler on to your `app.logger`. Heroku will pick up data
from `sys.stderr` so you can use a `logging.StreamHandler` class
import logging
import sys
from logging import Formatter
def log_to_stderr(app):
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(Formatter(
'%(asctime)s %(levelname)s: %(message)s '
'[in %(pathname)s:%(lineno)d]'
))
handler.setLevel(logging.WARNING)
app.logger.addHandler(handler)
if __name__ == '__main__':
log_to_stderr(app)
app.run()
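The handler setup itself can be verified without Flask or Heroku: the same `Formatter`/`StreamHandler` pair works on any stdlib logger, here writing to an in-memory stream instead of `sys.stderr`:

```python
import io
import logging
from logging import Formatter

stream = io.StringIO()          # stand-in for sys.stderr
handler = logging.StreamHandler(stream)
handler.setFormatter(Formatter(
    '%(asctime)s %(levelname)s: %(message)s '
    '[in %(pathname)s:%(lineno)d]'
))
handler.setLevel(logging.WARNING)

logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.warning('something went wrong')

print(stream.getvalue())
```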
|
Create a one week agenda in python
Question: I'm starting to study Python and, in particular, dictionaries. I saw
an exercise and decided to solve it. The exercise asks to create a one-week
agenda in Python using dictionaries. Nothing too complicated, but I also have
to insert the appointments that the user wants to insert. I created something,
not too difficult, but I don't know how to create the layout of the agenda. I
did something:
from collections import OrderedDict
line_new = ''
d = {}
d = OrderedDict([("Monday", "10.30-11.30: Sleeping"), ("Tuesday", "13.30-15.30: Web Atelier"), ("Wednsday", "08.30-10.30: Castle"), ("Thursday", ""), ("Friday", "11.30-12.30: Dinner"), ("Saturday",""), ("Sunday","")])
for key in d.keys():
line_new = '{:>10}\t'.format(key)
print(line_new)
print("| | | | | | | |")
print("| | | | | | | |")
print("| | | | | | | |")
Where the output is:
Monday
Tuesday
Wednsday
Thursday
Friday
Saturday
Sunday
And the lines to create the idea of a table. How can I put the days all on one
line using the dictionary? I know how to do it with strings (with `format`) but
I don't know how to do it with the keys of the dictionary. Can you help me?
EDIT The output I'm looking for is something like:
Monday Tuesday Wednesday Thursday Friday Saturday Sunday
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
With some more space between the days (I cannot insert it here), and the lines
that create a division between the days.

Update to the output after Wasowsky's solution
Answer:
# you can save your formatting as a string to stay consistent
fmt = '{txt:>{width}}'
# calculate how much space you need instead of guessing:
maxwidth = max(len(day) for day in d.keys())
# join formatted strings with your separator and print
separator = ' '
print separator.join(fmt.format(txt=day, width=maxwidth) for day in d.keys())
    # you can use the same formatter to print other rows, just change the separator
    separator = ' | '
    for _ in range(3):
        print separator.join(fmt.format(txt='', width=maxwidth) for day in d.keys())
Output:
Monday Tuesday Wednsday Thursday Friday Saturday Sunday
| | | | | |
| | | | | |
| | | | | |
|
"TypeError: Can't convert 'NoneType' object to str implicitly" when var should have a value
Question:
import sys
from tkinter import *
def print():
print("Encoded " + message + " with " + offset)
gui = Tk()
gui.title("Caesar Cypher Encoder")
Button(gui, text="Encode", command=encode).grid(row = 2, column = 2)
Label(gui, text = "Message").grid(row = 1, column =0)
Label(gui, text = "Offset").grid(row = 1, column =1)
message = Entry(gui).grid(row=2, column=0)
offset = Scale(gui, from_=0, to=25).grid(row=2, column=1)
mainloop( )
When I run this code with input in both the input box and a value on the
slider, it comes up with the error:
>>>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python34\lib\tkinter\__init__.py", line 1533, in __call__
return self.func(*args)
File "C:/Users/xxxx/Desktop/Code/Functionised/GUI.pyw", line 5, in encode
print("Encoded " + message + " with " + offset)
TypeError: Can't convert 'NoneType' object to str implicitly
using a simple str() does not work by the way
EDIT
With the new code
import sys
from tkinter import *
def printer():
print(message)
print(offset)
gui = Tk()
gui.title("Caesar Cypher Encoder")
Button(gui, text="Encode", command=printer).grid(row = 2, column = 2)
Label(gui, text = "Message").grid(row = 1, column =0)
Label(gui, text = "Offset").grid(row = 1, column =1)
message = Entry(gui)
message.grid(row=2, column=0)
offset = Scale(gui, from_=0, to=25)
offset.grid(row=2, column=1)
mainloop( )
It returns
.46329264
.46329296
EDIT 2
def printer():
print(message.get())
print(offset.get())
this fixes the .xxxxxxxx problem
Answer: You are setting the variables `message` and `offset` to the return
value of `.grid()`, which is `None`; chaining the constructor and `.grid()` on
one line makes them `NoneType` objects. Instead, call `.grid()` on the next
line, e.g.:
message = Entry(gui)
message.grid(row=2, column=0)
offset = Scale(gui, from_=0, to=25)
offset.grid(row=2, column=1)
this should solve your problem. Also, it's not advised to use `from tkinter
import *`; prefer `import tkinter as tk`. And your function `print()` should be
named differently (not the same name as a Python built-in) so that it is less
confusing and prevents errors; make it `printer()` or similar to be on the safe
side.

Hope this helps you!
|