Python 3.5.1 regex doesn't match as expected
Question: I tested my regex with _Regex101_ already, but when I tried in _Python 3.5.1_
it returns `None`
Here's my [Regex101](https://regex101.com/r/fC0lI5/4)
And here's my Python code. Not sure if I missed anything.
Python 3.5.1 |Anaconda 2.4.0 (x86_64)| (default, Dec 7 2015, 11:24:55)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> p = re.compile('([01]?[0-9]+:?[0-9]*(?:[AP]M)?)-([01]?[0-9]+:?[0-9]*(?:[AP]M)?)', re.MULTILINE)
>>> m = p.match('NO PARKING (SANITATION BROOM SYMBOL) 7AM-7:30AM EXCEPT SUNDAY')
>>> print(m)
None
>>>
Answer: The problem is that `match` only attempts a match at the very beginning of the
string, so it fails here because the time range appears later in the text. Use
`search` instead, which scans through the whole string.
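A minimal sketch of that fix, reusing the pattern from the question (the expected groups are shown in the comment):

import re

# search() scans the whole string, while match() only anchors at position 0.
p = re.compile(r'([01]?[0-9]+:?[0-9]*(?:[AP]M)?)-([01]?[0-9]+:?[0-9]*(?:[AP]M)?)')
m = p.search('NO PARKING (SANITATION BROOM SYMBOL) 7AM-7:30AM EXCEPT SUNDAY')
print(m.groups())  # ('7AM', '7:30AM')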
|
New line in Python for loop with sqlite3
Question: I'm trying to create a small function that prints successive lines from a
database I'm working with. What I currently have written does that quite well,
except it's all a blocky mess and "\n" is interpreted literally rather than
actually creating a new line.
import sqlite3
conn = sqlite3.connect('database.db')
database = conn.cursor()
def textfromDB(text):
    for row in database.execute("SELECT * FROM table1 WHERE column1 = ?", [text]):
        print row
        print "\n"
Answer: When you print a row inside a `for` loop over sqlite query results, you are in
fact printing a tuple. So the mess with newline characters you are talking about
comes down to how tuples are represented by `print`. If what you need is to turn
that sparse format, with its literally printed `\n`'s, into normal text, then
you have to convert your tuple to a string. You can achieve that by replacing
`print row` with this command:
print(', '.join(map(str, row)))
What it does is join all elements of the tuple into one string. The `map`
call ensures that non-string items of the tuple (like `int`s) are converted to
strings before `join` concatenates them. If there are no non-`str`
items in your tuples, then you can omit `map`.
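As a sketch, the whole loop from the question might then look like this (table and column names kept from the question):

import sqlite3

conn = sqlite3.connect('database.db')
database = conn.cursor()

def textfromDB(text):
    # Each row is a tuple; join its fields into one readable line of text.
    for row in database.execute("SELECT * FROM table1 WHERE column1 = ?", [text]):
        print(', '.join(map(str, row)))
        print("")  # blank line between rows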
Also, as @timgeb pointed out in comments, you may use pretty print module
`pprint`. It doesn't change `\n`s to actual line breaks, but can print tuples
with nice indentation and other formatting features. It may be quite helpful,
when debugging a program that does something with longish tuples, lists, sets
or dicts. In your case, you'd need to import pprint first:
from pprint import pprint
And then use `pprint(row)` instead of `print row`. See pprint's
[docs](https://docs.python.org/library/pprint.html) for more details.
|
Python parameterized unittest by subclassing TestCase
Question: How can I create multiple TestCases and run them programmatically? I'm trying
to test multiple implementations of a collection on a common TestCase.
I'd prefer to stick to with plain unittest and avoid dependencies.
Here are some resources that I looked at that didn't quite meet what I wanted:
* [Writing a re-usable parametrized unittest.TestCase method](http://stackoverflow.com/questions/1676269/) - The accepted answer proposes four different external libraries.
* <http://eli.thegreenplace.net/2011/08/02/python-unit-testing-parametrized-test-cases> - This approach uses a static method `parametrize`. I don't understand why you can't pass a parameter directly into `TestSubClass.__init__`.
* [How to generate dynamic (parametrized) unit tests in python?](http://stackoverflow.com/questions/32899/) - A little bit too black magic.
Here's a minimal (non)working example.
import unittest

MyCollection = set
AnotherCollection = set
# ... many more collections

def maximise(collection, array):
    return 2

class TestSubClass(unittest.TestCase):
    def __init__(self, collection_class):
        unittest.TestCase.__init__(self)
        self.collection_class = collection_class
        self.maximise_fn = lambda array: maximise(collection_class, array)

    def test_single(self):
        self.assertEqual(self.maximise_fn([1]), 1)

    def test_overflow(self):
        self.assertEqual(self.maximise_fn([3]), 1)

    # ... many more tests

def run_suite():
    suite = unittest.defaultTestLoader
    for collection in [MyCollection, AnotherCollection]:
        suite.loadTestsFromTestCase(TestSubClass(collection))
    unittest.TextTestRunner().run(suite)

def main():
    run_suite()

if __name__ == '__main__':
    main()
The above approach fails in `loadTestsFromTestCase` with:
`TypeError: issubclass() arg 1 must be a class`
Answer: How about using [`pytest` with a parametrized
fixture](https://pytest.org/latest/fixture.html#fixture-parametrize):
import pytest

MyCollection = set
AnotherCollection = set

def maximise(collection, array):
    return 1

@pytest.fixture(scope='module', params=[MyCollection, AnotherCollection])
def maximise_fn(request):
    return lambda array: maximise(request.param, array)

def test_single(maximise_fn):
    assert maximise_fn([1]) == 1

def test_overflow(maximise_fn):
    assert maximise_fn([3]) == 1
If that's not an option, you can make a mixin to contain the test functions, and
subclasses to provide the `maximise_fn`s:
import unittest

MyCollection = set
AnotherCollection = set

def maximise(collection, array):
    return 1

class TestCollectionMixin:
    def test_single(self):
        self.assertEqual(self.maximise_fn([1]), 1)

    def test_overflow(self):
        self.assertEqual(self.maximise_fn([3]), 1)

class TestMyCollection(TestCollectionMixin, unittest.TestCase):
    maximise_fn = lambda self, array: maximise(MyCollection, array)

class TestAnotherCollection(TestCollectionMixin, unittest.TestCase):
    maximise_fn = lambda self, array: maximise(AnotherCollection, array)

if __name__ == '__main__':
    unittest.main()
|
Unicode error Python3 calendar module
Question: I'm trying to print a simple calendar from python `calendar` module:
import calendar
c = calendar.LocaleTextCalendar(0, 'Russian')
s = c.formatmonth(2016, 5)
print(s)
On linux it works well, but on Windows I got an error: `UnicodeEncodeError:
'charmap' codec can't encode characters in position 4-10: character maps to
<undefined>`
All I can do to avoid the error is `print(s.encode('ascii',
'replace').decode('ascii'))` (which loses the localized text), so I'm
interested in a nicer, general solution.
Thanks in advance.
Answer: That happens because the Windows console encoding is not Unicode.
Unfortunately this is not trivial to fix, and there are several ways to work around it.
What is the encoding of your console? You can find out in Python this way:
import sys
sys.stdin.encoding
You can try to switch the current console to UTF-8 like this:
chcp 65001
python myScript.py
In your script, make sure that the string you print is encoded as UTF-8.
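A minimal sketch of that last point (this assumes the console code page has been set to 65001 and that the console font can actually display the Cyrillic month and day names):

import sys
import calendar

c = calendar.LocaleTextCalendar(0, 'Russian')
s = c.formatmonth(2016, 5)
# Write UTF-8 bytes directly, bypassing the console codec that raised the error.
sys.stdout.buffer.write(s.encode('utf-8'))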
|
rename a zipped file in python
Question: I have a zipped file. Inside of it I have a `.tvx` file, which I want to
rename to `.xml`. So I tried the following (of course, I imported all the
relevant modules):
with zipfile.ZipFile(file_name) as z:
    for filename in z.namelist():
        if not os.path.isdir(filename):
            os.rename(filename, filename.replace("tvx", "xml"))
and the error I got was:
> WindowsError: [Error 2] The system cannot find the file specified
I thought that maybe the error occurred because the filename is not an absolute
path, so I also tried this:
with zipfile.ZipFile(complete_name) as z:
    for filename in z.namelist():
        if not os.path.isdir(filename):
            filename = os.path.abspath(filename)  # making filename an absolute path
            os.rename(filename, filename.replace("tvx", "xml"))
but still, the same error.
Answer: You can't rename a file inside a zip archive in place, so you should extract,
rename and re-zip the file.
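A minimal sketch of that idea, done by rewriting the archive member by member rather than extracting to disk (it assumes you are happy to replace the original zip; the function name is just illustrative):

import os
import zipfile

def rename_tvx_to_xml(zip_path):
    tmp_path = zip_path + '.tmp'
    with zipfile.ZipFile(zip_path) as src, \
         zipfile.ZipFile(tmp_path, 'w', zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = src.read(info.filename)
            # Copy every member, renaming .tvx entries to .xml on the way.
            dst.writestr(info.filename.replace('.tvx', '.xml'), data)
    os.remove(zip_path)
    os.rename(tmp_path, zip_path)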
|
How to read star (*) as system command
Question: I have this code :
> >>> import os
> >>> os.chdir('/u01/APPLTOP/instance/domains/*.oracleoutsourcing.com/ICDomain/servers/IncentiveCompensationServer_1/logs')
> Traceback (most recent call last): File "<stdin>", line 1, in ?
> OSError: [Errno 2] No such file or directory:
> '/u01/APPLTOP/instance/domains/*.oracleoutsourcing.com/ICDomain/servers/IncentiveCompensationServer_1/logs'
> >>>
Can you please let me know how I can make Python expand the asterisk (*) the way
the Linux shell does.
Answer: To simulate [shell path
expansion](http://bash.cyberciti.biz/guide/Path_name_expansion) of '*' (as
well as other glob special characters), you can use the
[glob](https://docs.python.org/2/library/glob.html) module:
import glob
glob_pattern = '/u01/APPLTOP/instance/domains/*.oracleoutsourcing.com/ICDomain/servers/IncentiveCompensationServer_1/logs'
dir_paths = glob.glob(glob_pattern)
Now, assuming the above results in a single match (otherwise it makes no
sense to chdir to it), you can do:
dir_path, = dir_paths
os.chdir(dir_path)
The assignment above fails if you get no matches, or multiple matches.
|
Could not import module written in c# with IronPython
Question: Currently I'm struggling with writing IronPython modules in C#. First I
have an empty partial class, which represents the module base:
[assembly: PythonModule("demo", typeof(Demo.IronPythonAPI.PythonAPIModule))]
namespace Demo.IronPythonAPI
{
/// <summary>
/// Demo api module root/base
/// </summary>
public static partial class PythonAPIModule
{
}
}
In some other files, I try to implement the modules:
namespace Demo.IronPythonAPI
{
/// <summary>
/// Python api module path root (~import demo)
/// </summary>
public static partial class PythonAPIModule
{
/// <summary>
/// Python SQL-Module
/// </summary>
[PythonType]
public static class Sql
{
public static int executeNoneQuery(string query, string conName)
{
Console.WriteLine("Hello World");
return 0;
}
}
}
}
If I now want to use the module, it does not work:
import demo
Sql.executeNoneQuery("", "")
This throws the exception:
> name 'Sql' is not defined
When using
from demo import Sql
Sql.executeNoneQuery("", "")
Everything works just fine. What did I actually do wrong?
Thank you very much!
Answer: You should check the difference between **[import and from
import](http://stackoverflow.com/questions/710551/import-module-or-from-
module-import)**.
In the first case the class must be qualified with the module name, so try
`demo.Sql.executeNoneQuery("", "")` instead.
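To illustrate both styles side by side (a sketch, keeping the names from the question):

# Plain import: the module name stays as the namespace.
import demo
demo.Sql.executeNoneQuery("", "")

# from-import: the class is pulled into the local namespace.
from demo import Sql
Sql.executeNoneQuery("", "")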
|
Python selenium didn't find css element
Question: I exported the code from Selenium IDE to Python. The Selenium IDE test works
fine: it clicks on the item seamlessly without any scrolling. Here is the Selenium IDE
HTML test case:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head profile="http://selenium-ide.openqa.org/profiles/test-case">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="selenium.base" href="http://www.amazon.com/" />
<title>New Test</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">New Test</td></tr>
</thead><tbody>
<tr>
<td>open</td>
<td>/</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td>css=[alt="Deals in Books"]</td>
<td></td>
</tr>
</tbody></table>
</body>
</html>
But the Python version does not work unless you scroll to the desired item before clicking it.
from selenium import webdriver
import unittest, time, re

class Untitled(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "http://www.amazon.com/"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_untitled(self):
        driver = self.driver
        driver.get(self.base_url)
        driver.find_element_by_css_selector("a.feed-carousel-control.feed-right > span.gw-icon.feed-arrow").click()
        driver.find_element_by_css_selector("a.feed-carousel-control.feed-right > span.gw-icon.feed-arrow").click()
        driver.find_element_by_css_selector("a.feed-carousel-control.feed-right > span.gw-icon.feed-arrow").click()
        driver.find_element_by_css_selector("a.feed-carousel-control.feed-right > span.gw-icon.feed-arrow").click()
        driver.find_element_by_css_selector("[alt=\"Deals in Books\"]").click()

    def tearDown(self):
        self.driver.quit()
        self.assertEqual([], self.verificationErrors)
I tested different Selenium locators (CSS and XPath) and they work, but in
Python the click on the element does not work without scrolling first.
Answer: First, you may [wait for the element to become visible](https://selenium-
python.readthedocs.org/waits.html):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "[alt=\"Deals in Books\"]"))
)
element.click()
And, if you need to scroll to the element before clicking it, use the [Action
Chains](https://selenium-python.readthedocs.org/api.html#module-
selenium.webdriver.common.action_chains):
from selenium.webdriver.common.action_chains import ActionChains

deals_in_books = driver.find_element_by_css_selector("[alt=\"Deals in Books\"]")
actions = ActionChains(driver)
actions.move_to_element(deals_in_books).click().perform()
And, if needed, [scroll into view of the
element](https://developer.mozilla.org/en-
US/docs/Web/API/Element/scrollIntoView):
driver.execute_script("arguments[0].scrollIntoView();", deals_in_books)
|
ReportLab displays images with wrong orientation
Question: I'm using reportlab to generate PDF documents from a python API. The documents
include pictures (previously taken with a camera or mobile device) loaded with
:
from reportlab.platypus import Image
img = Image(path)
story.append(img)
Problem : some images are not displayed with the right orientation (some EXIF
data is probably lost or ignored at some point).
I encountered a similar problem with PIL once, and the solution I chose was to
use Wand instead of PIL or Pillow, but it appears ReportLab only uses PIL to
handle images with Python...
I found [this code snippet from another
question](http://stackoverflow.com/a/6218425) but I'm not sure how to edit
reportlab to include it, or if it's a good way to go.
I'm surprised I didn't find anything on this subject, I can't be the only one
wanting to include pictures in a reportlab-generated PDF...
Here is a picture with the original image opened in Preview on the left and
the PDF on the right: 
Thanks for any help, I've been struggling with that for hours now...
Answer: I actually had the same problem with pyCairo.
I have a bunch of JPEG images, some of them are directly put in the PDF
document, some other are manipulated with pyCairo before being inserted into
the PDF.
When inserting a JPEG image into a reportlab PDF document, or when converting
an image from JPEG to PNG to work with pyCairo (pyCairo doesn't work with JPEG
as far as I know), the orientation of the image stored in the EXIF gets lost.
Here's what I ended up doing :
from reportlab.platypus import Image
from wand.image import Image as WandImage

def AddAPictureToDocument():
    with WandImage(filename=path) as wimg:
        WandConvertToPNG(wimg, pngDestinationPath)
    img = Image(pngDestinationPath)
    story.append(img)

def WandConvertToPNG(img, savepath):
    exif = {}
    exif.update((k[5:], v) for k, v in img.metadata.items()
                if k.startswith('exif:'))
    orientation = exif['Orientation']
    with img.convert('png') as converted:
        if int(orientation) == 3:
            converted.rotate(180)
        elif int(orientation) == 6:
            converted.rotate(90)
        elif int(orientation) == 8:
            converted.rotate(270)
        converted.save(filename=savepath)
But it can be pretty slow, especially with pyCairo, since I need to:
1) Convert from JPEG to PNG,
2) Rotate the image to its correct orientation
3) Use pyCairo to draw things on the image
4) Save the pyCairo manipulated image to a PNG
5) Convert the PNG to a JPEG to compress the image
Was it naive of me to assume that image libraries such as PIL or Wand would
handle the orientation of the image after a JPEG->PNG conversion?
Anyways, I'm still looking for a better solution.
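One possible direction, sketched under the assumption that a recent Pillow (which provides `ImageOps.exif_transpose`) is acceptable: bake the EXIF orientation into the pixel data first, save a corrected copy, and let ReportLab load that copy. The function and path names here are just placeholders.

from PIL import Image as PILImage, ImageOps
from reportlab.platypus import Image

def add_picture(story, path, corrected_path):
    # Apply the EXIF orientation to the pixels; corrected_path should carry an
    # image extension (e.g. .jpg or .png) so Pillow knows the output format.
    with PILImage.open(path) as im:
        ImageOps.exif_transpose(im).save(corrected_path)
    story.append(Image(corrected_path))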
|
how to return simple function in django html file?
Question: I need to return the calculate_c value in the HTML.
urls.py
from django.conf.urls import include, url
from . import views

urlpatterns = [
    url(r'^$', views.my_view, name='my_view'),
]
views.py
from django.shortcuts import render
from django.http import HttpResponse

def abc():
    a = 1
    b = 3
    calculate_c = a + b
    return calculate_c

def my_view(request):
    context = {'calculated_value': 0}
    context['calculated_value'] = abc()
    return HttpResponse(request, 'blog/post_list.html', context)
post_list.html
{% extends 'blog/base.html' %}
{% block content %}
<h2>{{calculated_value}}</h2>
<h2>Test</h2>
{% endblock %}
Internal Server Error: /
Traceback (most recent call last):
  File "/home/v1/newproject/newenv/lib/python3.4/site-packages/django/core/handlers/base.py", line 242, in get_response
    response = self.apply_response_fixes(request, response)
  File "/home/v1/newproject/newenv/lib/python3.4/site-packages/django/core/handlers/base.py", line 305, in apply_response_fixes
    response = func(request, response)
  File "/home/v1/newproject/newenv/lib/python3.4/site-packages/django/http/utils.py", line 17, in conditional_content_removal
    if 100 <= response.status_code < 200 or response.status_code in (204, 304):
TypeError: unorderable types: int() <= dict()
Answer: If you need the value returned by `abc()` in your template, you can pass it
via `context`. The traceback comes from handing the template name and the context
dict to `HttpResponse`, which expects response content; use `render` instead:
def my_view(request):
    ...
    context = {...}
    context['calculated_value'] = abc()
    return render(request, 'blog/post_list.html', context)
And then in your template you can use:
{{ calculated_value }}
|
How does Flask start a new SQLAlchemy transaction at the start of each request?
Question: I tried to totally seperate Flask and SQLAlchemy using [this
method](http://flask.pocoo.org/docs/0.10/patterns/sqlalchemy/#declarative) but
Flask still seems to be able to detect my database and start a new transaction
at the beginning of each request.
The `db.py` file creates a new session and defines a simple model of a table:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String

engine = create_engine("mysql://web:kingtezdu@localhost/web_unique")

print("creating new session")
db_session = scoped_session(sessionmaker(bind=engine))

Base = declarative_base()
Base.query = db_session.query_property()

# define model of 'persons' table
class Person(Base):
    __tablename__ = "persons"
    name = Column(String(30), primary_key=True)

    def __repr__(self):
        return "Person(\"{0.name}\")".format(self)

# create table
Base.metadata.create_all(bind=engine)
And `app.py`, a simple Flask application using the SQLAlchemy session and
model:
from flask import Flask, escape

app = Flask(__name__)

# importing new session
from db import db_session, Person

# registering for app teardown to remove session
@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()

@app.route("/query")
def query():
    # query all persons in the database
    all_persons = Person.query.all()
    print all_persons
    return ""  # we use the console output

if __name__ == "__main__":
    app.run(debug=True)
Let's run this:
$ python app.py
creating new session
* Running on http://127.0.0.1:5000/
* Restarting with reloader
creating new session
Weirdly enough it runs `db.py` twice (that's the reloader), but let's just ignore
this and access the webpage `/query`:
[]
127.0.0.1 - - [23/Dec/2015 18:20:14] "GET /query HTTP/1.1" 200 -
We can see that our request was answered, though we only use the console
output. There is no `Person` in the database yet, let's add one:
mysql> INSERT INTO persons (name) VALUES ("Marie");
Query OK, 1 row affected (0.11 sec)
`Marie` is part of the database now so we reload the webpage:
[Person("Marie")]
127.0.0.1 - - [23/Dec/2015 18:24:48] "GET /query HTTP/1.1" 200 -
As you can see the session already knows about `Marie`. Flask didn't create a
new session, which means a new transaction was started. Contrast
this with the plain Python example below to see the difference.
My question is how Flask is able to start a new transaction at the beginning of
each request. Flask shouldn't know about the database, but it seems to be able to
change something about its behaviour.
Thanks for any answers in advance,
- timgame
* * *
In case you don't know what a SQLAlchemy transaction is read this paragraph
extracted from [Managing
Transactions](http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#managing-
transactions):
> When the transactional state is completed after a rollback or commit, the
> Session releases all Transaction and Connection resources, and goes back to
> the “begin” state, which will again invoke new Connection and Transaction
> objects as new requests to emit SQL statements are received.
So a transaction is ended by a commit and will cause a new connection to be
set up which will then make the session read the database again. In reality
this means that you have to commit when you want to see changes made to the
database:
First in interactive python mode:
>>> from db import db_session, Person
creating new session
>>> Person.query.all()
[]
Switch over to MySQL and insert a new `Person`:
mysql> INSERT INTO persons (name) VALUES ("Paul");
Query OK, 1 row affected (0.03 sec)
Finally try to load `Paul` into our session:
>>> Person.query.all()
[]
>>> db_session.commit()
>>> Person.query.all()
[Person("Paul")]
Answer: I think the issue here is that `scoped_session` somewhat hides what happens to
the actual sessions in use. When your teardown handler
# registering for app teardown to remove session
@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
runs at the end of each request, you call `db_session.remove()` which disposes
of the session used in that particular request along with any transaction
context. See <http://docs.sqlalchemy.org/en/latest/orm/contextual.html> for
the details, particularly
> The scoped_session.remove() method first calls Session.close() on the
> current Session, which has the effect of releasing any
> connection/transactional resources owned by the Session first, then
> discarding the Session itself. “Releasing” here means that connections are
> returned to their connection pool and any transactional state is rolled
> back, ultimately using the rollback() method of the underlying DBAPI
> connection.
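To make that concrete, here is a rough illustration of what effectively happens around each request (the `handle_request` name is hypothetical; the real work is done inside `scoped_session` and your teardown handler):

from db import db_session, Person

def handle_request():
    try:
        # The first query on the scoped session opens a fresh session and
        # transaction, so it sees rows committed since the previous request.
        return Person.query.all()
    finally:
        # Teardown: close the session, roll back, return the connection to the pool.
        db_session.remove()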
|
asyncio: loop.run_until_complete(loop.create_task(f)) prints "Task exception was never retrieved" even though it clearly was propagated
Question: For some reason this program prints the following warning:
Task exception was never retrieved
future: <Task finished coro=<coro() done, defined at /usr/lib64/python3.4/asyncio/coroutines.py:139> exception=SystemExit(2,)>
even though the exception is clearly retrieved and propagated, as `caught
SystemExit!` is printed to the terminal, and process status code becomes 2.
The same thing happens with Python 2 and trollius.
Am I missing something?
#!/usr/bin/env python3
import asyncio

@asyncio.coroutine
def comain():
    raise SystemExit(2)

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(comain())
    try:
        loop.run_until_complete(task)
    except SystemExit:
        print("caught SystemExit!")
        raise
    finally:
        loop.close()

if __name__ == "__main__":
    main()
Answer: `SystemExit` seems to be a special case. If, for example, you raise and catch
an `Exception`, you won't see any errors. The way around this seems to be to
manually retrieve the exception using `Task.exception()`:
import asyncio

@asyncio.coroutine
def comain():
    raise SystemExit(2)

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(comain())
    try:
        loop.run_until_complete(task)
    except SystemExit:
        print("caught SystemExit!")
        task.exception()
        raise
    finally:
        loop.close()

if __name__ == "__main__":
    main()
**EDIT**
Actually, all `BaseException` subclasses that are not also `Exception` subclasses
(such as `KeyboardInterrupt`) behave this way.
|
Python fibonacci number
Question: This is not a homework question, I'm simply trying to learn. I'm trying to write a
simple program that reads in 2 numbers (N and M),
computes the Nth Fibonacci number and the Mth Fibonacci number, and then finds the
greatest common factor of those two numbers.
It must ensure the user types in a positive number and ask again if the user
doesn't.
Answer:
import collections

def compute():
    m, n = -1, -1
    while m < 0: m = int(input("enter a positive number for m: "))
    while n < 0: n = int(input("enter a positive number for n: "))
    a, b = 1, 1
    collections.deque(((globals().__setitem__('temp', a+b),
                        globals().__setitem__('a', b),
                        globals().__setitem__('b', temp),
                        globals().__setitem__('mth', b if i==m else globals().get('mth', None)),
                        globals().__setitem__('nth', b if i==n else globals().get('nth', None)))
                       for i in range(2, n+1)),
                      maxlen=0)
    return max(i for i in range(1, nth+1) if not mth%i and not nth%i)
|
Changing Tensorflow MNIST code with interactive session into session
Question: So, I have been learning TensorFlow, and I have tried to change the code in
the documentation from being run in an interactive session to being run in a
regular session, so that I can run the Python file containing the code from the
command line. The relevant TensorFlow code is here:
<https://www.tensorflow.org/versions/master/tutorials/mnist/pros/index.html>
Here is my code:
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf

def train():
    x = tf.placeholder("float", shape=[None, 784])
    y_ = tf.placeholder("float", shape=[None, 10])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        train_step.run(feed_dict={x: batch[0], y_: batch[1]})

def test():
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

with tf.Session() as sess:
    sess.run(train())
    sess.run(test())
However, I am getting an error message when I try to run the code:
Traceback (most recent call last):
File "tensorflow_mnist.py", line 15, in <module>
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1267, in run
_run_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2763, in _run_using_default_session
session.run(operation, feed_dict)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 419, in _do_run
e.code)
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable
[[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0, Variable)]]
Caused by op u'MatMul', defined at:
File "tensorflow_mnist.py", line 10, in <module>
y = tf.nn.softmax(tf.matmul(x,W) + b)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 781, in matmul
name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 600, in _mat_mul
transpose_b=transpose_b, name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
I have no idea what is causing this problem; most likely it is an incorrect
usage of the session. Could someone give me some help? Thanks.
Answer: There are just two things you need to change to make this work:
1. Initialise the variables before running the first training steps.
init_op = tf.initialize_all_variables()
init_op.run()
for i in range(1000):
# …
This will fix the first error you are seeing, and is an important first step
in any TensorFlow program that uses variables.
2. Inline the bodies of `train()` and `test()` in the `with tf.Session() as sess:` block. Your `test()` function uses local variables from `train()`, so the code is not valid Python as written. (Note that the `sess.run()` around `train()` and `test()` is incorrect too: those functions don't have a return value, so this is equivalent to calling `sess.run(None)`, which will raise an error.) A sketch of the combined script follows.
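A minimal sketch of the restructured script, keeping the question's variable names and the old TF 0.x-style API it uses (treat this as an outline rather than the canonical tutorial code):

import input_data
import tensorflow as tf

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Build the graph once, at module level.
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

with tf.Session() as sess:
    # 1. Initialise the variables before the first training step.
    sess.run(tf.initialize_all_variables())
    # 2. Train and evaluate inside the same session, on the same graph.
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1]})
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))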
|
Django, upgrading to 1.9
Question: I upgraded my Django version to 1.9 (before I had 1.6 or 1.7), so I modified
many obsolete things...
But I have a problem with these lines in my urls.py:
import django
import main_app
from django.conf.urls import patterns, include, url
from django.views.generic import TemplateView, ListView
from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
from main_app.views import *
from main_app.views import password_reset_confirm
... # many urls
url(r'^authentification/$', django.contrib.auth.views.login),
url(r'^forget/send/$', django.contrib.auth.views.password_reset_done),
url(r'^password/$', django.contrib.auth.views.password_reset),
url(r'^password_forget/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', main_app.views.password_reset_confirm),
url(r'^password-init/$', django.contrib.auth.views.password_reset_complete),
I have this error when I run "python manage.py runserver":
Unhandled exception in thread started by <function wrapper at 0x7f5bcf01af50>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 116, in inner_run
self.check(display_num_errors=True)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/usr/local/lib/python2.7/dist-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/usr/local/lib/python2.7/dist-packages/django/core/checks/urls.py", line 10, in check_url_config
return check_resolver(resolver)
File "/usr/local/lib/python2.7/dist-packages/django/core/checks/urls.py", line 19, in check_resolver
for pattern in resolver.url_patterns:
File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 417, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 410, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/yb/web/carzuip/carzuip/urls.py", line 55, in <module>
url(r'^authentification/$', django.contrib.auth.views.login),
NameError: name 'django' is not defined
I don't understand why I have a problem with just those 5 URLs?!
Thanks
Answer: That code _doesn't_ show you importing `django`: it shows you importing
elements underneath it, but never the name itself. It is a fundamental
principle of Python that you must import or define every name you use. In your
case, `import django` at the top would work, although note you would then have
another problem when the code gets to the password_reset URL since that is
referenced from `main_app` which you again haven't imported.
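One way to apply that advice (a sketch; it keeps the `url()` style already used in the question and imports the auth views under an alias instead of importing `django` itself):

from django.conf.urls import url
from django.contrib.auth import views as auth_views
import main_app.views

urlpatterns = [
    # ... many urls
    url(r'^authentification/$', auth_views.login),
    url(r'^forget/send/$', auth_views.password_reset_done),
    url(r'^password/$', auth_views.password_reset),
    url(r'^password_forget/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$',
        main_app.views.password_reset_confirm),
    url(r'^password-init/$', auth_views.password_reset_complete),
]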
|
Python code not working. Trying to compute two fibonacci numbers and find greatest common factor
Question: Can someone please help with this code? I'm attempting to write the simplest
program possible that reads in 2 numbers (m, n), then computes the nth
Fibonacci number and the mth Fibonacci number, and then finds the greatest
common factor of the two inputted numbers. This is what I have so far. I'm new
to Python, so any help would be appreciated. Thanks in advance!
def compute():
m, n = -1, -1
while m<0: m = int(input(“Please enter a positive number for m: “))
while n<0: m = int(input(“Please enter a positive number for n: “))
a, b = 1,1
for i in (‘temp’, a+b),
(‘a’, b),
(‘b’, temp),
(‘mth’, b if i==m)
(‘nth’, b if i==m)
for i in range(2, n+1)),:pass
maxlen=0)
return max(i for i in range(1, nth+1) if not mth%i and not nth%i)
Answer: In my opinion, when it comes to Python, the simplest answer might be either
the smallest piece of code or the code that's easiest to understand (and
sometimes they are the same thing). Since you're starting with Python, I
think easiest to understand is the best approach, but I reckon that
qwertyboys's answer might also be correct (and more advanced).
This is how I would do it (I agree with user2357112's comment on splitting the
problem into two parts):
first_num = int(input("Enter the first positive number: "))
second_num = int(input("Enter the second positive number: "))

fib_a, fib_b = 1, 1
fib_sequence = [1, 1]
for i in range(first_num + second_num):
    temp = fib_a + fib_b
    fib_a = fib_b
    fib_b = temp
    fib_sequence.append(temp)

from fractions import gcd
print("Fibonacci sequence's #%s: %s" % (first_num, fib_sequence[first_num - 1]))
print("Fibonacci sequence's #%s: %s" % (second_num, fib_sequence[second_num - 1]))
print("Their Greatest Common Divisor is: %s" % gcd(fib_sequence[first_num - 1], fib_sequence[second_num - 1]))
I first created a Fibonacci's list that's long enough to fit both indexes, and
then picked the ones I wanted. An output example would be:
> Enter the first positive number: 3
>
> Enter the second positive number: 6
>
> Fibonacci sequence's #3: 2
>
> Fibonacci sequence's #6: 8
>
> Their Greatest Common Divisor is: 2
Although it's not the most elegant solution (yes, there is recursion and the
likes), I've tried to keep it clear on what's happening, hope it helps.
Remember that readability is a very important part of coding (especially in
Python) :)
|
Selenium WebDriver Python, search WebElement
Question: I am using Selenium to scrape some stuff live, but I can't seem to search within a
WebElement even though the docs say I can.
while True:
    try:
        member = self.pdriver.find_all("sv:member_profile")[index]
        self.pdriver.info_log("Found a member")
    except IndexError:
        self.pdriver.info_log("No more members")
        break
    member.highlight(style = self.pdriver.get_config_value("highlight:style_on_assertion_success"))
    profile = {}
    profile["name"] = member.findElement("sv:member_name").get_attribute('innerHTML')
    profile["image"] = member.findElement("sv:member_image").get_attribute('src')
    profile["link"] = member.findElement("sv:member_link").get_attribute('href')
    members.append(profile)
    index += 1
This returns a single web element:
member = self.pdriver.find_all("sv:member_profile")[index]
which I need to search within. According to the docs, this element should also have
a findElement method, but that seems not to be the case:
> AttributeError: 'WebElement' object has no attribute 'findElement'
Answer: As the error states, [`WebElement`](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webelement.WebElement)
object has no attribute `findElement`. Depending on what are `sv:member_name`,
`sv:member_image` and `sv:member_link`, you need to choose which of the
[`find_element_by_*` methods](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webelement.WebElement.find_element_by_class_name)
you need to use.
For instance, if `sv:member_name` is a CSS selector:
profile["name"] = member.find_element_by_css_selector("sv:member_name").get_attribute('innerHTML')
Or, you may also use the [`find_element()`](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webelement.WebElement.find_element)
method directly supplying the `By` value:
from selenium.webdriver.common.by import By
profile["name"] = member.find_element(By.CSS_SELECTOR, "sv:member_name").get_attribute('innerHTML')
|
what does this regular expression in python mean (.+?)
Question: I saw this regular expression being used in a program: `(.+?)`. But I don't
understand what it means. I know that `.` matches any character except a newline,
`+` means one or more repetitions, and `?` means zero or one repetition.
But I don't understand what the entire regex `(.+?)` conveys.
Answer: The parenthesis mean a [_capturing
group_](https://docs.python.org/2/howto/regex.html#grouping). The `.+` would
match any character _1 or more times_. The `?` makes it work in a [_non-
greedy_](https://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy)
fashion.
Study the [Regular Expression How To](https://docs.python.org/2/howto/regex.html)
- it covers all of the parts of this regular expression.
This expression alone does not make much sense, and is usually a part of an
expression, sample:
>>> import re
>>> s = "Hello, World!"
>>> re.match(r"(.+?), World!", s).group(1)
'Hello'
|
urllib slower than browser to access html
Question: The following Python script takes 3 seconds on my PC to load the source code
of a Twitter page, which is much longer than it takes to retrieve the source
code of other websites, such as YouTube. When I load the same Twitter page in
my browser, the "network" tab in Google Chrome tells me the HTML is retrieved
in 0.3 seconds.
Why is urllib so much slower than my browser?
import urllib2
import time
start=time.time()
channel='pontifex'
url="https://twitter.com/"+channel
page = urllib2.urlopen(url).read()
print str(round(time.time()-start,0))+" secs total"
Answer: Caching is the answer, and it presents itself in the form of preloading, which
is what most browsers do these days. If not the browser, then search engines
such as Google also cache frequently visited websites, so that retrieving them
is only a matter of milliseconds.
See this post: [How can Google be so
fast?](http://stackoverflow.com/questions/132359/how-can-google-be-so-fast)
|
urllib2.urlopen(url).read() fails to read the URL content
Question: I am trying to read the web content of the link
`http://www.quikr.com/Mobile-Phones/y149` using the following Python code:
import requests
import urllib2
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'}
url = 'http://www.quikr.com/Mobile-Phones/y149'
req = urllib2.Request(url, headers=hdr)
page = urllib2.urlopen(req).read()
`print page` gives the following output:
<!DOCTYPE html>
<head>
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="refresh" content="10; url=/distil_r_captcha.html?Ref=/Mobile-Phones/y149&distil_RID=97C53AFC-AA02-11E5-B76A-8C12C4D2AB6C&distil_TID=20151224055301" />
<script type="text/javascript">
(function(window){
try {
if (typeof sessionStorage !== 'undefined'){
sessionStorage.setItem('distil_referrer', document.referrer);
}
} catch (e){}
})(window);
</script>
<script type="text/javascript" src="/QkrDIV1cexsvzwdadarecara.js" defer></script><style type="text/css">#d__fFH{position:absolute;top:-5000px;left:-5000px}#d__fF{font-family:serif;font-size:200px;visibility:hidden}#qttwcrxueetv{display:none!important}</style></head>
<body>
<div id="distil_ident_block"> </div>
</body>
</html>
Is there any workaround to get the actual URL content to be read? Any help is
appreciated. Thanks in advance!!
Answer: One option would be to automate a real browser via
[`selenium`](https://selenium-python.readthedocs.org/). Working sample:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.quikr.com/Mobile-Phones/y149")
for phone in driver.find_elements_by_css_selector(".snb_entire_ad"):
    link = phone.find_element_by_css_selector("a.adttllnk")
    print link.text
driver.close()
If you want to get the page source, use [`.page_source`](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webdriver.WebDriver.page_source)
(before closing the driver of course):
print(driver.page_source)
|
Why python json module produces different encoding on same file
Question: I'm trying to parse a JSON file with some Finnish characters included. A good
example would be a region called Etelä-Karjala. I had it all working locally
when I opened the JSON as a file and then loaded it with json.load. The unicode I
got for this region was u'Etel\xe4-Karjala'.
However my next step was to do the same thing on the server, and json was
stored at some url from which I had to retrieve it. I used
json.loads(requests.get(url).text), and the unicode that I got for the same
region was now u'Etel\xc3\xa4-Karjala'.
Why do I get these different results even though the input file is the same?
Can you suggest a workaround or a good pattern to load json from a url that
will not cause this issue?
Here is an example to reproduce the issue:
import requests
import json
# Example with loading from request
r = requests.get('http://becs.aalto.fi/~smirnod1/maakunnat.geojson')
geo1 = json.loads(r.text)
test1 = geo1['features'][5]['properties']['text']
# test1 = u'Etel\xc3\xa4-Karjala'
Then, I download this json and try to open it as a file (this was the approach
I used while I developed my application).
# Example with loading from file
with open('/Users/dmitrysmirnov/Downloads/maakunnat.geojson') as f:
    geo2 = json.load(f)
test2 = geo2['features'][5]['properties']['text']
# test2 = u'Etel\xe4-Karjala'
I assume that u'Etel\xe4-Karjala' (or result of test2) should be what I aim
for. Or at least that is the result that will not break the application.
Answer: The server is misconfigured. Either tell it to report that the file is encoded
as UTF-8, or encode the JSON in ASCII-only.
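You can also work around the missing charset declaration on the client side (a sketch; the mojibake arises because `requests` guesses a Latin-1 encoding when the response does not declare one):

import requests
import json

r = requests.get('http://becs.aalto.fi/~smirnod1/maakunnat.geojson')

# Decode the raw bytes as UTF-8 explicitly...
geo = json.loads(r.content.decode('utf-8'))

# ...or override the guessed encoding before touching r.text.
r.encoding = 'utf-8'
geo = json.loads(r.text)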
|
Dask array rfft doesn't seems to work
Question: I'm trying to do some real FFTs on some large arrays and decided to give dask a
try. I've run into a problem where the function `da.fft.rfft` does not seem
to work no matter what I do. Here's a minimal example.
import numpy as np
import dask.array as da
import dask
print('Dask version: {}'.format(dask.__version__))
x = np.random.random((10, 10))
dx = da.from_array(x, chunks=(2, x.shape[1]))
dx_fft = da.fft.fft(dx)
dx_ifft = da.fft.ifft(dx_fft)
dx_ifft.compute()
print('Regular fft worked out just fine.')
dx = da.from_array(x, chunks=(2, x.shape[1]))
dx_rfft = da.fft.rfft(dx, axis=1)
dx_irfft = da.fft.irfft(dx_rfft, axis=1)
dx_irfft.compute()
print('Real fft worked out just fine.')
The output of the program is:
Dask version: 0.7.5
Regular fft worked out just fine.
Traceback (most recent call last):
File "a.py", line 16, in <module>
dx_irfft = da.fft.irfft(dx_rfft, axis=1)
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/fft.py", line 35, in func
chunks=chunks)
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/core.py", line 449, in map_blocks
result = atop(func, out_ind, *args, name=name, dtype=dtype)
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/core.py", line 1420, in atop
chunkss, arrays = unify_chunks(*args)
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/core.py", line 1342, in unify_chunks
for a, i in arginds]
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/core.py", line 1141, in rechunk
return rechunk(self, chunks)
File "/home/heitor/anaconda/lib/python2.7/site-packages/dask/array/rechunk.py", line 232, in rechunk
return Array(x2, temp_name, chunks, dtype=x.dtype)
File "/home/heitor/anaconda/lib/python2.7/site-packages/toolz/functoolz.py", line 348, in memof
raise TypeError("Arguments to memoized function must be hashable")
TypeError: Arguments to memoized function must be hashable
Whatever operation I try to do with dx_rfft, it returns the same error. I've
tried Python 2 and 3 and both have the same issue. Am I missing something or
is this a bug in the library?
Answer: This does not occur on dask master, so the easiest solution is probably to
install the development version from there:
$ conda remove dask
$ pip install git+git://github.com/blaze/dask.git # might need root
Or you can create a fresh conda environment so your system dask does not have
to be replaced with the potentially broken development version
$ conda create -n myenv dask #create "myenv" environment and install dask + depedencies
$ source activate myenv
(myenv)$ conda remove dask
(myenv)$ pip install git+git://github.com/blaze/dask.git
|
Virtualenv: "main.py" gives error but "python main.py" works perfectly fine
Question: I created a new virtualenv to test fuzzywuzzy. I activate my env and run "pip
install fuzzywuzzy".
I create a file "main.py" with the following code:
from fuzzywuzzy import fuzz
r = fuzz.ratio("this is a test", "this is a test!")
print(r)
Back to the console, I activate the env and enter "main.py":
(fuzzytest) C:\Users\Family\Desktop\fuzzytest>main.py
Traceback (most recent call last):
File "C:\Users\Family\Desktop\fuzzytest\main.py", line 1, in <module>
from fuzzywuzzy import fuzz
ImportError: No module named 'fuzzywuzzy'
BUT if I do "python main.py":
(fuzzytest) C:\Users\Family\Desktop\fuzzytest>python main.py
97
It works fine. Why is that? Am I doing anything wrong?
Answer: Try starting the script with `#! /usr/bin/env python`.
This is supposed to work on windows according to the [python
docs](https://docs.python.org/3.3/using/windows.html#shebang-lines).
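A sketch of what main.py would look like with that shebang line added (this assumes the Windows py launcher handles .py files, as the linked docs describe):

#! /usr/bin/env python
from fuzzywuzzy import fuzz

r = fuzz.ratio("this is a test", "this is a test!")
print(r)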
|
Cannot get button to toggle a change in pygame
Question: I am wondering why my "GO" button will not toggle
def game_start()
permanently: it toggles while the button is held down, but when you let go of
the button it goes back to the main menu.
I am also curious whether there is a way of making the text and buttons vanish
when you press the GO button, once the game has started.
I am quite new to Python, so an explanation would be great for any code I have
made a mistake in or need to add/change.
import sys
import pygame
from pygame.locals import *
pygame.init()
size = width, height = 720, 480
speed = [2, 2]
#Colours
black = (0,0,0)
blue = (0,0,255)
green = (0,200,0)
red = (200,0,0)
green_bright = (0,255,0)
red_bright = (255,0,0)
screen = pygame.display.set_mode(size)
#Pictures
road = pygame.image.load(r"C:\Users\John\Desktop\Michael\V'Room External\1.png")
BackgroundPNG = pygame.image.load(r"C:\Users\John\Desktop\Michael\V'Room External\BackgroundPNG.png")
carImg = pygame.image.load(r"C:\Users\John\Desktop\Michael\V'Room External\Sp1.png").convert_alpha()
pygame.display.set_caption("Broom! || BETA::00.0.3")
clock = pygame.time.Clock()
def text_objects(text, font):
textSurface = font.render(text, True, black)
return textSurface, textSurface.get_rect()
def game_intro():
intro = True
while intro:
for event in pygame.event.get():
print(event)
if event.type == pygame.QUIT:
pygame.quit()
quit()
mouse = pygame.mouse.get_pos()
click = pygame.mouse.get_pressed()
print(mouse)
print(click)
screen.fill(blue)
screen.blit(BackgroundPNG,(0,0))
largeText = pygame.font.Font('freesansbold.ttf',115)
TextSurf, TextRect = text_objects("V'Room!", largeText)
TextRect.center = ((width/2),(height/2))
screen.blit(TextSurf, TextRect)
#Button
if 75+100 > mouse[0] > 75 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, green_bright,(75,400,100,50))
if click != None and click[0] == 1:
print("GO == 1 ! == None")
x = 350
y = 370
game_start()
else:
pygame.draw.rect(screen, green,(75,400,100,50))
smallText = pygame.font.Font("freesansbold.ttf",20)
TextSurf, TextRect = text_objects("GO", smallText)
TextRect.center = ((75+(100/2)),(400+(50/2)))
screen.blit(TextSurf, TextRect)
if 550+100 > mouse[0] > 550 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, red_bright,(550,400,100,50))
if click != None and click[0] == 1:
pygame.quit()
quit()
else:
pygame.draw.rect(screen, red,(550,400,100,50))
TextSurf, TextRect = text_objects("Exit", smallText)
TextRect.center = ((550+(100/2)),(400+(50/2)))
screen.blit(TextSurf, TextRect)
pygame.display.flip()
pygame.display.update()
clock.tick(15)
def game_start():
print("Car Loaded Sucessfully")
screen.blit(road, (0,0))
screen.blit(carImg, (350,370))
game_intro()
Answer: One solution is to use a `game_started` variable instead of the function
`game_start()` and use it to decide what to draw: the title or the car and road, the
GO or STOP button, etc.
I use rectangles in place of bitmaps to make a fully working example.
import pygame
# --- constants ----
size = width, height = 720, 480
speed = [2, 2]
#Colours
black = (0,0,0)
blue = (0,0,255)
green = (0,200,0)
red = (200,0,0)
green_bright = (0,255,0)
red_bright = (255,0,0)
# --- functions ---
def text_objects(text, font):
textSurface = font.render(text, True, black)
return textSurface, textSurface.get_rect()
def game_intro():
largeText = pygame.font.Font('freesansbold.ttf',115)
smallText = pygame.font.Font("freesansbold.ttf",20)
text_vroom, text_vroom_rect = text_objects("V'Room!", largeText)
text_vroom_rect.center = ((width/2),(height/2))
text_go, text_go_rect = text_objects("GO", smallText)
text_go_rect.center = ((75+(100/2)),(400+(50/2)))
text_stop, text_stop_rect = text_objects("STOP", smallText)
text_stop_rect.center = ((75+(100/2)),(400+(50/2)))
text_exit, text_exit_rect = text_objects("Exit", smallText)
text_exit_rect.center = ((550+(100/2)),(400+(50/2)))
game_started = False
intro = True
while intro:
for event in pygame.event.get():
print(event)
if event.type == pygame.QUIT:
pygame.quit()
quit()
mouse = pygame.mouse.get_pos()
click = pygame.mouse.get_pressed()
screen.fill(blue)
#screen.blit(BackgroundPNG,(0,0))
# road and car - or title
if game_started:
screen.blit(road, (0,0))
screen.blit(carImg, (350,370))
else:
screen.blit(text_vroom, text_vroom_rect)
# Button GO/STOP
if 75+100 > mouse[0] > 75 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, green_bright,(75,400,100,50))
if click != None and click[0] == 1:
# toggle True/False
game_started = not game_started
else:
pygame.draw.rect(screen, green,(75,400,100,50))
# draw GO or STOP
if not game_started:
screen.blit(text_go, text_go_rect)
else:
screen.blit(text_stop, text_stop_rect)
# Button EXIT
if 550+100 > mouse[0] > 550 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, red_bright,(550,400,100,50))
if click != None and click[0] == 1:
pygame.quit()
quit()
else:
pygame.draw.rect(screen, red,(550,400,100,50))
screen.blit(text_exit, text_exit_rect)
pygame.display.flip()
clock.tick(15)
# --- main ---
pygame.init()
screen = pygame.display.set_mode(size)
pygame.display.set_caption("Broom! || BETA::00.0.3")
#Pictures
road = pygame.surface.Surface( size )
road.fill(black)
carImg = pygame.surface.Surface( (10,10) )
road.fill(green)
clock = pygame.time.Clock()
game_intro()
**EDIT:** A second solution is to create a function (`game_running`) with its own
`while` loop, own buttons, etc.
I had to use `pygame.time.wait()` because `pygame.mouse.get_pressed()` is not a
good function for detecting a single click on a button. The computer (and the `while`
loop) is too fast for a human click, so `pygame.mouse.get_pressed()` would toggle
the button many times. It is better to use `pygame.event.get()` for a single click.
import pygame
# --- constants ----
size = width, height = 720, 480
speed = [2, 2]
#Colours
black = (0,0,0)
blue = (0,0,255)
green = (0,200,0)
red = (200,0,0)
green_bright = (0,255,0)
red_bright = (255,0,0)
# --- functions ---
def text_objects(text, font):
textSurface = font.render(text, True, black)
return textSurface, textSurface.get_rect()
def game_intro():
text_vroom, text_vroom_rect = text_objects("V'Room!", largeText)
text_vroom_rect.center = ((width/2),(height/2))
text_go, text_go_rect = text_objects("GO", smallText)
text_go_rect.center = ((75+(100/2)),(400+(50/2)))
text_exit, text_exit_rect = text_objects("Exit", smallText)
text_exit_rect.center = ((550+(100/2)),(400+(50/2)))
running = True
while running:
for event in pygame.event.get():
print(event)
if event.type == pygame.QUIT:
pygame.quit()
quit()
mouse = pygame.mouse.get_pos()
click = pygame.mouse.get_pressed()
screen.fill(blue)
screen.blit(text_vroom, text_vroom_rect)
# Button GO
if 75+100 > mouse[0] > 75 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, green_bright,(75,400,100,50))
if click != None and click[0] == 1:
# wait because `pygame.mouse.get_pressed()` is too fast for human clik
pygame.time.wait(100)
# run game
game_running()
else:
pygame.draw.rect(screen, green,(75,400,100,50))
screen.blit(text_go, text_go_rect)
# Button EXIT
if 550+100 > mouse[0] > 550 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, red_bright,(550,400,100,50))
if click != None and click[0] == 1:
pygame.quit()
quit()
else:
pygame.draw.rect(screen, red,(550,400,100,50))
screen.blit(text_exit, text_exit_rect)
pygame.display.flip()
clock.tick(15)
def game_running():
text_stop, text_stop_rect = text_objects("STOP", smallText)
text_stop_rect.center = ((75+(100/2)),(400+(50/2)))
text_exit, text_exit_rect = text_objects("Exit", smallText)
text_exit_rect.center = ((550+(100/2)),(400+(50/2)))
running = True
while running:
for event in pygame.event.get():
print(event)
if event.type == pygame.QUIT:
pygame.quit()
quit()
mouse = pygame.mouse.get_pos()
click = pygame.mouse.get_pressed()
screen.fill(blue)
#screen.blit(BackgroundPNG,(0,0))
# road and car - or title
screen.blit(road, (0,0))
screen.blit(carImg, (350,370))
# Button STOP
if 75+100 > mouse[0] > 75 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, green_bright,(75,400,100,50))
if click != None and click[0] == 1:
# return to menu
return
else:
pygame.draw.rect(screen, green,(75,400,100,50))
# draw STOP
screen.blit(text_stop, text_stop_rect)
# Button EXIT
if 550+100 > mouse[0] > 550 and 400+50 > mouse[1] > 400:
pygame.draw.rect(screen, red_bright,(550,400,100,50))
if click != None and click[0] == 1:
pygame.quit()
quit()
else:
pygame.draw.rect(screen, red,(550,400,100,50))
screen.blit(text_exit, text_exit_rect)
pygame.display.flip()
clock.tick(15)
# --- main ---
pygame.init()
screen = pygame.display.set_mode(size)
pygame.display.set_caption("Broom! || BETA::00.0.3")
# pictures
road = pygame.surface.Surface( size )
road.fill(black)
carImg = pygame.surface.Surface( (10,10) )
road.fill(green)
# fonts
largeText = pygame.font.Font('freesansbold.ttf',115)
smallText = pygame.font.Font("freesansbold.ttf",20)
# others
clock = pygame.time.Clock()
game_intro()
|
regular expression Python search
Question: I'm trying to extract some info from the source code of a webpage and I'm
having trouble figuring out how to go about it. Part of the source code is as
follows:
<th>Model #:</th>
<td>1561496564</td>
</tr>
<tr>
I want to start at "Model #:" and go all the way up to the closing td>. From there, I
can strip anything that's not a number to get the 1561496564.
I can't do:
modelMatch = re.search('Model[^\n]*', contents)
because the actual number is on the next line. I also can't just exclude everything
that's not a /, d, or >. I'm thinking I could do [^\^n^:^<^/^t^h^>^r]*, but that seems
a little messy. I'm wondering if there's a better way.
For a regular expression, is there an easy way to say: extract until you reach
this specific phrase "tr"?
Thanks a lot.
Answer: You can let `.` match across newlines by passing the `re.DOTALL` flag (note that `re.MULTILINE` only changes how `^` and `$` behave).
However, for tasks like extracting data from a webpage, I would recommend using
tools like [lxml](http://lxml.de/),
[pyquery](https://pythonhosted.org/pyquery/) or
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) instead. These
approaches are much simpler and more elegant.
Here's an example using pyquery module:
In [1]: import pyquery
In [2]: s = '''<th>Model #:</th>
...: <td>1561496564</td>
...: </tr>'''
In [3]: pyquery.PyQuery(s).find('td').text()
Out[3]: '1561496564'
|
Import java class in jython
Question: I am dealing with NLP in Python. There is an NLP tool for the Turkish language
called Zemberek, but it is written in Java, so I have to use Jython to be
able to import its classes. I installed Jython 2.7, and I also installed
Eclipse Mars as an IDE for Java. On the Internet I found the following link
on using Jython in an IDE. I installed PyDev and configured it properly as
explained in the link:
<http://www.jython.org/jythonbook/en/1.0/JythonIDE.html>
import sys
import os
import java.lang.System.out
import java.util.Arrays
import java.util.List

zembereksourcedir = ?
sys.path.append(zembereksourcedir + '/jar/zemberek-tr-2.1.1.jar')
sys.path.append(zembereksourcedir + '/jar/zemberek-cekirdek-2.1.1.jar')

from net.zemberek.erisim import Zemberek
from net.zemberek.tr.yapi import TurkiyeTurkcesi

zemberek = Zemberek(TurkiyeTurkcesi())
for st in ["ebeni", "ırz"]:
    kok = zemberek.kokBulucu().kokBul(st)  # array(net.zemberek.yapi.Kok, [ev ISIM , evli ISIM ])
    print str(list(kok))
    k = str(list(kok)).split()[0][1:]
    print k
The code that I am trying to run is given above (it can also be found at the following
link: <https://gist.github.com/ferayebend/5331379>). But the problem is that
even if I specify the paths correctly, it gives an error.
ImportError: No module named Zemberek
I applied the steps correctly to create projects which is also explained in
the link above. But I still could not solve the problem. Any help would be
much appreciated.
Answer: Instead of adding it to sys.path, you should add it as a library either in the
Jython Interpreter configuration (window > preferences > pydev > interpreters
> jython interpreter > new jar/zip(s)) or if it's in a folder in the project,
right-click project > properties > pydev - pythonpath > external libraries >
add zip/jar/egg.
The reason is that just adding to sys.path doesn't work, you also need to add
those jars to the CLASSPATH for java/jython to find it (which PyDev will do
for you if you specify it in the proper way).
|
Python multiprocessing on For Loop
Question: First of all, I know there are quite some threads about multiprocessing on
python already, but none of these seems to solve my problem.
Here is my problem: I want to implement Random Forest Algorithm, and a naive
way to do so would be like this:
def random_tree(Data):
tree = calculation(Data)
forest.append(tree)
forest = list()
for i in range(300):
random_tree(Data)
And the`forest` with 300 "trees" inside would be my final result. In this
case, how do I turn this code into a multiprocessing version?
* * *
Update: I just tried Mukund M K's method, in a very simplified script:
from multiprocessing import Pool
def f(x):
return 2*x
data = np.array([1,2,5])
pool = Pool(processes=4)
forest = pool.map(f, (data for i in range(4)))
# I use range() instead of xrange() because I am using Python 3.4
And now... the script seems to run forever. I opened a Python shell and
entered the script line by line, and these are the messages I got:
> Process SpawnPoolWorker-1:
> Process SpawnPoolWorker-2:
> Traceback (most recent call last):
> Process SpawnPoolWorker-3:
> Traceback (most recent call last):
> Process SpawnPoolWorker-4:
> Traceback (most recent call last):
> Traceback (most recent call last):
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
> File "E:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
> File "E:\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
> File "E:\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
> File "E:\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
> File "E:\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
> File "E:\Anaconda3\lib\multiprocessing\queues.py", line 357, in get
return ForkingPickler.loads(res)
> File "E:\Anaconda3\lib\multiprocessing\queues.py", line 357, in get
return ForkingPickler.loads(res)
> AttributeError: Can't get attribute 'f' on
> AttributeError: Can't get attribute 'f' on
File "E:\Anaconda3\lib\multiprocessing\queues.py", line 357, in get
return ForkingPickler.loads(res)
> AttributeError: Can't get attribute 'f' on
File "E:\Anaconda3\lib\multiprocessing\queues.py", line 357, in get
return ForkingPickler.loads(res)
> AttributeError: Can't get attribute 'f' on
* * *
Update: I edited my sample code according to some other example code like
this:
from multiprocessing import Pool
import numpy as np
def f(x):
return 2*x
if __name__ == '__main__':
data = np.array([1,2,3])
with Pool(5) as p:
result = p.map(f, (data for i in range(300)))
And it works now. What I need to do next is fill this in with a more
sophisticated algorithm.
Yet another question in my mind is: why does this code work, while the
previous version didn't?
Answer: The `processing` package might help you. Check it out
[here](https://pypi.python.org/pypi/processing?).
|
Averaging values in a nested loop in python
Question: I have to perform a running average. In the code below, the input file
(stress1.txt) contains two columns of x and y values. Every y between 0.9x and
1.1x needs to be averaged. The last part of the code that goes over the two
lists is correct, in the sense that it returns the correct x values for each
upper and lower bound when I print it out. However, the averaging is not done
correctly, and I've tried all possible placements of the mean command. I'm
really stuck as what I have seems logically right to me. Also, I'm fairly new
to programming so my code might not be very pythonic. Can someone point out
what's going wrong?
import sys,string
import numpy as np
from math import *
import fileinput
infiles = ['stress1.txt']
oldlist = []
xlist = []
newlist = [0]
IN = fileinput.input(infiles)
for step in range(21): ## number of rows in stress1.txt
line = IN.readline()
[t,a] = string.split(line)
time = float(t)
acf = float(a)
oldline = [time,acf]
oldlist.append(oldline) ## nested list containing x and y values
for i in range(len(oldlist)):
t11 = float(0.9*oldlist[i][0])
t1 = float("{0:.4f}".format(t11))
t22 = float(1.1*oldlist[i][0])
t2 = float("{0:.4f}".format(t22))
xlist.append(t1)
xlist.append(t2)
xlist = [xlist[i:i+2] for i in range(0, len(xlist), 2)] ## nested list containing upper and lower bounds for each x. This list has the same size as 'oldlist'.
for i in range(len(oldlist)):
for j in range(len(xlist)):
if (xlist[i][0] <= oldlist[j][0] < xlist[i][1]):
#print oldlist[j][0]
newlist.append(oldlist[j][0])
#print '\n'
mean = sum(newlist)/float(len(newlist)) ## not giving the right average
print mean
I have edited my question to include stress1.txt:
0 63.97308696
0.005 62.68978803
0.01 58.95890345
0.015 53.11671683
0.02 45.64732412
0.025 37.10669444
0.03 28.05011931
0.035 18.98414178
0.04 10.34110231
0.045 2.470985737
0.05 -4.356736338
0.055 -9.947472597
0.06 -14.17532845
0.065 -16.97779073
0.07 -18.35134411
0.075 -18.34723586
0.08 -17.0675793
0.085 -14.66065262
0.09 -11.3157742
0.095 -7.257500157
0.1 -2.7383312
The code is expected to average each 'block' as shown below. The initial
blocks only contain a single value, so that itself is the average. The later
blocks have multiple values which have to be averaged and output (sorry for
making this thread so lengthy).
0.005
0.01
0.015
0.02
0.025
0.03
0.035
0.04
0.045
0.045
0.05
0.05
0.055
0.06
0.055
0.06
0.065
0.06
0.065
0.07
0.065
0.07
0.075
0.07
0.075
0.08
0.075
0.08
0.085
0.08
0.085
0.09
0.085
0.09
0.095
0.09
0.095
0.1
0.09
0.095
0.1
Answer: If I understand your goal correctly, you need an empty newlist[] at the
beginning of every iteration. If you do not empty it, it is still full of the
previous blocks' values.
for i in range(len(oldlist)):
#Make it empty
newlist = []
for j in range(len(xlist)):
if (xlist[i][0] <= oldlist[j][0] and oldlist[j][0] <= xlist[i][1]):
newlist.append(oldlist[j][0])
print(repr(newlist))
mean = sum(newlist)/float(len(newlist))
print(mean)
Produced output:
[0.0]
0.0
[0.005]
0.005
[0.01]
0.01
[0.015]
0.015
[0.02]
0.02
[0.025]
0.025
[0.03]
0.03
[0.035]
0.035
[0.04]
0.04
[0.045]
0.045
[0.045, 0.05, 0.055]
0.04999999999999999
[0.05, 0.055, 0.06]
0.055
[0.055, 0.06, 0.065]
0.06
[0.06, 0.065, 0.07]
0.065
[0.065, 0.07, 0.075]
0.07
[0.07, 0.075, 0.08]
0.07500000000000001
[0.075, 0.08, 0.085]
0.08
[0.08, 0.085, 0.09]
0.085
[0.085, 0.09, 0.095]
0.09000000000000001
[0.09, 0.095, 0.1]
0.09500000000000001
[0.09, 0.095, 0.1]
0.09500000000000001
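As a side note (my own addition, not part of the original answer), the same block
averaging can be written more compactly once `oldlist` and `xlist` are built as in
the question:
for lo, hi in xlist:
    block = [t for t, acf in oldlist if lo <= t <= hi]  # x values inside this block
    print(sum(block) / float(len(block)))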
|
Terminating a python text adventure
Question: so I'm writing a little text adventure and have lots of nested if statements.
However, there comes times where I want to terminate the program in one of
these if statements to give a "GAME OVER". I've tried quit() and exit(), but
at best they still output an error message:
Traceback (most recent call last):
File "<string>", line 9, in 0
SystemExit: 0
I'm not using any try/except statements, just if statements. For example:
import sys
if True:
print "Hello guv'na"
if True:
print "Good day, Good day"
sys.exit(0)
if True:
print "What's all this then?"
yields:
Hello guv'na
Good day, Good day
Traceback (most recent call last):
File "/Users/austin/test.py", line 6, in 0
builtins.SystemExit: 0
I just want the program to terminate after "Good day, good day" without a
message. Is this possible?
Answer: If you are seeing that exception, it means that something is catching and
printing it.
My guess is that you have a try/except block like this:
try:
do_something()
except:
print_exc()
Here the `except:` clause does not have an exception type specified, which
means that **all** kind of exceptions (including `SystemExit`) will be caught.
Bare `except:` clauses are meant for cleanup code and generally re-raise the
exception after catching it. Avoid them unless you absolutely need them.
Use this instead:
try:
do_something()
except Exception:
print_exc()
This way, you'll catch only `Exception` subclasses, i.e. all standard exceptions
except `SystemExit`, `KeyboardInterrupt` and `GeneratorExit`.
|
Python Doesn't Have Permission To Access On This Server / Return City/State from ZIP
Question: What I'm trying to do is retrieve the city and state from a zip code. Here's
what I have so far:
import requests
from bs4 import BeautifulSoup

def find_city(zip_code):
zip_code = str(zip_code)
url = 'http://www.unitedstateszipcodes.org/' + zip_code
source_code = requests.get(url)
plain_text = source_code.text
index = plain_text.find(">")
soup = BeautifulSoup(plain_text, "lxml")
stuff = soup.findAll('div', {'class': 'col-xs-12 col-sm-6 col-md-12'})
I also tried using id="zip-links", but that didn't work. But here's the thing:
when I run `print(plain_text)` I get the following:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /80123
on this server.<br />
</p>
</body></html>
So I guess my question is this: is there a better way to get a city and state
from a zip code? Or is there a reason that unitedstateszipcodes.org isn't
cooperating? After all, it is easy enough to see the source and tags and text.
Thank you
Answer: I think you are taking a longer route to solve an easy problem!
Try [pyzipcode](https://pypi.python.org/pypi/pyzipcode/)
>>> from pyzipcode import ZipCodeDatabase
>>> zcdb = ZipCodeDatabase()
>>> zipcode = zcdb[54115]
>>> zipcode.zip
u'54115'
>>> zipcode.city
u'De Pere'
>>> zipcode.state
u'WI'
>>> zipcode.longitude
-88.078959999999995
>>> zipcode.latitude
44.42042
>>> zipcode.timezone
-6
|
Error deploying Django with ImageKit using Apache "cannot import name conf"
Question: I'm trying to deploy my Django site with Apache but I'm running into issues
with the ImageKit library. Here's the error from /var/log/apache2/error.log:
No handlers could be found for logger "django.request"
[1.2.3.4] mod_wsgi (pid=17276): Exception occurred processing WSGI script '/var/www/mysite.com/portfoliosite/portfoliosite/wsgi.py'.
[1.2.3.4] Traceback (most recent call last):
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/django/core/handlers/wsgi.py", line 189, in __call__
[1.2.3.4] response = self.get_response(request)
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/django/core/handlers/base.py", line 218, in get_respo$
[1.2.3.4] response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/django/core/handlers/base.py", line 264, in handle_un$
[1.2.3.4] if resolver.urlconf_module is None:
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/django/core/urlresolvers.py", line 395, in urlconf_mo$
[1.2.3.4] self._urlconf_module = import_module(self.urlconf_name)
[1.2.3.4] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
[1.2.3.4] __import__(name)
[1.2.3.4] File "/var/www/mysite.com/portfoliosite/portfoliosite/urls.py", line 21, in <module>
[1.2.3.4] url(r'^', include('portfolio.urls')), # route root through portfolio routes
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/django/conf/urls/__init__.py", line 33, in include
[1.2.3.4] urlconf_module = import_module(urlconf_module)
[1.2.3.4] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
[1.2.3.4] __import__(name)
[1.2.3.4] File "/var/www/mysite.com/portfoliosite/portfolio/urls.py", line 3, in <module>
[1.2.3.4] from . import views
[1.2.3.4] File "/var/www/mysite.com/portfoliosite/portfolio/views.py", line 9, in <module>
[1.2.3.4] from .models import Project
[1.2.3.4] File "/var/www/mysite.com/portfoliosite/portfolio/models.py", line 2, in <module>
[1.2.3.4] from imagekit.models import ImageSpecField
[1.2.3.4] File "/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/imagekit/__init__.py", line 2, in <module>
[1.2.3.4] from . import conf
[1.2.3.4] ImportError: cannot import name conf
Does anybody know what might be causing this to happen? Doing "sudo python3
manage.py runserver" works fine, but Apache runs into this library error.
Here's some more relevant info:
wsgi.py:
import os
import sys
import site
# Add the site-packages of the chosen virtualenv to work with
site.addsitedir('/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/')
# Add the app's directory to the PYTHONPATH
sys.path.append('/var/www/mysite.com/portfoliosite')
sys.path.append('/var/www/mysite.com/portfoliosite/portfoliosite')
os.environ['DJANGO_SETTINGS_MODULE'] = 'portfoliosite.settings'
# Activate your virtual env
activate_env=os.path.expanduser('/home/ubuntu/.virtualenvs/portfoliositeenv/bin/activate_this.py')
exec(open(activate_env).read(), dict(__file__=activate_env))
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
/etc/apache2/apache2.conf (the part I added):
WSGIPythonPath /var/www/mysite.com/portfoliosite:/home/ubuntu/.virtualenvs/portfoliositeenv/lib/python3.4/site-packages/
Alias /media/ /var/www/mysite.com/portfoliosite/portfolio/media/
Alias /static/ /var/www/mysite.com/static/
<Directory /var/www/mysite.com/static>
Require all granted
</Directory>
<Directory /var/www/mysite.com/media>
Require all granted
</Directory>
WSGIScriptAlias / /var/www/mysite.com/portfoliosite/portfoliosite/wsgi.py
<Directory /var/www/mysite.com/portfoliosite>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
This is really frustrating, and any help would be appreciated. Thank you!
Answer: Figured it out! I was mixing Python versions, you can see some paths in the
logs pointing at Python 2.7 while my virtualenv was using Python 3. I created
a new virtual environment and changed all paths to use Python 2.7, and it
seems to be working now.
|
Django settings SECRET_KEY
Question: I have the following structure of my project
project
--project
----settings
------base.py
------development.py
------testing.py
------secrets.json
--functional_tests
--manage.py
**development.py** and **testing.py** 'inherit' from **base.py**
from .base import *
So, where I have problems
I have the SECRET_KEY for Django in secrets.json, which is stored in
**settings** folder
I load this key like this (saw this in "Two scoops of Django")
import json
from django.core.exceptions import ImproperlyConfigured
key = "secrets.json"
with open(key) as f:
secrets = json.loads(f.read())
def get_secret(setting, secret=secrets):
try:
return secrets[setting]
except KeyError:
error_msg = "Set the {} environment variable".format(setting)
raise ImproperlyConfigured(error_msg)
SECRET_KEY = get_secret("SECRET_KEY")
But when I run `python manage.py runserver`
> Blah-blah-blah
> django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must
> not be empty.
After some investigations I got the following
1. If I put `print(os.getcwd())` inside **base.py** I get `/media/grimel/Home/project/` instead of `/media/grimel/Home/project/project/settings/`
2. This code works only if I replace:
`key = "secrets.json"`
by
`key = "project/settings/secrets.json"`
Personally, I don't like this solution.
So, questions:
1. Why is the current working directory so surprising for base.py?
2. What's a better approach to solving this problem?
Answer: The working directory is based on how you run the program, in your case
`python manage.py runserver` hints that your working directory is the one
containing `manage.py`. Beware that this can vary when run as WSGI script or
otherwise, so your concern with using `key = "project/settings/secrets.json"`
is valid.
One solution is to use the value of `__file__` in `base.py`, likely to be
`"project/settings/base.py"`. I would use something like
import os
BASE_DIR = os.path.dirname(__file__)
key = os.path.join(BASE_DIR, "secrets.json")
|
Handling same type exceptions separately in Python
Question: Say I have the following four variables:
>>> f1 = (print, 1, 2 ,3 ,4)
>>> f2 = (exit, 1, 2 ,3 ,4)
>>> f3 = (1, 2, 3, 4)
>>> f4 = 4
In a hypothetical program, I expect each of those variables to hold a tuple,
whose first item should be the name of a function, and whose subsequent items
should be said function's parameters, in order.
I could call a function stored in this way like so:
>>> f1[0](*f1[1:])
1 2 3 4
However, most of these variables are not in the excepted format, and I would
like to be able to encapsulate the calling of them inside `try`/`except`
blocks in order to handle those situations.
Now, even though the function calls of `f2`, `f3` and `f4` break for radically
different reason, they all throw the same kind of exception, a `TypeError`:
>>> f2[0](*f2[1:])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __call__() takes from 1 to 2 positional arguments but 5 were given
>>> f3[0](*f3[1:])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> f4[0](*f4[1:])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not subscriptable
So that doing a generic:
try:
f[0](*f[1:])
except TypeError:
# handle exception
Would not provide me with enough information to handle each exception
accordingly.
What would be the correct way to differentiate between distinct exceptions of
the same type in Python?
Answer: So, you need to use a different method of calling than `function(*tuple)`.
Here is an example of a function `fun_apply` that calls a particular function
with positional and keyword arguments. It adds an explicit check to make sure
that `args` is iterable and `kwargs` inherits from `collections.Mapping`.
from collections import Mapping
def fun_apply(f, args=None, kwargs=None):
if args is None:
args = []
if kwargs is None:
kwargs = {}
try:
args = iter(args)
except TypeError:
# handle args being non-iterable
pass
if isinstance(kwargs, Mapping):  # Mapping was imported from collections above
pass
else:
# handle kwargs being a non-mapping
pass
if hasattr(f, '__call__'):
pass
else:
# handle f being non-callable
pass
try:
f(*args, **kwargs)
except TypeError as e:
raise TypeError(e)
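A quick usage sketch (my own addition, assuming the tuples from the question; the
`pass` placeholders above would hold your real handling, so as written the bad calls
still end in the re-raised TypeError, but each one passes through its dedicated
branch first):
fun_apply(print, (1, 2, 3, 4))   # fine: prints 1 2 3 4
fun_apply(1, (2, 3, 4))          # reaches the "f being non-callable" branch
fun_apply(print, 4)              # reaches the "args being non-iterable" branch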
The other option is to process the `message` string explicitly, but this could
have unintended consequences and the error messages themselves might differ
between versions of Python.
|
How do I make an object mutable in python?
Question: So from what I've gathered user-made classes are supposed to be mutable by
default, but I've experienced the opposite. Here's my code:
import copy
class vector:
def __init__(self, entries):
if type(entries) == list:
self.elements = []
self.dimensionality = len(entries)
for entry in entries:
self.elements.append(entry)
elif type(entries) == vector:
self.elements = entries.elements
def __getitem__(self,index):
return self.elements[index]
def __add__(self, otherVector):
if len(self) != len(otherVector):
raise RuntimeError("Cannot add these vectors")
sumt = []
for entry in range(len(self)):
sumt.append(self.elements[entry] + otherVector.elements[entry])
return vector(sumt)
def __len__(self):
return len(self.elements)
def __eq__(self, otherVariable):
return size(self) == size(otherVariable)
def size(x):
return (x * x)**(1/2)
def taxi(x):
taxiSize = 0
for entry in x.elements:
taxiSize += entry
return taxiSize
class matrix:
def __init__(self, entries):
for i in entries:
if len(entries[0]) != len(i):
raise RuntimeError("All rows of matrix must contain the same number of entries")
self.elements = []
for row in entries:
self.elements.append(vector(row))
def __add__(self, otherMatrix):
if len(self.elements) != len(otherMatrix.elements):
raise RuntimeError("Cannot add matrices of different sizes")
if len(self.elements[0]) != len(otherMatrix.elements[0]):
raise RuntimeError("Cannot add matrices of different sizes")
sumt = []
row = []
for entry in self.elements[0]:
row.append(0)
for i in self.elements:
sumt.append(row)
sumf = matrix(sumt)
for i in range(len(self.elements)):
for j in range(len(self.elements[0])):
sumf.elements[i][j] = self.elements[i][j] + otherMatrix.elements[i][j] #error is on this line
return sumf
matrix([[3,4],[1,2]]) + matrix([[3,4],[1,2]])
And the error:
Traceback (most recent call last):
File "/Users/sebpole/Documents/vectors.py", line 164, in <module>
matrix([[3,4],[1,2]]) + matrix([[3,4],[1,2]])
File "/Users/sebpole/Documents/vectors.py", line 143, in __add__
sumf.elements[i][j] = self.elements[i][j] + otherMatrix.elements[i][j]
TypeError: 'vector' object does not support item assignment
It seems like Python doesn't like my changing the entries of my vector class,
which is confusing me because I thought the class was supposed to be
mutable.
Why is my object behaving as if it were immutable, and how can I fix this?
Answer: Item assignment is enabled via the [`__setitem__()`
method](https://docs.python.org/2/reference/datamodel.html#object.__setitem__),
which your class does not define.
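For example, a minimal sketch of such a method for the `vector` class from the
question (assuming `self.elements` stays the backing list):
# inside class vector:
def __setitem__(self, index, value):
    self.elements[index] = value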
|
Importing system-wide installed module into Anaconda
Question: I have an issue with module importing within Anaconda. I'm using the latest
Anaconda 3 installed in my Linux home directory in order to have the latest
jupyter, scipy, numpy and so on. I also have installed a scientific package
(Kwant) for quantum transport calculations via Ubuntu ppa deb package: the
libraries are found in `/usr/lib/python3/dist-packages/kwant`
I have set python as python3.4 with alias command and exported environment
variables `PYTHONPATH=$PYTHONPATH:/usr/lib/python3/dist-packages/kwant` and
the same for `LD_LIBRARYPATH`. When I launch jupyter notebook and execute the
first cell with `import kwant` I get the error message
ImportError: No module named 'kwant'
Is it possible importing a module which is installed in `/usr` system
directory whereas Anaconda is in the `/home directory` ?
Thanks in advance
Answer: Change
PYTHONPATH=$PYTHONPATH:/usr/lib/python3/dist-packages/kwant
to
PYTHONPATH=$PYTHONPATH:/usr/lib/python3/dist-packages
and see if that works for you. The `PYTHONPATH` entry has to point at the directory
that contains the `kwant` package, not at the package directory itself.
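If editing the environment variable is awkward, a quick sketch you can run at the top
of the notebook does the same thing for the current session (whether the compiled
extension then loads cleanly under Anaconda's interpreter is a separate question):
import sys
sys.path.append('/usr/lib/python3/dist-packages')  # directory containing the kwant package
import kwant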
|
Django URL pattern does not match with my config
Question: I'm new to Django and I'm having a problem with the url of the page
"`DetailLivre.html`". It shows:
Using the `URLconf` defined in `Ilhem.urls`, Django tried these URL patterns,
in this order:
^Bibliotheque/ ^$ [name='index']
^Bibliotheque/ ^(?P<livre_id>[0-9]+)/$ [name='DetailLivre']
^admin/
The current URL, `Bibliotheque/BOOK1/`, didn't match any of these.
I'm using `Python 2.7.6` and `django 1.8.5`. Can you please help?
**index.html:**
<h1>La liste des Livres </h1>
{% block content %}
{% block theme %}
{% load bootstrap_themes %}
{% bootstrap_styles theme='default' type='min.css' %}
{% bootstrap_styles theme='cosmo' type='css' %}
{% bootstrap_styles theme='united' type='less' %}
{% bootstrap_script use_min=True %}
{% endblock theme %}
{% if BookList%}
<div class="container-fluid">
<div class="row">
<ul>
{% for livre in BookList %}
<li><a href="/Bibliotheque/{{ livre.Titre }}/">{{ livre.Titre }}</a></li>
{% endfor %}
</ul>
</div>
</div>
{% else %}
<p>Pas de Livres.</p>
{% endif %}
{% endblock %}
**DetailLivre.html**
<h1>{{ Auteur }}</h1>
<label for="Nom">{{ Auteur.Nom }}</label><br />
<label for="Prenom">{{ Auteur.Prenom }}</label><br />
<label for="dateNaissance">{{ Auteur.dateNaissance }}</label><br />
<label for="Lieu_de_naissance">{{ Auteur.Lieu_de_naissance }}</label><br />
<label for="Specialite">{{ Auteur.Specialite }}</label>
**Views.py**
from django.shortcuts import render
from django.template import RequestContext, loader
from django.http import HttpResponse
from django.template import RequestContext, loader
from django.shortcuts import get_object_or_404, render
from .models import Livre, Auteur
def index(request):
BookList = Livre.objects.all()
template = loader.get_template('Bibliotheque/index.html')
context = RequestContext(request, {
'BookList': BookList,
})
return HttpResponse(template.render(context))
def DetailLivre(request, livre_id):
livre = Livre.objects.get(pk=livre_id)
return render(request, 'Bibliotheque/DetailLivre.html', {'livre.Titre': livre.Titre})
**url.py**
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^(?P<livre_id>[0-9]+)/$', views.DetailLivre, name='DetailLivre'),
]
Answer: You are missing the `BOOK` prefix in your URL pattern; the template links with the title (`BOOK1`), but `^(?P<livre_id>[0-9]+)/$` only matches digits. Use something like `BOOK(?P<livre_id>[-\d]+)` instead.
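A minimal sketch of the adjusted pattern (this just spells out the suggestion above;
whether the captured number matches your primary keys depends on your data):
urlpatterns = [
    url(r'^$', views.index, name='index'),
    url(r'^BOOK(?P<livre_id>[-\d]+)/$', views.DetailLivre, name='DetailLivre'),
]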
|
Create a single-file executable using py2exe
Question: I have written a python code which displays a window using Tkinter. It also
calls another python file present in the same folder. I converted the .py
files into a .exe file using py2exe. But i am facing the below issues:
1. The output (in dist folder) is a set of files and not a single executable file.
* As per my understanding using the `'bundle_files':1,'compressed':True`, i should be getting a single file.
* Now i have two .exe files and 1 folder: **w9xpopen.exe,myframe.py**(this is my file) and folder "**tcl** "
2. The icon is not changed.
* I had mentioned `"icon_resources":[(0,"icon.ico")]` in the "windows" section
Below is the **setup.py** i used:
from distutils.core import setup
import py2exe, glob,sys,os
sys.argv.append('py2exe')
setup(
options={'py2exe':{'bundle_files':1,'compressed':True}},
windows=[{"script":'hr_data_downloader.py',"icon_resources": [(0,"icon.ico")]}],
data_files = [],
zipfile=None
)
I had issues running the executable at first but after going through the below
posts, i corrected it by explicitly adding the two dlls.
[Creating single EXE using py2exe for a Tkinter
program](http://stackoverflow.com/questions/14975018/creating-single-exe-
using-py2exe-for-a-tkinter-program)
[py2exe - generate single executable
file](http://stackoverflow.com/questions/112698/py2exe-generate-single-
executable-file)
Please let me know if it is possible to create a single-file executable by
modifying the setup files or any other py2exe files. Also please tell me why
the icon is not shown for the created .exe
I am open to trying other distribution utilities besides py2exe if they can help me
create a single-file executable.
Answer: I figured out how to do it using PyInstaller. Although it makes the exe
considerably large, I am happy that I have only a single file.
Below is what I did:
1. Installed pyinstaller and pywin32
2. Opened a command prompt
3. Went to my code folder
4. Used the command `pyinstaller --onefile --windowed myframe.py`
The manual for PyInstaller has detailed explanations.
|
PuLP not printing output on IPython cell
Question: I am using [PuLP](https://pythonhosted.org/PuLP/ "PuLP") and IPython/Jupyter
Notebook for a project.
I have the following cell of code:
import pulp
model = pulp.LpProblem('Example', pulp.LpMinimize)
x1 = pulp.LpVariable('x1', lowBound=0, cat='Integer')
x2 = pulp.LpVariable('x2', lowBound=0, cat='Integer')
model += -2*x1 - 3*x2
model += x1 + 2*x2 <= 7
model += 2*x1 + x2 <= 7
model.solve(pulp.solvers.COIN(msg=True))
When I execute the cell, the output is simply:
1
When I look at the terminal running the Notebook server, I can see the output
of the solver (in this case: COIN). The same happens if a change the
_model.solve_ argument to
model.solve(pulp.solvers.PULP_CBC_CMD(msg=True))
or
model.solve(pulp.solvers.PYGLPK(msg=True))
However, when I use the Gurobi Solver, with the line
model.solve(pulp.solvers.GUROBI(msg=True))
the output of the solver is displayed on the Notebook cell, which is the
behavior I want. In fact, I would be happy with any free solver printing its
output directly on the Notebook cell.
I could not find directions on how to approach this issue in PuLP
documentation. Any help would be appreciated. I am also curious to know if
someone else gets this behavior.
I am using Linux Mint, 64 Bits, IPython 4.0.0 and PuLP 1.6.0.
Answer: Use the `%%python` cell magic so the terminal output is printed in the notebook cell.
%%python
import pulp
model = pulp.LpProblem('Example', pulp.LpMinimize)
x1 = pulp.LpVariable('x1', lowBound=0, cat='Integer')
x2 = pulp.LpVariable('x2', lowBound=0, cat='Integer')
model += -2*x1 - 3*x2
model += x1 + 2*x2 <= 7
model += 2*x1 + x2 <= 7
model.solve(pulp.solvers.COIN(msg=True))
|
Executing multiple python pandas dataframe methods on one csv
Question: I'm a bit new to programming in general. I've picked up a small project to
automate some csv changes via pandas dataframe.
I've been able to figure out a few of the changes I need to make;
unfortunately, when I print the current data frame, it only prints out the
changes from one of the functions and not the other (and vice versa).
My code so far:
import unicodecsv
import datetime as dt
from pandas import DataFrame
import pandas as pd
# using Pandas for table view to rename '\xef\xbb\xbfId' to 'Id'
df = pd.read_csv('Usage_sample.csv')
df.rename(columns = {'\xef\xbb\xbfId':'ID'})
# add column between ID and Client Name called "Usage Week Of"
df.insert(1,"Usage Week Of", dt.datetime.today().strftime("%Y-%m-%d"))
df
As you can see the two methods that I'm using is "rename" and "insert". Any
help would be much appreciated. Thanks!
Answer: Your `rename` does not have effect on `df` because it returns a new dataframe
which is not used. If you want to modify `df`, use `inplace`:
df.rename(columns = {'\xef\xbb\xbfId':'ID'}, inplace=True)
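Equivalently (a small added sketch, not from the original answer), keep the returned
frame instead of mutating in place:
df = df.rename(columns={'\xef\xbb\xbfId': 'ID'})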
|
How to set gunicorn to find a flask application?
Question: Can somebody please help me get gunicorn to find my Flask application? I guess
that because the application is created inside create_app(), it is hidden
from run.py; however, I don't know how to fix it. Here is the error when I
run gunicorn with the application:
1 (venv) MacPro:11a toshio$ gunicorn run:app
2 [2015-12-27 15:10:45 +0900] [83437] [INFO] Starting gunicorn 19.4.1
3 [2015-12-27 15:10:45 +0900] [83437] [INFO] Listening at: http://127.0.0.1:8000 (83437)
4 [2015-12-27 15:10:45 +0900] [83437] [INFO] Using worker: sync
5 [2015-12-27 15:10:45 +0900] [83440] [INFO] Booting worker with pid: 83440
6 Failed to find application: 'run'
7 [2015-12-27 15:10:51 +0900] [83440] [INFO] Worker exiting (pid: 83440)
8 [2015-12-27 15:10:52 +0900] [83437] [INFO] Shutting down: Master
9 [2015-12-27 15:10:52 +0900] [83437] [INFO] Reason: App failed to load.
10 (venv) MacPro:11a toshio$
The application structure is as follows:
1 ├── app
2 │ ├── __init__.py
3 │ ├── main
4 │ │ ├── __init__.py
5 │ │ ├── forms.py
6 │ │ └── routes.py
7 │ ├── models.py
8 │ ├── static
9 │ │ ├── js
10 │ │ ├── css
11 │ │ ├── pdf
12 │ └── templates
13 │ ├── base.html
14 │ ├── index.html
15 │ ├── login.html
16 ├── config
17 │ └── development.py
18 ├── data-dev.sqlite3
19 ├── run.py
The package constructor **app/__init__.py** code:
import os
import subprocess
import imghdr
from datetime import datetime
from flask import Flask, render_template, session, g, redirect, url_for, request
from flask.ext.script import Manager
from flask.ext.wtf import Form
from flask.ext.bootstrap import Bootstrap
from wtforms import StringField, SubmitField, ValidationError, PasswordField, BooleanField
from wtforms.validators import Required, Length
from werkzeug import secure_filename
from werkzeug.security import generate_password_hash, check_password_hash
from flask_wtf.file import FileField
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.login import LoginManager, UserMixin, login_user, logout_user, login_required

bootstrap = Bootstrap()
db = SQLAlchemy()
lm = LoginManager()
lm.login_view = 'main.login'

def create_app(config_name):
    """ Create an application instance """
    app = Flask(__name__, static_folder = 'static')

    # import configuration
    cfg = os.path.join( os.getcwd(), 'config', config_name + '.py' )
    app.config.from_pyfile(cfg)

    # initialize extensions
    bootstrap.init_app(app)
    db.init_app(app)
    lm.init_app(app)

    # import blueprints
    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)

    return app
Lastly, **run.py** is the following:
#!/usr/bin/env python
from app import create_app, db
from app.models import User, Book


if __name__ == '__main__':
    app = create_app('development')
    with app.app_context():
        db.create_all()
        if User.query.filter_by(username='somebody').first() is None:
            User.register('somebody', 'abc')
    app.run()
Answer: Try to run gunicorn from the command line like this:
gunicorn 'app:create_app("development")'
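Alternatively, a minimal sketch (my own assumption, not part of the original answer)
that keeps `gunicorn run:app` working by exposing a module-level `app` object in run.py:
# run.py
from app import create_app

app = create_app('development')

if __name__ == '__main__':
    app.run()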
|
Python: ImportError happens on IDE suggestion
Question: **This is my packages structure:**
[](http://i.stack.imgur.com/PHZaf.png)
**This is my __init__.py inside the settings package:**
from settings import *
**This is my functions.py:**
from git import *
import initializer.settings as settings
_repo_remote = "https://%s:%s@%s" % (settings.git_username, settings.git_password, git_info["remote"])
Although I imported the settings package with my IDE auto-complete, I keep
getting:
ImportError: No module named initializer.settings
When changing my import to:
import settings
The code works, but the IDE is showing an error. Why does this happen and what's
wrong? I assume it is something to do with the path it tries to load the module from,
but I don't know how to change or control it.
Answer: If your `main.py` is in the `'initializer'` folder (it appears to be?), you
could import simply like this instead:
import settings
As long as the `__init__.py` in the `'settings'` folder is like you described
it. You would need to have the `'main.py'` in the topmost folder to use it as
you had it.
|
Python CSV write to file unreadable (Chinese characters)
Question: I am trying to performing text analysis on Chinese texts. The program is
provided below. I got the result with unreadable characters such as
`浜烘皯鏃ユ姤绀捐`. And if I change the output file `result.csv` to `result.txt`, the
characters are correct as `人民日报社论`. So what's wrong with this? I cannot
figure it out. I tried several ways, including adding a decoder and encoder.
# -*- coding: utf-8 -*-
import os
import glob
import jieba
import jieba.analyse
import csv
import codecs
segList = []
raw_data_path = 'monthly_raw_data/'
file_name = ["201010", "201011", "201012", "201101", "201103", "201105", "201107", "201109", "201110", "201111", "201112", "201201", "201202", "201203", "201205", "201206", "201208", "201210", "201211"]
jieba.load_userdict("customized_dict.txt")
for name in file_name:
all_text = ""
multi_line_text = ""
with open(raw_data_path + name + ".txt", "r") as file:
for line in file:
if line != '\n':
multi_line_text += line
templist = multi_line_text.split('\n')
for text in templist:
all_text += text
seg_list = jieba.cut(all_text,cut_all=False)
temp_text = []
for item in seg_list:
temp_text.append(item.encode('utf-8'))
stop_list = []
with open("stopwords.txt", "r") as stoplistfile:
for item in stoplistfile:
stop_list.append(item.rstrip('\r\n'))
text_without_stopwords = []
for word in temp_text:
if word not in stop_list:
text_without_stopwords.append(word)
segList.append(text_without_stopwords)
with open("results/result.csv", 'wb') as f:
writer = csv.writer(f)
writer.writerows(segList)
Answer: For UTF-8 encoding, Excel requires the BOM (byte order mark) codepoint to be written
at the start of the file, or it will assume `ANSI` encoding, which is locale-
dependent. `U+FEFF` is the Unicode BOM. Here's an example that will open in
Excel correctly:
#!python2
#coding:utf8
import csv
data = [[u'American',u'美国人'],
[u'Chinese',u'中国人']]
with open('results.csv','wb') as f:
f.write(u'\ufeff'.encode('utf8'))
w = csv.writer(f)
for row in data:
w.writerow([item.encode('utf8') for item in row])
For completeness, Python 3 makes this easier. Note `newline=''` parameter
instead of `wb` and `utf-8-sig` encoding automatically adds a BOM. Unicode
strings are written directly instead of needing to encode each item.
#!python3
#coding:utf8
import csv
data = [[u'American',u'美国人'],
[u'Chinese',u'中国人']]
with open('results.csv','w',newline='',encoding='utf-8-sig') as f:
w = csv.writer(f)
w.writerows(data)
There is also the 3rd party module `unicodecsv` that makes Python 2 easier as
well:
#!python2
#coding:utf8
import unicodecsv
data = [[u'American',u'美国人'],
[u'Chinese',u'中国人']]
with open('results.csv','wb') as f:
w = unicodecsv.writer(f,encoding='utf-8-sig')
w.writerows(data)
|
tracking frequency of words in an ebay search result
Question: Using Python 3.5, what I'm looking to do is go to the results page of an
eBay search by generating a link, save the source code as an XML
document, and iterate through every individual listing, of which there could be
1000 or more. Next I want to create a dictionary with every word that appears
in every listing's title (title only) and its corresponding frequency of
appearance. So, for example, if I search 'honda civic' and thirty of the
results are 'honda civic ignition switch', I'd like my results to come out as
`results = {'honda':70, 'civic':60, 'ignition':30, 'switch':30, 'jdm':15,
'interior':5}` etc.
Here's a link I use: [http://www.ebay.com/sch/Car-Truck-
Parts-/6030/i.html?_from=R40&LH_ItemCondition=4&LH_Complete=1&LH_Sold=1&_mPrRngCbx=1&_udlo=100&_udhi=700&_nkw=honda+%281990%2C+1991%2C+1992%2C+1993%2C+1994%2C+1995%2C+1996%2C+1997%2C+1998%2C+1999%2C+2000%2C+2001%2C+2002%2C+2003%2C+2004%2C+2005%29&_sop=16](http://www.ebay.com/sch/Car-
Truck-
Parts-/6030/i.html?_from=R40&LH_ItemCondition=4&LH_Complete=1&LH_Sold=1&_mPrRngCbx=1&_udlo=100&_udhi=700&_nkw=honda+%281990%2C+1991%2C+1992%2C+1993%2C+1994%2C+1995%2C+1996%2C+1997%2C+1998%2C+1999%2C+2000%2C+2001%2C+2002%2C+2003%2C+2004%2C+2005%29&_sop=16)
The problem I'm having is that I only get the first 50 results, instead of the
thousands of results I could potentially get with different search options. What
might be a better method of going about this?
And my code:
import requests
from bs4 import BeautifulSoup
from collections import Counter
r = requests.get(url)
myfile = 'c:/users/' + myquery
fw = open(myfile + '.xml', 'w')
soup = BeautifulSoup(r.content, 'lxml')
for item in soup.find_all('ul',{'class':'ListViewInner'}):
fw.write(str(item))
fw.close()
print('...complete')
fr = open(myfile + '.xml', 'r')
wordfreq = Counter()
for i in fr:
words = i.split()
for i in words:
wordfreq[str(i)] = wordfreq[str(i)] + 1
fw2 = open(myfile + '_2.xml', 'w')
fw2.write(str(wordfreq))
fw2.close()
Answer: You are getting the first 50 results because EBay display 50 results for each
page. The solution is to parse one page at time. With this search, you can use
a different url:
[http://www.ebay.com/sch/Car-Truck-
Parts-/6030/i.html?_from=R40&LH_ItemCondition=4&LH_Complete=1&LH_Sold=1&_mPrRngCbx=1&_udlo=100&_udhi=700&_sop=16&_nkw=honda+%281990%2C+1991%2C+1992%2C+1993%2C+1994%2C+1995%2C+1996%2C+1997%2C+1998%2C+1999%2C+2000%2C+2001%2C+2002%2C+2003%2C+2004%2C+2005%29&_pgn=1&_skc=50&rt=nc](http://www.ebay.com/sch/Car-
Truck-
Parts-/6030/i.html?_from=R40&LH_ItemCondition=4&LH_Complete=1&LH_Sold=1&_mPrRngCbx=1&_udlo=100&_udhi=700&_sop=16&_nkw=honda+%281990%2C+1991%2C+1992%2C+1993%2C+1994%2C+1995%2C+1996%2C+1997%2C+1998%2C+1999%2C+2000%2C+2001%2C+2002%2C+2003%2C+2004%2C+2005%29&_pgn=1&_skc=50&rt=nc)
Notice the `_pgn=1` parameter in the url? This is the number of the page
currently displayed. If you provide a number that exceeds the number of
pages for the search, an error message will appear in an element with class
`"sm-md"`.
So you can do something like:
page = 1
has_page = True
myfile = 'c:/users/' + myquery
fw = open(myfile + '.xml', 'w')
while has_page:
    # rebuild the url on every pass so the _pgn page number actually advances
    url = ("http://www.ebay.com/sch/Car-Truck-Parts-/6030/i.html?_from=R40"
           "&LH_ItemCondition=4&LH_Complete=1&LH_Sold=1&_mPrRngCbx=1"
           "&_udlo=100&_udhi=700&_sop=16"
           "&_nkw=honda+%281990%2C+1991%2C+1992%2C+1993%2C+1994%2C+1995%2C"
           "+1996%2C+1997%2C+1998%2C+1999%2C+2000%2C+2001%2C+2002%2C+2003%2C"
           "+2004%2C+2005%29"
           "&_pgn=" + str(page) + "&_skc=50&rt=nc")
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "lxml")
    error_msg = soup.find_all('p', {'class': "sm-md"})
    if len(error_msg) > 0:
        has_page = False
        continue
    for item in soup.find_all('ul', {'class': 'ListViewInner'}):
        fw.write(str(item))
    page += 1
fw.close()
I only tested stepping through the pages and printing the ul, and it worked fine.
|
How do I display the hosts inside the Google Chrome sqlite3 "cookie" database using Python
Question: I'm using Python to access the "cookie" chrome sqlite3 db to retrieve the host
keys, but getting error below
import sqlite3
conn = sqlite3.connect(r"C:\Users\tikka\AppData\Local\Google\Chrome\User Data\Default\Cookies")
cursor = conn.cursor()
cursor.execute("select host_key from cookies")
results = cursor.fetchall()
print results
conn.close()
Error
Traceback (most recent call last):
File "C:\Python27\cookies.py", line 4, in <module>
cursor.execute("select host_key from cookies")
DatabaseError: malformed database schema (is_transient) - near "where": syntax error
>>>
Answer: Thanks to the [link](http://www.obsidianforensics.com/blog/upgrading-python-
sqlite) provided by [alecxe](http://stackoverflow.com/users/771848/alecxe), I was
able to fix it by upgrading the sqlite3 version from 3.6.21 to 3.9.2. I upgraded
by downloading the [new version](https://www.sqlite.org/2015/sqlite-dll-
win64-x64-3090200.zip) from [this site](https://www.sqlite.org/download.html)
and placing the DLL in C:\Python27\DLLs.
|
Django: django-admin startproject ImportError
Question: I have two Python virtual environments; in one of them I created a Django project
that is located on the Desktop. I recently created another virtualenv to start another
Django project. However, when I run `django-admin startproject projectname` within the new virtualenv,
I get an ImportError saying that the other Django app couldn't be imported.
What would be trying to import my old app? Why would this be happening?
I am running Django 1.9 on Debian 8.
Traceback (most recent call last):
File "/home/lie/.virtualenvs/tagger/bin/django-admin", line 11, in <module>
sys.exit(execute_from_command_line())
File "/home/lie/.virtualenvs/tagger/lib/python3.4/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line
utility.execute()
File "/home/lie/.virtualenvs/tagger/lib/python3.4/site-packages/django/core/management/__init__.py", line 302, in execute
settings.INSTALLED_APPS
File "/home/lie/.virtualenvs/tagger/lib/python3.4/site-packages/django/conf/__init__.py", line 55, in __getattr__
self._setup(name)
File "/home/lie/.virtualenvs/tagger/lib/python3.4/site-packages/django/conf/__init__.py", line 43, in _setup
self._wrapped = Settings(settings_module)
File "/home/lie/.virtualenvs/tagger/lib/python3.4/site-packages/django/conf/__init__.py", line 99, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/home/lie/.virtualenvs/tagger/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2224, in _find_and_load_unlocked
ImportError: No module named 'AUVSIDataProc'
Answer: I found there was an environment variable called `DJANGO_SETTINGS_MODULE`.
I just cleared it (`export DJANGO_SETTINGS_MODULE=""`, or equivalently
`unset DJANGO_SETTINGS_MODULE`), and I was able to start a new project.
|
overflow error Plot
Question: I want to do some audio and music processing. Before that, I created
a sample signal with a 10 second sweep. I have a simple script which has to
plot some signals. The first signal is a simple sine; the second a sweep; both with
frequencies just below the Nyquist frequency, so that's no problem.
The Code:
#import
import numpy as np
import scipy.signal as sig
import matplotlib.pylab as plt
f0 = 50
f1 = 20000
t1 = 10
t = np.arange(0,t1,1/44100)#[numpy.newaxis];
print(t.shape)
sine = np.sin(2*np.pi*f0*t)
plt.plot(t, sine)
plt.xlabel('Angle [rad]')
plt.ylabel('sin(t)')
plt.axis('tight')
plt.show()
sweep = sig.chirp(t,f0,t1,f1,'linear',90)
plt.plot(t, sweep)
plt.xlabel('Angle [rad]')
plt.ylabel('sin(t)')
plt.axis('tight')
plt.show()
When I run the Python code it runs fine with the simple sine wave, but not
with the sweep.
It gave the following error(s):
runfile('C:/Users/****/Documents/python/test_sweep.py', wdir='C:/Users/****/Documents/python')
(441000,)
Traceback (most recent call last):
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\IPython\core\formatters.py", line 330, in __call__
return printer(obj)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\IPython\core\pylabtools.py", line 207, in <lambda>
png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\IPython\core\pylabtools.py", line 117, in print_figure
fig.canvas.print_figure(bytes_io, **kw)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\backend_bases.py", line 2158, in print_figure
**kwargs)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\backends\backend_agg.py", line 521, in print_png
FigureCanvasAgg.draw(self)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\backends\backend_agg.py", line 469, in draw
self.figure.draw(self.renderer)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\figure.py", line 1085, in draw
func(*args)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\axes\_base.py", line 2110, in draw
a.draw(renderer)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\lines.py", line 715, in draw
drawFunc(renderer, gc, tpath, affine.frozen())
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\lines.py", line 1072, in _draw_lines
self._lineFunc(renderer, gc, path, trans)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\lines.py", line 1112, in _draw_solid
renderer.draw_path(gc, path, trans)
File "C:\Users\****\Documents\python\WinPython-64bit-3.4.3.5\python-3.4.3.amd64\lib\site-packages\matplotlib\backends\backend_agg.py", line 163, in draw_path
self._renderer.draw_path(gc, path, transform, rgbFace)
OverflowError: Allocated too many blocks
When changing the frequency `f1` to around 10% of the sample frequency I don't
have any errors. But I want to create some sweeps within the CD audio range, so
what is happening, and how can I avoid this problem?
Edit: I use Spyder with IPython on Windows/Ubuntu.
Edit 2: I know that the screen resolution is not fine enough, but GNU Octave/MATLAB
handle this well. The simple sine wave with the same number of samples works fine,
so it's a difference in how the data points are handled.
Answer: Thanks to the comment by @ali_m and his link, I found a solution.
According to the answer there, I need to set `agg.path.chunksize` to somewhere
between 10 000 and 100 000. Because I don't want to do that in the `matplotlibrc`
file, I had to do it in the script.
According to a discussion on their own
[github](https://github.com/matplotlib/matplotlib/pull/4464) I found the right
method for doing that: I have to add `plt.rcParams['agg.path.chunksize'] = 10000`
to the script, and now it works fine.
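For reference, a minimal sketch of where that line goes (set it before any plotting calls):
import matplotlib.pylab as plt

plt.rcParams['agg.path.chunksize'] = 10000  # let the Agg backend split very long paths into chunks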
|
How do I get the current length of the Text in a Tkinter Text widget
Question: I am writing a light webtexting application, and I'm trying to display the
current number of characters in a Tkinter Text widget used for writing the
message to be sent in the webtext. The code I have at the moment can be seen
below; I'm using Python.
from Tkinter import *
root = Tk()
msgLabel = Label(text="Message")
msgLabel.grid(row=0, column=0)
msg = Text(width=40, height=4, wrap="word")
msg.grid(row=0, column=1, padx=10, pady=5)
#Try to display number of characters within message to user
charCount = Label(text="Character Count: "+str(len(msg.get("1.0", 'end-1c'))))
charCount.grid(row=1, column=1, pady=5, padx=5)
root.mainloop()
I'd like to be able to display the number of characters in the message written
by the user, since there is a 160 character limit to each webtext. Is it
possible to display the current character length, that updates as text is
inserted and removed? Thanks in advance
Answer: We use `StringVar` to set new text to label. We need to bind `msg` for key
pressing.
from Tkinter import *
root = Tk()
#-----------------------------------------------
def update(event):
var.set(str(len(msg.get("1.0", 'end-1c'))))
#-----------------------------------------------
msgLabel = Label(text="Message")
msgLabel.grid(row=0, column=0)
msg = Text(width=40, height=4, wrap="word")
msg.grid(row=0, column=1, padx=10, pady=5)
#----------------
var = StringVar()
#----------------
#Try to display number of characters within message to user
#----------------------------------
charCount = Label(textvariable=var)
#----------------------------------
charCount.grid(row=1, column=1, pady=5, padx=5)
#------------------------
msg.bind("<Key>", update)
#------------------------
root.mainloop()
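One small variation (my own note, not part of the original answer): binding
`<KeyRelease>` instead of `<Key>` fires after the character has been inserted, so
the count never lags one keystroke behind:
msg.bind("<KeyRelease>", update)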
|
Integer slicing in pandas different for rows and columns?
Question: Coming from R I try to get my head around integer slicing for pandas
dataframes. What puzzles me is the different slicing behavior for rows and
columns using the same integer/slice expression.
import pandas as pd
x = pd.DataFrame({'a': range(0,6),
'b': range(7,13),
'c': range(14, 20)})
x.ix[0:2, 0:2] # Why 3 x 2 and not 3 x 3 or 2 x 2?
a b
0 0 7
1 1 8
2 2 9
We get 3 rows but only 2 columns. In the docs I find that different from
standard python, [label based slicing in pandas is
inclusive](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#endpoints-
are-inclusive). Does this apply here and is it inclusive for rows but not for
columns then?
**Can someone explain the behavior and the rationale behind it?**
Answer: You are correct that there is a distinction between **label based indexing**
and **position based indexing**. The first includes the end label, while
typical python position based slicing does not include the last item.
In the example you give: `x.ix[0:2, 0:2]` the rows are being sliced based on
the labels, so '2' is included (returning 3 rows), while the columns are
sliced based on position, hence returning only 2 columns.
If you want guaranteed position based slicing (to return a 2x2 frame in this
case), `iloc` is the indexer to use:
In [6]: x.iloc[0:2, 0:2]
Out[6]:
a b
0 0 7
1 1 8
For guaranteed label based slicing, you can use the `loc` indexer.
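For example (a small added illustration), label-based slicing includes both endpoints:
In [7]: x.loc[0:2, 'a':'b']
Out[7]:
   a  b
0  0  7
1  1  8
2  2  9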
The `ix` indexer you are using, is more flexible (not strict in type of
indexing). It is primarily label based, but will fall back to position based
(when the labels are not found and you are using integers). This is the case
in your example for the columns. For this reason, it is **recommended to
always use`loc`/`iloc` instead of `ix`** (unless you need mixed label/position
based indexing).
See the docs for a more detailed overview of the different types of indexers:
<http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-
for-indexing>
|
Why is self superfluous for a method when using bottle in a class?
Question: I usually use `bottle` in a naked script:
import bottle
@bottle.route('/ping')
def ping():
return "pong"
bottle.run()
It works fine, a call to `http://127.0.0.1:8080/ping` returns `pong`. I now
want to use a class for the same functionality:
import bottle
class PingPong:
@bottle.route('/ping')
def ping(self):
return "pong"
def run(self):
bottle.run()
if __name__ == "__main__":
p = PingPong()
p.run()
A call to `http://127.0.0.1:8080/ping` now returns a `500` and the traceback
on the server is
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\bottle.py", line 862, in _handle
return route.call(**args)
File "C:\Python34\lib\site-packages\bottle.py", line 1732, in wrapper
rv = callback(*a, **ka)
TypeError: ping() missing 1 required positional argument: 'self'
127.0.0.1 - - [28/Dec/2015 19:15:15] "GET /ping HTTP/1.1" 500 745
If I remove `self` from the method definition the server works fine.
Why is the self parameter superfluous in this case? How is that different from
a normal method where self is passed by default and corresponds to 'no
parameters' in the call of the method?
Answer: That's because bottle does not know that the function you have passed is a
method, it has no notion of methods. Also, ask yourself: should bottle
automatically create instances?
If you want to use a bound instance method, do this instead:
class PingPong:
def ping(self):
return "pong"
def run(self):
bottle.route('/ping', callback=self.ping)
bottle.run()
if __name__ == "__main__":
p = PingPong()
p.run()
That is, pass the bound method to `route()` once the instance has been
initialized.
|
Derived class doesn't recognise arguments of method from parent
Question: I'm trying to make a set of functions to operate easily through some data. The
problem I'm facing is: it seems to recognize and use methods from the parent
class, except one: `show()`, giving me errors about **unexpected arguments**.
Here's a sample of the classes:
from treelib import Tree, Node
class Join(Tree):
def __init__(self, id, desc, childs=(), *args, **kwargs):
Tree.__init__(self, *args, **kwargs)
self.id = id
self.desc = desc
self.value = None
self.parent = None
self.childs = None
self.create_node(tag=desc, identifier=id)
for i in childs:
self.paste(self.id, i)
def getSons(self):
sons = self.children(self.id)
return sons
def getID(self):
return self.id
def getDesc(self):
return self.desc
def show(self):
self.show(key=lambda x: x.tag, reverse=True, line_type='ascii-em')
class Get(Tree):
def __init__(self, id, desc, primitive, *args, **kwargs):
Tree.__init__(self, *args, **kwargs)
self.id = id
self.desc = desc
self.parent = None
self.primitive = primitive
self.create_node(tag=desc, identifier=id, data=primitive)
def getID(self):
return self.id
def getDesc(self):
return self.desc
def show(self):
self.show(key=lambda x: x.tag, reverse=True, line_type='ascii-em')
class Primitive():
def __init__(self, value):
self.value = value
def getValue(self):
return self.value
def show(self):
pass
#print '\t -> ' + str(self.value)
If, for example, I do this on another .py
prim = Primitive(0)
get1 = Get("get1", "Some random thing", prim)
get1.show()
it tells me that `key` is an unexpected argument. I even checked the library's
.py file, the argument is there:
def show(self, nid=None, level=ROOT, idhidden=True, filter=None,
key=None, reverse=False, line_type='ascii-ex'):
The `create_node()` method works just fine! That's what's weird. Any
suggestions?
I'm using `treelib` in **Python 2.7**
Answer: Your method `show()` calls itself:
def show(self):
self.show(key=lambda x: x.tag, reverse=True, line_type='ascii-em')
Remove it in `Get` and change it in `Join` to:
def show(self):
super(Join, self).show(key=lambda x: x.tag, reverse=True, line_type='ascii-em')
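If you still want the custom display in `Get` as well, the same pattern should
work there too (a quick sketch along the same lines, untested):

    class Get(Tree):
        # ... __init__ and the getters unchanged ...
        def show(self):
            # delegate to the parent implementation instead of recursing
            super(Get, self).show(key=lambda x: x.tag, reverse=True, line_type='ascii-em')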
|
How to remove select characters from xml parse in python / django?
Question: **Context**
I am working on a django project and I need to loop through a nested
dictionary to print the values
Here's the dictionary:
> {'body': {u'@copyright': u'All data copyright Unitrans ASUCD/City of Davis
> 2015.', u'predictions': {u'@routeTitle': u'A',
> u'@dirTitleBecauseNoPredictions': u'Outbound to El Cemonte',
> u'@agencyTitle': u'Unitrans ASUCD/City of Davis', u'@stopTag': u'22258',
> u'@stopTitle': u'Silo Terminal & Haring Hall (WB)', u'@routeTag': u'A',
> u'message': [{u'@text': u'Weekend service is running Monday-Wednesday Dec.
> 28-30.', u'@priority': u'Normal'}, {u'@text': u'The A-line and Z-line do not
> run on weekends. Use O-line for weekend service.', u'@priority':
> u'Normal'}]}}}
I am parsing the dictionary from the following url:
[http://webservices.nextbus.com/service/publicXMLFeed?command=predictions&a=unitrans&r=A&s=22258](http://webservices.nextbus.com/service/publicXMLFeed?command=predictions&a=unitrans&r=A&s=22258)
**Problem 1**
I am getting trouble displaying the values of keys with '@' in them using
django template tags, for example
{% for i in data%}
{% i.@copyright %}
{% endfor %}
This gives an error saying could not parse remainder.
**Problem 2**
One of the values has a nested dictionary in it with square brackets
> [{u'@text': u'Weekend service is running Monday-Wednesday Dec. 28-30.',
> u'@priority': u'Normal'}, {u'@text': u'The A-line and Z-line do not run on
> weekends. Use O-line for weekend service.', u'@priority': u'Normal'}]
I cannot loop through this using for loop template tags
**The solution I have in mind**
In order to solve this and make it simpler I am looking to strip the
characters `'@'`, `'['` and `']'`from the xml, this would leave me with a much
simpler dictionary which would be easy to loop through.
**My Python Code Right Now in views.py**
import xmltodict
import requests
def prediction(request, line, s_id):
url = "http://webservices.nextbus.com/service/publicXMLFeed? command=predictions&a=unitrans&r=" + line + "&s=" + s_id
data = requests.get(url)
data = xmltodict.parse(data, dict_constructor=dict)
data_dict = {}
data_dict["data"] = data
return render(request, 'routes/predictions.html', data_dict)
**What I want to display on page predictions.html**
Route Tag: A
Message : Weekend Service is running Monday-Wednesday Dec. 28-30.
The A-Line and Z-Line do not run on weekends. use O-Line for weekend service.
Priority: Normal
I would appreciate any inputs on this problem. Thank you for your time.
Answer: In xmltodict, the '@' symbols are there to indicate attributes of xml nodes,
and the '[' and ']' are used to delimit element values that are themselves a
list of values. (Here, it indicates the 'message' value is itself a list of
two message objects). You can certainly try to read in the dict as raw text
and scrape out what you need, but that won't take advantage of the reason most
people are importing it to begin with: To organize the data and make it easier
to access.
Instead of scraping the text, you can easily craft a template that would just
pull the specific values from the dict that you want. Your data dict should be
structured something like this:
{
body:
{
u'@copyright': u'All data copyright Unitrans ASUCD/City of Davis 2015.',
u'predictions':
{
u'@routeTitle': u'A',
u'@dirTitleBecauseNoPredictions': u'Outbound to El Cemonte',
u'@agencyTitle': u'Unitrans ASUCD/City of Davis',
u'@stopTag': u'22258',
u'@stopTitle': u'Silo Terminal & Haring Hall (WB)',
u'@routeTag': u'A',
u'message':
[
{
u'@text': u'Weekend service is running Monday-Wednesday Dec. 28-30.',
u'@priority': u'Normal'
},
{
u'@text': u'The A-line and Z-line do not run on weekends. Use O-line for weekend service.',
u'@priority': u'Normal'
}
]
}
}
}
To get the output you want, create a template tailored for this data and then
just insert directly the values you need. Something like this: (apologies, I
don't know django template syntax exactly)
Route Tag: {{ data.body.predictions.routeTitle }}
Messages :
<ul>
{% for msg in data.body.predictions.message %}
<li>{{ msg.text }} (Priority: {{ msg.priority }})</li>
{% endfor %}
</ul>
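Since Django templates cannot look up keys that start with '@' (problem 1 in
the question), another option is to rename the keys in the view right after
parsing, before rendering. A minimal sketch, assuming the dict shape shown
above (`strip_at_keys` is a helper name introduced here just for illustration):

    def strip_at_keys(obj):
        # recursively drop the leading '@' that xmltodict puts on attribute keys
        if isinstance(obj, dict):
            return {k.lstrip('@'): strip_at_keys(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [strip_at_keys(v) for v in obj]
        return obj

    # in the view, after xmltodict.parse(...):
    data = strip_at_keys(data)

With the keys renamed, lookups like `{{ data.body.predictions.routeTitle }}`
resolve normally, and the `message` list can be iterated with a regular
`{% for %}` loop as in the template above.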
|
Daily Hurst Exponent
Question: I am trying to estimate daily Hurst exponent values of a stock returns (e.g.
for each day to have also Hurst exponent - something like that:
<https://www.quandl.com/data/PE/CKEC_HURST-Hurst-Exponent-of-Carmike-Cinemas-
Inc-Common-Stock-CKEC-NASDAQ>).
I am using this Python code (taken from
<https://www.quantstart.com/articles/Basics-of-Statistical-Mean-Reversion-
Testing>), but I do not know how to accommodate it for daily Hurst values
instead of just one value:
from datetime import datetime
from pandas.io.data import DataReader
from numpy import cumsum, log, polyfit, sqrt, std, subtract
from numpy.random import randn
def hurst(ts):
"""Returns the Hurst Exponent of the time series vector ts"""
# Create the range of lag values
lags = range(2, 100)
# Calculate the array of the variances of the lagged differences
tau = [sqrt(std(subtract(ts[lag:], ts[:-lag]))) for lag in lags]
# Use a linear fit to estimate the Hurst Exponent
poly = polyfit(log(lags), log(tau), 1)
# Return the Hurst exponent from the polyfit output
return poly[0]*2.0
# Download the stock prices series from Yahoo
aapl = DataReader("AAPL", "yahoo", datetime(2012,1,1), datetime(2015,9,18))
# Call the function
hurst(aapl['Adj Close'])
Answer: I guess you mean:
aapl = DataReader("AAPL", "yahoo", datetime(2012,1,3), datetime(2015,9,18))
prices = aapl['Adj Close']
window = 252  # trailing window; must be comfortably larger than the largest lag (99) used in hurst()

index = window
while index <= len(prices):
    # date of the last observation in the current window
    print prices.index[index - 1].strftime("%Y-%m-%d")
    # Hurst exponent estimated over the trailing `window` trading days
    print hurst(prices[index - window:index])
    index += 1
|
Loop over (or vectorize) variable length matrices in Theano
Question: I have a list of matrices `L`, where each item `M` is a `x*n` matrix (`x` is a
variable, `n` is a constant).
I want to compute the sum of `M'*M` for all items in `L` (`M'` is the
transpose of `M`) as the following Python code does:
for M in L:
res += np.dot(M.T, M)
Actually I want to implement this in Theano (which doesn't support variable
length multidimensional arrays), and I don't want to pad all matrices to the
same size because that will waste too much space (some of the matrices can be
very large).
Is there a better way to do this?
**Edit** :
`L` is known before the Theano compilation.
**Edit** :
received two excellent answers from @DanielRenshaw and @Divakar, emotionally
difficult to choose one to accept.
Answer: Given that the number of matrices is known before the Theano compilation needs
to happen, one can simply use regular Python lists of Theano matrices.
Here's a complete example showing the difference between numpy and Theano
versions.
This code has been updated to include a comparison with @Divakar's vectorized
approach which performs better. Two vectorized approaches are possible for
Theano, one where Theano performs the concatenation, and one where numpy does
the concatenation the result of which is then passed to Theano.
import timeit
import numpy as np
import theano
import theano.tensor as tt
def compile_theano_version1(number_of_matrices, n, dtype):
assert number_of_matrices > 0
assert n > 0
L = [tt.matrix() for _ in xrange(number_of_matrices)]
res = tt.zeros(n, dtype=dtype)
for M in L:
res += tt.dot(M.T, M)
return theano.function(L, res)
def compile_theano_version2(number_of_matrices):
assert number_of_matrices > 0
L = [tt.matrix() for _ in xrange(number_of_matrices)]
concatenated_L = tt.concatenate(L, axis=0)
res = tt.dot(concatenated_L.T, concatenated_L)
return theano.function(L, res)
def compile_theano_version3():
concatenated_L = tt.matrix()
res = tt.dot(concatenated_L.T, concatenated_L)
return theano.function([concatenated_L], res)
def numpy_version1(*L):
assert len(L) > 0
n = L[0].shape[1]
res = np.zeros((n, n), dtype=L[0].dtype)
for M in L:
res += np.dot(M.T, M)
return res
def numpy_version2(*L):
concatenated_L = np.concatenate(L, axis=0)
return np.dot(concatenated_L.T, concatenated_L)
def main():
iteration_count = 100
number_of_matrices = 20
n = 300
min_x = 400
dtype = 'float64'
theano_version1 = compile_theano_version1(number_of_matrices, n, dtype)
theano_version2 = compile_theano_version2(number_of_matrices)
theano_version3 = compile_theano_version3()
L = [np.random.standard_normal(size=(x, n)).astype(dtype)
for x in range(min_x, number_of_matrices + min_x)]
start = timeit.default_timer()
numpy_res1 = np.sum(numpy_version1(*L)
for _ in xrange(iteration_count))
print 'numpy_version1', timeit.default_timer() - start
start = timeit.default_timer()
numpy_res2 = np.sum(numpy_version2(*L)
for _ in xrange(iteration_count))
print 'numpy_version2', timeit.default_timer() - start
start = timeit.default_timer()
theano_res1 = np.sum(theano_version1(*L)
for _ in xrange(iteration_count))
print 'theano_version1', timeit.default_timer() - start
start = timeit.default_timer()
theano_res2 = np.sum(theano_version2(*L)
for _ in xrange(iteration_count))
print 'theano_version2', timeit.default_timer() - start
start = timeit.default_timer()
theano_res3 = np.sum(theano_version3(np.concatenate(L, axis=0))
for _ in xrange(iteration_count))
print 'theano_version3', timeit.default_timer() - start
assert np.allclose(numpy_res1, numpy_res2)
assert np.allclose(numpy_res2, theano_res1)
assert np.allclose(theano_res1, theano_res2)
assert np.allclose(theano_res2, theano_res3)
main()
When run this prints (something like)
numpy_version1 1.47830819649
numpy_version2 1.77405482179
theano_version1 1.3603150303
theano_version2 1.81665318145
theano_version3 1.86912039489
The asserts pass, showing that the Theano and numpy versions both compute the
same result to high degree of accuracy. Clearly this accuracy will reduce if
using `float32` instead of `float64`.
The timing results show that the vectorized approach is not necessarily
preferable; it depends on the matrix sizes. In the example above the matrices
are large and the non-concatenation approach is faster, but if the `n` and
`min_x` parameters in the `main` function are made much smaller then the
concatenation approach is quicker. Results may differ when running on a GPU
(Theano versions only).
|
object not callable python when parsing a json response
Question: I have a response from a URL which is of this format.
'history': {'all': [[u'09 Aug', 1, 5], [u'16 Aug', 2, 6]]}
And code is :
response = urllib.urlopen(url)
data = json.loads(response.read())
print data["fixture_history"]['all']
customObject = MyObject (
history = data["history"]['all']
)
Printing works but in my custom class I am seeing this error :
history = data["history"]['all']
TypeError: 'module' object is not callable
My class is :
class MyObject:
#init
def __init__(self, history):
self.hstory = history
Answer: > Printing works but in my custom class I am seeing this error : TypeError:
> 'module' object is not callable
I bet your your class is defined in a module named `MyObject.py` and that you
imported it as `import MyObject` instead of `from MyObject import MyObject`,
so in your calling code, name `MyObject` is bound to the module, not the
class.
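In other words, with the class living in a file named `MyObject.py`, the
calling code would look something like this (a sketch):

    from MyObject import MyObject   # imports the class, not the module

    customObject = MyObject(history=data["history"]['all'])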
|
Accessing QML TextField value in Python
Question: I have a form in QML with two TextFields. How do I access the value entered in
the fields in Python?
I'm using PyQt5.5 and Python3.
import sys
from PyQt5.QtCore import QObject, QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtQuick import QQuickView
from PyQt5.QtQml import QQmlApplicationEngine
if __name__ == '__main__':
myApp = QApplication(sys.argv)
engine = QQmlApplicationEngine()
context = engine.rootContext()
context.setContextProperty("main", engine)
engine.load('basic.qml')
win = engine.rootObjects()[0]
button = win.findChild(QObject, "myButton")
def myFunction():
print("handler called")
foo = win.findChild(QObject, "login")
print(dir(foo))
print(foo.text)
button.clicked.connect(myFunction)
win.show()
sys.exit(myApp.exec_())
## basic.qml
import QtQuick 2.3
import QtQuick.Controls 1.2
ApplicationWindow {
width: 250; height: 175
Column {
spacing: 20
TextField {
objectName: "login"
placeholderText: qsTr("Login")
focus: true
}
TextField {
placeholderText: qsTr("Password")
echoMode: TextInput.Password
}
Button {
signal messageRequired
objectName: "myButton"
text: "Login"
onClicked: messageRequired()
}
}
}
## Console
Traceback (most recent call last):
File "working.py", line 25, in myFunction
print(foo.text)
AttributeError: 'QQuickItem' object has no attribute 'text'
fish: “python working.py” terminated by signal SIGABRT (Abort)
Answer: You need to call the
[`property()`](https://doc.qt.io/qt-5/qobject.html#property) method of the
object to get the desired property.
In your example, you need to call:
print(foo.property("text"))
rather than `print(foo.text)`
Note that `property()` returns `None` if the property doesn't exist.
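For completeness, the same mechanism works in the other direction: QObject's
`setProperty()` can write the value back (a small sketch):

    foo.setProperty("text", "admin")   # set the field from Python
    print(foo.property("text"))        # read it back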
|
IP Camera Python 3 Error
Question: I am working on using Python 3 to take an IP web camera's stream and display
it on my computer. The following code only works in python 2.7
import cv2
import urllib
import numpy as np
stream=urllib.urlopen('http://192.168.0.90/mjpg/video.mjpg')
bytes=''
while True:
bytes+=stream.read(16384)
a = bytes.find('\xff\xd8')
b = bytes.find('\xff\xd9')
if a!=-1 and b!=-1:
jpg = bytes[a:b+2]
bytes= bytes[b+2:]
i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8),cv2.IMREAD_COLOR)
cv2.imshow('i',i)
if cv2.waitKey(1) ==27:
exit(0)
However when I try it on Python 3 I get the following error
> stream=urllib.urlopen('<http://192.168.0.90/mjpg/video.mjpg>')
> AttributeError: 'module' object has no attribute 'urlopen'
Is there any fix for this? I tried making my own buffer but there isn't much
information out there on this stuff
Answer: For python3 you need `import urllib.request`:
import urllib.request
stream = urllib.request.urlopen('http://192.168.0.90/mjpg/video.mjpg')
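If the same script also needs to keep running on Python 2, a small
compatibility shim is one option (a sketch):

    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib import urlopen           # Python 2

    stream = urlopen('http://192.168.0.90/mjpg/video.mjpg')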
|
Python 3 IP Webcamera byte Error
Question: I am working on using Python 3 to take an IP web camera's stream and display
it on my computer. The following code only works in python 2.7
import cv2
import urllib.request
import numpy as np
stream=urllib.request.urlopen('http://192.168.0.90/mjpg/video.mjpg')
bytes=''
while True:
bytes+=stream.read(16384)
a = bytes.find('\xff\xd8')
b = bytes.find('\xff\xd9')
if a!=-1 and b!=-1:
jpg = bytes[a:b+2]
bytes= bytes[b+2:]
i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8),cv2.IMREAD_COLOR)
cv2.imshow('i',i)
if cv2.waitKey(1) ==27:
exit(0)
However when I try it on Python 3 I get the following error
> bytes+=stream.read(16384)
>
> TypeError: Can't convert 'bytes' object to str implicitly
This works perfectly in 2.7 but I cannot find a way to get it to work in 3,
any ideas?
Answer: In Python 3, `str` and `bytes` are separate types, and `stream.read()` returns
`bytes`, which cannot be concatenated to a `str`. Change the initialisation to
    bytes = b''
Also note that `bytes` is a builtin, so you probably should not use it as a
variable name.
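Note that the markers passed to `find()` need the same treatment, otherwise
Python 3 raises a `TypeError` there instead. Putting it together, a sketch of
the loop for Python 3 (with the buffer renamed so it no longer shadows the
builtin; untested against the camera):

    import cv2
    import urllib.request
    import numpy as np

    stream = urllib.request.urlopen('http://192.168.0.90/mjpg/video.mjpg')
    buf = b''
    while True:
        buf += stream.read(16384)
        a = buf.find(b'\xff\xd8')   # JPEG start-of-image marker
        b = buf.find(b'\xff\xd9')   # JPEG end-of-image marker
        if a != -1 and b != -1:
            jpg = buf[a:b + 2]
            buf = buf[b + 2:]
            i = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow('i', i)
            if cv2.waitKey(1) == 27:
                break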
|
Python Multiprocessing get does not timeout
Question: I'm testing some code to timeout a function call using multiprocessing with
`Process` and `Queue`. The `Queue.get()` method takes an optional timeout
parameter. I wrote the following test to confirm it throws a timeout error
when the called process takes longer than what is allotted in the call to
`get` but it doesn't throw the error. Can anybody tell me how I'm failing to
properly test the `get` timeout? I'm on **Windows 7 with python 2**.
import time
from multiprocessing import Process, Queue
def f(q, t):
time.sleep(t)
q.put(0)
if __name__ == '__main__':
q = Queue()
p = Process(target=f, args=(q, 15, ))
p.start()
x = q.get(1)
print "received ", x
Answer: From the
[documentation](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.get),
`Queue.get` takes two parameters, `block` and `timeout`, in that order, so the
`1` you passed was interpreted as `block` rather than as a timeout. You should
call it like this:
q.get(timeout=1)
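With the keyword argument in place, `get` raises `Queue.Empty` once the
timeout expires, which you can catch to confirm the behaviour (a sketch for
Python 2, as in the question):

    from Queue import Empty

    try:
        x = q.get(timeout=1)
        print "received ", x
    except Empty:
        print "timed out after 1 second"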
|
python-ldap: Unable to find a callback when using GSS-API
Question: I am trying to use python-ldap on Windows to query an Active Directory server.
This is what I have so far:
import ldap
import ldap.sasl
email_address = '[email protected]'
ldap_url = 'ldap://domain.company.com:389'
domain = 'domain'
user = 'user'
password = 'password'
lo = ldap.initialize(ldap_url)
auth_tokens = ldap.sasl.gssapi('')
lo.sasl_interactive_bind_s('', auth_tokens)
print lo.whoami_s()
base = 'dc=%s,dc=company,dc=com' % domain
scope = ldap.SCOPE_SUBTREE
filter_str = '(mail=%s)' % email_address
attr_list = None
result = lo.search_s(base, scope, filter_str, attr_list)
print "result = %s" % result
This is the traceback:
Traceback (most recent call last):
File "question.py", line 11, in <module>
lo.sasl_interactive_bind_s('', auth_tokens)
File "C:\Python27\lib\site-packages\ldap\ldapobject.py", line 244, in sasl_interactive_bind_s
return self._ldap_call(self._l.sasl_interactive_bind_s,who,auth,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls),sasl_flags)
File "C:\Python27\lib\site-packages\ldap\ldapobject.py", line 106, in _ldap_call
result = func(*args,**kwargs)
ldap.LOCAL_ERROR: {'info': 'SASL(-1): generic failure: Unable to find a callback: 2', 'desc': 'Local error'}
I have used the [LDAP Admin](http://www.ldapadmin.org/download/ldapadmin.html)
tool to verify that I can reach the server from my Windows machine.
This is the code for ldap.sasl_interactive_bind_s:
def sasl_interactive_bind_s(self,who,auth,serverctrls=None,clientctrls=None,sasl_flags=ldap.SASL_QUIET):
"""
sasl_interactive_bind_s(who, auth [,serverctrls=None[,clientctrls=None[,sasl_flags=ldap.SASL_QUIET]]]) -> None
"""
return self._ldap_call(self._l.sasl_interactive_bind_s,who,auth,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls),sasl_flags)
SASL_QUIET is the default setting.
Answer: The Cyrus SASL GSSAPI mechanism does not accept any authentication information
beyond a GSS credential struct. The error you see means that interactive mode
requires a callback to supply dummy/default values, so you need to switch to
quiet/non-interactive mode.
Here is the code in C, you probably have to figure it out for Python:
rc = ldap_sasl_interactive_bind_s(ld, NULL, SASL_MECH,
NULL, NULL, LDAP_SASL_QUIET, do_interact, NULL);
where `do_interact` is:
int do_interact(
LDAP *ld,
unsigned flags,
void *defaults,
void *in )
{
sasl_interact_t *interact = in;
char *sasl_defaults = (char *)defaults;
const char *dflt = interact->defresult;
dflt = sasl_defaults;
interact->result = (dflt && *dflt) ? dflt : "";
interact->len = strlen( interact->result );
return LDAP_SUCCESS;
}
Always remember that GSS-API always requires that you already have performed
authentication to network by other/external means, e.g., workstation login,
`kinit`, etc.
|
tkinter python checkbox issues
Question: The intent of this Python file is to read in a file similar to the one below
and modify the lines that have "PL" in the shape field. The issue I am having
is that the OK box is bleeding into the initial file selection button. Also,
the OK button does not show up in the first checkbox window, and it will not
update the first line with "PL" in it. Once the first checkbox window and the
file selection box are X-ed out, the checkbox seems to work as intended from
the second line on. Could someone please help me figure this out? It should
bring up the file selection box, let me select a file to open and a file to
save as, and that box should stay open; then the checkbox window should open.
Once I make my selection, it should open again and again until there are no
more lines that have "PL" in them. The new file should have the added data in
the correct position.
This is in the read file "1288.kss". Save file as "1288r.kss"
D,88C200a,0,88C200a,88C200a,1,HSS,5x5x.375,A500B,4311.65,,S1E,,,,,,,,,,,,,,,,,,,
W,88C200a,0,COLUMN,07/23/15,SDS7.420,,,,,,,,,,,,,,,,,,,,,,,,,
M,88C200a,1,COLUMN,,,,,,,,,,,,,,,,,,,,,,,,,,,
S,2B,1,,,,,,,,,,,,,,,,,,,,,,,,,,,,
D,88C200a,0,88C200a,bs5_2,1,PL,1-1/4x13,A36,330.2,,,,,,,,,,,,,,,,,,,,,
D,88C200a,0,88C200a,p307,1,PL,3/8x9-5/16,A36,838.2,,,,,,,,,,,,,,,,,,,,,
D,88C200a,0,88C200a,p310,1,PL,3/8x7-1/4,A36,379.41,,,,,,,,,,,,,,,,,,,,,
D,88C200a,0,88C200a,p317,1,PL,3/8x6-5/8,A36,533.4,,,,,,,,,,,,,,,,,,,,,
this is the code:
from Tkinter import *
import Tkinter
import tkFileDialog
def main(root):
fn = tkFileDialog.askopenfilename(master=root,
initialdir=r'C:\kiss\Routing',
filetypes=[("KSS", "*.kss")])
if not fn: return
fnFiltered = tkFileDialog.asksaveasfilename(master=root,
initialdir=r'C:\kiss\Routing',
filetypes=[("KSS", "*.kss")])
if not fnFiltered: return
lines = open(fn).readlines()
index0 = 0
indexPage = 1
index2 = 2
indexDetail = 3
# index 4 is the part number
indexPart = 4
indexQty = 5
indexShape = 6
indexDescr = 7
# index 8 is the grade
indexGrade = 8
# length index, length is in millimeters - convert to inches and 16ths
# with function mm_to_imperial
indexLength = 9
index10 = 10
# index 11 is the remarks column
indexRemark = 11
# 1-1/2 | 1 | 15
'''
revisions:
'''
outputLines = []
for i, line in enumerate(lines):
if "," in line or "*" in line:
lineList = line.strip().split(",")
if lineList[0] == "L":
continue
if lineList[0] == "A":
continue
if lineList[0] == "D":
if "PL" in lineList[indexShape]:
#
pn1 = lineList[indexPart]
#
def results():
top.destroy()
top = Tkinter.Tk()
CheckVar1 = Tkinter.IntVar()
CheckVar2 = Tkinter.IntVar()
CheckVar3 = Tkinter.IntVar()
CheckVar4 = Tkinter.IntVar()
CheckVar1.set(1)
CheckVar2.set(0)
CheckVar3.set(0)
CheckVar4.set(0)
C1 = Tkinter.Checkbutton(top, text = "Route 25 - Plate Table",
variable=CheckVar1, height=1, width=20)
C2 = Tkinter.Checkbutton(top, text = "Route 35 - Forming",
variable=CheckVar2, height=1, width=20)
C3 = Tkinter.Checkbutton(top, text = "Route 175 - T-Load#1",
variable=CheckVar3, height=1, width=20)
C4 = Tkinter.Checkbutton(top, text = "Route 176 - T-Load#2",
variable=CheckVar4, height=1, width=20)
C1.pack()
C2.pack()
C3.pack()
C4.pack()
bt = Button(text='OK', command=lambda: top.destroy())
bt.pack(side='left')
top.mainloop()
cv1 = CheckVar1.get()
cv2 = CheckVar2.get()
cv3 = CheckVar3.get()
cv4 = CheckVar4.get()
#
if cv1 == 0:
rt1 = ""
if cv1 == 1:
rt1 = "25"
if cv2 == 1:
rt1 = rt1+"-35"
if cv3 == 1:
rt1 = rt1+"-175"
if cv4 == 1:
rt1 = rt1+"-176"
lineList[index10] = rt1
outputLines.append(lineList)
try:
f = open(fnFiltered, 'w')
f.write("\n".join([','.join(lineList) for lineList in outputLines]))
f.close()
except Exception, e:
print e
# print "\n".join([','.join(lineList) for lineList in outputLines])
root = Tkinter.Tk()
Tkinter.Button(root, text="Select File To Process", command=lambda: main(root)).pack()
Tkinter.Button(root, text="Exit", command=root.destroy).pack()
root.mainloop()
Answer: Your calls to `CheckVar1.get()`, etc. happen after the window the variables
belong to has been destroyed, in which case the variables are destroyed too.
You must call the `get` method in reaction to events (for example, in the OK
button's callback) before destroying the window, rather than immediately after
the window is destroyed.
|
Calling a C function from a Python file. Getting error when using Setup.py file
Question: My problem is as follows: I would like to call a C function from my Python
file and return a value back to that Python file. I have tried the following
method of using embedded C in Python (the following code is the C code called
"mod1.c). I am using Python3.4 so the format follows that given in the
documentation guidelines. The problem comes when I call my setup file (second
code below). #include #include "sum.h"
static PyObject*
mod_sum(PyObject *self, PyObject *args)
{
int a;
int b;
int s;
if (!PyArg_ParseTuple(args,"ii",&a,&b))
return NULL;
s = sum(a,b);
return Py_BuildValue("i",s);
}
/* DECLARATION OF METHODS */
static PyMethodDef ModMethods[] = {
{"sum", mod_sum, METH_VARARGS, "Descirption"}, // {"methName", modName_methName, METH_VARARGS, "Description.."}, modName is name of module and methName is name of method
{NULL,NULL,0,NULL}
};
// Module Definition Structure
static struct PyModuleDef summodule = {
PyModuleDef_HEAD_INIT,
"sum",
NULL,
-1,
ModMethods
};
/* INITIALIZATION FUNCTION */
PyMODINIT_FUNC initmod(void)
{
PyObject *m;
m = PyModule_Create(&summodule);
if (m == NULL)
return m;
}
Setup.py:
from distutils.core import setup, Extension
setup(name='buildsum', version='1.0', \
ext_modules=[Extension('buildsum', ['mod1.c'])])
The result that I get when I compile my code using gcc is the following error:
**_Cannot export PyInit_buildsum: symbol not defined_**
I would greatly appreciate any insight or help on this problem, or any
suggestion in how to call C from Python. Thank you!
--------------------------------------- EDIT ---------------------------------
Thank you for the comments: I have tried the following now:
static PyObject*
PyInit_sum(PyObject *self, PyObject *args)
{
int a;
int b;
int s;
if (!PyArg_ParseTuple(args,"ii",&a,&b))
return NULL;
s = sum(a,b);
return Py_BuildValue("i",s);
}
For the first function; however, I still get the same error of **_PyInit_sum:
symbol not defined_**
Answer: The working code from above in case anyone runs into the same error: the
answer from @dclarke is correct. The initialization function in python 3 must
have PyInit_(name) as its name.
#include <Python.h>
#include "sum.h"
static PyObject* mod_sum(PyObject *self, PyObject *args)
{
int a;
int b;
int s;
if (!PyArg_ParseTuple(args,"ii",&a,&b))
return NULL;
s = sum(a,b);
return Py_BuildValue("i",s);
}
/* DECLARATION OF METHODS*/
static PyMethodDef ModMethods[] = {
{"modsum", mod_sum, METH_VARARGS, "Descirption"},
{NULL,NULL,0,NULL}
};
// Module Definition Structure
static struct PyModuleDef summodule = {
PyModuleDef_HEAD_INIT,"modsum", NULL, -1, ModMethods
};
/* INITIALIZATION FUNCTION*/
PyMODINIT_FUNC PyInit_sum(void)
{
PyObject *m;
m = PyModule_Create(&summodule);
return m;
}
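One more detail worth checking: the name passed to `Extension` in setup.py has
to match the `<name>` part of `PyInit_<name>`, otherwise the build/import will
again complain about a missing init symbol. A sketch matching the code above
(assuming the `sum()` implementation lives in `sum.c`):

    from distutils.core import setup, Extension

    setup(name='buildsum', version='1.0',
          ext_modules=[Extension('sum', ['mod1.c', 'sum.c'])])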
|
why unable to do annotation adjacent to legend?
Question: I am trying to add text to a location that is adjacent to the legend. Here is
what I have tried:
import matplotlib.pyplot as plt
x = y = [1,2,3,4,5]
fig, ax = plt.subplots()
ax.plot(x,y)
leg = ax.legend(['line 1'], loc=6, frameon=False)
plt.draw()
p = leg.get_window_extent()
ax.annotate('Annotation Text', (p.p0[0], p.p1[1]), (p.p0[0], p.p1[1]),
xycoords='figure pixels', zorder=9)
plt.show()
This is exactly the script contained in the stackoverflow question at [Get
Matplotlib legend location?](http://stackoverflow.com/questions/28711376/get-
matplotlib-legend-location). When I run the exact same script I produce
different results. When I run this script the string "Annotation Text" appears
at the bottom left of the figure.
For the record, the value of p when I run this script is `Bbox(x0=0.0, y0=0.0,
x1=1.0, y1=1.0)`.
How can I obtain the coordinates of the Legend, preferably in terms of axes
coordinates ie. ax.transAxes ?
I am using matplotlib 1.5.0 and python 2.7
Answer: You can use `inverse_transformed` to convert figure pixels to axes fraction:
import matplotlib.pyplot as plt
x = y = [1,2,3,4,5]
fig, ax = plt.subplots()
ax.plot(x,y)
leg = ax.legend(['line 1'], loc=6, frameon=False)
fig.canvas.draw()
p = leg.get_window_extent().inverse_transformed(ax.transAxes)
ax.annotate('Annotation Text', (p.p0[0], p.p1[1]), xycoords='axes fraction')
This will produce plot like this:
[](http://i.stack.imgur.com/jYouK.png)
I also use matplotlib 1.5.0 and python 2.7 and I always get the same plot
regardless of how many times the script is executed.
|
How to write a django view to search in database?
Question: I have been trying to make a search engine for my database (sqlite3). I have
stored the names of places in the database. I want to show an empty form to
the user, get the input from that form, and either pass it as an argument to
database_table.objects.filter() or fetch all the rows from the database and
search them against that input. My problem is how to write a view (either
class-based or function-based) to achieve this functionality. I know how to
query the database but not how to write the view for it. I am using Django 1.9
and Python 3.4. Please help.
Answer: A Django form isn't necessary; a simple function-based view for searching
might look like this:
**views.py**
from django.shortcuts import render
from .models import MyEntity  # whichever model holds your place names
def search(request):
template_name = 'search.html'
query = request.GET.get('q', '')
if query:
# query example
results = MyEntity.objects.filter(name__icontains=query).distinct()
else:
results = []
return render(
request, template_name, {'results': results})
take a look at basic tutorial from [official django
site](https://docs.djangoproject.com/en/1.9/intro/tutorial03/#write-views-
that-actually-do-something)
|
Calculate date 5 days from today, adding an extra day for each day in the next 5 days that is a weekend day
Question: I am testing using Robot Framework and need to create my own Python keyword.
Taking the current date as day 0 (tomorrow as day 1), I am trying to calculate
what the date will be 5 days from today. If any of the days in the next 5 days
is a Saturday I need to add an extra day to my calculation. Same if any of the
days is a Sunday.
As a Python beginner, I'm a little out of my depth so any help would be much
appreciated
Answer: Basically you need to add 5 business days... This should do it:
import datetime
def addBusinessDays(from_date, add_days):
business_days_to_add = add_days
current_date = from_date
while business_days_to_add > 0:
current_date += datetime.timedelta(days=1)
weekday = current_date.weekday()
if weekday >= 5: # sunday = 6
continue
business_days_to_add -= 1
return current_date
#demo:
print '5 business days from today:'
print addBusinessDays(datetime.date.today(), 5)
Update:
Here is the explanation:
1. We get the start date (the date we need to add business days to)
2. We use a loop to add days one at a time to the date (using datetime.timedelta(days=1) to add 1 day)
3. After adding each day we check whether the updated date is a weekday. If it is, we count it; otherwise we skip it and continue
|
How do I get my tkinter picture viewer working?
Question: I've been trying to teach myself tkinter and wanted to make a program that
would find all the pictures in the directory and sub-directories of a folder
and then display them one by one with a button to either save the file into
the "Yes", "Maybe", or "Skip" folders or simply delete the file.
Here's what I'm trying to get it to look like: [](http://i.stack.imgur.com/EOaqL.png)
And here is my code which tries to do just that:
# Python 3.4
import os
import tkinter as tk
from tkinter import Frame, Button
from PIL import Image, ImageTk
from send2trash import send2trash
tk_root = tk.Tk()
tk_root.title("Picture Viewer - Do I want to keep this picture?")
file_count = 0
p = path = 'C:\\Users\\MyUserName\\Desktop\\Test\\'
yes = lambda img: os.rename(img, p+'Yes\\Picture_{0}.jpg'.format(file_count))
maybe = lambda img: os.rename(img, p+'Maybe\\Picture_{0}.jpg'.format(file_count))
skip = lambda img: os.rename(img, p+'Skipped\\Picture_{0}.jpg'.format(file_count))
delete = lambda img: send2trash(img) # Note: os.remove('img.jpg') also works
def search(directory):
global file_count
for root, subdirs, files in os.walk(directory):
for file in files:
if os.path.splitext(file)[1].lower() in ('.jpg', '.jpeg'):
img = os.path.join(root, file)
file_count += 1
top_frame = Frame(tk_root)
bottom_frame = Frame(tk_root)
top_frame.pack(side='top')
bottom_frame.pack(side='bottom')
picture = ImageTk.PhotoImage(Image.open(img))
picture = tk.Label(tk_root, image=picture)
picture.pack(side='top')
button_yes = Button(top_frame, text="Yes", command=lambda x=img:yes(x))
button_maybe = Button(top_frame, text="Maybe", command=lambda x=img:maybe(x))
button_skip = Button(top_frame, text="skip", command=lambda x=img:skip(x))
button_delete = Button(bottom_frame, text="Delete", command=lambda x=img:delete(x))
button_yes.pack(side='left')
button_maybe.pack(side='left')
button_skip.pack(side='left')
button_delete.pack(side='bottom')
print('All done!')
search('Test')
However, the problem is that after I launch the program it isn't working well at
all. It simply moves the first picture, "1.jpg", into whatever folder I pick
(or deletes it) and then gives the following error if I try to sort another
image:
> FileNotFoundError: [WinError 2] The system cannot find the file specified:
> 'Test\Example.jpg' -> 'C:\Users\Vale\Desktop\Test\Maybe\1.jpg'
Perhaps most importantly the images are not displaying and cycling properly.
It is just a gray box in the middle each time. How do I get my program to
work? I'm aware I need to get the image to appear still and something needs to
be done to get the program to move onto the next picture as well (so I don't
get FileNotFoundError for trying to sort the same picture twice), but I'm
unsure what to do differently after watching tutorials and reading
documentation.
Answer: As mentioned by BlackJack, your code creates the GUI Widgets over and over.
You'll need to move that out from the loop. Also for displaying the image in
the `Label`, you can't use `picture` as the name for both the `ImageTk` object
and `Label` object.
Suggestions to changes. You could use a Generator to get the image
path/filename. And make regular functions instead of using lambda. I had an
interest in seeing how it could work, so I made the program below based on
your code. When I tested it I had different paths, working on OSX, so haven't
tested it with your Windows paths (that I put into the code here).
import os
import tkinter as tk
from tkinter import Frame, Button
from PIL import Image, ImageTk
tk_root = tk.Tk()
tk_root.title("Picture Viewer - Do I want to keep this picture?")
file_count = 0
p = path = 'C:\\Users\\MyUserName\\Desktop\\Test\\'
def search(directory):
global file_count
for root, subdirs, files in os.walk(directory):
for file in files:
if os.path.splitext(file)[1].lower() in ('.jpg', '.jpeg'):
img = os.path.join(root, file)
file_count += 1
yield img
def next_image():
try:
global photo_path
photo_path = next(path_generator)
photo = ImageTk.PhotoImage(Image.open(photo_path))
picture.configure(image=photo)
picture.image = photo
except StopIteration:
picture.configure(image='', text='All done!')
def move_file(directory):
if not os.path.exists(directory):
os.makedirs(directory)
new_file = directory + 'Picture_{0}.jpg'.format(file_count)
os.rename(photo_path, new_file)
def yes():
move_file(path + 'Yes\\')
next_image()
def maybe():
move_file(path + 'Maybe\\')
next_image()
def skip():
move_file(path + 'Skipped\\')
next_image()
def delete():
# Code for deleting file here
next_image()
top_frame = Frame(tk_root)
bottom_frame = Frame(tk_root)
top_frame.pack(side='top')
bottom_frame.pack(side='bottom')
path_generator = search(p)
photo_path = next(path_generator)
photo = ImageTk.PhotoImage(Image.open(photo_path))
picture = tk.Label(tk_root, image=photo)
picture.image = photo
picture.pack(side='top')
button_yes = Button(top_frame, text="Yes", command=yes)
button_maybe = Button(top_frame, text="Maybe", command=maybe)
button_skip = Button(top_frame, text="skip", command=skip)
button_delete = Button(bottom_frame, text="Delete", command=delete)
button_yes.pack(side='left')
button_maybe.pack(side='left')
button_skip.pack(side='left')
button_delete.pack(side='bottom')
tk_root.mainloop()
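The `delete` branch above is left as a placeholder; reusing the `send2trash`
approach from the question, it could be filled in like this (a sketch):

    from send2trash import send2trash

    def delete():
        send2trash(photo_path)   # or os.remove(photo_path) for permanent deletion
        next_image()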
**EDIT:**
One issue with this code seems to be that it runs through the subdirectories
(Yes, Maybe, Skipped). So you'll be presented with the images twice if it is
in the path and then moved.
If you don't want to traverse the Yes, Maybe and Skipped folders, you can
change the `search` function to:
def search(directory):
global file_count
excludes = ['Yes', 'Maybe', 'Skipped']
for root, subdirs, files in os.walk(directory, topdown=True):
subdirs[:] = [d for d in subdirs if d not in excludes]
for file in files:
if os.path.splitext(file)[1].lower() in ('.jpg', '.jpeg'):
img = os.path.join(root, file)
file_count += 1
yield img
|
why even after adding the - url: /images handler in app.yaml, I can't access /images/med-9.png
Question: I am learning Python, specifically web development using Python, and am
following the CS253 course on Udacity. After finishing the unit 4 exercise, I
want to add an image to my HTML template. I have made a directory "Templates"
in my "hellowebappworld" directory, which is the directory being hosted by App
Engine, and inside "Templates" I've made a folder "images" where I keep my
image. Even after adding the url: /images handler with static_dir: images, I'm
not able to access the image.
this is my app.yaml:
application: hellowebappworld
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /favicon\.ico
static_files: favicon.ico
upload: favicon\.ico
- url: /images
static_dir: images
- url: .*
script: main.py
libraries:
- name: webapp2
version: latest
- name: jinja2
version: latest
following is the main.py:
import os
import webapp2
import jinja2
from google.appengine.ext import db
template_dir = os.path.join(os.path.dirname(__file__), 'templates')
jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir), autoescape=True)
class Art(db.Model):
title = db.StringProperty(required = True)
art = db.TextProperty(required = True)
created = db.DateTimeProperty(auto_now_add = True)
class Handler(webapp2.RequestHandler):
def write(self, *a, **kw):
self.response.out.write(*a, **kw)
def render_str(self, template, **params):
t = jinja_env.get_template(template)
return t.render(params)
def render(self, template, **kw):
self.write(self.render_str(template, **kw))
class MainPage(Handler):
def render_front(self, title="", art="", error=""):
arts = db.GqlQuery("SELECT * FROM Art ORDER BY created DESC")
self.render("front.html", title=title, art=art, error = error, arts = arts)
def get(self):
self.render_front()
def post(self):
title = self.request.get("title")
art = self.request.get("art")
if title and art:
a = Art(title = title, art = art)
a.put()
self.redirect("/")
else:
error = "we need both a title and some artwork!"
self.render_front(title,art,error)
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
and finally my template:
<!DOCTYPE html>
<html>
<head>
<title>/ascii/</title>
</head>
<body>
<form method="post">
<label>
<div>title</div><input type="text" name="title" value= {{title}}>
</label>
<label>
<div>art</div><textarea name="art"> {{art}} </textarea>
</label>
<div class="error"> {{error}}</div>
<input type="submit">
</form>
<hr>
{% for art in arts %}
<div class="art">
<div class="art-title">{{art.title}}</div>
<pre class="art-body">{{art.art}}</pre>
</div>
{% endfor %}
<img src="/images/med-9.png">
</body>
Answer: Try modifying your app.yaml file to this:
- url: /templates/images
static_dir: templates/images
On a side note, I like to structure my static files like so:
- url: /static
static_dir: static
then my static folder is located in the root directory of my app engine app.
* Desktop
* ProjectFolder
* static
* images
* js
* css
* templates
* main.py
* app.yaml
|
Raising exceptions with django_rest_framework
Question: I have written a view that decrypts a GPG encrypted file and returns it as
plain text. This works fine in general. The problem is, if the file is empty
or otherwise contains invalid GPG data, gnupg returns an empty result rather
than throw an exception.
I need to be able to do something like this inside _decrypt_file_ to check to
see if the decryption failed and raise an error:
if data.ok:
return str(data)
else:
raise APIException(data.status)
If I do this, I see the APIException raised in the Django debug output, but
it's not translating to a 500 response to the client. Instead the client gets
a 200 response with an empty body. I can raise the APIException in my _get_
method and it sends a 500 response, but there I don't have access to the gnupg
error message.
Here is a very simplified version of my view:
from rest_framework.views import APIView
from django.http import FileResponse
from django.core import files
from gnupg import GPG
class FileDownload(APIView):
def decrypt_file(self, file):
gpg = GPG()
data = gpg.decrypt(file.read())
return data
def get(self, request, id, format=None):
f = open('/tmp/foo', 'rb')
file = files.File(f)
return FileResponse(self.decrypt_file(file))
I have read the docs on DRF exception handling [here](http://www.django-rest-
framework.org/api-guide/exceptions/), but it doesn't seem to provide a
solution to this problem. I am fairly new to Django and python in general, so
it's entirely possible I'm missing something obvious. Helpful advice would be
appreciated. Thanks.
Answer: If you raise an exception in Python, it propagates up through every function in
the call stack until something catches it. You can first declare an exception:
from rest_framework.exceptions import APIException
class ServiceUnavailable(APIException):
status_code = 503
default_detail = 'Service temporarily unavailable, try again later.'
Then, in your `decrypt_file` function, raise this exception if decryption is
not successful (you can pass an argument to modify the message). In the `get`
method, call `decrypt_file` and pass your file as an argument. If anything
goes wrong, the function raises the exception, `get` lets it propagate, and the
Django REST Framework exception handler catches it.
**EDIT:**
In your decrypt function, do something like this:
from rest_framework.response import Response
def decrypt_file(self, file):
... # your codes
if data.ok:
return str(data)
else:
raise ServiceUnavailable
def get(self, request, id, format=None):
f = open('/tmp/foo', 'rb')
result = self.decrypt_file(f)
f.close()
return Response({'data': result})
|
Python overlapping timer with UDP listener
Question: I'm looking for insight on including a timer with/within the WHILE loop of a
UDP listener service. The service is part of a device auto-discovery system I
need to interact with.
The process requiring the listener has three requirements/responsibilities:
* Broadcast a notification packet at initial socket bind (device is up)
* Listen for and reply to "search" broadcasts
* regardless of presence/absence of "search" broadcasts, broadcast a "still alive" packet every 30 minutes
Each of these tasks alone are no problem, and the first two are easy to
include. What I'm not comfortable with is "interrupting the listener" or
modifying the WHILE loop of the listener to send the "alive" packet.
If I "wait" for the interval, I suspend other processes. Will a Scheduler
object do the same, or allow us to continue? I can't multi-thread, because I
need to receive and send on a specific port, which is bound within the thread.
here is what I have...
import time
import socket
import sys
import shutil
import signal
import string
import re
import os
import socket
import fcntl
import struct
HOST = ''
PORT = 8888
RESPONSE_MSG = "Yes, I'm here"
ALIVE_MSG = "I'm alive"
IDENTIFY_MSG = "It's me"
IP_ADDR = ''
INTERVAL = 1800
# Datagram (udp) socket
try :
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print 'Socket created'
except socket.error, msg :
print 'Failed to create socket. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
try:
s.bind((HOST, PORT))
except socket.error , msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
# broadcast wake-up message
s.sendto(IDENTIFY_MSG,'239.255.255.250')
#now keep listening...how do I not stop listening while sending a packet on an interval?
while 1:
# receive data from client (data, addr)
d = s.recvfrom(1024)
data = d[0]
source_addr = d[1]
if not data:
break
s.sendto(RESPONSE_MSG, source_addr)
Answer: I think I have what is a working solution. It "punts" on the looping
complexity a bit, but I believe it is a clean, maintainable, readable
solution.
I've created three specific .py scripts: one opens a socket to send the "wake-
up" packet, a second opens a socket to send the "alive" packet, and a third
opens a socket to listen for and respond to device search requests.
These are then imported into a "calling script" with a timer-based interrupt.
Here's how it looks...
import udp_wakeup
import udp_listen
import udp_alive
import shutil
import string
from threading import Timer
import thread, time, sys
from datetime import datetime as dt
#dummy flag to ensure constant looping
restart = 1
def timeout():
thread.interrupt_main()
#Call and execute the wake-up packet broadcast
udp_wakeup.main()
#Initiate timer for 1800 seconds, or 30 minutes
while restart == 1:
try:
#Call and execute the listener routine
Timer(1800, timeout).start()
udp_listen.main()
except:
#On Timer expire, break from listener, call Alive-broadcast, reset timer and restart listener
udp_alive.main()
|
Installing disqus on django
Question: I am trying to install disqus on my django project. I have followed these
instructions:
First, add disqus to your INSTALLED_APPS. You don’t need to run syncdb as
there are no models provided.
Next, add DISQUS_API_KEY and DISQUS_WEBSITE_SHORTNAME to your settings. You
can get your API key here (you must be logged in on the DISQUS website). To
see the shortname of your website, navigate to Settings->General on the DISQUS
website.
Finally, you need to change the domain of your Site to the domain you’re
actually going to use for your website. The easiest way to do this is to
enable django.contrib.admin and just click on the Site object to modify it. If
you don’t have contrib.admin installed (or don’t want to install it), you can
run python manage.py shell and change the value in the cli:
I am trying to do the last part, the one which starts with the word Finally...
The easiest way to do this is to enable django.contrib.admin and just click on
the Site object to modify it.
For this part, I already have django.contrib.admin under my INSTALLED_APPS,
but what I don't understand is where this Site object I am supposed to click
is. Because of this I tried the python manage.py shell approach. The
instructions are as follows:
from django.contrib.sites.models import Site
Site.objects.all()
s = Site.objects.all()[0]
s.domain = 'arthurkoziel.com'
s.name = 'arthurkoziel.com'
s.save()
Site.objects.all()
Now the problem is, when I type **from django.contrib.sites.models import
Site**, I get the following error message:
> Model class django.contrib.sites.models.Site doesn't declare an explicit
> app_label and either isn't in an application in INSTALLED_APPS or else was
> imported before its application was loaded.
Can anyone who understands the installation process help me interpret this?
Answer: You need to ensure that `'django.contrib.sites'` is in your `INSTALLED_APPS`
setting. After this, the above error should go away, and you should also have
a "Sites" section in your Django admin.
|
Playing a Lot of Sounds at Once
Question: I am attempting to create a program in python that plays a particular
harpsichord note when a certain key is pressed. I want it to remain responsive
so you can continue to play more notes (kind of like a normal electric piano.)
However, because the wav files that the notes are stored in are about 7-10
seconds long I am experiencing some issues. I can press at least 10 keys per
second. So, over the duration of one note I could have around 100 different
wav files playing at once. I tried to use winsound, but it was unable to play
multiple wav files at once. I then moved on to PyAudio and it works kind of.
The only way that I found to accomplish what I wanted was this:
from msvcrt import getch
import pyaudio
import wave
import multiprocessing as mp
#This function is just code for playing a sound in PyAudio
def playNote(filename):
CHUNK = 1024
wf = wave.open(filename, 'rb')
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output=True)
data = wf.readframes(CHUNK)
while data != '':
stream.write(data)
data = wf.readframes(CHUNK)
stream.stop_stream()
stream.close()
p.terminate()
if __name__ == "__main__":
while True:
#If the 'a' key is pressed: start a new process that calls playNote
#and pass in the file name for a note.
if ord(getch()) == 97: #a
mp.Process(target=playNote, args=("F:\Project Harpsichord\The wavs\A1.wav",)).start()
#If the 's' key is pressed: start a new process that calls playNote
#and pass in the file name for another note.
if ord(getch()) == 115: #s
mp.Process(target=playNote, args=("F:\Project Harpsichord\The wavs\A0.wav",)).start()
Basically whenever I want to play a new wav, I have to start a new process
that runs the code in the playNote function. As I already stated I can
potentially have up to 100 of these playing at once. Suffice it to say, one
hundred copies of the python interpreter all running at once almost crashed my
computer. I also tried a similar approach with multi-threading, but had the
same problems.
[This post](http://stackoverflow.com/questions/28743400/pyaudio-play-multiple-
sounds-at-once "This post") shows a way to mix multiple wav files together so
they can be played at the same time, but since my program will not necessarily
be starting the sounds at the same time I am unsure if this will work. I need
an efficient way to play multiple notes at the same time. Whether this comes
in the form of another library, or even a different language I really don't
care.
Answer: I checked out pygame like J.F Sebastian suggested. It ended up being exactly
what I needed. I used
[pygame.mixer.Sound()](https://www.pygame.org/docs/ref/mixer.html) in
conjunction with
[pygame.mixer.set_num_channels()](https://www.pygame.org/docs/ref/mixer.html#pygame.mixer.get_num_channels).
Here's what I came up with.
import pygame as pg
import time
pg.mixer.init()
pg.init()
a1Note = pg.mixer.Sound("F:\Project Harpsichord\The wavs\A1.wav")
a2Note = pg.mixer.Sound("F:\Project Harpsichord\The wavs\A0.wav")
pg.mixer.set_num_channels(50)
for i in range(25):
a1Note.play()
time.sleep(0.3)
a2Note.play()
time.sleep(0.3)
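Since the goal was to trigger notes from key presses, here is a sketch of
wiring the same `Sound` objects into pygame's event loop (pygame needs a
display window to receive keyboard events):

    import pygame as pg

    pg.mixer.init()
    pg.init()
    pg.mixer.set_num_channels(50)
    screen = pg.display.set_mode((200, 100))  # window needed for key events

    notes = {
        pg.K_a: pg.mixer.Sound("F:\Project Harpsichord\The wavs\A1.wav"),
        pg.K_s: pg.mixer.Sound("F:\Project Harpsichord\The wavs\A0.wav"),
    }

    running = True
    while running:
        for event in pg.event.get():
            if event.type == pg.QUIT:
                running = False
            elif event.type == pg.KEYDOWN and event.key in notes:
                notes[event.key].play()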
|
Python Pandas Compare 2 Large DataFrames of Text for Similarity
Question: I have two large dataframes I want to compare. I want a comparison result
capable of a column and / or row wise comparison of similarities by percent.
_This part is simple._ However, I want to be able to make the comparison
ignore differences based upon value criteria. A small example is below.
d1 = {'Sample':pd.Series([101,102,103]),
'Col1':pd.Series(['AA','--','BB']),
'Col2':pd.Series(['AB','AA','BB'])}
d2 = {'Sample':pd.Series([101,102,103]),
'Col1':pd.Series(['BB','AB','--']),
'Col2':pd.Series(['AB','AA','AB'])}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
df1 = df1.set_index('Sample')
df2 = df2.set_index('Sample')
comparison = df1.eq(df2)
# for column stats
comparison.sum(axis=0) / float(len(df1.index))
# for row stats
comparison.sum(axis=1) / float(len(df1.columns))
My problem is that for when `value1='AA' and value2 = '--'` I want them to be
viewed as equal (so when one is `'--'` basically always be true) but,
otherwise perform a normal Boolean comparison. I need an efficient way to do
this that doesn't include excessive looping as the datasets are quite large.
Answer: Below, I'm interpreting _"when one is '--' basically always be true"_ to mean
that any comparison against `'--'` (no matter what the other value is) should
return True. In that case, you could use
mask = (df1=='--') | (df2=='--')
to find every location where either `df1` or `df2` is equal to `'--'` and then
use
comparison |= mask
to update `comparison`. For example,
import itertools as IT
import numpy as np
import pandas as pd
np.random.seed(2015)
N = 10000
df1, df2 = [pd.DataFrame(
np.random.choice(map(''.join, IT.product(list('ABC'), repeat=2))+['--'],
size=(N, 2)),
columns=['Col1', 'Col2']) for i in range(2)]
comparison = df1.eq(df2)
mask = (df1=='--') | (df2=='--')
comparison |= mask
# for column stats
column_stats = comparison.sum(axis=0) / float(len(df1.index))
# for row stats
row_stats = comparison.sum(axis=1) / float(len(df1.columns))
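Applied to the small frames from the question, the same two lines give the
per-column and per-row similarity directly (a sketch):

    comparison = df1.eq(df2)
    comparison |= (df1 == '--') | (df2 == '--')

    print comparison.sum(axis=0) / float(len(df1.index))    # column-wise fraction equal
    print comparison.sum(axis=1) / float(len(df1.columns))  # row-wise fraction equal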
|
How to use Tensorflow Optimizer without recomputing activations in reinforcement learning program that returns control after each iteration?
Question: EDIT(1/3/16): [corresponding github
issue](https://github.com/tensorflow/tensorflow/issues/672)
I'm using Tensorflow (Python interface) to implement a q-learning agent with
function approximation trained using stochastic gradient-descent. At each
iteration of the experiment a step function in the agent is called that
updates the parameters of the approximator based on the new reward and
activation, and then chooses a new action to perform.
Here is the problem(with reinforcement learning jargon):
* The agent computes its state-action value predictions to choose an action.
* Then gives control back another program which simulates a step in the environment.
* Now the agent's step function is called for the next iteration. I want to use Tensorflow's Optimizer class to compute the gradients for me. However, this requires both the state-action value predictions that I computed last step, AND their graph. So:
* If I run the optimizer on the whole graph, then it has to recompute the state-action value predictions.
* But, if I store the prediction (for the chosen action) as a variable, then feed it to the optimizer as a placeholder, it no longer has the graph necessary to compute the gradients.
* I can't just run it all in the same sess.run() statement, because I have to give up control and return the chosen action in order to get the next observation and reward (to use in the target for the loss function).
So, is there a way that I can (without reinforcement learning jargon):
1. Compute part of my graph, returning value1.
2. Return value1 to the calling program to compute value2
3. In the next iteration, use value2 to as part of my loss function for gradient descent WITHOUT recomputing the the part of the graph that computes value1.
Of course, I've considered the obvious solutions:
1. Just hardcode the gradients: This would be easy for the really simple approximators I'm using now, but would be really inconvenient if I were experimenting with different filters and activation functions in a big convolutional network. I'd really like to use the Optimizer class if possible.
2. Call the environment simulation from within the agent: [This system](https://github.com/asrivat1/DeepLearningVideoGames/blob/master/deep_q_network.py#L135) does this, but it would make mine more complicated, and remove a lot of the modularity and structure. So, I don't want to do this.
I've read through the API and whitepaper several times, but can't seem to come
up with a solution. I was trying to come up with some way to feed the target
into a graph to calculate the gradients, but couldn't come up with a way to
build that graph automatically.
If it turns out this isn't possible in TensorFlow yet, do you think it would
be very complicated to implement this as a new operator? (I haven't used C++
in a couple of years, so the TensorFlow source looks a little intimidating.)
Or would I be better off switching to something like Torch, which has the
imperative differentiation Autograd, instead of symbolic differentiation?
Thanks for taking the time to help me out on this. I was trying to make this
as concise as I could.
EDIT: After doing some further searching I came across [this previously asked
question](http://stackoverflow.com/questions/32082506/neural-network-
reinforcement-learning-requiring-next-state-propagation-for-backp). It's a
little different than mine (they are trying to avoid updating an LSTM network
twice every iteration in Torch), and doesn't have any answers yet.
Here is some code if that helps:
'''
-Q-Learning agent for a grid-world environment.
-Receives input as raw rbg pixel representation of screen.
-Uses an artificial neural network function approximator with one hidden layer
2015 Jonathon Byrd
'''
import random
import sys
#import copy
from rlglue.agent.Agent import Agent
from rlglue.agent import AgentLoader as AgentLoader
from rlglue.types import Action
from rlglue.types import Observation
import tensorflow as tf
import numpy as np
world_size = (3,3)
total_spaces = world_size[0] * world_size[1]
class simple_agent(Agent):
#Contants
discount_factor = tf.constant(0.5, name="discount_factor")
learning_rate = tf.constant(0.01, name="learning_rate")
exploration_rate = tf.Variable(0.2, name="exploration_rate") # used to be a constant :P
hidden_layer_size = 12
#Network Parameters - weights and biases
W = [tf.Variable(tf.truncated_normal([total_spaces * 3, hidden_layer_size], stddev=0.1), name="layer_1_weights"),
tf.Variable(tf.truncated_normal([hidden_layer_size,4], stddev=0.1), name="layer_2_weights")]
b = [tf.Variable(tf.zeros([hidden_layer_size]), name="layer_1_biases"), tf.Variable(tf.zeros([4]), name="layer_2_biases")]
#Input placeholders - observation and reward
screen = tf.placeholder(tf.float32, shape=[1, total_spaces * 3], name="observation") #input pixel rgb values
reward = tf.placeholder(tf.float32, shape=[], name="reward")
#last step data
last_obs = np.array([1, 2, 3], ndmin=4)
last_act = -1
#Last step placeholders
last_screen = tf.placeholder(tf.float32, shape=[1, total_spaces * 3], name="previous_observation")
last_move = tf.placeholder(tf.int32, shape = [], name="previous_action")
next_prediction = tf.placeholder(tf.float32, shape = [], name="next_prediction")
step_count = 0
def __init__(self):
#Initialize computational graphs
self.q_preds = self.Q(self.screen)
self.last_q_preds = self.Q(self.last_screen)
self.action = self.choose_action(self.q_preds)
self.next_pred = self.max_q(self.q_preds)
self.last_pred = self.act_to_pred(self.last_move, self.last_q_preds) # inefficient recomputation
self.loss = self.error(self.last_pred, self.reward, self.next_prediction)
self.train = self.learn(self.loss)
#Summaries and Statistics
tf.scalar_summary(['loss'], self.loss)
tf.scalar_summary('reward', self.reward)
#w_hist = tf.histogram_summary("weights", self.W[0])
self.summary_op = tf.merge_all_summaries()
self.sess = tf.Session()
self.summary_writer = tf.train.SummaryWriter('tensorlogs', graph_def=self.sess.graph_def)
def agent_init(self,taskSpec):
print("agent_init called")
self.sess.run(tf.initialize_all_variables())
def agent_start(self,observation):
#print("agent_start called, observation = {0}".format(observation.intArray))
o = np.divide(np.reshape(np.asarray(observation.intArray), (1,total_spaces * 3)), 255)
return self.control(o)
def agent_step(self,reward, observation):
#print("agent_step called, observation = {0}".format(observation.intArray))
print("step, reward: {0}".format(reward))
o = np.divide(np.reshape(np.asarray(observation.intArray), (1,total_spaces * 3)), 255)
next_prediction = self.sess.run([self.next_pred], feed_dict={self.screen:o})[0]
if self.step_count % 10 == 0:
summary_str = self.sess.run([self.summary_op, self.train],
feed_dict={self.reward:reward, self.last_screen:self.last_obs,
self.last_move:self.last_act, self.next_prediction:next_prediction})[0]
self.summary_writer.add_summary(summary_str, global_step=self.step_count)
else:
self.sess.run([self.train],
feed_dict={self.screen:o, self.reward:reward, self.last_screen:self.last_obs,
self.last_move:self.last_act, self.next_prediction:next_prediction})
return self.control(o)
def control(self, observation):
results = self.sess.run([self.action], feed_dict={self.screen:observation})
action = results[0]
self.last_act = action
self.last_obs = observation
if (action==0): # convert action integer to direction character
action = 'u'
elif (action==1):
action = 'l'
elif (action==2):
action = 'r'
elif (action==3):
action = 'd'
returnAction=Action()
returnAction.charArray=[action]
#print("return action returned {0}".format(action))
self.step_count += 1
return returnAction
def Q(self, obs): #calculates state-action value prediction with feed-forward neural net
with tf.name_scope('network_inference') as scope:
h1 = tf.nn.relu(tf.matmul(obs, self.W[0]) + self.b[0])
q_preds = tf.matmul(h1, self.W[1]) + self.b[1] #linear activation
return tf.reshape(q_preds, shape=[4])
def choose_action(self, q_preds): #chooses action epsilon-greedily
with tf.name_scope('action_choice') as scope:
exploration_roll = tf.random_uniform([])
#greedy_action = tf.argmax(q_preds, 0) # gets the action with the highest predicted Q-value
#random_action = tf.cast(tf.floor(tf.random_uniform([], maxval=4.0)), tf.int64)
#exploration rate updates
#if self.step_count % 10000 == 0:
#self.exploration_rate.assign(tf.div(self.exploration_rate, 2))
return tf.select(tf.greater_equal(exploration_roll, self.exploration_rate),
tf.argmax(q_preds, 0), #greedy_action
tf.cast(tf.floor(tf.random_uniform([], maxval=4.0)), tf.int64)) #random_action
'''
Why does this return NoneType?:
flag = tf.select(tf.greater_equal(exploration_roll, self.exploration_rate), 'g', 'r')
if flag == 'g': #greedy
return tf.argmax(q_preds, 0) # gets the action with the highest predicted Q-value
elif flag == 'r': #random
return tf.cast(tf.floor(tf.random_uniform([], maxval=4.0)), tf.int64)
'''
def error(self, last_pred, r, next_pred):
with tf.name_scope('loss_function') as scope:
y = tf.add(r, tf.mul(self.discount_factor, next_pred)) #target
return tf.square(tf.sub(y, last_pred)) #squared difference error
def learn(self, loss): #Update parameters using stochastic gradient descent
#TODO: Either figure out how to avoid computing the q-prediction twice or just hardcode the gradients.
with tf.name_scope('train') as scope:
return tf.train.GradientDescentOptimizer(self.learning_rate).minimize(loss, var_list=[self.W[0], self.W[1], self.b[0], self.b[1]])
def max_q(self, q_preds):
with tf.name_scope('greedy_estimate') as scope:
return tf.reduce_max(q_preds) #best predicted action from current state
def act_to_pred(self, a, preds): #get the value prediction for action a
with tf.name_scope('get_prediction') as scope:
return tf.slice(preds, tf.reshape(a, shape=[1]), [1])
def agent_end(self,reward):
pass
def agent_cleanup(self):
self.sess.close()
pass
def agent_message(self,inMessage):
if inMessage=="what is your name?":
return "my name is simple_agent";
else:
return "I don't know how to respond to your message";
if __name__=="__main__":
AgentLoader.loadAgent(simple_agent())
Answer: Right now what you want to do is very difficult in Tensorflow (0.6). Your best
bet is to bite the bullet and call run multiple times at the cost of
recomputing the activations. However, we are very aware of this issue
internally. A prototype "partial run" solution is in the works, but there is
no timeline for its completion right now. Since a truly satisfactory answer
might require modifying tensorflow itself, you could also make a github issue
for this and see if anyone else has anything to say on this there.
Edit: Experimental support for partial_run is now in.
<https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/client/session.py#L317>
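For reference, the experimental interface looks roughly like this (a minimal sketch with illustrative placeholder names and values; since the feature is experimental, the exact API may still change):

    import tensorflow as tf

    a = tf.placeholder(tf.float32, shape=[])
    b = tf.placeholder(tf.float32, shape=[])
    c = tf.placeholder(tf.float32, shape=[])
    r1 = tf.add(a, b)   # e.g. the Q-value prediction for the current state
    r2 = tf.mul(r1, c)  # e.g. the loss, which reuses r1 without recomputing it

    sess = tf.Session()
    # Declare up front everything that will be fetched or fed during this run.
    h = sess.partial_run_setup([r1, r2], [a, b, c])
    v1 = sess.partial_run(h, r1, feed_dict={a: 1.0, b: 2.0})  # hand control back to the environment here
    v2 = sess.partial_run(h, r2, feed_dict={c: 3.0})          # same run: r1 is not recomputed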
|
Python numpy fill masked elements in matrix according to order in another matrix
Question: I'm trying to do a Uniform Order Crossover for a genetic algorithm. In that, I
have two 2D arrays p1 and p2 and a 2D bit array, b. p1, p2 and b are of the
same shape. I mask elements in p1 corresponding to 1s in b and elements in p2
corresponding to 0s in b. From these, I need to generate 2 matrices c1 and c2
such that c1 keeps the unmasked elements of mp1 in place and has its blanks filled with the missing values in the order in which they appear in p2, and c2 is built the opposite way (mp2 filled according to the order in p1).
For example:
p1 = [[1, 2, 3, 4, 5],
[1, 4, 3, 2, 5]]
p2 = [[5, 4, 3, 2, 1],
[2, 1, 3, 4, 5]]
b = [[1, 0, 1, 0, 1],
[1, 0, 0, 1, 0]]
masked arrays are mp1 and mp2;
mp1 = [[1, _, 3, _, 5],
[1, _, _, 2, _]]
mp2 = [[_, 4, _, 2, _],
[_, 1, 3, _, 5]]
Then c1 and c2 would be,
c1 = [[1, 4, 3, 2, 5],
[1, 3, 4, 2, 5]]
c2 = [[1, 4, 3, 2, 5],
[4, 1, 3, 2, 5]]
p1, p2, b, c1, c2 are of dimension 500X100000 in my case, so the answer needs
to be vectorized and efficient.
Answer: I can't quite make sense of what exactly you're asking, because either you have made a typo in your question or I'm just completely missing the point here.
So, first we have a mask for p1
mp1 = [[1, _, 3, _, 5],
[1, _, _, 2, _]]
where the `_` values should be replaced by corresponding values from p2. Which
should be
[[_, 4, _, 2, _],
[_, 1, 3, _, 5]]
so in my head the result should be
[[1, 4, 3, 2, 5],
[1, 1, 3, 2, 5]])
but you say the expected result is
[[1, 4, 3, 2, 5],
[1, 3, 4, 2, 5]]
how does 3 appear there and where does the 4 come from? What am I missing?
(Before anyone holds this against me, I can't fit this in a comment)
If you can use `numpy`, and for stuff like this I recommend you do, the
solution, to the best of my interpretation abilities, would be:
p1 = [[1, 2, 3, 4, 5],
[1, 4, 3, 2, 5]]
p2 = [[5, 4, 3, 2, 1],
[2, 1, 3, 4, 5]]
b = [[1, 0, 1, 0, 1],
[1, 0, 0, 1, 0]]
#this is just how you turn python lists into numpy arrays
import numpy as np
p1 = np.asarray(p1)
p2 = np.asarray(p2)
b = np.asarray(b)
#this is the actual solution to the problem
p1[b==0] = p2[b==0]
These substitutions are done in place, so it's nice for the memory and they're
basically done in the background in C which means it's lightning fast.
|
Error iterating through file - Python
Question: I'm trying to iterate through a txt file with a long string, delete the double quotes (") and commas (,), and write the result to a new file, but I keep getting an error. Please help.
Code:
from sys import argv
script, filename = argv
long_String = ""
for line in filename.readline():
long_String += line
long_String += " "
for x in long_String:
if (x = '\"'):
x = ""
if x = ",":
x = ""
filename2 = "print.txt"
target = open (filename2, 'a')
target.write(long_string)
target.close()
Answer: You have some errors:
target.write(long_string)
Should be :
target.write(long_String)
and
if (x = '\"'):
should be:
if (x == '\"'):
Same with :
if x = ",":
Otherwise, you can strip both characters in one pass with
[re.sub](https://docs.python.org/2/library/re.html#re.sub), using a character class that matches either a double quote or a comma (note that `filename` is only a file *name*, so it has to be opened first):

    for line in open(filename):
        my_string += re.sub(r'[",]', '', line)
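For completeness, a corrected version of the whole script might look like this (a sketch; `print.txt` is the output file name used in the question):

    from sys import argv
    import re

    script, filename = argv

    long_string = ""
    with open(filename) as infile:          # argv only gives a file *name*; open it
        for line in infile:
            cleaned = re.sub(r'[",]', '', line.rstrip("\n"))
            long_string += cleaned + " "

    with open("print.txt", "a") as target:
        target.write(long_string)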
|
Numba 3x slower than numpy
Question: We have a vectorized numpy **get_pos_neg_bitwise** function that uses a
mask=[132 20 192] and a df.shape of (500e3, 4) that we want to accelerate with
numba.
from numba import jit
import numpy as np
from time import time
def get_pos_neg_bitwise(df, mask):
"""
In [1]: print mask
[132 20 192]
In [1]: print df
[[ 1 162 97 41]
[ 0 136 135 171]
...,
[ 0 245 30 73]]
"""
check = (np.bitwise_and(mask, df[:, 1:]) == mask).all(axis=1)
pos = (df[:, 0] == 1) & check
neg = (df[:, 0] == 0) & check
pos = np.nonzero(pos)[0]
neg = np.nonzero(neg)[0]
return (pos, neg)
Using tips from @morningsun we made this numba version:
@jit(nopython=True)
def numba_get_pos_neg_bitwise(df, mask):
posneg = np.zeros((df.shape[0], 2))
for idx in range(df.shape[0]):
vandmask = np.bitwise_and(df[idx, 1:], mask)
# numba fail with # if np.all(vandmask == mask):
vandm_equal_m = 1
for i, val in enumerate(vandmask):
if val != mask[i]:
vandm_equal_m = 0
break
if vandm_equal_m == 1:
if df[idx, 0] == 1:
posneg[idx, 0] = 1
else:
posneg[idx, 1] = 1
pos = list(np.nonzero(posneg[:, 0])[0])
neg = list(np.nonzero(posneg[:, 1])[0])
return (pos, neg)
But it is still 3 times slower than the numpy one (~0.06 s vs ~0.02 s).
if __name__ == '__main__':
df = np.array(np.random.randint(256, size=(int(500e3), 4)))
df[:, 0] = np.random.randint(2, size=(1, df.shape[0])) # set target to 0 or 1
mask = np.array([132, 20, 192])
start = time()
pos, neg = get_pos_neg_bitwise(df, mask)
msg = '==> pos, neg made; p={}, n={} in [{:.4} s] numpy'
print msg.format(len(pos), len(neg), time() - start)
start = time()
msg = '==> pos, neg made; p={}, n={} in [{:.4} s] numba'
pos, neg = numba_get_pos_neg_bitwise(df, mask)
print msg.format(len(pos), len(neg), time() - start)
start = time()
pos, neg = numba_get_pos_neg_bitwise(df, mask)
print msg.format(len(pos), len(neg), time() - start)
Am I missing something ?
In [1]: %run numba_test2.py
==> pos, neg made; p=3852, n=3957 in [0.02306 s] numpy
==> pos, neg made; p=3852, n=3957 in [0.3492 s] numba
==> pos, neg made; p=3852, n=3957 in [0.06425 s] numba
In [1]:
Answer: Try moving the call to `np.bitwise_and` outside of the loop since numba can't
do anything to speed it up:
@jit(nopython=True)
def numba_get_pos_neg_bitwise(df, mask):
posneg = np.zeros((df.shape[0], 2))
vandmask = np.bitwise_and(df[:, 1:], mask)
for idx in range(df.shape[0]):
# numba fail with # if np.all(vandmask == mask):
vandm_equal_m = 1
for i, val in enumerate(vandmask[idx]):
if val != mask[i]:
vandm_equal_m = 0
break
if vandm_equal_m == 1:
if df[idx, 0] == 1:
posneg[idx, 0] = 1
else:
posneg[idx, 1] = 1
pos = np.nonzero(posneg[:, 0])[0]
neg = np.nonzero(posneg[:, 1])[0]
return (pos, neg)
Then I get timings of:
==> pos, neg made; p=3920, n=4023 in [0.02352 s] numpy
==> pos, neg made; p=3920, n=4023 in [0.2896 s] numba
==> pos, neg made; p=3920, n=4023 in [0.01539 s] numba
So now numba is a bit faster than numpy.
Also, it didn't make a huge difference, but in your original function you
return numpy arrays, while in the numba version you were converting `pos` and
`neg` to lists.
In general though, I would guess that the function calls are dominated by
numpy functions, which numba can't speed up, and the numpy version of the code
is already using fast vectorization routines.
**Update:**
You can make it faster by removing the `enumerate` call and indexing directly into the array instead of grabbing a slice. Also, splitting `pos` and `neg`
into separate arrays helps to avoid slicing along a non-contiguous axis in
memory:
@jit(nopython=True)
def numba_get_pos_neg_bitwise(df, mask):
pos = np.zeros(df.shape[0])
neg = np.zeros(df.shape[0])
vandmask = np.bitwise_and(df[:, 1:], mask)
for idx in range(df.shape[0]):
# numba fail with # if np.all(vandmask == mask):
vandm_equal_m = 1
for i in xrange(vandmask.shape[1]):
if vandmask[idx,i] != mask[i]:
vandm_equal_m = 0
break
if vandm_equal_m == 1:
if df[idx, 0] == 1:
pos[idx] = 1
else:
neg[idx] = 1
pos = np.nonzero(pos)[0]
neg = np.nonzero(neg)[0]
return pos, neg
And timings in an ipython notebook:
%timeit pos1, neg1 = get_pos_neg_bitwise(df, mask)
%timeit pos2, neg2 = numba_get_pos_neg_bitwise(df, mask)
100 loops, best of 3: 18.2 ms per loop
100 loops, best of 3: 7.89 ms per loop
|
Python module import in interactive shell
Question: I moved my python module into the site-packages folder along with the working default modules, but when I try to import my module it comes back with this error.
>>> import go
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
import go
ImportError: No module named go
Answer: The `site-packages` folder is for installed Python packages. Please do not just copy files there.
For how to create and install a Python package, please read the [Python packaging tutorial](http://packaging.python.org/).
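For illustration, a minimal project for a single module could look like this (a sketch; the module name is taken from the question and everything else is an assumption):

    # setup.py -- minimal, illustrative packaging script for a one-module project
    from setuptools import setup

    setup(
        name="go",           # hypothetical project name, matching the module
        version="0.1.0",
        py_modules=["go"],   # assumes go.py lives next to setup.py
    )

After that, running `pip install .` in the project directory installs it into site-packages properly, and `import go` works from anywhere.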
|
Reading a file with Fortran formatted small floats, using numpy
Question: I am trying to read a data file written by a Fortran program, in which every
once in a while there is a very small float like `0.3299880-104`. The error
message is:
>np.loadtxt(filename, usecols = (1,))
File "/home/anaconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 928, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "/home/anaconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 659, in floatconv
return float(x)
ValueError: invalid literal for float(): 0.3299880-104
Can I do something to make Numpy able to read this data file anyway?
Answer: As **@agentp** mentioned in the comments, one approach would be to use the
`converters=` argument to
[`np.genfromtxt`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.genfromtxt.html)
to insert the `e` characters before casting to float:
import numpy as np
# some example strings
strings = "0.3299880-104 0.3299880+104 0.3299880"
# create a "dummy file" (see http://stackoverflow.com/a/11970414/1461210)
try:
from StringIO import StringIO # Python2
f = StringIO(strings)
except ImportError:
from io import BytesIO # Python3
f = BytesIO(strings.encode())
c = lambda s: float(s.decode().replace('+', 'e').replace('-', 'e-'))
data = np.genfromtxt(f, converters=dict(zip(range(3), [c]*3)))
print(repr(data))
# array([ 3.29988000e-105, 3.29988000e+103, 3.29988000e-001])
|
Python - part of str [urllib3 data]
Question: I'm trying to delete 2 chars from the start of the str and 1 char from the end:
import urllib3
target_url="www.klimi.hys.cz/nalada.txt"
http = urllib3.PoolManager()
r = http.request('GET', target_url)
print(r.status)
print(r.data)
print()
And Output is
200
b'smutne'
I need the output to be only "smutne", without the leading `b'` and the trailing `'`.
Answer: When you have bytes, you need to _decode_ them into a string with the appropriate encoding. For example, if you have ASCII characters as bytes, you can do:
>>> foo = b'mystring'
>>> print(foo)
b'mystring'
>>> print(foo.decode('ascii'))
'mystring'
Or, more commonly, you probably have Unicode characters (which includes most
of the ASCII character codes too):
>>> print(foo.decode('utf-8'))
'mystring'
This will work if you have glyphs with accents and such.
More on Python encoding/decoding here:
<https://docs.python.org/3/howto/unicode.html>
In the particular case of urllib3, `r.data` returns bytes that you'll need to
decode in order to use as a string if that's what you want.
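Applied to the snippet in the question, that would simply be:

    print(r.data.decode('utf-8'))   # prints: smutne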
|
Checking divisibility for (sort of) big numbers in python
Question: I've been writing a simple program in python that encodes a string into a
number using [Gödel's
encoding](https://en.wikipedia.org/wiki/G%C3%B6del_numbering#G.C3.B6del.27s_encoding).
Here's a quick overview: you take the first letter of the string, find its
position in the alphabet (a -> 1, b -> 2, ..., z -> 26) and raise the first
prime number (2) to this power. The you take the second letter in the string
and the second prime (3) and so on. Here's the code:
import string, math
alphabet = list(string.ascii_lowercase)
def primes(n):
"Returns a list of primes up to n."
primes = [2, 3]
i = 5
while i < n:
l = math.ceil(math.sqrt(i))
k = math.ceil(math.sqrt(i+2))
for p in primes[:l]:
if i % p == 0:
break
else:
primes.append(i)
for p in primes[:k]:
if (i+2) % p == 0:
break
else:
primes.append(i+2)
i += 6
return primes
def Encode(string):
"Encodes a string using Godel's encoding."
enc = 1
p = primes(100)
for i in range(len(string)):
enc = enc*(p[i]**(alphabet.index(string[i])+1))
return enc
def Decode(code):
"Decodes a Godel's encoding into a string."
dec = ""
for i in primes(100):
count = 0
while code % i == 0:
code /= i
count += 1
if count == 0: #If we've found a prime that doesn't divide code,
#there are no more letters to be added.
break
else:
dec += alphabet[count-1]
return dec
The primes() function works for my intents and purposes, and so does Encode().
Now Decode() is the interesting part. It works for encodings up to ~15 digits
long but starts doing some mystical stuff starting at ~20 digits. So for
instance it gives the right output for the encoding of "aaaaaaaaaaaaaa" but
not for "python". For big numbers it seems to execute the `while code % i ==
0` loop too many times (176 for the first letter of "python" when it should be
just 16).
Is this just a problem with the mod function in python? Sounds strange as 20
digits isn't all that long for a computer. Is there a mistake in my code?
Thanks for all the help. I'm not a programmer myself but I'm trying to learn
doing stuff like this. Therefore any constructive criticism is welcome.
Answer: `/=` in Python 3 performs true division and returns a double-precision floating point value, and floating point values do not have arbitrary precision, so your large encodings get rounded. Use `//=` instead; it always results in an exact integer (the floor of the result).
(I previously said that `math.ceil` was your main culprit. I don't think that's the case: in Python 3 it returns an int, so using it as a slice bound is fine here. In Python 2, however, it returns a float, and slicing a list with a float fails, so there you would cast with `int(math.ceil(...))`. Either way, you might want to avoid floating-point calculations altogether, since they break down for sufficiently large values.)
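To see the difference concretely, here is a small sketch using the encoding of "python" (2**16 * 3**25 * 5**20 * 7**8 * 11**15 * 13**14); the count of about 176 in the first loop is the behaviour reported in the question, and the exact number can depend on rounding:

    code = 2**16 * 3**25 * 5**20 * 7**8 * 11**15 * 13**14  # Encode("python")

    c, count = code, 0
    while c % 2 == 0:
        c /= 2            # float division: c is rounded to 53 bits of precision
        count += 1
    print(count)          # ~176 instead of 16: the low bits were lost

    c, count = code, 0
    while c % 2 == 0:
        c //= 2           # floor division keeps exact integers
        count += 1
    print(count)          # 16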
|
Error Spyder Python + opencv 3
Question: I have installed Opencv 3.1.4 on Spyder Python 2.7, all running on Windows
Vista 32bits.
My code is
import cv2
import sys
cascPath = "C:\opencv\sources\data\haarcascades\haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)
while True:
# Capture frame-by-frame
ret, frame = video_capture.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(30, 30),
flags=cv2.cv.CV_HAAR_DO_CANNY_PRUNING
)
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
# Display the resulting frame
cv2.imshow('Video', frame)
print faces
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
But I have this error
runfile('C:/Anaconda2/Scripts/tracking-video-fonctionnel.py', wdir='C:/Anaconda2/Scripts')
Traceback (most recent call last):
File "<ipython-input-2-3b1671aa3a09>", line 1, in <module>
runfile('C:/Anaconda2/Scripts/tracking-video-fonctionnel.py', wdir='C:/Anaconda2/Scripts')
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Anaconda2/Scripts/tracking-video-fonctionnel.py", line 20, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
error: C:\builds\master_PackSlaveAddon-win32-vc12-static\opencv\modules\imgproc\src\color.cpp:7456: error: (-215) scn == 3 || scn == 4 in function cv::ipp_cvtColor
However, this code is running on an other computer with the same spyder and
opencv but on Windows 7 64bits
I think the issue is with OpenCV, because I can't import cv2.cv but I can import cv2.
Thanks.
Answer: I found a solution: install a 2.x version of OpenCV.
|
Python real-time keyboard input
Question: I am not looking for `input()` or `raw_input()`. I am looking for something like what is available in the msvcrt module, specifically `msvcrt.kbhit()` and `msvcrt.getch()`, but I am unable to get it working.
I tried example 1, here:
<http://effbot.org/librarybook/msvcrt.htm>
and the chosen answer here:
[Python Windows `msvcrt.getch()` only detects every 3rd
keypress?](http://stackoverflow.com/questions/22365473/python-windows-msvcrt-
getch-only-detects-every-3rd-keypress)
both of which put me into infinite loops from which I am unable to escape by
pressing 'esc' and 'q' respectively.
import msvcrt
while True:
pressedKey = msvcrt.getch()
if pressedKey == 'x':
break
I would like to avoid downloading and installing new modules, like pyhook
suggested below, if possible:
[How do I get realtime keyboard input in
Python?](http://stackoverflow.com/questions/13694873/how-do-i-get-realtime-
keyboard-input-in-python)
Answer: I found the answer here: [Python kbhit()
problems](http://stackoverflow.com/questions/18672923/python-kbhit-problems)
Basically, you need to run the program from a console window instead of from
the IDE (Python in my case).
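For reference, a minimal polling loop with `kbhit()`/`getch()` looks something like this (run it from a real console window such as cmd.exe, not from inside the IDE; on Python 3 `getch()` returns bytes):

    import msvcrt
    import time

    while True:
        if msvcrt.kbhit():            # a keypress is waiting in the buffer
            key = msvcrt.getch()      # bytes on Python 3, str on Python 2
            if key in (b'x', 'x'):
                break
            print("you pressed:", key)
        time.sleep(0.05)              # do the rest of your real-time work here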
|
SOAP client in python, how to replicate with XML
Question: I am using suds to send XML and I got my request working, but I'm really
confused by how to replicate my results using XML. I have the XML request that
my suds client is sending by using:
from suds.client import Client
ulr = "xxxxxxx"
client = Client(url)
...
client.last_received.str()
but I'm not sure where I would send that request to if I was using the
requests library. How would I replicate the request from the suds client in a
python request?
Answer: Most SOAP APIs run over plain `HTTP` and use `POST`, and are therefore easily mimicked with any standard HTTP client such as Requests.
First [look here](http://stackoverflow.com/questions/4426204/how-can-i-output-
what-suds-is-generating-receiving) to see how to view the headers and body
that suds is sending - it is then a matter of replicating these headers/XML
body and passing them into the Requests library.
One defining characteristic of 99% of all HTTP SOAP APIs is that every request goes to the same end-point (for example <http://yyy.com:8080/Posting/LoadPosting.svc>), and the actual action is specified with the `SOAPAction` header. Contrast this with a RESTful API, where the action is implied by the verb plus the end-point you call (`POST /user`, `GET /menu`, etc.).
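As a rough sketch (the endpoint is the example from above, while the `SOAPAction` value and the envelope body are placeholders you would copy from what suds logged):

    import requests

    url = "http://yyy.com:8080/Posting/LoadPosting.svc"     # same endpoint for every call
    headers = {
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://yyy.com/LoadPosting/Submit",  # hypothetical action URI
    }
    body = """<?xml version="1.0" encoding="utf-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body><!-- paste the XML body that suds generated here --></soapenv:Body>
    </soapenv:Envelope>"""

    response = requests.post(url, data=body, headers=headers)
    print(response.status_code)
    print(response.text)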
|
how to get the value of multiple maximas in an array in python
Question: I have an array
a =[0, 0, 15, 17, 16, 17, 16, 12, 18, 18]
I am trying to find the element value that has the `max` count, and if there is a tie, I would like all of the elements that share that `max` count.
As you can see, there are two 0s, two 16s, two 17s, two 18s, one 15 and one 12, so I want something that would return `[0, 16, 17, 18]` (order not important, but I do not want the 15 or the 12).
I was doing `np.argmax(np.bincount(a))` but `argmax` only returns one element
(per its documentation) so I only get the 1st one which is 0
I tried `np.argpartition(values, -4)[-4:]` that works, but in practice I would
not know that there are 4 elements that have the same count number! (maybe I
am close here!!! the light bulb just went on !!!)
Answer: You can use
[np.unique](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html)
to get the counts and an array of the unique elements then pull the elements
whose count is equal to the max:
import numpy as np
a = np.array([0, 0, 15, 17, 16, 17, 16, 12, 18, 18])
un, cnt = np.unique(a, return_counts=True)
print(un[cnt == cnt.max()])
[ 0 16 17 18]
un are the unique elements, cnt is the frequency/count of each:
In [11]: a = np.array([0, 0, 15, 17, 16, 17, 16, 12, 18, 18])
In [12]: un, cnt = np.unique(a, return_counts=True)
In [13]: un, cnt
Out[13]: (array([ 0, 12, 15, 16, 17, 18]), array([2, 1, 1, 2, 2, 2]))
`cnt == cnt.max()` will give us the mask to pull the elements that are equal
to the max:
In [14]: cnt == cnt.max()
Out[14]: array([ True, False, False, True, True, True], dtype=bool)
|
Why is my Runge-Kutta Python script defining elements of an array in this abnormal way?
Question: I am a newcomer to Python, my knowledge of the programming language is still
in its infancy, so I copied the Runge-Kutta Python script shown
[here](http://rosettacode.org/wiki/Runge-Kutta_method#Python) and modified it
for my purposes. Here is my current script:
import numpy as np
from matplotlib import pyplot as plt
a=0
b=np.pi
g=9.8
l=1
N=1000
def RK4(f):
return lambda t, y, dt: (
lambda dy1: (
lambda dy2: (
lambda dy3: (
lambda dy4: (dy1 + 2*dy2 + 2*dy3 + dy4)/6
)( dt * f( t + dt , y + dy3 ) )
)( dt * f( t + dt/2, y + dy2/2 ) )
)( dt * f( t + dt/2, y + dy1/2 ) )
)( dt * f( t , y ) )
from math import sqrt
dy = RK4(lambda t, y: -y)
t, y, dt = 0., 1., np.divide((b-a),float(N))
i=0
T=np.zeros((N+1,1))
DY=T
Y=T
while t < (b-dt):
T[i]=t
DY[i]=dy(t,y,dt)
t, y = t + dt, y + dy( t, y, dt )
Y[i]=y
i=i+1
plt.figure(1)
plt.plot(T,Y)
plt.show()
You can ignore the g and l variables in the first few lines, I was going to
solve the problem of the simple pendulum, but then I remembered this is a
first order ODE solver, so now my ODE is `dy/dx=-y`. I have been running this
in IPython. I was expecting `T` to be an array, basically the equivalent to
`linspace(0,pi,N+1)` in MATLAB. So T is to be a set of N+1 evenly-spaced
values between (and including) 0 and pi, but instead this is a sample of its
contents (which came as an output from running `T`):
In [101]: T
Out[101]:
array([[ 0.99686334],
[ 0.99373651],
[ 0.9906195 ],
...,
[ 0.04334989],
[ 0.04321392],
[ 0. ]])
(including the Input and Output lines to give some context as to what I am
referring to here, in case it is unclear). Oh, and by the way, if you're wondering why I didn't use `T=np.linspace(a,b,num=N+1)` instead of defining it in this loop: it is because that gives similarly unusual T arrays.
Answer: With
DY=T
Y=T
you are only copying the references, they all point to the same array object.
Essentially, the contents of all will be `Y` as it is the last assigned.
Use `T.copy()` to get a separate array object.
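A minimal sketch of the fix:

    import numpy as np

    N = 1000
    T = np.zeros((N + 1, 1))
    DY = T.copy()        # independent array, not another name for T
    Y = T.copy()

    Y[0] = 1.0
    print(T[0], DY[0])   # both still [ 0.]; only Y changed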
|
Getting a legend in a seaborn FacetGrid heatmap plot
Question: How can we get legends for seaborn `FacetGrid` heatmaps? The `.add_legend()`
method isn't working for me.
Using code from [this previous
question](http://stackoverflow.com/q/31864770/1461210):
import pandas as pd
import numpy as np
import itertools
import seaborn as sns
print("seaborn version {}".format(sns.__version__))
# R expand.grid() function in Python
# http://stackoverflow.com/a/12131385/1135316
def expandgrid(*itrs):
product = list(itertools.product(*itrs))
return {'Var{}'.format(i+1):[x[i] for x in product] for i in range(len(itrs))}
methods=['method 1', 'method2', 'method 3', 'method 4']
times = range(0,100,10)
data = pd.DataFrame(expandgrid(methods, times, times))
data.columns = ['method', 'dtsi','rtsi']
data['nw_score'] = np.random.sample(data.shape[0])
def facet(data,color):
data = data.pivot(index="dtsi", columns='rtsi', values='nw_score')
g = sns.heatmap(data, cmap='Blues', cbar=False)
with sns.plotting_context(font_scale=5.5):
g = sns.FacetGrid(data, col="method", col_wrap=2, size=3, aspect=1)
g = g.map_dataframe(facet)
g.add_legend()
g.set_titles(col_template="{col_name}", fontweight='bold', fontsize=18)
[](http://i.stack.imgur.com/VAmAp.png)
Answer: What you want (in matplotlib lingo) is a colorbar, not a legend. In
matplotlib, the former is used for continuous data, while the latter is used
for categorical data. Colorbar support isn't built into `FacetGrid`, but it is
not hard to expand your example code to add a colorbar:
import pandas as pd
import numpy as np
import itertools
import seaborn as sns
methods=['method 1', 'method2', 'method 3', 'method 4']
times = range(0, 100, 10)
data = pd.DataFrame(list(itertools.product(methods, times, times)))
data.columns = ['method', 'dtsi','rtsi']
data['nw_score'] = np.random.sample(data.shape[0])
def facet_heatmap(data, color, **kws):
data = data.pivot(index="dtsi", columns='rtsi', values='nw_score')
sns.heatmap(data, cmap='Blues', **kws) # <-- Pass kwargs to heatmap
with sns.plotting_context(font_scale=5.5):
g = sns.FacetGrid(data, col="method", col_wrap=2, size=3, aspect=1)
cbar_ax = g.fig.add_axes([.92, .3, .02, .4]) # <-- Create a colorbar axes
g = g.map_dataframe(facet_heatmap,
cbar_ax=cbar_ax,
vmin=0, vmax=1) # <-- Specify the colorbar axes and limits
g.set_titles(col_template="{col_name}", fontweight='bold', fontsize=18)
g.fig.subplots_adjust(right=.9) # <-- Add space so the colorbar doesn't overlap the plot
[](http://i.stack.imgur.com/oj2PH.png)
I've indicated the changes I made and the rationale for them as inline
comments.
|
Python logging - excluding submodule
Question: I have a python **main** which uses various submodules. The structure is like this:
root:.
│ main.py
│
└───MyModule
file1.py
file2.py
special.py
MyModule outputs some important logs (each file does `logger = logging.getLogger(__name__)`). However, the "special.py" logs need to be stored separately.
I attempt:
MyModuleHandler = logging.handlers.RotatingFileHandler('MyModule.log', maxBytes= 5000000, backupCount=5)
MyModuleHandler.setFormatter(formatter)
MyModuleHandler.setLevel(logging.DEBUG)
specialHandler = logging.handlers.RotatingFileHandler('special.log', maxBytes= 5000000, backupCount=5)
specialHandler.setFormatter(formatter)
specialHandler.setLevel(logging.INFO)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)
logging.getLogger('MyModule.special').setLevel(logging.DEBUG)
logging.getLogger('MyModule.special').addHandler(specialHandler)
logging.getLogger('MyModule').addHandler(console)
logging.getLogger('MyModule').setLevel(logging.DEBUG)
logging.getLogger('MyModule').addHandler(MyModuleHandler)
guys, what am I doing wrong?
Answer: OK, this was really silly. All I needed was to set the `propagate` field on the specified logger.
The correct code should be:
MyModuleHandler = logging.handlers.RotatingFileHandler('MyModule.log', maxBytes= 5000000, backupCount=5)
MyModuleHandler.setFormatter(formatter)
MyModuleHandler.setLevel(logging.DEBUG)
specialHandler = logging.handlers.RotatingFileHandler('special.log', maxBytes= 5000000, backupCount=5)
specialHandler.setFormatter(formatter)
specialHandler.setLevel(logging.INFO)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)
logging.getLogger('MyModule.special').setLevel(logging.DEBUG)
logging.getLogger('MyModule.special').addHandler(specialHandler)
logging.getLogger('MyModule.special').propagate = False
logging.getLogger('MyModule').addHandler(console)
logging.getLogger('MyModule').setLevel(logging.DEBUG)
logging.getLogger('MyModule').addHandler(MyModuleHandler)
|
Importing scapy to blender
Question: I'm trying to import the scapy module into blender:
from bge import logic
import socket
from scapy.all import *
But I face this exception:
[](http://i.stack.imgur.com/LDwIf.png)
I copied the scapy module folder into:
C:\Program Files\Blender Foundation\Blender\2.75\scripts\modules
And this is what it contains:
[](http://i.stack.imgur.com/e2OoP.png)
Notice all and base_classes is in it.
In addition, I tried to add PYTHONPATH to the environment variables (I'm not sure this is what I had to do; I also tried to add

    C:\Program Files\Blender Foundation\Blender\2.75\scripts\modules\scapy

to PATH and to PYTHONPATH, and neither solved the problem):
[](http://i.stack.imgur.com/Lma8Y.png)
EDIT:
The problem, as sambler said, is that I used a scapy version that doesn't support Python 3.x, which Blender uses. So I downloaded a newer scapy version that supports Python 3.x from here: <https://github.com/phaethon/scapy> and replaced the older scapy with it. Now it imports, but I can't sniff, send or receive packets:
[](http://i.stack.imgur.com/6tuOY.png)
[](http://i.stack.imgur.com/xPHOo.png)
Answer: The direct cause of this error is that you don't have `C:\Program
Files\Blender Foundation\Blender\2.75\scripts\modules` in the
[PYTHONPATH](https://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH)
environment variable. It's not a specific Blender issue, it's a general Python
requirement for loading third-party packages.
You can try to add `PYTHONPATH` as a global per-user environment variable as
described in this question: [How to add to the pythonpath in windows
7?](http://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-
in-windows-7)
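Alternatively, you can extend the search path from inside Blender's Python at runtime instead of relying on the environment variable (a sketch; adjust the path to your installation, and note that the scapy build itself still has to support Blender's Python 3):

    import sys
    sys.path.append(r"C:\Program Files\Blender Foundation\Blender\2.75\scripts\modules")

    from scapy.all import *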
|
loading a dll in Python
Question: I am using Windows 10 and Visual Studio 2015 with Python 3.4.4 (from the
Python Software Foundation)
I want to call some functions in a DLL I wrote. I have tried several
approaches found on this forum but get one type of error or another. I have
worked from examples but may be misunderstanding something. I have set my path
to include the folder containing the DLL. Does anyone know what is going
wrong?
Here is the output from the most recent try (my DLL is in the last folder in
the path):
Python 3.4 interactive window [PTVS 2.2.31124.00-14.0]
Type $help for a list of commands.
>>> from ctypes import *
>>> from builtins import print
>>> import os # needed for os.system()
>>> import ctypes
>>> from ctypes.util import find_library
>>> os.system("path")
0PATH=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\NativeBinaries\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;w:\eddy2;C:\Program Files (x86)\Common Files\Seagate\SnapAPI\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\SlikSvn\bin;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Skype\Phone\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files\Git\cmd;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Program Files (x86)\Windows Live\Shared;C:\Users\eddyq\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;W:\Dropbox\DSI (His)\Windows Apps\Debug
>>> os.system("where DsiLibrary_dll.dll")
W:\Dropbox\DSI (His)\Windows Apps\Debug\DsiLibrary_dll.dll
0
>>> print (find_library('DsiLibrary_dll')) #this does print the location of the DLL
W:\Dropbox\DSI (His)\Windows Apps\Debug\DsiLibrary_dll.dll
>>> nidaq_cdecl = ctypes.CDLL('DsiLibrary_dll') #this gives an error "%1 is not a valid Win32 application"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\ctypes\__init__.py", line 351, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
>>> nidaq = ctypes.cdll.LoadLibrary(find_library('DsiLibrary_dll'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\ctypes\__init__.py", line 429, in LoadLibrary
return self._dlltype(name)
File "C:\Python34\lib\ctypes\__init__.py", line 351, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
>>>
Answer: This works now, I was apparently mixing a 32 bit library with a 64 bit Python.
nidaq = ctypes.cdll.LoadLibrary(find_library('DsiLibrary_dll'))
Thanks to everyone for your help.
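For anyone hitting the same `WinError 193`, a quick way to check the bitness of the running Python (the DLL must be built for the same architecture) is:

    import struct
    import platform

    print(struct.calcsize("P") * 8)   # 32 or 64: pointer size of this interpreter
    print(platform.architecture())    # e.g. ('64bit', 'WindowsPE')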
|
Python: Why does my inner-nested while-loop continue to execute indefinitely
Question: Python 3.4.3
I am trying to create an interactive drawing program using Python and turtle.
My program runs by first asking the user to specify the length of the sides of
the shape. If the length is greater than zero, the program will begin to
execute.
I would like the program to continue running and to continue asking the user
for information, up until the point the user enters the length of the sides as
less than or equal to zero, at which point the program will exit. I am
therefore using a ‘while-loop’ to run the program and ask the user for
specifics.
I want the user to be limited to requesting one of the following three shapes;
octagon, heptagon or hexagon. If the user enters anything other than this, I
want the program to ask the user to specify their choice of shape again.
Therefore, I have the following code:
import turtle
lineLength = float(input("What length line would you like? "))
while lineLength > 0 :
print("You have choosen to draw a shape, please enter specifics:")
penColor = input("Please choose a pen color: ")
shape = input("which shape would you like to draw? ")
while shape.lower() != "octagon".lower() or shape.lower() !=
"heptagon".lower() or shape.lower() != "hexagon".lower() :
print("You have to select a shape from the specified list.")
shape = input("What shape would you like to draw: ")
Output: The actual output is that the code keeps printing the "You have to select a shape from the specified list. What shape would you like to draw: " prompt indefinitely, regardless of the input from the user.
Expected Output: My aim, and what I am expecting, is for the inner while-loop
to exit once the user has input either octagon, heptagon or hexagon. In fact,
I do not understand why the inner while-loop should run at all if the user
selects one of these three shapes as the condition for the while-loop has not
been met.
Answer: You need to use `and`, not `or`.
If you think about it, it's never possible for your loop to end, because no matter what the shape is, it will always differ from at least two of the three names, so at least one of the `or` conditions is true. For instance, 'octagon' doesn't equal 'heptagon', so it will keep looping. If you change the `or`s to `and`, it will only loop while the shape doesn't equal any of
them, which is what you intend. [De Morgan's
laws](https://en.wikipedia.org/wiki/De_Morgan%27s_laws) is a good thing to
read to get a better understanding of this kind of logic.
A cleaner way to do this is to use `not in`:
while shape.lower() not in ["octagon", "heptagon", "hexagon"] :
print("You have to select a shape from the specified list.")
shape = input("What shape would you like to draw: ")
|
Flask: Peewee model_to_dict helper not working
Question: I'm developing a little app for a university project and I need to JSON-encode the result of a query to pass it to a JS library. I've read elsewhere that I can use model_to_dict to accomplish that, but I'm getting this error:
> AttributeError: 'SelectQuery' object has no attribute '_meta'
and I don't know why or what to do. Does anyone know how to solve that?
I'm using Python 2.7 and the latest version of peewee.
@app.route('/ormt')
def orm():
doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
return model_to_dict(doitch)
Answer: This is because `doitch` is a `SelectQuery` instance, not a model instance; you have to call [`get()`](http://docs.peewee-orm.com/en/latest/peewee/api.html#SelectQuery.get):
from flask import jsonify
@app.route('/ormt')
def orm():
doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
return jsonify(model_to_dict(doitch.get()))
You could also use the [dicts](http://docs.peewee-orm.com/en/latest/peewee/querying.html#retrieving-raw-tuples-dictionaries) method to get the data as a dict. This skips creating model instances entirely.
from flask import jsonify
@app.route('/ormt')
def orm():
doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
return jsonify(doitch.dicts().get())
**edit**
As @lord63 pointed out, you cannot simply return a dict; it must be a Flask response, so wrap it with jsonify.
**edit 2**
@app.route('/ormt')
def orm():
doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
# another query
sth = Something.select()
return jsonify({
'doitch': doitch.dicts().get(),
'something': sth_query.dicts().get()
})
|
Python include Scrapy from subdirectory
Question: I would like to know if there's a way that I can put Scrapy into a
subdirectory and import it. I did this with BeautifulSoup: rather than installing it, I just dropped the bs4 directory into the directory of my app and imported it:
`from bs4 import BeautifulSoup`
In the source that I downloaded from scrapy.org there is no scrapy.py so I
tried importing
`from scrapy import *`
This returned a bunch of errors.
Traceback (most recent call last):
File "C:\Users\Kat\Desktop\linkscrape\cookie.py", line 1, in <module> from scrapy import *
File "C:\Users\Kat\Desktop\linkscrape\scrapy\__init__.py", line 27, in <module>
from . import _monkeypatches
File "C:\Users\Kat\Desktop\linkscrape\scrapy\_monkeypatches.py", line 2, in <module>
from six.moves import copyreg
ImportError: No module named six.moves
Is there any way that I can simply include this to make it easy to migrate the
app from computer to computer, or does this have to be installed? Thanks.
Answer: Don't do that. You will hurt yourself.
To achieve easy portability, use [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) \- it's the gold standard of Python development.
Create a file named
[requirements.txt](https://pip.readthedocs.org/en/1.1/requirements.html) in a
root directory of your project, and write all necessary dependencies, like
this:
python-dateutil==2.4.2
Scrapy==1.0.4
XlsxWriter==0.7.7
When you are setting up the environment, create a fresh virtualenv and then
simply:
pip install -r requirements.txt
Voilà! You have a working environment.
In case of Scrapy, it's even more important because in production Scrapy is
typically deployed using
[scrapyd](https://scrapyd.readthedocs.org/en/latest/), where having a correct
[package](https://python-packaging.readthedocs.org/en/latest/) with a version
number and fixed requirements is absolutely necessary.
|
roc curve with sklearn [python]
Question: I have a problem understanding how to use the ROC functions.
I want to plot a ROC curve with Python: <http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html>
I am writing a program which evaluates detectors (Haar cascades, neural networks) and I want to compare them. So I already have the data saved in a file in the following format:
0.5 TP
0.43 FP
0.72 FN
0.82 TN
...
where TP means True Positive, FP False Positive, FN False Negative, and TN True Negative.
I parse it and fill 4 arrays with this data set.
Then I want to put this in
fpr, tpr = sklearn.metrics.roc_curve(y_true, y_score, average='macro', sample_weight=None)
but how do I do this? What are y_true and y_score in my case? Afterwards, I put fpr and tpr into

    auc = sklearn.metrics.auc(fpr, tpr)
Answer: Quotting Wikipedia:
> The ROC is created by plotting the FPR (false positive rate) vs the TPR
> (true positive rate) at various thresholds settings.
In order to compute FPR and TPR, you must provide the true binary value and
the target scores to the function [sklearn.metrics.roc_curve](http://scikit-
learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve).
So in your case, I would do something like this :
    from sklearn.metrics import roc_curve
    from sklearn.metrics import auc
    import matplotlib.pyplot as plt

    # Compute fpr, tpr, thresholds and roc auc
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    roc_auc = auc(fpr, tpr)  # auc takes the curve coordinates (fpr, tpr), not the raw labels/scores
# Plot ROC curve
plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--') # random predictions curve
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate or (1 - Specifity)')
plt.ylabel('True Positive Rate or (Sensitivity)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
If you want to have a deeper understanding of how the False positive rate and
the True positive rate are computed for all the possible thresholds values, I
suggest you to read [this
article](http://blogs.sas.com/content/iml/2011/07/29/computing-an-roc-curve-
from-basic-principles.html)
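In terms of the file format in the question, `y_true` is 1 for the actual positives (the TP and FN lines, where the ground truth is positive) and 0 for the actual negatives (FP and TN), while `y_score` is the score column. A rough sketch of the parsing, assuming one "score label" pair per line and a hypothetical file name:

    y_true, y_score = [], []
    with open("detections.txt") as f:
        for line in f:
            score, label = line.split()
            y_score.append(float(score))
            y_true.append(1 if label in ("TP", "FN") else 0)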
|
Not able to start python program via sh / crontab
Question: I am trying to start a python program called ocrmypdf from a script or as a cronjob. It works perfectly from the terminal:
pi@piscan:~ $ ocrmypdf
usage: ocrmypdf [-h] [--verbose [VERBOSE]] [--version] [-L FILE] [-j N] [-n]
[--flowchart FILE] [-l LANGUAGE] [--title TITLE]
[--author AUTHOR] [--subject SUBJECT] [--keywords KEYWORDS]
[-d] [-c] [-i] [--oversample DPI] [-f] [-s]
[--skip-big MPixels] [--tesseract-config TESSERACT_CONFIG]
[--pdf-renderer {auto,tesseract,hocr}]
[--tesseract-timeout TESSERACT_TIMEOUT] [-k] [-g]
input_file output_file
ocrmypdf: error: the following arguments are required: input_file, output_file
but from another shell it breaks for reasons I do not understand.
pi@piscan:~ $ sh ocrmypdf
sh: 0: Can't open ocrmypdf
pi@piscan:~ $ which ocrmypdf
/usr/local/bin/ocrmypdf
pi@piscan:~ $ sh $(which ocrmypdf)
import: unable to open X server `' @ error/import.c/ImportImageCommand/364.
import: unable to open X server `' @ error/import.c/ImportImageCommand/364.
from: can't read /var/mail/ocrmypdf.main
/usr/local/bin/ocrmypdf: 10: /usr/local/bin/ocrmypdf: Syntax error: "(" unexpected (expecting "then")
This is the executed code:
pi@piscan:~ $ cat $(which ocrmypdf)
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from ocrmypdf.main import run_pipeline
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(run_pipeline())
Answer: When you type `sh ocrmypdf` you ask the `sh` shell (probably `/bin/sh` which
is often a symlink to `/bin/bash` or `/bin/dash`) to interpret the `ocrmypdf`
file which is a _Python_ script, not a shell one.
So either run `python ocrmypdf` or `python $(which ocrmypdf)` or make the
`ocrmypdf` script executable. Then (on Linux at least)
[execve(2)](http://man7.org/linux/man-pages/man2/execve.2.html) will [start
the python interpreter](http://stackoverflow.com/questions/8967902/why-do-you-
need-to-put-bin-bash-at-the-beginning-of-a-script-file/8968514#8968514),
because of the [shebang](https://en.wikipedia.org/wiki/Shebang_%28Unix%29).
Of course, the `ocrmypdf` script should be in your
[`PATH`](https://en.wikipedia.org/wiki/PATH_%28variable%29)
And `crontab` jobs are not running in your desktop environment. So they don't
have access to your [X11](https://en.wikipedia.org/wiki/X_Window_System)
server [Xorg](https://en.wikipedia.org/wiki/X.Org_Server) (or to
[Wayland](https://en.wikipedia.org/wiki/Wayland_%28display_server_protocol%29),
if you are using it). You could explicitly set the `DISPLAY` variable for
that, but I don't recommend doing this.
|
tkinter wait_window() raising tkinter.TclError: Bad window path name
Question: I've been messing around with python's tkinter, and wrote the following code
for dialog practice:
import tkinter as tk
class Dialog(tk.Toplevel):
def __init__(self, parent, title=None):
tk.Toplevel.__init__(self, parent)
self.transient(parent)
if title: self.title(title)
self.parent = parent
self.result = None
body = tk.Frame(self)
self.e = tk.Entry(body)
self.e.pack()
self.initial_focus = self.e
body.pack(padx=5, pady=5)
self.buttonbox()
self.grab_set()
if not self.initial_focus: self.initial_focus = self
self.protocol("WM_DELETE_WINDOW", self.cancel)
self.geometry("+%d+%d"%(parent.winfo_rootx()+50, parent.winfo_rooty()+50))
self.initial_focus.focus_set()
self.wait_window(self)
def buttonbox(self):
box = tk.Frame(self)
w = tk.Button(box, text="OK", width=10, command=self.ok, default=tk.ACTIVE)
w.pack(side=tk.LEFT, padx=5, pady=5)
w = tk.Button(box, text="Cancel", width=10, command=self.cancel)
w.pack(side=tk.LEFT, padx=5, pady=5)
self.bind("<Return>", self.ok)
self.bind("<Escape>", self.cancel)
box.pack()
def ok(self, event=None):
if not self.validate():
self.initial_focus.focus_set()
return
self.withdraw()
self.update_idletasks()
self.apply()
self.cancel()
def cancel(self, event=None):
self.parent.focus_set()
self.destroy()
def validate(self):
if self.e.get(): return True
else: return False
def apply(self):
value = self.e.get()
self.parent.l.config(text=value)
class App(tk.Tk):
def __init__(self):
tk.Tk.__init__(self)
self.l = tk.Label(self, text="Hello!")
self.b = tk.Button(self, text="Hello?", command=self.dialog, default=tk.ACTIVE)
self.l.pack()
self.b.pack()
def dialog(self):
self.dialogbox = Dialog(self)
self.dialogbox.wait_window()
app = App()
app.mainloop()
So Dialog is a dialog box opened by App, and it's used to alter the label's text, etc. While it seems to run without much problem, the console shows the following error:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:\Users\Administrator\Documents\JW\Python 3.5\guiPractice.py", line 58, in dialog
self.dialogbox.wait_window()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 490, in wait_window
self.tk.call('tkwait', 'window', window._w)
_tkinter.TclError: bad window path name ".9605808"
From what I've found, it's raised because I'm trying to deal with a destroyed
object (self.dialogbox?), but I can't see why exactly this error is raised in
this case.
Please Help!
Answer: It is because the dialog itself calls `wait_window` on itself inside `__init__`, and that call does not return until the dialog is destroyed. Only then does `Dialog(self)` return in `App.dialog`, so by the time the main program calls `self.dialogbox.wait_window()` the window has already been destroyed and its Tk path name is no longer valid.
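One way to fix it is to drop the redundant call in the caller and rely on the `wait_window` inside the dialog (a minimal sketch):

    def dialog(self):
        # Dialog.__init__ already blocks in self.wait_window(self), so by the
        # time the constructor returns, the dialog has been destroyed.
        self.dialogbox = Dialog(self)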
|
FTP access error: raise NotImplementedError
Question: I am using one rhc openshift server.
It has Python installed on it, and I installed the **_pyftpsync_** module on it. I want to connect to another host via FTP, but I got this error:
res = self._sync_dir()
File "/var/lib/openshift/56856e180c1e6670500000bb/app-root/runtime/srv/python/lib/python2.7/site-packages/pyftpsync-1.0.3-py2.7.egg/ftpsync/synchronizers.py", line 375, in _sync_dir
remote_entries = self.remote.get_dir()
File "/var/lib/openshift/56856e180c1e6670500000bb/app-root/runtime/srv/python/lib/python2.7/site-packages/pyftpsync-1.0.3-py2.7.egg/ftpsync/ftp_target.py", line 270, in get_dir
self.ftp.retrlines("MLSD", _addline)
File "/var/lib/openshift/56856e180c1e6670500000bb/app-root/runtime/srv/python/lib/python2.7/ftplib.py", line 443, in retrlines
callback(line)
File "/var/lib/openshift/56856e180c1e6670500000bb/app-root/runtime/srv/python/lib/python2.7/site-packages/pyftpsync-1.0.3-py2.7.egg/ftpsync/ftp_target.py", line 263, in _addline
raise NotImplementedError
NotImplementedError
My source code is here:
cd /tmp
cat << 'EOF' > ftp_sync.py
from ftpsync.synchronizers import DownloadSynchronizer, UploadSynchronizer,BiDirSynchronizer
from ftpsync.targets import FsTarget #, UploadSynchronizer, DownloadSynchronizer
from ftpsync.ftp_target import FtpTarget
import os
env_var = os.environ['OPENSHIFT_HOMEDIR']
local = FsTarget('/tmp')
passwd = "ss123456"
ip='sa1sss.atspace.cc';user='2025575';# sa1sss.atspace.cc [email protected]
remote = FtpTarget("/mashhadpc.tk", ip,21, user, passwd)
opts = {"force": False, "delete_unmatched": False, "verbose": 3, "execute": True, "dry_run" : False}
s = UploadSynchronizer(local, remote, opts)
s.run()
stats = s.get_stats()
print(stats)
EOF
nohup sh -c " ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/python/bin/python ftp_sync.py"> $OPENSHIFT_LOG_DIR/python_ftp_sync.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/python_ftp_sync.log
As you can see, the FTP user and password are included so you can test the connection. What is my mistake in this code that causes the error?
Thanks a lot.
Answer: As stated in the docs, the FTP server must support the
[MLSD](https://tools.ietf.org/html/rfc3659#page-23) command.
The error indicates that the response to this command could not be parsed.
I would suggest that you open an issue on the pyftpsync project site.
|
Why does Python allow function calls with wrong number of arguments?
Question: Python is my first dynamic language. I recently coded a function call
incorrectly supplying a wrong number of arguments. This failed with an
exception at the time that function was called. I expected that even in a
dynamic language, this kind of error can be detected when the source file is
parsed.
I understand that the **type** of actual arguments is not known until the
function is called, because the same variable may contain values of any type
at different times. But the **number** of arguments is known as soon as the
source file is parsed. It is not going to change while the program is running.
### So that this is not a philosophical question
To keep this in scope of Stack Overflow, let me phrase the question like this.
Is there some feature, that Python offers, that requires it to delay checking
the number of arguments in a function call until the code actually executes?
Answer: Python cannot know up-front what object you'll end up calling, because being
dynamic, you can _swap out the function object_. At any time. And each of
these objects can have a different number of arguments.
Here is an extreme example:
import random
def foo(): pass
def bar(arg1): pass
def baz(arg1, arg2): pass
the_function = random.choice([foo, bar, baz])
print(the_function())
The above code has a 2 in 3 chance of raising an exception. But Python cannot
know a-priori if that'll be the case or not!
And I haven't even started with dynamic module imports, dynamic function
generation, other callable objects (any object with a `__call__` method can be
called), or catch-all arguments (`*args` and `**kwargs`).
But to make this extra clear, you state in your question:
> It is not going to change while the program is running.
This is not the case in Python: once the module is loaded you can delete, add or replace any object in the module namespace, including function objects.
|
Python not concatenating string and unicode to link
Question: When I append a Unicode string to the end of str, I can not click on the URL.
Bad:
base_url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xml&titles='
url = base_url + u"Ángel_Garasa"
print url
[](http://i.stack.imgur.com/tkSrv.png)
Good:
base_url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xml&titles='
url = base_url + u"Toby_Maquire"
print url
[](http://i.stack.imgur.com/zNvc2.png)
Answer: It appears that you're printing the results in an IDE, perhaps PyCharm. You
need to percent encode a UTF-8 encoded version of the string:
import urllib
base_url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xml&titles='
name = u"Ángel_Garasa"
print base_url + urllib.quote(name.encode("utf-8"))
This shows: [](http://i.stack.imgur.com/vqie9.png)
In your case you need to update your code so that the relevant field from the
database is percent-encoded. Only that one field needs to be encoded to UTF-8, and
only for the percent encoding; the rest of the URL can stay as it is.
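A minimal sketch of how that could look (the `titles` list here stands in for the
values you would actually read from your database):

# -*- coding: utf-8 -*-
import urllib

base_url = ('https://en.wikipedia.org/w/api.php?action=query'
            '&prop=revisions&rvprop=content&format=xml&titles=')

# Stand-ins for the unicode titles you would fetch from the database.
titles = [u"Ángel_Garasa", u"Toby_Maquire"]

for title in titles:
    # Encode the unicode title to UTF-8 bytes, then percent-encode those bytes.
    print base_url + urllib.quote(title.encode("utf-8"))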
|
Reading/writing files in Python
Question: I am trying to create a new text file of the stock symbols in the Russell
2000 from one that looks like this:
[](http://i.stack.imgur.com/eM2nr.png)
All I want is the ticker symbol at the end of each line. So I have the
following code:
with open("russ.txt", "r") as f:
for line in f:
line = line.split()
line = line[-1]
if line == "Ticker": continue
print line
with open("output.txt", "w") as fh:
fh.seek(0,2)
print line
fh.write(line)
All I end up with in the `output.txt` file is one line with the very last
ticker in the list instead of all the tickers. I thought using `fh.seek(0,2)`
would append a new line at the end each time through. What am I doing wrong?
Also, in reality I don't need to create another doc; I could just edit the
current one, but I couldn't figure that out either, so if you could show me how
to write to the same file, that would also be perfectly acceptable.
Answer: I believe
[`fileinput`](https://docs.python.org/2.7/library/fileinput.html?highlight=fileinput#fileinput.input)
will also be handy in your case:
import fileinput
import sys

# inplace=1 redirects stdout into "russ.txt", so what we write replaces its contents
for line in fileinput.input("russ.txt", inplace=1):
    fields = line.split()
    if fields:                                # skip blank lines
        sys.stdout.write(fields[-1] + "\n")   # keep only the last column (the ticker)
With `inplace=1`, `fileinput.input` rewrites the original file in place: whatever the
loop writes to standard output becomes the new file contents.
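If you would rather keep your original two-file approach, the actual problem is that
you open `output.txt` with mode `"w"` inside the loop, which truncates the file on
every iteration, so only the last ticker survives; `fh.seek(0,2)` cannot help with
that. A sketch of the fix is to open the output file once, outside the loop:

with open("russ.txt", "r") as f, open("output.txt", "w") as fh:
    for line in f:
        fields = line.split()
        if not fields or fields[-1] == "Ticker":  # skip blanks and the header row
            continue
        fh.write(fields[-1] + "\n")               # write() does not add newlines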
|
How to access a remote datastore when running dev_appserver.py?
Question: I'm attempting to run a localhost web server that has remote api access to a
remote datastore using the `remote_api_stub` method
`ConfigureRemoteApiForOAuth`.
I have been using the following Google doc for reference but find it rather
sparse:
<https://cloud.google.com/appengine/docs/python/tools/remoteapi>
I believe I'm missing the authentication bit, but can't find a concrete
resource to guide me. What would be the easiest way, given the follow code
example, to access a remote datastore while running `dev_appserver.py`?
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub
class Topic(ndb.Model):
created_by = ndb.StringProperty()
subject = ndb.StringProperty()
@classmethod
def query_by_creator(cls, creator):
return cls.query(Topic.created_by == creator)
class MainPage(webapp2.RequestHandler):
def get(self):
remote_api_stub.ConfigureRemoteApiForOAuth(
'#####.appspot.com',
'/_ah/remote_api'
)
topics = Topic.query_by_creator('bill')
self.response.headers['Content-Type'] = 'text/plain'
self.response.out.write('<html><body>')
self.response.out.write('<h1>TOPIC SUBJECTS:<h1>')
for topic in topics.fetch(10):
self.response.out.write('<h3>' + topic.subject + '<h3>')
self.response.out.write('</body></html>')
app = webapp2.WSGIApplication([
('/', MainPage)
], debug=True)
Answer: This gets asked a lot, simply because you can't use App Engine's libraries
outside of the SDK. However, there is an easier way to do it from within the App
Engine SDK as well.
I would use `gcloud` for this. Here's how to set it up:
If you want to interact with google cloud storage services inside or outside
of the App Engine environment, you may use Gcloud
(<https://googlecloudplatform.github.io/gcloud-python/stable/>) to do so.
You need a service account for your application, and you need to download its JSON
credentials file. You do this in the App Engine console under the `authentication`
tab: create the service account, then download the key. Call it
`client_secret.json` or something.
With those, once you install the proper packages for gcloud with pip, you'll
be able to make queries as well as write data.
Here is an example of authenticating yourself to use the library:
import os

from gcloud import datastore

# the location of the JSON file on your local machine
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/location/client_secret.json"

# project ID from the Developers Console
projectID = "THE_ID_OF_YOUR_PROJECT"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID

client = datastore.Client(dataset_id=projectID)
Once that's done, you can make queries like this:
query = client.query(kind='Model').fetch()
It's actually super easy. Anyhow, that's how I would do that! Cheers.
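Writing goes through the same client. Here is a minimal sketch using the `Topic` kind
and properties from your question (it assumes `client` was created as shown above,
and the property values are just examples):

# Assumes `client` was created as shown above.
key = client.key('Topic')            # incomplete key; the ID is assigned on save
entity = datastore.Entity(key=key)
entity['created_by'] = 'bill'
entity['subject'] = 'written via gcloud'
client.put(entity)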
|
Python, assign function to variable, change optional argument's value
Question: Is it possible to assign a function to a variable with modified default
arguments?
To make it more concrete, I'll give an example. The following obviously
doesn't work in the current form and is only meant to show what I need:
def power(a, pow=2):
ret = 1
for _ in range(pow):
ret *= a
return ret
cube = power(pow=3)
And the result of `cube(5)` should be `125`.
Answer: [`functools.partial`](https://docs.python.org/2/library/functools.html#functools.partial)
to the rescue:
> Return a new partial object which when called will behave like func called
> with the positional arguments args and keyword arguments keywords. If more
> arguments are supplied to the call, they are appended to args. If additional
> keyword arguments are supplied, they extend and override keywords.
from functools import partial
cube = partial(power, pow=3)
Demo:
>>> from functools import partial
>>>
>>> def power(a, pow=2):
... ret = 1
... for _ in range(pow):
... ret *= a
... return ret
...
>>> cube = partial(power, pow=3)
>>>
>>> cube(5)
125
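If you only need this in one place, a plain wrapper function does the same job;
`partial` is simply the more declarative, reusable option, and the resulting object
keeps a reference to the wrapped function and its preset keywords:

def cube(a):
    return power(a, pow=3)

print(cube(5))  # 125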
|