"Delayed" writing to file in python
Question: I have a very specific problem and I have no idea how to get rid of it. I'm
sending some data from a PHP client on a webserver. The code looks like this:
<?php
session_start();
include 'db_con.php';
include('db_functions.php');
error_reporting(E_ALL);

/* Get the port for the WWW service. */
$service_port = $_SESSION['port'];

/* Get the IP address for the target host. */
$address = $_SESSION['ip'];

/* Create a TCP/IP socket. */
try {
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
} catch (Exception $e) {
    echo "error" . socket_strerror(socket_last_error()) . "\n";
    socket_close($socket);
}
$result = socket_connect($socket, $address, $service_port);

//data from database soonTM
$in = returnResults($db);
$data = "";
foreach ($in as $r){
    $data .= $r['ring_time'] . ' ' . $r['bell_mode'] . "\r\n";
}

socket_write($socket, $data, strlen($data));

echo "Reading response:\n\n";
while ($out = socket_read($socket, 2048)) {
    echo $out;
}

socket_close($socket);
?>
The data from the database is then sent to the TCP server, and it works great, since I
get back the exact same data I sent.
Anyway, this is my TCP server code:
import socket

# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bind the socket to the port
server_address = ('10.10.10.120', 10000)
print('starting up on %s port %s' % server_address)
sock.bind(server_address)

while True:
    # Listen for incoming connections
    sock.listen(1)

    # Wait for a connection
    print('waiting for a connection')
    connection, client_address = sock.accept()
    try:
        print('connection from', client_address)

        # Receive the data in small chunks and retransmit it
        while True:
            data = connection.recv(2048)
            neki = 'hello rok'
            print('received "%s"' % data)
            if data:
                try:
                    dataFile = open('data.txt', 'w')
                    try:
                        dataFile.write(data)
                    except Exception as inst:
                        print(type(inst))
                        print("error at writing to file")
                except Exception as inst:
                    print(type(inst))
                    print("error opening file")
                print('sending data back to the client')
                #connection.sendall(bytes(neki, 'UTF-8'))
                connection.sendall(neki)
                break
    except Exception as inst:
        print(type(inst))
        print("error opening connection")
    finally:
        # Clean up the connection
        connection.close()
And it seems to be working perfectly, since the terminal prints the stuff I
sent via the TCP client. However, the problem is with writing to the data.txt file.
When I send data for the first time, nothing will be written to the file. If I
send it again, it will work as intended (even though the right data is printed
in the terminal). If I add another line to my database and send it again, the old
value will be written to the file again. If I send it again, the new value will be
added.
Answer: You need to close your file. I think the problem is this. Please try to add a
'finally' block like this:
try:
    dataFile = open('data.txt', 'w')
    try:
        dataFile.write(data)
    except Exception as inst:
        print(type(inst))
        print("error at writing to file")
except Exception as inst:
    print(type(inst))
    print("error opening file")
finally:
    dataFile.close()
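Alternatively, a `with` block closes (and therefore flushes) the file automatically,
even when an exception is raised; a minimal sketch of just the writing part:

try:
    # the context manager flushes and closes data.txt on exit
    with open('data.txt', 'w') as dataFile:
        dataFile.write(data)
except Exception as inst:
    print(type(inst))
    print("error writing to file")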
|
Spark-SQL Window functions on Dataframe - Finding first timestamp in a group
Question: I have below dataframe (say UserData).
uid region timestamp
a 1 1
a 1 2
a 1 3
a 1 4
a 2 5
a 2 6
a 2 7
a 3 8
a 4 9
a 4 10
a 4 11
a 4 12
a 1 13
a 1 14
a 3 15
a 3 16
a 5 17
a 5 18
a 5 19
a 5 20
This data is nothing but a user (uid) travelling across different regions
(region) at different times (timestamp). Presently, timestamp is shown as 'int'
for simplicity. Note that the above dataframe will not necessarily be in
increasing order of timestamp. Also, there may be some rows in between from
different users. I have shown the dataframe for a single user only, in monotonically
increasing order of timestamp, for simplicity.
My goal is to find how much time user 'a' spent in each region, and in what
order. So my final expected output looks like:
uid region regionTimeStart regionTimeEnd
a 1 1 5
a 2 5 8
a 3 8 9
a 4 9 13
a 1 13 15
a 3 15 17
a 5 17 20
Based on my findings, Spark SQL Window functions can be used for this purpose.
I have tried the following:
val w = Window
  .partitionBy("region")
  .partitionBy("uid")
  .orderBy("timestamp")

val resultDF = UserData.select(
  UserData("uid"), UserData("timestamp"),
  UserData("region"), rank().over(w).as("Rank"))
But from here onwards, I am not sure how to get the `regionTimeStart` and
`regionTimeEnd` columns. The `regionTimeEnd` column is nothing but the 'lead' of
`regionTimeStart`, except for the last entry in a group.
I see that aggregate operations have 'first' and 'last' functions, but for that I
would need to group the data based on ('uid','region'), which spoils the monotonically
increasing order of the path traversed: at times 13,14 the user has come back to
region '1', and I want that to be retained instead of clubbing it with the initial
visit to region '1' at time 1.
It would be very helpful if anyone can guide me. I am new to Spark and I
have a better understanding of the Scala Spark APIs compared to the Python/Java Spark
APIs.
Answer: Window functions are indeed useful, although your approach can work only if you
assume that the user visits a given region only once. Also, the window definition you
use is incorrect - multiple calls to `partitionBy` simply return new objects with
different window definitions. If you want to partition by multiple columns you
should pass them in a single call (`.partitionBy("region", "uid")`).
Let's start with marking continuous visits in each region:
import org.apache.spark.sql.functions.{lag, sum, not}
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy($"uid").orderBy($"timestamp")
val change = (not(lag($"region", 1).over(w) <=> $"region")).cast("int")
val ind = sum(change).over(w)
val dfWithInd = df.withColumn("ind", ind)
Next we simply aggregate over the groups and find leads:
import org.apache.spark.sql.functions.{lead, coalesce, min, max}

val regionTimeEnd = coalesce(lead($"timestamp", 1).over(w), $"max_")

val result = dfWithInd
  .groupBy($"uid", $"region", $"ind")
  .agg(min($"timestamp").alias("timestamp"), max($"timestamp").alias("max_"))
  .drop("ind")
  .withColumn("regionTimeEnd", regionTimeEnd)
  .withColumnRenamed("timestamp", "regionTimeStart")
  .drop("max_")
result.show
// +---+------+---------------+-------------+
// |uid|region|regionTimeStart|regionTimeEnd|
// +---+------+---------------+-------------+
// | a| 1| 1| 5|
// | a| 2| 5| 8|
// | a| 3| 8| 9|
// | a| 4| 9| 13|
// | a| 1| 13| 15|
// | a| 3| 15| 17|
// | a| 5| 17| 20|
// +---+------+---------------+-------------+
|
How to avoid Python script name being regarded as namespace with Doxygen
Question: I am now making documentation with Doxygen for Python scripts. Suppose the
name of the script is `all_the_best.py`; at the beginning of the script,
I document it as follows:
##
# @namespace scripts.python
# This is a python script
import os
...
I expected the script to belong to the scripts.python namespace in the
Namespaces tab of the generated HTML file. However, I found that in the Namespaces
tab, not only scripts.python is available but also `all_the_best` appears. Any
ideas on how to avoid this? Thanks.
Answer: I tried a lot of things, and in my case I had to put `EXTRACT_ALL = NO` in the
Doxyfile to have the second instance removed.
|
Execute python in PHP and get response as i get in terminal
Question: I am new to PHP programming. I need to execute Python code from PHP. My
Python code contains some external modules. The PHP code is

<?php
$op = shell_exec('python mgs.py');
echo $op;
?>
This is my intended code, and I need to get the exact output that I get in the
terminal (if there are errors, those too).
The Python code:
import mechanize

def mgs():
    a = 0
    flag = 0
    browser = mechanize.Browser(factory=mechanize.RobustFactory())
    browser.set_handle_robots(False)
    browser.open("http://14.139.185.88/cbcsshrCamp/index.php?module=public&page=result")
    browser.select_form(nr=0)
    control = browser.find_control('exam_id')
    print control
    control.value = ['203']
    browser.form["prn"] = "130021069679"
    browser.submit()
    html = browser.response().readlines()
    for i in range(0, len(html)):
        if 'Failed' in html[i]:
            flag = 1
    if flag == 1:
        print "Fail"
    else:
        print "Pass"

mgs()
I am not getting any output in the browser window, but when I execute the PHP
code in the terminal (i.e. `php index.php`), it works fine.
My OS is Ubuntu with Apache 2.
Answer: Most likely, `python` is not in the path of the user that executes your php
script.
Try this instead:
<?php
$op = shell_exec('/usr/bin/python3 mgs.py');
echo $op;
?>
Of course, change the interpreter to the correct path if this isn't the
correct one.
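As an aside, `shell_exec()` returns only the command's standard output, so Python
tracebacks written to stderr will never reach the browser. To see errors exactly as
you do in the terminal, redirect stderr into stdout by appending `2>&1` to the command
string, e.g. `shell_exec('/usr/bin/python mgs.py 2>&1')`.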
|
RuntimeWarning: invalid value encountered in arccos
Question: I am new to using Python but getting along with it fairly well. I keep getting
the error you see below and I am not sure what the problem is exactly, as I believe
the values are correct as stated. What do you think the problem is? I am trying to
graph from t = 0 to t = PM, and the formula you see below is the angle arccos.
I couldn't find any troubleshooting for this arccos error online. Running Python
3.5.
from __future__ import division  # a future import must come before any other statements

import numpy as np
import matplotlib
from matplotlib import pyplot

rE = 1.50*(10**11)
rM = 3.84*(10**8)
PE = 3.16*(10**7)
PM = 2.36*(10**6)

t = np.linspace(0, PM, 200)

# anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: invalid value encountered in arccos
y = 0.5*(np.arccos(2*(np.pi)*t*((1/PM)-(1/PE))+90))
Answer: Well, if you do
np.arccos(90)
(which is your first element), you'll get the same warning - simplifying your
example considerably.
Why is that? The [arccos
function](https://en.wikipedia.org/wiki/Inverse_trigonometric_functions#arccos)
asks for the _x_ for which _cos(x) = 90_. From basic trigonometry, you can tell
there's [no such
value](https://en.wikipedia.org/wiki/Trigonometric_functions#cosine), since cosine
only takes values between -1 and 1.
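A quick demonstration of the domain restriction - `np.arccos` returns nan (with this
exact warning) for any input outside [-1, 1]:

import numpy as np

# arccos is only defined on the interval [-1, 1]
print(np.arccos(1.0))   # 0.0
print(np.arccos(-1.0))  # 3.141592653589793 (pi)
print(np.arccos(90))    # nan, plus RuntimeWarning: invalid value encountered in arccos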
|
Using a function as argument to re.sub in Python?
Question: I'm writing a program to split the words contained in a hashtag.
For example, I want to split the hashtags:
#Whatthehello #goback
into:
What the hello go back
I'm having trouble using
[`re.sub`](https://docs.python.org/3.5/library/re.html#re.sub) with a
function argument.
The code I've written is:
import re, pdb

def func_replace(each_func):
    i = 0
    wordsineach_func = []
    while len(each_func) > 0:
        i = i + 1
        word_found = longest_word(each_func)
        if len(word_found) > 0:
            wordsineach_func.append(word_found)
            each_func = each_func.replace(word_found, "")
    return ' '.join(wordsineach_func)

def longest_word(phrase):
    phrase_length = len(phrase)
    words_found = []
    index = 0
    outerstring = ""
    while index < phrase_length:
        outerstring = outerstring + phrase[index]
        index = index + 1
        if outerstring in words or outerstring.lower() in words:
            words_found.append(outerstring)
    if len(words_found) == 0:
        words_found.append(phrase)
    return max(words_found, key=len)

words = []
# The file corncob_lowercase.txt contains a list of dictionary words
with open('corncob_lowercase.txt') as f:
    read_words = f.readlines()
for read_word in read_words:
    words.append(read_word.replace("\n", "").replace("\r", ""))
For example when using these functions like this:
s="#Whatthehello #goback"
#checking if the function is able to segment words
hashtags=re.findall(r"#(\w+)", s)
print func_replace(hashtags[0])
# using the function for re.sub
print re.sub(r"#(\w+)", lambda m: func_replace(m.group()), s)
The output I obtain is:
What the hello
#Whatthehello #goback
Which is not the output I had expected:
What the hello
What the hello go back
Why is this happening? In particular I've used the suggestion from [this
answer](http://stackoverflow.com/a/18737964/3646408) but I don't understand
what goes wrong in this code.
Answer: Notice that `m.group()` returns the entire string that matched, whether or not
it was part of a capturing group:
In [19]: m = re.search(r"#(\w+)", s)
In [20]: m.group()
Out[20]: '#Whatthehello'
`m.group(0)` also returns the entire match:
In [23]: m.group(0)
Out[23]: '#Whatthehello'
In contrast, `m.groups()` returns all capturing groups:
In [21]: m.groups()
Out[21]: ('Whatthehello',)
and `m.group(1)` returns the first capturing group:
In [22]: m.group(1)
Out[22]: 'Whatthehello'
So the problem in your code originates with the use of `m.group` in
re.sub(r"#(\w+)", lambda m: func_replace(m.group()), s)
since
In [7]: re.search(r"#(\w+)", s).group()
Out[7]: '#Whatthehello'
whereas if you had used `.group(1)`, you would have gotten
In [24]: re.search(r"#(\w+)", s).group(1)
Out[24]: 'Whatthehello'
and the preceding `#` makes all the difference:
In [25]: func_replace('#Whatthehello')
Out[25]: '#Whatthehello'
In [26]: func_replace('Whatthehello')
Out[26]: 'What the hello'
Thus, changing `m.group()` to `m.group(1)`, and substituting
`/usr/share/dict/words` for `corncob_lowercase.txt`,
import re

def func_replace(each_func):
    i = 0
    wordsineach_func = []
    while len(each_func) > 0:
        i = i + 1
        word_found = longest_word(each_func)
        if len(word_found) > 0:
            wordsineach_func.append(word_found)
            each_func = each_func.replace(word_found, "")
    return ' '.join(wordsineach_func)

def longest_word(phrase):
    phrase_length = len(phrase)
    words_found = []
    index = 0
    outerstring = ""
    while index < phrase_length:
        outerstring = outerstring + phrase[index]
        index = index + 1
        if outerstring in words or outerstring.lower() in words:
            words_found.append(outerstring)
    if len(words_found) == 0:
        words_found.append(phrase)
    return max(words_found, key=len)

words = []
# corncob_lowercase.txt contains a list of dictionary words
with open('/usr/share/dict/words', 'rb') as f:
    for read_word in f:
        words.append(read_word.strip())

s = "#Whatthehello #goback"
hashtags = re.findall(r"#(\w+)", s)
print func_replace(hashtags[0])
print re.sub(r"#(\w+)", lambda m: func_replace(m.group(1)), s)
prints
What the hello
What the hello gob a c k
since, alas, `'gob'` is longer than `'go'`.
* * *
One way you could have debugged this is to replace the `lambda` function with
a regular function and then add print statements:
def foo(m):
    result = func_replace(m.group())
    print(m.group(), result)
    return result
In [35]: re.sub(r"#(\w+)", foo, s)
('#Whatthehello', '#Whatthehello') <-- This shows you what `m.group()` and `func_replace(m.group())` returns
('#goback', '#goback')
Out[35]: '#Whatthehello #goback'
That would focus your attention on
In [25]: func_replace('#Whatthehello')
Out[25]: '#Whatthehello'
which you could then compare with
In [26]: func_replace(hashtags[0])
Out[26]: 'What the hello'
In [27]: func_replace('Whatthehello')
Out[27]: 'What the hello'
That would lead you to ask the question, if `m.group()` returns
`'#Whatthehello'`, what method do I need to return `'Whatthehello'`. A dive
into [the docs](https://docs.python.org/3/library/re.html#re.match.group) then
solves the problem.
|
access file's actual code as string Python
Question: This is mainly curiosity; I'd like to access my script's actual source code,
so for a file:
#!/usr/bin/env python
# coding: utf-8
import os, sys, subprocess, time, re
import yagmail
it would return
code = """\
#!/usr/bin/env python
# coding: utf-8
import os, sys, subprocess, time, re
import yagmail"""
I see nice things like file, name, etc, but no code:
__IPYTHON__ __IPYTHON__active __debug__ __doc__ __file__ __import__ __name__ __package__
ipdb>
Thank you
Answer: The `inspect` library has some methods for this (specifically
`inspect.getsource()`).
<https://docs.python.org/3.4/library/inspect.html#retrieving-source-code>
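For example, a minimal sketch (this works when the code runs from an actual file;
interactive sessions have no source file to read):

import inspect
import sys

# retrieve and print this module's own source code
print(inspect.getsource(sys.modules[__name__]))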
|
Upgrading Django from 1.8.6 to 1.9--django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet; importing models issue
Question: I'm trying to upgrade from Django 1.8.6 to 1.9 but I've been running into
trouble with trying to get the project to build and run properly. I've changed
numerous things necessary for the upgrade, like making an `apps.py` file in
`my_app` and defining Configs and including their dotted paths in
`INSTALLED_APPS` in `settings.py` (and changing `INSTALLED_APPS` to a list),
but the same error I get every time is:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
For some background info, the project uses Docker Compose. The Dockerfile used
for the `web` container which runs the Django server itself utilizes `paver`
to start up so that when `docker-compose up` is run, the following commands
are executed:
./manage.py makemigrations --noinput
./manage.py migrate --noinput
./manage.py collectstatic --noinput
pip install --upgrade pip
pip install --upgrade -r requirements.txt
./manage.py runserver 0.0.0.0:8000
To my knowledge, none of the dependencies are incompatible with Django 1.9 so
I'm not sure if that's the issue here. The only dependency that I initially
thought could have been incompatible was `django_hstore`, but it was updated
for compatibility shortly after 1.9 was officially released. So unless the
creator of `django_hstore` is mistaken or lying (which I doubt), I can't
really think of any incompatible dependencies. The database backend used is
`django.db.backends.postgresql_psycopg2`. There is also a `worker` container
that uses the same Dockerfile as `web` and is used to run Celery. The full
error traceback from the Django `worker` container is below:
worker_1 | Traceback (most recent call last):
worker_1 | File "/usr/local/bin/celery", line 11, in <module>
worker_1 | sys.exit(main())
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/__main__.py", line 30, in main
worker_1 | main()
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/bin/celery.py", line 81, in main
worker_1 | cmd.execute_from_commandline(argv)
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/bin/celery.py", line 770, in execute_from_commandline
worker_1 | super(CeleryCommand, self).execute_from_commandline(argv)))
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/bin/base.py", line 309, in execute_from_commandline
worker_1 | argv = self.setup_app_from_commandline(argv)
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/bin/base.py", line 469, in setup_app_from_commandline
worker_1 | self.app = self.find_app(app)
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/bin/base.py", line 489, in find_app
worker_1 | return find_app(app, symbol_by_name=self.symbol_by_name)
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/app/utils.py", line 238, in find_app
worker_1 | sym = imp(app)
worker_1 | File "/usr/local/lib/python2.7/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
worker_1 | return imp(module, package=package)
worker_1 | File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
worker_1 | __import__(name)
worker_1 | File "/code/my_app/tasks.py", line 3, in <module>
worker_1 | from taskman.celery import app, DBTask
worker_1 | File "/code/taskman/celery.py", line 6, in <module>
worker_1 | from utils.db.clearblackbox import rm_invalid_blackbox
worker_1 | File "/code/utils/db/clearblackbox.py", line 9, in <module>
worker_1 | django.setup()
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
worker_1 | apps.populate(settings.INSTALLED_APPS)
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
worker_1 | app_config = AppConfig.create(entry)
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/apps/config.py", line 142, in create
worker_1 | app_module = import_module(app_name)
worker_1 | File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
worker_1 | __import__(name)
worker_1 | File "/code/utils/db/blackboxquery.py", line 2, in <module>
worker_1 | from my_app.models import BlackBox, DataPoint, Value, SourceInfo, FormatString, Argument
worker_1 | File "/code/my_app/models.py", line 11, in <module>
worker_1 | class Value(models.Model):
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/base.py", line 94, in __new__
worker_1 | app_config = apps.get_containing_app_config(module)
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
worker_1 | self.check_apps_ready()
worker_1 | File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
worker_1 | raise AppRegistryNotReady("Apps aren't loaded yet.")
worker_1 | django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
I've seen a number of similar questions that had the same AppRegistryNotReady
exception when upgrading to 1.9 but in my case I can specifically tell it's
because one of my apps whose AppConfig is in `apps.py` imports models when it
is recommended to not do so. The Django 1.9 documentation on application setup
says:
> At this stage, your code shouldn’t import any models!
>
> In other words, your applications’ root packages and the modules that define
> your application configuration classes shouldn’t import any models, even
> indirectly.
>
> Strictly speaking, Django allows importing models once their application
> configuration is loaded. However, in order to avoid needless constraints on
> the order of INSTALLED_APPS, it’s strongly recommended not import any models
> at this stage.
Unfortunately, the documentation seems to offer no alternative for importing
models during the setup stage, which is a shame since I can't actually avoid
importing models. Specifically, in my `celery.py`, I have a `Task` subclass
named `DBTask` whose `on_error` callback uses a module function to remove an
invalid database insertion. That module, `clearblackbox.py`, imports models
since it needs to call `delete()` on the invalid model that was inserted into
the database. The `DBTask` class is used as the base class for the main
database insertion task named `insertBlackboxIntoDatabaseTask`. Since I can't
get around importing models at the `setup()` stage, what else can I do to get
past this error and be able to run my server again?
**_EDIT:_** I was wondering if I had any unnecessary Configs so I got rid of
all but two: `my_app.apps.TasksConfig`, whose `name` field points to my
`tasks.py` containing the definition of my database insertion task, and
`taskman.celery.CeleryConfig`, which overrides `ready` such that it auto-
detects tasks from among `INSTALLED_APPS`. Though I put
`taskman.celery.CeleryConfig` in `INSTALLED_APPS`, the error I get now is
ImportError: No module named CeleryConfig
This happens if I put `import django` and `django.setup()` in either
`celery.py` or `clearblackbox.py` in an attempt to solve the
`AppRegistryNotReady` exception that is happening due to importing models
during setup.
Answer: What ended up working for me was getting rid of all Configs in `apps.py`
except for the `TasksConfig`, then modifying the imports in `clearblackbox.py`
and `celery.py`. In `clearblackbox.py`, I moved the model import into the
function itself instead of leaving it at the top, and in `celery.py`, I moved
the import of the function from `clearblackbox.py` into the `on_failure`
definition for `DBTask` rather than leaving the import at the top. After doing
this, putting the dotted paths to `TasksConfig` and `CeleryConfig` in
`INSTALLED_APPS` ended up working.
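A minimal sketch of that deferred-import pattern (the module and model names come
from the question; the function body is illustrative):

# clearblackbox.py
def rm_invalid_blackbox(blackbox_id):
    # imported here rather than at module level, so nothing touches the
    # model registry until Django's app loading has finished
    from my_app.models import BlackBox
    BlackBox.objects.get(pk=blackbox_id).delete()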
|
I installed pydicom through pip but it somehow cannot be found when I try to import the library. Could someone give a suggestion to fix this?
Question: [Please see the screenshot here.](http://i.stack.imgur.com/90HbA.png)
Isn't the package pydicom already installed?
Credit to Igor: It looks like it's working in Sublime Text 2's build
using `import dicom`. However, it is still somehow not working in my Eclipse
with PyDev environment.
* * *
I have solved the problem in Eclipse after adding a path to the Python
Interpreter configuration by hand (somehow Auto-Config didn't add this path).
[Please see the screenshot here.](http://i.stack.imgur.com/sRozv.png)
Answer: try:
python -m pip freeze
Is the package there? If not, you have installed it for the wrong python
environment. I suggest you install it as follows:
In Bash prompt (terminal):
python -m pip install pydicom
You can change `python` with `python3` or the absolute address of the
executable. If you don't know the absolute address of the executable, you can
obtain it as follows:
In Bash prompt (terminal):
which python
(or `python3`). The output would be the absolute path, which you may utilise
like so:
/Users/xyz/bin/python -m pip install pydicom
Finally, if you want to find out the path to your executable from within
Python, you may do so like this:
from sys import executable
print(executable)
The output will be the absolute path to the current environment's interpreter.
|
Python pandas: Identify route/path given by two columns
Question: I have an input file (csv-file) with data which has duplicate entries in the
column `group` and might have duplicate entries in the column `size`.
A snippet with the data of just one group is given below. However, there are
several groups in the real data file. So this is just a shortened and
simplified example (`sample.csv`):
group,size,from,to
group32a4,0500,6sq2gp,m4qfce
group32a4,0800,oxlwtg,ru1u5r
group32a4,1200,rpziz0,oxlwtg
group32a4,1400,ru1u5r,fvvskj
group32a4,0500,m4qfce,60m2eq
group32a4,0050,fvvskj,6sq2gp
Since the data is coming from external software I am not able to change
anything concerning the data format or data layout. So I need to import the
data for further data handling and do the following tasks:
1. Keep one entry for each group, only. This entry must have the biggest value in the column `size`.
2. Get the path wich routes through the group and can be arranged from the columns `from` and `to`.
I decided to use `pandas` for data handling since the real data file is rather
complex and I wanted the capability of its performant features.
However, if there are any other (more suitable) tools or approaches using
other Python modules, those would be totally fine and not a problem at all.
In order to accomplish the first task I did:
import pandas as pd
# open file and read data
with open('sample.csv') as f:
data = pd.read_csv(f)
# sort descending by columns `group` and `size`
# sorting descending because `df.drop_duplicates()` keeps first element by default
df_sorted = data.sort_values(['group', 'size'], ascending=False)
# drop duplicates in order to keep first entry only
one_entry = df_sorted.drop_duplicates('group')
# print handled data
print(one_entry)
Which leads to the desired output:
group size from to
3 group32a4 1400 ru1u5r fvvskj
So, I need to accomplish the second task. Since all of the above data handling
was not done inplace I am able to access all stages of data throughout the
data handling procedure.
Unfortunately, I do not have any idea about how to do that. I have some
conceptual thoughts about how that could be done. First of all I need to
arrange the route out of each group subset. In the example given above that
would result in:
rpziz0 --> oxlwtg --> ru1u5r --> fvvskj --> 6sq2gp --> m4qfce --> 60m2eq
After that I need to extract the source and destination and summarize the route
like this:
rpziz0 --> 60m2eq
Which should result into this overall output:
group size from to
3 group32a4 1400 rpziz0 60m2eq
So the question I came up with is as follows:
How can I identify the route out of each subset, which is defined by each
`group` tag (using pandas' methods preferably)?
_Note: Using Python 3.4.3, Pandas 0.17.1_
Answer: You can use [`stack`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.stack.html) with
[`drop_duplicates`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.drop_duplicates.html) and finally
[`pivot`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.pivot.html). A second group was added for better
testing:
print df
group size from to
0 group32a4 500 6sq2gp m4qfce
1 group32a4 800 oxlwtg ru1u5r
2 group32a4 1200 rpziz0 oxlwtg
3 group32a4 1400 ru1u5r fvvskj
4 group32a4 500 m4qfce 60m2eq
5 group32a4 50 fvvskj 6sq2gp
6 group13a4 500 6sq2gp m4qfce
7 group13a4 800 oxlwtg ru1u5r
8 group13a4 1200 rpziz0 oxlwtg
9 group13a4 1400 ru1u5r fvvskj
10 group13a4 500 m4qfce 60m2eq
11 group13a4 50 fvvskj 6sq2gp
#set index and stack data - columns 'from' and 'to' to one column 'route'
df = df.set_index(['group', 'size']).stack().reset_index(name='route')
print df
group size level_2 route
0 group32a4 500 from 6sq2gp
1 group32a4 500 to m4qfce
2 group32a4 800 from oxlwtg
3 group32a4 800 to ru1u5r
4 group32a4 1200 from rpziz0
5 group32a4 1200 to oxlwtg
6 group32a4 1400 from ru1u5r
7 group32a4 1400 to fvvskj
8 group32a4 500 from m4qfce
9 group32a4 500 to 60m2eq
10 group32a4 50 from fvvskj
11 group32a4 50 to 6sq2gp
12 group13a4 500 from 6sq2gp
13 group13a4 500 to m4qfce
14 group13a4 800 from oxlwtg
15 group13a4 800 to ru1u5r
16 group13a4 1200 from rpziz0
17 group13a4 1200 to oxlwtg
18 group13a4 1400 from ru1u5r
19 group13a4 1400 to fvvskj
20 group13a4 500 from m4qfce
21 group13a4 500 to 60m2eq
22 group13a4 50 from fvvskj
23 group13a4 50 to 6sq2gp
def f(x):
    #set column size to max
    x['size'] = x['size'].max()
    return x.drop_duplicates('route', keep=False)
#apply custom function f
df = df.groupby('group').apply(f).reset_index(drop=True)
print df
group size level_2 route
0 group13a4 1400 from rpziz0
1 group13a4 1400 to 60m2eq
2 group32a4 1400 from rpziz0
3 group32a4 1400 to 60m2eq
#reshape data, remove column tmp
df = df.pivot(index='group', columns='level_2').reset_index()
df.columns = ['group','size','tmp','from', 'to']
df = df.drop('tmp', axis=1)
print df
group size from to
0 group13a4 1400 rpziz0 60m2eq
1 group32a4 1400 rpziz0 60m2eq
EDIT:
Similar, I think faster solution with filling DataFrame in
[`groupby`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.groupby.html) with
[`apply`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.GroupBy.apply.html) function `f` and
[`iat`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.iat.html),
[`iloc`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.iloc.html):
def f(x):
    #get max of column size
    m = x['size'].max()
    #remove all duplicates - only one value 'from' and one value 'to' remain
    x = x.drop_duplicates('route', keep=False)
    x['group'] = x.iat[0, 0]
    x['size'] = m
    x['from'] = x.iat[0, 3]
    x['to'] = x.iat[1, 3]
    #print x
    #return first row and columns group, size, from, to
    #print x.iloc[0, [0, 1, 4, 5]]
    return x.iloc[0, [0, 1, 4, 5]]
#apply custom function f
df = df.groupby('group').apply(f).reset_index(drop=True)
print df
group size from to
0 group13a4 1400 rpziz0 60m2eq
1 group32a4 1400 rpziz0 60m2eq
|
Strange Python behaviour with regex module
Question: I have been playing around with the [improved regex
module](https://pypi.python.org/pypi/regex) for Python by Matthew Barnett and
found a strange error (behaviour? bug?).
Consider the following code:
import regex as re
string = "liberty 123, equality 123, fraternity 123"
rx = r'\d+(?=,|$)'
results = re.findall(rx, string)
print (results)
When invoked from the command line on my Mac (`python regex.py`), I get the
error `AttributeError: 'module' object has no attribute 'findall'`, while when
I copy and paste the exact same code into the Python shell, it correctly outputs

['123', '123', '123']

Can somebody enlighten me please? Is this some obvious thing I am missing here?
Answer: You must not name your modules identically to existing (installed or standard-
library) modules. Rename your file `regex.py` to something else, like `my_regex.py`,
then delete the file `regex.pyc`, if it exists.
|
Valid JSON using Python
Question: Is this a valid JSON object?
{"Age": "2", "Name": "Rice, Master. Eugene", "Parch": "1", "Pclass": "3", "Ticket": "382652", "PassengerId": "17", "SibSp": "4", "Embarked": "Q", "Fare": "29.125", "Survived": "0", "Cabin": "", "Sex": "male"}
Do I need an EOF?
I have used the following to create the file:
import csv
import json
import sys

fieldnames = ["PassengerId","Survived","Pclass","Name","Sex","Age","SibSp","Parch","Ticket","Fare","Cabin","Embarked"]
csvfile = open('t1.csv', 'r')
jsonfile = open('file1.json', 'w')
reader = csv.DictReader(csvfile, fieldnames)
for row in reader:
    # if reader.line_num == 1:
    #     continue  # Skip the first line
    json.dump(row, jsonfile)
    jsonfile.write('\n')
print("Total No of Lines Written : " + str(reader.line_num))
Answer: Simple test via
[json.loads()](https://docs.python.org/3/library/json.html#json.loads):
import json
j = json.loads('{"Age": "2", "Sex": "male"}')
print j
or alternatively test it by loading it directly from the saved file using
[json.load()](https://docs.python.org/3/library/json.html#json.load):
import json

with open('file1.json', 'r') as f:
    j = json.load(f)
print j
... seems to be valid.
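One caveat: the writer above emits one JSON object per line (JSON Lines), so
`json.load()` on the whole file only succeeds while the file holds a single object.
Once several rows have been written, parse it line by line instead; a minimal sketch:

import json

# file1.json holds one JSON object per line
with open('file1.json', 'r') as f:
    records = [json.loads(line) for line in f if line.strip()]
print records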
|
File parsing using commas Python
Question: I am trying to write a program in python in which we have to add the numbers
from different categories and sub-categories. The program is about a farmer's
annual sale of produce from his farm. The text file we have to read from has 4
columns. The first column is the type of product, for example Vegetables, Fruits,
Condiments. The second column tells us the specific product we have, for example
Potatoes, Apples, Hot Sauce. The third column gives the sales in 2014 and the
fourth column gives the sales in 2015. In this program, we only have to calculate
the totals from the 2015 numbers. The 2014 numbers are present in the text file
but are irrelevant.
Here is what the text file looks like:
PRODUCT,CATEGORY,2014 Sales,2015 Sales
Vegetables,Potatoes,4455,5644
Vegetables,Tomatoes,5544,6547
Vegetables,Peas,987,1236
Vegetables,Carrots,7877,8766
Vegetables,Broccoli,5564,3498
Fruits,Apples,398,4233
Fruits,Grapes,1099,1234
Fruits,Pear,2342,3219
Fruits,Bananas,998,1235
Fruits,Peaches,1678,1875
Condiments,Peanut Butter,3500,3902
Condiments,Hot Sauce,1234,1560
Condiments,Jelly,346,544
Condiments,Spread,2334,5644
Condiments,Ketchup,3321,3655
Condiments,Olive Oil,3211,2344
What we are looking to do is to add the sales for 2015 by products and then
the total sales for everything in 2015.
The output should look something like this in the written text file:
> Total sales for Vegetables in 2015 : {Insert total number here}
>
> Total sales for Fruits in 2015 : {Insert total number here}
>
> Total sales for Condiments in 2015 : {Insert total number here}
>
> * * *
>
> Total sales for the farmer in 2015: {Insert total for all the products sold
> in 2015}
Along with that, it should also print the grand total on the Python run screen
in the IDE along with the text file:
> Total sales for the farmer in 2015: {Insert total for all the products sold
> in 2015}
Here is my code. It works, but it prints a weird first line in the output. Also, I
would rather not use lists. Is there any other way? Please no CSV module, as we are
directed to treat the data as a plain text file.
readFile = open("Products.txt", "r")
reportfile = open("report.txt", "w")
line = readFile.readline()
totalSum = 0
container = []
product = ()
sum = 0
for line in readFile:
    line = line.strip()
    line = line.split(",")
    if line[0] not in container:
        print(product, sum, file=reportfile)
        product = line[0]
        totalSum += int(line[3])
        sum = 0
        sum += int(line[3])
        container.append(product)
    elif product == line[0]:
        totalSum += int(line[3])
        sum += int(line[3])
print(totalSum, file=reportfile)
Answer: These kinds of tasks are perfect for Pandas:
import pandas
df = pandas.read_csv('Products.txt')
df = df.groupby('PRODUCT').sum()
df.ix['Total'] = df.sum()
df
[Screenshot of the resulting DataFrame](http://i.stack.imgur.com/93ajJ.png)
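If pandas is not an option, a minimal sketch using a plain dictionary (no csv module,
no lists), assuming the exact header line shown in the question:

totals = {}
with open("Products.txt") as readFile:
    next(readFile)  # skip the header line
    for line in readFile:
        fields = line.strip().split(",")
        totals[fields[0]] = totals.get(fields[0], 0) + int(fields[3])

with open("report.txt", "w") as reportfile:
    for product, subtotal in totals.items():
        print("Total sales for {} in 2015 : {}".format(product, subtotal), file=reportfile)
    grand = sum(totals.values())
    print("Total sales for the farmer in 2015: {}".format(grand), file=reportfile)

print("Total sales for the farmer in 2015: {}".format(grand))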
|
Count down error
Question: This code generates an error and I am not sure why, or how to resolve it:

File "/Users/johnz/Dropbox/PythonWorkspace/BumpersRev1/test3.py",
line 7, in countdown
    if self.remaining <= 0:
AttributeError: 'int' object has no attribute 'remaining'
from tkinter import *

# count down timer
def countdown(self, remaining = None):
    if remaining is not None:
        self.remaining = remaining
    if self.remaining <= 0:
        pass
    else:
        self.remaining = self.remaining - 1
        self.after(1000, self.countdown)

def main():
    # create a Tk window
    win1 = Tk()
    countdown(90)
    mainloop()

main()
Answer: You're trying to pass an integer into the function countdown. It should be
noted that `self` is conventional: it is not required to be named that, but it
is what we generally use. However, because this is not an instance method
(i.e. it is not part of a class), you have no reason to take the object itself
as a parameter. This can be done without the self entirely.
import functools
from tkinter import *

def countdown(remaining = None):
    if remaining is None or remaining <= 0:
        pass
    else:
        remaining -= 1
        # Requires a widget instance here....
        # widget.after(1000, functools.partial(countdown, remaining))
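A runnable sketch of that idea, assuming the Tk root window is the widget whose
`after()` drives the loop:

import functools
from tkinter import *

def countdown(widget, remaining):
    print(remaining)
    if remaining > 0:
        # re-schedule ourselves in one second with a decremented counter
        widget.after(1000, functools.partial(countdown, widget, remaining - 1))

win1 = Tk()
countdown(win1, 90)
mainloop()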
|
google storage python JSON client eternal 502 when uploading media to bucket
Question: This is my first time using Google Storage json api, and I am trying to do a
simple image file _UPLOAD_ to a bucket using the python client library. It
seems that I am correctly getting credentials (application default) because I
can _LIST_ the contents of the bucket when I provide the credentials to the
discovery.build() method, and then the same listing call fails when I do not
provide credentials. In terms of the ACL, I am maybe not 100% solid on how to
set the access controls in general, **BUT** I am pretty confident that that is
not the issue, because I have also tried to execute my script on a Compute
Engine VM after configuring it with Cloud Storage API READ and WRITE access in
the console.
Further, I am aware of the documentation's advice to retry promptly (with
'exponential back off') when you get a 502 Bad Gateway response, but I am
getting this error every single time I have tried for a couple hours now. So,
either the Google Storage service is broken in certain ways, or my client is
broken, right? Even further, I am able to push up the jpeg file on the command
line using `gsutil cp ...`, which makes me think I am broken and not Google,
and that the bucket is writable with my default credentials. Sound good?
Here's my very simple client script, which is jury-rigged from
[this](https://cloud.google.com/storage/docs/json_api/v1/json-api-python-samples)
official sample:
import json
from apiclient import discovery
from apiclient.http import MediaFileUpload
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)
image = MediaFileUpload('up.jpg', mimetype='image/jpeg')
req = service.objects().insert_media(bucket='somebucket', media_body=image)
resp = req.execute()
print(json.dumps(resp, indent=2))
And here is the trace I get:
Traceback (most recent call last):
File "upload.py", line 10, in <module>
resp = req.execute()
File "/Envs/py2.7/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Envs/py2.7/lib/python2.7/site-packages/googleapiclient/http.py", line 729, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 502 when requesting https://www.googleapis.com/upload/storage/v1/b/somebucket/o?uploadType=media&alt=media returned "Bad Gateway">
I have tried python 2.7 and python 3.5, so I don't think that is the issue
(docs told me google python client is functional in 3.5, but not as well
tested). Also, I have tried a 'resumable upload' variation on this, as per
[here](https://developers.google.com/api-client-
library/python/guide/media_upload), and I get the same, ever-faithful 502 Bad
Gateway. Is google broken? Please help.
Answer: You need to specify a destination name for your object. I believe that you
want the method `service.objects().insert()` instead of
`service.objects().insert_media()`. So the full line would be:
req = service.objects().insert(
    bucket='somebucket',
    name='nameOfObject',
    media_body=image)
There's [a complete
example](https://cloud.google.com/storage/docs/json_api/v1/objects/insert#examples)
on the objects.insert documentation page.
I'm not sure why you're getting a 502, though. That may be a bug.
|
Unable to access windows controls inside pywinauto's hwndwrapper (wrapper class)
Question: [Please see the screenshot here.](http://i.stack.imgur.com/BTJpy.png) I am
new to python and pywinauto. I am trying to set or get text for a TextBox (windows
control) inside pywinauto.controls.hwndwrapper.hwndwrapper; using SWAPY, I have the
class name of the wrapper class. How do I access controls inside the wrapper class
using the class name (like `Afx:633C0000:1008`) in pywinauto?
import pywinauto
import pywinauto.controls
from pywinauto.application import Application
app = Application().Connect(title=u'SAP', class_name='SAP_FRONTEND_SESSION')
sapfrontendsession = app.SAP
afxe = sapfrontendsession[u'Afx:633C0000:1008']
Answer: pywinauto provides a 2-level concept based on `WindowSpecification` and
wrappers. A window specification is just a description, a set of criteria to
search for the desired control (it may not exist when the `WindowSpecification` is
created). A concrete wrapper is created for a really existing control, if one is
found. In an IDLE console it looks like this:
>>> app.RowListSampleApplication
<pywinauto.application.WindowSpecification object at 0x0000000003859B38>
>>> app.RowListSampleApplication.WrapperObject()
<pywinauto.controls.win32_controls.DialogWrapper object at 0x0000000004ADF780>
Window specification can have no more than 2 levels:
`app.WindowName.ControlName`. It can be specified with more detailed search
criteria:
app.Window_(title=u'SAP', class_name_re='^Afx:.*$')
app.SAP.ChildWindow(class_name='Edit')
Possible `Window_/ChildWindow` arguments are the same as listed in
[find_windows](http://pywinauto.github.io/docs/code/pywinauto.findwindows.html?highlight=find_windows#pywinauto.findwindows.find_windows).
* * *
P.S. Great Python features can hide `WrapperObject()` method call in
production code so you need to call it for debugging purpose only. For example
these statements are equivalent (do the same):
app.WindowName.Edit.SetText(u'text')
app.WindowName.Edit.WrapperObject().SetText(u'text')
But the statements below return different objects:
app.WindowName.Edit # <WindowSpecification>
app.WindowName.Edit.WrapperObject() # <EditWrapper>
|
Compare files with a zip list and figure out if it is larger
Question: I want to compare the zip files of two folders and copy a file only if it is
larger: when a zip with the same base name already exists in the archive folder, copy
the new one only if its size is greater; when no zip with an equal name exists, copy
the file. Only the name is compared, not the date: e.g. Campobasso[CB]-Molise

Folder DirTemp ZIP:
    Campobasso[CB]-Molise__02-02-2016.zip

Folder DirArc ZIP:
    Foggia[FG]-Puglia__22-01-2016.zip
    Roma[RM]-Lazio__20-01-2016.zip

Folder DirArcScartati: the zips that are found but are smaller are put in this other
folder.

This is my code, but it only works partially; I am not able to copy the file (when it
does not exist yet) at the end of the check against the list.
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import os, glob, shutil

DirTemp = "/var/www/vhosts/anon_ftp/incoming/"
DirArc = "/var/www/vhosts/settings/BackupDTT/"
DirArcScartati = "/var/www/vhosts/settings/BackupDTT_scartati/"
ExtFile = ".zip"

def ControlFile():
    # Check for new zip files
    listnew = []
    #print "Starting copy"
    for name in glob.glob(DirTemp + "*" + ExtFile):
        listnew.append((name.replace(DirTemp, "").replace(ExtFile, "").split("__")[0], name))
    #print "Name: " + str(listnew)
    for oldname in glob.glob(DirArc + "*" + ExtFile):
        #print "Existing setting: " + oldname
        namesplit = oldname.replace(DirArc, "").replace(ExtFile, "").split("__")[0]
        for newname in listnew:
            #print "New name: " + str(newname[0])
            print namesplit
            if namesplit == newname[0]:
                if os.path.getsize(newname[1]) >= os.path.getsize(oldname):
                    print ("transfer file " + newname[1] + " >>> " + oldname)
                    shutil.copy2(newname[1], DirArc)
                    os.remove(oldname)
                    #os.remove(newname[1])
                    break
                elif os.path.getsize(newname[1]) <= os.path.getsize(oldname):
                    print ("Smaller file---\nNewFile: " + str(os.path.getsize(newname[1])) + " OldFile: " + str(os.path.getsize(oldname)))
                    shutil.copy2(newname[1], DirArcScartati)
                    #os.remove(newname[1])
                    break
        else:
            for newname in listnew:
                print ("New city, transferring the file: " + newname[1])
                shutil.copy2(newname[1], DirArc)
                #os.remove(newname[1])
                break

ControlFile()
Answer: The following approach might be a bit easier to follow:
import glob
import os
import shutil

def get_file_dictionary(folder):
    """ Return a dictionary of the zip files in the given folder, mapping each base name to a (full path, size) tuple """
    return {os.path.splitext(os.path.split(x)[1])[0].split('__')[0]: (x, os.path.getsize(x))
            for x in glob.glob(folder + '*.zip')}

DirTemp = "/var/www/vhosts/italysat.eu/anon_ftp/incoming/"
DirArc = "/var/www/vhosts/italysat.eu/settings.italysat.eu/BackupDTT/"
DirArcScartati = "/var/www/vhosts/italysat.eu/settings.italysat.eu/BackupDTT_scartati/"

incoming = get_file_dictionary(DirTemp)
existing = get_file_dictionary(DirArc)

for base_name_inc, (full_name_inc, size_inc) in incoming.items():
    try:
        full_name_exist, size_exist = existing[base_name_inc]
        if size_inc > size_exist:
            print "Transfer {} -> {}".format(full_name_inc, full_name_exist)
            os.remove(full_name_exist)
            shutil.copy2(full_name_inc, full_name_exist)
        else:
            discard = os.path.join(DirArcScartati, os.path.split(full_name_inc)[1])
            print "Discard {} -> {}".format(full_name_inc, discard)
            shutil.copy2(full_name_inc, discard)
    except KeyError, e:
        new_entry = os.path.join(DirArc, os.path.split(full_name_inc)[1])
        print "Transfer new {} -> {}".format(full_name_inc, new_entry)
        shutil.copy2(full_name_inc, new_entry)  # copy the new entry into the archive
        os.remove(full_name_inc)
It first creates two dictionaries containing all of the entries in the incoming
folder and the archive folder. It then iterates over the incoming dictionary
to see if an entry is in the archive. If it is, it compares the two sizes; if
not, it copies the new entry.
The dictionaries are keyed by your base name (without the date), and the values
are the full path name and the file size.
|
Using 2 dictionaries in python
Question: I'm trying to create a program where, if the user types his first name and
his middle name initial into the input, it will add the two dictionary numbers
together.
**For the first name values I want:**
d['a'] = 0
d['b'] = 60
d['c'] = 100
d['d'] = 160
d['e'] = 200
d['f'] = 240
d['g'] = 280
d['h'] = 320
d['i'] = 400
d['j'] = 420
d['k'] = 500
d['l'] = 520
d['m'] = 540
d['n'] = 620
d['o'] = 640
d['p'] = 660
d['q'] = 700
d['r'] = 720
d['s'] = 780
d['t'] = 800
d['u'] = 840
d['v'] = 860
d['w'] = 880
d['x'] = 940
d['y'] = 960
d['z'] = 980
**Middle name initial values I want:**
d['a'] = 1
d['b'] = 2
d['c'] = 3
d['d'] = 4
d['e'] = 5
d['f'] = 6
d['g'] = 7
d['h'] = 8
d['i'] = 9
d['j'] = 10
d['k'] = 11
d['l'] = 12
d['m'] = 13
d['n'] = 14
d['o'] = 14
d['p'] = 15
d['q'] = 15
d['r'] = 16
d['s'] = 17
d['t'] = 18
d['u'] = 18
d['v'] = 18
d['w'] = 19
d['x'] = 19
d['y'] = 19
d['z'] = 19
**Code so far:**
first_name = raw_input("what is your first name?: ")
middle_initial = raw_input("What is your middle initial?: ")
#First Name Initial Values
d = {}
d['a'] = 0
d['b'] = 60
d['c'] = 100
d['d'] = 160
d['e'] = 200
d['f'] = 240
d['g'] = 280
d['h'] = 320
d['i'] = 400
d['j'] = 420
d['k'] = 500
d['l'] = 520
d['m'] = 540
d['n'] = 620
d['o'] = 640
d['p'] = 660
d['q'] = 700
d['r'] = 720
d['s'] = 780
d['t'] = 800
d['u'] = 840
d['v'] = 860
d['w'] = 880
d['x'] = 940
d['y'] = 960
d['z'] = 980
lower = first_name.lower()
first_initial = lower[0]
if first_initial in d:
    print d[first_initial]
**Example:**
For example if I type Josh for the first name and J for the middle name
initial the output should be 430.
If you are confused about how I got 430: I added 420 and 10 together, because 420 is
the value of 'j' for the first initial of the first name 'josh', and 10 is the
value of 'j' for the middle name initial.
Answer: I'd suggest to use (`dict` of) `zip` for two lists:
from string import lowercase as abc
d1 = dict( zip(abc, [0, 60, 100, 160, ...]) )
d2 = dict( zip(abc, [1,2,3,4,5,...]) )
...
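A fuller sketch of that idea, filling the lists with the exact values from the
question (Python 2, matching the `raw_input` calls above):

from string import lowercase as abc

first_vals = [0, 60, 100, 160, 200, 240, 280, 320, 400, 420, 500, 520, 540,
              620, 640, 660, 700, 720, 780, 800, 840, 860, 880, 940, 960, 980]
middle_vals = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 14, 15, 15, 16,
               17, 18, 18, 18, 19, 19, 19, 19]

d1 = dict(zip(abc, first_vals))
d2 = dict(zip(abc, middle_vals))

first_name = raw_input("what is your first name?: ")
middle_initial = raw_input("What is your middle initial?: ")

# e.g. "Josh" and "J" -> 420 + 10 = 430
print d1[first_name.lower()[0]] + d2[middle_initial.lower()]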
|
How to prevent Panda's itertuple method from adding extra decimals to records from a csv file? -Python
Question: I have a Python function that reads through a csv file and returns each row in
the csv in a tuple.
I'm using Python's Pandas library to achieve this.
The problem is that after Pandas returns the tuple, it appends an extra decimal
part to records that look like integers, e.g. `1001 becomes 1001.0`.
Sample csv file:
key1, key2
a, '1001'
b, '2002'
The code is something like this:
import pandas as pd

file_content_df = pd.read_csv(path_to_csv_file)
for each_row in file_content_df.itertuples():
    row_item1, row_item2 = each_row
    print row_item1  # Prints 'a'
    print row_item2  # Prints 1001.0 (Desired result is 1001)

Is there a way to control this behavior, please?
Answer: First you can check [`dtypes`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.dtypes.html) to see whether column `key2` is
`int`, `float` or `object`, and then you can access the second item as `each_row[1]`
and the third item as `each_row[2]`:
print df
key1 key2
0 a 1001
1 b 2002
print df.dtypes
key1 object
key2 int64
dtype: object
for each_row in df.itertuples():
    print each_row
    print each_row[1]
    print each_row[2]
    print '******'
Pandas(Index=0, key1='a', key2=1001)
a
1001
******
Pandas(Index=1, key1='b', key2=2002)
b
2002
******
If `dtypes` of column `key2` is `object` and `df` is like:
print df
key1 key2
0 a '1001'
1 b '2002'
print df.dtypes
key1 object
key2 object
dtype: object
#remove ' and cast to integer
df['key2'] = df['key2'].str.strip("'").astype(int)
print df.dtypes
key1 object
key2 int32
dtype: object
for each_row in df.itertuples():
    print each_row
    print each_row[1]
    print each_row[2]
    print '******'
Pandas(Index=0, key1='a', key2=1001)
a
1001
******
Pandas(Index=1, key1='b', key2=2002)
b
2002
******
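As an aside, the usual reason an integer column prints as 1001.0 is a missing value
somewhere in that column: NaN cannot be stored in an int64 column, so pandas upcasts
the whole column to float64. A quick demonstration:

import pandas as pd

# one missing value forces the whole column to float64
df = pd.DataFrame({'key2': [1001, None, 2002]})
print df['key2'].dtype  # float64
print df['key2'][0]     # 1001.0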
|
2D array to represent a huge python dict, COOrdinate like solution to save memory
Question: I am trying to update a dict_with_tuples_key with the data from an array:
myarray = np.array([[0, 0],  # 0, 1
                    [0, 1],
                    [1, 1],  # 1, 2
                    [1, 2],  # 1, 3
                    [2, 2],
                    [1, 3]]
                   )  # a lot of this with shape~(10e6, 2)

dict_with_tuples_key = {(0, 1): 1,
                        (3, 7): 1}  # ~10e6 keys
Using an array to store the dict values (thanks to @MSeifert), we get this:

import numpy as np
from numba import njit

def convert_dict_to_darray(dict_with_tuples_key, myarray):
    idx_max_array = np.max(myarray, axis=0)
    idx_max_dict = np.max(dict_with_tuples_key.keys(), axis=0)
    lens = np.max([list(idx_max_array), list(idx_max_dict)], axis=0)
    xlen, ylen = lens[0] + 1, lens[1] + 1
    darray = np.zeros((xlen, ylen))  # Empty array to hold all indexes in myarray
    for key, value in dict_with_tuples_key.items():
        darray[key] = value
    return darray

@njit
def update_darray(darray, myarray):
    elements = myarray.shape[0]
    for i in range(elements):
        darray[myarray[i][0]][myarray[i][1]] += 1
    return darray

def darray_to_dict(darray):
    updated_dict = {}
    keys = zip(*map(list, np.nonzero(darray)))
    for x, y in keys:
        updated_dict[(x, y)] = darray[x, y]
    return updated_dict

darray = convert_dict_to_darray(dict_with_tuples_key, myarray)
darray = update_darray(darray, myarray)
I get the exact result needed:
# print darray_to_dict(darray)
# {(0, 1): 2.0,
# (0, 0): 1.0,
# (1, 1): 1.0,
# (2, 2): 1.0,
# (1, 2): 1.0,
# (1, 3): 1.0,
# (3, 7): 1.0, }
For small matrices it works quite well, and @njit works on it so it's very fast,
but... the creation of the huge empty `darray = np.zeros((xlen, ylen))` **does not
fit in memory**. How can we avoid allocating a very sparse array, and only
store the non-null values, like sparse matrices in COOrdinate format do?
Answer: Use `dok_matrix` from `scipy`; a `dok_matrix` is a Dictionary Of Keys based
sparse matrix. It allows you to build sparse matrices incrementally, and it
won't allocate the huge empty `darray = np.zeros((xlen, ylen))` that does not fit
into your computer's memory.
The only change to make is to import the right module from scipy and to change
the definition of `darray` in your function `convert_dict_to_darray`.
It will look like this:
from scipy.sparse import dok_matrix

def convert_dict_to_darray(dict_with_tuples_key, myarray):
    idx_max_array = np.max(myarray, axis=0)
    idx_max_dict = np.max(dict_with_tuples_key.keys(), axis=0)
    lens = np.max([list(idx_max_array), list(idx_max_dict)], axis=0)
    xlen, ylen = lens[0] + 1, lens[1] + 1
    darray = dok_matrix((xlen, ylen))
    for key, value in dict_with_tuples_key.items():
        darray[key[0], key[1]] = value
    return darray
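One caveat worth flagging as an assumption about numba, not something from the
original answer: `@njit` (nopython mode) does not understand scipy sparse types, so
`update_darray` must run as plain Python once `darray` is a `dok_matrix`. A minimal
sketch, which also shows converting to a true COOrdinate matrix afterwards:

def update_darray(darray, myarray):
    # plain-Python loop: numba cannot compile dok_matrix indexing
    for i in range(myarray.shape[0]):
        darray[myarray[i][0], myarray[i][1]] += 1
    return darray

coo = darray.tocoo()  # COOrdinate format: coo.row, coo.col, coo.data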
|
Python: Create set by removing duplicates in text processing?
Question: Let's say I have a text file with two columns, like below:
A "
A "
A l
A "
C r
C "
C l
D a
D "
D "
D "
D d
R "
R "
R "
R "
S "
S "
S o
D g
D "
D "
D "
D j
A "
A "
A z
I would like to retrieve the information like below:

list1= {A:l}, {C:r,l}, {D:a,d}, {S:o}
final_list= {A:l}, {C:r,l}, {D:a,d}, R{}, {S:o}

I understand that I have to read the text file and use `line.strip().split()`,
but after that I don't know how to proceed.
Answer:
import collections

list1 = collections.defaultdict(set)
final_list = collections.defaultdict(set)

for line in filetext:  ## assuming you've opened it and read it in
    key, value = line.strip().split()
    final_list[key].add(value)
    if value != '"':
        list1[key].add(value)
This is slightly different in that `final_list` will have the quote placeholder as
an element; this doesn't match what you said, so let's alter it a little:
import collections

list1 = collections.defaultdict(set)
final_list = {}

for line in filetext:  ## assuming you've opened it and read it in
    key, value = line.strip().split()
    if key not in final_list:
        final_list[key] = set()
    if value != '"':
        list1[key].add(value)

final_list.update(list1)
This should give you what you want - existence with empty-sets for things like
`R`.
|
python manipulating files, logic error in script?
Question: I have a file that looks like this:
**Chr-coordinate-coverage**
chr1 236968289 2
chr1 236968318 2
chr1 236968320 2
chr1 236968374 2
chr1 237005709 2
chr14 22086843 2
chr14 22086846 2
chr14 22086849 2
chr14 22086851 4
chr2 5078129 2
chr2 5341758 2
chr2 5342443 2
I want to manipulate it to obtain:
**chr-start-end-average coverage-distance**
chr1 236968289 236968374 2 85
chr14 22086843 22086851 2.5 8
chr2 5078129 5078129 2 0
chr2 5341758 5342443 2 685
I want the following: if chr is different from the previous chr **or** the difference
between coordinates is bigger than 1000, print the output as shown, with the chr,
the starting coordinate, the ending coordinate, the average coverage and the
distance between start and end.
To do so, I wrote the following code:
cov = open("coverage.txt")
oldchr = "chr55"  # dummy starting data
oldcoordinate = 1
sumcoverage = 0
startcoordinate = 0
try:
    while True:
        line = next(cov).split("\t", 2)
        newchr = line[0]
        newcoordinate = int(line[1])  # read information from file
        newcoverage = int(line[2].strip())
        if oldchr != newchr or newcoordinate - oldcoordinate > 1000:
            distance = oldcoordinate - startcoordinate
            averagecoverage = sumcoverage / distance
            merge = oldchr + '\t' + str(startcoordinate) + '\t' + str(oldcoordinate) + '\t' + str(averagecoverage) + '\t' + str(distance)
            print merge
            startcoordinate = newcoordinate
            sumcoverage = 0
        oldchr = newchr
        oldcoordinate = newcoordinate  # replace old with new chr and coordinates
        sumcoverage = sumcoverage + newcoverage
except StopIteration:
    print ""
I am not able to understand why it doesn't work properly. The error I get is
that the division to obtain the "average coverage" tries to divide by 0, so in
many cases the "distance" (**distance=oldcoordinate-startcoordinate**)
is equal to 0. This should not happen; in the input file it is never the case
that two lines have the same coordinate. I am not able to see where the error
is. I hope someone can help me, thank you in advance.
Answer: You could use Python's
[`groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby)
function to group up your entries according to the `chr` column. The
[`csv`](https://docs.python.org/2/library/csv.html#module-csv) library also
makes it easier to process the file:
from itertools import groupby
import csv

def display_block(block):
    average_coverage = sum(x[2] for x in block) / float(len(block))
    print block[0][0], "\t", block[0][1], "\t", block[-1][1], "\t", average_coverage, "\t", block[-1][1] - block[0][1]

with open('coverage.txt', 'rb') as f_coverage:
    for chr, entries in groupby(csv.reader(f_coverage, delimiter='\t'), lambda x: x[0]):
        entries = [(e[0], int(e[1]), int(e[2])) for e in entries]
        block = []
        ientries = iter(entries)
        block.append(next(ientries))
        for chr, coord, coverage in ientries:
            if coord - block[-1][1] > 1000:
                display_block(block)
                block = []
            block.append([chr, coord, coverage])
        if len(block):
            display_block(block)
This would display the following output:
chr1 236968289 236968374 2.0 85
chr1 237005709 237005709 2.0 0
chr14 22086843 22086851 2.5 8
chr2 5078129 5078129 2.0 0
chr2 5341758 5342443 2.0 685
Each iteration of the `for` loop gives you a current `chr` and all matching
rows for that `chr`. The script goes through each row and converts the `coord`
and `coverage` to integers. It then turns the list into an iterator. The
first matching row is stored in `block` and then any remaining rows are
iterated over. Each time the distance is greater than `1000` the block is
displayed and restarted. Any remaining entries at the end are then also
displayed.
Tested using Python 2.7.6
|
python error run from cmd
Question: For my coursework I am doing a booking system which uses some tree views. The
problem is that when run from cmd, double-clicking on a row in the tree view
does not populate the boxes below it; however, if I run it from IDLE it does.
Below is where I have made my class and two defs: create_GUI (builds the
first part of the GUI, before the double click) and OnDoubleClick (builds the
second part of the GUI).
from tkinter import *
import os
import datetime
import sqlite3
from tkinter.ttk import Combobox,Treeview,Scrollbar
import tkinter as tk
import Utilities
class Application(Frame):
""" Binary to Decimal """
def __init__(self, master):
""" Initialize the frame. """
super(Application, self).__init__(master)
self.grid()
self.create_GUI()
def Quit(self):
self.master.destroy()
def create_GUI(self):
frame1 = tk.LabelFrame(root, text="frame1", width=300, height=130, bd=5)
frame2 = tk.LabelFrame(root, text="frame2", width=300, height=130, bd=5)
frame1.grid(row=0, column=0, columnspan=3, padx=8)
frame2.grid(row=1, column=0, columnspan=3, padx=8)
self.title_lbl = Label(frame1, text = "Students")
self.title_lbl.grid(row = 0, column = 2)
self.fn_lbl = Label(frame1, text = "First Name:")
self.fn_lbl.grid(row = 1 , column = 1)
self.fn_txt = Entry(frame1)
self.fn_txt.grid(row = 1, column = 2)
self.ln_lbl =Label(frame1, text = "Last Name:")
self.ln_lbl.grid(row = 2, column = 1)
self.ln_txt = Entry(frame1)
self.ln_txt.grid(row = 2, column = 2)
self.q_btn = Button(frame1, text = "Back",padx=80,pady=10, command = lambda: self.Quit())
self.q_btn.grid(row = 3, column = 0)
self.s_btn = Button(frame1, text = "search",padx=80,pady=10, command = lambda: self.search())
self.s_btn.grid(row = 3,column = 3)
self.tree = Treeview(frame2,height = 6)
self.tree["columns"] = ("StudentID","First Name","Last Name")#,"House Number", "Street Name", "Town Or City Name","PostCode","MobilePhoneNumber")
self.tree.column("StudentID",width = 100)
self.tree.column("First Name",width = 100)
self.tree.column("Last Name", width = 100)
## self.tree.column("House Number", width = 60)
## self.tree.column("Street Name", width = 60)
## self.tree.column("Town Or City Name", width = 60)
## self.tree.column("PostCode", width = 60)
## self.tree.column("MobilePhoneNumber", width = 60)
self.tree.heading("StudentID",text="StudentID")
self.tree.heading("First Name",text="First Name")
self.tree.heading("Last Name",text="Last Name")
## self.tree.heading("House Number",text="House Number")
## self.tree.heading("Street Name",text="Street Name")
## self.tree.heading("Town Or City Name",text="Town Or City Name")
## self.tree.heading("PostCode",text="PostCode")
## self.tree.heading("MobilePhoneNumber",text="MobilePhoneNumber")
self.tree["show"] = "headings"
yscrollbar = Scrollbar(frame2, orient='vertical', command=self.tree.yview)
xscrollbar = Scrollbar(frame2, orient='horizontal', command=self.tree.xview)
self.tree.configure(yscroll=yscrollbar.set, xscroll=xscrollbar.set)
yscrollbar.grid(row=1, column=5, padx=2, pady=2, sticky=NS)
self.tree.grid(row=1,column=0,columnspan =5, padx=2,pady=2,sticky =NSEW)
self.tree.bind("<Double-1>",lambda event :self.OnDoubleClick(event))
def OnDoubleClick(self, event):
frame3 = tk.LabelFrame(root, text="frame1", width=300, height=130, bd=5)
frame3.grid(row=2, column=0, columnspan=3, padx=8)
self.message=StringVar()
self.message.set("")
self.lblupdate = Label(frame3, textvariable = self.message).grid(row=0,column=0,sticky=W)
curItem = self.tree.focus()
contents =(self.tree.item(curItem))
StudentDetails = contents['values']
print(StudentDetails)
self.tStudentID=StringVar()
self.tFirstName = StringVar()
self.tLastName = StringVar()
self.tHouseNumber = StringVar()
self.tStreetName = StringVar()
self.tTownOrCityName = StringVar()
self.tPostCode = StringVar()
self.tEmail = StringVar()
self.tMobilePhoneNumber = StringVar()
self.tStudentID.set(StudentDetails[0])
self.tFirstName.set(StudentDetails[1])
self.tLastName.set(StudentDetails[2])
self.tHouseNumber.set(StudentDetails[3])
self.tStreetName.set(StudentDetails[4])
self.tTownOrCityName.set(StudentDetails[5])
self.tPostCode.set(StudentDetails[6])
self.tEmail.set(StudentDetails[7])
self.tMobilePhoneNumber.set(StudentDetails[8])
self.inst_lbl0 = Label(frame3, text = "Student ID").grid(row=5,column=0,sticky=W)
self.StudentID = Label(frame3, textvariable=self.tStudentID).grid(row =5,column=1,stick=W)
self.inst_lbl1 = Label(frame3, text = "First Name").grid(row=6,column=0,sticky=W)
self.NFirstName = Entry(frame3, textvariable=self.tFirstName).grid(row =6,column=1,stick=W)
self.inst_lbl2 = Label(frame3, text = "Last Name").grid(row=7,column=0,sticky=W)
self.NLastName = Entry(frame3, textvariable=self.tLastName).grid(row =7,column=1,stick=W)
self.inst_lbl3 = Label(frame3, text = "House Number").grid(row=8,column=0,sticky=W)
self.HouseNumber = Entry(frame3,textvariable=self.tHouseNumber).grid(row=8,column=1,sticky=W)
self.inst_lbl4 = Label(frame3, text = "Street Name").grid(row=9,column=0,sticky=W)
self.StreetName =Entry(frame3,textvariable=self.tStreetName).grid(row=9,column=1,sticky=W)
self.inst_lbl5 = Label(frame3, text = "Town or City Name").grid(row=10,column=0,sticky=W)
self.TownOrCityName =Entry(frame3,textvariable=self.tTownOrCityName).grid(row=10,column=1,sticky=W)
self.inst_lbl6 = Label(frame3, text = "Postcode").grid(row=11,column=0,sticky=W)
self.PostCode = Entry(frame3,textvariable=self.tPostCode).grid(row=11,column=1,sticky=W)
self.inst_lbl7 = Label(frame3, text = "Email").grid(row=12,column=0,sticky=W)
self.Email =Entry(frame3,textvariable=self.tEmail).grid(row=12,column=1,sticky=W)
self.inst_lbl8 = Label(frame3, text = "Mobile phonenumber").grid(row=13,column=0,sticky=W)
self.MobilePhoneNumber =Entry(frame3,textvariable=self.tMobilePhoneNumber).grid(row=13,column=1,sticky=W)
self.btnSaveChanges = Button(frame3, text = "save changes",padx=80,pady=10,command = lambda:self.SaveChanges()).grid(row=14,column=0,sticky=W)
self.btnSaveChanges = Button(frame3, text = "delete record",padx=80,pady=10,command = lambda:self.DeleteRecord()).grid(row=14,column=1,sticky=W)
def search(self):
FirstName = self.fn_txt.get()
LastName = self.ln_txt.get()
with sqlite3.connect("GuitarLessons.db") as db:
cursor = db.cursor()
cursor.row_factory = sqlite3.Row
sql = "select StudentID,FirstName,LastName,HouseNumber,StreetName,TownOrCityName,PostCode,Email,MobilePhoneNumber"\
" from tblStudents"\
" where FirstName like ?"\
" and LastName like ?"
cursor.execute(sql,("%"+FirstName+"%","%"+LastName+"%",))
StudentList = cursor.fetchall()
print(StudentList)
self.loadStudents(StudentList)
def loadStudents(self,StudentList):
for i in self.tree.get_children():
self.tree.delete(i)
for student in StudentList:
self.tree.insert("" , 0,values=(student[0],student[1],student[2],student[3],student[4],student[5],student[6],student[7],student[8]))
def SaveChanges(self):
valid = True
self.message.set("")
NFirstName = self.tFirstName.get()
NLastName = self.tLastName.get()
NHouseNumber = self.tHouseNumber.get()
NStreetName = self.tStreetName.get()
NTownOrCityName = self.tTownOrCityName.get()
NPostCode = self.tPostCode.get()
NEmail = self.tEmail.get()
NMobilePhoneNumber = self.tMobilePhoneNumber.get()
StudentID = self.tStudentID.get()
if NFirstName == "" or NLastName == "" or NEmail == "" or NMobilePhoneNumber == "":
valid = False
self.message.set('missing details,first name,last name,phone number, email are all needed')
if not Utilities.is_phone_number(NMobilePhoneNumber ):
valid = False
self.message.set('invalid mobile phone number')
if not Utilities.is_postcode(NPostCode):
valid = False
self.message.set('invalid postcode')
if not Utilities.is_email(NEmail):
valid = False
self.message.set('invalid email')
if NHouseNumber != "":
if int(NHouseNumber) < 0:
self.message.set('invalid house number')
if valid == True:
with sqlite3.connect("GuitarLessons.db") as db:
cursor = db.cursor()
sql = "update tblStudents set FirstName =?,LastName=?,HouseNumber=?,StreetName=?,TownOrCityName=?,PostCode=?,Email=?,MobilePhoneNumber=? where StudentID=?"
cursor.execute(sql,(NFirstName,NLastName,NHouseNumber,NStreetName,NTownOrCityName,NPostCode,NEmail,NMobilePhoneNumber,StudentID))
db.commit()
self.message.set("student details updated")
def DeleteRecord(self):
StudentID = self.tStudentID.get()
#StudentID = int(StudentID)
with sqlite3.connect("GuitarLessons.db") as db:
cursor = db.cursor()
sql = "delete from tblStudents where StudentID = ?"
cursor.execute(sql,(StudentID))
db.commit()
self.tlabeupdate.set("student details deleted")
root = Tk()
root.title("booking system")
root.geometry("800x800")
root.configure(bg="white")
app = Application(root)
root.mainloop()
EDIT: while copying and pasting the rest of my code that I forgot to put
in the original post, I found that it only stops working if I open it through
the student menu (a separate piece of code), but it works if I don't go through
the menu.
Answer: You must call `mainloop()` on the root window so that your program can process
events.
|
import graph tool doesn't work on iPython
Question: I installed graph-tool on ubuntu. When trying to import it in iPython I get an
error:
ImportError: No module named graph_tool.all
As I read in other posts it might be possible that I used a different version
of python for installing graph-tool than the system version I'm using. My
question is now: how do I check which version of Python graph-tool is installed
with, and how do I change this in order to import it? Thanks for any advice!
Answer: If you install using `pip` you can check
pip -V
result (for example)
pip 8.0.2 from /usr/local/lib/python3.5/dist-packages (python 3.5)
You should have `pip`, `pip2`, `pip2.7`, `pip3`, `pip3.4`, etc. to install
with different Python.
(in bash write `pip` and press `tab` twice to see all programs started with
`pip`)
|
Python Nosetest multi-processing enable and disable at Class/Package level
Question: So I have a directory with sub-directories of acceptance tests. Most of my tests
have no dependencies on each other, except for one suite. Is there a way I can
tell nose to execute the tests sequentially when it reaches this class, and
then enable multi-processing again once it hits the next class? This has nothing
to do with fixtures; the tests in this suite simply can't run concurrently.
They exercise APIs which affect other tests running at the same time.
Thanks in advance.
Answer: I would use nose
[attribute](http://nose.readthedocs.org/en/latest/plugins/attrib.html) plugin
to decorate tests that require disabling multiprocessing explicitly and run
two nose commands: one with multiprocessing enabled, excluding sensitive
tests, and one with multiprocessing disabled, including only sensitive tests.
You would have to rely on your CI framework to combine the test results. Something
like:
from unittest import TestCase
from nose.plugins.attrib import attr
@attr('sequential')
class MySequentialTestCase(TestCase):
def test_in_seq_1(self):
pass
def test_in_seq_2(self):
pass
class MyMultiprocessingTestCase(TestCase):
def test_in_parallel_1(self):
pass
def test_in_parallel_2(self):
pass
And run it like:
> nosetests -a '!sequential' --processes=10
test_in_parallel_1 (ms_test.MyMultiprocessingTestCase) ... ok
test_in_parallel_2 (ms_test.MyMultiprocessingTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.071s
OK
> nosetests -a sequential
test_in_seq_1 (ms_test.MySequentialTestCase) ... ok
test_in_seq_2 (ms_test.MySequentialTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
|
Weird EOFError with python and paramiko
Question: My code:
#!/usr/bin/env python
# encoding: utf-8
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.X.X.X',username='user',password='password')
stdin, stdout, stderr=ssh.exec_command("get system status")
type(stdin)
stdout.readlines()
Quite simple, I thought, but running it throws a traceback:
Traceback (most recent call last):
File "/Users/adieball/Dropbox/Multiverse/Programming/workspace/FortiNet/src/runCommand.py", line 8, in <module>
ssh.connect('10.X.X.X',username='user',password='password')
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/paramiko/client.py", line 325, in connect
t.start_client()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/paramiko/transport.py", line 492, in start_client
raise e
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/paramiko/transport.py", line 1726, in run
ptype, m = self.packetizer.read_message()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/paramiko/packet.py", line 386, in read_message
header = self.read_all(self.__block_size_in, check_rekey=True)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/paramiko/packet.py", line 251, in read_all
raise EOFError()
EOFError
I'm a bit confused here and maybe I'm blind, but I can't find the problem.
Thanks.
Obviously: connecting via "normal" ssh and running this command works
perfectly fine :-)
I changed to python 2.7 (the default on Mac) and removed the 3.5 installation
completely. I now get a different error (still EOFError though):
Traceback (most recent call last):
File "/Users/adieball/Dropbox/Multiverse/Programming/workspace/FortiNet/src/runCommand.py", line 8, in <module>
ssh.connect('10.X.X.X,username='user',password='password')
File "build/bdist.macosx-10.11-intel/egg/paramiko/client.py", line 325, in connect
File "build/bdist.macosx-10.11-intel/egg/paramiko/transport.py", line 492, in start_client
EOFError
As said before, I can connect to the box perfectly fine with "normal" ssh. I
also tested Python's connectivity with:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(0.5)
try:
s.connect(('10.2.2.254',22))
except Exception, e:
print 'connection failed'
else:
print "success"
s.close()
and it worked (i.e. printed "success"), but for whatever reason, paramiko
seems to be unable to connect. I tried my code step by step in a python shell
to see when the error occurs and it happens right after
ssh.connect('10.X.X.X',username='user',password='password')
Answer: Hi, I have faced the same error while writing a script to access a FortiGate
remotely. I can see that you are trying this on a FortiGate too.
I changed the config on the FortiGate under "config system global" and it
worked (I don't know which change actually did it; it needs some trial and
error to find out):
config system global
- set admin-server-cert "Fortinet_Factory"
- set admin-sport 2222
- set admin-ssh-grace-time 20
- set admin-ssh-v1 enable
- set auth-cert "Fortinet_Factory"
- set dh-params 1024
- set fgd-alert-subscription advisory latest-threat
- set gui-theme red
- set gui-wireless-opensecurity enable
- set hostname "FortiGate-VM64"
- set ssh-cbc-cipher disable
- set ssh-hmac-md5 disable
- set strong-crypto disable
- set timezone 04
end
Please let me know if you find out which one was essential.
|
ssh + here-document syntax with Python
Question: I'm trying to run a set of commands through ssh from a Python script. I came
upon the `here-document` concept and thought: cool, let me implement something
like this:
command = ( ( 'ssh user@host /usr/bin/bash <<EOF\n'
+ 'cd %s \n'
+ 'qsub %s\n'
+ 'EOF' ) % (test_dir, jobfile) )
try:
p = subprocess.Popen( command.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT )
except :
print ('from subprocess.Popen( %s )' % command.split() )
raise Exception
#endtry
Unfortunately, here is what I get:
bash: warning: here-document at line 0 delimited by end-of-file (wanted `EOF')
Not sure how I can code up that end-of-file statement (I'm guessing the
newline chars get in the way here?)
I've done a search on the website but there seem to be no Python examples of
this sort...
Answer: Here is a minimal working example; the key is that after `<< EOF` the remaining
string should not be split. Note that `command.split()` is only called once.
import subprocess
# My bash is at /user/local/bin/bash, your mileage may vary.
command = 'ssh user@host /usr/local/bin/bash'
heredoc = ('<< EOF \n'
'cd Downloads \n'
'touch test.txt \n'
'EOF')
command = command.split()
command.append(heredoc)
print command
try:
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception as e:
print e
Verify by checking that the created file `test.txt` shows up in the Downloads
directory on the host that you ssh'ed into.
Kind regards,
Filip
|
I was wondering how I would implement the vigenere cipher in python
Question: I was tasked to create a Caesar cipher in the past and now I am trying to
implement the Vigenère cipher in Python. I want to know how I would go about
doing this. My basic idea is to take input from the user as the 'plaintext',
keep the alphabet in its own variable, combine them, and use the index of each
character in the alphabet with a line of code such as:
    cipher += alphabet[(alphabet.index(c)+key) % len(alphabet)]
This perhaps may be wrong. If there's anything anyone can help with, that would
be appreciated.
Answer: The following comes from [Rosetta
Code](http://rosettacode.org/wiki/Vigen%C3%A8re_cipher#Python)'s web site:
from itertools import starmap, cycle
def encrypt(message, key):
# convert to uppercase.
# strip out non-alpha characters.
message = filter(lambda _: _.isalpha(), message.upper())
# single letter encrpytion.
def enc(c,k): return chr(((ord(k) + ord(c)) % 26) + ord('A'))
return "".join(starmap(enc, zip(message, cycle(key))))
def decrypt(message, key):
# single letter decryption.
def dec(c,k): return chr(((ord(c) - ord(k)) % 26) + ord('A'))
return "".join(starmap(dec, zip(message, cycle(key))))
An example showing how to use the code is included:
text = "Beware the Jabberwock, my son! The jaws that bite, the claws that catch!"
key = "VIGENERECIPHER"
encr = encrypt(text, key)
decr = decrypt(encr, key)
print text
print encr
print decr
Finally, we can see what output of running the code should be:
Beware the Jabberwock, my son! The jaws that bite, the claws that catch!
WMCEEIKLGRPIFVMEUGXQPWQVIOIAVEYXUEKFKBTALVXTGAFXYEVKPAGY
BEWARETHEJABBERWOCKMYSONTHEJAWSTHATBITETHECLAWSTHATCATCH
|
django celery rabbitmq issue: "WARNING/MainProcess] Received and deleted unknown message. Wrong destination"
Question: My settings.py
CELERY_ACCEPT_CONTENT = ['json', 'msgpack', 'yaml', 'pickle', 'application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'djcelery.backends.cache:CacheBackend'
celery.py code
from __future__ import absolute_import
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'webapp.settings')
from django.conf import settings
app = Celery('webapp')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
tasks.py code
from __future__ import absolute_import
from celery.utils.log import get_task_logger
from celery import shared_task
import datetime
logger = get_task_logger(__name__)
@shared_task
def sample_code():
logger.info("Run time:" + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")))
return None
**In the shell I am importing the task and running "sample_code.delay()"**
Full error stack:
[2016-02-12 00:28:56,331: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
The full contents of the message body was: body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04]q\x05U\x05chordq\x06NU\tcallbacksq\x07NU\x08errbacksq\x08NU\x07tasksetq\tNU\x02idq\nU$f02e662e-4eda-4180-9af4-2c8a1ceb57c4q\x0bU\x07retriesq\x0cK\x00U\x04taskq\rU$app.tasks.sample_codeq\x0eU\ttimelimitq\x0fNN\x86U\x03etaq\x10NU\x06kwargsq\x11}q\x12u.' (232b)
{content_type:u'application/x-python-serialize' content_encoding:u'binary'
delivery_info:{'consumer_tag': u'None4', 'redelivered': False, 'routing_key': u'celery', 'delivery_tag': 8, 'exchange': u'celery'} headers={}}
Please let me know where I am wrong
Answer: The way it was solved for me was a change in the command used to run celery.
It was giving the issue with:
    celery -A <app_path> worker --loglevel=DEBUG
But it runs without issue if we use:
    celery -A <app_path> worker -l info
It may be helpful for others if they face the same issue.
|
Keep getting a 'SyntaxError: unexpected EOF while parsing' when running my app, can't figure out why
Question: I'm using the Python 3 IDLE, and it's not highlighting anything to tell me
what the syntax error is. It mentions the error is on line nine, though I can't see it.
Here's the code; it's a school project for a 'speed checker':
import time#python module with time related functions
file = open('speeders.txt', 'r')
speeders = file.read()
print (speeders) #prints out list of speeding cars
reg_plate = int(input("Please enter the car's registration plate"))#registration plate
speed_limit = int(input("Please enter your speed limit in mph"))#assigns speed limit
input("Press enter when the car passes the first sensor")#assign values to the end and start time variables
start_time = time.time()
input("Press enter when the car passes the second sensor")
end_time = time.time()
distance = float(input("Enter the distance between the two sensors in metres")) #assigns a value to distance
time_taken = end_time - start_time #works out the time it took the car to travel the length of the road
AverageSpeed = distance / time_taken #works out the average speed of the car
print ("The average speed of the car is", AverageSpeed, "m/s") #prints out the average speed of the car in m/s
AverageSpeedMPH = (AverageSpeed * 2.23694) #converts to mph
print ("That's", AverageSpeedMPH, "in mph") #prints out the speed in mph
if AverageSpeedMPH > speed_limit: #prints out whether car is speeding, adds to txt file
print (reg_plate, "is speeding")
file = open("speeders.txt", "a")
file.write(reg_plate + ",")
file.close()
else:
print (reg_plate, "is not speeding, be on your merry way") #prints out if not speeding
Here's what is displayed when the app runs
Please enter the car's registration plate5
Please enter your speed limit in mph5
Press enter when the car passes the first sensor
Traceback (most recent call last):
File "C:\Users\Szymon\Google Drive\Computing\Actual CA work\app2.py", line 9, in <module>
input("Press enter when the car passes the first sensor")#lines 3-7 assign values to the end and start time variables
File "<string>", line 0
^
SyntaxError: unexpected EOF while parsing
Answer: It looks like you are using Python2, but still using `input()`. Try either
switching to Python3, or using `raw_input()` instead.
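For example, a minimal sketch of the Python 2 fix for the sensor prompts, leaving the
rest of the script unchanged:
    raw_input("Press enter when the car passes the first sensor")
    start_time = time.time()
    raw_input("Press enter when the car passes the second sensor")
    end_time = time.time()
The numeric prompts would likewise become `int(raw_input(...))` and `float(raw_input(...))`.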
|
errno 9 bad file descriptor, basic python server socket
Question: I'm having difficulty getting data from a server in Python. I'm getting
Errno 9 bad file descriptor and a 'connection was reset' message in my browser
whenever I try to access my test html file. The code is as follows:
#import socket module
from socket import *
serverSocket = socket(AF_INET, SOCK_STREAM)
serverPort = 12000
#Prepare a sever socket
serverSocket.bind(("", serverPort))
serverSocket.listen(1)
while True:
#Establish the connection
print 'Ready to serve...'
connectionSocket, addr = serverSocket.accept()#Accepts a TCP client connection, waiting until connection arrives
print 'Required connection', addr
try:
message = connectionSocket.recv(64)
filename = message.split()[1]
f = open(filename[1:])
outputdata = f.read()
#Send one HTTP header line into socket
connectionSocket.send('HTTP/1.0 200 OK\r\n\r\n')
#Send the content of the requested file to the client
for i in range(0, len(outputdata)):
connectionSocket.send(outputdata[i])
connectionSocket.close()
except IOError:
#Send response message for file not found
connectionSocket.send('404 Not Found!')
#Close client socket
connectionSocket.close()
serverSocket.close()
I can't tell why I am getting this error. I have tried removing the close from
right outside the outputdata for loop, which didn't work either. I tried
changing the server port, and closing the socket and server in different
orders.
The full traceback is:
Traceback (most recent call last):
File "UDPServer.py", line 13, in <module>
connectionSocket, addr = serverSocket.accept()#Accepts a TCP client connection, waiting until connection arrives
File "C:\Anaconda\lib\socket.py", line 202, in accept
sock, addr = self._sock.accept()
File "C:\Anaconda\lib\socket.py", line 170, in _dummy
raise error(EBADF, 'Bad file descriptor')
socket.error: [Errno 9] Bad file descriptor
Answer: You can't use the socket once it's closed. [The
docs](http://docs.python.org/2/library/socket.html#socket.socket.close) for
`socket.close()` say:
> All future operations on the socket object will fail.
You could create a new socket in the loop.
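Another option, sketched here under the assumption that you only ever need one
listening socket, is to move `serverSocket.close()` out of the loop so `accept()`
keeps working on later iterations:
    serverSocket = socket(AF_INET, SOCK_STREAM)
    serverSocket.bind(("", serverPort))
    serverSocket.listen(1)  # listen once, before the loop
    try:
        while True:
            connectionSocket, addr = serverSocket.accept()
            try:
                pass  # ... read the request and send the response as above ...
            finally:
                connectionSocket.close()  # close only the per-client socket
    finally:
        serverSocket.close()  # close the listening socket once, on shutdown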
|
generates all tuples (x, y) in a range
Question: I'd like to know what is the pythonic way to generate all tuples (x, y) where
x and y are integers in a certain range. I need it to generate n points and I
don't want to take the same point two or more times.
Answer: The most Pythonic way is to use the standard library:
>>> import itertools
>>> itertools.product(range(3), range(4))
<itertools.product object at 0x7f2b5c8bc510>
>>> list(_)
[(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1),
(1, 2), (1, 3), (2, 0), (2, 1), (2, 2), (2, 3)]
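If you also need only `n` distinct points rather than the full grid, one option
(a sketch, fine when the ranges are small enough to materialize) is to sample
without replacement from the product:
    import itertools
    import random
    points = list(itertools.product(range(3), range(4)))  # all 12 points
    n = 5  # must not exceed len(points)
    chosen = random.sample(points, n)  # n distinct (x, y) tuples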
|
Anaconda not able to import the packages like numpy, scipy, theano etc
Question: Without installing Anaconda, everything works fine. That is, I am able to
import the above mentioned packages. But after installing Anaconda, I am not
able to import the same packages. Here is the error I get:
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 199, in <module>
from . import random
File "/usr/local/lib/python2.7/dist-packages/numpy/random/__init__.py", line 99, in <module>
from .mtrand import *
ImportError: /usr/local/lib/python2.7/dist-packages/numpy/random /mtrand.so: undefined symbol: PyFPE_jbuf
Answer: Once you install the Anaconda distribution it updates your .bashrc so that
anaconda/bin comes first on your PATH. This means that any Python packages
installed in /usr/local/ may not be importable.
I second the suggestion above and recommend using virtual environments to do
your work. The Anaconda Python distribution comes with conda package
management. This may make your life easier.
You can create new environments, and install packages not provided by the
distribution, using conda
build (<http://conda.pydata.org/docs/build_tutorials.html>).
Also look at pip and python wheels.
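For example, a sketch of that workflow (the environment name and package list
here are just illustrative):
    conda create -n myenv python=2.7 numpy scipy
    source activate myenv
    python -c "import numpy; print(numpy.__version__)"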
|
TF save/restore graph fails at tf.GraphDef.ParseFromString()
Question: Based on this [converting-trained-tensorflow-model-to-
protobuf](http://stackoverflow.com/questions/35247406/converting-trained-
tensorflow-model-to-protobuf) I am trying to save/restore TF graph without
success.
Here is saver:
with tf.Graph().as_default():
variable_node = tf.Variable(1.0, name="variable_node")
output_node = tf.mul(variable_node, 2.0, name="output_node")
sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)
output = sess.run(output_node)
tf.train.write_graph(sess.graph.as_graph_def(), summ_dir, 'model_00_g.pbtxt', as_text=True)
#self.assertNear(2.0, output, 0.00001)
saver = tf.train.Saver()
saver.save(sess, saver_path)
which produces `model_00_g.pbtxt` with a text graph description. Pretty much
a copy-paste from
[freeze_graph_test.py](https://github.com/tensorflow/tensorflow/blob/00440e99ffb1ed1cfe4b4ea650e0c560838a6edc/tensorflow/python/tools/freeze_graph_test.py#L79).
Here is reader:
with tf.Session() as sess:
with tf.Graph().as_default():
graph_def = tf.GraphDef()
graph_path = '/mnt/code/test_00/log/2016-02-11.22-37-46/model_00_g.pbtxt'
with open(graph_path, "rb") as f:
proto_b = f.read()
#print proto_b # -> I can see it
graph_def.ParseFromString(proto_b) # no luck..
_ = tf.import_graph_def(graph_def, name="")
print sess.graph_def
which fails at `graph_def.ParseFromString()` with `DecodeError: Tag had
invalid wire type.`
I am on docker container `b.gcr.io/tensorflow/tensorflow:latest-devel` in case
it makes any difference.
Answer: The `GraphDef.ParseFromString()` method (and, in general, the
`ParseFromString()` method on any Python protobuf wrapper) expects a string in
the binary protocol buffer format. If you pass `as_text=False` to
[`tf.train.write_graph()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#write_graph),
then the file will be in the appropriate format.
Otherwise you can do the following to read the text-based format:
from google.protobuf import text_format
# ...
graph_def = tf.GraphDef()
text_format.Merge(proto_b, graph_def)
|
different versions of python missing pymssql
Question: I have python2.7.3 in /usr/bin/.
This version can import pymssql without error.
I have python2.7.11 in /usr/local/bin/.
This version gets an error when importing pymssql:
$ sudo pip install pymssql
Requirement already satisfied (use --upgrade to upgrade): pymssql in /usr/local/lib/python2.7/dist-packages
Cleaning up...
$ python
Python 2.7.11 (default, Feb 9 2016, 14:42:25)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymssql
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pymssql
$ which python
/usr/local/bin/python
How can I install pymssql for Python 2.7.11?
Answer: When you run `sudo pip` you are invoking `/usr/bin/python`. Install your
package without `sudo`. I agree with @Michael Frystacky--using virtualenv will
save you from using `sudo pip` ever again.
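One way to be explicit about which interpreter receives the package is to
invoke pip through that interpreter, e.g.:
    /usr/local/bin/python -m pip install pymssql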
|
Using input redirection command ("<") in IPython
Question: In the Windows command line, I am using the [redirection operator
("<")](https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-
us/redirection.mspx?mfr=true) to read input from a file. To run my python
module, I would do something like:
`python myscript.py <input.txt`
In `myscript.py`, I am using `sys.stdin.readline()` to read the input stream,
something like:
def main(argv):
line = sys.stdin.readline()
while line:
doSomething()
line = sys.stdin.readline()
if __name__ == "__main__":
main(sys.argv)
This works fine on the command line.
Is there a way to use the redirection command in IPython? I want to debug the
file. Thanks.
I am running Python 3.5.1:: Anaconda 2.5.0 on Win64.
Answer: Not easily. The redirection is a feature of the host command shell, and
running anything inside IPython will isolate you from that.
Another way to do what you're looking for is to bring IPython into your
program. If you know the place where it is breaking, you can add the following
code to the except block of a try-except around the broken line:
import IPython
IPython.embed()
This will start an interactive IPython shell in the context the error
occurred.
Alternatively, you can run the program under the control of the debugger:
[Step-by-step debugging with
IPython](http://stackoverflow.com/questions/16867347/step-by-step-debugging-
with-ipython)
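If you mainly need the redirected input rather than an interactive shell, a
hedged workaround (assuming your script only ever reads `sys.stdin` and never
needs the real console) is to rebind `sys.stdin` inside IPython before running
the script:
    import sys
    sys.stdin = open('input.txt')
    %run myscript.py
    sys.stdin = sys.__stdin__  # restore the real stdin afterwards
This works because `%run` executes the script in the current process, so it
sees the rebound `sys.stdin`.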
|
How to include text within a python file
Question: There are several questions related to reading python code from external
files, including
[Python: How to import other Python
files](http://stackoverflow.com/questions/2349991/python-how-to-import-other-
python-files)
[How to include external Python code to use in other
files?](http://stackoverflow.com/questions/714881/how-to-include-external-
python-code-to-use-in-other-files)
But it's not clear how to include snippets of text from an external file, as
is standard practice with the CPP `#include` directive. For example, the
following code:
def get_schema():
schema1 = \
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "0",
"type": "object",
"properties": {
"i": { "type": "integer" },
"n": { "type": "string" }
}
}
return(schema1)
s = get_schema()
print(s)
returns the expected dict:
{'id': '0', '$schema': 'http://json-schema.org/draft-04/schema#', 'type': 'object', 'properties': {'i': {'type': 'integer'}, 'n': {'type': 'string'}}}
But what I want to do is write the code so that it imports variable
definitions from external files that do not include any python code (function
definitions or variable assignments):
def get_schema():
schema1 = \
#include "schema1.json"
return(schema1)
s = get_schema()
print(s)
I'm sure this could be done with a bunch of code to open the definition files,
read the contents into a string, add pre- and post-amble lines of python, and
then exec the string, but it seems that there should be a simpler way to just
include text in a function definition. Is there?
Answer: If you want to load json from a file you can use
[**`json`**](https://docs.python.org/2/library/json.html#json.load) library:
import json
def get_schema():
with open('schema1.json', 'r') as f:
return json.load(f)
|
Python - Replace value in JSON file from second file if keys match
Question: I have two JSON files that look like this
{"type": "FeatureCollection", "features": [{ "type": "Feature", "properties": { **"id"**: "Carlow", **"density"**: "0" } , "geometry": { "type": "MultiPolygon", "coordinates": [ [ [ [ -6.58901, 52.906464 ], [ -6.570265, 52.905682 ], [ -6.556207, 52.906464 ],
Second JSON file
{"features": [{"**count**": 2, "name": "**Sligo**"}, {"count": 3"name":"Fermanagh"},{"count": 1, "name": "Laois"},
I am trying to check if **"id"** in the first file matches with **"name"** in
the second file and if so change the value for **"density"** to the value for
**"count"** from the second file. I am looking at using recursion from a
similar question I found here [Replace value in JSON file for key which can be
nested by n levels](http://stackoverflow.com/questions/14882138/replace-value-
in-json-file-for-key-which-can-be-nested-by-n-levels) but it only checks if
one key matches and changes value. I need two keys to match before changing
values. This is the code I have used so far but not sure how to add two keys
and two values. I use Counter to count the number of times a string appears
and save it to county_names.json, which is my second JSON file.
ire_countiesTmp.json is my first file, the one whose values I am trying to
replace using the second file. I'm not sure how to do this with Python as I've
only started learning it. Any help would be great, or let me know if you know
a better way. Thanks.
import json, pprint
from collections import Counter
with open('../county_names.json') as data_file:
county_list = json.load(data_file)
for i in county_list:
c = Counter(i for i in county_list)
for county,count in c.iteritems():
with open('ire_countiesTmp.json') as f:
def fixup(adict, k1, v1, k2, v2):
for key in adict.keys():
if adict[key] == v1:
adict[key] = v
elif type(adict[key]) is dict:
fixup(adict[key], k, v)
#pprint.pprint( data )
fixup(data, 'id', county, 'density', count)
pprint.pprint( data )
Answer: Generally speaking, recursion is not a good idea in Python. The
compiler/interpreter does not handle it well and it becomes terribly slow, as
there is no tail recursion optimisation: [Why is recursion in python so
slow?](http://stackoverflow.com/questions/13543019/why-is-recursion-in-python-
so-slow) .
A possible brute-force-solution that assumes you have converted your JSON-data
into a dict could look like this:
def fixup_dict_a_with_b(a, b):
for feature_a in a["features"]:
for feature_b in b["features"]:
if feature_a["properties"]["id"] == feature_b["name"]:
feature_a["properties"]["density"] = feature_b["count"]
break
This can of course be "abstractified" to your liking. ;)
Other, more elegant solutions exist, but this one is straightforward and easy
to grasp when you have just started using Python. (Eventually, you might want to
look into pandas, for example.)
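For completeness, a sketch of wiring this up with the files from the question
(the output filename is my own choice):
    import json
    with open('ire_countiesTmp.json') as fa, open('county_names.json') as fb:
        a = json.load(fa)
        b = json.load(fb)
    fixup_dict_a_with_b(a, b)
    with open('ire_countiesTmp_fixed.json', 'w') as out:
        json.dump(a, out)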
|
Resetting Python list contents in nested loop
Question: I have some files in this format, and I need to return the oldest and newest
file for each node to pass to a function for parsing:
Nv_NODE_DATE_TIME
I would like the output to be
Nv_stats_172550_160211_230030
Nv_stats_172550_160212_142624
Nv_stats_75AKPD0_160211_230030
Nv_stats_75AKPD0_160212_142624
but I am getting the absolute first item and absolute last item
Nv_stats_172550_160211_230030
Nv_stats_75AKPD0_160212_142624
Nv_stats_172550_160211_230030
Nv_stats_75AKPD0_160212_142624
Here is the current code
import os
iostatslocalpath="/root/svc/testing/"
svchost='SVC_Cluster01'
nodenames=['75AKMX0', '75AKPD0', '172550', '172561']
filelist=sorted(os.listdir(iostatslocalpath+svchost+'/.'))
totalfilenumber=len(filelist)
def parse(filename, length):
print filename[0]
print test[length-1]
for nodename in nodenames:
test=[]
test[:]=[]
for file in filelist:
if nodename and "Nv" in file:
test.append(file)
parse(test, len(test))
There is probably something small I am overlooking; any help would be
appreciated.
Answer: Note that the
def parse(filename, length):
print filename[0]
print test[length-1]
uses test. You should probably make it
def parse(filename, length):
print filename[0]
print filename[length-1]
Then
if nodename and "Nv" in file:
does the `in` first and then does the `and` ([5.15. Operator
precedence](https://docs.python.org/2/reference/expressions.html)). It is thus
the equivalent of
        if (nodename) and ("Nv" in file):
Since nodename is a non-empty string, the first part is always true.
You probably want to use
if (nodename in file) and ("Nv" in file):
|
ImportError: No module named version in Astropy
Question: I currently have an anaconda installation of of Python, which includes astropy
and numpy among other useful packages. I recently updated my Astropy
individually through pip, by running
pip install --upgrade astropy
After this silly thing that I probably should not have done (I should have
upgraded the entire anaconda package), my pyspeckit package stopped working,
claiming it could not find the version.py in astropy. This is the error I get:
/Users/saracamnasio/Research/code/MC_test.py in <module>()
5 import utilities as u
6 import BDdb
----> 7 import pyspeckit
8 import StringIO
9 import corner
/Users/saracamnasio/Research/code/pyspeckit/pyspeckit/__init__.py in <module>()
8
9 if not _ASTROPY_SETUP_:
---> 10 from version import version as __version__
11 import spectrum
12 import specwarnings
ImportError: No module named version
I tried to uninstall and reinstall astropy, as well as update anaconda
independently, but that did not fix it. Suggestions?
Answer: Evert's comment is most likely the correct answer: just update pyspeckit. The
version you're using is out of date and has some potential inconsistencies in
how it does relative imports.
However, what you have uncovered is, if not a bug, definitely not a feature,
so it will be removed soon:
<https://github.com/pyspeckit/pyspeckit/pull/134>
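In practice that just means something like:
    pip install --upgrade pyspeckit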
|
Python install location on OSX - trouble installing python modules
Question: I'm having a problem installing python packages and I think it has to do with
the fact that I apparently have 4 Python directories. I can download and
install them without a problem using pip... but when trying to import them in
an IDE they don't appear.
Any help would be appreciated and I should say that I'm a complete beginner.
Answer: That's a really tricky issue specific to OS X, and also hard to fix. The root
cause is the fact that GUI apps and console apps do not share the same
environment (with things like PATH and PYTHONPATH).
Read <http://stackoverflow.com/a/588442/99834>
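As a first diagnostic step (these commands only report, they change nothing),
you can compare what the shell and Python themselves are using:
    which -a python
    python -c "import sys; print(sys.executable)"
    pip -V
If the interpreter reported by `sys.executable` differs from the one your IDE
runs, packages installed with pip are landing in a different installation than
the one the IDE imports from.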
|
Python3 - TypeError: module.__init__() takes at most 2 arguments (3 given)
Question: Please don't mark as duplicate, other similar questions did not solve my
issue.
This is my setup
/main.py
/actions/ListitAction.py
/actions/ViewAction.py
Main.py:
from actions import ListitAction, ViewAction
ListitAction.py:
class ListitAction(object):
def __init__(self):
#some init behavior
def build_uri():
return "test.uri"
ViewAction.py
from actions import ListitAction
class ViewAction(ListitAction):
def __init__(self, view_id):
ListitAction.__init__(self)
self.view_id = view_id
def build_uri():
return "test"
Running:
$ python3 main.py
The only error message I receive is:
Traceback (most recent call last):
File "/home/jlevac/workspace/project/listit.py", line 11, in <module>
from actions import ListitAction, ViewAction, CommentsAction
File "/home/jlevac/workspace/project/actions/ViewAction.py", line 3, in <module>
class ViewAction(ListitAction):
TypeError: module.__init__() takes at most 2 arguments (3 given)
Even if I try for the python3 console, I received the same error message:
$python3
from actions import ViewAction
I am new to Python, but not new to programming. I'm assuming that my error
messages have to do with the import statements, but based on the message I
can't really figure out what it means.
Thanks
Answer: Your imports are wrong, so you're trying to inherit from the modules
themselves, not the classes (of the same name) defined inside them.
from actions import ListitAction
in `ViewAction.py` should be:
from actions.ListitAction import ListitAction
and similarly, all other uses should switch to explicit imports of `from
actions.XXX import XXX` (thanks to the repetitive names), e.g. `from actions
import ListitAction, ViewAction` must become two imports:
from actions.ListitAction import ListitAction
from actions.ViewAction import ViewAction
because the classes being imported come from different modules under the
`actions` package.
|
Cannot parse a protocol buffers file in python when using the correct .proto file
Question: _(see update at bottom)_
[Tilemaker](https://github.com/systemed/tilemaker) is an
[OpenStreetMap](http://www.openstreetmap.org) programme to generate [Mapbox
vector tiles](https://www.mapbox.com/developers/vector-tiles/) (which are
themselves [protocol buffers](https://developers.google.com/protocol-buffers/)
(pbf) files) from an OSM pbf data file. I have compiled it and used it to
create a directory of vector tiles. I cannot parse those files in Python.
I created the vector tiles with:
tilemaker input.pbf --output=tiles/
Then I created a simple python programme, based on [Google's Protocol Buffers
Python Tutorial](https://developers.google.com/protocol-
buffers/docs/pythontutorial#writing-a-message) in this way:
Compiling the `.proto` files:
mkdir py
touch py/__init__.py
protoc --proto_path=include --python_out=./py ./include/osmformat.proto
protoc --proto_path=include --python_out=./py ./include/vector_tile.proto
This python programme `pyread.py` doesn't work:
import sys
import py.vector_tile_pb2
with open(sys.argv[1]) as fp:
pbf_file_contents = fp.read()
tile = py.vector_tile_pb2.Tile()
tile.ParseFromString(pbf_file_contents)
This is the error when trying to run it:
$ python pyread.py ./tiles/13/3932/2588.pbf
Traceback (most recent call last):
File "pyread.py", line 8, in <module>
tile.ParseFromString(pbf_file_contents)
File "/home/rory/.local/lib/python2.7/site-packages/google/protobuf/message.py", line 186, in ParseFromString
self.MergeFromString(serialized)
File "/home/rory/.local/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 841, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "/home/rory/.local/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 866, in InternalParse
new_pos = local_SkipField(buffer, new_pos, end, tag_bytes)
File "/home/rory/.local/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 827, in SkipField
return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
File "/home/rory/.local/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 797, in _RaiseInvalidWireType
raise _DecodeError('Tag had invalid wire type.')
google.protobuf.message.DecodeError: Tag had invalid wire type.
The `protoc` command is from the protocol buffers library. I downloaded the
latest release (2.6.1) from Google's page (which links to Github) and compiled
& installed it. That protoc invocation is just like what the [Tilemaker
Makefile
does](https://github.com/systemed/tilemaker/blob/15a18afca342362a7f3d780c7ca64b1c30552e3d/Makefile#L20).
What's going on? How can I read this protocol buffers file in python?
* * *
**UPDATE** Further investigation makes me think that one of my assumptions
might be wrong. Namely, that the `tilemaker` command has produced a valid
protobuf file. I got some [vector tiles from
Mapzen](https://mapzen.com/projects/vector-tiles/), which should have the same
format and very similar data. _But_ this format works with the python
`pyread.py` command, and with `protoc --decode_raw` and `protoc
--decode=vector_tile.Tile ./include/vector_tile.proto`. Hence I think the
problem is with the file I was looking at.
Answer: I think the problem is that OpenStreetMap's `.pbf` format is **not** a raw
protobuf. See my answer to your other question:
<http://stackoverflow.com/a/35384238/2686899>
|
Python: for loop stops at first row when reading from csv
Question: With this code, I can get it to fire through the first row in a csv and post the
content. Without the for loop, it works great as well. Using a simple print
statement, I am also able to print out all of the rows in the csv. Where
I'm getting stuck is how to get this to loop through my csv (2300 rows) and
replace two inline variables. I've tried a couple of iterations of this, moving
statements around, etc.; this is my latest attempt.
from __future__ import print_function
import arcrest
import json
import csv
if __name__ == "__main__":
username = "uid"
password = "pwd"
portalId = "id"
url = "http://www.arcgis.com/"
thumbnail_url = ""
with open('TILES.csv') as csvfile:
inputFile = csv.DictReader(csvfile)
x = 0 # counter to display file count
for row in inputFile:
if x == 0:
map_json = {
"operationalLayers": [
{
"templateUrl": "https://{subDomain}.tiles.mapbox.com/v4/abc.GRSM_"+row['ID']+"_pink/{level}/{col}/{row}.png?access_token=pk.secret",
"id": "GRSM_SPECIES_OBSERVATIONS_MAXENT_5733",
"type": "WebTiledLayer",
"layerType": "WebTiledLayer",
"title": row['Species']+" Prediction",
"copyright": "GRSM",
"fullExtent": {
"xmin": -20037508.342787,
"ymin": -20037508.34278,
"xmax": 20037508.34278,
"ymax": 20037508.342787,
"spatialReference": {
"wkid": 102100
}
},
"subDomains": [
"a",
"b",
"c",
"d"
],
"visibility": True,
"opacity": 1
}
],
"baseMap": {
"baseMapLayers": [
{
"id": "defaultBasemap",
"layerType": "ArcGISTiledMapServiceLayer",
"opacity": 1,
"visibility": True,
"url": "http://services.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer"
}
],
"title": "Topographic"
},
"spatialReference": {
"wkid": 102100,
"latestWkid": 3857
},
"version": "2.0"
}
securityHandler = arcrest.AGOLTokenSecurityHandler(username,
password)
# Create the administration connection
#
admin = arcrest.manageorg.Administration(url, securityHandler)
# Access the content properties to add the item
#
content = admin.content
# Get the user #
user = content.users.user()
# Provide the item parameters
#
itemParams = arcrest.manageorg.ItemParameter()
itemParams.title = "GRSM_"+row['Species']
itemParams.thumbnailurl = ""
itemParams.type = "Web Map"
itemParams.snippet = "Maxent Output: "+row['Species']
itemParams.licenseInfo = "License"
itemParams.accessInformation = "Credits"
itemParams.tags = "Maxent"+row['Species']
itemParams.description = "This map depicts the tiled output of a Maxent model depicting the probability of occurrence of "+row['Species']+". An in-line legend is not available for this map. "
itemParams.extent = "-84.1076,35.2814,-82.9795, 35.8366"
# Add the Web Map
#
print (user.addItem(itemParameters=itemParams,
overwrite=True,
text=json.dumps(row)))
x = x + 1
Here's the csv:
Species,ID
Abacion_magnum,0000166
Abaeis_nicippe,0000169
Abagrotis_alternata,0000172
Abies_fraseri,0000214
Ablabesmyia_mallochi,0000223
Abrostola_ovalis,0000232
Acalypha_rhomboidea,0000253
Acanthostigma_filiforme,0000296
Acanthostigma_minutum,0000297
Acanthostigma_multiseptatum,0000298
Acentrella_ampla,0000314
Acer_negundo,0000330
Acer_pensylvanicum,0000333
Acer_rubrum_v_rubrum,0000337
Acer_rubrum_v_trilobum,0000338
Acer_saccharum,0000341
Acer_spicatum,0000343
Answer: I think your indentation is wrong: the only things inside your `for` loop are
the `if` statement and the JSON:
if x == 0:
map_json = {
"operationalLayers": [
{
"templateUrl": "https://{subDomain}.tiles.mapbox.com/v4/abc.GRSM_"+row['ID']+"_pink/{level}/{col}/{row}.png?access_token=pk.secret",
"id": "GRSM_SPECIES_OBSERVATIONS_MAXENT_5733",
"type": "WebTiledLayer",
"layerType": "WebTiledLayer",
"title": row['Species']+" Prediction",
"copyright": "GRSM",
"fullExtent": {
"xmin": -20037508.342787,
"ymin": -20037508.34278,
"xmax": 20037508.34278,
"ymax": 20037508.342787,
"spatialReference": {
"wkid": 102100
}
},
"subDomains": [
"a",
"b",
"c",
"d"
],
"visibility": True,
"opacity": 1
}
],
"baseMap": {
"baseMapLayers": [
{
"id": "defaultBasemap",
"layerType": "ArcGISTiledMapServiceLayer",
"opacity": 1,
"visibility": True,
"url": "http://services.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer"
}
],
"title": "Topographic"
},
"spatialReference": {
"wkid": 102100,
"latestWkid": 3857
},
"version": "2.0"
}
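A hedged sketch of the structure you probably want instead, with everything
that uses `row` inside the loop and the one-row guard removed (`build_map_json`
and `build_item_params` are hypothetical helpers wrapping the dict and
item-parameter code above):
    for row in inputFile:
        map_json = build_map_json(row)        # hypothetical helper for the JSON above
        item_params = build_item_params(row)  # hypothetical helper for the item parameters
        print(user.addItem(itemParameters=item_params,
                           overwrite=True,
                           text=json.dumps(map_json)))
Note that the original passes `text=json.dumps(row)` rather than the map JSON,
which may also be worth double-checking.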
|
Access state of object (tkinter, Python3) beginner's level
Question: What I want: a change of the checkbox state changes the state of the Entry
widget from 'disabled' to 'normal' (checkbox off = Entry disabled, checkbox
on = Entry normal). My problem is that I don't know how to access and update
the state of the entry.
My code:
from tkinter import *
from tkinter import ttk
class App(Frame):
def __init__(self, master):
ttk.Frame.__init__(self, master, padding='20')
self.grid()
self.create_checkbox()
self.create_entry()
def create_checkbox(self):
self.limit = BooleanVar()
Checkbutton(self,
text='Limit length',
variable= self.limit,
command= self.state_update,
).grid(row=1, column=1, sticky=W)
def create_entry(self):
self.entry_low = StringVar()
Entry(self,
width=6,
textvariable=self.entry_low,
state='disabled',
).grid(row=1, column=2, sticky=W)
def state_update(self):
self.entry_low.config(state="normal") #THIS OBVIOUSLY DOES NOT WORK
root = Tk()
root.title("Lottery")
app = App(root)
root.mainloop()
I'm a beginner, so I'd be especially grateful for simple solutions.
Answer: Save a reference to the entry widget, then call the `configure` method. To
make things easy, give your checkbutton the values for the states. That isn't
strictly necessary, you can use a boolean and then translate that to the
appropriate state.
def create_checkbox(self):
self.limit = StringVar(value="normal")
checkbutton = Checkbutton(..., onvalue="normal", offvalue="disabled", ...)
checkbutton.grid(...)
def create_entry(self):
self.entry_low = StringVar()
self.entry = Entry(self,
width=6,
textvariable=self.entry_low,
state='disabled',
)
self.entry.grid(row=1, column=2, sticky=W)
def state_update(self):
        self.entry.config(state="normal")  # configures the widget itself, so this now works
* * *
Note: you need to call `grid` in a second step. `grid(...)` (as well as
`place`) returns `None`. If you do `x=Entry(...).grid(...)`, `x` will always
be `None`.
|
PyQT4, Python27 - Serial connection and global variables
Question: I think my issue is simple, but I have hit a brick wall. I am not a programmer
but needed a program to control a laser engraver via Arduino. My Original code
was mostly working, but I wanted the ability to select a serial port with a
QComboBox so I can make it software available for everyone.
I figured out how to do that with the code below:
import sys
import serial
import time
import serial.tools.list_ports
from PyQt4 import QtGui
from window_test import Ui_MainWindow
class Main(QtGui.QMainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
self.ui.btn_laser_poweron.clicked.connect(self.btnFIRE)
self.ui.btn_laser_poweroff.clicked.connect(self.btnOFF)
self.ui.btn_lig_power.clicked.connect(self.btnLIG)
self.ui.btn_cutting_power.clicked.connect(self.btnCUT)
self.ui.btn_power_meter.clicked.connect(self.btnTEST)
self.ui.spinBox.valueChanged.connect(self.PwrLevel)
self.ui.comboBox.activated.connect(self.srlprt)
def srlprt(self):
serial.Serial(str(self.ui.comboBox.currentText()))
def btnFIRE(self):
ser.write("a" + chr(255))
def btnOFF(self):
ser.write("b" + chr(0))
def btnTEST(self):
ser.write("c" + chr(0))
time.sleep(59.5)
ser.write("d" + chr(255))
def btnLIG(self):
ser.write("e" + chr(29))
def btnCUT(self):
ser.write("f" + chr(160))
def PwrLevel(self):
val = self.ui.spinBox.value()
ser.write("g" + chr(val))
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = Main()
window.show()
sys.exit(app.exec_())
Now my problem is that none of my buttons work because "ser" is not globally
defined. I understand that I broke that when I removed "ser =
serial.Serial(port=COM3)" when it was above the class definition, but I don't
know how to fix it. Any help would be greatly appreciated.
Cheers!
Answer: A simple solution would be to just set `ser` as an attribute of your `Main`
instance. Also, it couldn't hurt to close the serial connection if it is open
before opening a new one, e.g.:
import sys
import serial
import time
import serial.tools.list_ports
from PyQt4 import QtGui
from window_test import Ui_MainWindow
class Main(QtGui.QMainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
self.ser = None
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
self.ui.btn_laser_poweron.clicked.connect(self.btnFIRE)
self.ui.btn_laser_poweroff.clicked.connect(self.btnOFF)
self.ui.btn_lig_power.clicked.connect(self.btnLIG)
self.ui.btn_cutting_power.clicked.connect(self.btnCUT)
self.ui.btn_power_meter.clicked.connect(self.btnTEST)
self.ui.spinBox.valueChanged.connect(self.PwrLevel)
self.ui.comboBox.activated.connect(self.srlprt)
def srlprt(self):
if self.ser:
self.ser.close()
self.ser = serial.Serial(str(self.ui.comboBox.currentText()))
def btnFIRE(self):
self.ser.write("a" + chr(255))
def btnOFF(self):
self.ser.write("b" + chr(0))
def btnTEST(self):
self.ser.write("c" + chr(0))
time.sleep(59.5)
self.ser.write("d" + chr(255))
def btnLIG(self):
self.ser.write("e" + chr(29))
def btnCUT(self):
self.ser.write("f" + chr(160))
def PwrLevel(self):
val = self.ui.spinBox.value()
self.ser.write("g" + chr(val))
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = Main()
window.show()
sys.exit(app.exec_())
|
Key bindings don't work when using askopenfilename dialog in tkinter
Question: I'm working on a simple app for reading and displaying sequences of image
files from within a zip file using python 3.4 with tkinter, like you might use
for reading .cbz comic book files. Ideally I'd like to bind the left and right
keys to show the last and next images respectively. This works fine if I
specify the name of the zip file in the code; however, if I use
filedialog.askopenfilename() dialogue box to specify the file, then the
keyboard key bindings no longer work.
I assumed this was due to a focus issue, and I've tried setting the focus to
the label to which the keys are bound (both using the label.focus_set() method
and the parent option of the askopenfilename() dialogue) without success.
Code is below. Any help on this would be greatly appreciated, as it's starting
to drive me nuts.
from tkinter import *
from tkinter import filedialog
import io
from PIL import Image, ImageTk
import zipfile
class ComicDisplay():
def __init__(self, master):
frame = Frame(master)
frame.pack(fill='both', expand=1)
self.parent = master
self.fname = ""
self.label = Label(frame, bg="brown", height=500)
self.current_zip_file = filedialog.askopenfilename(filetypes=[(zip, "*.zip")])
# self.current_zip_file = "C:\\Users\\Alexis\\Dropbox\\Photos.zip"
self.image_list = self.acquire_image_list(self.current_zip_file)
self.current_image_number = 0
self.pil_image = self.acquire_image(self.current_zip_file, self.image_list[self.current_image_number])
self.tk_image = ImageTk.PhotoImage(self.pil_image)
self.parent.title(self.fname)
self.label.configure(image=self.tk_image)
self.label.focus_set()
self.label.bind("<Configure>", self.image_resizing)
self.label.bind("<Left>", self.get_last_image)
self.label.bind("<Right>", self.get_next_image)
self.label.bind("<Button-1>", self.get_next_image)
self.label.pack(padx=5, pady=5, fill='both', expand=1)
def acquire_image_list(self, zip_file):
image_list = []
with zipfile.ZipFile(zip_file, "r") as myFile:
for filename in myFile.namelist():
image_list.append(filename)
image_list.sort()
return image_list
def acquire_image(self, zip_file, image_file):
with zipfile.ZipFile(zip_file, "r") as myFile:
self.fname = image_file
image_bytes = myFile.read(image_file)
data_stream = io.BytesIO(image_bytes)
pil_image = Image.open(data_stream)
pil_image = self.image_sizer(pil_image)
return pil_image
def image_sizer(self, image_file, window_size=500):
w, h = image_file.size
if w > h:
image_file_height = int(h*(window_size/w))
image_file = image_file.resize((window_size, image_file_height), Image.ANTIALIAS)
else:
image_file_width = int(w*(window_size/h))
image_file = image_file.resize((image_file_width, window_size), Image.ANTIALIAS)
return image_file
def image_resizing(self, event):
new_height = root.winfo_height() - 14
new_size_image = self.image_sizer(self.pil_image, new_height)
self.tk_image = ImageTk.PhotoImage(new_size_image)
self.label.configure(image=self.tk_image)
def get_next_image(self, event):
if self.current_image_number >= len(self.image_list)-1:
self.current_image_number = 0
else:
self.current_image_number += 1
self.update_image()
def get_last_image(self, event):
if self.current_image_number == 0:
self.current_image_number = len(self.image_list)-1
else:
self.current_image_number -= 1
self.update_image()
def update_image(self):
self.fname = self.image_list[self.current_image_number]
self.pil_image = self.acquire_image(self.current_zip_file, self.image_list[self.current_image_number])
self.tk_image = ImageTk.PhotoImage(self.pil_image)
self.parent.title(self.fname)
self.image_resizing(None)
root = Tk()
app = ComicDisplay(root)
root.mainloop()
Answer: Bryan's comment held the answer: delaying the open file dialogue until after
the window was initialized solved the problem. Instead of opening the file
when the app starts, creating a file open method allows the key bindings to
work as they should.
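For illustration, here is a minimal sketch of that idea (the widget and method
names are my own, not from the original app): the window is built first with
placeholder content, and askopenfilename() is only called later from an
open_file method, so focus and key bindings survive.
from tkinter import Tk, Label, filedialog

root = Tk()
label = Label(root, text="press <o> to open a zip, arrow keys to test bindings")
label.pack(padx=5, pady=5)
label.focus_set()

def open_file(event=None):
    # called only after the main loop is running, so the bindings keep working
    fname = filedialog.askopenfilename(filetypes=[("zip", "*.zip")])
    label.config(text=fname or "no file chosen")

label.bind("<o>", open_file)
label.bind("<Left>", lambda e: print("last image"))
label.bind("<Right>", lambda e: print("next image"))
root.mainloop()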
|
I have a list of strings paired with a number. I want to find the average value for each word that matches to a list of words. How can I do this?
Question: Below is an example of what I am referring to. 'injury' is located both in the
second and third string with a value of 100,000 and 50,000 respectively. So
the average value for injury would be 75,000. But 'slip' is only located in
the first string, so it would have an average value of 150,000. I would like
to apply this logic to analyze a database. Are there any suggestions on how to
approach this using python?
word_list = ['loss', 'fault', 'slip', 'fall', 'injury']
data_list = [('there was a slip and fall', 150000), ('injury and loss', 100000), ('injury at fault', 50000)]
Output = [('injury', 75000), ('loss', 100000), ('slip', 150000), ('fall', 150000), ('fault', 50000)]
Answer: After stripping the syntax errors from your example, here's one solution using
loops. I don't think there's any neat comprehension you can pull off here, but
I'm eager to be proven wrong. I used floats for accuracy, convert to int as
needed. I also assumed the order of your `Output` does not matter, since I
cannot make out any intrinsic order that would make sense. That being said,
this should get you started:
from collections import defaultdict
d = defaultdict(dict)
word_list = ['loss', 'fault', 'slip', 'fall', 'injury']
data_list = [('there was a slip and fall', 150000), ('injury and loss', 100000), ('injury at fault', 50000)]
split_list = [(set(x.split()), y) for x,y in data_list]
for word in word_list:
for stringset, count in split_list:
if word in stringset:
d[word]['seen'] = d[word].get('seen', 0) + 1
d[word]['count'] = d[word].get('count', 0) + count
print([(word, float(d[word]['count'])/d[word]['seen']) for word in d])
Output:
[('loss', 100000.0), ('injury', 75000.0), ('fall', 150000.0), ('slip', 150000.0), ('fault', 50000.0)]
|
dynamic table names with SQLalchemy
Question: I am trying to convert old sqlite3 code to sql alchemy. I am trying to make
sense of how best to handle my use case. I am new to the ORM method of
database access.
I am trying to dynamically generate unique table names based on a common
definition. I have read the [mixins
guide](http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/mixins.html#declarative-
mixins) as well as the post on how to use `type` [to dynamically declare
classes](http://stackoverflow.com/questions/2768607/dynamic-class-creation-in-
sqlalchemy), but I am still unsure how of I would go about this. Here is what
I have so far:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.ext.declarative import declared_attr
Base = declarative_base()
class DynamicName(object):
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
class Genome(DynamicName, Base):
__tablename__ = 'AbstractGenome'
AlignmentId = Column(Integer, primary_key=True)
StartOutOfFrame = Column(Integer)
BadFrame = Column(Integer)
def build_genome_table(genome):
d = {'__tablename__': genome}
table = type(genome, (Genome,), d)
return table
If I try to use this, it doesn't work:
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)
>>> genomes = ["A", "B"]
>>> tables = {x: build_genome_table(x) for x in genomes}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <dictcomp>
File "<stdin>", line 3, in build_genome_table
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/api.py", line 55, in __init__
_as_declarative(cls, classname, cls.__dict__)
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 88, in _as_declarative
_MapperConfig.setup_mapping(cls, classname, dict_)
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 103, in setup_mapping
cfg_cls(cls_, classname, dict_)
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 135, in __init__
self._early_mapping()
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 138, in _early_mapping
self.map()
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 529, in map
**self.mapper_args
File "<string>", line 2, in mapper
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 623, in __init__
self._configure_inheritance()
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 930, in _configure_inheritance
self.local_table)
File "<string>", line 2, in join_condition
File "/cluster/home/ifiddes/python2.7/lib/python2.7/site-packages/sqlalchemy/sql/selectable.py", line 839, in _join_condition
(a.description, b.description, hint))
sqlalchemy.exc.NoForeignKeysError: Can't find any foreign key relationships between 'AbstractGenome' and 'A'.
How do I go about dynamically generating a `Genome` table based on a passed
name? I also would ideally like a setup where I can have hierarchical
inheritance, so that I can declare different subclasses like `ReferenceGenome`
or `TargetGenome` which have additional columns but also can have dynamic
names.
Answer: See: `sqlalchemy.orm.mapper`:
<http://docs.sqlalchemy.org/en/latest/orm/mapping_api.html#class-mapping-api>.
My understanding (as I've only modified code using this function in the past)
is that it directly maps a model class to a Table object, which itself is
connected to a database table.
This use-case actually sounds pretty similar to the recipe history_meta:
<http://docs.sqlalchemy.org/en/latest/_modules/examples/versioned_history/history_meta.html>.
It might take some time to sort through, but a Table object is being created
here dynamically based on an existing model (any subclass of Versioned), and
then directly mapped to the database table when the class is created.
Here's the issue though: you do need an actual database table to map to. It's
an ORM after all. You have a few options here:
1. If you want to create a table on the fly that will persist in the database, you can use Table.create() as per here: <http://docs.sqlalchemy.org/en/latest/core/metadata.html#creating-and-dropping-database-tables>
2. If you only need to create tables every now and then, you can integrate with alembic: <https://pypi.python.org/pypi/alembic>
3. If you just need it for one process, and never again, you can create temporary tables, though I'm not sure if SQLAlchemy directly supports it. The few resources I checked seem to be using create() and drop() anyway. I haven't used SQLAlchemy 1.0+, so it may have some support somewhere that I haven't seen.
Let me know if anything here isn't clear. It's been a while since I've played
with history_meta.py, so I may be rusty.
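As a concrete illustration of option 1, here is a minimal sketch (class and
variable names are mine, not from the question) that also sidesteps the
NoForeignKeysError above: marking the shared base as abstract means SQLAlchemy
never tries to build joined-table inheritance between the base and each
dynamically created genome class.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AbstractGenome(Base):
    __abstract__ = True  # no table and no mapper for the base itself
    AlignmentId = Column(Integer, primary_key=True)
    StartOutOfFrame = Column(Integer)
    BadFrame = Column(Integer)

def build_genome_table(genome):
    # each generated class maps to its own standalone table
    return type(str(genome), (AbstractGenome,), {'__tablename__': genome})

engine = create_engine('sqlite:///:memory:')
tables = {x: build_genome_table(x) for x in ["A", "B"]}
Base.metadata.create_all(engine)  # emits CREATE TABLE for "A" and "B"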
|
Parsing an ArrayList within an ArrayList not working.
Question: I'm new to Java and I'm trying to learn how to parse through an ArrayList
within an ArrayList and I can't quite figure it out. I'm used to Python where
all you had to do was `list[index][index]`. Why am I getting an error reading
`Exception in thread "main" java.lang.RuntimeException: Uncompilable source
code - Erroneous tree type: <any>` when trying to use
`list.get(index).get(index)`? Is this not the proper syntax?
import java.io.*;
import java.util.*;
public class Practice {
public static void main(String[] args){
ArrayList list = new ArrayList(Arrays.asList(new Integer[]{1,2,3,4,5,6,7,8,9,10}));
ArrayList list1 = new ArrayList(Arrays.asList(new Integer[]{1,2,3,4,5,6,7,8,9,10}));
list.add(list1);
System.out.println(list.get(10).get(0));
}
}
Answer: Java and Python are quite different when it comes to types: [Java Types vs
Python Types](http://www.programcreek.com/2012/09/java-vs-python-data-types/)
Java requires explicit type declarations and is very strict on how types are
used. For example, you need to explicitly specify what type of ArrayLists you
are using.
Assuming that you wanted to create 2 ArrayLists, outerList that contains
innerLists that each contain the numbers 1-10, this Java code will do the
trick:
import java.io.*;
import java.util.*;
public class Practice {
public static void main(String[] args) {
ArrayList<Integer> innerList = new ArrayList<Integer>(Arrays.asList(new Integer[]{1,2,3,4,5,6,7,8,9,10}));
ArrayList<ArrayList<Integer>> outerList = new ArrayList<ArrayList<Integer>>();
for (int i = 0; i < 10; i++) {
outerList.add(innerList);
}
System.out.println(outerList.get(9).get(0));
}
}
|
Pygame independent moving images on the screen
Question: I am new to Python and Pygame. I want to have a screen in pygame with multiple
copies of the same images moving around independently. I have tried to write
it as a class and then call instances of it inside the `while` loop, but it
doesn't work. Could someone show how I can basically do such a thing using a
`class`?
Answer: I've tried to keep everything simple
Example:
import pygame
pygame.init()
WHITE = (255,255,255)
BLUE = (0,0,255)
window_size = (400,400)
screen = pygame.display.set_mode(window_size)
clock = pygame.time.Clock()
class Image():
def __init__(self,x,y,xd,yd):
self.image = pygame.Surface((40,40))
self.image.fill(BLUE)
self.x = x
self.y = y
self.x_delta = xd
self.y_delta = yd
def update(self):
if 0 <= self.x + self.x_delta <= 360:
self.x += self.x_delta
else:
self.x_delta *= -1
if 0 <= self.y + self.y_delta <= 360:
self.y += self.y_delta
else:
self.y_delta *= -1
screen.blit(self.image,(self.x,self.y))
list_of_images = []
list_of_images.append(Image(40,80,2,0))
list_of_images.append(Image(160,240,0,-2))
done = False
while not done:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
screen.fill(WHITE)
for image in list_of_images:
image.update()
pygame.display.update()
clock.tick(30)
pygame.quit()
Each image can be called individually from the list and moved by simply
changing Image.x/y to whatever you want
|
Retrieving arguments from HTTP_REFERER string
Question: I am currently building a Python/Django application where I have a search
function and it is looking great.
`http://localhost:8000/search/?q=&start=Feb+13%2C+Sat&end=Feb+20%2C+Sat`
Then say I selected a link from the listed results that led me to the
detail of the product, which is another page. On the Django view of the
product detail I captured the HTTP_REFERER using:
referer_url = request.META.get('HTTP_REFERER')
`referer_url` is now a string.
I wanted to retrieve the data included in the `referer_url` like:
start = self.request.GET.get("start")
print start
Desired output is:
`Feb 13, Sat`
however I seem to have difficulty. Any ideas?
Answer: You can use the [`parse_qs`
method](https://docs.python.org/2/library/urlparse.html#urlparse.parse_qs) of
the `urlparse` module, after pulling the query string out of the URL with
`urlparse.urlparse`:
import urlparse
parsed = urlparse.parse_qs(urlparse.urlparse(referer_url).query)
print parsed['start'][0]
the result would be:
Feb 13, Sat
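On Python 3, the same functions live in `urllib.parse`:
from urllib.parse import urlparse, parse_qs
parsed = parse_qs(urlparse(referer_url).query)
print(parsed['start'][0])  # -> Feb 13, Sat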
|
What does y_p :::python do in this (or any) script?
Question: I am trying to work through the tutorial by Sebastian Raschka on Feature
scaling and I can't get the code below to run because it throws and error with
the third line, the one that end in 'python'.
from matplotlib import pyplot as plt
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(10,5))
y_p :::python
# Standardization
x = [1,4,5,6,6,2,3]
mean = sum(x)/len(x)
std_dev = (1/len(x) * sum([ (x_i - mean)**2 for x_i in x]))**0.5
z_scores = [(x_i - mean)/std_dev for x_i in x]
# Min-Max scaling
minmax = [(x_i - min(x)) / (min(x) - max(x)) for x_i in x]os = [0 for i in range(len(x))]
ax1.scatter(z_scores, y_pos, color='g')
ax1.set_title('Python standardization', color='g')
ax2.scatter(minmax, y_pos, color='g')
ax2.set_title('Python Min-Max scaling', color='g')
ax3.scatter(z_scores_np, y_pos, color='b')
ax3.set_title('Python NumPy standardization', color='b')
The-effect-of-standardization
ax4.scatter(np_minmax, y_pos, color='b')
ax4.set_title('Python NumPy Min-Max scaling', color='b')
plt.tight_layout()
for ax in (ax1, ax2, ax3, ax4):
ax.get_yaxis().set_visible(False)
ax.grid()
plt.show()
So, what does the y_p :::python do?
Answer: The answer is that it isn't valid Python code.
You should look at the ipython notebook that I believe you got some parts of
that code from.
<http://nbviewer.jupyter.org/github/rasbt/pattern_classification/blob/master/preprocessing/about_standardization_normalization.ipynb>
The relevant snippet is
from matplotlib import pyplot as plt
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(10,5))
y_pos = [0 for i in range(len(x))]
ax1.scatter(z_scores, y_pos, color='g')
ax1.set_title('Python standardization', color='g')
|
ValueError: Found arrays with inconsistent numbers of samples [ 6 1786]
Question: Here is my code:
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import KFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import datasets
import numpy as np
newsgroups = datasets.fetch_20newsgroups(
subset='all',
categories=['alt.atheism', 'sci.space']
)
X = newsgroups.data
y = newsgroups.target
TD_IF = TfidfVectorizer()
y_scaled = TD_IF.fit_transform(newsgroups, y)
grid = {'C': np.power(10.0, np.arange(-5, 6))}
cv = KFold(y_scaled.size, n_folds=5, shuffle=True, random_state=241)
clf = SVC(kernel='linear', random_state=241)
gs = GridSearchCV(estimator=clf, param_grid=grid, scoring='accuracy', cv=cv)
gs.fit(X, y_scaled)
I am getting error and I don't understand why. The traceback:
> Traceback (most recent call last): File
> "C:/Users/Roman/PycharmProjects/week_3/assignment_2.py", line 23, in
>
> gs.fit(X, y_scaled) #TODO: check this line File
> "C:\Users\Roman\AppData\Roaming\Python\Python35\site-
> packages\sklearn\grid_search.py",
> line 804, in fit
> return self._fit(X, y, ParameterGrid(self.param_grid)) File
> "C:\Users\Roman\AppData\Roaming\Python\Python35\site-
> packages\sklearn\grid_search.py",
> line 525, in _fit
> X, y = indexable(X, y) File
> "C:\Users\Roman\AppData\Roaming\Python\Python35\site-
> packages\sklearn\utils\validation.py",
> line 201, in indexable
> check_consistent_length(*result) File
> "C:\Users\Roman\AppData\Roaming\Python\Python35\site-
> packages\sklearn\utils\validation.py",
> line 176, in check_consistent_length
> "%s" % str(uniques))
>
> **ValueError: Found arrays with inconsistent numbers of samples: [ 6 1786]**
Could someone explain why this error occur?
Answer: I think you've got a bit confused with your `X` and `y` here. You want to
transform your `X` into a tf-idf vector and train using this against `y`. See
below
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import KFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import datasets
import numpy as np
newsgroups = datasets.fetch_20newsgroups(
subset='all',
categories=['alt.atheism', 'sci.space']
)
X = newsgroups.data
y = newsgroups.target
TD_IF = TfidfVectorizer()
X_scaled = TD_IF.fit_transform(X, y)
grid = {'C': np.power(10.0, np.arange(-1, 1))}
cv = KFold(y.size, n_folds=5, shuffle=True, random_state=241)
clf = SVC(kernel='linear', random_state=241)
gs = GridSearchCV(estimator=clf, param_grid=grid, scoring='accuracy', cv=cv)
gs.fit(X_scaled, y)
|
python rock paper scissors
Question: Im new to programming and trying to build a simple rock paper scissors program
in python 2.7. in my function i have 2 main if statements
rules = raw_input("Before we being playing would you like to hear the rules first? ")
if rules.lower() == "yes":
print """
Scissors cuts Paper
Paper covers Rock
Rock crushes Scissors"""
and the second
choice = raw_input("please enter your choice? (Must be either rock, paper or scissors) ")
computer = random.choice(["rock", "paper", "scissors"])
if choice == "rock" or "paper" or "scissors":
if choice == computer :
print "Its a tie !"
elif choice == "rock" and computer == "scissors":
print "Rock beats scissors you win!."
elif choice == "rock" and computer == "paper":
print "Paper beats rock you loose !."
elif choice == "paper" and computer == "scissors":
print "Scissors beats paper you loose !."
elif choice == "paper" and computer == "rock":
print "Paper beats rock you win !."
elif choice == "scissors" and computer == "paper":
print "Scissors beats paper you win !."
elif choice == "scissors" and computer == "rock":
print "Rock beats scissors you loose !."
else :
print "Invalid Entry Please try again."
individually both bits of code work as they should but when i try and put them
together in one function the first if statement asking about rules works but
then quits before the second if statement which has the main functionality of
the program. Ive tried indenting the second bit of code within the first if
statement but it doesn't seem to work
I was wondering if there is anyway of making these 2 snips of code work in a
simple function ? or should i create a class with these 2 functions ? also if
anyone has any tips on how to make my progam better please let me know. Thanks
for any help in advance.
heres the full code
import random
def rock_paper_scissors_spock():
rules = raw_input("Before we being playing would you like to hear the rules first? ")
if rules.lower() == "yes":
print """
Scissors cuts Paper
Paper covers Rock
Rock crushes Scissors"""
choice = raw_input("please enter your choice? (Must be either rock, paper or scissors) ")
computer = random.choice(["rock", "paper", "scissors"])
if choice == "rock" or "paper" or "scissors":
if choice == computer :
print "Its a tie !":
elif choice == "rock" and computer == "scissors":
print "Rock beats scissors you win!."
elif choice == "rock" and computer == "paper":
print "Paper beats rock you loose !."
elif choice == "paper" and computer == "scissors":
print "Scissors beats paper you loose !."
elif choice == "paper" and computer == "rock":
print "Paper beats rock you win !."
elif choice == "scissors" and computer == "paper":
print "Scissors beats paper you win !."
elif choice == "scissors" and computer == "rock":
print "Rock beats scissors you loose !."
else :
print "Invalid Entry PLease try again."
rock_paper_scissors_spock()
Answer: You said `if choice == "rock" or "paper" or "scissors":`, but Python does not
connect the `choice ==` to all of the choices. You could put parentheses
around `choice == "rock"`, and it would do the same thing. Change it to `if
choice in ("rock", "paper", "scissors")`
|
Read environment variables in Python
Question: I have set some environment variables in `~/.profile`:
SOMEVAR=/some/custom/path
and already did `source ~/.profile`. So when I do:
echo $SOMEVAR
it prints the correct directory:
/some/custom/path
However, when I try to read this variable in a Python script, it fails:
import os
print(os.environ["SOMEVAR"])
I get:
Traceback (most recent call last):
File "environment_test.py", line 3, in <module>
print os.environ["SOMEVAR"]
File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'SOMEVAR'
What's wrong there?
Answer: The shell deliberately doesn't expose every variable you create to the
processes it launches; a regular variable is only visible inside the shell
where you defined it.
You have to export the variable:
export SOMEVAR=/some/custom/path
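After exporting and re-sourcing `~/.profile`, your script will see the value.
If a variable might not be set at all, `os.environ.get` avoids the `KeyError`:
import os
print(os.environ.get("SOMEVAR"))           # /some/custom/path once exported
print(os.environ.get("MISSING", "unset"))  # a default instead of a KeyError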
|
No module named Win32com.client error when using the pyttsx package
Question: Today, while surfing on _Quora_ , I came across
[answers](https://www.quora.com/What-amazing-things-can-Python-do) on amazing
things that python can do. I tried to use the **pyttsx** _Text to Speech
Convertor_ and that gave me an `No module named Win32com.client` error.
There are many answers on this error, but most of them weren't sufficient
(at least for me), as the proposed solutions didn't match the requirements.
For starters, I'm using Python2.7, and there are no DLLs in the
`C:/Windows/System32` or any Scripts related to the keyword 'pywin32' in my
`C:/Python27/Scripts` Folder. I need a concrete solution.
This is what I have tried so far:
>>> import pyttsx
>>> engine = pyttsx.init()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pyttsx\__init__.py", line 39, in init
eng = Engine(driverName, debug)
File "C:\Python27\lib\site-packages\pyttsx\engine.py", line 45, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "C:\Python27\lib\site-packages\pyttsx\driver.py", line 64, in __init__
self._module = __import__(name, globals(), locals(), [driverName])
File "C:\Python27\lib\site-packages\pyttsx\drivers\sapi5.py", line 19, in <module>
import win32com.client
ImportError: No module named win32com.client
**SOLUTION** : Install the package from [This
Link](https://sourceforge.net/projects/pywin32/files/). Choose the 32/64 bit
version depending on your Python installation type (32/64 bit).
Answer: I had the same problem. I installed pywin32 from
[here](https://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/). I
downloaded for my python version (32 bit).
After installing I was able to import win32com.client
import win32com.client
|
Real-time capture and processing of keypresses (e.g. keypress event)
Question: _Note: I want to do this without using any external packages, like PyGame,
etc._
I am attempting to capture individual keypresses as they arrive and perform an
action for specific characters, whether I simply want to "re-echo" the
character, or not display it at all and do something else.
I have found a cross-platform (though not sure about OS X) getch()
implementation because I do not want to read a whole line like input() does:
# http://code.activestate.com/recipes/134892/
def getch():
try:
import termios
except ImportError:
# Non-POSIX. Return msvcrt's (Windows') getch.
import msvcrt
return msvcrt.getch
# POSIX system. Create and return a getch that manipulates the tty.
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(fd)
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
**[Attempt 1]** I first tried a simple while-true loop to poll getch, but if I
type too fast, characters go missing. Reducing the sleep time makes it worse.
The debug statements only print on press of the enter key and not consistently
in time nor position. (It appears there might be some line buffering going
on?) Taking out the loop and sleep lets it work only once, but perfectly.
#!/usr/bin/env python3
import sys
import tty
import time
def main():
while True:
time.sleep(1)
sys.stdout.write(" DEBUG:Before ")
sys.stdout.write(getch())
sys.stdout.write(" DEBUG:After ")
if __name__ == "__main__":
main()
**[Attempt 2]** I got an example for using a threaded approach
(<http://stackoverflow.com/a/14043979/2752206>) but it "locks up" and won't
accept any input (including `Ctrl-C`, etc.).
#!/usr/bin/env python3
import sys
import tty
import time
import threading
key = 'Z'
def main():
threading.Thread(target=getchThread).start()
while True:
time.sleep(1)
sys.stdout.write(" DEBUG:Before ")
sys.stdout.write(key)
sys.stdout.write(" DEBUG:After ")
def getchThread():
global key
lock = threading.Lock()
while True:
with lock:
key = getch()
if __name__ == "__main__":
main()
Does anyone have any advice or guidance? Or more importantly, can someone
explain _why_ the two attempts do not work? Thanks.
Answer: First off, I don't really think you need multithreading. You'd need that if
you, for example, wanted to do some tasks like drawing on screen or whatever
and capturing keys while you do this.
Let's consider a case where you only want to capture keys and after each
keypress execute some action: Exit, if **x** was pressed, otherwise just print
the character. All you need for this case is a simple while loop:
def process(key):
if key == 'x':
exit('exitting')
else:
print(key, end="", flush=True)
if __name__ == "__main__":
while True:
key = getch()
process(key)
Notice the absence of sleep(). I am assuming you thought getch() won't wait for
user input, so you set a 1s sleep time. However, your getch() waits for one
entry and then returns it. In this case, the global variable is not really
useful, so you might as well just call process(getch()) inside the loop.
`print(key, end="", flush=True)` => the extra arguments will ensure pressed
keys stay on one line, not appending newline character every time you print
something.
The other case, where you'd want to execute different stuff simultaneously,
should use threading.
Consider this code:
n = 0
quit = False
def process(key):
if key == 'x':
global quit
quit = True
exit('exitting')
elif key == 'n':
global n
print(n)
else:
print(key, end="", flush=True)
def key_capturing():
while True:
process(getch())
if __name__ == "__main__":
threading.Thread(target=key_capturing).start()
while not quit:
n += 1
time.sleep(0.1)
This will create global variable `n` and increment it 10 times a second in
main thread. Simultaneously, `key_capturing` method listens to keys pressed
and does the same thing as in previous example + when you press **n** on your
keyboard, current value of the global variable `n` will be printed.
Closing note: as @zondo noted, you really missed braces in the getch()
definition. `return msvcrt.getch` should most likely be `return
msvcrt.getch()`
|
Python ImportError - Custom Module
Question: I have installed a custom module (Twilio) using PIP, but when I try to
`import` it, it will bring up:
ImportError: No module named 'twilio'
I'm running Windows 10 and Python 3.5. What am I missing? It seems to be an
error with the paths. If it is, how do I set the paths?
Edit: I have my PYTHONHOME set to C:\Python33 and my PYTHONPATH set to
C:Python33\Lib
Answer: First of all you need to check your package location.
pip show custom_package
Then check the system paths by
import sys
sys.path
If you don't see your package path here, you can add it.
sys.path.append(custom_package_path)
If this doesn't work, try reinstalling it. Or you can also install it with
easy_install.
|
Implicit OAuth2 grant with PyQt4/5
Question: I have been working on a python app that uses OAuth2 to identify users. I seem
to have successfully implemented the workflow of an OAuth2 implicit grant
(commonly used for installed and user-agent apps), but at the last step of
receiving the token, something appears to be going wrong.
Whenever the user needs to authenticate, a PyQt QWebView window (based on
webkit) is spawned which shows the login page. After the user has logged in
and allowed the scoped permissions for my app, the OAuth2 server redirects to
the prespecified redirect_uri.
The problem is that when using the QWebView browser, the token string,
normally occurring after the #, seems to have been dropped from the URL: the
URL that QWebView returns is just the base redirect_uri.
If I copy paste the OAuth authorization URL and follow through these same
steps of logging in and authorizing in a normal web browser such as Chrome or
Firefox, I do get to see the redirect_uri including the token string, so the
problem does not lie in the OAuth2 process, but must go wrong somewhere at the
implementation at my side.
Is this behavior inherent to the implementation of QWebView or webkit? Am I
reading out the QUrl incorrectly?
For completeness, here is my code:
osf.py module that generates the OAuth2 URLs for the Open Science Framework.
# Import basics
import sys
import os
# Module for easy OAuth2 usage, based on the requests library,
# which is the easiest way to perform HTTP requests.
# OAuth2Session object
from requests_oauthlib import OAuth2Session
# Mobile application client that does not need a client_secret
from oauthlib.oauth2 import MobileApplicationClient
#%%----------- Main configuration settings ----------------
client_id = "cbc4c47b711a4feab974223b255c81c1"
# TESTED, just redirecting to Google works in normal browsers
# the token string appears in the url of the address bar
redirect_uri = "https://google.nl"
# Generate correct URLs
base_url = "https://test-accounts.osf.io/oauth2/"
auth_url = base_url + "authorize"
token_url = base_url + "token"
#%%--------------------------------------------------------
mobile_app_client = MobileApplicationClient(client_id)
# Create an OAuth2 session for the OSF
osf_auth = OAuth2Session(
client_id,
mobile_app_client,
scope="osf.full_write",
redirect_uri=redirect_uri,
)
def get_authorization_url():
""" Generate the URL with which one can authenticate at the OSF and allow
OpenSesame access to his or her account."""
return osf_auth.authorization_url(auth_url)
def parse_token_from_url(url):
token = osf_auth.token_from_fragment(url)
if token:
return token
else:
return osf_auth.fetch_token(url)
The main program, that opens up a QWebView browser window with login screen
# Oauth2 connection to OSF
import osf
import sys
from PyQt4 import QtGui, QtCore, QtWebKit
class LoginWindow(QtWebKit.QWebView):
""" A Login window for the OSF """
def __init__(self):
super(LoginWindow, self).__init__()
self.state = None
self.urlChanged.connect(self.check_URL)
def set_state(self,state):
self.state = state
def check_URL(self, url):
#url is a QUrl object, convert it to string for easier usage
url_string = url.toEncoded()
print(url_string)
if url.hasFragment():
print("URL CHANGED: On token page: {}".format(url))
self.token = osf.parse_token_from_url(url_string)
print(self.token)
elif not osf.base_url in url_string:
print("URL CHANGED: Unexpected url")
if __name__ == "__main__":
""" Test if user can connect to OSF. Opens up a browser window in the form
of a QWebView window to do so."""
# Import QT libraries
app = QtGui.QApplication(sys.argv)
browser = LoginWindow()
auth_url, state = osf.get_authorization_url()
print("Generated authorization url: {}".format(auth_url))
browser_url = QtCore.QUrl.fromEncoded(auth_url)
browser.load(browser_url)
browser.set_state(state)
browser.show()
exitcode = app.exec_()
print("App exiting with code {}".format(exitcode))
sys.exit(exitcode)
Basically, the url that is provided to the check_URL function by the
QWebView's url_changed event never contains the OAuth token fragment when
coming back from the OAuth server, whatever I use for redirect_uri (in this
example I simply redirect to google for the sake of simplicity).
Could anyone please help me with this? I have exhausted my options of where to
look for a solution to this problem.
Answer: This appears to be a known bug in Webkit/Safari:
<https://bugs.webkit.org/show_bug.cgi?id=24175>
<https://phabricator.wikimedia.org/T110976#1594914>
Basically it is not fixed because people do not agree on what the desired
behavior should be according to the HTTP specification. A possible fix is
described at [How do I preserve uri fragment in safari upon
redirect?](http://stackoverflow.com/questions/17982594/how-do-i-preserve-uri-
fragment-in-safari-upon-redirect) but I have not been able to test this.
# EDIT
I have managed to find a (not-so-elegant) workaround to solve this problem.
Instead of using the urlChanged event from QWebView (which shows nothing of
the 301 redirects done by the OAuth server), I have used
QNetworkAccessManager's finished() event. This gets fired after _any_ HTTP
request is finished (so also for all the linked content of a page, such as
images and stylesheets, so you have to do a lot of filtering).
So now my code looks like this:
class LoginWindow(QtWebKit.QWebView):
""" A Login window for the OSF """
# Login event is emitted after successfull login
logged_in = QtCore.pyqtSignal(['QString'])
def __init__(self):
super(LoginWindow, self).__init__()
# Create Network Access Manager to listen to all outgoing
# HTTP requests. Necessary to work around the WebKit 'bug' which
# causes it drop url fragments, and thus the access_token that the
# OSF Oauth system returns
self.nam = self.page().networkAccessManager()
# Connect event that is fired if a HTTP request is completed.
self.nam.finished.connect(self.checkResponse)
def checkResponse(self,reply):
request = reply.request()
# Get the HTTP statuscode for this response
statuscode = reply.attribute(request.HttpStatusCodeAttribute)
# The accesstoken is given with a 302 statuscode to redirect
if statuscode == 302:
redirectUrl = reply.attribute(request.RedirectionTargetAttribute)
if redirectUrl.hasFragment():
r_url = redirectUrl.toString()
if osf.redirect_uri in r_url:
print("Token URL: {}".format(r_url))
self.token = osf.parse_token_from_url(r_url)
if self.token:
self.logged_in.emit("login")
self.close()
|
Fonts aliasing in Pillow
Question: I'm using Pillow 3.1.1 image library and Python 3.5.1.
I'm trying to use Pillow for drawing fonts on images. But the results look
absolutely ugly and unacceptable.
1st example: it looks like the font is not antialiased. But the docs contain
absolutely nothing on antialiasing fonts. I've tried to make some changes
(e.g. set `fonttype`), but the text still looks terrible. [1st
example](http://i.stack.imgur.com/e8vC2.png)
And a second example: sometimes characters just overlay each other, and I don't
have any idea how it could be fixed. [2nd
example](http://i.stack.imgur.com/f7cuq.png)
I'm so frustrated with my experience. Is it possible to fix the aliasing
problem in Pillow, or should I look to the ImageMagick side?
The aliasing problem is my main concern; I cannot use fonts rendered this way.
Thanks for your attention!
Code example:
from PIL import Image, ImageDraw, ImageFont
DEFAULT_OFFSET = (100, 160, )
def draw_text(image, text):
base = Image.open(image).convert('RGBA')
txt_image = Image.new('RGBA', base.size, (255, 255, 255, 0))
ttf = get_font()
fnt = ImageFont.truetype(ttf, 40)
d = ImageDraw.Draw(txt_image)
# just return some string in format 'blah-blah\nblah-blah'
multiline = generate_multiline(txt_image, text)
d.multiline_text(DEFAULT_OFFSET, multiline, align='left', font=fnt, fill=(40, 40, 40, 200))
out = Image.alpha_composite(base, txt_image)
out.show()
Answer: According to [martineau](http://stackoverflow.com/users/355230/martineau)'s
comment, Pillow doesn't support font anti-aliasing.
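A common workaround is supersampling: draw the text onto a much larger image
and downscale it, letting the resampling filter smooth the glyph edges. A
minimal sketch (the font path is an assumption; adjust it for your system):
from PIL import Image, ImageDraw, ImageFont

factor = 4  # render at 4x resolution, then shrink back down
size = (400, 100)
big = Image.new('RGBA', (size[0] * factor, size[1] * factor), (255, 255, 255, 255))
draw = ImageDraw.Draw(big)
fnt = ImageFont.truetype('/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf', 40 * factor)
draw.text((40, 40), "smoother text", font=fnt, fill=(40, 40, 40, 255))
smooth = big.resize(size, Image.ANTIALIAS)  # downscaling smooths the edges
smooth.show()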
|
To generate basic bar graphs using python
Question: I am a newbie to Python. Please help me with this error. My idea is to
generate a bar chart between cancer types and females, with cancers on the
x-axis and females on the y-axis. In my dataset, the list of cancers is in the
first column and females in the second column. My code goes here:
from pylab import *
import csv
import sys
import matplotlib
import matplotlib.pyplot as plt
cancers = []
females = []
readFile = open('DeathEst.csv', 'r').read()
eachLine = readFile.split('\n')
for line in eachLine:
split = line.split(';')
cancers.append(split[0])
females.append(split[0])
pos = arange(len(cancers))+.5
barh(pos, females, align='center', color='#b8ff4c')
yticks(pos,name)
plt.show()
Error:
Traceback (most recent call last):
File "death.py", line 20, in <module>
barh(pos, females, align='center', color='#b8ff4c')
File "C:\Users\.....\Desktop\Python34\lib\sitepackages\matplotlib\pyplot.py", line 2533, in barh
ret = ax.barh(bottom, width, height=height, left=left, **kwargs)
File "C:\Users\......\Desktop\Python34\lib\sitepackages\matplotlib\axes.py", line 5180, in barh
bottom=bottom, orientation='horizontal', **kwargs)
File "C:\Users\......\Desktop\Python34\lib\sitepackages\matplotlib\axes.py", line 5047, in bar
if w < 0:
TypeError: unorderable types: str() < int()
Answer: Your lists named cancers and females are both holding string elements, and not
integers. Matplotlib doesn't know what to do with that.
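A minimal sketch of the fix, assuming the female counts are whole numbers in
the second semicolon-separated column and the first line is a header. Note
that the original loop also appends split[0] to both lists, so females never
actually held the counts:
cancers = []
females = []
eachLine = open('DeathEst.csv', 'r').read().split('\n')
for line in eachLine[1:]:          # skip the header row
    split = line.split(';')
    if len(split) < 2:             # skip blank or malformed lines
        continue
    cancers.append(split[0])
    females.append(int(split[1]))  # cast the count so barh can plot it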
|
Handling HTML characters in web scraped html using Python BS4
Question: This might be a duplicate question, but unable to find any answers searching
through stackoverflow..
Scraped some html files from the web, but they contain special characters like
'>', '<' in the text, and BeautifulSoup is unable to handle it, making
BeautifulSoup.find erratic. Is there a way to escape the text before using
BeautifulSoup to parse the html?
EDIT: Thought this is generic enough, but adding html with issue:
<HTML>
<HEAD><TITLE>Title</TITLE>
</HEAD><BODY>
<p>
<h2>Heading 2</h2>
<hr align=left width=75%>
<dl><h3>Heading 3</h3>
<p>
<dd><a href="./ref.pl?R1"><b>R1</b></a>
<i><b>PP</b></i>:
<a href="./refs.pl?R2">R2</a>
<dl>
<dd>
Text1 <a href="./refs.pl?T1">T1</a>
; Text2 <a href="./refs.pl?T1">T1</a>
<i>value<=500</i> <a href="./refs.pl?+T2">T2</a>
; Text3 <a href="./refs.pl?T3">T3</a>
</dl>
Sat Feb 14 23:36:59 EST 2016
<p></body></html>
Trying to collect all text values, calling dd = soup.find('dd') and parsing
dd.contents misses out value<=500 and Text3..
Answer: Answering my own question, but is there an easier way to handle it directly
with BeautifulSoup?
from tidylib import tidy_document
doc, errors = tidy_document(htmlfile.read())
soup = BeautifulSoup(doc, "lxml")
Now the HTML document has `<i>value<=500</i>`, and this helps
`BeautifulSoup.find` from behaving erratically.
Calling `dd = soup.find('dd')` and parsing `dd.contents` now provides
`value<=500` and `Text3`.
|
python exception handler to recommend package
Question: I would like to have python recommend a python package in the event of an
import error.
I tried:
except ImportError as e:
sys.exit("'Error: Try sudo pip install %s'" % e)
but this is the output:
'Error: Try sudo pip install No module named 'Crypto''
I would like the output to be:
'Error: Try sudo pip install Crypto'
How can I do that?
Update: it's pretty hacky, but here is something that seems to work (note the
str() call: the exception object itself has no replace method):
except ImportError as e:
    msg = str(e).replace("No module named '", "")
    msg = msg.replace("'", "")
    sys.exit("'Error: Try sudo pip install %s'" % msg)
If someone has a better solution, I'd love to hear about it.
Answer: Use `ImportError.name`
>>> try:
... import fakecrypto
... except ImportError as e:
... ex = e
...
>>> dir(ex)
['__cause__', '__class__', '__context__', '__delattr__', '__dict__',
'__dir__', '__doc__', '__eq__', '__format__', '__ge__',
'__getattribute__', '__gt__', '__hash__', '__init__', '__le__',
'__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__',
'__subclasshook__', '__suppress_context__', '__traceback__', 'args',
'msg', 'name', 'path', 'with_traceback']
>>> ex.name
'fakecrypto'
|
Python-yad Progress Bar not working in python 3.4 but works in python 2.7
Question: I have created a python [interface](https://gitlab.com/dvenkatsagar/python-
yad/blob/2c1fb0379d425fdc5474f1ab4f2fe38219a1ff8d/python-yad/yad.py) for the
[yad](https://sourceforge.net/projects/yad-dialog/) program. What the code
basically does is that it generates a string, which gets passed to the `yad`
program using Python's `subprocess` and/or `pexpect` module, and executes it.
Now, I'm facing a weird bug where I am trying to display a simple
[multi]progress bar and update the bar with a certain value like this:
import yad, time
yad = yad.YAD()
x = yad.Progress(autoclose=True) # yad.MultiProgress(autoclose=True)
for i in range(0,105,5):
print(i)
x(i,msg=str(i)+"% done")
time.sleep(0.5)
The problem is that, in Python 2.7, it works fine (updates the bar, and closes
afterwards), but when it comes to Python 3.4, it does not work (shows the bar,
but does not update, even though the `for` loop prints the numbers).
I'm trying to figure out what the problem is with my interface. The functions
are written in such a way that they should update the bar, but for some reason
it's not working in Python 3.4.
Kindly help me with this problem. I am not able to figure out where the bug
is.
Edit: `x` is a function that is returned as output when we call
`yad.Progress()`. Using `x`, we can write some standard input to yad. The
shell equivalent of the code would be something like this:
yad --progress --auto-close
> 5
> # 5% done
...
Answer: Reposting as an answer:
Inside the wrapper module, call `p.stdin.flush()` after writing to the
subprocess' stdin.
In Python 2, the default is to create Popen pipes without any buffering (the
`bufsize` argument to `subprocess.Popen` defaults to 0). That means that any
data you write is sent to the subprocess immediately. In Python 3, buffering
is the default (`bufsize` defaults to -1, which means the default buffer
size). So, for performance reasons, data is stored in memory until either the
buffer fills up or you call flush.
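A minimal sketch of what the fixed wrapper does internally (the Popen handle
name is an assumption):
import subprocess
import time

p = subprocess.Popen(['yad', '--progress', '--auto-close'],
                     stdin=subprocess.PIPE)
for i in range(0, 105, 5):
    p.stdin.write(('%d\n' % i).encode())
    p.stdin.write(('# %d%% done\n' % i).encode())
    p.stdin.flush()  # push the bytes through the pipe right away on Python 3
    time.sleep(0.5)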
|
Comments in python don’t work after sublime upgrade to 3103
Question: I just upgraded to Sublime 3103, and now the comment shortcut `command+/` does
not work. This is weird because it fails only in Python. For all other
programming languages, it works just fine.
I tried setting up a custom keybinding for comments, and again the same
problem. Works everywhere else, except in python.
What could be the problem?
Answer: I also cannot reproduce this, but here is a way to fix it. Go to
**`Preferences → Browse Packages…`** to open the `Packages` folder in your
operating system's file manager. Create a new folder named `Python`, and
inside that new folder create an empty file named `Comments.tmPreferences`
(capitalization is important). Next, open the new file in Sublime with XML
syntax highlighting and add the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>name</key>
<string>Comments</string>
<key>scope</key>
<string>source.python</string>
<key>settings</key>
<dict>
<key>shellVariables</key>
<array>
<dict>
<key>name</key>
<string>TM_COMMENT_START</string>
<key>value</key>
<string># </string>
</dict>
</array>
</dict>
<key>uuid</key>
<string>6550FEAD-D547-44E4-84F7-7D421D6078B0</string>
</dict>
</plist>
Save the file, and it should take effect immediately.
* * *
This works by explicitly telling Sublime to use a certain pattern for
comments. The `.tmPreferences` extension came from
[TextMate](http://macromates.com), a pretty good editor for OS X that Jon
Skinner used as one of his inspirations (along with `vi`) when writing
Sublime. (BTW, if you're on OS X, check out TextMate 2 - it's open-source, and
has a lot of neat features. A much smaller plugin community, though...)
As you can see, the file is XML-based, and defines a `shellVariable` named
`TM_COMMENT_START` (again, the `TM` is from TextMate) which is used internally
to demarcate a single-line comment. Depending on the `scope` value, a
`Comments.tmPreferences` file can be used for any language you wish. If your
programming language also has a block comment construct, as well as a single-
line comment, you can define that with `TM_COMMENT_START_2` and
`TM_COMMENT_END_2` like so:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>name</key>
<string>Comments</string>
<key>scope</key>
<string>source.python</string>
<key>settings</key>
<dict>
<key>shellVariables</key>
<array>
<dict>
<key>name</key>
<string>TM_COMMENT_START</string>
<key>value</key>
<string># </string>
</dict>
<dict>
<key>name</key>
<string>TM_COMMENT_START_2</string>
<key>value</key>
<string>"""</string>
</dict>
<dict>
<key>name</key>
<string>TM_COMMENT_END_2</string>
<key>value</key>
<string>"""</string>
</dict>
</array>
</dict>
<key>uuid</key>
<string>6550FEAD-D547-44E4-84F7-7D421D6078B0</string>
</dict>
</plist>
Here, we're still in Python, but we're using triple quotes to define a block
comment or docstring. Simply highlight the region you want to surround with
triple quotes and hit `⌘``Shift``/` (`Ctrl``Shift``/` on Windows/Linux).
|
Implied volatility calculation in Python
Question: With the comments from the answer, I rewrote the code below
(math.log1p(x) -> math.log(x)), which now should work and give a good
approximation of the volatility.
I am trying to create a short code to calculate the implied volatility of a
European Call option. I wrote the code below:
from scipy.stats import norm
import math
norm.cdf(1.96)
#c_p - Call(+1) or Put(-1) option
#P - Price of option
#S - Strike price
#E - Exercise price
#T - Time to expiration
#r - Risk-free rate
#C = SN(d_1) - Ee^{-rT}N(D_2)
def implied_volatility(Price,Stock,Exercise,Time,Rf):
P = float(Price)
S = float(Stock)
E = float(Exercise)
T = float(Time)
r = float(Rf)
sigma = 0.01
print (P, S, E, T, r)
while sigma < 1:
d_1 = float(float((math.log(S/E)+(r+(sigma**2)/2)*T))/float((sigma*(math.sqrt(T)))))
d_2 = float(float((math.log(S/E)+(r-(sigma**2)/2)*T))/float((sigma*(math.sqrt(T)))))
P_implied = float(S*norm.cdf(d_1) - E*math.exp(-r*T)*norm.cdf(d_2))
if P-(P_implied) < 0.001:
return sigma
sigma +=0.001
return "could not find the right volatility"
print implied_volatility(15,100,100,1,0.05)
This yields a volatility of 0.595, which should be somewhere around 0.3203.
That is a huge difference...
I know this is not a fast method by any means, I just want to demonstrate how
the principle works, but I am not able to calculate a good approximation. For
some reason when I call the function it gives me really bad approximation of
the actual implied volatility which I calculated using a Matlab Program and
the following webpage: [Implied Volatility](http://www.option-
price.com/implied-volatility.php). Could anyone please help me to figure out
where I made the mistake?
Answer: There are two problems I see, none of which are directly python related:
1. You are using `log1p(x)`, which is the natural logarithm of `1+x`, while you actually want `log(x)`, which is the natural logarithm of `x` (cf. [Wikipedia](https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model#Black.E2.80.93Scholes_formula)).
2. An option price of `100` is way too high considering the other parameters. Try to calculate the implied volatility for a price of `10` - which should be about `0.18` both by your program and the calculator you linked.
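As an aside, once the formula uses log() correctly, you can replace the
0.001-step grid search with a proper root finder; a sketch using
scipy.optimize.brentq:
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, E, T, r, sigma):
    d_1 = (math.log(S / E) + (r + sigma ** 2 / 2) * T) / (sigma * math.sqrt(T))
    d_2 = d_1 - sigma * math.sqrt(T)
    return S * norm.cdf(d_1) - E * math.exp(-r * T) * norm.cdf(d_2)

def implied_volatility(P, S, E, T, r):
    # find the sigma at which the model price matches the observed price
    return brentq(lambda sigma: bs_call(S, E, T, r, sigma) - P, 1e-6, 5.0)

print(implied_volatility(10, 100, 100, 1, 0.05))  # roughly 0.18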
|
appengine python remote_api module object has no attribute GoogleCredentials
Question: AttributeError: 'module' object has no attribute 'GoogleCredentials'
I have an appengine app which is running on localhost. I have some tests which
i run and i want to use the remote_api to check the db values. When i try to
access the remote_api by visiting:
'http://127.0.0.1:8080/_ah/remote_api'
i get a:
"This request did not contain a necessary header"
but its working in the browser.
When i now try to call the remote_api from my tests by calling
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:35887','/_ah/remote_api')
i get the error:
Error
Traceback (most recent call last):
File "/home/dan/src/gtup/test/test_users.py", line 38, in test_crud
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:35887','/_ah/remote_api')
File "/home/dan/Programs/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 747, in ConfigureRemoteApiForOAuth
credentials = client.GoogleCredentials.get_application_default()
AttributeError: 'module' object has no attribute 'GoogleCredentials'
I did try to reinstall the whole google cloud but this didn't work.
When i open the client.py
google-cloud-sdk/platform/google_appengine/lib/google-api-python-client/oauth2client/client.py
which is used by remote_api_stub.py, i can see, that there is no
GoogleCredentials class inside of it.
The GoogleCredentials class exists, but inside of other client.py files which
lie at:
google-cloud-sdk/platform/google_appengine/lib/oauth2client/oauth2client/client.py
google-cloud-sdk/platform/gsutil/third_party/oauth2client/oauth2client/client.py
google-cloud-sdk/platform/bq/third_party/oauth2client/client.py
google-cloud-sdk/lib/third_party/oauth2client/client.py
my app.yaml looks like this:
application: myapp
version: 1
runtime: python27
api_version: 1
threadsafe: true
libraries:
- name: webapp2
version: latest
builtins:
- remote_api: on
handlers:
- url: /.*
script: main.app
Is this just a wrong import or a bug inside of appengine? Or am I doing
something wrong using the remote_api inside of my unittests?
Answer: Answering instead of commenting as I cannot post a comment with my reputation
-
Similar things have happened to me when running these types of scripts on a
Mac. Sometimes your PATH variable gets confused as to which files to actually
check for functions, especially when you have gcloud installed alongside the
app engine launcher. If on a Mac, I would suggest editing your
~/.bash_profile file to fix this (or possibly ~/.bashrc, if on Linux). For
example, on my Mac I have the following lines to fix my PATH variable:
export PATH="/usr/local/bin:$PATH"
export PYTHONPATH="/usr/local/google_appengine:$PYTHONPATH
These basically make sure the python / command line will look in
/usr/local/bin (or /usr/local/google_appengine in the case of the PYTHONPATH
line) BEFORE anything in the PATH (or PYTHONPATH).
The PATH variable is where the command line checks for python files when you
type them into the prompt. The PYTHONPATH is where your python files find the
modules to load at runtime.
|
How do I count exoplanets per system in a file with over 10,000 lines in Python?
Question: I am working with astronomical data and I need help summarizing it.
My data contains ~10,000 lines, where each line represents a system.
The input file is tab delimited like this:
exo sys_planet_count
0 1
0 0
3 4
0 1
2 5
0 0
Note that the exo planet count is usually 0 or 1, but NOT always.
**Each line represents a system** and there are two columns, one for the
exo_planets found in that system and one for the total number of planets
found.
I need the data summarized like this by increasing sys_planet_count:
system_planet_count exo system_hits system_misses
5 3500 3000 1000
6 4500 4000 1500
The **number of exo planets must be greater than or equal to system_hits**,
because there could be only one exo planet per system or several; it depends.
system_planet_count is how the table is organized.
For each line (system) that matches a particular system_planet_count, it adds
the number of exos found. If there were exos found, it adds +1 to the
system_hits category because that line found exo planets, a hit. If there were
NO exos found in that line, it adds one to the system_misses category because
no exo planets were found in that system.
NOTE that system_misses and system_hits category is specific to that
system_planet count, i.e. 3000 and 1000 for system_planet_count of 5 but 4000
and 1500 for a system_planet_count of 6
The problem is that the data is NOT ordered in ascending order of
sys_planet_counts.
To summarize the data, I came up with the following code. What should I do to
summarize the data in a quick manner that doesn't take 10 or 15 minutes?
I was thinking about using a dictionary, since each system_planet_count could
act as a key.
with open('data.txt','r') as input:
for line in input:
system_planet_count = 0
exo_count = 0
system_hits = 0
system_misses = 0
foo
output.write(str(system_planet_count) + '\t' + str(exo_count) + '\t' + str(system_hits) + '\t' + str(system_misses) + '\n')
Input example:
exo sys_planet_count
2 1
0 1
1 1
0 5
1 5
0 5
0 5
2 5
0 5
0 4
Output:
system_planet_count exo system_hits system_misses
1 3 2 1
4 0 0 1
5 3 2 4
Answer: This should do the summary you want:
from collections import defaultdict
def summarize(file_name):
exo, hit, miss = 0, 1, 2 # indexes of according counts
d = defaultdict(lambda: [0, 0, 0]) # keep all counts for each type of system
with open(file_name, 'r') as input:
for line in input:
exos, planets = map(int, line.strip().split()) # split, cast to int
if exos:
d[planets][exo] += exos
d[planets][hit] += 1
else:
d[planets][miss] += 1
for key in sorted(d.keys()):
print('{} {} {} {}'.format(key, d[key][exo], d[key][hit], d[key][miss]))
summarize('data.txt')
|
Execute python script from within HTML
Question: On my Raspberry Pi I have apache2 running. I have a very basic image displayed
when you go to the site. What I want to be able to do is, when the image is
clicked, I want the following script to run:
import subprocess
subprocess.call('./milight_sources/milight 0 ON', shell=True)
Now, I'm pretty sure Django isn't the answer, and neither is Flask. Can you
suggest the best way to do this? Perhaps I don't even need to use a framework
at all?
I'm pulling my hair out over this and am determined to get it working. Any
suggestions will be great.
Many thanks!
Answer: Sounds like a small script that utilizes cgi should do the job
<https://docs.python.org/2/library/cgi.html>
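A minimal sketch of such a CGI script (the location and names are assumptions;
mod_cgi must be enabled, the script must be executable, and you will likely
need an absolute path to the milight binary):
#!/usr/bin/env python
# save e.g. as /usr/lib/cgi-bin/milight.py and chmod +x it
import subprocess

print("Content-Type: text/plain")
print("")  # a blank line ends the HTTP headers
subprocess.call('/home/pi/milight_sources/milight 0 ON', shell=True)
print("light switched on")
The image click then just needs to point at the script, for example by
wrapping the img tag in <a href="/cgi-bin/milight.py">.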
|
check if a json file already has the data i am overriding in python
Question: I am currently making a python script that automates a task: it first parses
the data from a website, then sends a message from that data using twilio.
But what I want is to first compare the parsed data with the already existing
JSON file that I parsed previously, and if it has the same date or message,
then it should not send the message.
I have no idea how to do this. I have tried to load the JSON file, but I
couldn't get it to work properly.
Here is my json file that I want to check:
{
"date": "11/02/2016 11:42:57",
"message": "Dear students,\r\n\r\nAs informed in the class, this is to remind you Today special class from 6 to 6.50 pm at same venue SJT 126.\r\n\r\nregards\r\n\r\nR. Raghavan\r\nSITE",
"name": "RAGHAVAN R (SITE)",
"subject": "ITE308 - Distributed Systems - TH"
}
here is my code:
infoTable = tables[0].findAll('tr')
name = infoTable[2].findAll('td')[0].text
if (len(name) is 0):
return None
subject = infoTable[2].findAll('td')[1].text
msg = infoTable[2].findAll('td')[2].text
sent = infoTable[2].findAll('td')[3].text
textmyself.textmyself(msg)
# Parsing the open hours of the faculties
outputPath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'output')
if os.path.isdir(outputPath) is False:
os.makedirs(outputPath)
result = {'name': name, 'subject': subject, 'message': msg, 'date': sent}
with open('output/' + str(facultyID) + '.json', 'w') as outfile:
json.dump(result, outfile, indent=4)
return result
**Update:** Here is what I tried and found working, but the JSON file must
already exist when the script is run for the first time. Is my code
correct?
with open('output/WS.json') as data_file:
data = json.load(data_file)
if data["date"] == sent:
outputpath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'output')
if os.path.isdir(outputpath) is False:
os.makedirs(outputpath)
result = {'name': name, 'subject': subject, 'message': msg, 'date': sent}
with open('output/' + str(facultyID) + '.json', 'w') as outfile:
json.dump(result, outfile, indent=4)
return result
else:
outputpath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'output')
if os.path.isdir(outputpath) is False:
os.makedirs(outputpath)
result = {'name': name, 'subject': subject, 'message': msg, 'date': sent}
with open('output/' + str(facultyID) + '.json', 'w') as outfile:
json.dump(result, outfile, indent=4)
textmyself.textmyself(msg)
return result
Answer: You can load the old JSON file right away by using:
import json
with open('old_data.json') as f:
old_message = json.load(f)
Then you can compare
    if old_message['date'] != sent:
        # send your mail etc.
Then bundle your data in another JSON and write the file back with the new
message:
new_message = {
"date" : sent,
... }
with open('old_data.json', 'w') as f:
json.dump(new_message, f)
Putting together the pieces with what you already have, and also adding some
error handling, should solve your issue.
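For completeness, a sketch that also handles the very first run, when the saved
file does not exist yet (the file name is an example; `name`, `subject`, `msg`,
`sent` and `textmyself` are from the question's code):

    import json
    import os

    def should_send(sent, path='old_data.json'):
        # first run: no previous file on disk, so always send
        if not os.path.exists(path):
            return True
        with open(path) as f:
            old_message = json.load(f)
        return old_message['date'] != sent

    if should_send(sent):
        textmyself.textmyself(msg)

    # always persist the latest message for the next run
    result = {'name': name, 'subject': subject, 'message': msg, 'date': sent}
    with open('old_data.json', 'w') as f:
        json.dump(result, f, indent=4)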
|
Getting insecure pickle string with ggplot
Question: I'm trying to use ggplot for Python working inside iPython Notebook, but when
running `from ggplot import *` the following error appears:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-22-2199c088d178> in <module>()
2 import numpy as np
3 import dateutil
----> 4 from ggplot import *
~/ipython-venv/lib/python2.7/site-packages/ggplot/__init__.py in <module>()
19 __version__ = '0.6.8'
20
---> 21 from .qplot import qplot
22 from .ggplot import ggplot
23 from .components import aes
~/ipython-venv/lib/python2.7/site-packages/ggplot/qplot.py in <module>()
3
4 import ggplot
----> 5 from .components import aes
6 from .geoms import geom_point, geom_bar, geom_boxplot, geom_histogram, geom_line
7 from .geoms.chart_components import xlab as xlabel
~/ipython-venv/lib/python2.7/site-packages/ggplot/components/__init__.py in <module>()
2 unicode_literals)
3 from .aes import aes
----> 4 from . import colors, shapes, size, linetypes, alphas
5
6
~/ipython-venv/lib/python2.7/site-packages/ggplot/components/colors.py in <module>()
4 import numpy as np
5 from matplotlib.colors import rgb2hex
----> 6 from ..utils.color import ColorHCL
7 from .legend import get_labels
8 from copy import deepcopy
~/ipython-venv/lib/python2.7/site-packages/ggplot/utils/__init__.py in <module>()
2 unicode_literals)
3
----> 4 from .ggutils import ggsave, add_ggplotrc_params
5 from .date_breaks import date_breaks
6 from .date_format import date_format
~/ipython-venv/lib/python2.7/site-packages/ggplot/utils/ggutils.py in <module>()
4 unicode_literals)
5
----> 6 import matplotlib.pyplot as plt
7 import json
8 import os
~/ipython-venv/lib/python2.7/site-packages/matplotlib/pyplot.py in <module>()
27 from cycler import cycler
28 import matplotlib
---> 29 import matplotlib.colorbar
30 from matplotlib import style
31 from matplotlib import _pylab_helpers, interactive
~/ipython-venv/lib/python2.7/site-packages/matplotlib/colorbar.py in <module>()
32 import matplotlib.artist as martist
33 import matplotlib.cbook as cbook
---> 34 import matplotlib.collections as collections
35 import matplotlib.colors as colors
36 import matplotlib.contour as contour
~/ipython-venv/lib/python2.7/site-packages/matplotlib/collections.py in <module>()
25 import matplotlib.artist as artist
26 from matplotlib.artist import allow_rasterization
---> 27 import matplotlib.backend_bases as backend_bases
28 import matplotlib.path as mpath
29 from matplotlib import _path
~/ipython-venv/lib/python2.7/site-packages/matplotlib/backend_bases.py in <module>()
60
61 import matplotlib.tight_bbox as tight_bbox
---> 62 import matplotlib.textpath as textpath
63 from matplotlib.path import Path
64 from matplotlib.cbook import mplDeprecation, warn_deprecated
~/ipython-venv/lib/python2.7/site-packages/matplotlib/textpath.py in <module>()
13 from matplotlib.path import Path
14 from matplotlib import rcParams
---> 15 import matplotlib.font_manager as font_manager
16 from matplotlib.ft2font import FT2Font, KERNING_DEFAULT, LOAD_NO_HINTING
17 from matplotlib.ft2font import LOAD_TARGET_LIGHT
~/ipython-venv/lib/python2.7/site-packages/matplotlib/font_manager.py in <module>()
1419 verbose.report("Using fontManager instance from %s" % _fmcache)
1420 except:
-> 1421 _rebuild()
1422 else:
1423 _rebuild()
~/ipython-venv/lib/python2.7/site-packages/matplotlib/font_manager.py in _rebuild()
1404 def _rebuild():
1405 global fontManager
-> 1406 fontManager = FontManager()
1407 if _fmcache:
1408 pickle_dump(fontManager, _fmcache)
~/ipython-venv/lib/python2.7/site-packages/matplotlib/font_manager.py in __init__(self, size, weight)
1042 # Load TrueType fonts and create font dictionary.
1043
-> 1044 self.ttffiles = findSystemFonts(paths) + findSystemFonts()
1045 self.defaultFamily = {
1046 'ttf': 'Bitstream Vera Sans',
~/ipython-venv/lib/python2.7/site-packages/matplotlib/font_manager.py in findSystemFonts(fontpaths, fontext)
322 fontfiles[f] = 1
323
--> 324 for f in get_fontconfig_fonts(fontext):
325 fontfiles[f] = 1
326
~/ipython-venv/lib/python2.7/site-packages/matplotlib/font_manager.py in get_fontconfig_fonts(fontext)
274 pipe = subprocess.Popen(['fc-list', '--format=%{file}\\n'],
275 stdout=subprocess.PIPE,
--> 276 stderr=subprocess.PIPE)
277 output = pipe.communicate()[0]
278 except (OSError, IOError):
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
708 p2cread, p2cwrite,
709 c2pread, c2pwrite,
--> 710 errread, errwrite)
711 except Exception:
712 # Preserve original exception in case os.close raises.
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
1332 if e.errno != errno.ECHILD:
1333 raise
-> 1334 child_exception = pickle.loads(data)
1335 raise child_exception
1336
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.pyc in loads(str)
1386 def loads(str):
1387 file = StringIO(str)
-> 1388 return Unpickler(file).load()
1389
1390 # Doctest
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.pyc in load(self)
862 while 1:
863 key = read(1)
--> 864 dispatch[key](self)
865 except _Stop, stopinst:
866 return stopinst.value
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.pyc in load_string(self)
970 if rep.startswith(q):
971 if len(rep) < 2 or not rep.endswith(q):
--> 972 raise ValueError, "insecure string pickle"
973 rep = rep[len(q):-len(q)]
974 break
ValueError: insecure string pickle
The environment running is on OS-X El Capitan, Python 2.x is installed through
HomeBrew, and the following libraries are installed:
> (ipython-venv)➜ ~ pip list appnope (0.1.0) backports-abc (0.4)
> backports.ssl-match-hostname (3.5.0.1) beautifulsoup4 (4.4.1) brewer2mpl
> (1.4.1) certifi (2015.11.20.1) cycler (0.9.0) decorator (4.0.6) functools32
> (3.2.3.post2) ggplot (0.6.8) gnureadline (6.3.3) ipykernel (4.2.2) ipython
> (4.0.3) ipython-genutils (0.1.0) Jinja2 (2.8) jsonschema (2.5.1) jupyter-
> client (4.1.1) jupyter-core (4.0.6) MarkupSafe (0.23) matplotlib (1.5.1)
> mechanize (0.2.5) mistune (0.7.1) nbconvert (4.1.0) nbformat (4.0.1)
> notebook (4.1.0) numpy (1.10.4) pandas (0.17.1) path.py (8.1.2) patsy
> (0.4.1) pexpect (4.0.1) pickleshare (0.6) pip (8.0.2) ptyprocess (0.5)
> Pygments (2.1) pyparsing (2.1.0) python-dateutil (2.4.2) pytz (2015.7) pyzmq
> (15.2.0) scipy (0.17.0) selenium (2.50.0) setuptools (15.0) simplegeneric
> (0.8.1) singledispatch (3.4.0.3) six (1.10.0) statsmodels (0.6.1) termcolor
> (1.1.0) terminado (0.6) tornado (4.3) traitlets (4.1.0)
Answer: This has been discussed here:
[github.com/matplotlib/matplotlib/pull/5640](http://github.com/matplotlib/matplotlib/pull/5640).
It may be a permissions issue; `matplotlib` shouldn't be rebuilding that cache
on every import.
The suggestion there is to delete the contents of `~/.cache/matplotlib` and
try again.
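If you prefer to do the cleanup from Python rather than the shell, a sketch
(assuming the default cache location):

    import os
    import shutil

    cache_dir = os.path.expanduser('~/.cache/matplotlib')
    if os.path.isdir(cache_dir):
        # matplotlib rebuilds the font cache on the next import
        shutil.rmtree(cache_dir)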
|
Ford-Fulkerson Algorithm Max flow algorithm - What is wrong with this Java implementation?
Question: I am trying to implement a java version of Ford-Fulkerson. I used the python
implementation from
[wikipedia](https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm#Python_implementation)
as a basis:
class Edge(object):
def __init__(self, u, v, w):
self.source = u
self.sink = v
self.capacity = w
def __repr__(self):
return "%s->%s:%s" % (self.source, self.sink, self.capacity)
class FlowNetwork(object):
def __init__(self):
self.adj = {}
self.flow = {}
def add_vertex(self, vertex):
self.adj[vertex] = []
def get_edges(self, v):
return self.adj[v]
def add_edge(self, u, v, w=0):
if u == v:
raise ValueError("u == v")
edge = Edge(u,v,w)
redge = Edge(v,u,0)
edge.redge = redge
redge.redge = edge
self.adj[u].append(edge)
self.adj[v].append(redge)
self.flow[edge] = 0
self.flow[redge] = 0
def find_path(self, source, sink, path):
if source == sink:
return path
for edge in self.get_edges(source):
residual = edge.capacity - self.flow[edge]
if residual > 0 and edge not in path:
result = self.find_path( edge.sink, sink, path + [edge])
if result != None:
return result
def max_flow(self, source, sink):
path = self.find_path(source, sink, [])
while path != None:
residuals = [edge.capacity - self.flow[edge] for edge in path]
flow = min(residuals)
for edge in path:
self.flow[edge] += flow
self.flow[edge.redge] -= flow
path = self.find_path(source, sink, [])
return sum(self.flow[edge] for edge in self.get_edges(source))
This is my java implementation:
import org.junit.Test;
import java.util.*;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
public class MaxFlowTest {
@Test
public void maxFlowTest() {
FlowNetwork unit = new FlowNetwork(Arrays.asList(0, 1, 2, 3, 4, 5));
unit.addEdge(0, 1, 16);
unit.addEdge(0, 2, 13);
unit.addEdge(1, 3, 12);
unit.addEdge(1, 2, 10);
unit.addEdge(2, 1, 4);
unit.addEdge(2, 4, 14);
unit.addEdge(3, 2, 9);
unit.addEdge(4, 3, 7);
unit.addEdge(3, 5, 20);
unit.addEdge(4, 5, 4);
assertThat(unit.maxFlow(0, 5), is(23)); //algorithm incorrectly returning 7
}
}
class FlowNetwork {
Map<Integer, List<Edge>> edges = new HashMap<>();
Map<Edge, Integer> flow = new HashMap<>();
public FlowNetwork(List<Integer> vertices) {
vertices.forEach(v -> this.edges.put(v, new ArrayList<>()));
}
public Integer maxFlow(Integer source, Integer sink) {
List<Edge> path = null;
while (! (path=findAugmentedPath(source,sink,new ArrayList<>())).isEmpty()) {
int flow = path.stream().mapToInt(e -> e.capacity - this.flow.get(e)).min().getAsInt();
path.forEach(e -> {
this.flow.put(e, this.flow.get(e) + flow);
this.flow.put(e.reverseEdge, this.flow.get(e.reverseEdge) - flow);
});
}
return edges.get(source).stream().mapToInt(e -> flow.get(e)).sum();
}
List<Edge> findAugmentedPath(Integer source, Integer sink, List<Edge> pathSoFar) {
if (source == sink) {
return pathSoFar;
}
for (Edge neighbourEdge : this.edges.get(source)) {
int residual = neighbourEdge.capacity - flow.get(neighbourEdge);
if (residual > 0 && !pathSoFar.contains(neighbourEdge)) {
pathSoFar.add(neighbourEdge);
List<Edge> path = findAugmentedPath(neighbourEdge.b, sink, pathSoFar);
if (!path.isEmpty()) {
return path;
}
}
}
return Collections.emptyList();
}
public void addEdge(int source, int sink, int capacity) {
Edge edge = new Edge(source, sink, capacity);
Edge reverseEdge = new Edge(sink, source, 0);
edge.reverseEdge = reverseEdge;
reverseEdge.reverseEdge = edge;
this.edges.get(source).add(edge);
this.edges.get(sink).add(reverseEdge);
flow.put(edge, 0);
flow.put(reverseEdge, 0);
}
}
class Edge {
Integer capacity;
Edge reverseEdge;
int b;
int a;
public Edge(Integer a, Integer b, Integer capacity) {
this.a = a;
this.b = b;
this.capacity = capacity;
}
}
The problem is the java algorithm is outputting 7, instead the correct 23 for
the example given in the test case. There are no errors when the code runs.
When debugging the algorithm I can't see any misbehavior. I am not sure how it
is going wrong.
My question is, can anyone help me understand how and why it is going wrong?
If it helps a visual representation of the example graph is available in this
pdf: <http://web.stanford.edu/class/cs97si/08-network-flow-problems.pdf>
Answer: I found the problem...I wasn't creating a copy of pathSoFar when calling
findAugmentedPath, so edges explored along dead-end branches stayed in the
shared list and wrongly blocked later augmenting paths. Whoops!
List<Edge> findAugmentedPath(Integer source, Integer sink, List<Edge> pathSoFar) {
if (source == sink) {
return pathSoFar;
}
for (Edge neighbourEdge : this.edges.get(source)) {
int residual = neighbourEdge.capacity - flow.get(neighbourEdge);
if (residual > 0 && !pathSoFar.contains(neighbourEdge)) {
List<Edge> pathSoFarCopy = new ArrayList<>(pathSoFar);
pathSoFarCopy.add(neighbourEdge);
List<Edge> path = findAugmentedPath(neighbourEdge.b, sink, pathSoFarCopy);
if (!path.isEmpty()) {
return path;
}
}
}
return Collections.emptyList();
}
|
Building a dictionary out of components in a file
Question: I have a file, where in each line, it contains a readname; a '+' or '-'; a
position marked by a number.
I went ahead to first open the file and have in Python script:
#!/usr/bin/env python
import sys
file=open('filepath')
dictionary={}
for line in file:
reads=line.split()
read_name=reads[0]
methylation_state=reads[1] #this is a plus or minus
position=int(reads[2])
I am having a hard time building a dictionary, where I would have
{keys:values} as {methylation_state:position}.
If someone can please help me, I would greatly appreciate it. I hope this was
clear enough.
**SAMPLES**
input1.txt
SRR1035452.21010_CRIRUN_726:7:1101:4566:6721_length=36 + 59399861
SRR1035452.21010_CRIRUN_726:7:1101:4566:6721_length=36 + 59399728
SRR1035452.21010_CRIRUN_726:7:1101:4566:6721_length=36 + 59399735
SRR1035452.21010_CRIRUN_726:7:1101:4566:6721_length=36 + 59399752
SRR1035452.21044_CRIRUN_726:7:1101:5464:6620_length=36 + 31107092
input2.txt
SRR1035454.47_CRIRUN_726:7:1101:2618:2094_length=36 + 18922145
SRR1035454.174_CRIRUN_726:7:1101:6245:2159_length=36 + 51460469
SRR1035454.174_CRIRUN_726:7:1101:6245:2159_length=36 + 51460488
SRR1035454.174_CRIRUN_726:7:1101:6245:2159_length=36 + 51460631
Answer: It sounds like you just need simple sets of positions. Once you know all of
the positions in each file you can perform many types of operations.
def positions(filename):
# split on + and the second element is what we want
return set(line.split('+')[1].strip() for line in open(filename)
if '+' in line)
# get sets from both files
f1 = positions('f1.txt')
f2 = positions('f2.txt')
# with sets, subtraction shows you what is in one not the other
print("in 1 not 2", f1 - f2)
print("in 2 not 1", f2 - f1)
`positions` was implemented with a compact generator expression, a close
relative of python's "list comprehensions". You could break it into parts to see
the individual steps with the following, but the first implementation is clear
once you get used to python.
def positions(filename):
# open the file for reading
with open(filename) as fp:
# set will hold positions
pos = set()
# read the file line by line
for line in fp:
# we only care about lines with pluses
if '+' in line:
                    # split into two parts around the plus sign
                    parts = line.split('+')
                    # position is the second part but we need to get rid of
                    # extra spaces and newline
                    position = parts[1].strip()
# add to set. if position is already in set, you don't get
# a second one, this one is dropped
pos.add(position)
return pos
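If you do still want the dictionary from the question, note that using the
+/- state as the key means every '+' line would overwrite the previous one, so
collect the positions into a list per state instead (a sketch, using the
question's input1.txt):

    from collections import defaultdict

    state_positions = defaultdict(list)
    with open('input1.txt') as fp:
        for line in fp:
            parts = line.split()
            if len(parts) == 3:
                read_name, state, position = parts
                state_positions[state].append(int(position))

    # e.g. state_positions['+'] -> [59399861, 59399728, ...]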
|
How to let user select which arg to print
Question: As a way to learn python, I am building Yahtzee.py.
After the first roll, I would like the user to decide what to keep or what to
reroll (up to 3 times).
So as to avoid writing a code for every scenario how can I allow user to
select which dice to reroll. ie( keep die 1 or keep dice 2, 3,5)
Here is the code I have so far:
import random
rollCount=1
roll1 = random.randint(1,6)
roll2 = random.randint(1,6)
roll3 = random.randint(1,6)
roll4 = random.randint(1,6)
roll5 = random.randint(1,6)
def rollAll():
roll1
roll2
roll3
roll4
roll5
def printAll():
print("roll 1:",roll1,"\nroll 2:",roll2,"\nroll 3:",roll3,"\nroll 4:",roll4,"\nroll 5:",roll5)
def printRoll():
print("press any key to roll dice")
input()
str(printAll())
print("Would you like to roll again?\nroll all, roll 1, roll 2 , roll 3, roll 4 , roll 5")
rollAgain = input()
if rollAgain== "roll all":
rollCount=2
rollAll()
str(printAll())
Answer: Welcome to Python! :)
As your post indicates that this is a learning exercise, I will offer advice
for objectives you should try to reach, rather than writing and pasting your
code for you.
Your program needs to remember the roll for each die. One of many mechanisms
for this could be:
outcomes = []
for i in range(0,5):
outcomes.append(random.randint(1,6))
Given the above, you now have programmatic memory of each outcome. Each of the
random.randint() outcomes is now kept in a list, which you can access by
element later in your program, depending on what the user chooses to keep or
reroll. Remember that the user's perception of "die one" is actually element
zero in your list, due to the way indices are numbered.
There are over a dozen ways to approach the remainder of your program, but
this should be a good starter, for you. You could even explore list
comprehensions to improve on the sample I provide, above.
|
PyGame: Unable to open file
Question: I'm trying some basic examples from the [Making Games with Python &
Pygame](https://inventwithpython.com/makinggames.pdf) book, but I'm facing a
weird problem. Here is the example source:
import pygame, time
soundObj = pygame.mixer.Sound('beep.wav')
soundObj.play()
time.sleep(1) # wait and let the sound play for 1 second
soundObj.stop()
This source produces the following error:
> Traceback (most recent call last): File
> "C:/Users/Thiago/PycharmProjects/PyGame/Sound/app.py", line 3, in soundObj =
> pygame.mixer.Sound('beep.wav') pygame.error: Unable to open file 'beep.wav'
The _beep.wav_ file is properly saved on the same folder of my Python script.
I've tried the `os.listdir()` command and it returns the wav file. Is there
any issue, known bug or am I doing something wrong?
Here is my environment:
* Windows 10 64 bits
* Python 3.4
* Pygame 1.9
Answer: You have to initialize the mixer module, or all of pygame, first. The
pygame.init() initializer (or pygame.mixer.init() for sound alone) will do that
for you. You can find it
[here](http://www.pygame.org/docs/ref/pygame.html#pygame.init)
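Applied to the question's example, a sketch:

    import pygame, time

    pygame.init()  # or just pygame.mixer.init() for the sound system alone
    soundObj = pygame.mixer.Sound('beep.wav')
    soundObj.play()
    time.sleep(1)  # wait and let the sound play for 1 second
    soundObj.stop()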
|
Python: Extract chapter from long txt (or delete all other chapters)
Question: My problem is: I have several long txt files with many chapters of which I
want to extract one. The text within these chapters differs between files.
Identification should be done using the title of the chapter and the title of
the next chapter, which are the same for all files. These identification-
titles are in the file more than once, but I want to only use their first
occurrence...
Thus the logic is something like:
delete text; identification title (first occurrence) "start"; keep title and
text; identification title (first occurrence) "end"; delete text
The goal is a program, which will automatically open all files and edit them
in the stated way.
Thanks in advance!
Answer: I found a (somewhat long) answer, which only deletes the first part of the
text (that means the chapter is not extracted, but at least it ends up at the
top, which is sufficient for me)... maybe someone will have the same problem:
import os
for file in os.listdir('.'):
if file == "delete_first_part.py":
pass
elif os.path.isfile(file):
# open the file, delete all lines before "word to begin", write rest to file
f = open(file,"r+")
lines = reversed(f.readlines())
f.seek(0)
strings = ("Word to Begin")
for line in lines:
if any(s in line for s in strings):
f.write(line)
f.truncate()
print file
break
else:
f.write(line)
f.close()
f = open(file,"r+")
lines = reversed(f.readlines())
f.seek(0)
for line in lines:
f.write(line)
f.close()
I read the file from the end, search for my "word to begin", truncate the rest
of the text and break the loop. Then I write to the file, open it again and read
from the end once more, to end up with the text in the "right" direction.
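For reference, actually cutting the chapter out (rather than just moving it to
the top) can also be done on the whole text at once; a sketch using the first
occurrence of each title (the titles here are placeholders):

    def extract_chapter(text, start_title, end_title):
        # str.index returns the first occurrence and raises ValueError if absent
        start = text.index(start_title)
        end = text.index(end_title, start + len(start_title))
        return text[start:end]

    with open('somefile.txt') as f:
        chapter = extract_chapter(f.read(), 'Word to Begin', 'Word to End')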
|
Python list with Vcenter vm
Question: I found a Python script to list all vCenter VM attributes, but now I need to
store some of the attributes in a Python list (or array, dict...).
But it doesn't work.
My getVminfos.py :
EDIT : the right file :
import argparse
import atexit
import itertools
import unicodedata
import pyVmomi
from pyVmomi import vmodl
from pyVmomi import vim
from pyVim.connect import SmartConnect, Disconnect
def GetArgs():
parser = argparse.ArgumentParser(description='Process args for retrieving all the Virtual Machines')
parser.add_argument('-s', '--host', required=True, action='store',help='Remote host to connect to')
parser.add_argument('-o', '--port', type=int, default=443, action='store',help='Port to connect on')
parser.add_argument('-u', '--user', required=True, action='store',help='User name to use when connecting to host')
parser.add_argument('-p', '--password', required=False, action='store',help='Password to use when connecting to host')
args = parser.parse_args()
return args
def print_vm_info(virtual_machine):
"""
Print information for a particular virtual machine or recurse into a
folder with depth protection
"""
Ansible_Hosts = []
Ansible_Groups = []
Ansible_Names = []
summary = virtual_machine.summary
print("Name : ", summary.config.name)
print("Template : ", summary.config.template)
#print("Path : ", summary.config.vmPathName)
print"Guest : ", str(unicodedata.normalize('NFKD', summary.config.guestFullName))
#print("Instance UUID : ", summary.config.instanceUuid)
#print("Bios UUID : ", summary.config.uuid)
print"State : ", summary.runtime.powerState
if summary.guest is not None:
ip_address = summary.guest.ipAddress
if ip_address:
Ansible_Hosts.append([ip_address])
print "Ansible_Hosts[1:15]", Ansible_Hosts[1:15]
def main():
args = GetArgs()
try:
si = SmartConnect(host=args.host,user=args.user,pwd=args.password,port=int(args.port))
if not si:
print("Could not connect to the specified host using specified "
"username and password")
return -1
atexit.register(Disconnect, si)
content = si.RetrieveContent() # get root folder
container = content.rootFolder # starting point to look into
viewType = [vim.VirtualMachine] # object types to look for
recursive = True # whether we should look into it recursively
containerView = content.viewManager.CreateContainerView(
container, viewType, recursive)
children = containerView.view
for child in children:
print_vm_info(child)
except vmodl.MethodFault as error:
print("Caught vmodl fault : " + error.msg)
return -1
return 0
# Start program
if __name__ == "__main__":
main()
The print statements work like a charm, but my lists (`Ansible_Hosts`, ...)
always end up empty...
Answer: The list initialization statements (Ansible_Hosts = [] etc.) should go to
main(). As written, the lists are re-created, and therefore emptied, on every
call to print_vm_info().
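For illustration, a sketch of that change: create the list once in main() and
pass it into the per-VM function so the appends accumulate across calls (only
the relevant parts are shown):

    def print_vm_info(virtual_machine, ansible_hosts):
        summary = virtual_machine.summary
        if summary.guest is not None and summary.guest.ipAddress:
            ansible_hosts.append(summary.guest.ipAddress)

    def main():
        # ... connection setup exactly as in the question ...
        Ansible_Hosts = []
        for child in children:
            print_vm_info(child, Ansible_Hosts)
        print "Ansible_Hosts[1:15]", Ansible_Hosts[1:15]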
|
How to make simple interacting with running python script for web?
Question: I have a Python script running in a loop, which I want to interact with from
an HTML page. For example:
> (HTML)Clicking button -> magic -> (Python) Do something function in
> script.py
[This response](http://stackoverflow.com/questions/27474557/interact-with-
python-script-running-infinitive-loop-from-web) is not appropriate for this.
Answer: You can probably use the Selenium Python bindings for the purpose of
interacting with a web page from your Python script.
[Selenium link](http://selenium-python.readthedocs.org/getting-started.html)
Example:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
|
Jupyter install for Python 2.7 failed
Question: I have the Anaconda distribution of Python 2.7 and I needed to install the
Jupyter notebook package. During the installation process my computer turned
off and after that I couldn't continue with the process. I tried to uninstall
Jupyter and try installing it again, but I keep getting the same error:
Traceback (most recent call last):
File "C:\Users\Acer\Anaconda\Scripts\ipython-script.py", line 3, in <module>
from IPython import start_ipython
File "C:\Users\Acer\Anaconda\lib\site-packages\IPython\__init__.py", line 49, in <module>
from .terminal.embed import embed
File "C:\Users\Acer\Anaconda\lib\site-packages\IPython\terminal\embed.py", line 19, in <module>
from IPython.terminal.ipapp import load_default_config
File "C:\Users\Acer\Anaconda\lib\site-packages\IPython\terminal\ipapp.py", line 22, in <module>
from IPython.core.completer import IPCompleter
File "C:\Users\Acer\Anaconda\lib\site-packages\IPython\core\completer.py", line 71, in <module>
from IPython.utils import generics
File "C:\Users\Acer\Anaconda\lib\site-packages\IPython\utils\generics.py", line 8, in <module>
from simplegeneric import generic
ImportError: No module named simplegeneric
What should I remove/add in order to make it work?
Answer: You need to install the python package
[simplegeneric](https://pypi.python.org/pypi/simplegeneric), e.g. with `conda
install simplegeneric` or `pip install simplegeneric`. After you install it,
install the next package that you fail to import. Continue this process until
you don't get any import errors.
|
Python: Creating list of subarrays
Question: I have a massive array, but for illustration I am using an array of size 14. I
have another list which contains 2, 3, 3, 6. How do I efficiently, without a
for loop, create a list of new arrays such that:
import numpy as np
A = np.array([1,2,4,5,7,1,2,4,5,7,2,8,12,3]) # array with 1 axis
    subArraysizes = np.array([2, 3, 3, 6]) # sums to number of elements in A
    B = list() # desired contents:
    # B[0] = [1,2]
    # B[1] = [4,5,7]
    # B[2] = [1,2,4]
    # B[3] = [5,7,2,8,12,3]
i.e. select first 2 elements from A store it in B, select next 3 elements of A
store it in B and so on in the order it appears in A.
Answer: You can use
[`np.split`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.split.html)
-
B = np.split(A,subArraysizes.cumsum())[:-1]
Sample run -
In [75]: A
Out[75]: array([ 1, 2, 4, 5, 7, 1, 2, 4, 5, 7, 2, 8, 12, 3])
In [76]: subArraysizes
Out[76]: array([2, 3, 3, 6])
In [77]: np.split(A,subArraysizes.cumsum())[:-1]
Out[77]:
[array([1, 2]),
array([4, 5, 7]),
array([1, 2, 4]),
array([ 5, 7, 2, 8, 12, 3])]
|
ng-init gives parse error when it finds u' in data received from Django
Question: I am sending some data from the Django backend to the template, where I use
Angular with `ng-repeat` and `ng-init` to loop through the data and print it
on screen.
This is how I get the data in the backend with Python2 and Django:
country = "Global"
songs = []
for pl in pls:
song_dict = {}
song_dict['title'] = pl.songs.title
# other fields...
songs.append(song_dict)
context = {}
context['country'] = country
context['songs'] = songs
return render(request, 'spotify_list/index.html', context)
In the template, I try to `ng-init` like this in order to access the data
received from Django:
<div ng-app="instantSearch" ng-init="items={{songs}}">
But it looks like `ng-init` doesn't like the `u'` preceding each key and
value (`{u'title':u'Test1'}, {u'title':u'Test2'}`). This is the error I get:
Error: [$parse:syntax] http://errors.angularjs.org/1.4.9/$parse/syntax?p0='title'&p1=is%20unexpected%2C%20expecting%20%5B%3A%5D&p2=10&p3=items%3D%5B%7Bu'title'%3Au'Prueba1'%7D%2C%20%7Bu'title'%3Au'Prueba2'%7D%2C%20%7Bu'title'%3Au'Prueba3'%7D%5D&p4='title'%3Au'Prueba1'%7D%2C%20%7Bu'title'%3Au'Prueba2'%7D%2C%20%7Bu'title'%3Au'Prueba3'%7D%5D
at Error (native)
at https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:6:416
at Object.s.throwError (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:213:32)
at Object.s.consume (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:213:207)
at Object.s.object (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:212:370)
at Object.s.primary (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:209:335)
at Object.s.unary (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:209:174)
at Object.s.multiplicative (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:208:434)
at Object.s.additive (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:208:261)
at Object.s.relational (https://ajax.googleapis.com/ajax/libs/angularjs/1.4.9/angular.min.js:208:96) <div ng-app="instantSearch" ng-init="items=[{u'title':u'Prueba1'}, {u'title':u'Prueba2'}, {u'title':u'Prueba3'}]" class="ng-scope">
I know that if I could eliminate the `u'` from the data, it would work well.
Like this:
<div ng-app="instantSearch" ng-init="items=[{'title':'Prueba1'}, {'title':'Prueba2'}, {'title':'Prueba3'}]">
So my question is: what is the best way to get around this issue?
Should I handle the data differently on the Django side? How?
Should I handle the data differently on the front-end side? How?
Any help would be greatly appreciated.
Answer: The `u` is python's way of designating that the data is of type `unicode`. The
solution is probably to dump your data as `json`.
e.g.
<div ng-app="instantSearch" ng-init="items={{songs | json}}">
Then you just need to [define and
register](https://docs.djangoproject.com/en/1.9/howto/custom-template-
tags/#registering-custom-filters) a filter on the django side that uses
`json.dumps` to dump the data to json. If I read the docs correctly, it looks
like:
from django import template
import json
register = template.Library()
register.filter('json', json.dumps)
|
OpenCV - using VideoWriter gives no file
Question: I have searched for a solution to my problem on many sites (including Stack
Overflow). I am trying to save a video from my webcam on my Raspberry Pi using
OpenCV. Theoretically, my code works fine (I can see my webcam in the window,
I can see Python printing "frame"), but when it comes to saving the file I
cannot see anything. I have found that I should change codecs in FourCC, but
it changes nothing.
My code:
#!/usr/bin/python
import cv2
import numpy as np
def InitCamera():
cap = cv2.VideoCapture(0)
if cap is None:
print('No camera access')
else:
print('Camera init done')
return cap
def InitWriter():
fps = 20
size = (640,480)
outFile = 'output.mp4'
#fourcc = cv2.cv.CV_FOURCC('D','I','V','X')
#fourcc = cv2.cv.CV_FOURCC('R','G','B',' ')
#fourcc = cv2.cv.CV_FOURCC('Y','U','Y','2')
#fourcc = cv2.cv.CV_FOURCC('Y','U','Y','U')
#fourcc = cv2.cv.CV_FOURCC('U','Y','V','Y')
#fourcc = cv2.cv.CV_FOURCC('I','4','2','0')
#fourcc = cv2.cv.CV_FOURCC('I','Y','U','V')
#fourcc = cv2.cv.CV_FOURCC('Y','U','1','2')
#fourcc = cv2.cv.CV_FOURCC('Y','8','0','0')
#fourcc = cv2.cv.CV_FOURCC('G','R','E','Y')
#fourcc = cv2.cv.CV_FOURCC('B','Y','8',' ')
#fourcc = cv2.cv.CV_FOURCC('Y','1','6',' ')
#fourcc = cv2.cv.CV_FOURCC('X','V','I','D')
#fourcc = cv2.cv.CV_FOURCC('M','J','P','G')
fourcc = cv2.cv.CV_FOURCC('M','P','E','G')
out = cv2.VideoWriter(outFile, fourcc, fps, size, 1)
if out is None:
print('No video access')
else:
print('Video init done')
return out
def CapVideo(cap,out):
while(cap.isOpened()):
ret, frame = cap.read()
if ret==True:
out.write(frame)
print('frame')
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF==ord('q'):
break
else:
break
print 'Done'
cap.release()
out.release()
cv2.destroyAllWindows()
if __name__ == '__main__':
cam = InitCamera()
out = InitWriter()
CapVideo(cam,out)
Answer: Do you have a strong reason to write mp4 files? Consider writing to a file with
an avi extension. I have struggled with this problem a lot as well, and in
cases where I strongly needed an MP4 file, I ended up using ffmpeg to do the
conversion after writing the file as an avi with openCV.
I was developing on C++ and simply made a system call. Something like this:
std::string ffmpeg_conversion_cmd = "ffmpeg -y -i data\\intermediate_result.avi -c:v libx264 -crf 19 -preset slow -c:a libfaac -b:a 192k -ac 2 data\\intermediate_result.mp4";
std::system(ffmpeg_conversion_cmd.c_str());
Of course, you need to have ffmpeg installed on your Pi to do this.
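Since the rest of this question is Python, the same post-processing step could
be sketched with subprocess (file names are placeholders; the audio options from
the C++ example are dropped, as a webcam capture has no audio track):

    import subprocess

    subprocess.call(['ffmpeg', '-y', '-i', 'output.avi',
                     '-c:v', 'libx264', '-crf', '19', '-preset', 'slow',
                     'output.mp4'])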
|
python 2.7 - argument as name to following argument
Question:
import argparse
def add(number_one = 0,number_two = 2):
a = int(number_one) + int(number_two)
return(a)
def Main():
parser = argparse.ArgumentParser()
parser.add_argument("n1", help = "first number", type=float)
parser.add_argument("n2", help = "second number", type=float)
args = parser.parse_args()
result = add(args.n1, args.n2)
print(str(result))
if __name__ == '__main__' :
Main()
Hello, I am learning argparse, so I wrote this simple program that sums two
numbers:
python add.py 3 5
I want the program to do the same, but with named arguments that label the
values that follow them. For example:
python add.py --n1 3 --n2 5
Answer: You need to add `--` before `n1` and `n2`. Example:
parser.add_argument("--n1", help = "first number", type=float)
parser.add_argument("--n2", help = "second number", type=float)
You can also add a short-option:
parser.add_argument("--n1", "-1", help = "first number", type=float)
parser.add_argument("--n2", "-2", help = "second number", type=float)
This way, you can call your program with, for example, the `-1` option instead
of writing out the _incredibly_ long option `--n1`.
|
Having trouble using a C library via a Python API: What am I doing wrong?
Question: I'm using a Python script in conjunction with a thermocouple and a
thermocouple reader. To communicate with the thermocouple reader, which is
written in C, I have to use an API. That API is
[PyDAQFlex](https://github.com/torfbolt/PyDAQFlex). The thermocouple reader
also came with a tester script, written in C. I'm trying to get a temperature
reading from the thermocouple reader, but it only outputs the CJC value.
My code:
import daqflex
d = daqflex.USB_2001_TC()
def get_temperature():
return float(d.send_message("?AI{0}:CJC/DEGC").encode("utf-8"))
The output of my code:
u'AI{0}:CJC/DEGC=23.8'
Note: 23.8 is _not_ the temperature. That value is the CJC, as seen in the
tester script's command line output below. It's related, but not the value I'm
looking for.
* * *
The tester script's code:
<http://pastebin.com/Atsdy7X0> (to get the temperature, I press "t" and then
"k" because I have a K-type thermocouple).
The tester script's command line output:
<http://pastebin.com/jq4Rr4QX> (the temperature here is accurate. This is what
I want to plug into my script.)
* * *
The PyDAQFlex script:
<https://github.com/torfbolt/PyDAQFlex/blob/master/daqflex/devices.py> (see
line 105)
The C code for the thermocouple reader:
<http://pastebin.com/rEDR9efR> (Not included in entirety, only the relevant
parts.)
* * *
I am seriously struggling to see my mistake here. This [exact piece of
code](https://github.com/torfbolt/PyDAQFlex/issues/6) appears to have worked
for someone else in the PyDAQFlex Github page, so I'm extremely confused. I
have emailed the creator of the software, a person in Github with a similar
issue as me, and I just spent 6 hours in various IRC chats. Please help me. If
it helps, I used parts of
[this](http://www.mccdaq.com/TechTips/TechTip-9.aspx) tutorial to install the
drivers and things for the thermocouple reader. Thank you so much.
Answer: If 23.8 is related, but not the right temperature, then it is a calibration
problem. Could you print what the original tester program and pydaqflex send to
and receive from the device when calibrating?
|
Installing RKernel
Question: Despite all my efforts, I wasn't able to install the R kernel for my
IPython/Jupyter notebook on Canopy.
I've closely followed the clear instructions given in:
<http://www.michaelpacer.com/maths/r-kernel-for-ipython-notebook> (or,
alternatively, <http://irkernel.github.io/installation/>)
All goes well until the last step that install the kernel on Jupyter:
IRkernel::installspec()
Here is the weird message I get:
File "/Users/julien/Library/Enthought/Canopy_64bit/User/bin/jupyter-kernelspec", line 8
from jupyter_client.kernelspecapp import KernelSpecApp.launch_instance
^
SyntaxError: invalid syntax
My configuration is the following:
* Macbook with El Capitan
* R version 3.2.2
* IPython 4.0.1
* Jupyter 4.0.6
Answer: It turns out that the file "jupyter-kernelspec" was corrupted for some
reason. I replaced it with the following code:
#!/usr/bin/env python
from jupyter_client.kernelspecapp import KernelSpecApp
def main():
KernelSpecApp.launch_instance()
if __name__ == '__main__':
main()
It solved my issue.
Julien
|
python csv importing to link
Question: I am trying to get Python to open sites based on a csv file. I checked all of
my code individually to make sure it worked, and it does, but when I introduce
this variable from the csv file I get the error message below. Here is
the code:
import urllib
import urllib.request
from bs4 import BeautifulSoup
import os
import csv
f = open('gropn1.csv')
csv_f = csv.reader(f)
for row in csv_f:
theurl="http://www.grote.com/?s="+csv_f[1] + "&q1=1"
thepage = urllib.request.urlopen(theurl)
soup = BeautifulSoup(thepage,"html.parser")
for partno in soup.find('h2',{"class":"single-product-number"}):
print(partno)
for link in soup.find('ul',{"class":"breadcrumbs"}).findAll('a'):
print(link.text)
f.close()
Here is the error:
Traceback (most recent call last):
File "grotestart2.py", line 13, in <module>
theurl="http://www.grote.com/?s="+csv_f[1] + "&q1=1"
TypeError: '_csv.reader' object is not subscriptable
Any help would be greatly appreciated! Thanks
Answer: > TypeError: '_csv.reader' object is not subscriptable
`csv_f` is your _csv reader instance_ and it is actually ["not
subscriptable"](http://stackoverflow.com/questions/216972/in-python-what-does-
it-mean-if-an-object-is-subscriptable-or-not) by definition.
Did not you mean to use the `row` variable instead. Replace:
theurl="http://www.grote.com/?s="+csv_f[1] + "&q1=1"
with:
theurl="http://www.grote.com/?s="+row[1] + "&q1=1"
* * *
You are also trying to iterate over the result of the `soup.find()` call, which
is a single `Tag` instance rather than the list of matches you want to loop
over. You meant to use `find_all()`.
Replace:
for partno in soup.find('h2',{"class":"single-product-number"}):
with:
for partno in soup.find_all('h2', {"class":"single-product-number"}):
Or, a shorter version using a [CSS
selector](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-
selectors):
for partno in soup.select('h2.single-product-number'):
|
python object changes the value of an input variable
Question: So I don't know if this is a well-formed question, and I'm sorry if it isn't,
but I'm pretty stumped. Furthermore, I don't know how to submit a minimal
working example because I can't reproduce the behavior without the whole code,
which is a little big for stackexchange.
So here's the problem: I have an object which takes as one of its arguments a
numpy array. (If it helps, this array represents the initial conditions for a
differential equation which a method in my object solves numerically.) After
using this array to solve the differential equation, it outputs the answer
just fine, BUT the original variable in which I had stored the array has now
changed value. Here is what I happens:
import numpy as np
import mycode as mc
input_arr = np.ndarray(some_shape)
foo = mc.MyClass(input_arr)
foo.numerical_solve()
some_output
Fine and dandy. But then, when I check on `input_arr`, it's changed value.
Sometimes it's the same as `some_output` (which is to say, the final value of
the numerical solution), but sometimes it's some interstitial step.
As I said, I'm totally stumped and any advice would be much appreciated!
Answer: If you have a mutable object (`list`, `set`, `numpy.array`, ...) and you do
not want it mutated, then you need to make a copy and pass that instead:
l1 = [1, 2, 3]
l2 = l1[:]
s1 = set([1, 2, 3])
s2 = s1.copy()
arr1 = np.ndarray(some_shape)
arr2 = np.copy(arr1)
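Alternatively, make the copy inside the class so callers never have to remember;
a sketch (the attribute name is an assumption):

    import numpy as np

    class MyClass(object):
        def __init__(self, initial_conditions):
            # keep a private copy: numerical_solve() can then mutate it
            # freely without touching the caller's array
            self.y0 = np.array(initial_conditions, copy=True)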
|
Python GUI TKinter
Question: I'm making an interface for my e-learning question, which is on easy
difficulty. Is there anything wrong with the code? It keeps saying there is
an error on line 21. What's the mistake?
import Tkinter
MathsEasyLevel1 = Tkinter.Tk()
MathsEasyLevel1.geometry("320x260")
MathsEasyLevel1.title("Mathematics Easy")
total = 0
getanswer = Tkinter.IntVar()
def userinput():
Answer1 = getanswer.get()
if Answer1 == 8 :
total = total + 1
else :
total = total
MathsEasyLevel1.withdraw()
MathsEasyLevel1.deiconify()
return
LabelName = Tkinter.Label (MathsEasyLevel1, text="Question 1", font("Impact",20)).grid(row=0,column=2,sticky="new")
LabelName = Tkinter.Label (MathsEasyLevel1, text="State the number of edges in a cube")
LabelName.pack()
TxtBoxName = Tkinter.Entry (MathsEasyLevel1, textvariable= getanswer)
TxtBoxName.pack()
MathsEasyLevel2 = Tkinter.Tk()
MathsEasyLevel2.geometry("320x260")
MathsEasyLevel2.title("Mathematics Easy")
MathsEasyLevel2.withdraw()
BtnName = Tkinter.Button (MathsEasyLevel1, text="Proceed", command=userinput).pack()
Answer: There are a few problems I can see. Line 21 (`LabelName = Tkinter.Label
(MathsEasyLevel1, text="Question 1",
font("Impact",20)).grid(row=0,column=2,sticky="new")`, I presume) takes `font`
as an argument in the form `font = ("Impact",20)`, so your corrected code for
this line would be:
LabelName = Tkinter.Label (MathsEasyLevel1, text="Question 1", font=("Impact",20)).grid(row=0,column=2,sticky="new")
Also, you are assigning the outcome of the `grid` method you are running to
LabelName. You probably want to be doing this:
LabelName = Tkinter.Label (MathsEasyLevel1, text="Question 1", font=("Impact",20))
LabelName.grid(row=0,column=2,sticky="new")
This way you can reference `LabelName`, now the actual label, multiple times.
You also use the same variable name, `LabelName`, for two different `Label`
widgets. This means a reference to the previous one is not kept which could
cause problems at some stage. Another problem is that you mix the use of the
`grid` packing method and the `pack` packing method in the same window, which
is [not a good idea](http://stackoverflow.com/a/28217012/5539184). Try this
instead:
LabelName1 = Tkinter.Label (MathsEasyLevel1, text="Question 1", font=("Impact",20))
LabelName1.grid(row=0,column=2,sticky="new")
LabelName2 = Tkinter.Label (MathsEasyLevel1, text="State the number of edges in a cube")
LabelName2.grid(row=1,column=0)
TxtBoxName = Tkinter.Entry (MathsEasyLevel1, textvariable= getanswer)
TxtBoxName.grid(row=2,column=0)
Obviously you can change the `rows` and `columns` as you want. One more thing to
watch out for: `userinput` assigns to `total` without declaring it `global`, so
pressing the button will raise an UnboundLocalError; add `global total` at the
top of that function. Otherwise your code looks fine to me!
|
class instance as process within kivy app: kivy app gets stuck
Question: I do have an interface to store settings and to start a process. However, I
cannot close everything once the process starts because kivy gets stuck after
calling `process.run`. Here is a minimal example:
#! /usr/bin/env python
"""
Activate the touch keyboard. It is important that this part is on top
because the global config should be initiated first.
"""
from kivy.config import Config
Config.set('kivy', 'keyboard_mode', 'multi')
# the main app
from kivy.app import App
# The Builder is used to define the main interface.
from kivy.lang import Builder
# The start screen is defined as BoxLayout
from kivy.uix.boxlayout import BoxLayout
# The pHBot class
from phbot import pHBot
# The pHBot is defined as process. Otherwise it would not be possible to use the close button.
from multiprocessing import Process, Queue
# Definition of the Main interface
Builder.load_string('''
<Interface>:
orientation: 'vertical'
Button:
text: 'Run pH-Bot'
font_size: 40
on_release: app.run_worker()
Button:
text: 'Close pH-Bot'
font_size: 40
on_release: app.stop_phbot()
''')
# Interface as a subclass of BoxLayout without any further changes. This part is used by kivy.
class Interface(BoxLayout):
pass
class SettingsApp(App):
"""
The settings App is the main app of the pHBot application. It is initiated by kivy and contains the functions
defining the main interface.
"""
def build(self):
"""
This function initializes the app interface and has to be called "build(self)". It returns the user interface
defined by the Builder.
"""
# A queque for the control all processes.
self.qu_stop = Queue()
# returns the user interface defined by the Builder
return Interface()
def run_worker(self):
"""
The pHBot application is started as a second process.
"""
bot = pHBot(self.qu_stop)
phbot = Process(target=bot.ph_control())
# start the process
phbot.run()
def stop_phbot(self):
self.qu_stop.put('STOP')
if __name__ == '__main__':
SettingsApp().run()
The second class is within a file called `phbot.py`:
import time
class pHBot:
def __init__(self, qu_stop_in):
self.qu_stop_in = qu_stop_in
def ph_control(self):
while True:
if self.qu_stop_in.full():
if self.qu_stop_in.get() == 'STOP':
break
print('Back and forth forever ...')
time.sleep(2)
What am I missing here?
Answer: Note that a `Process` is started with
[`start()`](https://docs.python.org/2/library/multiprocessing.html#process-
and-exceptions). Calling `run()` really immediately launches the worker from
the same process, and thus it is blocking. The relevant lines in `run_worker`
should therefore be:
bot = pHBot(self.qu_stop)
phbot = Process(target=bot.ph_control)
# start the process
phbot.start()
In addition, in your worker, don't check whether the `Queue` is full. Rather,
do a non-blocking `get` and handle the `Queue.Empty` exception:
import Queue
...
def ph_control(self):
while True:
try:
item = self.qu_stop_in.get(False)
if item == 'STOP':
break
except Queue.Empty:
print "Nothing to see"
print('Back and forth forever ...')
time.sleep(2)
|
Efficient functional list iteration in Python
Question: So suppose I have an array of some elements. Each element has some number of
properties.
I need to filter this list using some subsets of values determined by
predicates. These subsets can of course have intersections. I also need to
determine the number of values in each such subset.
So using an imperative approach I could write code like the following, and it
would have a running time of 2*n: one iteration to copy the array and another
one to filter it and count the subset sizes.
    from itertools import groupby
a = [{'some_number': i, 'some_time': str(i) + '0:00:00'} for i in range(10)]
# imperative style
wrong_number_count = 0
wrong_time_count = 0
for item in a[:]:
if predicate1(item):
delete_original(item, a)
wrong_number_count += 1
if predicate2(item):
delete_original(item, a)
wrong_time_count += 1
update_some_data(item)
    do_something_with_filtered(a, wrong_number_count, wrong_time_count)
def do_something_with_filtered(a, c1, c2):
print('filtered a {}'.format(a))
print('{} items had wrong number'.format(c1))
print('{} items had wrong time'.format(c2))
def predicate1(x):
return x['some_number'] < 3
def predicate2(x):
return x['some_time'] < '50:00:00'
Somehow I can't think of a way to do that in Python in a functional style with
the same running time. In a functional style I could probably use groupby
multiple times, or write a comprehension for each predicate, but that would
obviously be slower than the imperative approach.
I think such a thing is possible in Haskell using stream fusion (am I right?).
But how do I do that in Python?
Answer: Yes, fusion in Haskell will often turn something written as two passes into a
single pass. Though in the case of lists, it's actually foldr/build fusion
rather than stream fusion.
That's not generally possible in languages that don't enforce purity, though.
When side effects are involved, it's no longer correct to fuse multiple passes
into one. What if each pass performed output? Unfused, you get all the output
from each pass separately. Fused, you get the output from both passes
interleaved.
It's possible to write a fusion-style framework in Python that will work
correctly if you promise to only ever use it with pure functions. But I'm
doubtful such a thing exists at the moment. (I'd love to be proven wrong,
though.)
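If you can guarantee purity yourself, you can always hand-fuse the passes in
Python; a sketch of a single pass written as a fold, reusing the question's
`predicate1`, `predicate2` and `a`:

    from functools import reduce

    def step(acc, item):
        kept, c1, c2 = acc
        bad1, bad2 = predicate1(item), predicate2(item)
        if not (bad1 or bad2):
            kept.append(item)
        # booleans count as 0/1, so this tallies each subset
        return kept, c1 + bad1, c2 + bad2

    filtered, wrong_number_count, wrong_time_count = reduce(step, a, ([], 0, 0))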
|
search for substring in output, then print the line
Question: I would like to list a file and then search the lines for a pattern, like I do
in Bash with:
cat /etc/passwd | grep nologin
I try the following:
#!/usr/bin/python3
import subprocess
CMD=('cat', '/etc/passwd')
SEARCH="nologin"
PIPE_ERG=subprocess.Popen(CMD , stdout=subprocess.PIPE)
OUTPUT = PIPE_ERG.communicate()[0].splitlines()
for LINE in OUTPUT:
if SEARCH in LINE:
print(LINE)
If I execute the script with python3 I always get this errormessage:
Traceback (most recent call last):
File "./pipe2.py", line 11, in <module>
if SEARCH in LINE:
TypeError: 'str' does not support the buffer interface
When I just print the lines without search, the script will list all my lines.
How do I get each line with the pattern "nologin" from the output?
Answer: Looks like in Python 3,
[`PIPE_ERG.communicate()[0]`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate)
is not a `str`, but `bytes`, and the `in` check fails because `SEARCH` is a
`str`: you cannot mix `str` and `bytes` operands. You have to convert those
`bytes` to `str` first. The simplest way would be just to do `LINE = str(LINE)`
in the loop, or better, use `LINE.decode()`:
for LINE in OUTPUT:
LINE = LINE.decode()
if SEARCH in LINE:
print(LINE)
Or use `Popen` with
[`universal_newlines=True`](https://docs.python.org/3/library/subprocess.html#subprocess.check_output):
PIPE_ERG=subprocess.Popen(CMD , stdout=subprocess.PIPE, universal_newlines=True)
From the documentation:
> By default, this function will return the data as encoded bytes. The actual
> encoding of the output data may depend on the command being invoked, so the
> decoding to text will often need to be handled at the application level.
>
> This behaviour may be overridden by setting universal_newlines to True
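Incidentally, since the command here is just `cat`, the subprocess is not needed
at all; plain Python does the same job (a sketch):

    with open('/etc/passwd') as f:
        for line in f:
            if 'nologin' in line:
                print(line, end='')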
|
how to add a module to Anaconda
Question: This is what I get when I do "**_python -V_**":
**Python 2.7.11 :: Anaconda 2.4.0 (64-bit)**. I usually use my terminal to play
with IDLE. But now I have also installed the IDLE shell.
I tried _import sys;sys.path_ on both. They show different paths. My terminal
returned the path with anaconda in it.
I tried to install a module following these steps:
1. python setup.py sdist
2. sudo python setup.py install
Then I opened IDLE (shell). I was able to import and also use my module.
I want to do the same in Anaconda. I tried using `conda install filename.py`;
it doesn't work. Please help.
Answer: There are several ways to add a module to Anaconda.
* `conda install <package>`
* `pip install <package>`
* `python setup.py install` (if you are in the source directory, no sudo required if anaconda is in your home directory)
To make a package for others to use, you will need to put it up where people
can access it, like GitHub. You will have to make a config file (it takes some
YAML editing); you can read up on how to make/distribute packages here:
<http://conda.pydata.org/docs/build_tutorials/pkgs.html>
Now to answer your question: there is a difference between using a file and
using a module/package. A file can just be imported in another python program
using `import filename`, where filename.py is the name of the file you want to
use. To make that a module, take a look at the answer to this
question: [How to write a Python
module?](http://stackoverflow.com/questions/15746675/how-to-write-a-python-
module)
|
AttributeError: 'module' object has no attribute 'Sframe'
Question: I installed Dato's `GraphLab Create` to run with `python 27` first directly
from its executable then manually via `pip` ([instructions
here](https://dato.com/products/create/)) for troubleshooting.
Code:
import graphlab
graphlab.SFrame()
Output:
[INFO] Start server at: ipc:///tmp/graphlab_server-4908
- Server binary: C:\Users\Remi\Anaconda2\envs\dato-env\lib\site-packages\graphlab\unity_server.exe
- Server log: C:\Users\Remi\AppData\Local\Temp\graphlab_server_1455637156.log.0
[INFO] GraphLab Server Version: 1.8.1
Now, attempt to load a .csv file as an Sframe:
csvsf = graphlab.Sframe('file.csv')
complains:
AttributeError Traceback (most recent call last)
<ipython-input-5-68278493c023> in <module>()
----> 1 sf = graphlab.Sframe('file.csv')
AttributeError: 'module' object has no attribute 'Sframe'
Any idea(s) how to pinpoint the issue? Thanks so much.
Note: I uninstalled an already-present Python 3.4 installation.
Answer: First, note the capitalization: the class is `SFrame`, not `Sframe`, which is
what triggers the AttributeError. Also, `graphlab.SFrame('file.gl')` can only
load a GraphLab binary package directly; to load a csv file use
`csvsf = graphlab.SFrame.read_csv('file.csv')`.
For more information and other data types, read the docs:
<https://dato.com/products/create/docs/graphlab.data_structures.html>
|
Getting correct exogenous least squares prediction in Python statsmodels
Question: I am having trouble getting a reasonable prediction behavior from [least
squares
fits](http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.WLS.html)
in `statsmodels` version 0.6.1. It does not seem to be providing a sensible
value.
Consider the following data
import numpy as np
xx = np.array([1.1,2.2,3.3,4.4]) # Independent variable
XX = sm.add_constant(xx) # Include constant for matrix fitting in statsmodels
yy = np.array([2,1,5,6]) # Dependent variable
ww = np.array([0.1,1,3,0.5]) # Weights to try
wn = ww/ww.sum() # Normalized weights
zz = 1.9 # Independent variable value to predict for
We can use `numpy` to do a weighted fit and prediction
np_unw_value = np.polyval(np.polyfit(xx, yy, deg=1, w=1+0*ww), zz)
print("Unweighted fit prediction from numpy.polyval is {sp}".format(sp=np_unw_value))
and we find a prediction of 2.263636.
As a sanity check, we can also see what **_R_** has to say about the matter
import pandas as pd
import rpy2.robjects
from rpy2.robjects.packages import importr
import rpy2.robjects.pandas2ri
rpy2.robjects.pandas2ri.activate()
pdf = pd.DataFrame({'x':xx, 'y':yy, 'w':wn})
pdz = pd.DataFrame({'x':[zz], 'y':[np.Inf]})
rfit = rpy2.robjects.r.lm('y~x', data=pdf, weights=1+0*pdf['w']**2)
rpred = rpy2.robjects.r.predict(rfit, pdz)[0]
print("Unweighted fit prediction from R is {sp}".format(sp=rpred))
and again we find a prediction of 2.263636. My problem is that we do _not_ get
that result from statsmodels OLS
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
owls = sm.OLS(yy, XX).fit()
sm_value_u, iv_lu, iv_uu = wls_prediction_std(owls, exog=np.array([[1,zz]]))
sm_unw_v = sm_value_u[0]
print("Unweighted OLS fit prediction from statsmodels.wls_prediction_std is {sp}".format(sp=sm_unw_v))
Instead I obtain a value 1.695814 (similar things happen with `WLS()`). Either
there is a bug, or using `statsmodels` for prediction has some trick too
obscure for me to find. What is going on?
Answer: The results classes have a `predict` method that provides the prediction for
new values of the explanatory variables:
>>> print(owls.predict(np.array([[1,zz]])))
[ 2.26363636]
The first return of `wls_prediction_std` is the standard error for the
prediction, not the prediction itself.
>>> help(wls_prediction_std)
Help on function wls_prediction_std in module statsmodels.sandbox.regression.predstd:
wls_prediction_std(res, exog=None, weights=None, alpha=0.05)
calculate standard deviation and confidence interval for prediction
applies to WLS and OLS, not to general GLS,
that is independently but not identically distributed observations
Parameters
----------
res : regression result instance
results of WLS or OLS regression required attributes see notes
exog : array_like (optional)
exogenous variables for points to predict
weights : scalar or array_like (optional)
weights as defined for WLS (inverse of variance of observation)
alpha : float (default: alpha = 0.05)
confidence level for two-sided hypothesis
Returns
-------
predstd : array_like, 1d
standard error of prediction
same length as rows of exog
interval_l, interval_u : array_like
lower und upper confidence bounds
The sandbox function will be replaced by a new method `get_prediction` of the
results classes that provides the prediction and the extra results like
standard deviation and confidence and prediction intervals.
<http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.RegressionResults.get_prediction.html>
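In releases where that method exists (it is not available in 0.6.1), usage
could look roughly like this (a sketch):

    pred = owls.get_prediction(np.array([[1, zz]]))
    print(pred.predicted_mean)             # the point prediction, [ 2.26363636]
    print(pred.conf_int())                 # two-sided confidence interval
    print(pred.summary_frame(alpha=0.05))  # everything in one DataFrame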
|
Scraping Google images with Python3 (requests + BeautifulSoup)
Question: I would like to download bulk images, using Google image search.
My first method, downloading the page source to a file and then opening it
with `open()`, works fine, but I would like to be able to fetch image urls by
just running the script and changing keywords.
First method: Go to the image search
([https://www.google.no/search?q=tower&client=opera&hs=UNl&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiM5fnf4_zKAhWIJJoKHYUdBg4Q_AUIBygB&biw=1920&bih=982](https://www.google.no/search?q=tower&client=opera&hs=UNl&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiM5fnf4_zKAhWIJJoKHYUdBg4Q_AUIBygB&biw=1920&bih=982)).
View the page source in the browser and save it to an html file. When I then
`open()` that html file with the script, the script works as expected and I
get a neat list of all the urls of the images on the search page. This is what
line 6 of the script does (uncomment to test).
If, however I use the `requests.get()` function to parse the webpage, as shown
in line 7 of the script, it fetches a _different_ html document, that does not
contain the full urls of the images, so I cannot extract them.
Please help me extract the correct urls of the images.
Edit: link to the tower.html, I am using:
<https://www.dropbox.com/s/yy39w1oc8sjkp3u/tower.html?dl=0>
This is the code, I have written so far:
import requests
from bs4 import BeautifulSoup
# define the url to be scraped
url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982'
# top line is using the attached "tower.html" as source, bottom line is using the url. The html file contains the source of the above url.
#page = open('tower.html', 'r').read()
page = requests.get(url).text
# parse the text as html
soup = BeautifulSoup(page, 'html.parser')
# iterate on all "a" elements.
for raw_link in soup.find_all('a'):
link = raw_link.get('href')
        # if the link is a string and contains "imgurl" (other links on the page are not interesting)...
        if type(link) == str and 'imgurl' in link:
            # ...print the part of the link between "=" and "&", which is the actual url of the image
            print(link.split('=')[1].split('&')[0])
Answer: Just so you're aware:
# http://www.google.com/robots.txt
User-agent: *
Disallow: /search
* * *
I would like to preface my answer by saying that Google relies heavily on
scripting. It's very possible that you're getting different results because
the page you request via `requests` doesn't execute the `script`s supplied
on the page, whereas loading the page in a web browser does.
[Here's what i get when I request the url you
supplied](http://pastebin.com/sQpviE9i)
The text I get back from `requests.get(url).text` doesn't contain `'imgurl'`
in it anywhere. Your script is looking for that as part of its criteria and
it's not there.
I do, however, see a bunch of `<img>` tags with the `src` attribute set to an
image url. If that's what you're after, then try this script:
import requests
from bs4 import BeautifulSoup
url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982'
# page = open('tower.html', 'r').read()
page = requests.get(url).text
soup = BeautifulSoup(page, 'html.parser')
for raw_img in soup.find_all('img'):
link = raw_img.get('src')
if link:
print(link)
Which returns the following results:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQyxRHrFw0NM-ZcygiHoVhY6B6dWwhwT4va727380n_IekkU9sC1XSddAg
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRfuhcCcOnC8DmOfweuWMKj3cTKXHS74XFh9GYAPhpD0OhGiCB7Z-gidkVk
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSOBZ9iFTXR8sGYkjWwPG41EO5Wlcv2rix0S9Ue1HFcts4VcWMrHkD5y10
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTEAZM3UoqqDCgcn48n8RlhBotSqvDLcE1z11y9n0yFYw4MrUFucPTbQ0Ma
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSJvthsICJuYCKfS1PaKGkhfjETL22gfaPxqUm0C2-LIH9HP58tNap7bwc
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQGNtqD1NOwCaEWXZgcY1pPxQsdB8Z2uLGmiIcLLou6F_1c55zylpMWvSo
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSdRxvQjm4KWaxhAnJx2GNwTybrtUYCcb_sPoQLyAde2KMBUhR-65cm55I
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQLVqQ7HLzD7C-mZYQyrwBIUjBRl8okRDcDoeQE-AZ2FR0zCPUfZwQ8Q20
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQHNByVCZzjSuMXMd-OV7RZI0Pj7fk93jVKSVs7YYgc_MsQqKu2v0EP1M0
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcS_RUkfpGZ1xJ2_7DCGPommRiIZOcXRi-63KIE70BHOb6uRk232TZJdGzc
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSxv4ckWM6eg_BtQlSkFP9hjRB6yPNn1pRyThz3D8MMaLVoPbryrqiMBvlZ
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQWv_dHMr5ZQzOj8Ort1gItvLgVKLvgm9qaSOi4Uomy13-gWZNcfk8UNO8
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcRRwzRc9BJpBQyqLNwR6HZ_oPfU1xKDh63mdfZZKV2lo1JWcztBluOrkt_o
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcQdGCT2h_O16OptH7OofZHNvtUhDdGxOHz2n8mRp78Xk-Oy3rndZ88r7ZA
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcRnmn9diX3Q08e_wpwOwn0N7L1QpnBep1DbUFXq0PbnkYXfO0wBy6fkpZY
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSaP9Ok5n6dL5K1yKXw0TtPd14taoQ0r3HDEwU5F9mOEGdvcIB0ajyqXGE
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTcyaCvbXLYRtFspKBe18Yy5WZ_1tzzeYD8Obb-r4x9Yi6YZw83SfdOF5fm
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTnS1qCjeYrbUtDSUNcRhkdO3fc3LTtN8KaQm-rFnbj_JagQEPJRGM-DnY0
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSiX_elwJQXGlToaEhFD5j2dBkP70PYDmA5stig29DC5maNhbfG76aDOyGh
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQb3ughdUcPUgWAF6SkPFnyiJhe9Eb-NLbEZl_r7Pvt4B3mZN1SVGv0J-s
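If you need the full-resolution urls your saved tower.html contained, note that `requests` sends a `python-requests` User-Agent by default, and Google often serves such clients a stripped-down page. A possible (untested) workaround is to impersonate a desktop browser; whether it helps depends on Google's server-side detection. A sketch, reusing `url` from above:

    import requests
    from bs4 import BeautifulSoup

    # a desktop browser User-Agent (any mainstream one should do)
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/47.0.2526.111 Safari/537.36'}
    page = requests.get(url, headers=headers).text
    soup = BeautifulSoup(page, 'html.parser')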
|
Possible to add numpy arrays to python sets?
Question: I know that in order to add an element to a set it must be hashable, and numpy
arrays seemingly are not. This is causing me some problems because I have the
following bit of code:
fill_set = set()
for i in list_of_np_1D:
vecs = i + np_2D
for j in range(N):
tup = tuple(vecs[j,:])
fill_set.add(tup)
    # list_of_np_1D is a list of 1D numpy arrays
    # np_2D is a 2D numpy array with N rows
    # np_2D could also be converted to a list of 1D arrays if it helped.
I need to get this running faster and nearly 50% of the run-time is spent
converting slices of the 2D numpy array to tuples so they can be added to the
set.
So I've been trying to find out the following:
* Is there any way to make numpy arrays, or something that functions like numpy arrays (has vector addition) hashable so they can be added to sets?
* If not, is there a way I can speed up the process of making the tuple conversion?
Thanks for any help!
Answer: Create some data first:
import numpy as np
np.random.seed(1)
list_of_np_1D = np.random.randint(0, 5, size=(500, 6))
np_2D = np.random.randint(0, 5, size=(20, 6))
run your code:
%%time
fill_set = set()
for i in list_of_np_1D:
vecs = i + np_2D
for v in vecs:
tup = tuple(v)
fill_set.add(tup)
res1 = np.array(list(fill_set))
output:
CPU times: user 161 ms, sys: 2 ms, total: 163 ms
Wall time: 167 ms
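Before vectorizing, one cheap change (my sketch; needs NumPy 1.9+ for `.tobytes()`, older versions spell it `.tostring()`) is to hash each row's raw bytes instead of building a tuple, since `bytes` objects are hashable:

    fill_set_b = set()
    for i in list_of_np_1D:
        vecs = i + np_2D
        for v in vecs:
            # the raw buffer is hashable; this is safe because all rows share dtype and length
            fill_set_b.add(v.tobytes())

The fully vectorized version below is faster still.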
Here is a sped-up version. It uses broadcasting and the `.view()` method to
reinterpret each row as a fixed-width byte string, builds the `set()`, then
converts the strings back to an array:
%%time
r = list_of_np_1D[:, None, :] + np_2D[None, :, :]
stype = "S%d" % (r.itemsize * np_2D.shape[1])
fill_set2 = set(r.ravel().view(stype).tolist())
res2 = np.zeros(len(fill_set2), dtype=stype)
res2[:] = list(fill_set2)
res2 = res2.view(r.dtype).reshape(-1, np_2D.shape[1])
output:
CPU times: user 13 ms, sys: 1 ms, total: 14 ms
Wall time: 14.6 ms
To check the result:
np.all(res1[np.lexsort(res1.T), :] == res2[np.lexsort(res2.T), :])
You can also use `lexsort()` to remove duplicated data:
%%time
r = list_of_np_1D[:, None, :] + np_2D[None, :, :]
r = r.reshape(-1, r.shape[-1])
r = r[np.lexsort(r.T)]
idx = np.where(np.all(np.diff(r, axis=0) == 0, axis=1))[0] + 1
res3 = np.delete(r, idx, axis=0)
output:
CPU times: user 13 ms, sys: 3 ms, total: 16 ms
Wall time: 16.1 ms
To check the result:
np.all(res1[np.lexsort(res1.T), :] == res3)
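As an aside (my addition; requires NumPy 1.13+), `np.unique` with `axis=0` removes duplicate rows directly, replacing the manual `lexsort`/`diff`/`delete` steps:

    r = (list_of_np_1D[:, None, :] + np_2D[None, :, :]).reshape(-1, np_2D.shape[1])
    res4 = np.unique(r, axis=0)  # unique rows, sorted lexicographically by row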
|
Converting NBA play by play specific .json to .csv
Question: I am trying to build a database containing play-by-play data for several
seasons of NBA games for my MSc in economics dissertation. Currently I am
extracting games from the NBA's API ([see
example](http://stats.nba.com/stats/playbyplayv2?GameID=0041300402&StartPeriod=0&EndPeriod=0&tabView=playbyplay))
and splitting each game into a different .json file using [this
routine](https://github.com/gmf05/nba/blob/master/scripts/py/savejson.py)
(duly adapted for p-b-p purposes), thus yielding .json files as (first play
example):
{"headers": ["GAME_ID", "EVENTNUM", "EVENTMSGTYPE", "EVENTMSGACTIONTYPE", "PERIOD", "WCTIMESTRING", "PCTIMESTRING", "HOMEDESCRIPTION", "NEUTRALDESCRIPTION", "VISITORDESCRIPTION", "SCORE", "SCOREMARGIN"], "rowSet": [["0041400406", 0, 12, 0, 1, "9:11 PM", "12:00", null, null, null, null, null], ["0041400406", 1, 10, 0, 1, "9:11 PM", "12:00", "Jump Ball Mozgov vs. Green: Tip to Barnes", null, null, null, null]
I plan on **creating a loop to convert all of the generated .json files to
.csv** , such that it allows me to proceed to econometric analysis in stata.
At the moment, I am stuck in the first step of this procedure: the creation of
the json to CSV conversion process (I will design the loop afterwards). The
code I am trying is:
f = open('pbp_0041400406.json')
data = json.load(f)
f.close()
with open("pbp_0041400406.csv", "w") as file:
csv_file = csv.writer(file)
for rowSet in data:
csv_file.writerow(rowSet)
f.close()
However, the resulting CSV files show awkward results: one line reading
`h,e,a,d,e,r,s` and another reading `r,o,w,S,e,t`, capturing neither the
headers nor the rowSet (the plays themselves).
I have tried to solve this problem taking into account the contributes [on
this thread](http://stackoverflow.com/questions/1871524/how-can-i-convert-
json-to-csv-with-python?newreg=b44536fc4e274a0287105b853feec545), but I have
not been able to do it. Can anybody please provide me some insight into
solving this problem?
[EDIT] Replacing `rowSet` with `data` in the original code also yielded the same
results.
Thanks in advance!
Answer: Iterating over a dict yields its keys, the strings `'headers'` and `'rowSet'`,
and `writerow` then treats each string as a sequence of characters, which is exactly
the `h,e,a,d,e,r,s` output you saw. Index into the dict instead and write the header
row and the data rows separately. Try this:
import json
import csv
with open('json.json') as f:
data = json.load(f)
with open("pbp_0041400406.csv", "w") as fout:
csv_file = csv.writer(fout, quotechar='"')
csv_file.writerow(data['headers'])
for rowSet in data['rowSet']:
csv_file.writerow(rowSet)
Resulting CSV:
GAME_ID,EVENTNUM,EVENTMSGTYPE,EVENTMSGACTIONTYPE,PERIOD,WCTIMESTRING,PCTIMESTRING,HOMEDESCRIPTION,NEUTRALDESCRIPTION,VISITORDESCRIPTION,SCORE,SCOREMARGIN
0041400406,0,12,0,1,9:11 PM,12:00,,,,,
0041400406,1,10,0,1,9:11 PM,12:00,Jump Ball Mozgov vs. Green: Tip to Barnes,,,,
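For the loop over all game files you mention, a minimal sketch (assuming every file follows the `pbp_*.json` naming pattern):

    import csv
    import glob
    import json
    import os

    for json_path in glob.glob('pbp_*.json'):
        csv_path = os.path.splitext(json_path)[0] + '.csv'
        with open(json_path) as f:
            data = json.load(f)
        with open(csv_path, 'w') as fout:
            writer = csv.writer(fout, quotechar='"')
            writer.writerow(data['headers'])
            writer.writerows(data['rowSet'])  # writerows handles the whole list of plays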
|
Python: Generate matrix of Nx4 integers whose sum is a constant
Question: I have searched the web, which provides various solutions on how to produce a
matrix of random numbers whose sum is a constant. My problem is slightly
different. I want to generate an Nx4 matrix, an exhaustive list of rows of
integers such that the sum of the numbers in each row is exactly 100, with the
integers ranging over [0, 100]. I want the integers to increment sequentially
as opposed to randomly. How can I do it in Python?
Thank you.
Answer: `product` is a handy way of generating combinations:
In [774]: from itertools import product
In [775]: [x for x in product(range(10),range(10)) if sum(x)==10]
Out[775]: [(1, 9), (2, 8), (3, 7), (4, 6), (5, 5), (6, 4), (7, 3), (8, 2), (9, 1)]
The tuples sum to 10, and step sequentially (in the first value at least).
I can generalize it to 3 tuples, and it still runs pretty fast.
In [778]: len([x for x in product(range(100),range(100),range(100)) if sum(x)==100])
Out[778]: 5148
Length-4 tuples take much longer (on an old machine):
In [780]: len([x for x in product(range(100),range(100),range(100),range(100)) if sum(x)==100])
Out[780]: 176847
So there's probably a case to be made for solving this incrementally.
* * *
[x for x in product(range(100),range(100),range(100)) if sum(x)<=100]
runs much faster, producing (within 1 or 2) the same number of 3-tuples,
and the 4th value can be derived from each such `x`.
In [790]: timeit len([x+(100-sum(x),) for x in product(range(100),range(100),range(100)) if sum(x)<=100])
1 loops, best of 3: 444 ms per loop
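For the incremental route, a small recursive generator (my sketch; the function name is made up) yields the rows in lexicographic order without filtering a giant product. The expected count is C(103, 3) = 176,851; the 176,847 above falls 4 short only because `range(100)` excludes rows that contain the value 100 itself:

    def compositions(total, parts):
        """Yield every tuple of `parts` non-negative ints summing to `total`, in lexicographic order."""
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

    rows = list(compositions(100, 4))
    print(len(rows))  # 176851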
|
How to use a variable of a method of a class in another program in python?
Question: I have a class called WIFISegment, shown below, in ns3.py:
class WIFISegment( object ):
"""Equivalent of radio WiFi channel.
Only Ap and WDS devices support SendFrom()."""
def __init__( self ):
# Helpers instantiation.
self.channelhelper = ns.wifi.YansWifiChannelHelper.Default()
self.phyhelper = ns.wifi.YansWifiPhyHelper.Default()
self.wifihelper = ns.wifi.WifiHelper.Default()
self.machelper = ns.wifi.NqosWifiMacHelper.Default()
# Setting channel to phyhelper.
self.channel = self.channelhelper.Create()
self.phyhelper.SetChannel( self.channel )
def add( self, node, port=None, intfName=None, mode=None ):
"""Connect Mininet node to the segment.
Will create WifiNetDevice with Mac type specified in
the MacHelper (default: AdhocWifiMac).
node: Mininet node
port: node port number (optional)
intfName: node tap interface name (optional)
mode: TapBridge mode (UseLocal or UseBridge) (optional)"""
# Check if this Mininet node has assigned an underlying ns-3 node.
if hasattr( node, 'nsNode' ) and node.nsNode is not None:
# If it is assigned, go ahead.
pass
else:
# If not, create new ns-3 node and assign it to this Mininet node.
node.nsNode = ns.network.Node()
allNodes.append( node )
# Install new device to the ns-3 node, using provided helpers.
device = self.wifihelper.Install( self.phyhelper, self.machelper, node.nsNode ).Get( 0 )
mobilityhelper = ns.mobility.MobilityHelper()
# Install mobility object to the ns-3 node.
mobilityhelper.Install( node.nsNode )
# If port number is not specified...
if port is None:
# ...obtain it automatically.
port = node.newPort()
# If interface name is not specified...
if intfName is None:
# ...obtain it automatically.
intfName = Link.intfName( node, port ) # classmethod
# In the specified Mininet node, create TBIntf bridged with the 'device'.
tb = TBIntf( intfName, node, port, node.nsNode, device, mode )
return tb
This class has a method called **def add( self, node, port=None,
intfName=None, mode=None )**, and in this method we define
**_mobilityhelper_**.
I was wondering if I can use mobilityhelper in another program. For example, I
wrote another program in which I import WIFISegment, define wifi
= WIFISegment(), and use its method "add" as follows: _wifi.add(h1)_ (h1 is
a host here).
**My question is: how can I use the mobilityhelper of the add() method in my
other program? I need to set a new mobility model each time.**
Thanks, Farzaneh
Answer: The obvious way is to return it:
return tb, mobilityhelper
and use it like this:
original_ret_value, your_mobilityhelper = wifi.add(h1)
But that would break compatibility with your old code (`add` used to return
`TBIntf`, but now it returns a tuple). You could add an optional parameter to
indicate whether the `add` method should return mobilityhelper or not:
def add( self, node, port=None, intfName=None, mode=None, return_mobilityhelper=False ):
...
if return_mobilityhelper:
return tb, mobilityhelper
else:
return tb
Now if you use `add` the same way you did before, `wifi.add(h1)`, it behaves
the same way it did before. But you can use the new parameter and then get your
mobilityhelper:
whatever, mobilityhelper = wifi.add(h1, return_mobilityhelper=True)
or
returned_tuple = wifi.add(h1, return_mobilityhelper=True)
mobilityhelper = returned_tuple[1]
* * *
The other way is to modify a parameter (a list):
def add( self, node, port=None, intfName=None, mode=None, out_mobilityhelper=None):
...
if hasattr(out_mobilityhelper, "append"):
out_mobilityhelper.append(mobilityhelper)
return tb
It's still compatible with your old code, but you can pass a list to the
parameter and then extract your mobilityhelper:
mobhelp_list = []
wifi.add(h1, out_mobilityhelper=mobhelp_list)
mobilityhelper = mobhelp_list[0]
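A third pattern (my sketch, not from the original answer; the attribute name is invented) is to stash the helper on the instance inside `add`, which keeps the signature unchanged:

    # inside WIFISegment.add(), right after mobilityhelper.Install( node.nsNode ):
    self.last_mobilityhelper = mobilityhelper

    # then, from the other program:
    wifi = WIFISegment()
    tb = wifi.add( h1 )
    mobility = wifi.last_mobilityhelper  # the helper created for h1

This only remembers the most recent helper; keep a dict keyed by node if you need one per host.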
|